by Jimleuk
This n8n template demonstrates how to build your own Qdrant MCP server to extend its functionality beyond that of the official implementation. This n8n implementation exposes other cool API features from Qdrant such as the facet search, grouped search and recommendations APIs. With this, we can build an easily customisable and maintainable Qdrant MCP server for business intelligence.

This MCP example is based on an official MCP reference implementation which can be found here - https://github.com/qdrant/mcp-server-qdrant

How it works
An MCP server trigger is used and connected to 5 custom workflow tools. We're using custom workflow tools as there are quite a few nodes required for each task. We use a mix of n8n-supported Qdrant nodes for simple operations such as inserting documents and similarity search, and the HTTP node to hit the Qdrant API directly for facet search, group search and recommendations (a sketch of those direct API calls follows below). We use "Edit Fields" and "Aggregate" nodes to return suitable responses to the MCP client.

How to use
This Qdrant MCP server allows any compatible MCP client to manage a Qdrant collection by supporting select and create operations. You will need to have a collection available before you can use this server. Use the prerequisite manual steps to get started!
Connect your MCP client by following the n8n guidelines here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
Try the following queries in your MCP client:
- "Can you help me list the available companies in the collection?"
- "What do customers say about product deliveries from company X?"
- "What do customers of company X and company Y say about product ease of use?"

Requirements
- Qdrant for the vector store. This can be a cloud-hosted instance or one you self-host internally.
- An MCP client or agent, such as Claude Desktop - https://claude.ai/download

Customising this workflow
Depending on what queries you'll receive, adjust the tool inputs to make it easier for the agent to set the right parameters. Not interested in reviews? The techniques shared in this template can be used for other types of collections.
Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
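For reference, here is a minimal Python sketch of the kind of calls the HTTP node makes for facet search and recommendations. It assumes a local Qdrant instance and a hypothetical collection named `reviews` with a `company` payload field; check the Qdrant HTTP API docs for the exact request shapes supported by your Qdrant version.

```python
import requests

QDRANT_URL = "http://localhost:6333"  # assumption: local, unauthenticated Qdrant
COLLECTION = "reviews"                # hypothetical collection name

# Facet search: count the distinct values of a payload field,
# e.g. to answer "list the available companies in the collection".
facets = requests.post(
    f"{QDRANT_URL}/collections/{COLLECTION}/points/facet",
    json={"key": "company", "limit": 50},
).json()

# Recommendations: find points similar to known-good examples by point ID.
recommendations = requests.post(
    f"{QDRANT_URL}/collections/{COLLECTION}/points/recommend",
    json={"positive": [1, 5], "limit": 10, "with_payload": True},
).json()

print(facets, recommendations)
```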
by PollupAI
Who is this for?
This workflow is ideal for anyone focused on nutrition tracking, meal planning, or diet optimization—whether you're a health-conscious individual, a fitness coach, or a developer working on a healthtech app. It also fits anyone who wants to capture their meal data via voice or text without manually entering everything into a spreadsheet.

What problem is this workflow solving?
Manually logging meals and breaking down their nutritional content is time-consuming and often skipped. This workflow automates that process using Telegram for input, OpenAI for natural language understanding, and Google Sheets for structured tracking. It enables users to record meals by typing or sending voice messages, which are transcribed, analyzed for nutrients, and automatically stored for tracking and review.

What this workflow does
This n8n automation lets users send either a text or voice message to a Telegram bot describing their meal. The workflow then:
1. Receives the Telegram message
2. Checks if it's a voice message
   - If yes: downloads the audio file and transcribes it using OpenAI
   - If no: uses the text input directly
3. Sends the meal description to OpenAI to extract a structured list of ingredients and nutritional details (a sketch of this extraction step follows below)
4. Parses and stores the results in Google Sheets
5. Responds via Telegram with a personalized confirmation message
A testing interface also allows you to simulate prompts and view structured outputs for development or debugging.

Setup
1. Create a Telegram bot via BotFather and note the API token.
2. Create an empty Google Sheet and store the sheet ID in the environment.
3. Set up your OpenAI credentials in the n8n credential manager.
4. Customize the "List of Ingredients and Nutrients" node with your prompt if needed.
5. (Optional) Use the "Testing" section to simulate messages and refine outputs before going live.

How to customize this workflow to your needs
- Enhance prompts in the OpenAI node to improve the structure and accuracy of responses.
- Add new fields in the Google Sheet and corresponding logic in the parser if you want more detail.
- Adjust the Telegram response to provide motivational feedback, dietary tips, or summaries.
- Upgrade to the "Pro" version mentioned in the contact section for USDA database integration and complete nutrient breakdowns.

This is a lightweight, AI-powered meal logging automation that transforms voice or text into actionable nutrition data—perfect for making healthy eating easier and more data-driven.

See my other workflows here
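For illustration, here is a minimal sketch of the extraction step referenced above, using the OpenAI Python SDK. The model name and JSON schema are assumptions; the actual prompt lives in the "List of Ingredients and Nutrients" node.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

meal = "Two scrambled eggs with spinach and a slice of whole-grain toast"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any JSON-capable chat model works
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "Return JSON with an 'ingredients' array. Each entry needs: "
                "name, quantity_g, calories, protein_g, carbs_g, fat_g."
            ),
        },
        {"role": "user", "content": meal},
    ],
)

# The structured JSON string, ready to parse and append to Google Sheets.
print(response.choices[0].message.content)
```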
by Mary Newhauser
Build a Weekly AI Trend Alerter with arXiv and Weaviate

Ditch the endless scroll for AI trends. Meet Archi, your personal AI research assistant that hits you up once a week with everything you need to know. 🧑🏽🔬

This workflow scrapes AI and machine learning article abstracts from arXiv, enriches them with topic categories using an LLM, and embeds them in a Weaviate vector store. The vector store is then used as a tool for agentic RAG to write a concise, easy-to-read summary of the week in AI research. The final output is a short, weekly email sent to the address of your choice that summarizes key AI research trends and future research directions, with links directly to the most interesting and impactful arXiv papers of the week.

Who it's for
This workflow is for anyone who can't keep up with all the latest AI advances. Coding skills are not required.

How it works
This is a contiguous workflow that can be summarized in two main parts: a data pipeline that fetches and embeds articles in Weaviate, and an agentic workflow that generates a weekly email summary.

Part 1: Automatically fetch newly published articles on a weekly basis
1. Fetch article abstracts (and metadata) from arXiv's free API (see the fetch sketch after the sample email below)
2. Pre-process abstract data
3. Enrich each article with a primary topic, secondary topics, and the estimated potential impact of the research using an LLM
4. Post-process data
5. Insert data and embeddings into Weaviate

Part 2: Use an AI Agent and Weaviate to generate a weekly summary email
1. Add Weaviate as a tool to an AI Agent node
2. Query Weaviate, agentically, to generate a report on the most important research trends of the week
3. Post-process data
4. Send the summary via email

Prerequisites
- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use a combination of OpenRouter and OpenAI models. Feel free to switch out the models as you like.
- An email address with SMTP privileges. This is the address the email will come from. In this demo we use a personal Gmail address. You can create a new credential to link an SMTP account using these instructions.
- A self-hosted n8n instance. See this video for how to get set up in just three minutes.

How to run the workflow
1. Go through the prerequisites: create a Weaviate cluster (local or cloud), download self-hosted n8n, set up SMTP privileges for your email account, and add your API keys and other credentials.
2. Select the embedding and chat models you'd like to use.
3. Enter the email addresses you want to send the email from and to.
4. Let it rip.

Workflow output
The output for this workflow is a weekly email that summarizes key research trends and future research directions based on AI and ML papers published on arXiv. Here's an example of a summary email:

Hey there,

Here's a quick rundown of the key trends in Machine Learning research from the past week.

*Key Research Trends This Week*
This week saw significant advancements in retrieval-augmented systems, foundation models for specialized domains, and techniques balancing efficiency with performance.

- **Advanced RAG Architectures**: Researchers are developing sophisticated RAG frameworks that go beyond simple document retrieval, with AdaPCR introducing passage combination retrieval and UrbanMind proposing a framework for urban intelligence with multilevel optimization.
- **Foundation Models for Tabular Data**: Real-TabPFN shows that targeted continued pre-training on real-world datasets can significantly boost the performance of foundation models for tabular data, outperforming models trained on broader, potentially noisier datasets.
- **Efficiency-Focused Techniques**: Researchers are developing resourceful methods that maintain performance without expensive computations, like logit reweighting for topic-focused summarization and strategic querying for privacy-preserving personalization.

*Future Research Directions*
Based on current trends, we expect to see the following developments in the near future:

- **Explainable RAG Systems**: Following the source attribution work in RAG systems, we can expect more research into making complex retrieval systems transparent and explainable for users.
- **Cross-Domain and Cross-Modal Fusion**: The promising performance of vision-language and code-specialized LLMs in retrieval tasks points toward unified retrievers capable of handling text, code, images, and multimodal content.
- **Data-Centric Synthetic Generation**: As shown by work on synthetic relational tabular data, we'll likely see more sophisticated approaches to generating high-quality synthetic data for pre-training foundation models in specialized domains.

This week highlights how researchers are making AI more efficient, explainable, and applicable to specialized domains. Look out for more developments in RAG systems, tabular foundation models, and privacy-preserving AI techniques in the coming weeks.

Until next week,
Archi

Want to make it better?
Feel free to tweak, build on, or completely reconfigure this workflow. If you come up with something cool, let us know and we might just share it with our community! 💚
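For reference, the fetch step in Part 1 boils down to a single call to arXiv's free Atom API. A minimal sketch, assuming the cs.LG and cs.AI categories (swap in whichever categories you care about):

```python
import feedparser  # pip install feedparser

# arXiv's public Atom API; the categories and result count are assumptions.
URL = (
    "http://export.arxiv.org/api/query"
    "?search_query=cat:cs.LG+OR+cat:cs.AI"
    "&sortBy=submittedDate&sortOrder=descending&max_results=100"
)

feed = feedparser.parse(URL)
for entry in feed.entries:
    print(entry.title, entry.link)
    print(entry.summary[:200], "...")  # the abstract text to embed in Weaviate
```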
by Karol
How it works
This workflow automates publishing content from any RSS feed directly to Facebook and Instagram. It reads new RSS entries, extracts the article content, generates a short social-media-friendly summary using an AI model, and then creates an AI-generated image based on the topic. The post is uploaded to Facebook and Instagram (via the Graph API) and logged in Google Sheets for reference. Finally, a Telegram bot sends you a notification with links to the published posts. (A sketch of the Graph API publishing calls follows below.)

Set up steps
1. Insert your RSS feed URL in the RSS Feed Trigger node.
2. Configure Google Sheets credentials and replace the example sheet with your own.
3. In Supabase Config, insert your Supabase URL and bucket name.
4. In the Facebook/Instagram nodes, replace [INSERT_YOUR_SITE_ID] with your own page or account ID.
5. Connect your Facebook Graph API credentials (remove hardcoded tokens).
6. Connect your OpenAI / Anthropic / Gemini credentials for text and image generation.
7. Set up your Telegram Bot credentials if you want to receive notifications.

Notes
- Sticky notes inside the workflow explain each section (RSS trigger, filtering, content generation, posting, logging, notifications).
- No credentials are saved in the template – you must connect your own before running.
- All generated content (text + images) is fully automated but can be customized (e.g. change AI prompts for your preferred style).
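For reference, publishing to Instagram via the Graph API is a two-step call: create a media container, then publish it. A minimal sketch, assuming a valid access token with content-publish scope and the account ID from the setup steps (the API version and image URL are placeholders):

```python
import requests

TOKEN = "YOUR_GRAPH_API_TOKEN"       # assumption: token with publish permissions
IG_USER_ID = "INSERT_YOUR_SITE_ID"   # the ID referenced in the setup steps
BASE = "https://graph.facebook.com/v19.0"  # assumption: current API version

# Step 1: create a media container from the generated image URL and caption.
container = requests.post(
    f"{BASE}/{IG_USER_ID}/media",
    data={
        "image_url": "https://your-supabase-bucket.example.com/post.png",
        "caption": "AI-generated summary of the article...",
        "access_token": TOKEN,
    },
).json()

# Step 2: publish the container to the feed.
requests.post(
    f"{BASE}/{IG_USER_ID}/media_publish",
    data={"creation_id": container["id"], "access_token": TOKEN},
)
```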
by Ficky
Build a Redis-Powered CRUD App with HTML Frontend

This workflow demonstrates how to use n8n to build a complete, self-contained CRUD (Create, Read, Update, Delete) application without relying on any external server or hosting. It not only acts as the backend, handling all CRUD operations through Webhook endpoints, but also serves a fully functional HTML Single Page Application (SPA) directly via a webhook response. Redis is used as a lightweight data store, providing fast and simple key-value storage with auto-incremented IDs.

Because both the frontend (HTML app) and backend (API endpoints) are managed entirely within a single n8n workflow, you can quickly prototype or deploy small tools without additional infrastructure. This approach is ideal for:
- Rapidly creating no-code or low-code applications
- Running fully browser-based tools served directly from n8n
- Teaching or demonstrating n8n + Redis integration in a single workflow

Features
- Add new item with auto-incremented ID
- Edit existing item
- Delete specific item
- Reset all data (clear storage and reset the auto-increment ID)
- Single HTML frontend for demonstration (no framework required)

Setup Instructions

1. Prerequisites
Before importing and running the workflow, make sure you have:
- A running n8n instance (self-hosted or cloud)
- A running Redis server (local or remote)

2. API Path Setup
For the REST API, use a consistent path. For example, if you choose items as the path:
- 2a. **Get All Items** — Method: GET, Endpoint: items
- 2b. **Add Item** — Method: POST, Endpoint: items
- 2c. **Edit Item** — Method: PUT, Endpoint: items
- 2d. **Delete Item** — Method: DELETE, Endpoint: items
- 2e. **Reset Items** — Method: POST, Endpoint: items-reset

3. Configure the API URL
Set the API URL in the SET API URL node. Use your n8n webhook URL, for example: https://yourn8n.com/webhook/items

4. Run the HTML App
Once everything is set:
- Open the webhook URL for the HTML app in a browser.
- The CRUD interface will load and connect to the API endpoints automatically.
- You can now add, edit, delete, or reset items directly from the web interface.

Workflows

1. Render the HTML CRUD App
This webhook serves a self-contained HTML Single Page Application (SPA) for basic CRUD operations. The HTML content is returned directly in the webhook response. This setup is ideal for lightweight, browser-based tools without external hosting.
How to use:
- Open the webhook URL in a browser
- The CRUD interface will load and connect to the data source via API calls
- Before using, make sure to edit the api_url in the SET API URL node to match your webhook endpoint

2a. REST API: Get All Items
This webhook handles retrieving all saved items from Redis. Each item is returned with its corresponding ID and associated data (e.g., name). This endpoint is used by the HTML CRUD App to display the full list of items.
- **Method**: GET
- **Function**: Fetches all items stored in Redis and returns them as a JSON array

2b. REST API: Add Item
This webhook handles the Add Item functionality. This endpoint is typically called by the HTML CRUD App when adding a new item.
- **Method**: POST
- **Request Body**: { "name": "item name" }
- **Function**: Generates an auto-incremented ID using Redis and saves the data under that ID (see the Redis sketch below)

2c. REST API: Edit Item
This webhook handles updating an existing item in Redis.
- **Method**: PUT
- **Request Body**: { "id": 1, "name": "Updated Item Name" }
- **Function**: Finds the item by the given id and updates its data in Redis

2d. REST API: Delete Item
This webhook handles deleting a specific item from Redis.
- **Method**: DELETE
- **Request Body**: { "id": 1 }
- **Function**: Removes the item with the given id from Redis

2e. REST API: Reset Items
This webhook handles resetting all data in the application.
- **Method**: POST
- **Function**: Deletes all stored items from Redis and resets the auto-increment ID by deleting its key in Redis
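As mentioned in 2b and 2e, the auto-increment and reset logic relies on basic Redis primitives. A minimal Python sketch of the same pattern (the key names are assumptions; the workflow's Redis nodes may name keys differently):

```python
import redis

r = redis.Redis(decode_responses=True)

def add_item(name: str) -> int:
    """Mirror of POST /items: atomic counter + save under the new ID."""
    new_id = r.incr("items:next_id")                 # INCR is atomic
    r.hset(f"items:{new_id}", mapping={"name": name})
    return new_id

def reset_items() -> None:
    """Mirror of POST /items-reset: clear items and the counter itself."""
    for key in r.scan_iter("items:*"):               # also matches items:next_id
        r.delete(key)

print(add_item("First item"))   # -> 1
print(add_item("Second item"))  # -> 2
reset_items()
```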
by Tharwat Mohamed
🚀 AI Resume Screener (n8n Workflow Template)

An AI-powered resume screening system that automatically evaluates applicants from a simple web form and gives you clear, job-specific scoring — no manual filtering needed.

⚡ What the workflow does
- 📄 Accepts CV uploads via a web form (PDF)
- 🧠 Extracts key info using AI (education, skills, job history, city, birthdate, phone)
- 🎯 Dynamically matches the candidate to job role criteria stored in Google Sheets
- 📝 Generates an HR-style evaluation and a numeric score (1–10)
- 📥 Saves the result in a Google Sheet and uploads the original CV to Google Drive

💡 Why you'll love it

| Feature | Benefit |
| --- | --- |
| AI scoring | Instantly ranks candidate fit without reading every CV |
| Google Sheet-driven | Easily update job profiles — no code changes |
| Fast setup | Connect your accounts and you're live in ~15 mins |
| Scalable | Works for any department, team, or organization |
| Developer-friendly | Extend with Slack alerts, translations, or automations |

🧰 Requirements
- 🔑 OpenAI or Google Gemini API key
- 📄 Google Sheet with 2 columns: Role, Profile Wanted
- ☁️ Google Drive account
- 🌐 n8n account (self-hosted or cloud)

🛠 Setup in 5 Steps
1. Import the workflow into n8n
2. Connect Google Sheets, Drive, and OpenAI or Gemini
3. Add your job roles and descriptions in Google Sheets
4. Publish the form and test with a sample CV
5. Watch candidate profiles and scores populate automatically

🤝 Want help setting it up?
Free setup guidance by the creator is included — available by email or WhatsApp after purchase. I'm happy to assist you in customizing or deploying this workflow for your team.
📧 Email: tharwat.elsayed2000@gmail.com
💬 WhatsApp: +20106 180 3236
by David Ashby
🛠️ Philips Hue Tool MCP Server

Complete MCP server exposing all Philips Hue Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
- MCP Trigger: Serves as your server endpoint for AI agent requests
- Tool Nodes: Pre-configured for every Philips Hue Tool operation
- AI Expressions: Automatically populate parameters via $fromAI() placeholders
- Native Integration: Uses the official n8n Philips Hue Tool node with full error handling

📋 Available Operations (4 total)
Every possible Philips Hue Tool operation is included:
🔧 Light (4 operations)
- Delete a light
- Get a light
- Get many lights
- Update a light

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
- Resource IDs and identifiers
- Search queries and filters
- Content and data payloads
- Configuration options
Response Format: Native Philips Hue API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- Claude Desktop: Add the MCP server URL to your configuration
- Custom AI Apps: Use the MCP URL as a tool endpoint
- Other n8n Workflows: Call MCP tools from any workflow
- API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
- Complete Coverage: Every Philips Hue Tool operation available
- Zero Setup: No parameter mapping or configuration needed
- AI-Ready: Built-in $fromAI() expressions for all parameters
- Production Ready: Native n8n error handling and logging
- Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Floyd Mahou
🔍 How it works
This workflow turns WhatsApp into a smart email command center using AI. Users can speak or type instructions like:
- "Send a follow-up to Claire"
- "Write a draft email to Claire to confirm tomorrow's meeting at 5 PM"
- "What is the name of Claire's firm?"
The agent transcribes voice notes, extracts intent with GPT, interacts with Gmail (send, draft, search), and replies with a confirmation via WhatsApp — either as text or a voice message. (A sketch of the transcription and intent-extraction steps follows below.)

⚙️ Key Modules Used
- WhatsApp Business Webhook (Meta)
- OpenAI Whisper (voice transcription)
- GPT (intent + content generation)
- Gmail (search, draft, send)
- Airtable (contact lookup + memory logging)

🧠 Memory Layer (Optional)
The agent logs key fields in Airtable:
- Recipient email
- Company / job title
- And more...
This creates a lightweight "gut memory" so the agent feels context-aware.

🗺️ Setup Steps
1. Connect the WhatsApp Business API (via the Meta Developer Console)
2. Add OpenAI and Gmail credentials in n8n
3. Link your Airtable base for contacts and logging

🧩 Best Use Cases
- Hands-free email replies while commuting
- Fast Gmail access for busy consultants / solopreneurs
- Custom business agents for service-based professionals

⏱️ Estimated Setup Time
30–60 minutes

✅ Requirements
- WhatsApp Business Cloud access
- OpenAI API key
- Gmail or Google Workspace
- Airtable account (free plan OK)
- n8n instance (cloud or self-hosted with HTTPS)
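For illustration, the transcription and intent-extraction steps referenced above can be sketched with the OpenAI Python SDK as follows. The model names, action labels, and file name are assumptions; the workflow itself does this with n8n's OpenAI nodes rather than raw SDK calls.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Transcribe the voice note (downloaded beforehand via the Meta media API).
with open("voice_note.ogg", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Extract a structured email intent from the transcript.
intent = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "Classify the request as send_email, draft_email, or "
                "search_contact, and return JSON with: action, recipient, "
                "subject, body."
            ),
        },
        {"role": "user", "content": transcript.text},
    ],
)
print(intent.choices[0].message.content)
```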
by Jah coozi
Universal Digital Device Support Assistant

Transform any device manual into an intelligent AI assistant that provides 24/7 support for your users. This template works with ANY household appliance, electronic device, or technical equipment.

🎯 Use Cases
- **Manufacturers**: Provide instant support for your products
- **Support Teams**: Reduce ticket volume with AI-powered answers
- **Smart Homes**: Centralized help for all devices
- **Personal Use**: Never lose a manual again

✨ Features
- **Universal Compatibility**: Works with any device type
- **Multi-Language Support**: Serve global customers
- **Intelligent Search**: Semantic understanding of user queries
- **Context Awareness**: Remembers conversation history
- **Easy Setup**: Just upload your manual and go

🛠️ What's Included
1. Webhook Endpoint: Receive user queries via API
2. AI Agent: Processes questions intelligently
3. Vector Database: Stores and searches manuals
4. Memory System: Maintains conversation context
5. Upload Pipeline: Easy manual ingestion

📋 Setup Instructions
1. Add Your Credentials:
   - OpenAI API key (or alternative LLM)
   - Pinecone API key (or alternative vector DB)
2. Upload Device Manuals:
   - Use the manual upload trigger
   - Paste manual text or upload a PDF
   - The system automatically indexes content (see the ingestion sketch below)
3. Configure Webhook:
   - Set your preferred endpoint path
   - Enable CORS if needed
   - Deploy and share the URL
4. Optional Customization:
   - Adjust chunk size for your content
   - Modify system prompts for your brand
   - Add additional tools or integrations

🔧 Supported Devices (Examples)
- Kitchen Appliances (ovens, dishwashers, coffee machines)
- Home Entertainment (TVs, sound systems, gaming consoles)
- Smart Home Devices (thermostats, cameras, lights)
- Computer Equipment (printers, routers, monitors)
- Power Tools & Garden Equipment
- Medical Devices
- And many more!

🌐 Integration Options
- Embed in your website
- Connect to chat platforms
- Mobile app integration
- Voice assistant compatibility
- Email support automation

📈 Benefits
- Reduce support costs by 70%
- Available 24/7 in multiple languages
- Consistent, accurate responses
- Scales infinitely
- Improves with usage

🔐 Privacy & Security
- Your data stays in your control
- Can be deployed on-premise
- GDPR-compliant architecture
- No data sharing between devices

💡 Pro Tips
- Upload manuals in sections for better accuracy
- Include troubleshooting guides and FAQs
- Add model numbers and specifications
- Regular updates keep content fresh

Start providing world-class device support today!
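For reference, the ingestion sketch mentioned in the setup instructions: split a manual into chunks, embed them, and upsert to the vector database with device metadata. A minimal sketch assuming OpenAI embeddings and a hypothetical Pinecone index named `device-manuals`:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("device-manuals")

def ingest_manual(device: str, text: str, chunk_size: int = 1000) -> None:
    """Naive fixed-size chunking; adjust chunk_size for your content."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    embeddings = openai_client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    )
    index.upsert(vectors=[
        {
            "id": f"{device}-{i}",
            "values": item.embedding,
            "metadata": {"device": device, "text": chunk},
        }
        for i, (chunk, item) in enumerate(zip(chunks, embeddings.data))
    ])

ingest_manual("coffee-machine-x100", "Descaling: fill the tank with ...")
```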
by Solido AI
How it works: This bot operates in a continuous WhatsApp monitoring loop. It analyzes messages to detect keywords in common questions (like hours, prices, and location) and sends automatic replies with predefined information. For unrecognized questions, it directs the user to manual assistance. Set up steps: The initial setup involves integrating with the WhatsApp API, registering keywords and their respective responses, and defining the fallback flow. It takes only a few minutes to have the bot running with essential information.
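For illustration, the keyword-matching logic amounts to a lookup table with a fallback. A minimal sketch (the keywords and replies are hypothetical; in the workflow they live in the node configuration):

```python
# Hypothetical keyword table -- register your own keywords and responses.
REPLIES = {
    ("hours", "open", "opening"): "We're open Mon-Sat, 9:00-18:00.",
    ("price", "prices", "cost"): "Our price list: example.com/prices",
    ("location", "address", "where"): "We're at 123 Main Street.",
}
FALLBACK = "Thanks! A team member will get back to you shortly."

def auto_reply(message: str) -> str:
    text = message.lower()
    for keywords, reply in REPLIES.items():
        if any(keyword in text for keyword in keywords):
            return reply
    return FALLBACK  # unrecognized question -> manual assistance

print(auto_reply("What are your opening hours?"))
```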
by Brian Coyle
Description
Candidate Engagement | Resume Screening | AI Voice Interviews | Applicant Insights

This intelligent n8n workflow automates the process of extracting and scoring resumes received through a company career page, populating a Notion database with AI insights where the recruiter or hiring manager can automatically invite the applicant to an instant interview with an Elevenlabs AI voice agent. After the agent conducts the behavior-based interview, the workflow scores the overall interview against customizable evaluation criteria and updates the Notion database with AI insights about the applicant.

AI-powered resume screening and voice AI that interviews like a recruiter! AI insights in a Notion dashboard.

Who is this for?
This workflow is ideal for HR teams, recruiters, and talent acquisition professionals looking for a foundational, extensible framework to automate early-stage recruiting. Whether you're exploring AI for the first time or scaling automation across your hiring process, this template provides a base for screening, interviewing, and tracking candidates—powered entirely by n8n, Elevenlabs, Notion, and LLM integrations. Be sure to consult state and country regulations with respect to AI compliance, AI bias audits, AI risk assessment, and disclosure requirements.

What problem is this workflow solving?
Manually screening resumes and conducting initial interviews slows down hiring. This template automates:
- Resume assessment against the job description
- Scheduling first and second round interviews
- First-round AI-led behavioral interviews with AI scoring assessment
- Centralized tracking of AI assessments in Notion

What this does
This customizable tool, configured to manage 3 requisitions in parallel, automates the application process, resume screen, and first-round behavioral interviews.

Pre-screen applicants with AI
Immediately screens and scores the applicant's resume against the job description. The AI Agent generates a score and an AI assessment, adding both to the applicant's profile in Notion. Notion automatically notifies the hiring manager when a resume receives a score of 8 or higher.

Voice AI that interviews like a recruiter
The AI voice agent adapts probing questions based on the applicant's responses and intelligently dives deeper into skills and experience to assess answers against a scoring rubric for each question.

AI applicant insights in Notion
Get detailed post-interview AI analysis, including interview recordings and question-by-question scoring breakdowns, to help identify who you should advance to the next stage in the process. AI insight is provided in the Notion ATS dashboard with drag and drop to advance top candidates to the next interview stage.

How it works
Link to Notion Template
- Notion Career Page: a Notion career page published to the web; can be integrated with your preferred job board posting system.
- Notion Job Posting: the gateway for applicants to apply to active requisitions via a 'Click to Apply' button.
- Application Form: an n8n webform embedded into the Notion job posting captures applicant information and routes it for AI processing.

AI Agent evaluates resume against job description
The AI Agent evaluates the resume against the job description, stored in Notion, and scores the applicant on a scale of 1 to 10, providing a rationale for the score.
Creates ATS record in Notion with assessment and score
The workflow creates the applicant record in the Notion ATS, where recruiters and hiring managers see applicants in a filtered view sorted by the AI-generated resume score. Users can automatically advance applicants to the next step in the process (the AI conversation interview) with drag-and-drop functionality.

Invites applicant to an instant AI interview
Dragging the applicant to the AI Interview step in the Notion ATS dashboard triggers a Notion automation that sends the applicant an email with a link to the Elevenlabs conversation AI agent. The AI conversation agent is provided with instructions on how to conduct the behavior-based interview, including probing questions, for the specific role.

AI conversation agent behavior-based interview
The email link resolves to an ElevenLabs AI conversation agent that has been instructed to interview applicants using pre-defined interview questions, a scoring rubric, the job description, and a company profile. The Elevenlabs agent assesses the applicant on a scale of 1 to 5 for each interview question and provides an overall assessment of the interview based on established evaluation criteria.

Click to hear the AI voice agent in action. Example:
- Role: IT Support Analyst
- Mark: Elevenlabs AI agent instructed to interview applicants for the specific role
- Gemini: Google AI coached to answer questions as an IT Support Analyst being interviewed

Updates Notion record with interview assessment and score
All results—including the conversation transcript, interview scores, and the rationale for the assessment—are automatically added back to the applicant's profile in Notion, where the hiring manager can validate the AI assessment by skimming through the embedded audio file.
- AI Interview Overall Score: 1 to 5, based on responses to all questions and probes. The AI agent confirms that it was able to evaluate the interview using the assigned rubric.
- AI Interview Criteria Score: Success/Failure, based on responses to individual interview questions.

Invites applicant to second interview with hiring manager
Dragging the applicant to the 'Hiring Manager Interview' step in the Notion ATS dashboard triggers a Notion automation that sends an email with a link to the hiring manager's calendar scheduling solution.

Configuration and Set Up

Accounts & API Keys
You will need accounts and credentials for:
- n8n (hosted or self-hosted)
- Elevenlabs (for the AI conversation agent)
- Gemini (for LLM model access)
- Google Drive (to back up applicant data)
- Calendly (to automate interview scheduling)
- Gmail (to automate interview scheduling)

Data / Documents to implement
- Job descriptions for each role
- Interview questions for each role
- Evaluation criteria for each interview question

Notion Set Up
Customize your Notion career page. Link to the free Notion template that enables the workflow.
Update the Notion job description database:
- Update the job description(s) for each role
- Add interview questions to the job description database page in Notion
- Add evaluation criteria to the job description database page in Notion
- Edit each 'Click to Apply' button in the job description template so it resolves to the corresponding n8n 'Application Form' webform production URL (detail provided below)

Notion Applicant Tracker
In the Applicant Tracker database, update position titles and tab headings in the custom database view (Notion) so they reflect the title of the position you are posting. Edit the filter for each tab so it matches the position title.
Notion Email Automation
Update the Notion automation templates used to invite applicants to the AI interview and hiring manager interview. Note: trigger the email automation by dragging the applicant profile to the next Applicant Comm Status in the Applicant Tracker.
- AI Interview invite template: revise the position title to reflect the title of the role you are posting; include the link to your conversation AI agent for that role in the email body. Note: each unique role uses an Elevenlabs AI conversation agent designed for that role.
- Hiring Manager Interview invite template: revise the position title to reflect the title of the role you are posting; include the link to your Calendly page, or a similar solution provider, to automate interview scheduling.

n8n Configuration

Workflow 1
- Application Forms (3 nodes - one for each job): Update the n8n form title and description to match the job description you configured in Notion. Confirm the Job Code in the Applicant Form node matches the Job Code in Notion for that position. Edit the Form Response to customize the message displayed after the applicant clicks submit.
- Upload CV - Google Drive: Authenticate your Google Drive account and select the folder that will be used to store resumes.
- Get Job Description - Notion: Authenticate your Notion account and select your career page from the list of databases that contain your job descriptions.
- Applicant Data Backup - Google Sheet: Create a Google Sheet where you will track applicant data for AI compliance reporting requirements. Open the node in n8n and use the field names in the node as Google Sheet column headings.

Workflow 2
- Elevenlabs Webhook (Node 1): Edit the Webhook POST node and copy the production URL displayed in the node. This URL is entered into the Elevenlabs AI conversation agent's post-call webhook described below.
- AI Agent: Authenticate your LLM model (Gemini in this example) and add your Notion database as a tool to pull the evaluation_criteria hosted in Notion for the specific role.
- Extract Audio: Create an Elevenlabs API key for your conversation agent and enter that key as a JSON header for the Extract Audio node.
- Upload Audio to Drive - Google Drive: Authenticate your Google Drive account and select the folder that will be used to store the audio file.

Elevenlabs Configuration
1. Create an Elevenlabs account.
2. Create a conversation AI agent.
3. Add First Message and System Prompt: design the 'First Message' and 'System Prompt' that guide the AI agent conducting the interview. Tip: provide an instruction that limits the number of probes per interview question.
4. Knowledge Base: upload your role-specific interview questions and job description, using the same text that is stored in your Notion career page for the role. You can also add a document about your company and instruct the Elevenlabs agent to answer questions about culture, strategy, and company growth.
5. Analysis - Evaluation Criteria: add your evaluation criteria, under 2,000 characters, for each interview question / competency.
6. Analysis - Data Collection: add the following elements, using the exact character strings represented below.
   - phone_number_AI_screen: "capture applicant's phone number provided at the start of the conversation and share this as a string, integers only."
   - full_name: "capture applicant's full name provided at the start of the conversation."
7. Advanced - Max Duration: set the max duration for the interview in seconds. The AI agent will time out at the max duration.
8. Conversation AI Widget: customize your AI conversation agent landing page, including the position title and company name.
9. AI Conversation Agent URL: copy the AI conversation agent URL and add it to the Notion email template triggered by the AI Interview email automation. Use a custom AI agent URL for each distinct job description.
10. Enable your Elevenlabs post-call webhook for your conversation agent: log into your Elevenlabs account, go to Conversational AI Settings, and click on Post-Call Webhook. This is where you enter the production URL from the n8n Webhook node (Workflow 2). This sends the AI voice agent output to your n8n workflow, which feeds back into your Notion dashboard. (A sketch of what a handler for this webhook might look like follows below.)
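For orientation, here is a rough sketch of what the n8n webhook receives from the Elevenlabs post-call webhook and which fields feed the Notion update. The payload field paths are illustrative guesses only; verify them against the actual post-call payload documented by Elevenlabs (Flask stands in for the n8n Webhook node).

```python
from flask import Flask, request  # stand-in for the n8n Webhook POST node

app = Flask(__name__)

@app.post("/elevenlabs-post-call")
def post_call():
    payload = request.get_json()
    # Illustrative field paths -- check the real post-call payload schema.
    data = payload.get("data", {})
    analysis = data.get("analysis", {})
    collected = analysis.get("data_collection_results", {})
    record = {
        "conversation_id": data.get("conversation_id"),
        "full_name": collected.get("full_name"),
        "phone": collected.get("phone_number_AI_screen"),
        "evaluation": analysis.get("evaluation_criteria_results"),
    }
    # ...update the applicant's Notion record with `record` here...
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=5678)
```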
by Jimleuk
This n8n template is one of a 3-part series exploring use-cases for clustering vector embeddings:
- Survey Insights
- Customer Insights
- Community Insights

This template demonstrates the Community Insights scenario, where HN comments can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, researchers or HN users can quickly break down community consensus on a particular topic and identify frequently mentioned positives and negatives.

Sample output: https://docs.google.com/spreadsheets/d/e/2PACX-1vQXaQU9XxsxnUIIeqmmf1PuYRuYtwviVXTv6Mz9Vo6_a4ty-XaJHSeZsptjWXS3wGGDG8Z4u16rvE7l/pubhtml

How it works
1. HN comments are imported via the Hackernews API node.
2. Comments are then inserted into a Qdrant collection, carefully tagged with the Hackernews API metadata.
3. Comments are then fetched and put through a clustering algorithm using the Python Code node (a sketch of this step follows below). The Qdrant points are returned in clustered groups.
4. Each group is looped over to fetch the payloads of the points and feed them to the AI agent to summarise and generate insights for.
5. The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the researcher or the HN user.

Requirements
- Works best with lots of comments!
- Qdrant vector store for storing embeddings.
- OpenAI account for embeddings and LLM.

Customising the Template
- Adjust the clustering parameters to make sense for your data.
- Adjust the sentiment setting if comments are overwhelmingly negative at times.
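For reference, the clustering step in the Python Code node can be sketched as follows: pull the points (with vectors) from Qdrant, cluster the vectors, and group the payloads. The collection name, scroll limit, and cluster count are assumptions to tune for your data.

```python
import numpy as np
from qdrant_client import QdrantClient
from sklearn.cluster import KMeans

client = QdrantClient(url="http://localhost:6333")  # assumption: local instance

# Fetch comment points with their embeddings (collection name is hypothetical).
points, _ = client.scroll(
    collection_name="hn_comments",
    limit=1000,
    with_vectors=True,
    with_payload=True,
)

vectors = np.array([p.vector for p in points])
labels = KMeans(n_clusters=8, n_init="auto").fit_predict(vectors)

# Group comment payloads by cluster for the AI agent to summarise.
clusters: dict[int, list] = {}
for point, label in zip(points, labels):
    clusters.setdefault(int(label), []).append(point.payload)

for cluster_id, payloads in clusters.items():
    print(cluster_id, len(payloads), "comments")
```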