by Avkash Kakdiya
## How it works

This workflow captures idea submissions from a webhook and enriches them using AI. It extracts key fields like Title, Tags, Submitted By, and Created date in IST format. The cleaned data is stored in a Notion database for centralized tracking. Finally, a confirmation message is posted in Slack to notify the team.

## Step-by-step

**1. Capture and process submission**
- **Webhook** – Receives idea submissions with text and user ID.
- **AI Agent & OpenAI Model** – Enrich and structure the input into Title, Tags, Submitted By, and Created fields.
- **Code** – Extracts clean data, formats tags, and prepares the entry for Notion (see the sketch at the end of this section).

**2. Store in Notion**
- **Add to Notion** – Creates a new database entry with mapped fields: Title, Submitted By, Tags, Created.

**3. Notify in Slack**
- **Send Confirmation (Slack)** – Posts a confirmation message with the submitted idea title.

## Why use this?

- Centralizes idea collection directly into Notion for better organization.
- Eliminates manual formatting with AI-powered data structuring.
- Ensures consistency in tags, submitter info, and timestamps.
- Provides instant team-wide visibility via Slack notifications.
- Saves time while keeping idea management streamlined and transparent.
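As a reference, here is a minimal sketch of what the **Code** step could look like. The input field names (`title`, `tags`, `submittedBy`) are assumptions for illustration; match them to your AI Agent node's actual output schema.

```javascript
// Hypothetical n8n Code node: normalize the AI agent's output for Notion.
const raw = $input.first().json;

// Tags may arrive as a comma-separated string or an array; normalize to a clean list.
const tags = (Array.isArray(raw.tags) ? raw.tags : String(raw.tags || '').split(','))
  .map(t => t.trim())
  .filter(Boolean);

// Format the created timestamp in IST (Asia/Kolkata, UTC+5:30).
const createdIst = new Date().toLocaleString('en-IN', { timeZone: 'Asia/Kolkata' });

return [{
  json: {
    title: (raw.title || 'Untitled idea').trim(),
    tags,
    submittedBy: raw.submittedBy || raw.userId || 'unknown',
    created: createdIst,
  },
}];
```

Formatting the timestamp here, rather than in Notion, keeps the IST convention consistent regardless of the n8n server's own timezone.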
by Avkash Kakdiya
## How it works

This workflow automatically generates and publishes marketing blog posts to WordPress using AI. It begins by checking your PostgreSQL database for unprocessed records, then uses OpenAI to create SEO-friendly, structured blog content. The content is formatted for WordPress, including categories, tags, and meta descriptions, before being published. After publishing, the workflow updates the original database record to track processing status and WordPress post details.

## Step-by-step

**Trigger workflow**
- Schedule Trigger – Runs the workflow at defined intervals.

**Fetch unprocessed record**
- PostgreSQL Trigger – Retrieves the latest unprocessed record from the database.
- Check Record Exists – Confirms the record is valid and ready for processing.

**Generate AI blog content**
- OpenAI Chat Model – Processes the record to generate blog content based on the title.
- Blog Post Agent – Structures AI output into JSON with title, content, excerpt, and meta description.

**Format and safeguard content**
- Code Node – Prepares structured data for WordPress, ensuring categories, tags, and error handling (see the sketch at the end of this section).

**Publish content and update database**
- WordPress Publisher – Publishes content to WordPress with proper categories, tags, and meta.
- Update Database – Marks the record as processed and stores the WordPress post ID, URL, and processing timestamp.

## Why use this?

- Automates end-to-end blog content generation and publishing.
- Ensures SEO-friendly and marketing-optimized posts.
- Maintains database integrity by tracking published content.
- Reduces manual effort and accelerates content workflow.
- Integrates PostgreSQL, OpenAI, and WordPress seamlessly for scalable marketing automation.
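A minimal sketch of the Code Node described above, assuming the Blog Post Agent emits `title`, `content`, `excerpt`, and `meta_description` keys (illustrative names; align them with your agent's actual JSON output):

```javascript
// Hypothetical safeguard step: validate and fill in AI output before publishing.
const post = $input.first().json;

// Guard against malformed AI output so the publish step never receives an empty post.
if (!post.title || !post.content) {
  throw new Error('AI output is missing a title or content; record not published.');
}

return [{
  json: {
    title: post.title.trim(),
    content: post.content,
    excerpt: post.excerpt || post.content.slice(0, 150),
    meta_description: post.meta_description || post.excerpt || '',
    // Fall back to default taxonomy if the AI omitted it.
    categories: post.categories?.length ? post.categories : ['Marketing'],
    tags: post.tags?.length ? post.tags : ['blog', 'ai-generated'],
    status: 'publish',
  },
}];
```

Throwing on missing fields (rather than publishing a partial post) lets n8n's error handling surface the bad record while the database row stays marked as unprocessed.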
by Connor Provines
# Analyze email performance and optimize campaigns with AI using SendGrid and Airtable

This n8n template creates an automated feedback loop that pulls email metrics from SendGrid weekly, tracks performance in Airtable, analyzes trends across the last 4 weeks, and generates specific recommendations for your next campaign. The system learns what works and provides data-driven insights directly to your email creation process.

## Who's it for

Email marketers and growth teams who want to continuously improve campaign performance without manual analysis. Perfect for businesses running regular email campaigns who need actionable insights based on real data rather than guesswork.

## Good to know

- After 4–6 weeks, expect a 15–30% improvement in primary metrics.
- Requires at least 2 weeks of historical data to generate meaningful analysis.
- The system improves over time as it learns from your audience.
- Implementation time: ~1 hour total.

## How it works

1. A schedule trigger runs weekly (typically Monday mornings).
2. Pulls the previous week's email statistics from SendGrid (delivered, opens, clicks, rates; see the sketch at the end of this section).
3. Updates the previous week's record in Airtable with actual performance data.
4. GPT-4 analyzes trends across the last 4 weeks, identifying patterns and opportunities.
5. Creates a new Airtable record for the upcoming week with specific recommendations: what to test, how to change it, expected outcome, and confidence level.
6. Your email creation workflow pulls these recommendations when generating new campaigns.
7. After sending, the actual email content is saved back to Airtable to close the loop.

## How to set up

1. Create an Airtable base: make a table called "Email Campaign Performance" with fields for week_ending, delivered, unique_opens, unique_clicks, open_rate, ctr, decision, test_variable, test_hypothesis, confidence_level, test_directive, implementation_instruction, subject_line_used, email_body, icp, use_case, baseline_performance, success_metric, target_improvement.
2. Configure SendGrid: add your API key to the "SendGrid Data Pull" node and test the connection.
3. Set up Airtable credentials: add a Personal Access Token and select your base/table in all Airtable nodes.
4. Add OpenAI credentials: configure the GPT-4 API key in the "Previous Week Analysis" node.
5. Test with sample data: manually add 2–3 weeks of data to Airtable, or run directly if you have historical data.
6. Schedule weekly runs: set the workflow to trigger every Monday at 9 AM (or after your weekly campaign sends).
7. Integrate with email creation: add an Airtable search node to your email workflow to retrieve current recommendations, and an update node to save what was sent.

## Requirements

- SendGrid account with API access (or a similar ESP with a statistics API)
- Airtable account with a Personal Access Token
- OpenAI API access (GPT-4)

## Customizing this workflow

- **Use a different email platform**: Replace the SendGrid node with Mailchimp, Brevo, or any ESP that provides a statistics API, and adjust the field mappings accordingly.
- **Add more metrics**: Extend the Airtable fields to track bounce rate, unsubscribe rate, spam complaints, or revenue attribution.
- **Change analysis frequency**: Adjust the schedule trigger for bi-weekly or monthly analysis instead of weekly.
- **Swap AI models**: Replace GPT-4 with Claude or Gemini in the analysis node.
- **Multi-campaign tracking**: Duplicate the workflow for different campaign types (newsletters, promotions, onboarding) with separate Airtable tables.
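A minimal sketch of shaping the weekly SendGrid pull into the Airtable row above. The response shape (`stats[0].metrics`) follows SendGrid's v3 `/stats` endpoint but is an assumption here; verify it against your "SendGrid Data Pull" node's actual output.

```javascript
// Hypothetical Code node: compute open_rate and ctr from raw weekly metrics.
const week = $input.first().json;
const m = week.stats?.[0]?.metrics ?? {};

const delivered = m.delivered ?? 0;
const uniqueOpens = m.unique_opens ?? 0;
const uniqueClicks = m.unique_clicks ?? 0;

return [{
  json: {
    week_ending: week.date,
    delivered,
    unique_opens: uniqueOpens,
    unique_clicks: uniqueClicks,
    // Guard against division by zero on weeks with no sends.
    open_rate: delivered ? +(uniqueOpens / delivered * 100).toFixed(2) : 0,
    ctr: delivered ? +(uniqueClicks / delivered * 100).toFixed(2) : 0,
  },
}];
```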
by Yusuke Yamamoto
This n8n template demonstrates a multi-modal AI recipe assistant that suggests delicious recipes based on user input, delivered via Telegram. The workflow can uniquely handle two types of input: a photo of your ingredients or a simple text list.

Use cases are many: get instant dinner ideas by taking a photo of your fridge contents, reduce food waste by finding recipes for leftover ingredients, or create a fun and interactive service for a cooking community or food delivery app!

## Good to know

- This workflow uses two different AI models (one for vision, one for text generation), so costs will be incurred for each execution. See OpenRouter Pricing or your chosen model provider's pricing page for updated info.
- The AI prompts are in English, but the final recipe output is configured to be in Japanese. You can easily change the language by editing the prompt in the Recipe Generator node.

## How it works

1. The workflow starts when a user sends a message or an image to your bot on Telegram via the Telegram Trigger.
2. An IF node intelligently checks whether the input is text or an image.
3. If an image is sent, the AI Vision Agent analyzes it to identify ingredients. A Structured Output Parser then forces this data into a clean JSON list.
4. If text is sent, a Set node directly prepares the user's text as the ingredient list.
5. Both paths converge, providing a standardized ingredient list to the Recipe Generator agent. This AI acts as a professional chef to create three detailed recipes.
6. Crucially, a second Structured Output Parser takes the AI's creative text and formats it into a reliable JSON structure (with name, difficulty, instructions, etc.). This ensures the output is always predictable and easy to work with.
7. A final Set node uses a JavaScript expression to transform the structured recipe data into a beautiful, emoji-rich, and easy-to-read message (see the sketch at the end of this section).
8. The formatted recipe suggestions are sent back to the user on Telegram.

## How to use

1. Configure the Telegram Trigger with your own bot's API credentials.
2. Add your AI provider credentials in the OpenAI Vision Model and OpenAI Recipe Model nodes (this template uses OpenRouter, but it can be swapped for a direct OpenAI connection).

## Requirements

- A Telegram account and a bot token.
- An AI provider account that supports vision and text models, such as OpenRouter or OpenAI.

## Customising this workflow

- Modify the prompt in the Recipe Generator to include dietary restrictions (e.g., "vegan," "gluten-free") or to change the number of recipes suggested.
- Swap the Telegram nodes for Discord, Slack, or a Webhook to integrate this recipe bot into a different platform or your own application.
- Connect to a recipe database API to supplement the AI's suggestions with existing recipes.
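For reference, here is a Code-node version of the formatting step described above. The recipe keys (`name`, `difficulty`, `instructions`) follow the Structured Output Parser schema mentioned in the description; adjust them to your parser's actual fields.

```javascript
// Hypothetical formatter: turn structured recipe JSON into an emoji-rich Telegram message.
const { recipes = [] } = $input.first().json;

const message = recipes.map((r, i) => [
  `🍳 Recipe ${i + 1}: ${r.name}`,
  `⭐ Difficulty: ${r.difficulty}`,
  `📝 ${r.instructions}`,
].join('\n')).join('\n\n――――――――――\n\n');

return [{ json: { message } }];
```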
by Rohit Dabra
# WooCommerce AI Agent — n8n Workflow (Overview)

**Description:** Turn your WooCommerce store into a conversational AI assistant — create products, place orders, run reports, and manage coupons using natural language via n8n + an MCP Server.

## Key features

- Natural-language commands mapped to WooCommerce actions (products, orders, reports, coupons).
- Structured JSON outputs + lightweight mapping to avoid schema errors (see the sketch at the end of this section).
- Calls routed through your MCP Server for secure, auditable tool execution.
- Minimal user prompts — the agent auto-fetches context and asks only when necessary.
- Extensible: add new tools or customize prompts/mappings easily.

Demo of the workflow: YouTube Video

## 🚀 Setup Guide: WooCommerce + AI Agent Workflow in n8n

### 1. Prerequisites

- Running n8n instance
- WooCommerce store with REST API keys
- OpenAI API key
- MCP server (production URL)

### 2. Import Workflow

1. Open the n8n dashboard.
2. Go to Workflows → Import.
3. Upload/paste the workflow JSON.
4. Save as "WooCommerce AI Agent".

### 3. Configure Credentials

**OpenAI**
- Create a new credential → OpenAI API.
- Add your API key → Save & test.

**WooCommerce**
- Create a new credential → WooCommerce API.
- Enter the Base URL, Consumer Key & Secret → Save & test.

**MCP Client**
- In the MCP Client node, set the Server URL to your MCP server's production URL.
- Add authentication if required.

### 4. Test Workflow

1. Open the workflow in the editor.
2. Run a sample request (e.g., create a test product).
3. Verify the product appears in WooCommerce.

### 5. Activate Workflow

Once tested, click Activate in n8n. The workflow is now live 🎉

### 6. Troubleshooting

- **Schema errors** → Ensure fields match the WooCommerce node requirements.
- **Connection issues** → Re-check credentials and the MCP URL.
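A minimal sketch of the "lightweight mapping" idea: translating the agent's structured output into a WooCommerce-ready product payload before the API call. The left-hand keys follow the WooCommerce REST API; the agent-side names (`productName`, `price`, `shortDescription`) are assumptions, so align them with your agent's output schema.

```javascript
// Hypothetical mapping step between the AI agent and the WooCommerce create-product call.
const agent = $input.first().json;

return [{
  json: {
    name: agent.productName,
    type: 'simple',
    // WooCommerce expects prices as strings, not numbers.
    regular_price: String(agent.price ?? '0'),
    description: agent.description || '',
    short_description: agent.shortDescription || '',
    status: agent.publish ? 'publish' : 'draft',
  },
}];
```

Pinning the mapping in a deterministic node like this, instead of trusting the model to emit exact WooCommerce field names, is what keeps schema errors out of the API call.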
by Rahul Joshi
## Description

Automatically compare candidate resumes to job descriptions (PDFs) from Google Drive, generate a 0–100 fit score with gap analysis, and update Google Sheets—powered by Azure OpenAI (GPT-4o-mini). Fast, consistent screening with saved reports in Drive. 📈📄

## What This Template Does

- Fetches job descriptions and resumes (PDF) from Google Drive. 📥
- Extracts clean text from both PDFs for analysis. 🧼
- Generates an AI evaluation (score, must-have gaps, nice-to-have bonuses, summary). 🤝
- Parses the AI output to structured JSON (see the sketch at the end of this section). 🧩
- Delivers a saved text report in Drive and updates a Google Sheet. 🗂️

## Key Benefits

- Saves time with automated, consistent scoring. ⏱️
- Clear gap analysis for quick decisions. 🔍
- Audit-ready reports stored in Drive. 🧾
- Centralized tracking in Google Sheets. 📊
- No-code operation after initial setup. 🧑‍💻

## Features

- Google Drive search and download for JDs and resumes. 📂
- PDF-to-text extraction for reliable parsing. 📝
- Azure OpenAI (GPT-4o-mini) comparison and scoring. 🤖
- Robust JSON parsing and error handling. 🛡️
- Automatic report creation in Drive. 💾
- Append or update candidate data in Google Sheets. 📑

## Requirements

- n8n instance (cloud or self-hosted).
- Google Drive credentials in n8n with access to the JD and resume folders (e.g., "JD store", "Resume_store").
- Azure OpenAI access with a deployed GPT-4o-mini model and credentials in n8n.
- Google Sheets credentials in n8n to append or update candidate rows.
- PDFs for job descriptions and resumes stored in the designated Drive folders.

## Target Audience

- Talent acquisition and HR operations teams. 🧠
- Recruiters (in-house and agencies). 🧑‍💼
- Hiring managers seeking consistent shortlisting. 🧭
- Ops teams standardizing candidate evaluation records. 🗃️

## Step-by-Step Setup Instructions

1. Connect your Google Drive and Google Sheets credentials in n8n and verify folder access. 🔑
2. Add Azure OpenAI credentials and select GPT-4o-mini in the AI node. 🧠
3. Import the workflow and assign credentials to all nodes (Drive, AI, Sheets). 📦
4. Set folder references for JDs ("JD store") and resumes ("Resume_store"). 📁
5. Run once to validate extraction, scoring, report creation, and sheet updates. ✅
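A minimal sketch of the "parse AI output to structured JSON" step. The expected keys (`score`, `must_have_gaps`, `nice_to_have_bonuses`, `summary`) and the `content` input field are assumptions; match them to the prompt and output of your Azure OpenAI node.

```javascript
// Hypothetical parsing step with soft failure handling.
const text = $input.first().json.content ?? '';

// Models sometimes wrap JSON in markdown fences; strip them first.
const cleaned = text.replace(/`{3}(?:json)?/g, '').trim();

let parsed;
try {
  parsed = JSON.parse(cleaned);
} catch (err) {
  // Fail soft: flag the row instead of crashing the whole run.
  parsed = { score: null, summary: 'PARSE_ERROR', raw: text };
}

return [{
  json: {
    score: parsed.score,
    must_have_gaps: parsed.must_have_gaps ?? [],
    nice_to_have_bonuses: parsed.nice_to_have_bonuses ?? [],
    summary: parsed.summary ?? '',
  },
}];
```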
by Davide
🤖📈 This workflow is my personal solution for the Agentic Arena Community Contest, where the goal is to build a Retrieval-Augmented Generation (RAG) AI agent capable of answering questions based on a provided PDF knowledge base.

## Key Advantages

- ✅ **End-to-End RAG Implementation** – Fully automates the ingestion, processing, and retrieval of knowledge from PDFs into a vector database.
- ✅ **Accuracy through Multi-Layered Retrieval** – Combines embeddings, Qdrant search, and Cohere reranking to ensure the agent retrieves the most relevant policy information.
- ✅ **Robust Evaluation System** – Includes an automated correctness evaluation pipeline powered by GPT-4.1 as a judge, ensuring transparent scoring and continuous improvement.
- ✅ **Citation-Driven Compliance** – The AI agent is instructed to provide citations for every answer, making it suitable for high-stakes use cases like policy compliance.
- ✅ **Scalability and Modularity** – Can easily integrate with different data sources (Google Drive, APIs, other storage systems) and be extended to new use cases.
- ✅ **Seamless Collaboration with Google Sheets** – Both the evaluation set and the results are integrated with Google Sheets, enabling easy monitoring, iteration, and reporting.
- ✅ **Cloud and Self-Hosted Flexibility** – Works with self-hosted Qdrant on Hetzner, Mistral Cloud for OCR, and OpenAI/Cohere APIs, combining local control with powerful cloud AI services.

## How it Works

**Knowledge Base Ingestion (the "Setup" execution):**
1. When started manually, the workflow first clears an existing Qdrant vector database collection.
2. It then searches a specified Google Drive folder for PDF files.
3. For each PDF found, it performs the following steps:
   - Uploads the file to the Mistral AI API.
   - Processes the PDF using Mistral's OCR service to extract text and convert it into a structured markdown format.
   - Splits the text into manageable chunks.
   - Generates embeddings for each text chunk using OpenAI's model.
   - Stores the embeddings in the Qdrant vector store, creating a searchable knowledge base.

**Agent Evaluation (the "Testing" execution):**
1. The workflow is triggered by an evaluation Google Sheet containing questions and correct answers.
2. For each question, the core AI Agent is activated. This agent:
   - Uses the RAG tool to search the pre-populated Qdrant vector store for relevant information from the PDFs.
   - Employs a Cohere reranker to refine the search results for the highest-quality context.
   - Leverages a GPT-4.1 model to generate an answer based strictly on the retrieved context.
3. The agent's answer is then passed to an "LLM as a Judge" (another GPT-4.1 instance), which compares it to the ground-truth answer from the evaluation sheet. The judge provides a detailed score (1–5) based on factual correctness and citation accuracy.
4. Finally, both the agent's answer and the correctness score are saved back to a Google Sheet for review.

## Set up Steps

To implement this solution, you need to configure the following components and credentials:

**Configure Core AI Services:**
- **OpenAI API credentials** – Required for the main AI agent, the judge LLM, and generating embeddings.
- **Mistral AI API credentials** – Necessary for the OCR service that processes PDF files.
- **Cohere API credentials** – Used for the reranker node that improves retrieval quality.
- **Google Service Accounts** – Set up OAuth for Google Sheets (to read questions and save results) and Google Drive (to access the PDF source files).

**Set up the Vector Database (Qdrant):**
- This workflow uses a self-hosted Qdrant instance. You must deploy and configure your own Qdrant server.
- Update the Qdrant Vector Store and RAG nodes with the correct API endpoint URL and credentials for your Qdrant instance.
- Ensure the collection name (agentic-arena) is created or matches your setup (a minimal creation sketch follows at the end of this section).

**Connect Data Sources:**
- **PDF source** – In the "Search PDFs" node, update the folderId parameter to point to your own Google Drive folder containing the contest PDFs.
- **Evaluation sheet** – In the "Eval Set" node, update the documentId to point to your own copy of the evaluation Google Sheet containing the test questions and answers.
- **Results sheet** – In the "Save Eval" node, update the documentId to point to the Google Sheet where you want to save the evaluation results.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
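As a reference, here is a hypothetical one-off snippet (runnable in an n8n Code node on a recent runtime with global `fetch`) to create the agentic-arena collection on your Qdrant instance. The vector size of 1536 assumes an OpenAI embedding model such as text-embedding-3-small; adjust it, the host URL, and the API key to your actual setup.

```javascript
// Hypothetical setup helper: create the Qdrant collection used by this workflow.
const QDRANT_URL = 'https://your-qdrant-host:6333'; // placeholder
const QDRANT_API_KEY = 'your-api-key';              // placeholder

const res = await fetch(`${QDRANT_URL}/collections/agentic-arena`, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'api-key': QDRANT_API_KEY,
  },
  body: JSON.stringify({
    // Vector size must match the embedding model's output dimension.
    vectors: { size: 1536, distance: 'Cosine' },
  }),
});

return [{ json: await res.json() }];
```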
by Moka Ouchi
## How it works

This workflow automates the creation and management of a daily space-themed quiz in your Slack workspace. It's a fun way to engage your team and learn something new about the universe every day!

- **Triggers Daily:** The workflow automatically runs at a scheduled time every day.
- **Fetches NASA's Picture of the Day:** It starts by fetching the latest Astronomy Picture of the Day (APOD) from the official NASA API, including its title, explanation, and image URL.
- **Generates a Quiz with AI:** Using the information from NASA, it prompts a Large Language Model (LLM) like OpenAI's GPT to create a unique, multiple-choice quiz question.
- **Posts to Slack:** The generated quiz is then posted to a designated Slack channel. The bot automatically adds numbered reactions (1️⃣, 2️⃣, 3️⃣, 4️⃣) to the message, allowing users to vote.
- **Waits and Tallies Results:** After a configurable waiting period, the workflow retrieves all reactions on the quiz message. A custom code node then tallies the votes, identifies the users who answered correctly, and calculates the total number of participants (see the sketch at the end of this section).
- **Announces the Winner:** Finally, it posts a follow-up message in the same channel, revealing the correct answer, giving a detailed explanation, and mentioning all the users who got it right.

## Set up steps

This template should take about 10–15 minutes to set up.

1. **Credentials:**
   - NASA: Add your NASA API credentials in the Get APOD node. You can get a free API key from the NASA API website.
   - OpenAI: Add your OpenAI API credentials in the OpenAI: Create Quiz node.
   - Slack: Add your Slack API credentials to all the Slack nodes. You'll need to create a Slack App with the following permissions: chat:write, reactions:read, and reactions:write.
2. **Configuration:** In the Workflow Configuration node, set your channelId to the Slack channel where you want the quiz to be posted. You can also customize the quizDifficulty, llmTone, and answerTimeoutMin to fit your audience.
3. **Activate Workflow:** Once configured, simply activate the workflow. It will run automatically at the time specified in the Schedule Trigger node (the default is 21:00 daily).

## Requirements

- An n8n instance
- A NASA API key
- An OpenAI API key
- A Slack App with the appropriate permissions and API credentials
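A minimal sketch of the vote-tallying code node described above. It assumes the Slack reactions response arrives on a `message` field and that the correct answer index and the bot's user ID were passed along from earlier nodes (both assumed field names):

```javascript
// Hypothetical tally step for the quiz reactions.
const input = $input.first().json;
const reactions = input.message?.reactions ?? [];
const correctIndex = input.correctIndex; // 1-4, assumed to come from the quiz step
const botUserId = input.botUserId;       // assumed field: the bot's own user ID

// Slack emoji names for 1️⃣, 2️⃣, 3️⃣, 4️⃣.
const names = ['one', 'two', 'three', 'four'];

const votes = names.map(name => {
  const r = reactions.find(x => x.name === name);
  // Exclude the bot's own seed reaction from the count.
  return r ? r.users.filter(u => u !== botUserId) : [];
});

const winners = votes[correctIndex - 1] ?? [];
const totalParticipants = new Set(votes.flat()).size;

return [{ json: { winners, totalParticipants } }];
```

Counting participants via a `Set` ensures a user who reacted to more than one option is only counted once.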
by David Olusola
# AI Resume Screening with GPT-4o & Google Drive - Automated Hiring Pipeline

## How it works

Transform your hiring process with this intelligent automation that screens resumes in minutes, not hours. The workflow monitors your Gmail inbox, processes resume attachments using AI analysis, and delivers structured candidate evaluations to a centralized Google Sheets dashboard.

Key workflow steps:
1. **Email Detection** – Monitors Gmail for resume attachments (PDF, DOCX, TXT).
2. **File Processing** – Uploads to Google Drive and extracts text content (see the routing sketch at the end of this section).
3. **AI Analysis** – GPT-4o evaluates candidates against job requirements.
4. **Data Extraction** – Pulls contact info and key qualifications automatically.
5. **Results Logging** – Saves structured analysis to Google Sheets for team review.

## Set up steps

Total setup time: 15–20 minutes

**Required Credentials (5 minutes)**
- Gmail account with OAuth2 access
- Google Drive API credentials
- Google Sheets API access
- OpenAI API key for GPT-4o

**Configuration Steps (10 minutes)**
1. Connect the Gmail trigger – Authorize email monitoring.
2. Set up a Google Drive folder – Choose the destination for resume files.
3. Create a tracking spreadsheet – Copy the provided Google Sheets template.
4. Add OpenAI credentials – Insert your API key for AI analysis.
5. Customize the job description – Update the role requirements in the "Job Description" node.

**Optional Customization (5 minutes)**
- Modify the AI scoring criteria in the recruiter prompt.
- Adjust the candidate information extraction fields.
- Customize the Google Sheets column mapping.

No coding required – all configuration happens through the n8n interface using pre-built nodes and simple dropdown selections.

## Template Features

**Smart File Handling**
- Supports PDF, Word documents, and plain-text resumes
- Automatic format conversion and text extraction
- Intelligent routing based on file type

**AI-Powered Analysis**
- GPT-4o evaluation against job requirements
- Structured scoring with a strengths/weaknesses breakdown
- Risk and opportunity assessment for each candidate
- Actionable next-steps recommendations

**Seamless Integration**
- Direct Gmail inbox monitoring
- Automatic Google Drive file organization
- Real-time Google Sheets dashboard updates
- Clean data extraction for CRM integration

**Professional Output**
- Standardized candidate scoring (1–10 scale)
- Detailed justification for each evaluation
- Contact information extraction
- Resume quality validation

Perfect for HR teams, recruiting agencies, and growing companies looking to streamline their hiring pipeline with intelligent automation.
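As a reference, here is a minimal sketch of the file-type routing idea behind the "Smart File Handling" feature. A Switch node works just as well; this Code-node version assumes the Gmail trigger placed the attachment under the `attachment_0` binary key (adjust to your node's output):

```javascript
// Hypothetical routing helper: classify the resume attachment by MIME type.
const item = $input.first();
const mime = item.binary?.attachment_0?.mimeType ?? '';

let route;
if (mime === 'application/pdf') route = 'pdf';
else if (mime.includes('wordprocessingml')) route = 'docx'; // DOCX MIME type
else if (mime === 'text/plain') route = 'txt';
else route = 'unsupported';

return [{ json: { ...item.json, route } }];
```

Downstream, an IF or Switch node can branch on the `route` field to send each file to the matching text-extraction node.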
by Guillaume Duvernay
Create truly authoritative articles that blend your unique internal expertise with the latest, most relevant information from the web. This template orchestrates an advanced "hybrid research" content process that delivers unparalleled depth and credibility.

Instead of a simple prompt, this workflow first uses an AI planner to deconstruct your topic into key questions. Then, for each question, it performs a dual-source query: it searches your trusted Lookio knowledge base for internal facts and simultaneously uses Linkup to pull fresh insights and sources from the live web. This comprehensive "super-brief" is then handed to a powerful AI writer to compose a high-quality article, complete with citations from both your own documents and external web pages.

## 👥 Who is this for?

- **Content marketers & SEO specialists:** Scale the creation of authoritative content that is both grounded in your brand's facts and enriched with timely, external sources for maximum credibility.
- **Technical writers & subject matter experts:** Transform complex internal documentation into rich, public-facing articles by supplementing your core knowledge with external context and recent data.
- **Marketing agencies:** Deliver exceptional, well-researched articles for clients by connecting the workflow to their internal materials (via Lookio) and the broader web (via Linkup) in one automated process.

## 💡 What problem does this solve?

- **The best of both worlds:** Combines the factual reliability of your own knowledge base with the timeliness and breadth of a web search, resulting in articles with unmatched depth.
- **Minimizes AI "hallucinations":** Grounds the AI writer in two distinct sets of factual, source-based information (your internal documents and credible web pages), dramatically reducing the risk of invented facts.
- **Maximizes credibility:** Automates the inclusion of source links from both your internal knowledge base and external websites, boosting reader trust and demonstrating thorough research.
- **Ensures comprehensive coverage:** The AI-powered "topic breakdown" ensures a logical structure, while the dual-source research for each point guarantees no stone is left unturned.
- **Fully automates an expert workflow:** Mimics the entire process of an expert research team (outline, internal review, external research, consolidation, writing) in a single, scalable workflow.

## ⚙️ How it works

This workflow orchestrates a sophisticated, multi-step "Plan, Dual-Research, Write" process:

1. **Plan (decomposition):** You provide an article title and guidelines via the built-in form. An initial AI call acts as a "planner," breaking down the main topic into an array of logical sub-questions.
2. **Dual research (knowledge base + web search):** The workflow loops through each sub-question and performs two research actions in parallel:
   - It queries your Lookio assistant to retrieve relevant information and source links from your uploaded documents.
   - It queries Linkup to perform a targeted web search, gathering up-to-date insights and their source URLs.
3. **Consolidate (brief creation):** All the retrieved information, internal and external, is compiled into a single, comprehensive research brief for each sub-question (a minimal sketch of this step follows below).
4. **Write (final generation):** The complete, source-rich brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article based only on the provided research and integrate all source links as hyperlinks.
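A minimal sketch of the consolidation step, assuming each incoming item carries one sub-question with both research results. The field names (`subQuestion`, `lookioAnswer`, `linkupAnswer`, `sources`) are illustrative; map them to the actual outputs of your two query nodes.

```javascript
// Hypothetical Code node: merge internal and external research into one brief.
const sections = $input.all().map(({ json }) => [
  `## ${json.subQuestion}`,
  `**Internal knowledge base:** ${json.lookioAnswer}`,
  `**Web research:** ${json.linkupAnswer}`,
  `**Sources:** ${(json.sources ?? []).join(', ')}`,
].join('\n'));

return [{ json: { brief: sections.join('\n\n') } }];
```

Labeling each block by origin ("Internal knowledge base" vs. "Web research") lets the writer prompt instruct the model on how to weight and cite each source type.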
## 🛠️ Setup

1. **Set up your Lookio assistant:** Sign up at Lookio, upload your documents to create a knowledge base, and create a new assistant. In the Query Lookio Assistant node, paste your Assistant ID in the body and add your Lookio API Key for authentication (we recommend a Bearer Token credential).
2. **Connect your Linkup account:** In the Query Linkup for AI web-search node, add your Linkup API key for authentication (we recommend a Bearer Token credential). Linkup's free plan is very generous.
3. **Connect your AI provider:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes.
4. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first hybrid-research article!

## 🚀 Taking it further

- **Automate publishing:** Connect the final Article result node to a Webflow or WordPress node to automatically create draft posts in your CMS.
- **Generate content in bulk:** Replace the Form Trigger with an Airtable or Google Sheet trigger to generate a batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final New content - Generate the AI output node to match your brand's tone of voice, prioritize internal vs. external sources, or add SEO keywords.
by Guillaume Duvernay
Go beyond basic Retrieval-Augmented Generation (RAG) with this advanced template. While a simple RAG setup can answer straightforward questions, it often fails when faced with complex queries and can be polluted by irrelevant information. This workflow introduces a sophisticated architecture that empowers your AI agent to think and act like a true research assistant.

By decoupling the agent from the knowledge base with a smart sub-workflow, this template enables multi-query decomposition, relevance-based filtering, and an intermediate reasoning step. The result is an AI agent that can handle complex questions, filter out noise, and synthesize high-quality, comprehensive answers based on your data in Supabase.

## Who is this for?

- **AI and automation developers:** Anyone building sophisticated Q&A bots, internal knowledge base assistants, or complex research agents.
- **n8n power users:** Users looking to push the boundaries of AI agents in n8n by implementing production-ready, robust architectural patterns.
- **Anyone building a RAG system:** This provides a superior architectural pattern that overcomes the common limitations of basic RAG setups, leading to dramatically better performance.

## What problem does this solve?

- **Handles complex questions:** A standard RAG agent sends one query and gets one set of results. This agent is designed to break down a complex question like "How does natural selection work at the molecular, organismal, and population levels?" into multiple, targeted sub-queries, ensuring all facets of the question are answered.
- **Prevents low-quality answers:** A simple RAG agent can be fed irrelevant information if the semantic search returns low-quality matches. This workflow includes a crucial **relevance filtering** step, discarding any data chunks that fall below a set similarity score, ensuring the agent only reasons with high-quality context.
- **Improves answer quality and coherence:** By introducing a dedicated **"Think" tool**, the agent has a private scratchpad to synthesize the information it has gathered from multiple queries. This intermediate reasoning step allows it to connect the dots and structure a more comprehensive and logical final answer.
- **Gives you more control and flexibility:** By using a sub-workflow to handle data retrieval, you can add any custom logic you need (like filtering, formatting, or even calling other APIs) without complicating the main agent's design.

## How it works

This template consists of a main agent workflow and a smart sub-workflow that handles knowledge retrieval.

1. **Multi-query decomposition:** When you ask the AI Agent a complex question, its system prompt instructs it to first break it down into an array of multiple, simpler sub-queries.
2. **Decoupling with a sub-workflow:** The agent doesn't have direct access to the vector store. Instead, it calls a "Query knowledge base" tool, which is a sub-workflow. It sends the entire array of sub-queries to this sub-workflow in a single tool call.
3. **Iterative retrieval & filtering (in the sub-workflow):** The sub-workflow loops through each sub-query. For each one, it queries your Supabase Vector Store. It then checks the similarity score of the returned data chunks and uses a Filter node to discard any that are not highly relevant (the default is a score > 0.4; see the sketch at the end of this section).
4. **Intermediate reasoning step:** The sub-workflow returns all the high-quality, filtered information to the main agent. The agent is then instructed to use its Think tool to review this information, synthesize the key points, and structure a plan for its final, comprehensive answer.

## Setup

1. **Connect your accounts:**
   - Supabase: In the sub-workflow ("RAG sub-workflow"), connect your Supabase account to the Supabase Vector Store node and select your table.
   - OpenAI: Connect your OpenAI account in two places: the Embeddings OpenAI node (in the sub-workflow) and the OpenAI Chat Model node (in the main workflow).
2. **Customize the agent's purpose:** In the main workflow, edit the AI Agent's system prompt. Change the context from a "biology course" to whatever your knowledge base is about.
3. **Adjust the relevance filter:** In the sub-workflow, you can change the 0.4 threshold in the Filter node to be more or less strict about the quality of the information you want the agent to use.
4. **Activate the workflow** and start asking complex questions!

## Taking it further

- **Integrate different vector stores:** The logic is decoupled, so you can easily swap the Supabase Vector Store node in the sub-workflow with a Pinecone, Weaviate, or any other vector store node without changing the main agent's logic.
- **Add more tools:** Give the main agent other capabilities, like a web search or a way to interact with your tech stack. The agent can then decide whether to use its internal knowledge base, search the web, or both, to answer a question.
- **Better prompting:** You could further refine the agent's system prompt to improve its ability to provide high-quality answers by leveraging the provided chunks even more effectively.
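For reference, a Code-node equivalent of the relevance filter described above. It assumes the similarity score is returned on a `score` field alongside each retrieved chunk (field name may vary by vector store node configuration):

```javascript
// Hypothetical filter: drop any retrieved chunk at or below the similarity threshold.
const THRESHOLD = 0.4; // mirrors the Filter node's default in this template

return $input.all().filter(item => (item.json.score ?? 0) > THRESHOLD);
```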
by Cheng Siong Chin
## How It Works

Every day at 8 AM, the workflow automatically retrieves the latest F1 data, including driver standings, qualifying results, race schedules, and circuit information. All sources are merged into a unified dataset, and driver performance metrics are computed using historical trends. An AI agent, enhanced with vectorized race history, evaluates patterns and generates race-winner predictions. When the confidence score exceeds the defined threshold, the system pushes an automated Slack alert and records the full analysis in the database and Google Sheets (see the sketch at the end of this section).

## Setup Steps

1. Update the workflow configuration with: newsApiUrl, weatherApiUrl, historicalYears, and confidenceThreshold.
2. Connect PostgreSQL using the schema: prediction_date, predicted_winner, confidence_score, prediction_source, data_version, full_analysis.
3. Provide the Slack channel ID for sending high-confidence alerts.
4. Specify the Google Sheets document ID and sheet name for prediction logging.
5. Test connectivity to the Ergast API (no authentication required).

## Prerequisites

- OpenAI account (GPT-4o access)
- Slack workspace admin access
- PostgreSQL instance
- Google Sheets account
- n8n instance with LangChain community nodes enabled

## Customization

- Extend by adding constructor predictions (modify the AI prompt).
- Integrate Discord or Teams instead of Slack.

## Benefits

Saves time by automating data collection, and improves accuracy using multiple performance metrics and historical patterns.
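A minimal sketch of the confidence gate before the Slack alert and database insert. The output keys mirror the PostgreSQL schema listed in the setup steps; the AI-output field names (`winner`, `confidence`, `analysis`) are assumptions to align with your agent's actual output.

```javascript
// Hypothetical Code node: shape the prediction row and flag high-confidence results.
const prediction = $input.first().json;
const confidenceThreshold = 0.75; // from the workflow configuration

return [{
  json: {
    prediction_date: new Date().toISOString().slice(0, 10),
    predicted_winner: prediction.winner,
    confidence_score: prediction.confidence,
    prediction_source: 'ai-agent',
    data_version: prediction.dataVersion ?? 'v1',
    full_analysis: JSON.stringify(prediction.analysis ?? {}),
    // A downstream IF node can branch on this flag to send the Slack alert.
    alert: prediction.confidence >= confidenceThreshold,
  },
}];
```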