by zahir khan
Screen resumes & save candidate scores to Notion with OpenAI

This template helps you automate the initial screening of job candidates by analyzing resumes against your specific job descriptions using AI.

📺 How It Works

The workflow automatically monitors a Notion database for new job applications. When a new candidate is added:

1. It checks if the candidate has already been processed to avoid duplicates.
2. It downloads the resume file (supporting both PDF and DOCX formats).
3. It extracts the raw text and sends it to OpenAI along with the specific job description and requirements.
4. The AI acts as a "Senior Technical Recruiter," scoring the candidate on skills, experience, and stability.
5. Finally, it updates the Notion entry with a fit score (0-100), a one-line summary, detected skills, and a detailed analysis.

📄 Notion Database Structure

You will need two databases in Notion: Jobs (containing descriptions/requirements) and Candidates (containing resume files).

**Candidates DB Fields:** AI Comments (Text), Resume Score (Text), Top Skills Detected (Text), Feedback (Select), One Line Summary (Text), Resume File (Files & Media).

**Jobs DB Fields:** Job Description (Text), Requirements (Text).

👤 Who’s it for

This workflow is for recruiters, HR managers, founders, and hiring teams who want to reduce the time spent on manual resume screening. Whether you are handling high-volume applications or looking for specific niche skills, this tool ensures every resume gets a consistent, unbiased first-pass review.

🔧 How to set up

1. Create the required databases in Notion (as described above).
2. Import the .json workflow into your n8n instance.
3. Set up credentials for Notion and OpenAI.
4. Link those credentials in the workflow nodes.
5. Update database IDs: open the "Fetch Job Description" and "On New Candidate" nodes and select your specific Notion databases.
6. Run a test with a sample candidate and validate the output in Notion.
📋 Requirements

- An n8n instance (Cloud or self-hosted)
- A Notion account
- OpenAI API key (GPT-4o or GPT-4 Turbo recommended for best reasoning)

🧩 How to customize the workflow

The system is fully modular. You can:

- **Adjust the Persona:** In the Analyze Candidate agent nodes, edit the system prompt to change the "Recruiter" persona (e.g., make it stricter or focus on soft skills).
- **Change Scoring:** Modify the scoring matrix in the prompt to weight "Education" or "Experience" differently.
- **Filter Logic:** Add a node to automatically disqualify candidates below a certain score (e.g., < 50) and move them to a "Rejected" status in Notion.
- **Multi-language:** Update the prompt to translate summaries into your local language if the resume is in English.
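The last step of the flow writes the AI's verdict back to Notion, which is easiest to keep robust if the model's JSON output is normalized first. A minimal Code-node sketch, assuming hypothetical output field names (fitScore, summary, skills, analysis) rather than the template's exact schema:

```javascript
// Sketch: clamp and map the AI's JSON verdict onto the Notion fields
// described above. The input keys are assumptions, not the template's schema.
function normalizeVerdict(raw) {
  const data = typeof raw === 'string' ? JSON.parse(raw) : raw;
  const score = Math.min(100, Math.max(0, Number(data.fitScore) || 0)); // keep within 0-100
  return {
    'Resume Score': String(score),
    'One Line Summary': (data.summary || '').slice(0, 200),
    'Top Skills Detected': Array.isArray(data.skills) ? data.skills.join(', ') : '',
    'AI Comments': data.analysis || '',
  };
}
```

Clamping the score guards against the occasional out-of-range value an LLM can return, so the Notion database never receives a score outside 0-100.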
by Jaruphat J.
⚠️ Note: All sensitive credentials should be set via n8n Credentials or environment variables. Do not hardcode API keys in nodes.

Who’s it for

Marketers, creators, and automation builders who want to generate UGC-style ad images automatically from a Google Sheet. Ideal for e‑commerce SKUs, agencies, or teams that need many variations quickly.

What it does (Overview)

This template turns a spreadsheet row into ad images ready for campaigns.

- **Zone 1 — Create Ad Image**: Reads product rows, downloads the image, analyzes it, generates prompts, and appends results back into the Google Sheet.
- **Zone 2 — Create Image (Fal nano‑banana)**: Generates ad image variations, polls the Fal.ai API until done, uploads to Drive, and updates the sheet with output URLs.

Requirements

- **Fal.ai API key** (env: FAL_KEY)
- **Google Sheets / Google Drive** OAuth2 credentials
- **OpenAI (Vision/Chat)** for image analysis
- A Google Sheet with columns for product and output
- Google Drive files set to Anyone with link → Viewer so APIs can fetch them

How to set up

1. Credentials: Add Google Sheets + Google Drive (OAuth2), Fal.ai (Header Auth with Authorization: Key {{$env.FAL_KEY}}), and OpenAI.
2. Google Sheet: Create sheets with the following headers.

Sheet: product
product_id | product_name | product_image_url | product_description | campaign | brand_notes | constraints | num_variations | aspect_ratio | model_target | status

Sheet: ad_image
scene_ref | product_name | prompt | status | output_url

3. Import the workflow: Use the provided JSON. Confirm node credentials resolve.
4. Run: Start with Zone 1 to verify the prompt-only flow, then test Zone 2 for image generation.

Zone 1 — Create Ad Image (Prompt-only)

Reads the product row, normalizes the Drive link, analyzes the image, generates structured ad prompts, and appends them to the ad_image sheet.

Zone 2 — Create Image (Fal nano‑banana)

Reads the product row, converts the Drive link, generates image(s) with Fal nano‑banana, polls until complete, uploads to Drive, and updates the sheet.
Node settings (high‑level)

Drive Link Parser (Set):

```javascript
{{ (() => {
  const u = $json.product || '';
  const q = u.match(/[?&]id=([\-\w]{25,})/);
  const d = u.match(/\/d\/([\-\w]{25,})/);
  const any = u.match(/[\-\w]{25,}/);
  const id = q?.[1] || d?.[1] || (any ? any[0] : '');
  return id ? 'https://drive.google.com/uc?export=view&id=' + id : '';
})() }}
```

How to customize the workflow

- Adjust AI prompts to change ad style (luxury, cozy, techy).
- Change the aspect ratio for TikTok/IG/Shorts (9:16, 1:1, 16:9).
- Extend the Sheet schema for campaign labels, audiences, hashtags.
- Add distribution (Slack/LINE/Telegram) after the Drive upload.

Troubleshooting

- **JSON parameter needs to be valid JSON** → Ensure expressions return objects, not strings.
- **403 on images** → Make Drive files public (Viewer) and convert links.
- **Job never completes** → Check status_url, retry with -fast models or off‑peak times.

Template metadata

**Uses:** Google Sheets, Google Drive, HTTP Request, Wait/If/Switch, Code, OpenAI Vision/Chat, Fal.ai models (nano‑banana)

Visuals: Workflow Diagram, Example Product Image, Product Image - nano Banana
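Zone 2's poll-until-done step can be thought of as a small decision function evaluated after each status check. This is an illustrative sketch only, not the template's actual node logic; the status names and the response_url field are assumptions based on Fal.ai's queue-style API conventions:

```javascript
// Sketch: decide what the loop should do next after reading a Fal.ai status
// payload. Status values and field names are assumptions for illustration.
function pollPlan(statusPayload, attempt, { maxAttempts = 30, baseDelayMs = 2000 } = {}) {
  if (statusPayload.status === 'COMPLETED') {
    return { action: 'fetch', url: statusPayload.response_url };
  }
  if (attempt >= maxAttempts) {
    return { action: 'abort', reason: 'timeout' }; // give up instead of looping forever
  }
  // Linear backoff capped at 10 s, mirroring a Wait node with a dynamic interval.
  return { action: 'wait', delayMs: Math.min(baseDelayMs * (attempt + 1), 10000) };
}
```

Capping the attempts is what prevents the "Job never completes" failure mode in Troubleshooting from stalling the whole run.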
by Jaruphat J.
LINE OCR Workflow to Extract and Save Thai Government Letters to Google Sheets

This template automates the extraction of structured data from Thai government letters received via LINE or uploaded to Google Drive. It uses Mistral AI for OCR and OpenAI for information extraction, saving results to a Google Sheet.

Who’s it for?

- Thai government agencies or teams receiving official documents via LINE or Google Drive
- Automation developers working with document intake and OCR
- Anyone needing to extract fields from Thai scanned letters and store structured info

What it does

This n8n workflow:

1. Receives documents from two sources: a LINE webhook (via the Messaging API) and Google Drive (new file trigger)
2. Checks the file type (PDF or image)
3. Runs OCR with Mistral AI (Document or Image model)
4. Uses OpenAI to extract key metadata such as: book_id, subject, recipient (to), signatory, date, contact info, etc.
5. Stores structured data in Google Sheets
6. Replies to the LINE user with the extracted info or moves files into archive folders (Drive)

How to Set It Up

1. Create a Google Sheet with a tab named data and the following columns: book_id, date, subject, to, attach, detail, signed_by, signed_by_position, contact_phone, contact_email, download_url
2. Set up the required credentials: googleDriveOAuth2Api, googleSheetsOAuth2Api, httpHeaderAuth for the LINE Messaging API, openAiApi, mistralCloudApi
3. Define environment variables: LINE_CHANNEL_ACCESS_TOKEN, GDRIVE_INVOICE_FOLDER_ID, GSHEET_ID, MISTRAL_API_KEY
4. Deploy the webhook to receive files from the LINE Messaging API (path: /line-invoice)
5. Monitor Drive uploads using the Google Drive Trigger

How to Customize the Workflow

- Adjust the information extraction schema in the OpenAI Information Extractor node to match your document layout
- Add logic for different document types if you have more than one format
- Modify the LINE reply message format or use Flex Message
- Update the Move File node if you want to archive to a different folder

Requirements

- n8n self-hosted or cloud instance
- Google account with access to Drive and Sheets
- LINE Developer Account
- OpenAI API key
- Mistral Cloud API key

Notes

- Community nodes used: @n8n/n8n-nodes-base.mistralAi
- This workflow supports both document images and PDF files
- File handling is done dynamically via MIME type
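Before appending to the data tab, it helps to map whatever the extractor returned onto the exact column order of the sheet, so a missing field produces an empty cell rather than a misaligned row. A minimal sketch (the column list comes from the sheet layout above; the row-building helper itself is an assumption, not a template node):

```javascript
// Sketch: turn the extractor's (possibly partial) output into a row whose
// cells line up with the Google Sheet columns described in the setup steps.
const COLUMNS = [
  'book_id', 'date', 'subject', 'to', 'attach', 'detail',
  'signed_by', 'signed_by_position', 'contact_phone', 'contact_email', 'download_url',
];

function toRow(extracted) {
  return COLUMNS.map((col) => extracted[col] ?? ''); // missing fields become empty cells
}
```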
by Dele Odufuye
n8n OpenAI-Compatible API Endpoints

Transform your n8n workflows into OpenAI-compatible API endpoints, allowing you to access multiple workflows as selectable AI models through a single integration.

What This Does

This workflow creates two API endpoints that mimic the OpenAI API structure:

- /models - Lists all n8n workflows tagged with aimodel (or any other tag of your choice)
- /chat/completions - Executes chat completions with your selected workflows, supporting both text and streamed responses

Benefits

- Access Multiple Workflows: Connect to all your n8n agents through one API endpoint instead of creating separate pipelines for each workflow.
- Universal Platform Support: Works with any application that supports OpenAI-compatible APIs, including OpenWebUI, Microsoft Teams, Zoho Cliq, and Slack.
- Simple Workflow Management: Add new workflows by tagging them with aimodel. No code changes needed.
- Streaming Support: Handles both standard responses and streaming for real-time agent interactions.

How to Use

1. Download the workflow JSON file from this repository
2. Import it into your n8n instance
3. Tag your workflows with aimodel to make them accessible through the API
4. Create a new OpenAI credential in n8n and change the Base URL to point to your n8n webhook endpoints. Learn more about OpenAI Credentials
5. Point your chat applications to your n8n webhook URL as if it were an OpenAI API endpoint

Requirements

- n8n instance (self-hosted or cloud)
- Workflows you want to expose as AI models
- Any OpenAI-compatible chat application

Documentation

For detailed setup instructions and an implementation guide, visit https://medium.com/@deleodufuye/how-to-create-openai-compatible-api-endpoints-for-multiple-n8n-workflows-803987f15e24.

Inspiration

This approach was inspired by Jimleuk’s workflow on n8n Templates.
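For the /models endpoint to work with OpenAI clients, the webhook must return the list-shaped payload those clients expect. A sketch of that mapping, assuming a hypothetical array of tagged workflow objects (the exact workflow fields are an assumption, but the outer response shape follows OpenAI's models-list format):

```javascript
// Sketch: map n8n workflows tagged "aimodel" onto an OpenAI-style /models
// response so chat clients can list them as selectable models.
function toModelsResponse(workflows) {
  return {
    object: 'list',
    data: workflows.map((wf) => ({
      id: wf.name, // shown as the model name in the client's model picker
      object: 'model',
      created: wf.createdAt ? Math.floor(Date.parse(wf.createdAt) / 1000) : 0,
      owned_by: 'n8n',
    })),
  };
}
```

Using the workflow name as the model id is what lets a client's model dropdown double as a workflow selector.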
by 中崎功大
Smart Irrigation Scheduler with Weather Forecast and Soil Analysis

Summary

Automated garden and farm irrigation system that uses weather forecasts and evapotranspiration calculations to determine optimal watering schedules, preventing water waste while maintaining healthy plants.

Detailed Description

A comprehensive irrigation management workflow that analyzes weather conditions, forecasts, soil types, and plant requirements to make intelligent watering decisions. The system considers multiple factors, including expected rainfall, temperature, humidity, wind speed, and days since last watering, to determine whether irrigation is needed and how much.

Key Features

- **Multi-Zone Management**: Support for multiple irrigation zones with different plant and soil types
- **Weather-Based Decisions**: Uses OpenWeatherMap current conditions and the 5-day forecast
- **Evapotranspiration Calculation**: Simplified Penman method for accurate water-loss estimation
- **Rain Forecast Skip**: Automatically skips watering when significant rain is expected
- **Plant-Type Specific**: Different requirements for flowers, vegetables, grass, and shrubs
- **Soil Type Consideration**: Adjusts for clay, loam, and sandy soil characteristics
- **Urgency Classification**: High/medium/low priority based on moisture levels
- **Optimal Timing**: Adjusts watering time based on temperature and wind conditions
- **IoT Integration**: Sends commands to smart irrigation controllers
- **Historical Logging**: Tracks all decisions in Google Sheets

Use Cases

- Home garden automation
- Commercial greenhouse management
- Agricultural operations
- Landscaping company scheduling
- Property management with large grounds
- Water conservation projects

Required Credentials

- OpenWeatherMap API key
- Slack Bot Token
- Google Sheets OAuth
- IoT Hub API (optional)

Node Count: 24 (19 functional + 5 sticky notes)

Unique Aspects

- Uses the OpenWeatherMap node (rarely used in templates)
- Uses the Split Out node for loop-style processing of zones
- Uses the Filter node for conditional routing
- Uses the Aggregate node to collect results
- Implements the evapotranspiration calculation using a Code node
- Comprehensive multi-factor decision logic

Workflow Architecture

```
[Daily Morning Check] / [Manual Override Trigger]
            |
            v
[Define Irrigation Zones]
            |
            v
   [Split Zones] (Loop)
       /          \
[Get Current]  [Get 5-Day Forecast]
       \          /
            v
 [Merge Weather Data]
            |
            v
[Analyze Irrigation Need]
       /          \
[Filter Needing] [Aggregate All]
       \          /
            v
[Generate Irrigation Schedule]
            |
            v
[Has Irrigation Tasks?] (If)
   Has Tasks → [Google Sheets] + [Slack]
   No Tasks  → [Log No Action]
            |
            v
  [Respond to Webhook]
```

Configuration Guide

1. Irrigation Zones: Edit "Define Irrigation Zones" with your zone data (coordinates, plant/soil types)
2. Water Thresholds: Adjust waterThreshold per zone based on plant needs
3. OpenWeatherMap: Add API credentials in the weather nodes
4. Slack Channel: Set to your garden/irrigation channel
5. IoT Integration: Configure the endpoint URL for your smart valve controller
6. Google Sheets: Connect to your logging spreadsheet

Decision Logic

The system evaluates:

- Expected rainfall in the next 24 hours (skip if >5 mm expected)
- Soil moisture estimate based on days since watering + evapotranspiration
- Plant-specific minimum and ideal moisture levels
- Temperature adjustments for hot days
- Scheduled watering frequency by plant type
- Wind speed for optimal watering time
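The decision logic above can be sketched as two small functions: a rough water-loss estimate and a watering gate. This is an illustrative sketch only, with made-up coefficients, not the template's calibrated Penman calculation; only the "skip if >5 mm rain" rule comes directly from the description:

```javascript
// Sketch: a toy evapotranspiration estimate in the spirit of the Code node.
// Coefficients are illustrative assumptions, not calibrated agronomy values.
function estimateEtMm(tempC, humidityPct, windMs) {
  const tempFactor = Math.max(0, tempC - 5) * 0.15; // warmer days lose more water
  const humidityFactor = (100 - humidityPct) / 100; // drier air evaporates more
  const windFactor = 1 + windMs * 0.05;             // wind accelerates drying
  return +(tempFactor * humidityFactor * windFactor).toFixed(2);
}

// Sketch of the watering gate: the rain-skip threshold matches the
// ">5 mm expected" rule described above.
function shouldWater(forecastRainMm, soilMoistureEstimate, minMoisture) {
  if (forecastRainMm > 5) return false; // let the rain do the work
  return soilMoistureEstimate < minMoisture;
}
```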
by Theodoros Mastromanolis
Who is this for

Travel agencies, freelance travel planners, or anyone who wants to automate personalized trip planning by combining real-time hotel and flight data with AI-generated recommendations.

What this workflow does

- Collects travel details (airports, dates, travelers) through an n8n form
- Scrapes the top 5 hotels from Booking.com, sorted by review score, via Apify
- Scrapes the best available flights from Google Flights via Apify
- Generates restaurant, attraction, and day-by-day itinerary recommendations using OpenAI
- Merges all results into a formatted Google Doc and returns the link to the user

How to set up

1. Create an Apify account and add your API token as both an "Apify API" credential and an "HTTP Query Auth" credential (parameter name: token)
2. Add your OpenAI API key as an "OpenAI" credential
3. Connect your Google account via OAuth2 and update the folderId in the "Create Document" node to your Google Drive folder
4. Activate the workflow and share the form URL

Requirements

- Apify account with API token (for the Booking.com and Google Flights scrapers)
- OpenAI API key
- Google account with Docs and Drive access

How to customize

- Swap the form trigger for a webhook or chatbot input
- Change the output from Google Docs to email, Slack, or Notion
- Adjust the OpenAI prompt to focus on budget travel, luxury, or specific interests
by Rohit Dabra
🧩 Zoho CRM MCP Server Integration (n8n Workflow)

🧠 Overview

This n8n flow integrates Zoho CRM with an MCP (Model Context Protocol) Server and the OpenAI Chat Model, enabling AI-driven automation for CRM lead management. It allows an AI Agent to create, update, delete, and fetch leads in Zoho CRM through natural-language instructions.

▶️ Demo Video

Watch the full demo here: 👉 YouTube Demo Video

⚙️ Core Components

| Component | Purpose |
| --- | --- |
| MCP Server Trigger | Acts as the entry point for requests sent to the MCP Server (external systems or chat interfaces). |
| Zoho CRM Nodes | Handle CRUD operations for leads (create, update, delete, get, getAll). |
| AI Agent | Uses the OpenAI Chat Model and Memory to interpret and respond to incoming chat messages. |
| OpenAI Chat Model | Provides the LLM (Large Language Model) intelligence for the AI Agent. |
| Simple Memory | Stores short-term memory context for chat continuity. |
| MCP Client | Bridges communication between the AI Agent and the MCP Server for bi-directional message handling. |

🧭 Flow Description

1. Left Section (MCP Server + Zoho CRM Integration)

- **Trigger:** MCP Server Trigger — receives API requests or chat events.
- **Zoho CRM Actions:**
  - 🟢 Create a lead in Zoho CRM
  - 🔵 Update a lead in Zoho CRM
  - 🟣 Get a lead in Zoho CRM
  - 🟠 Get all leads in Zoho CRM
  - 🔴 Delete a lead in Zoho CRM

Each of these nodes connects to the Zoho CRM credentials and performs the respective operation on Zoho CRM’s “Leads” module.

2. Right Section (AI Agent + Chat Flow)

- **Trigger:** When chat message received — initiates the flow when a message arrives.
- **AI Agent Node** uses:
  - OpenAI Chat Model → for natural-language understanding and generation
  - Simple Memory → to maintain context between interactions
  - MCP Client → to call MCP actions (which include the Zoho CRM operations)
This creates a conversational interface allowing users to type things like:

> “Add a new lead named John Doe with email john@acme.com”

The AI agent interprets this and routes the request to the proper Zoho CRM action node automatically.

⚙️ Step-by-Step Configuration Guide

🧩 1. Import the Flow

1. In n8n, go to Workflows → Import.
2. Upload the JSON file of this workflow (or paste the JSON code).
3. Once imported, you’ll see the structure as in the image.

🔐 2. Configure Zoho CRM Credentials

You must connect the Zoho CRM API to n8n.

1. Go to Credentials → New → Zoho OAuth2 API. Follow Zoho’s official n8n documentation.
2. Provide the following:
   - Environment: Production
   - Data Center: e.g., zoho.in or zoho.com depending on your region
   - Client ID and Client Secret — from the Zoho API Console (https://api-console.zoho.com/)
   - Scope: ZohoCRM.modules.leads.ALL
   - Redirect URL: Use the callback URL shown in n8n (copy it before saving credentials)
3. Click Connect and complete the OAuth consent.

✅ Once authenticated, all Zoho CRM nodes (Create, Update, Delete, etc.) will be ready.

🔑 3. Configure OpenAI API Key

1. In n8n, go to Credentials → New → OpenAI API.
2. Enter your API key from https://platform.openai.com/account/api-keys and save the credentials.
3. In the AI Agent node, select this OpenAI credential under Model.

🧠 4. Configure the AI Agent

1. Open the AI Agent node.
2. Choose:
   - Chat Model: Select your configured OpenAI Chat Model.
   - Memory: Select Simple Memory.
   - Tools: Add MCP Client as the tool.
3. Configure the AI instructions (System Prompt) — for example:

> You are an AI assistant that helps manage leads in Zoho CRM. When the user asks to create, update, or delete a lead, use the appropriate tool. Provide confirmations in natural language.

🧩 5. Configure MCP Server

A. MCP Server Trigger

1. Open the MCP Server Trigger node.
2. Note down the endpoint URL — this acts as the API entry point for external requests.
3. It listens for incoming POST requests from your MCP client or chat interface.

B. MCP Client Node

1. In the AI Agent, link the MCP Client node.
2. Configure it to send requests back to your MCP Server endpoint (for two-way communication).

> 🔄 This enables a continuous conversation loop between external clients and the AI-powered CRM automation system.

🧪 6. Test the Flow

Once everything is connected:

1. Activate the workflow.
2. From your chat interface or Postman, send a message to the MCP Server endpoint:

{ "message": "Create a new lead named Alice Johnson with email alice@zoho.com" }

3. Observe: the AI Agent interprets the intent, calls the Zoho CRM Create Lead node, and returns a success message with the lead ID.

🧰 Example Use Cases

| User Query | Action Triggered |
| --- | --- |
| “Add John as a lead with phone number 9876543210” | Create lead in Zoho CRM |
| “Update John’s company to Acme Inc.” | Update lead in Zoho CRM |
| “Show me all leads from last week” | Get all leads |
| “Delete lead John Doe” | Delete lead |

🧱 Tech Stack Summary

| Layer | Technology |
| --- | --- |
| Automation Engine | n8n |
| AI Layer | OpenAI GPT Chat Model |
| CRM | Zoho CRM |
| Communication Protocol | MCP (Model Context Protocol) |
| Memory | Simple Memory |
| Trigger | HTTP-based MCP Server |

✅ Best Practices

- 🔄 Refresh tokens regularly — Zoho tokens expire; ensure auto-refresh is set up.
- 🧹 Use environment variables for API keys instead of hardcoding.
- 🧠 Fine-tune system prompts for better AI understanding.
- 📊 Enable logging for request/response tracking.
- 🔐 Restrict MCP Server access with an API key or JWT token.
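In this setup the LLM itself decides which CRM tool to invoke, but the routing it performs can be illustrated with a deliberately simple keyword stand-in. This is a toy sketch only; the real tool selection is done by the AI Agent's model, and the tool names here are assumptions:

```javascript
// Toy sketch: map a chat message onto one of the five Zoho CRM lead tools.
// In the actual workflow the LLM performs this selection; keyword matching
// is only an illustration of the routing being made.
function routeLeadIntent(message) {
  const m = message.toLowerCase();
  if (/\bdelete\b/.test(m)) return 'delete-lead';
  if (/\bupdate\b/.test(m)) return 'update-lead';
  if (/\b(all|show|list)\b.*\bleads\b/.test(m)) return 'get-all-leads';
  if (/\b(create|add|new)\b/.test(m)) return 'create-lead';
  return 'get-lead'; // default to a single-lead lookup
}
```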
by MUHAMMAD SHAHEER
Overview

This workflow helps you automatically collect verified business leads from Google Search using SerpAPI — no coding required. It extracts company names, websites, emails, and phone numbers directly from search results and saves them into Google Sheets for easy follow-up or CRM import.

Perfect for marketers, freelancers, and agencies who want real, usable leads fast — without manual scraping or paid databases.

How It Works

1. The SerpAPI node performs a Google search for your chosen keyword or niche.
2. The Split Out node separates each result for individual processing.
3. The HTTP Request node optionally visits each site for deeper data extraction.
4. The Code node filters, validates, and formats leads using smart parsing logic.
5. The Google Sheets node stores the final structured data automatically.

All steps include sticky notes with configuration help.

Setup Steps

Setup takes about 5–10 minutes:

1. Add your SerpAPI key (replace the placeholder).
2. Connect your Google Sheets account.
3. Update the search term (e.g., “Plumbers in New York”).
4. Run the workflow and watch leads populate your sheet in real time.
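The "smart parsing logic" in the Code node boils down to pulling contact details out of page text and dropping results that have none. A minimal sketch, assuming hypothetical inputs (company name, URL, fetched page text); the regexes are illustrative, not the template's exact patterns:

```javascript
// Sketch: extract an email and a phone-like number from page text and
// discard rows with neither, in the spirit of the Code node described above.
function extractLead(name, url, pageText) {
  const email = (pageText.match(/[\w.+-]+@[\w-]+\.[\w.]+/) || [null])[0];
  const phone = (pageText.match(/\+?\d[\d\s().-]{7,}\d/) || [null])[0];
  if (!email && !phone) return null; // not a usable lead
  return { company: name, website: url, email, phone };
}
```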
by Madame AI
Real-Time MAP Enforcement & Price Violation Alerts using BrowserAct & Slack

This n8n template automates MAP (Minimum Advertised Price) enforcement by monitoring reseller websites and alerting you instantly to price violations and stock issues.

This workflow is essential for brand owners, manufacturers, and compliance teams who need to proactively monitor their distribution channels and enforce pricing policies.

How it works

1. The workflow runs on a Schedule Trigger (e.g., hourly) to continuously monitor product prices.
2. A Google Sheets node fetches your list of resellers, product URLs, and the official MAP price (AP_Price).
3. The Loop Over Items node ensures that each reseller's product is checked individually.
4. A pair of BrowserAct nodes navigate to the reseller's product page and reliably scrape the current live price.
5. A series of If nodes check for violations:
   - The first check (If1) looks for "NoData," signaling that the product is out of stock, and sends a specific Slack alert.
   - The second check (If) compares the scraped price to your MAP price, triggering a detailed Slack alert if a MAP violation is found.
6. The workflow loops back to check the next reseller on the list.

Requirements

- **BrowserAct** API account for web scraping
- **BrowserAct** "MAP (Minimum Advertised Price) Violation Alerts" template
- **BrowserAct** n8n Community Node → (n8n Nodes BrowserAct)
- **Google Sheets** credentials for your price list
- **Slack** credentials for sending alerts

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase

I Built a Bot to Catch MAP Violators (n8n + BrowserAct Workflow)
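The two If checks can be condensed into one classification step: stock status first, then the MAP comparison. A minimal sketch, assuming the scraper returns either the "NoData" sentinel mentioned above or a price string; the parsing details are assumptions:

```javascript
// Sketch of the two If checks: out-of-stock detection via the "NoData"
// sentinel, then a MAP comparison on the parsed price.
function classify(scraped, mapPrice) {
  if (scraped === 'NoData') return { alert: 'out-of-stock' };
  const price = parseFloat(String(scraped).replace(/[^0-9.]/g, '')); // strip currency symbols
  if (Number.isNaN(price)) return { alert: 'parse-error' };
  if (price < mapPrice) return { alert: 'map-violation', price, mapPrice };
  return { alert: null, price }; // compliant listing, no Slack message needed
}
```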
by Madame AI
Automated E-commerce Store Monitoring for New Products Using BrowserAct

This n8n template is an advanced competitive-intelligence tool that automatically monitors competitor e-commerce/Shopify stores and alerts you the moment they launch a new product.

This workflow is essential for e-commerce store owners, product strategists, and marketing teams who need real-time insight into what their competitors are selling.

Self-Hosted Only

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works

1. The workflow runs on a Schedule Trigger to check for new products automatically (e.g., daily).
2. A Google Sheets node fetches your master list of competitor store links from a central sheet.
3. The workflow loops through each competitor one by one.
4. For each competitor, a Google Sheets node first creates a dedicated tracking sheet (if one doesn't exist) to store their product-list history.
5. A BrowserAct node then scrapes the competitor's current product list from their live website.
6. The scraped data is saved to the competitor's dedicated tracking sheet.
7. The workflow then fetches the newly scraped list and the previously stored list of products.
8. A custom Code node (labeled "Compare Datas") performs a difference check to reliably detect whether any new products have been added.
9. If a new product is detected, an If node triggers an immediate Slack alert to your team, providing real-time competitive insight.

Requirements

- **BrowserAct** API account for web scraping
- **BrowserAct** "Competitors Shopify Website New Product Monitor" template
- **BrowserAct** n8n Community Node → (n8n Nodes BrowserAct)
- **Google Sheets** credentials for storing and managing data
- **Slack** credentials for sending alerts

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase

Automatically Track Competitor Products | n8n & Google Sheets Template
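The difference check at the heart of the "Compare Datas" node is a set comparison between the stored list and the fresh scrape. A minimal sketch, assuming products are keyed by URL (the actual key field is an assumption):

```javascript
// Sketch of the "Compare Datas" step: anything in the current scrape whose
// key is absent from the previously stored list counts as a new product.
function findNewProducts(previous, current) {
  const known = new Set(previous.map((p) => p.url)); // keying by URL is an assumption
  return current.filter((p) => !known.has(p.url));
}
```

If this returns a non-empty array, the downstream If node fires the Slack alert with the new items.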
by Pramod Kumar Rathoure
A RAG Chatbot with n8n and Pinecone Vector Database

Retrieval-Augmented Generation (RAG) allows Large Language Models (LLMs) to provide context-aware answers by retrieving information from an external vector database. In this post, we’ll walk through a complete n8n workflow that builds a chatbot capable of answering company-policy questions using the Pinecone Vector Database and OpenAI models.

Our setup has two main parts:

1. Data Loading to RAG – documents (company policies) are ingested from Google Drive, processed, embedded, and stored in Pinecone.
2. Data Retrieval using RAG – user queries are routed through an AI Agent that uses Pinecone to retrieve relevant information and generate precise answers.

1. Data Loading to RAG

This workflow section handles document ingestion. Whenever a new policy file is uploaded to Google Drive, it is automatically processed and indexed in Pinecone.

Nodes involved:

- **Google Drive Trigger**: Watches a specific folder in Google Drive. Any new or updated file triggers the workflow.
- **Google Drive (Download)**: Fetches the file (e.g., a PDF policy document) from Google Drive for processing.
- **Recursive Character Text Splitter**: Splits long documents into smaller chunks (with a defined overlap). This ensures embeddings remain context-rich and retrieval works effectively.
- **Default Data Loader**: Reads the binary document (PDF in this setup) and extracts the text.
- **OpenAI Embeddings**: Generates high-dimensional vector representations of each text chunk using OpenAI’s embedding models.
- **Pinecone Vector Store (Insert Mode)**: Stores the embeddings in a Pinecone index (n8ntest), under a chosen namespace. This step makes the policy data searchable by semantic similarity.

👉 Example flow: When HR uploads a new Work From Home Policy PDF to Google Drive, it is automatically split, embedded, and indexed in Pinecone.

2. Data Retrieval using RAG

Once documents are loaded into Pinecone, the chatbot is ready to handle user queries.
This section of the workflow connects the chat interface, AI Agent, and retrieval pipeline.

Nodes involved:

- **When Chat Message Received**: Acts as the webhook entry point when a user sends a question to the chatbot.
- **AI Agent**: The core reasoning engine. It is configured with a system message instructing it to use only Pinecone-backed knowledge when answering.
- **Simple Memory**: Keeps track of the conversation context, so the bot can handle multi-turn queries.
- **Vector Store QnA Tool**: Queries Pinecone for the chunks most relevant to the user’s question. In this workflow, it is configured to fetch company policy documents.
- **Pinecone Vector Store (Query Mode)**: Acts as the connection to Pinecone, fetching embeddings that best match the query.
- **OpenAI Chat Model**: Refines the retrieved chunks into a natural and concise answer. The model ensures answers remain grounded in the source material.
- **Calculator Tool**: Optional helper if the query involves numerical reasoning (e.g., leave calculations or benefit amounts).

👉 Example flow: A user asks, “How many work-from-home days are allowed per month?” The AI Agent queries Pinecone through the Vector Store QnA tool, retrieves the relevant section of the HR policy, and returns a concise answer grounded in the actual document.

Wrapping Up

By combining n8n automation, Pinecone for vector storage, and OpenAI for embeddings + LLM reasoning, we’ve created a self-updating RAG chatbot.

- The **Data Loading pipeline** ensures that every new company-policy document uploaded to Google Drive is immediately available for semantic search.
- The **Data Retrieval pipeline** allows employees to ask natural-language questions and get document-backed answers.

This setup can easily be adapted for other domains — compliance manuals, tax regulations, legal contracts, or even product documentation.
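The chunking-with-overlap idea behind the Recursive Character Text Splitter can be shown in a few lines. This is a simplified fixed-size sketch, not the node's actual recursive separator logic, and the sizes are illustrative defaults rather than the workflow's settings:

```javascript
// Simplified sketch of chunking with overlap: each chunk repeats the tail of
// the previous one so embeddings keep surrounding context at chunk boundaries.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

The overlap is the reason a sentence that straddles a chunk boundary still appears whole in at least one embedded chunk.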
by Ziana Mitchell
Workflow Introduction

This workflow is specifically for self-hosted n8n Docker users. Its purpose is to:

1. Take an inventory of all workflows with credentials (Workflow → Credential 1, Credential 2, etc.), connecting credential names to workflow names
2. Create a credentials impact map (Credential → Workflow 1, Workflow 2, etc.), connecting workflow names to credential IDs and names

This workflow drastically reduces the time spent figuring out which workflows will break if a credential is updated or deleted, by bridging the gap between the public API (workflow metadata) and the internal SQLite database (hidden version labels and credential mappings). In essence, this tool audits your n8n environment without touching your sensitive encrypted data, keeping your API keys 100% secure.

It features dynamic time-saved trackers, based on my estimate of how long it would take to manually do everything this workflow completes in less than 5 minutes.

Limitations

This workflow is optimized for the standard n8n execution mode. Due to the high-frequency data lookups required for this tool, it is not compatible with Task Runners (N8N_RUNNERS_ENABLED=true). For maximum stability and to prevent "Database Locked" errors, please ensure your instance is running in the default configuration.

If you use Method 1 (Direct Insertion) in the API Key Instructions below, your n8n API key will not appear as connected to this workflow in the resulting Credential Impact Map. The same goes for all other credentials directly inserted into an HTTP Request node across your n8n environment.

Setting it up - Requirements

Environment Variables Required: These environment variables are required for the version_labels and flow_names Code nodes to work.
NODE_FUNCTION_ALLOW_BUILTIN=*
NODE_FUNCTION_ALLOW_EXTERNAL=sqlite3

Note: If you need access to multiple external functions across your n8n instance, add them as a comma-separated string like this: NODE_FUNCTION_ALLOW_EXTERNAL=sqlite3, better-sqlite3

Credentials Needed:

- n8n account → n8n self-hosted API key
- Google Sheets account
- Google Drive account

The nodes that need manually inputted credentials or node parameter information are: Create folder, flowsList, credsList, and finalFlowEdits.

Note: If you create your credentials before opening any process nodes, your credentials will be auto-connected to the associated nodes. However, this will not happen for the n8n self-hosted API keys. If you have multiple Google Sheets or Google Drive accounts, you will need to point the associated nodes to the specific one you want to use.

API Key Instructions

How to get: Settings → API → Create API Key → Name it → Copy it. Copy the API key and save it somewhere secure. You won't be able to see it again!

How to use (two methods):

Note: The node is set up for Method 1 by default.

Method 1 (Direct Insertion). In the flowsList and credsList nodes:

- Authentication: None
- Send Headers: on
- Name: X-N8N-API-KEY
- Value: <YOUR N8N API KEY>

Method 2 (n8n Credential). In the flowsList and credsList nodes:

- Authentication: Generic Credential Type
- Generic Auth Type: Header Auth
- Header Auth: Select "Create new credential" (if you already have a Header Auth for your n8n API key, select it here)

2a. In the credential creation area:

- Name: X-N8N-API-KEY
- Value: <YOUR N8N API KEY>

Queries and Body aren't needed.

Endpoints:

- flowsList: http://localhost:5678/api/v1/workflows or https://<your webhook domain>/api/v1/workflows
- credsList: http://localhost:5678/api/v1/credentials or https://<your webhook domain>/api/v1/credentials

The Workflow Processes

Global Actions

1. Executes on a set schedule (weekly on Fridays at 2 pm).
1a. A deactivated execute-on-command node is included for testing, or for when updated files are needed before the scheduled date/time.
2. Creates a Drive folder, named with the current date and time, to house the resulting spreadsheets.
2a. This workflow creates the folders inside a specified parent folder.
2b. This node also sets the folder color. You are welcome to change this to your preferred color.

Branch 1: Workflow Credentials Inventory Process

1. Creates a Google spreadsheet with the current date in the name.
2. Gets the list of all workflows from the public API.
3. Splits the list into individual items for easier processing.
4. Gets the version labels for active workflows from the internal SQLite database.
5. Sets fields for the workflow's name, ID, published status, latest published version label, and the current version's ID (whether the workflow is published or not).
6. Filters out the workflows that don't have any nodes with credentials.
7. Splits out the list of nodes for easier processing.
8. Filters out the nodes that don't have associated credentials.
9. Sets fields for the node names and credential names.
10. Summarizes the output from the Nodes editor, based on workflows.
10a. Appends nodeNames.
10b. Appends credentials.
10c. Counts the nodes.
11. Cleans up the summarized information, converting it back to a recognizable, clean JSON format.
12. Sets fields for spreadsheet aesthetics. This step can be skipped; however, I didn't like the visual look of going directly from CleanJson to credInventory.
13. Appends the header row and data rows to the spreadsheet created in Step 1.
14. Moves the file into the folder created at the start of the workflow, from wherever Google put it by default.
15. Tracks the estimated time it would have taken to do every step manually, in a time-saved node.

Branch 2: Credential Impact Map Process

1. Creates a spreadsheet with the current date in the name.
2. Gets a list of all credentials from the public API.
3. Splits the list into individual items for easier processing.
4. Sorts the credentials by name in ascending order (A→Z) in case they come out of the Split Out node out of order.
5. Gets a list of all workflows with nodes associated with each credential ID from the internal SQLite database.
6. Sets fields for the credential name, ID, type, created and updated timestamps, and the list of workflow names.
7. Appends the header row and data rows to the spreadsheet created in Step 1 of this branch.
8. Moves the file into the folder created at the start of the workflow, from wherever Google put it by default.
9. Tracks the estimated time it would have taken to do every step manually, in a time-saved node.
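Conceptually, Branch 2 inverts Branch 1's workflow→credentials rows into a credential→workflows map. A minimal sketch of that pivot, assuming the rows have already been read out of the SQLite database into plain objects (the row shape here is an assumption, not the database schema):

```javascript
// Sketch: invert workflow -> credentials rows into the credential -> workflows
// impact map that Branch 2 writes to its spreadsheet.
function buildImpactMap(rows) {
  // rows: [{ workflow: 'Name', credentials: ['Cred A', 'Cred B'] }, ...]
  const map = {};
  for (const { workflow, credentials } of rows) {
    for (const cred of credentials) {
      (map[cred] ??= []).push(workflow); // start the list on first sighting
    }
  }
  return map;
}
```

Each key of the resulting object answers the core question directly: "if I delete or rotate this credential, which workflows break?"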