by Abdullahi Ahmed
Title: RAG AI Agent for Documents in Google Drive → Pinecone → OpenAI Chat (n8n workflow)

Short Description
This n8n workflow implements a Retrieval-Augmented Generation (RAG) pipeline plus an AI agent, allowing users to drop documents into a Google Drive folder and then ask questions about them via a chatbot. New files are indexed automatically into a Pinecone vector store using OpenAI embeddings; the AI agent loads relevant chunks at query time and answers using context plus memory.

Why this workflow matters / what problem it solves
Large language models (LLMs) are powerful, but they lack up-to-date, domain-specific knowledge. RAG augments the LLM with relevant external documents, reducing hallucination and enabling precise answers. (Pinecone)
This workflow automates the ingestion, embedding, storage, retrieval, and chat logic with minimal manual work. It's modular: you can swap data sources, vector DBs, or LLMs (with some adjustments). It leverages the built-in AI Agent node in n8n to tie all the parts together. (n8n)

How to get the required credentials

| Service | Purpose in Workflow | Setup Link | What you need / steps |
| --- | --- | --- | --- |
| Google Drive (OAuth2) | Trigger on new file events & download the file | https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/ | Create a Google Cloud OAuth app, grant it Drive scopes, get the client ID & secret, configure the redirect URI, and paste them into n8n credentials. |
| Pinecone | Vector database for embeddings | https://docs.n8n.io/integrations/builtin/credentials/pinecone/ | Sign up at Pinecone, create an index in the dashboard, get the API key + environment, and paste them into an n8n credential. |
| OpenAI | Embeddings + chat model | https://docs.n8n.io/integrations/builtin/credentials/openai/ | Log in to OpenAI, generate a secret API key, and paste it into n8n credentials. |

You'll configure these under n8n → Credentials → New Credential, matching the credential names referenced in the workflow nodes.

Detailed Walkthrough: How the Workflow Works
Here is a step-by-step of what happens inside the workflow (matching the workflow JSON):

1. Google Drive Trigger
Watches a specified folder in Google Drive. Whenever a new file appears (fileCreated event), the workflow is triggered (polling every minute). You must set the folder ID (in "folderToWatch") to the Drive folder you want to monitor.

2. Download File
Takes the file ID from the trigger and downloads the file content (binary).

3. Indexing Path: Embeddings + Storage
(This path only runs when new files arrive.)
The file is sent to the Default Data Loader node (via the Recursive Character Text Splitter) to break it into chunks with overlap, so context is preserved across chunk boundaries. Each chunk is fed into Embeddings OpenAI to convert the text into embedding vectors. Then Pinecone Vector Store (insert mode) ingests the vectors plus text metadata into your Pinecone index. This keeps your vector store up to date with the files you drop into Drive. The sketch below illustrates the chunking-with-overlap idea.
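To make the splitting step concrete, here is a minimal sketch of overlapping character chunking. It is illustrative only: the chunk size and overlap values are assumptions, and the actual Recursive Character Text Splitter node is smarter (it prefers paragraph, sentence, and word boundaries before falling back to raw characters).

```typescript
// Minimal sketch of character chunking with overlap (illustrative only).
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push(text.slice(start, end));
    if (end === text.length) break;
    start = end - overlap; // step back so adjacent chunks share context
  }
  return chunks;
}

// Each chunk would then be embedded and upserted into Pinecone.
const chunks = chunkText("…full document text…");
console.log(`Produced ${chunks.length} chunks`);
```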
4. Chat / Query Path
(Triggered by user chat via webhook.)
When a chat message arrives via When Chat Message Received, it is passed to the AI Agent node. Before generation, the AI Agent calls Pinecone Vector Store1, set to "retrieve-as-tool" mode, which runs a vector-based retrieval using the embedding of the user query. The relevant text chunks are pulled in as tool output / context. The OpenAI Chat Model node is linked as the language model for the agent, and the Simple Memory node provides conversational memory (keeping history across messages). The agent combines the retrieved context, the memory, and the user input and instructs the model to produce a response.

5. Connections / Flow Logic
The Embeddings OpenAI node's output is wired into Pinecone Vector Store (insert) and also into Pinecone Vector Store1, so the same embedding model is used for both insertion and retrieval. The AI Agent has tool access to Pinecone retrieval and memory. The Download File node triggers the insert path; the When Chat Message Received node triggers the agent path.

Similar Workflows / Inspirations & Comparisons
To see how this workflow fits into what is already out there, here are a few analogues:

- n8n Blog: "Build a custom knowledge RAG chatbot": a workflow that ingests documents from external sources, indexes them in Pinecone, and responds to queries via n8n + an LLM. (n8n Blog)
- Index Documents from Google Drive to Pinecone: nearly identical for the ingestion part: trigger on Drive, split, embed, upload. (n8n)
- Build & Query RAG System with Google Drive, OpenAI, Pinecone: shows the full RAG + chat logic, same pattern. (n8n)
- Chat with GitHub API Documentation (RAG): demonstrates converting an API spec into chunks, embedding, retrieving, and chatting. (n8n)
- Community tutorials & forums discuss using the AI Agent node with tools like Pinecone, and how the RAG part is often built as a sub-workflow feeding an agent. (n8n Community)

What sets this workflow apart is the explicit combination: Google Drive → automatic ingestion → chat agent with tool integration + memory. Many templates show either ingestion or chat, but fewer combine them cleanly with n8n's AI Agent.

Suggested Published Description (you can paste/adjust)

> RAG AI Agent for Google Drive Documents (n8n workflow)
>
> This workflow turns a Google Drive folder into a live, queryable knowledge base. Drop PDF, DOCX, or text files into the folder → new documents are automatically indexed into a Pinecone vector store using OpenAI embeddings → you ask questions via a webhook chat interface and the AI agent retrieves relevant text, combines it with memory, and answers in context.
>
> Credentials needed
>
> * Google Drive OAuth2 (see: https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/)
> * Pinecone (see: https://docs.n8n.io/integrations/builtin/credentials/pinecone/)
> * OpenAI (see: https://docs.n8n.io/integrations/builtin/credentials/openai/)
>
> How it works
>
> 1. Drive trigger picks up new files
> 2. Download, split, embed, insert into Pinecone
> 3. Chat webhook triggers the AI Agent
> 4. Agent retrieves relevant chunks + memory
> 5. Agent uses the OpenAI model to craft an answer
>
> This is built on the core RAG pattern (ingest → retrieve → generate) and enhanced by n8n's AI Agent node for clean tool integration.
>
> Inspiration & context
> This approach follows best practices from existing n8n RAG tutorials and templates, such as the "Index Documents from Google Drive to Pinecone" ingestion workflow and the "Build & Query RAG System" templates. (n8n)
>
> You're free to swap out the data source (e.g. Dropbox, S3) or the vector DB (e.g. Qdrant) as long as you adjust the relevant nodes.
References
[1] "Retrieval-Augmented Generation (RAG)" (Pinecone): https://www.pinecone.io/learn/retrieval-augmented-generation/
[2] "AI Agent integrations | Workflow automation with n8n": https://n8n.io/integrations/agent/
[3] "Build a Custom Knowledge RAG Chatbot using n8n" (n8n Blog): https://blog.n8n.io/rag-chatbot/
[4] "Index Documents from Google Drive to Pinecone with OpenAI Embeddings for RAG" (n8n): https://n8n.io/workflows/4552-index-documents-from-google-drive-to-pinecone-with-openai-embeddings-for-rag/
[5] "Build & Query RAG System with Google Drive, OpenAI GPT-4o-mini and Pinecone" (n8n): https://n8n.io/workflows/4501-build-and-query-rag-system-with-google-drive-openai-gpt-4o-mini-and-pinecone/
[6] "Chat with GitHub API Documentation: RAG-Powered Chatbot with Pinecone and OpenAI" (n8n): https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/
by Fady Bekkar
This automation allows you to track feature requests in Notion, create GitHub issues automatically, and notify your team via email based on issue status. It's ideal for technical and functional teams who collaborate on project delivery using Notion and GitHub.

🔹 SECTION 1: Detect and Sort Issues from Notion
Combining: Schedule Trigger + Notion Database + Field Mapping + Status Routing

⏰ 1. Schedule Trigger
🔧 Node Type: Schedule Trigger (you can use a webhook trigger if you are on a Notion paid plan)
💬 Description: Triggers the workflow every X minutes to check for new or updated Notion database pages.

📑 2. Get Many Database Pages (Notion)
🔧 Node Type: Notion → Get All Database Pages
📋 What it does: Fetches all rows (pages) from a Notion database that represents tasks or feature requests.

✏️ 3. Sort Issues Fields
🔧 Node Type: Set
📋 Goal: Restructures or cleans data fields such as Title, Status, Labels, and Repository.

🔀 4. Switch: Issue Status Decision
🔧 Node Type: Switch
🎯 What it does: Separates logic based on the Status of the Notion item (see the routing sketch after this node list):
- If status is "To develop" → proceed to create an issue
- Else → send a notification to the team

🔹 SECTION 2: GitHub Issue Creation (IF "To develop")
Combining: GitHub Node + Notion Update

🐙 5. Create an Issue (GitHub)
🔧 Node Type: GitHub → Create Issue
⚙️ What it does: Creates a new issue in the GitHub repo defined in the Notion row.
📥 Inputs: Uses dynamic fields: Title, Description, Labels, Repository.

🧩 6. Set Status and Issue URL (Notion Update)
🔧 Node Type: Notion → Update Database Page
🧠 Role: Updates the status of the issue in Notion to "In progress" and stores the created GitHub Issue URL.

🔹 SECTION 3: Notify Team on Already In-Progress Items (IF NOT "To develop")
Combining: Notion Users + Filtering + Email Grouping + Gmail

👥 7. Get Many Users (Notion Users)
🔧 Node Type: Notion → Get All Users
📥 What it does: Retrieves the list of team members to be notified.

🧠 8. Map Notion Users
🔧 Node Type: Set
📋 Role: Maps and formats data for each user (e.g., Name, Email, Role).

🧹 9. Exclude Bot
🔧 Node Type: Switch
🚫 What it does: Excludes automation/bot users (e.g., notifications@noreply).

🧮 10. Group Recipients
🔧 Node Type: Aggregate
🎯 Goal: Collects all user emails into a single array to send one email to all recipients.

📬 11. Send a Message (Gmail)
🔧 Node Type: Gmail → Send Email
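For reference, here is a minimal sketch of the routing and recipient-grouping logic described above, written as it might look in an n8n Code node. The field names ("status", "email", "isBot") are assumptions; match them to your Notion schema and Set-node output.

```typescript
// Hypothetical Code-node equivalent of the Switch + Aggregate steps.
interface NotionRow { title: string; status: string; repository: string; }
interface NotionUser { name: string; email: string; isBot: boolean; }

// Section 1, step 4: route based on the Notion status field.
function route(row: NotionRow): "create-issue" | "notify-team" {
  return row.status === "To develop" ? "create-issue" : "notify-team";
}

// Section 3, steps 9-10: drop bots, then collapse addresses into one string
// so a single Gmail node can send one email to everyone.
function groupRecipients(users: NotionUser[]): string {
  return users.filter(u => !u.isBot).map(u => u.email).join(",");
}
```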
by Kurt Bijl
🎬 Social Media Content Generator Workflow

Overview
Automated social media content creation from video transcripts.

🎯 Trigger: Airtable Webhook
- **Action**: Receives a webhook from an Airtable automation
- **Data**: RecordId and action type (e.g., "post-ig")
- **Purpose**: Starts the content generation pipeline

📊 Step 1: Fetch Record
- **Node**: Airtable (Get Record)
- **Action**: Retrieves the full record data using the RecordId
- **Data**: Name, transcript, and other fields

📁 Step 2: Create Google Drive Folder
- **Node**: Google Drive (Create Folder)
- **Action**: Creates a blue folder in the /tutorials directory
- **Name**: Uses the record's Name field
- **Updates**: Stores the folder ID back to Airtable

🤖 Step 3: AI Content Analysis
- **Node**: AI Agent with Google Gemini 2.5 Flash
- **Input**: Video transcript from Airtable
- **Structured Output**: JSON with all social formats (a sketch of this shape follows the step list):
  - YouTube title & description
  - YouTube thumbnail text
  - Twitter thread (array)
  - LinkedIn post
  - Instagram caption
  - TikTok caption
  - YouTube Shorts caption
  - Relevant tags

💾 Step 4: Save Transcript File
- **Node**: Google Drive (Create from Text)
- **Action**: Saves the transcript as a text file
- **Location**: Inside the created folder
- **Name**: Uses the record's Name field

📋 Step 5: Update Airtable Results
- **Node**: Airtable (Update Record)
- **Data**: All AI-generated social media content
- **Special**: Twitter thread array joined with newlines

🎯 Result: A complete social media content suite ready for multi-platform publishing, organized in Google Drive with all data stored in Airtable.
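As referenced in Step 3, here is an illustrative shape for the AI Agent's structured output. The property names are assumptions for the sketch; they should match whatever schema your Structured Output Parser actually defines.

```typescript
// Illustrative structured-output shape (property names are assumptions).
interface SocialContent {
  youtubeTitle: string;
  youtubeDescription: string;
  thumbnailText: string;
  twitterThread: string[];   // joined with newlines before writing to Airtable
  linkedinPost: string;
  instagramCaption: string;
  tiktokCaption: string;
  shortsCaption: string;
  tags: string[];
}

// The "Special" step in Step 5: flatten the thread for a single Airtable cell.
const toAirtableCell = (c: SocialContent): string => c.twitterThread.join("\n");
```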
by keisha kalra
Try It Out!
This n8n template helps you create SEO-optimized blog posts for your business's website or for personal use. Whether you're managing a business or helping local restaurants improve their digital presence, this workflow helps you build SEO-optimized blog posts in seconds using Google Autocomplete and People Also Ask (SerpAPI).

Who Is It For?
This is helpful for people looking to SEO-optimize either another person's website or their own.

How It Works
- You start with a list of blog inspirations in Google Sheets (e.g., "Best Photo Session Spots").
- The workflow only processes rows where the "Status" column is not marked as "done", though you can remove this condition if you'd like to process all rows each time.
- The workflow pulls Google Autocomplete suggestions and PAA questions using:
  - a custom-built SEO API I deployed via Render (for Google Autocomplete + PAA),
  - SerpAPI (for additional PAA data).
- These search insights are merged. For example, if your blog idea is "Photo Session Spots," the workflow gathers related Google search phrases and questions users are asking.
- Then GPT-4 is used to draft a full blog post based on this data.
- The finished post is saved back into your Google Sheet.

How To Use
- Fill out the "Blog Inspiration" column in your Google Sheet with the topics you want to write about.
- Update the OpenAI prompt in the ChatGPT node to match your tone or writing style. (Tip: add a system prompt with context about your business or audience.)
- You can trigger this manually, or replace the trigger with a cron schedule, webhook, or other event.

Requirements
- A SerpAPI account to fetch People Also Ask (PAA) data
- An OpenAI account for ChatGPT
- Access to Google Sheets and n8n

How To Set Up
Your Google Sheet should have three columns:
- "Blog Inspiration"
- "Status" → set this to "done" when a post has been generated
- "Blog Draft" → this is automatically filled by the workflow

To use the SerpAPI HTTP Request node:
1. Drag in an HTTP Request node.
2. Set the Method and URL depending on how you're using SerpAPI: use POST to run the search live on each request, or GET to fetch from a static dataset (cheaper if reusing the same data).
3. Add query parameters for your SerpAPI key and input values.
4. Test the node.
Refer to this n8n documentation for more help: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/

The "Autocomplete" node connects to a custom web service I built and deployed using Render. I wrote a Python script (hosted on GitHub) that pulls live Google Autocomplete suggestions and PAA questions based on any topic you send. This script was turned into an API and deployed as a public web service via Render. Anyone can use it by sending a POST request to https://seo-api2.onrender.com/get-seo-data (the URL is already in the node); a minimal request sketch appears at the end of this description. Since this is hosted on Render's free tier, if the service hasn't been used in the past ~15 minutes, it may "go to sleep." When that happens, the first request can take 10-30 seconds to respond while it wakes up.

Happy Automating!
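As referenced above, here is a minimal sketch of calling the Render-hosted Autocomplete/PAA service from code. The request body field name ("topic") and the response shape are assumptions; check the JSON body configured in the "Autocomplete" HTTP Request node for the actual contract.

```typescript
// Minimal sketch of calling the custom SEO API (body field name is assumed).
async function getSeoData(topic: string): Promise<unknown> {
  const res = await fetch("https://seo-api2.onrender.com/get-seo-data", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ topic }),
  });
  if (!res.ok) throw new Error(`SEO API returned ${res.status}`);
  // Expected to contain autocomplete suggestions + People Also Ask questions.
  return res.json();
}

// Note: on Render's free tier, the first call after ~15 idle minutes may take
// 10-30 seconds while the service wakes up.
```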
by Harshil Agrawal
This workflow appends, looks up, updates, and reads data from a Google Sheets spreadsheet.

Set node: The Set node is used to generate the data that we want to add to Google Sheets. Depending on your use case, the data might come from a different source. For example, you might be fetching data from a webhook call. Add the node that fetches the data you want to add to the Google Sheet, and then use the Set node to shape the data you want to add to Google Sheets.

Google Sheets node: This node adds the data from the Set node as a new row to the Google Sheet. You will have to enter the Spreadsheet ID and the Range to specify which sheet you want to add the data to.

Google Sheets1 node: This node looks for a specific value in the Google Sheet and returns all the rows that contain that value. In this example, we are looking for the value Berlin. If you want to look for a different value, enter it in the Lookup Value field and specify the column in the Lookup Column field.

Set1 node: This Set node adjusts the rent by $100 for the houses in Berlin. We pass this new data to the next nodes in the workflow. (A small sketch of this step appears after this description.)

Google Sheets2 node: This node updates the rent for the houses in Berlin with the new rent set in the previous node. We map the rows by their ID. Depending on your use case, you might want to map the values with a different column. To set this, enter the column name in the Key field.

Google Sheets3 node: This node returns the information from the Google Sheet. You can specify the columns that should get returned in the Range field. Currently, the node fetches the data for columns A to D. To fetch the data only for columns A to C, set the range to A:C.

This workflow can be broken down into different workflows, each with its own use case. For example, we could have one workflow that appends new data to a Google Sheet and another that looks up a certain value and returns it. You can learn how to build this workflow on the documentation page of the Google Sheets node.
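For illustration, here is a sketch of what the Set1 step does to the looked-up rows before they are written back. The column names and the assumption that the $100 change is an increase are mine; adjust them to match your sheet.

```typescript
// Hypothetical equivalent of the Set1 node: take the rows returned by the
// lookup (houses in Berlin) and adjust the rent before the update node writes
// them back, matched on the ID column.
interface HouseRow { id: number; city: string; rent: number; }

function adjustRent(rows: HouseRow[], delta = 100): HouseRow[] {
  // Assumed to be an increase; flip the sign if your use case differs.
  return rows.map(row => ({ ...row, rent: row.rent + delta }));
}
```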
by CustomJS
This n8n workflow illustrates how to convert PDF files into text with the PDF Toolkit from www.customjs.space (@custom-js/n8n-nodes-pdf-toolkit).

Notice
Community nodes can only be installed on self-hosted instances of n8n.

What this workflow does
- **Converts** the requested HTML to a PDF.
- **Extracts** text from the PDF.
- **Uses** a Code node to handle URLs that point to PDF files (a small sketch of this check appears below).
- **Converts** the PDF to text.

Requirements
- **Self-hosted** n8n instance.
- **CustomJS API key** for converting PDF to text.
- **HTML data** to convert into PDF files.
- **Code node** for handling URLs that point to PDF files.

Workflow Steps
1. Manual Trigger: runs on user interaction.
2. HTML to PDF: request the HTML data and convert it to a PDF.
3. Convert PDF to Text: extract the text from the generated PDF.

Usage
Get an API key from CustomJS:
1. Sign up on the CustomJS platform.
2. Navigate to your profile page.
3. Press the "Show" button to get the API key.

Set credentials for the CustomJS API in n8n: copy and paste the API key generated from CustomJS.

Design the workflow:
- A Manual Trigger for starting the workflow.
- HTTP Request nodes for downloading PDF files.
- A Code node for handling URLs that point to PDF files.
- Convert PDF to Text.

You can replace the logic for triggering and returning results. For example, you can trigger this workflow by calling a webhook and get the result as a response from the webhook. Simply replace the Manual Trigger and Write to Disk nodes.
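As referenced above, here is a minimal sketch of the kind of check the Code node might perform to decide whether an incoming URL points to a PDF. It is an assumption about the node's logic, not the template's exact code.

```typescript
// Hypothetical URL check for the Code-node step: detect a PDF by file
// extension first, then fall back to the Content-Type header.
async function isPdfUrl(url: string): Promise<boolean> {
  if (url.toLowerCase().endsWith(".pdf")) return true;
  const res = await fetch(url, { method: "HEAD" });
  const type = res.headers.get("content-type") ?? "";
  return type.includes("application/pdf");
}
```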
by Kalyxi Ai
🚀 Automate News Discovery & Publishing with GPT-4, Google Search API & Slack

🎯 Overview
Automated content publishing system that discovers industry news, transforms it into original articles using GPT-4, and publishes across multiple channels with SEO optimization and intelligent duplicate prevention.

✨ Key Features
- 🤖 **Smart Query Generation** - AI agent generates unique search queries while checking Google Sheets to avoid duplicates
- 🔍 **News Discovery** - Uses Google Custom Search API to find recent articles (last 7 days)
- 🧠 **Content Intelligence** - Processes search results and skips anti-bot protected sites automatically
- 📝 **GPT-4 Article Generation** - Creates professional, SEO-optimized news articles in Reuters/Bloomberg style
- 📢 **Multi-Channel Publishing** - Publishes to CMS with automatic Slack notifications
- 📊 **Comprehensive Tracking** - Logs all activity to Google Sheets for analytics and duplicate prevention

🔄 How It Works
1. ⏰ Scheduled Trigger runs every 8 hours to maintain consistent content flow
2. 🤖 AI Agent generates targeted search queries for your niche while checking historical data
3. 🔍 Google Search finds recent articles and extracts metadata (title, snippet, source)
4. 🛡️ Smart Content Handler bypasses sites with anti-bot protection, using search snippets instead
5. ⚡ GPT-4 Processing transforms snippets into comprehensive 2000+ word articles with proper formatting
6. 🚀 Publishing Pipeline formats content for the CMS with SEO metadata and publishes automatically
7. 📱 Notification System sends detailed Slack updates with article metrics
8. 📈 Activity Logging tracks all published content to prevent future duplicates

🔧 Setup Requirements

📋 Prerequisites
- Google Custom Search API key and Search Engine ID
- OpenAI GPT-4 API access
- Google account for the tracking spreadsheet
- Slack workspace for notifications
- CMS or website with an API endpoint for publishing

🛠️ Step-by-Step Setup

Step 1: 🔎 Google Custom Search Configuration
- Go to Google Custom Search Engine
- Create a new search engine
- Configure it to search the entire web
- Copy your Search Engine ID (cx parameter)
- Get your API key from Google Cloud Console

Step 2: 📊 Google Sheets Template Setup
Create a Google Sheet with these required columns:
- **Column A:** timestamp - ISO date format (YYYY-MM-DD HH:MM:SS)
- **Column B:** query - the search query used
- **Column C:** title - published article title
- **Column D:** url - published article URL
- **Column E:** status - publication status (success/failed)
- **Column F:** word_count - final article word count
Template URL: Copy this Google Sheets template

Step 3: 🔑 Credential Configuration
Set up the following credentials in n8n:
- 📊 Google Sheets API - OAuth2 connection to your Google account
- 🤖 OpenAI API - your GPT-4 API key
- 📱 Slack Webhook - webhook URL for your notification channel
- 🔍 Custom Search API - your Google Custom Search API key

Step 4: ⚙️ Workflow Customization
Modify these key parameters to fit your needs:
- 🎯 **Search Topic:** edit the AI agent prompt to focus on your industry
- ⏰ **Publishing Schedule:** adjust the cron trigger (default: every 8 hours)
- 📝 **Article Length:** modify the GPT-4 prompt for different word counts
- 🌐 **CMS Endpoint:** update the publishing node with your website's API

🎨 Customization Options

🎯 Content Targeting
- Modify the AI agent's search query generation to focus on specific industries
- Adjust date restrictions (currently set to the last 7 days)
- Change the number of search results processed per run

✍️ Article Style
- Customize GPT-4 prompts for different writing styles (formal, casual, technical)
- Adjust article length requirements
- Modify SEO optimization parameters

📡 Publishing Channels
- Add additional CMS endpoints for multi-site publishing
- Configure different notification channels (Discord, Teams, etc.)
- Set up social media auto-posting integration

💡 Use Cases
- 📰 Automated news websites
- 📝 Industry blog content generation
- 🔍 SEO content pipeline automation
- 📊 News aggregation and republishing
- 📈 Content marketing automation

🛠️ Technical Notes
- Workflow includes error handling for anti-bot protection
- Duplicate prevention through Google Sheets tracking (a small sketch of this check follows this description)
- Rate limiting considerations for API usage
- Automatic retry logic for failed requests

🆘 Support
For setup assistance or customization help, refer to the workflow's internal documentation nodes or contact the template creator.
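As referenced in the Technical Notes, here is an illustrative sketch of the duplicate-prevention idea: compare each candidate query/URL against the rows already logged in the tracking sheet. The column names follow the A-F template layout above; matching on query or URL is an assumption about how strict you want the check to be.

```typescript
// One row of the Google Sheets tracking log (columns A-F from the template).
interface LogRow {
  timestamp: string;                 // YYYY-MM-DD HH:MM:SS
  query: string;
  title: string;
  url: string;
  status: "success" | "failed";
  word_count: number;
}

// Skip a candidate if its query or URL has already been published.
function isDuplicate(
  candidate: { query: string; url: string },
  history: LogRow[],
): boolean {
  return history.some(
    row => row.query === candidate.query || row.url === candidate.url,
  );
}
```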
by Miquel Colomer
🎯 Precision Prospecting: Automate LinkedIn Lead Gen with n8n & Bright Data

📝 Overview
This workflow turns n8n into an AI-powered prospector, automatically searching Google for LinkedIn profiles, scraping profile data via Bright Data, and summarizing key details. Ideal for sales and recruitment teams seeking targeted lead lists without manual research.

🎥 Workflow in Action
Want to see this workflow in action? An example chat window output is shown below:

🔑 Key Features
- **AI Chat Trigger**: Start prospecting via conversational prompts.
- **Contextual Memory**: Retains the last 20 messages for coherent dialogue.
- **Automated Google Search**: Generates site-restricted queries and fetches the top result.
- **Bright Data Scraping**: Synchronously scrapes LinkedIn profile details by URL.
- **Intelligent Filtering**: Extracts only valid LinkedIn profile links.
- **Limit Control**: Returns a single, most relevant profile per request.
- **LLM Summary**: Uses GPT-4o-mini to interpret and present scraped data.

🚀 How It Works (Step-by-Step)

Prerequisites:
- n8n ≥ v1.0 with community nodes: install n8n-nodes-brightdata (an unverified community node).
- API credentials: OpenAI, Bright Data (web unlocker zone "web_unlocker1").
- Webhook endpoint for the chat trigger.

Node Configuration:
- When chat message received (chatTrigger): fires on the user prompt.
- Simple Memory1 (memoryBufferWindow): stores the last 20 chat messages.
- AI Prospector Agent (agent): orchestrates the search logic.
- Get 1 Google Result (brightData): performs a Google search with site:linkedin.com/in.
- Get Links from Body (html): extracts all anchor (`<a>`) hrefs from the search result page.
- Extract Links (splitOut): splits out individual link entries.
- Filter only LinkedIn Profiles (filter): ensures the URL contains "linkedin.com/" and starts with "https://" (see the filter sketch below).
- Limit (limit): restricts output to the first valid profile URL.
- Search LinkedIn URI (toolWorkflow): passes the URL to a secondary workflow to fetch the first link.
- Get LinkedIn Profile Data (brightDataTool): scrapes the profile JSON.
- OpenAI Chat Model (lmChatOpenAi): summarizes and formats the scraped data.

Workflow Logic:
- The user asks for a person by company & name, company & position, or LinkedIn URL.
- The agent builds a Google query (e.g., site:linkedin.com/in bright data cmo) and calls "Get 1 Google Result."
- Extracted links are filtered and limited to the top valid profile.
- If the user provided a direct LinkedIn URL, the agent skips the search and scrapes immediately.
- The scraped profile JSON is passed to GPT-4o-mini to generate a concise summary.

Testing & Optimization:
- Trigger via Execute Workflow for dry runs.
- Inspect intermediate node outputs in n8n's Execution panel.
- Adjust maxIterations or the memory window length for performance.
- Tune the Bright Data zone or country settings to optimize scraping speed.

Deployment & Monitoring:
- Activate the workflow and expose its webhook URL.
- Use n8n's built-in alerts or external monitoring (e.g., Slack notifications) on failures.
- Rotate credentials via n8n's Credential Vault when needed.
- Version-control the workflow via duplicates or Git-backed n8n instances.

✅ Pre-requisites
- **OpenAI Account**: API key for GPT-4o-mini.
- **Bright Data Account**: Zone "web_unlocker1" and dataset gd_l1viktl72bvl7bjuj0.
- **n8n Version**: v1.0+ with community nodes installed.
- **Permissions**: Webhook access, Credential Vault read/write.

👤 Who Is This For?
- Sales teams automating outbound LinkedIn prospecting.
- Recruiters sourcing candidates without manual scraping.
- Marketing ops looking to enrich CRM with accurate profile data.
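As referenced in the node configuration, here is a minimal sketch of the "Filter only LinkedIn Profiles" and "Limit" steps. The conditions mirror the ones described above; everything else is illustrative.

```typescript
// Keep only well-formed LinkedIn profile URLs extracted from the result page.
function filterLinkedInProfiles(links: string[]): string[] {
  return links.filter(
    url => url.startsWith("https://") && url.includes("linkedin.com/"),
  );
}

// The Limit node then keeps just the first valid match.
const topProfile = (links: string[]): string | undefined =>
  filterLinkedInProfiles(links)[0];
```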
📈 Benefits & Use Cases
- **Efficiency**: Reduces hours of manual search and data entry to seconds.
- **Accuracy**: Filters out non-LinkedIn links and ensures high-quality results.
- **Scalability**: Handles multiple prospect requests concurrently via chat or API.
- **Integration**: Easily hooks into CRMs or email sequencers downstream.

Workflow created and verified by Miquel Colomer (https://www.linkedin.com/in/miquelcolomersalas/) and N8nHackers (https://n8nhackers.com).
by Amjid Ali
AI Chatbot with Conditional Execution for Cost Efficiency

Description
This n8n workflow implements an AI-powered chatbot that only runs when a chat is initiated on a website. By introducing a conditional step, the workflow ensures that AI tokens are not consumed unnecessarily, making it a cost-efficient and resource-optimized solution.
The chatbot, named Sophia, serves as an interactive assistant for SyncBricks. It helps users with guest posting services, YouTube review videos, IT consultancy, and online courses while collecting user details step by step. The chatbot ensures that inquiries are properly logged and confirmed before proceeding to AI-driven responses.
This template is ideal for businesses, service providers, and content creators who want to optimize AI token usage while delivering personalized, interactive engagement with their users.

Features
- Conditional Execution – the AI chatbot only activates when a chat is initiated, avoiding unnecessary API calls.
- AI-Powered Conversations – uses Google Gemini AI to generate human-like responses.
- Step-by-Step Data Collection – ensures structured user input, requesting name, email, and request type sequentially.
- Memory Buffer for Context Awareness – maintains conversation context using a window buffer memory system.
- Multiple Service Offerings – supports inquiries related to:
  - Guest Posting Services
  - YouTube Review Videos
  - Online Courses on Udemy
  - IT Consultancy Services
- Automated Confirmation Messages – after collecting user details, sends a confirmation message summarizing the request.

How It Works
1. Chat Message Trigger – the workflow starts only when a chat message is received from the website. This ensures no AI token is consumed unless a user initiates a chat.
2. Condition Check: Is Chat Input Provided? – the workflow checks whether the chat input is non-empty. If the chat input is empty, the workflow stops, ensuring no unnecessary API usage. If a message is detected, the chatbot continues processing. (A minimal sketch of this check appears at the end of this description.)
3. AI-Powered Chat Response – the chatbot, Sophia, generates personalized responses using Google Gemini AI and keeps a structured conversation flow by collecting the user's full name, email ID, and request type.
4. Memory Buffer for Context Retention – a Window Buffer Memory system stores chat history and retrieves previous responses to keep conversations context-aware.
5. Response Optimization – checks memory to avoid asking the same question twice. If details are already provided, Sophia moves directly to processing the request.
6. Confirmation & User Engagement – after collecting the required details, Sophia summarizes the request as follows: "Got it [Name], your request is [Request Type]. I will be sending the details to your email ID: [Email]. Hold on while I send confirmation."
7. Final Confirmation Message – ensures the user receives a proper acknowledgment of their inquiry.

Prerequisites
Before using this workflow, make sure you have:
- an n8n instance (Cloud or self-hosted)
- a Google Gemini API key (for AI-generated responses)
- a webhook integration (to trigger the chatbot from your website)

Use Cases
- Businesses & Enterprises – AI-powered lead qualification for services.
- Bloggers & Content Creators – automated guest post inquiry handling.
- YouTube Influencers & Educators – AI chatbot to promote courses and review services.
- Marketing Agencies – lead generation chatbot without excessive AI token consumption.
- E-Commerce & Consulting Services – AI-driven personalized customer engagement.

Nodes Used in This Workflow
- Chat Trigger (Webhook) – initiates only when a user sends a chat message.
- Conditional Check (If Node) – ensures AI is only used when a chat is initiated.
- AI Agent (Google Gemini AI) – generates intelligent chatbot responses.
- Memory Buffer (Context Retention) – stores user inputs for context-aware conversations.

Important
- Start with n8n
- Learn n8n with Amjid
- Get the n8n Book
- What is Proxmox

Creator Information
- Developed by: Amjid Ali
- Website: SyncBricks
- Email: amjid@amjidali.com
- LinkedIn: Amjid Ali
- YouTube: SyncBricks

Support & Contributions
If you find this workflow helpful, consider supporting my work: Donate via PayPal
For full courses on n8n, visit: Course by Amjid

Final Thoughts
This n8n workflow ensures optimal AI token usage while engaging users with an intelligent chatbot. By integrating conditional execution, it prevents unnecessary API calls, making it cost-effective and efficient for businesses looking to automate chat-based customer interactions.
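As referenced in the Condition Check step above, here is a minimal sketch of the cost-saving gate and of the confirmation message Sophia sends once the details are collected. It is illustrative: the actual If node works on the chat trigger's input field, and the placeholders are filled by the agent.

```typescript
// Only invoke the Gemini model when the incoming message actually has text.
function shouldInvokeAgent(chatInput: string | undefined): boolean {
  return typeof chatInput === "string" && chatInput.trim().length > 0;
}

// The confirmation message template described in step 6.
const confirmation = (name: string, request: string, email: string): string =>
  `Got it ${name}, your request is ${request}. I will be sending the details ` +
  `to your email ID: ${email}. Hold on while I send confirmation.`;
```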
by Rajneesh Gupta
IP Reputation Check & Threat Summary using Splunk + VirusTotal + AlienVault + n8n

This workflow automates IP reputation analysis using Splunk alerts, enriches the data via VirusTotal and AlienVault OTX, and generates actionable threat summaries for SOC teams, all without any coding.

What It Does
When a Splunk alert contains a suspicious IP, the workflow:
- **Ingests the IP** from the Splunk alert via webhook.
- **Performs dual threat enrichment** using:
  - VirusTotal IP reputation & tags.
  - AlienVault OTX pulses, reputation & WHOIS.
- **Merges & processes** the threat intel data.
- **Generates a rich HTML summary** for analyst review.
- **Routes action based on severity**:
  - Sends a Slack alert for suspicious IPs.
  - Creates an incident in ServiceNow.
  - Emails a formatted HTML report to the SOC inbox.

Tech Stack Used
- **Splunk** – SIEM alert source
- **VirusTotal API** – reputation check & analysis stats
- **AlienVault OTX API** – community threat intel & pulse info
- **n8n** – orchestration, merging, summary generation
- **Slack, Gmail, ServiceNow** – SOC notifications and ticketing

Ideal Use Case
Perfect for security teams wanting to:
- Automatically validate IP reputation from SIEM logs
- Get quick context from multiple threat feeds
- Generate email-ready reports and escalate high-risk IPs

Included Nodes
- Webhook (Splunk)
- Function nodes for IOC extraction and intel processing
- HTTP Request (VirusTotal & AlienVault)
- Merge + Switch nodes for conditional logic
- Gmail, Slack, ServiceNow integration

Tips
- Add your VirusTotal and AlienVault credentials in n8n's credential manager.
- Use the Switch node to route based on your internal threat score logic (an illustrative scoring sketch follows this description).
- Easily extend this to include AbuseIPDB or GreyNoise for deeper enrichment.
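As referenced in the Tips, here is an illustrative severity-scoring sketch for the Switch node. The signals, weights, thresholds, and the mapping of levels to actions are all assumptions; tune them to your own threat-scoring policy.

```typescript
// Combine VirusTotal and OTX signals into a simple score for routing.
interface Enrichment {
  vtMalicious: number;   // VirusTotal "malicious" analysis count
  vtSuspicious: number;  // VirusTotal "suspicious" analysis count
  otxPulses: number;     // AlienVault OTX pulse count
}

function severity(e: Enrichment): "high" | "medium" | "low" {
  const score = e.vtMalicious * 2 + e.vtSuspicious + e.otxPulses;
  if (score >= 10) return "high";   // e.g., ServiceNow incident + HTML report
  if (score >= 3) return "medium";  // e.g., Slack alert to the SOC channel
  return "low";                     // e.g., log only
}
```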
by Eumentis
What It Does
This workflow runs automatically when a new email is received in the user's Gmail account. It sends the email content to OpenAI (GPT-4.1-mini), which intelligently determines whether the message requires action. If the email is identified as actionable, the workflow sends a structured alert message to the user in Microsoft Teams. This keeps the user informed of high-priority emails in real time without the need to manually check every message. The workflow does not log any execution data, ensuring that email content remains secure and unreadable by others.

How It Works
- **Trigger on New Email**: the workflow is triggered automatically when a new email is received in the user's Gmail account.
- **Email Evaluation with OpenAI**: the email content is sent to GPT-4.1-mini, which evaluates whether the message requires user action.
- **Filter Actionable Emails**: only emails identified as actionable by the AI proceed through the rest of the workflow.
- **Send Notification to Teams**: for actionable emails, the workflow sends a structured alert message to the user in a Microsoft Teams chat via a Power Automate webhook.

Prerequisites
- Gmail IMAP credentials
- OpenAI API key
- Microsoft Teams webhook URL
- Power Automate flow to send a message to a Teams chat

How to Set It Up

1. Set Up the Power Automate Workflow

1.1 Open the Workflow app in Microsoft Teams
Open the Workflow app from Microsoft Teams. If it's not already added, go to Apps → search "Workflow" → click Add → open it.

1.2 Create a New Flow
Click New Flow → select Create from blank.

1.3 Add a Trigger: "When a Teams webhook request is received"
In the trigger setup, set "Who can trigger the flow?" to Anyone. After saving the flow, a webhook URL will be generated; this URL will be used in the n8n workflow.

1.4 Add Action: Parse JSON
Set Content to: Body. Use the following schema:

    {
      "type": "object",
      "properties": {
        "from": { "type": "string" },
        "receivedAt": { "type": "string" },
        "subject": { "type": "string" },
        "message": { "type": "string" },
        "recipientEmail": { "type": "string" }
      }
    }

1.5 Add Action: Get an @mention token for a user
Set the User field to the Microsoft Teams email address of the person to notify (e.g. yourname@domain.com).

1.6 Add Action: Post message in a chat or channel
In this action, configure the following:
- Post as: Flow bot
- Post in: Chat with Flow bot
- Recipient: your Microsoft Teams email address (e.g., yourname@domain.com)

Paste the following into the Message field (in code view):

    Hello @{outputs('Get_an_@mention_token_for_a_user')?['body/atMention']},
    You have received a new email at your email address @{body('Parse_JSON')?['recipientEmail']} that requires your attention:
    From: @{body('Parse_JSON')?['from']}
    Received On: @{body('Parse_JSON')?['receivedAt']}
    Subject: @{body('Parse_JSON')?['subject']}
    Please review the message at your earliest convenience.
    Click here to search this mail in your mailbox

1.7 Save and Enable the Flow
Click Save and turn the flow On. The webhook URL is now active and available in the first trigger step; copy it to use in n8n.

Need help with the setup? Feel free to contact us.

2. Configure the IMAP Email Trigger
First, enable 2-Step Verification in your Google Account and generate an App Password for n8n. Then, in the IMAP node → Create Credential, connect using the following details:
- User: your Gmail address
- Password: the App Password
- Host: imap.gmail.com
- Port: 993
- SSL/TLS: Enabled
Follow the n8n documentation to complete the setup.

3. Configure the OpenAI Integration
Add your OpenAI API key as a credential in n8n.
Follow the n8n documentation to complete the setup.

4. Set Up the HTTP Request to Trigger the Power Automate Workflow
Paste the webhook URL generated by the Power Automate workflow into the URL field of the HTTP Request node. (An illustrative request payload is sketched at the end of this description.)

5. Disable Execution Logging for Privacy
To ensure that email content is not stored in logs and remains fully secure, you can disable execution logging in n8n:
- In the n8n Workflow Editor, click the three dots (•••) in the top right corner and select Settings.
- In the settings panel:
  - Set "Save manual executions" to: Do not save
  - Set "Save successful production executions" to: Do not save
  - Set "Save failed production executions" to: Do not save, if you also want to avoid logging errors
- Save the changes.
Refer to the official n8n documentation for more details.

6. Activate the Workflow
Set the workflow status to Active in n8n so it runs automatically when a new mail is received in Gmail.

Need Help?
Contact us for support and custom workflow development.
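As referenced in step 4, here is an illustrative sketch of the request body the HTTP Request node sends to the Power Automate webhook. The field names follow the Parse JSON schema shown in the Power Automate setup; the values and the raw fetch call are examples only, since in n8n this is normally configured in the node UI.

```typescript
// Example payload matching the Parse JSON schema (values are illustrative).
const payload = {
  from: "sender@example.com",
  receivedAt: "2024-05-01T09:30:00Z",
  subject: "Invoice approval needed",
  message: "Short summary of why the email is actionable...",
  recipientEmail: "yourname@domain.com",
};

// Equivalent of the HTTP Request node: POST the payload to the Teams flow.
await fetch("<POWER_AUTOMATE_WEBHOOK_URL>", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});
```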
by Charles
Modern AI systems are powerful but pose privacy risks when handling sensitive data. Organizations need AI capabilities while ensuring:
✅ Sensitive data never leaves secure environments
✅ Compliance with regulations (GDPR, HIPAA, PCI, SOX)
✅ Real-time decision making about data sensitivity
✅ Comprehensive audit trails for regulatory review

The Concept: Intelligent Data Classification + Smart Routing
The goal of this concept is to build the foundations of safe and compliant use of LLMs in agentic workflows by automatically detecting sensitive data, applying sanitization rules, and intelligently routing requests through secure processing channels.
This workflow analyzes the user's chat or webhook input and attempts to detect PII using the Enhanced PII Pattern Detector. If PII is detected, the workflow processes that input via a series of compliance, auditing, and security steps which log and sanitize the request before any LLM is pinged.

Why Multi-Tier Routing?
Traditional systems use binary decisions (sensitive/not sensitive). Our 3-tier approach provides:
✅ Granular Security: critical PII gets maximum protection
✅ Performance Optimization: clean data gets full cloud capabilities
✅ Cost Efficiency: expensive local processing only when needed
✅ User Experience: maintains conversational flow across security levels

Why Context-Aware Detection?
Regex patterns alone miss contextual sensitivity. Our approach:
✅ Catches Intent: a "bank account" discussion is sensitive even without account numbers
✅ Reduces False Negatives: medical discussions stay secure even without explicit medical IDs
✅ Proactive Protection: identifies sensitive contexts before PII is shared
✅ Compliance Alignment: matches how regulations actually define sensitive data

Why Risk Scoring vs Binary Classification?
Binary PII detection creates artificial boundaries. Risk scoring provides:
✅ Nuanced Decisions: multiple low-risk patterns might aggregate to high risk
✅ Adaptive Thresholds: organizations can adjust sensitivity based on their needs
✅ Better UX: users aren't unnecessarily restricted in low-risk scenarios
✅ Audit Transparency: clear reasoning for every routing decision
(An illustrative scoring-and-routing sketch follows at the end of this description.)

Why Comprehensive Monitoring?
Privacy systems require trust and verification:
✅ Compliance Proof: audit trails demonstrate regulatory compliance
✅ Performance Optimization: identify bottlenecks and improve efficiency
✅ Security Validation: ensure no sensitive data leakage occurs
✅ Operational Insights: understand usage patterns and system health

How to Install
All you need for this workflow are credentials for your LLM providers, such as Ollama, OpenRouter, OpenAI, or Anthropic. The workflow is customizable and lets you define the best LLM and storage/memory solutions for your specific use case.
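As referenced in the risk-scoring section, here is an illustrative sketch of aggregating pattern matches into a score and mapping that score to the 3-tier routing idea. The patterns, weights, thresholds, and tier names are assumptions, not the workflow's actual Enhanced PII Pattern Detector configuration.

```typescript
// Illustrative PII patterns with weights (assumptions; extend per your policy).
const PII_PATTERNS: { name: string; regex: RegExp; weight: number }[] = [
  { name: "ssn", regex: /\b\d{3}-\d{2}-\d{4}\b/, weight: 10 },
  { name: "credit_card", regex: /\b(?:\d[ -]?){13,16}\b/, weight: 8 },
  { name: "email", regex: /[\w.+-]+@[\w-]+\.[\w.]+/, weight: 3 },
  { name: "medical_context", regex: /\b(diagnosis|prescription)\b/i, weight: 4 },
];

// Several low-weight matches can aggregate to a high overall risk.
function riskScore(input: string): number {
  return PII_PATTERNS.reduce(
    (score, p) => (p.regex.test(input) ? score + p.weight : score),
    0,
  );
}

// Map the score onto three processing tiers (thresholds are illustrative).
function routeTier(score: number): "local-llm" | "sanitize-then-cloud" | "cloud" {
  if (score >= 10) return "local-llm";          // critical PII: keep on-prem
  if (score >= 4) return "sanitize-then-cloud"; // redact before a cloud LLM
  return "cloud";                               // clean input: full capability
}
```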