by David Ashby
Complete MCP server exposing 2 BIN Lookup API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add BIN Lookup API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the BIN Lookup API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.bintable.com/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Balance (1 endpoint)
• GET /balance: Check balance

🔧 {Bin} (1 endpoint)
• GET /{bin}: Look up a BIN

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native BIN Lookup API responses with full data structure

Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration (see the sketch below)
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
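For Claude Desktop, a minimal `claude_desktop_config.json` entry might look like the sketch below. The server name, the placeholder URL, and the use of the `mcp-remote` bridge package to attach a remote SSE endpoint are assumptions; substitute the webhook URL copied from your MCP trigger node.

```json
{
  "mcpServers": {
    "bin-lookup": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n.example.com/mcp/bin-lookup/sse"]
    }
  }
}
```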
by Paul
🚀 Google Search Console MCP Server

📋 Description

This n8n workflow serves as a Model Context Protocol (MCP) server, connecting MCP-compatible AI tools (like Claude) directly to the Google Search Console APIs. With this workflow, users can automate critical SEO tasks and manage Google Search Console data effortlessly via MCP endpoints.

Included functionalities:
📌 List Verified Sites
📌 Retrieve Detailed Site Information
📌 Access Search Analytics Data
📌 Submit and Manage Sitemaps
📌 Request URL Indexing

OAuth2 is fully supported for secure and seamless API interactions.

🛠️ Setup Instructions

🔑 Prerequisites
- **n8n instance** (cloud or self-hosted)
- Google Cloud project with enabled APIs: Google Search Console API and Web Search Indexing API
- OAuth2 credentials from Google Cloud

⚙️ Workflow Setup

Step 1: Import Workflow. Open n8n, select "Import from JSON", and paste this workflow JSON.

Step 2: Configure OAuth2 Credentials. Navigate to Settings → Credentials and add new credentials (Google OAuth2 API) with the Client ID and Client Secret from Google Cloud and the following scopes:
- https://www.googleapis.com/auth/webmasters.readonly
- https://www.googleapis.com/auth/webmasters
- https://www.googleapis.com/auth/indexing

Step 3: Configure Webhooks. Webhook URLs auto-generate in the MCP Server Trigger node. Ensure webhooks are publicly accessible via HTTPS.

Step 4: Testing. Test your endpoints with sample HTTP requests to confirm everything is working correctly (see the sketch below).

🎯 Usage Examples
- **List Sites**: Fetch all verified Search Console sites.
- **Get Site Info**: Get detailed information about a particular site.
- **Search Analytics**: Pull metrics such as clicks, impressions, and rankings.
- **Submit Sitemap**: Automatically submit sitemaps.
- **Request URL Indexing**: Trigger Google's indexing for specific URLs instantly.

🚩 Use Cases & Applications
- SEO automation workflows
- AI-driven SEO analytics
- Real-time website performance monitoring
- Automated sitemap management
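As a quick sanity check that your OAuth2 credentials and scopes work, you can query the Search Analytics endpoint of the Search Console API directly. This is the standard `searchAnalytics/query` call; the site URL, date range, and access-token placeholder below are assumptions to adapt to your property.

```bash
curl -X POST \
  "https://www.googleapis.com/webmasters/v3/sites/https%3A%2F%2Fexample.com/searchAnalytics/query" \
  -H "Authorization: Bearer YOUR_OAUTH2_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"startDate": "2024-01-01", "endDate": "2024-01-31", "dimensions": ["query"], "rowLimit": 10}'
```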
by Audun
Who is this for?
- Security professionals
- Developers
- Individuals interested in data breach awareness

Use Case
- Automated monitoring for new breaches
- Proactive identity protection
- Demonstration of a simple cache mechanism

What this workflow does
- Checks the Have I Been Pwned API every 15 minutes for the latest breaches.
- Compares new breach data against previously notified breaches.
- Demonstrates a simple cache mechanism to track previously seen breaches.

How the Cache Functionality Works
- **Read from Cache**: Retrieves the last known breach from cache.json to avoid redundant alerts for the same breach.
- **Compare Against Current Breach**: The workflow checks whether the latest fetched breach differs from the cached one.
- **Update the Cache**: If a new breach is detected, it updates cache.json with the latest breach data.

(A code sketch of this check appears at the end of this description.)

Setup instructions
- The endpoint used in this workflow does not require an API key.
- Add your desired alert mechanism in the red box attached to the New breach node.

How to customize this workflow to your needs
- **Modify Notification Settings**: Tailor where alerts are sent (email, Slack, etc.). Add the desired node after the New breach node. This node contains all the data from the breach, so it is easily available.

You can choose from a variety of n8n nodes to send alerts when a new breach is detected. Below are a few common options you might consider adding after the New breach node:

**Email Node**
- What it does: Sends an email notification to one or more recipients.
- Use case: Great for simple alerts to your inbox or a team distribution list.
- Customization: You can include breach details in the subject or body of the email, using data from the New breach node.

**Slack Node**
- What it does: Sends a message to a Slack channel or user.
- Use case: Perfect for real-time alerts to your team in Slack.
- Customization: You can post breach details directly in a channel or DM. You can also format the message (bold, code blocks, etc.).

**Microsoft Teams Node**
- What it does: Sends a message to a Teams channel.
- Use case: For organizations that use Microsoft Teams for communication.
- Customization: Similar to Slack, you can customize the message content and include all relevant breach information.

**Discord Node**
- What it does: Sends an alert message to a Discord channel.
- Use case: Useful for teams or communities that coordinate via Discord.
- Customization: Add formatted messages with breach details for easy viewing.

**Telegram Node**
- What it does: Sends messages to a Telegram chat or group.
- Use case: Good for mobile notifications and fast alerts.
- Customization: You can include breach summaries or detailed information, and even use bots to automate this.

**Webhook Node (as a sender)**
- What it does: Sends breach data to another service via a webhook.
- Use case: If you have an external system or app that handles alerts, you can push the data directly to it.
- Customization: Send JSON payloads with detailed breach information to trigger actions in other systems.

**SMS Nodes (like Twilio)**
- What it does: Sends an SMS notification to one or more phone numbers.
- Use case: For urgent alerts that need to be seen immediately.
- Customization: Keep messages concise, including key breach details like the time, type of breach, and affected system.

- **Adjust Check Frequency**: Change the interval in the Schedule Trigger node (e.g., hourly or daily).
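The cache check described above boils down to a few lines. Below is a minimal sketch in n8n Code-node JavaScript, assuming a self-hosted instance configured to allow Node's fs module (e.g., NODE_FUNCTION_ALLOW_BUILTIN=fs) and that breaches are compared by their Name field; adapt the file path and field names to your setup.

```javascript
// Minimal cache-check sketch (assumed file path and field names).
const fs = require('fs');

const latest = $input.first().json;   // latest breach returned by the HIBP request
let cached = null;
try {
  cached = JSON.parse(fs.readFileSync('cache.json', 'utf8'));
} catch (e) {
  // No cache file yet: treat the current breach as new.
}

if (!cached || cached.Name !== latest.Name) {
  fs.writeFileSync('cache.json', JSON.stringify(latest));
  return [{ json: latest }];          // pass the new breach to the "New breach" branch
}

return [];                            // already notified: stop this run
```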
by Zacharia Kimotho
This workflow makes it easier to prepare for meetings and calls by researching your lead right before the call and creating a high-level meeting prep that is sent to your email. This removes the extra steps teams need to learn about their leads, research them, and prepare for upcoming calls.

How does it work
- The workflow starts when we capture the webhook from cal.com for new bookings. Ensure you have a field on the booking form to collect the lead's LinkedIn profile; this can be optional or mandatory depending on your preferences. (A sketch of the incoming payload appears at the end of this description.)
- When a new event is booked, we add the lead to an Airtable CRM for appointments and new bookings. This table contains all the fields needed to enrich and maintain your CRM.
- If the lead has a LinkedIn profile, we research their content and posts on LinkedIn and perform a lead enrichment to gather as much information as we can, then create a new meeting prep.

What you need
- Bright Data API
- Cal.com account/calendar. Other calendars (e.g., Calendly, Google Calendar) can be used too, with a few tweaks
- CRM - this can be anything, not just Airtable

Setting it up
- Create/update your calendar to allow collecting users' LinkedIn profiles/bios
- Add a new webhook in cal.com and subscribe to the desired events
- Map the fields from the webhook to match your CRM. If you have no CRM, make a copy of this Airtable CRM and map the fields to your account. We will be using the Base and table ID to make the mapping easier
- Set up your Bright Data API and select LinkedIn as the data source for the scraping. You can edit more data on the bio as needed
- Update this info in the CRM under the lead enrichment table and map accordingly
- You can update the prompts on the AI models or use them as is
- Update the Gmail node to send the meeting preps to you, and finally update the CRM with the generated meeting prep

This automated process can save your team a couple of minutes each day otherwise spent on client research and other fulfillment items. If you would like to learn more about n8n templates like this, feel free to reach out via LinkedIn. Happy productivity!!
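For reference, a cal.com BOOKING_CREATED webhook delivers a JSON body roughly along these lines. This is a trimmed, hypothetical example; the field names under responses depend on how you configure your booking form, including the assumed linkedin question.

```json
{
  "triggerEvent": "BOOKING_CREATED",
  "payload": {
    "title": "30 Min Meeting",
    "startTime": "2024-05-01T10:00:00Z",
    "attendees": [
      { "name": "Jane Doe", "email": "jane@example.com" }
    ],
    "responses": {
      "linkedin": { "value": "https://www.linkedin.com/in/janedoe" }
    }
  }
}
```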
by Jimleuk
Ever wanted to build your own RAG search over Youtube videos? Well, now you can! This n8n template shows how you can build a very capable Youtube search engine powered by Apify, Qdrant and your LLM of choice to quickly and efficiently browse over many videos for research.

I originally started this template to ask questions on the "n8n @ scale office-hours" livestream videos but then extended it to include the latest videos on the official channel.

Check out a demo here: https://jimleuk.app.n8n.cloud/webhook/n8n_videos

How it works
- Stage 1 is to collect the Youtube video transcripts and push them into a vector database. For this, I've used Apify to scrape Youtube and Qdrant to store the embeddings. Transcripts are broken down into smaller chunks and carefully tagged with metadata to assist in later search and filtering.
- Stage 2 is to build a web frontend for the user to query the vectorised transcripts. I'm using a webhook to serve a simple web app and API to dynamically fetch the results. When searching for a video, I've opted to use Qdrant's search groups API which, in this use-case, performs better as it returns a wider range of video results (see the sketch below). In the web frontend, when the user clicks on a result, the matching Youtube video plays in an embedded video player.

How to use
- Once credentials are all set, first run steps 1 - 3 to populate your vector store.
- Next, set the workflow to active to expose the web frontend.
- Visit the webhook URL in your browser to use it.
- If only for personal use, you may want to remove the rate limiting mechanism in step 4.

Requirements
- Apify for Youtube Channel and Video Scraping
- Qdrant for Vector store
- OpenAI for LLM and Embeddings

Customising the template
- Not interested in official n8n videos? Swap to a different channel - this template will work on many as long as videos are not private or set to prevent embeds.
- Technically any vector store should work but may not have the same grouping API. Use the simple vector store node and revert back to basic searching instead.
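For context, Qdrant's search groups API collapses hits by a payload field so each video appears at most once per group. A minimal request looks like the sketch below; the collection name, the video_id payload field, and the (truncated) query vector are assumptions based on how this template tags transcript chunks.

```bash
curl -X POST "http://localhost:6333/collections/youtube_transcripts/points/search/groups" \
  -H "Content-Type: application/json" \
  -d '{
    "vector": [0.12, -0.03, 0.54],
    "group_by": "video_id",
    "limit": 5,
    "group_size": 2,
    "with_payload": true
  }'
```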
by scrapeless official
AI-Powered Web Data Pipeline with n8n

How It Works

This n8n workflow builds an AI-powered web data pipeline that automates the entire process of:
- **Extraction**
- **Structuring**
- **Vectorization**
- **Storage**

It integrates multiple advanced tools to transform messy web pages into clean, searchable vector databases.

Integrated Tools
- **Scrapeless**: Bypasses JavaScript-heavy websites and anti-bot protections to reliably extract HTML content.
- **Claude AI**: Uses LLMs to analyze unstructured HTML and generate clean, structured JSON data.
- **Ollama Embeddings**: Generates local vector embeddings from structured text using the all-minilm model.
- **Qdrant Vector DB**: Stores semantic vector data for fast and meaningful search capabilities.
- **Webhook Notifications**: Sends real-time updates when workflows complete or errors occur.

From messy webpages to structured vector data: this pipeline is perfect for building intelligent agents, knowledge bases, or research automation tools.

Setup Steps

1. Install n8n
> Requires Node.js v18 / v20 / v22

```bash
npm install -g n8n
n8n
```

After installation, access the n8n interface at http://localhost:5678

2. Set Up Scrapeless
- Register at Scrapeless
- Copy your API token
- Paste the token into the HTTP Request node labeled "Scrapeless Web Request"

3. Set Up Claude API (Anthropic)
- Sign up at Anthropic Console
- Generate your Claude API key
- Add the API key to the following nodes: Claude Extractor, AI Data Checker, Claude AI Agent

4. Install and Run Ollama

macOS:
```bash
brew install ollama
```

Linux:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: download the installer from https://ollama.com

Start the Ollama server and pull the embedding model:
```bash
ollama serve
ollama pull all-minilm
```

5. Install Qdrant (via Docker)
```bash
docker pull qdrant/qdrant
docker run -d \
  --name qdrant-server \
  -p 6333:6333 -p 6334:6334 \
  -v $(pwd)/qdrant_storage:/qdrant/storage \
  qdrant/qdrant
```

Test if Qdrant is running:
```bash
curl http://localhost:6333/healthz
```

6. Configure the n8n Workflow
- Modify the trigger (manual or scheduled)
- Input your target URLs and collection name in the designated nodes
- Paste all required API tokens/keys into their corresponding nodes
- Ensure your Qdrant and Ollama services are running

Ideal Use Cases
- Custom AI Chatbots
- Private Search Engines
- Research Tools
- Internal Knowledge Bases
- Content Monitoring Pipelines
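To verify the embedding model is reachable before wiring it into n8n, you can hit Ollama's embeddings endpoint directly. A quick sketch, assuming the default port and a throwaway prompt:

```bash
curl http://localhost:11434/api/embeddings \
  -d '{"model": "all-minilm", "prompt": "hello world"}'
```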
by David Ashby
Complete MCP server exposing 2 NPR Station Finder Service API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add NPR Station Finder Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the NPR Station Finder Service API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://station.api.npr.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the sketch below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 V3 (2 endpoints)
• GET /v3/stations: Get stations
• GET /v3/stations/{stationId}: Retrieve metadata for the station with the given numeric ID

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native NPR Station Finder Service API responses with full data structure

Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
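Inside the HTTP Request nodes, parameters are filled by n8n's $fromAI() expressions, which the connected agent resolves at call time. A minimal sketch of how the station-metadata URL might be built; the key name and description are assumptions, not the exact values used in this workflow:

```
https://station.api.npr.org/v3/stations/{{ $fromAI('stationId', 'Numeric ID of the NPR station') }}
```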
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations.

This automated workflow is designed for:
- **Food Bloggers & Culinary Creators**: who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- **Nutritionists & Health Coaches**: who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- **AI/ML Engineers & Data Scientists**: building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- **Grocery & Meal Kit Platforms**: who aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- **Recipe Aggregator Startups**: looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- **Developers Integrating Cooking Features**: into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

What problem is this workflow solving?
This workflow solves:
- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

What this workflow does
1. Set Recipe Extract URL: configure the recipe website URL in the input node, and set your Bright Data zone name and authentication.
2. Paginated Data Extract: triggers a paginated extraction across multiple pages (recipe listing, index, or search pages) and returns a list of recipe links for processing.
3. Loop Over Items: loops through the array of recipe links; each link is passed individually to the scraping engine.
4. Bright Data MCP Client (per recipe): scrapes each individual recipe page using scrape_as_html and smartly bypasses common anti-bot protections via Bright Data Web Unlocker.
5. Structured Recipe Data Extract (via OpenAI GPT-4o mini): converts raw HTML to clean text using an LLM preprocessing node, then uses OpenAI GPT-4o mini to extract structured data (see the sketch after the setup notes).
6. Webhook Notification: pushes the structured recipe data to your configured webhook endpoint. Format: JSON payload, ideal for Slack, internal APIs, or dashboards.
7. Save Response to Disk: saves the structured recipe JSON to the local file system.

Pre-conditions
- You need to have a Bright Data account and do the necessary setup as mentioned in the "Setup" section below.
- You need to have an OpenAI account.

Setup
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
- In n8n, configure the OpenAI account credentials.
- Make sure to set the fields in the Set the Recipe Extract URL node.
- Remember to set the webhook_url to receive a webhook notification with the recipe response.
- Set the desired local path in the Write the structured content to disk node to save the recipe response.
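For orientation, the structured output of step 5 typically looks something like the JSON below. This schema is illustrative only; the actual fields depend entirely on the extraction prompt you configure in the GPT-4o mini node.

```json
{
  "title": "Classic Margherita Pizza",
  "cuisine": "Italian",
  "prep_time_minutes": 20,
  "cook_time_minutes": 15,
  "servings": 4,
  "ingredients": ["pizza dough", "tomato sauce", "fresh mozzarella", "basil leaves", "olive oil"],
  "steps": [
    "Preheat the oven to 250°C.",
    "Spread sauce over the dough.",
    "Top with mozzarella and bake for 12-15 minutes.",
    "Finish with basil and olive oil."
  ],
  "source_url": "https://example.com/recipes/margherita-pizza"
}
```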
How to customize this workflow to your needs
You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:
1. Input Fields Node: update the Recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. LLM Configuration: swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer, and modify the structured data prompt to extract whatever custom fields you need.
3. Webhook Notification: configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. Storage Destination: change the Save to Disk node to store the structured recipe data in a cloud bucket (S3, GCS, Azure Blob, etc.), a database (MongoDB, PostgreSQL, Firestore), or Google Sheets/Airtable for spreadsheet-style access.
by Ranjan Dailata
Notice
Community nodes can only be installed on self-hosted instances of n8n.

Who this is for
The Brave Search Structured Data Extractor workflow is designed for professionals and teams that need high-quality, structured insights from Brave search results in real time. Whether you're performing market research, tracking competitors, training AI models, or powering content engines, this workflow offers a robust and automated solution.

This workflow is tailored for:
- Market Researchers, who analyze trends across multimedia channels
- AI Developers, who require clean, structured datasets for model fine-tuning
- SEO & Content Analysts, looking to monitor visibility across news, images, and videos
- Media Researchers, curating timely and relevant information across formats
- Automation Engineers, integrating search insights into downstream workflows

What problem is this workflow solving?
Traditional web scraping and search result parsing is fragmented, inconsistent, and prone to errors, especially when dealing with multimedia (images, videos, news) data from search engines. This workflow provides:
- Centralized Brave search data extraction across all content types
- Search execution that switches based on the configured search type (news, images, videos, or all)
- Automated structured data transformation using Google Gemini
- Unified output persistence and notification across disk, webhook, and Google Sheets

What this workflow does
1. Input Configuration: define your Brave search query, set the search type (videos, images, news, or all), and configure your Bright Data MCP zone.
2. Bright Data MCP Search Execution: initiates a Brave search via Bright Data MCP using the correct URL pattern for each search type and returns the raw HTML of the search results.
3. Google Gemini LLM Structured Data Extraction: transforms raw results into structured data (e.g., title, URL, source, snippet).
4. Output Handling: save to disk (e.g., JSON or CSV file), send a webhook notification with the structured data (e.g., Slack, internal dashboards), and store in Google Sheets for team-wide access or dashboarding.

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below.
- You need to have a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

Setup
- Please make sure to set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
- Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
- Sign up at Bright Data.
- Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
- In n8n, configure the MCP Client (STDIO) credentials to connect with the Bright Data MCP Server (a sketch appears at the end of this description). Make sure to set the Bright Data API token in the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs
- Enhance Output Analysis: add additional LLM prompts for topic classification, sentiment scoring, or trend forecasting.
- Output Format Options: choose to output CSV, Markdown, or HTML reports based on your integration target.
- Schedule Automation: trigger the workflow on a schedule (daily/weekly) to keep monitoring topical content.
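The MCP Client (STDIO) credential referenced in the setup boils down to three fields: a command, its arguments, and the environment. A minimal sketch, assuming you run the published @brightdata/mcp package via npx; the token value is a placeholder to replace with your own.

```json
{
  "command": "npx",
  "args": ["-y", "@brightdata/mcp"],
  "env": { "API_TOKEN": "<your-bright-data-api-token>" }
}
```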
by Davide
This workflow optimizes the management of inquiries received through a contact form (Contact Form 7 - CF7 plugin) on a WordPress site, automating the process of classification, response drafting, and data storage. This workflow is particularly useful for businesses that receive multiple daily inquiries and want to improve their efficiency in managing customer communications.

Benefits:
✅ Automation & Speed: reduces the time needed to handle inquiries manually.
✅ Better Email Management: ensures every message receives a timely and accurate response.
✅ Customization: the generated draft can be edited before sending, maintaining a personal touch.
✅ Inquiry History: storing data in Google Sheets allows for easy tracking of customer interactions.
✅ Easy Integration: works seamlessly with Contact Form 7 without complex configurations.

How It Works
1. Form Submission Handling: the workflow starts with a WordPress form submission captured via a webhook. The form data (first name, last name, email, phone, and message) is extracted and structured using the "Set Fields" node. (A sketch of a typical payload appears at the end of this description.)
2. Message Classification: the submitted message is classified into predefined categories (e.g., "Product Info," "Order Info," or "Other") using the "Message Classifier" node, powered by Google Gemini.
3. Automated Email Drafting: based on the classification, the workflow generates a professional email draft using one of three "Email Writer" nodes (for Product, Order, or Other requests). Each node uses Google Gemini to craft a personalized response with a structured format (subject and body).
4. Email Draft Creation: the drafted email is sent as a Gmail draft to the appropriate department, including the original form data for context.
5. Data Logging: all submissions, along with their classifications and email drafts, are logged in a Google Sheets spreadsheet for record-keeping and further action.

Set Up Steps
1. Install WordPress Plugin: install the "CF7 to Webhook" plugin on WordPress and configure it to send form submissions to the n8n webhook URL.
2. Configure Webhook in n8n: set up the "From Wordpress" webhook node in n8n to receive POST requests from the WordPress form.
3. Google Gemini Integration: ensure the Google Gemini nodes are properly authenticated with the correct API credentials.
4. Gmail and Google Sheets Setup: authenticate the Gmail and Google Sheets nodes with the appropriate OAuth2 credentials and specify the target spreadsheet and sheet name.
5. Customize Classification Categories: adjust the categories in the "Message Classifier" node to match your business needs.
6. Test the Workflow: trigger a test form submission to verify the workflow processes data correctly, classifies the message, generates an email draft, and logs the data.

Need help customizing? Contact me for consulting and support or add me on Linkedin.
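For reference, the CF7 to Webhook plugin POSTs the form fields as a flat JSON object keyed by the field names defined in your CF7 form markup. The example below is hypothetical; names like first-name and your-message depend on how your form is built.

```json
{
  "first-name": "Maria",
  "last-name": "Rossi",
  "your-email": "maria.rossi@example.com",
  "your-phone": "+39 333 1234567",
  "your-message": "Hi, I would like more information about product X."
}
```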
by Obsidi8n
How it works:
- Send notes from Obsidian via Webhook to start the audio conversion
- OpenAI converts your text to natural-sounding audio and generates episode descriptions
- Audio files are stored in Cloudinary and automatically attached to your notes in Obsidian
- A professional podcast feed is generated, compatible with all major podcast platforms (Apple, Spotify, Google)

Set up steps:
1. Install and configure the Post Webhook Plugin in Obsidian
2. Set up Custom Auth credentials in n8n for Cloudinary using the following JSON:

```json
{
  "name": "Cloudinary API",
  "type": "httpHeaderAuth",
  "authParameter": {
    "type": "header",
    "key": "Authorization",
    "value": "Basic {{Buffer.from('your_api_key:your_api_secret').toString('base64')}}"
  }
}
```

3. Configure podcast feed metadata (title, author, cover image, etc.)

Note: The second flow is a generic Podcast Feed module that can be reused in any '[...]-to-Podcast' workflow. It generates a standard RSS feed from Google Sheets data and podcast metadata, making it compatible with all major podcast platforms.
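For context, each episode row from Google Sheets ends up as a standard RSS item roughly like the sketch below. The tag set shown (title, enclosure, guid, pubDate) is the usual minimum podcast platforms expect; the URLs and values are placeholders, not output from this workflow.

```xml
<item>
  <title>My Note as a Podcast Episode</title>
  <description>Episode description generated by OpenAI.</description>
  <enclosure url="https://res.cloudinary.com/demo/video/upload/episode-001.mp3"
             type="audio/mpeg" length="1048576"/>
  <guid isPermaLink="false">episode-001</guid>
  <pubDate>Wed, 01 May 2024 10:00:00 GMT</pubDate>
</item>
```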
by Karam Ghazzi
Description 📄
Turn your Slack workspace into a smart AI-powered HelpDesk using this workflow. This automation listens to Slack messages and uses an AI assistant (powered by OpenAI or any other LLM) to respond to employee questions about HR, IT, or internal policies by referencing your internal documentation (such as the Policy Handbook). If the answer isn't available, it can optionally email the relevant department (HR or IT) and ask them to update the handbook.

It remembers recent messages per user, cleans up intermediate responses to keep Slack threads tidy, and ensures your team gets consistent and helpful answers, without manually searching docs or escalating simple questions. Perfect for growing teams who want to streamline internal support using n8n, Slack, and AI.

How it works 🛠️
This workflow turns n8n into a Slack-based HelpDesk assistant powered by AI. It listens to Slack messages using the Events API, detects whether a real user is asking a question, and responds using OpenAI (or another LLM of your choice). Here's how it works step-by-step:
1. Webhook Trigger: the workflow starts when a message is posted in Slack via the Events API. It filters out any messages from bots to avoid loops. (A sketch of the incoming event appears at the end of this description.)
2. Identify the User: it fetches the full Slack profile of the user who posted the message and stores their name.
3. Send Receipt Message: an initial message is sent to the user saying, "I'm on it!", confirming their request is being processed.
4. AI Response Handling: the message is processed using the OpenAI Chat model (GPT-4o by default). Before responding, it checks whether the query matches any HR or IT policy from the Policy Handbook. If the question can't be answered based on internal data, it can optionally alert the HR or IT department via Gmail (after user confirmation).
5. Memory Retention: it keeps track of the last 5 interactions per user using Simple Memory, so it remembers previous context in a Slack conversation.
6. Cleanup and Final Reply: it deletes the initial receipt message and sends a final, clean response to the user.

How to use 🚀
1. Clone the Workflow: download or import the JSON workflow into your n8n instance.
2. Connect Your Credentials: Slack API (for messaging), Google Sheets API (for department contact info), Google Docs API (for the Policy Handbook), Gmail API (optional, for notifying departments), and OpenAI or another AI model.
3. Slack Setup: set up a Slack App and enable the Events API. Subscribe to message events and point them to the webhook URL generated by the workflow.
4. Customize Responses: edit the initial and final Slack message nodes if you want to personalize the wording, and swap out the LLM (ChatGPT) for your preferred model in the AI Agent node.
5. Adjust AI Behavior: tune the prompt logic in the "AI Agent" node if you want the AI to behave differently or access different data sources.
6. Expand Memory or Integrations: use external databases to store longer histories, or integrate with tools like Asana, Notion, or CRM platforms for further automation.

Requirements 📋
- n8n (self-hosted or cloud)
- Slack Developer Account & App
- OpenAI (or any LLM provider)
- Google Sheets with department contact details
- Google Docs containing the Policy Handbook
- Gmail account (optional, for email alerts)
- Knowledge of Slack Events API setup
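For reference, the Events API delivers message events to your webhook as JSON like the sketch below (the team, user, and channel IDs are placeholders). Note that on first setup Slack also sends a one-time url_verification request, and your endpoint must echo back its challenge value before events start flowing.

```json
{
  "type": "event_callback",
  "team_id": "T0123456789",
  "event": {
    "type": "message",
    "user": "U0123456789",
    "channel": "C0123456789",
    "text": "How many vacation days do I get per year?",
    "ts": "1714550400.000100"
  }
}
```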