by David Ashby
Complete MCP server exposing all Beeminder Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Beeminder Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example after this description)
• Native Integration: Uses the official n8n Beeminder Tool node with full error handling

📋 Available Operations (4 total)
Every possible Beeminder Tool operation is included:
🔧 Datapoint (4 operations)
• Create datapoint for goal
• Delete a datapoint
• Get many datapoints for a goal
• Update a datapoint

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Beeminder Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Beeminder Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
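For reference, this is what the pre-filled parameters look like inside a tool node. The $fromAI() helper with its (key, description, type) arguments is standard n8n; the Beeminder-specific field names below are illustrative assumptions, not copied from the workflow:

```
{{ $fromAI('goalSlug', 'Slug of the Beeminder goal to act on', 'string') }}
{{ $fromAI('value', 'Numeric value of the datapoint', 'number') }}
{{ $fromAI('comment', 'Optional comment to attach to the datapoint', 'string') }}
```

Each placeholder sits in its own parameter field; when an agent calls the tool, n8n resolves these placeholders from the model's structured arguments, so no manual mapping is needed.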
by Karam Ghazzi
Description
📄 Turn your Slack workspace into a smart AI-powered HelpDesk using this workflow. This automation listens to Slack messages and uses an AI assistant (powered by OpenAI or any other LLM) to respond to employee questions about HR, IT, or internal policies by referencing your internal documentation (such as the Policy Handbook). If the answer isn't available, it can optionally email the relevant department (HR or IT) and ask them to update the handbook. It remembers recent messages per user, cleans up intermediate responses to keep Slack threads tidy, and ensures your team gets consistent and helpful answers—without manually searching docs or escalating simple questions. Perfect for growing teams who want to streamline internal support using n8n, Slack, and AI.

How it works 🛠️
This workflow turns n8n into a Slack-based HelpDesk assistant powered by AI. It listens to Slack messages using the Events API, detects whether a real user is asking a question, and responds using OpenAI (or another LLM of your choice). Here's how it works step by step:
1. Webhook Trigger: The workflow starts when a message is posted in Slack via the Events API. It filters out any messages from bots to avoid loops (a sketch of this filter appears after this section).
2. Identify the User: It fetches the full Slack profile of the user who posted the message and stores their name.
3. Send Receipt Message: An initial message is sent to the user saying, "I'm on it!", confirming their request is being processed.
4. AI Response Handling: The message is processed using the OpenAI Chat model (GPT-4o by default). Before responding, it checks whether the query matches any HR or IT policy from the Policy Handbook. If the question can't be answered from internal data, it can optionally alert the HR or IT department via Gmail (after user confirmation).
5. Memory Retention: It keeps track of the last 5 interactions per user using Simple Memory, so it remembers previous context in a Slack conversation.
6. Cleanup and Final Reply: It deletes the initial receipt message and sends a final, clean response to the user.

How to use 🚀
1. Clone the Workflow: Download or import the JSON workflow into your n8n instance.
2. Connect Your Credentials:
• Slack API (for messaging)
• Google Sheets API (for department contact info)
• Google Docs API (for the Policy Handbook)
• Gmail API (optional, for notifying departments)
• OpenAI or another AI model
3. Slack Setup: Set up a Slack App and enable the Events API. Subscribe to message events and point them to the webhook URL generated by the workflow.
4. Customize Responses: Edit the initial and final Slack message nodes if you want to personalize the wording. Swap out the LLM (ChatGPT) for your preferred model in the AI Agent node.
5. Adjust AI Behavior: Tune the prompt logic in the "AI Agent" node if you want the AI to behave differently or access different data sources.
6. Expand Memory or Integrations: Use external databases to store longer histories. Integrate with tools like Asana, Notion, or CRM platforms for further automation.

Requirements 📋
• n8n (self-hosted or cloud)
• Slack Developer Account & App
• OpenAI (or any LLM provider)
• Google Sheets with department contact details
• Google Docs containing the Policy Handbook
• Gmail account (optional, for email alerts)
• Knowledge of Slack Events API setup
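Two Slack-specific details the trigger has to handle are the one-time URL verification handshake and the bot-message filter mentioned in step 1. A minimal sketch as it might look in an n8n Code node placed right after the Webhook node; the field names follow Slack's Events API payload, while the exact node layout is an assumption:

```js
// Runs right after the Webhook node that receives Slack Events API calls.
const body = $json.body;

// One-time handshake: Slack sends { type: 'url_verification', challenge }
// when you register the webhook URL; pass the challenge to a
// Respond to Webhook node so Slack can confirm the endpoint.
if (body.type === 'url_verification') {
  return [{ json: { challenge: body.challenge } }];
}

// Ignore anything posted by a bot (including this workflow's own replies)
// to avoid infinite reply loops.
const event = body.event ?? {};
if (event.bot_id || event.subtype === 'bot_message') {
  return []; // drop the item; downstream nodes never run
}

// Pass the real user message on for AI handling.
return [{ json: { user: event.user, text: event.text, channel: event.channel } }];
```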
by Cecilia
Enable smart, real-time answers in your WhatsApp groups using a custom webhook, Pinecone vector database, and no Facebook Business setup.

> 🟡 Note: This template uses a custom WhatsApp webhook. It does not use the official WhatsApp Business API.

👥 Who is this for?
This workflow is designed for individuals and teams who want to enable smart WhatsApp group automation — without going through Meta's official WhatsApp Business API. Ideal for small businesses, internal teams, communities, and personal power users.

❓ What problem is this solving?
Setting up WhatsApp bots with intelligent responses often requires approval from Meta and a verified business account. This workflow removes those barriers by using a self-hosted webhook to handle incoming messages and respond using a document-trained AI via Pinecone.

⚙️ What this workflow does
• Connects a regular WhatsApp number to a custom webhook
• Adds the bot to any group chat (it stays silent unless mentioned; see the sketch after this description)
• Indexes documents from Google Drive into Pinecone
• Responds with intelligent, context-aware answers from your custom knowledge base
• Auto-updates its knowledge every minute as the document changes

🛠️ Setup
Step 1: Connect Google Drive
• Set up your Google Drive credentials in n8n
Step 2: Configure Pinecone
• Create an index in Pinecone (dimension: 1536)
• Select this index in both Pinecone nodes
• Click Test Workflow to ingest your document into Pinecone
Step 3: Get Access to the WhatsApp Webhook
• Fill out this form to request access
• You'll receive a WhatsApp confirmation for linking
Step 4: Test WhatsApp Integration
• ✅ One-on-one test: Send a message from another number
• 👥 Group test: Add the bot to a group; it will only respond when tagged

🧩 How to customize this workflow
• Modify the system prompt inside the AI Agent node to control tone and behavior
• Update the connected Google Doc to match your specific domain (e.g. FAQs, SOPs, product manuals)
• Adjust the Pinecone sync frequency if you want updates more or less often

📚 Use cases
• **Customer Support**: Instant, intelligent replies in WhatsApp without live agents
• **Team Knowledge Bot**: Tag the bot for quick access to SOPs and internal docs
• **Community Groups**: Automate common questions while keeping noise low
• **Personal AI Assistant**: A WhatsApp chatbot trained on your notes and files

📝 Sticky Note Suggestion
💬 What this template does:
> Enables an AI bot in your WhatsApp group that answers questions based on a Google Doc you provide. It uses a custom webhook, Google Drive, and Pinecone.
🔧 Requirements:
> Google Drive account
> Pinecone account with an index (dimension 1536)
> Access to the custom WhatsApp webhook (see setup steps)
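The "silent unless mentioned" behavior can be implemented with a simple Code or IF step in front of the AI agent. The payload fields below (message.isGroup, message.text, message.chatId) are hypothetical, since the custom webhook's exact schema depends on the provider; adapt them to whatever your webhook actually delivers:

```js
// n8n Code node: decide whether the bot should answer this message.
// NOTE: field names are illustrative; inspect your webhook's real payload.
const msg = $json.message ?? {};
const BOT_NAME = '@helpbot'; // hypothetical bot handle

const isGroup = Boolean(msg.isGroup);
const mentioned = (msg.text || '').toLowerCase().includes(BOT_NAME);

// In groups, stay silent unless explicitly tagged; in 1:1 chats, always reply.
if (isGroup && !mentioned) {
  return [];
}
return [{ json: { question: msg.text, chatId: msg.chatId } }];
```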
by David Ashby
Complete MCP server exposing all Jina AI Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Jina AI Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Jina AI Tool node with full error handling

📋 Available Operations (3 total)
Every possible Jina AI Tool operation is included:
🔧 Reader (2 operations)
• Read URL content
• Search web
🔧 Research (1 operation)
• Perform deep research

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Jina AI Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration (see the example after this description)
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Jina AI Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
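For the Claude Desktop option, one common pattern is to bridge the remote MCP URL through the mcp-remote npm package in claude_desktop_config.json. The server name and URL below are placeholders, and the bridging approach is an assumption rather than part of this template; follow the current Claude Desktop and n8n documentation for your versions:

```json
{
  "mcpServers": {
    "n8n-jina": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance/mcp/your-webhook-path"]
    }
  }
}
```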
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

What you'll find:
1️⃣ Naive Sequential Chaining
The simplest but least efficient approach - connecting LLM nodes in a direct sequence. Easy to set up for beginners but becomes unwieldy and slow as your chain grows.
2️⃣ Agent-Based Processing with Memory
Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.
3️⃣ Parallel Processing for Maximum Speed
Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context (illustrated in the sketch after this section).

Setup Instructions:
1. API Credentials: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can change the model in each Anthropic Chat Model node, or pick an entirely different LLM.
2. For Cloud Users: If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
3. Test Data: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
4. Customization: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows! Follow me on LinkedIn for more tips on AI automation and n8n workflows!
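To make the sequential-versus-parallel trade-off concrete, here is the idea behind sections 1 and 3 in plain JavaScript, assuming a hypothetical callLLM() helper (not part of the workflow): sequential awaits add the response times together, while Promise.all overlaps them.

```js
// Hypothetical helper: sends one prompt to the LLM and resolves with the answer.
async function callLLM(prompt) {
  // ... HTTP call to your LLM provider ...
}

async function run() {
  const prompts = ['Summarize the article', 'List key quotes', 'Suggest a title'];

  // 1️⃣ Sequential: total time ≈ sum of the three response times.
  const sequential = [];
  for (const p of prompts) {
    sequential.push(await callLLM(p));
  }

  // 3️⃣ Parallel: total time ≈ the slowest single response.
  const parallel = await Promise.all(prompts.map((p) => callLLM(p)));

  return { sequential, parallel };
}
```

The agent-based approach (section 2) sits between the two: calls remain sequential, but shared memory means each call can build on earlier answers, which parallel execution cannot do.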
by Belgacem Dhiflaoui
What Problem Does This Solve?
This workflow automates the end-to-end process of capturing company information from Google Drive, storing it semantically in Pinecone, and interacting with users via an intelligent AI chatbot. It eliminates the need for manual customer service, lead tracking, and company information retrieval—offering a fully automated, intelligent engagement system.
Perfect for teams that need to:
• Maintain accurate, AI-readable company knowledge bases
• Answer customer inquiries 24/7 using AI
• Automatically collect and log lead information
• Embed a chatbot into their website to assist potential customers
Target Audience: Sales teams, business owners, marketing departments, customer support reps, startup founders, or anyone looking to automate AI-powered lead generation and customer engagement.

What Does It Do?
Part One – Knowledge Ingestion
• **Monitors** a Google Drive folder for new .txt or document uploads.
• **Downloads** the document and splits the content into manageable chunks using a recursive character splitter (see the sketch after this section).
• **Generates** embeddings via OpenAI.
• **Stores** the embeddings in a Pinecone vector database under the Q&A namespace.
• **Purpose:** This knowledge base is later used to answer business-related questions through AI.
Part Two – AI Chatbot Engagement
• **Listens** for incoming chat messages using n8n's chatTrigger node.
• **Activates an AI agent** (powered by GPT-4o) to respond to inquiries regarding business hours, services, products, or general company info.
• **Retrieves knowledge** using a vector search tool connected to Pinecone (newCompany_q).
• **Captures leads:** If a user shows interest, the AI collects their name, email, phone number, and specific interest, and stores them in a connected Google Sheet automatically.

Key Features
🔄 Google Drive integration for real-time file processing
🧠 OpenAI embeddings + Pinecone vector store for semantic memory
🤖 LangChain agent with tool-based reasoning
🗃️ Google Sheets integration for dynamic lead storage
💬 GPT-4o model for accurate, human-like conversation
⚙️ Modular design to expand into CRM, Notion, or email workflows
🌐 Website-ready chatbot endpoint

🧰 Setup Instructions
Prerequisites:
• n8n instance (cloud or self-hosted)
• Google Drive account (for uploading company data)
• Pinecone account (for vector storage)
• OpenAI API key
• Google Sheets access with OAuth2 credentials

📦 Installation Steps
1. Import the Workflow: Upload the JSON files into your n8n instance.
2. Configure Credentials: In n8n > Credentials, connect Google Drive, OpenAI, Pinecone, and Google Sheets.
3. Set Pinecone Index & Namespace: For example, index companyName and namespace Q&A.
4. Test the Flow: Upload a sample .txt or PDF file to the monitored Drive folder. Send a message to the chatbot (e.g., "What are your opening hours?"). Check the Google Sheet for collected user info.

How It Works (Behind the Scenes)
Part 1 – Data Preparation:
1. Company files are uploaded to Google Drive.
2. Each file is detected, downloaded, and chunked.
3. Embeddings are created using OpenAI.
4. Data is stored in Pinecone for semantic retrieval.
Part 2 – Chat Interaction:
1. A chat message triggers the workflow via webhook.
2. The AI agent interprets the intent and accesses company data via newCompany_q.
3. If lead data is gathered, it is appended to a Google Sheet using the AI-parsed values.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
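Under the hood, the chunking and embedding steps correspond to standard LangChain primitives, which n8n wraps in its own nodes. A rough standalone equivalent, with the caveat that import paths and chunk sizes vary across LangChain versions and documents:

```js
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';
import { OpenAIEmbeddings } from '@langchain/openai';

// Split the document into overlapping chunks so each stays small enough
// to embed well while retaining local context at the seams.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // characters per chunk (tune to your documents)
  chunkOverlap: 200, // overlap between consecutive chunks
});
const chunks = await splitter.splitText(documentText);

// One 1536-dimension vector per chunk, matching the Pinecone index setup.
const embeddings = new OpenAIEmbeddings({ model: 'text-embedding-3-small' });
const vectors = await embeddings.embedDocuments(chunks);
```

The vectors are then upserted into the Pinecone index under the Q&A namespace, which is what the chatbot's vector search tool queries at answer time.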
by Femi Ad
Generate & Schedule Social Media Posts with GPT-4 and Telegram Approval Workflow

This comprehensive content automation system features 23 nodes that seamlessly orchestrate AI-powered content creation, validation, and multi-platform publishing through Telegram interaction. It supports posting to major platforms like Twitter, LinkedIn, Facebook, Instagram, and more via the Upload-Post API.

Core Components
• Telegram Integration: Bidirectional messaging with approval workflows and real-time notifications.
• AI Content Engine: Configurable language models (GPT-4, Claude, etc.) via OpenRouter with structured output parsing.
• Content Validation: Character count enforcement (240-265), format checking, and quality threshold monitoring (a sketch of this check follows this description).
• Multi-Platform Publishing: Post to any social media platform with the Upload-Post API - better and easier to use than Blotato, with a dedicated n8n community node.
• Approval System: Preview and approve/reject functionality before content goes live.
• Web Research: Optional Tavily integration for real-time information gathering.

Target Users
• Content creators seeking a consistent social media presence.
• Digital marketers managing multiple brand accounts.
• Entrepreneurs wanting automated thought leadership.
• Agencies needing scalable content solutions.
• Small businesses without dedicated social media teams.

Setup Requirements
To get started, you'll need:
• Telegram Bot: Create via @BotFather and configure the webhook.
• Required APIs: OpenRouter (for AI model access), Upload-Post API (superior alternative to Blotato with community node support), and Tavily API (optional, for research).
• n8n Prerequisites: Version 1.7+ with Langchain nodes, webhook configuration enabled, and proper credential storage setup.

Disclaimer: This template uses community-supported nodes, such as the Upload-Post API node. These may require additional setup and could change with n8n updates. Always verify compatibility and test in a safe environment.

Step-by-Step Setup Guide
1. Install n8n: Ensure you're running n8n version 1.7 or higher. Enable webhook configurations in your settings.
2. Set Up Credentials: In n8n, add credentials for OpenRouter, Upload-Post API, and optionally Tavily. Store them securely.
3. Create a Telegram Bot: In Telegram, search for @BotFather and create a new bot. Note the token and set up a webhook pointing to your n8n instance.
4. Import the Workflow: Copy the workflow JSON (available in the template submission) and import it into your n8n dashboard.
5. Configure Nodes: Set your AI model preferences in the OpenRouter node. Link your social media accounts via the Upload-Post API node. Adjust validation settings (e.g., character limits, retry attempts) as needed.
6. Test the Workflow: Trigger a test run via Telegram by sending a content request. Approve or reject the preview, and monitor the output.
7. Schedule or Automate: Use n8n's scheduling features for automated triggers, or run manually for on-demand posts.

Usage Instructions
1. Initiate via Telegram: Send a message to your bot with a topic or prompt (e.g., "Create a post about AI automation for entrepreneurs").
2. AI Generation: The system generates content using your chosen model, with optional web research.
3. Validation Check: Content is automatically validated for length, quality (70% pass threshold), and format.
4. Approval Workflow: Receive a preview in Telegram. Reply with "approve" to post, or "reject" to retry (up to 3 attempts).
5. Publishing: Approved content posts to your selected platforms. Get notifications on success or errors.
6. Customization: Adapt for single posts, 3-6 post threads, or different tones (business, creative, educational, personal, technical). Use scheduling for consistent posting.

Workflow Features
• Universal Platform Support: Post to any social media platform via the Upload-Post API.
• Scheduling Flexibility: Automated triggers or manual execution.
• Content Types: Single posts or multi-post threads.
• Quality Control: 30% error tolerance with detailed validation reporting.
• Character Optimization: Enforced 240-265 character range for maximum engagement.
• Topic Versatility: Adapts tone and style based on content type.
• Error Handling: Comprehensive validation with helpful user feedback.

Performance Specifications:
• AI retry attempts: 3 for reliability.
• Validation threshold: 70% pass rate.
• Format support: Single posts and 3-6 post threads.
• Platform coverage: Any social media platform through the Upload-Post API.
• Research capability: Optional web search for trending topics.

Why Upload-Post API?
• Community-supported n8n node for easier integration.
• More reliable and feature-rich than Blotato.
• Supports all major social platforms.
• Active development and support.

Need help customizing this workflow for your specific use case? As a fellow entrepreneur passionate about automation and business development, I'd be happy to consult. Connect with me on LinkedIn: https://www.linkedin.com/in/femi-adedayo-h44/ or email me for support. Let's make your AI automation agency even more efficient!
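As a companion to the validation step described above, here is a rough sketch of what the check could look like in an n8n Code node. The 240-265 character window and 70% pass threshold come from the template description; which individual checks run and how they are weighted are assumptions:

```js
// n8n Code node: validate a generated post before sending the Telegram preview.
// The 240-265 window and 70% threshold come from the workflow description;
// the individual checks and equal weighting here are assumptions.
const post = $json.content ?? '';
const checks = {
  lengthOk: post.length >= 240 && post.length <= 265,
  hasText: post.trim().length > 0,
  noPlaceholders: !/\[.*?\]/.test(post), // e.g. leftover "[TOPIC]" tokens
};

const passed = Object.values(checks).filter(Boolean).length;
const score = passed / Object.keys(checks).length;

return [{
  json: {
    content: post,
    checks,
    score,
    valid: score >= 0.7, // below threshold -> trigger a retry (max 3 attempts)
  },
}];
```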
by DataMinex
📊 Real-Time Flight Data Analytics Bot with Dynamic Chart Generation via Telegram

🚀 Template Overview
This advanced n8n workflow creates an intelligent Telegram bot that transforms raw CSV flight data into stunning, interactive visualizations. Users can generate professional charts on demand through a conversational interface, making data analytics accessible to anyone via messaging.
Key Innovation: Combines real-time data processing, the Chart.js visualization engine, and Telegram's messaging platform to deliver instant business intelligence insights.

🎯 What This Template Does
Transform your flight booking data into actionable insights with four powerful visualization types:
• 📈 **Bar Charts**: Top 10 busiest airlines by flight volume
• 🥧 **Pie Charts**: Flight duration distribution (short/medium/long-haul)
• 🍩 **Doughnut Charts**: Price range segmentation with average pricing
• 📊 **Line Charts**: Price trend analysis across flight durations
Each chart includes auto-generated insights, percentages, and key business metrics delivered instantly to users' phones.

🏗️ Technical Architecture
Core Components
1. Telegram Webhook Trigger: Captures user interactions and button clicks
2. Smart Routing Engine: Conditional logic for command detection and chart selection
3. CSV Data Pipeline: File reading → parsing → JSON transformation
4. Chart Generation Engine: JavaScript-powered data processing with Chart.js
5. Image Rendering Service: QuickChart API for high-quality PNG generation (a request sketch appears at the end of this description)
6. Response Delivery: Binary image transmission back to Telegram

Data Flow Architecture
User Input → Command Detection → CSV Processing → Data Aggregation → Chart Configuration → Image Generation → Telegram Delivery

🛠️ Setup Requirements
Prerequisites
• **n8n instance** (self-hosted or cloud)
• **Telegram Bot Token** from @BotFather
• **CSV dataset** with flight information
• **Internet connectivity** for the QuickChart API

Dataset Source
This template uses the Airlines Flights Data dataset from GitHub:
🔗 Dataset: Airlines Flights Data by Rohit Grewal

Required Data Schema
Your CSV file should contain these columns:
```
airline,flight,source_city,departure_time,arrival_time,duration,price,class,destination_city,stops
```

File Structure
```
/data/
└── flights.csv (download from the GitHub dataset above)
```

⚙️ Configuration Steps
1. Telegram Bot Setup
• Create a new bot via @BotFather on Telegram
• Copy your bot token
• Configure the Telegram Trigger node with your token
• Set the webhook URL in your n8n instance
2. Data Preparation
• Download the dataset from Airlines Flights Data
• Upload the CSV file to /data/flights.csv in your n8n instance
• Ensure UTF-8 encoding
• Verify column headers match the dataset schema
• Test file accessibility from n8n
3. Workflow Activation
• Import the workflow JSON
• Configure all Telegram nodes with your bot token
• Test the /start command
• Activate the workflow

🔧 Technical Implementation Details
Chart Generation Process
Bar Chart Logic:
```javascript
// Aggregate airline counts
const airlineCounts = {};
flights.forEach(flight => {
  const airline = flight.airline || 'Unknown';
  airlineCounts[airline] = (airlineCounts[airline] || 0) + 1;
});

// Generate Chart.js configuration
const chartConfig = {
  type: 'bar',
  data: { labels, datasets },
  options: { responsive: true, plugins: {...} }
};
```

Dynamic Color Schemes:
• Bar Charts: Professional blue gradient palette
• Pie Charts: Duration-based color coding (light→dark blue)
• Doughnut Charts: Price-tier specific colors (green→purple)
• Line Charts: Trend-focused red gradient with smooth curves

Performance Optimizations
• Efficient Data Processing: Single-pass aggregations with O(n) complexity
• Smart Caching: QuickChart handles image caching automatically
• Minimal Memory Usage: Stream processing for large datasets
• Error Handling: Graceful fallbacks for missing data fields

Advanced Features
Auto-Generated Insights:
• Statistical calculations (percentages, averages, totals)
• Trend analysis and pattern detection
• Business intelligence summaries
• Contextual recommendations
User Experience Enhancements:
• Reply keyboards for easy navigation
• Visual progress indicators
• Error recovery mechanisms
• Mobile-optimized chart dimensions (800x600px)

📈 Use Cases & Business Applications
Airlines & Travel Companies
• **Fleet Analysis**: Monitor airline performance and market share
• **Pricing Strategy**: Analyze competitor pricing across routes
• **Operational Insights**: Track duration patterns and efficiency
Data Analytics Teams
• **Self-Service BI**: Enable non-technical users to generate reports
• **Mobile Dashboards**: Access insights anywhere via Telegram
• **Rapid Prototyping**: Quick data exploration without complex tools
Business Intelligence
• **Executive Reporting**: Instant charts for presentations
• **Market Research**: Compare industry trends and benchmarks
• **Performance Monitoring**: Track KPIs in real-time

🎨 Customization Options
Adding New Chart Types
1. Create a new Switch condition
2. Add a corresponding data processing node
3. Configure the Chart.js options
4. Update the user interface menu
Data Source Extensions
• Replace CSV with database connections
• Add real-time API integrations
• Implement data refresh mechanisms
• Support multiple file formats
Visual Customizations
```javascript
// Custom color palette
backgroundColor: ['#your-colors'],
// Advanced styling
borderRadius: 8,
borderSkipped: false,
// Animation effects
animation: { duration: 2000, easing: 'easeInOutQuart' }
```

🔒 Security & Best Practices
Data Protection
• Validate CSV input format
• Sanitize user inputs
• Implement rate limiting
• Secure file access permissions
Error Handling
• Graceful degradation for API failures
• User-friendly error messages
• Automatic retry mechanisms
• Comprehensive logging

📊 Expected Outputs
Sample Generated Insights
• "✈️ Vistara leads with 350+ flights, capturing 23.4% market share"
• "📈 Long-haul flights dominate at 61.1% of total bookings"
• "💰 Budget category (₹0-10K) represents 47.5% of all bookings"
• "📊 Average prices peak at ₹14K for 6-8 hour duration flights"
Performance Metrics
• **Response Time**: <3 seconds for chart generation
• **Image Quality**: 800x600px high-resolution PNG
• **Data Capacity**: Handles 10K+ records efficiently
• **Concurrent Users**: Scales with n8n instance capacity

🚀 Getting Started
1. Download the workflow JSON
2. Import it into your n8n instance
3. Configure Telegram bot credentials
4. Upload your flight data CSV
5. Test with the /start command
6. Deploy and share with your team

💡 Pro Tips
• **Data Quality**: Clean data produces better insights
• **Mobile First**: Charts are optimized for mobile viewing
• **Batch Processing**: Handles large datasets efficiently
• **Extensible Design**: Easy to add new visualization types

Ready to transform your data into actionable insights? Import this template and start generating professional charts in minutes! 🚀
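To complete the picture, here is roughly how a Chart.js config like the one above becomes a PNG via QuickChart. In the workflow this is an HTTP Request node returning binary data; the JavaScript below is an equivalent sketch, and the exact node settings are assumptions:

```javascript
// Render a Chart.js config to a PNG using the QuickChart API.
const chartConfig = {
  type: 'bar',
  data: {
    labels: ['Vistara', 'Air India', 'Indigo'],
    datasets: [{ label: 'Flights', data: [350, 290, 240] }],
  },
};

// QuickChart accepts the config as a URL-encoded query parameter.
const url =
  'https://quickchart.io/chart' +
  '?width=800&height=600&format=png' +
  `&c=${encodeURIComponent(JSON.stringify(chartConfig))}`;

const res = await fetch(url);                       // QuickChart returns image bytes
const png = Buffer.from(await res.arrayBuffer());   // sent on as a Telegram photo
```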
by Ranjan Dailata
Who this is for
The Crunchbase B2B Lead Discovery Pipeline is designed for sales teams, B2B marketers, business analysts, and data operations teams who need a reliable way to extract, structure, and summarize company information from Crunchbase to fuel lead generation and market intelligence.
This workflow is ideal for:
• Sales Development Reps (SDRs) - needing structured leads from Crunchbase
• Marketing Analysts - generating segmented outreach lists
• Growth Teams - identifying trending B2B startups
• RevOps Teams - automating company research pipelines
• Data Teams - consolidating insights into Google Sheets for dashboards

What problem is this workflow solving?
Manual extraction of company data from Crunchbase is time-consuming, inconsistent, and often lacks the contextual summary required for sales enablement or growth targeting. This workflow automates the extraction, transformation, summarization, and delivery of Crunchbase company data into structured formats, making it instantly usable for B2B targeting and analysis.
It solves:
• The difficulty of scaling lead discovery from Crunchbase
• The need to summarize raw textual content for quick insights
• The lack of integration between web scraping, LLM processing, and storage

What this workflow does
• **Markdown to Textual Data Extractor**: Takes raw scraped markdown from Crunchbase and converts it into readable plain text using a basic LLM chain
• **Structured Data Extraction**: Applies a parsing model (OpenAI) to extract structured fields such as company name, funding rounds, industry tags, location, and founding year
• **Summarization Chain**: Generates an executive summary from the raw Crunchbase text using a summarization prompt template
• **Send to Google Sheets**: Adds the structured data and summary to a Google Sheet for team access and further processing
• **Persist to Disk**: Saves both raw and structured data files locally for archiving or further use
• **Webhook Notification**: Sends a structured payload to a webhook endpoint (e.g., Slack, CRM, internal tools) with lead insights

Pre-conditions
• You need a Bright Data account and the setup described in the "Setup" section below.
• You need an OpenAI account.

Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

How to customize this workflow to your needs
• LLM Prompt Customization: Modify the extraction prompt to include additional fields like revenue, social links, or leadership team. Adjust the summarization tone (e.g., executive summary, sales-focused snapshot, or marketing digest).
• File Persistence: Store raw markdown, extracted JSON, and summary text separately for audit/debugging.
• Webhook Notification: Connect to a CRM (e.g., HubSpot, Salesforce) via webhook to automatically create leads. Send Slack notifications to alert sales reps when a new high-potential company is discovered.
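For orientation, this is the shape of the request the workflow's HTTP node makes through Bright Data's Web Unlocker. The endpoint and body follow Bright Data's Web Unlocker API documentation, but verify them against the current docs for your account; the zone name and target URL are placeholders:

```js
// Fetch a Crunchbase page through Bright Data's Web Unlocker.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <WEB_UNLOCKER_TOKEN>', // your token, kept in n8n credentials
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                                     // your zone name
    url: 'https://www.crunchbase.com/organization/exampleco',  // hypothetical target
    format: 'raw',                                             // return the page body as-is
  }),
});
const scraped = await res.text(); // handed to the markdown-to-text LLM chain
```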
by David Ashby
Complete MCP server exposing all Gong Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Gong Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Gong Tool node with full error handling

📋 Available Operations (4 total)
Every possible Gong Tool operation is included:
🔧 Call (2 operations)
• Get call
• Get many calls
👤 User (2 operations)
• Get user
• Get many users

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Gong Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints (see the sketch after this description)

✨ Benefits
• Complete Coverage: Every Gong Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
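For the "direct HTTP calls" option: MCP speaks JSON-RPC 2.0 over HTTP. A minimal sketch of listing and calling a tool against the production URL; transport details (session headers, SSE versus plain JSON responses) vary by MCP version, and the tool/argument names below are assumptions about how the Gong operations are exposed:

```js
const MCP_URL = 'https://your-n8n-instance/mcp/your-path'; // from the trigger node

// Discover the exposed tools (MCP method: tools/list).
const list = await fetch(MCP_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
});

// Invoke one of them (MCP method: tools/call).
const call = await fetch(MCP_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    jsonrpc: '2.0',
    id: 2,
    method: 'tools/call',
    params: { name: 'get_call', arguments: { callId: '1234567890' } },
  }),
});
```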
by Jimleuk
This n8n template shows you how to create an MCP server out of your existing n8n workflows. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools.

Designing agent tools for outcomes rather than utility has long been a recommended practice of mine, and it applies well to building MCP servers; in gist, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers!

This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage, and run these workflows indirectly.

How it works
• An MCP trigger is used, with 4 custom workflow tools attached to discover and manage existing workflows and 1 custom workflow tool to execute them.
• We introduce the idea of "available" workflows which the agent is allowed to use. This helps limit and avoid issues that come with trying to use every workflow, such as clashes or non-production workflows.
• The n8n node is a core node which taps into your n8n instance API and can retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp", and these are exposed through the "search workflows" tool.
• Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add workflow", "remove workflow", and "list workflows". The agent should be able to manage this autonomously (a sketch of this bookkeeping follows this section).
• Our approach to letting the agent execute workflows is to use the Subworkflow trigger. The tricky part is figuring out the input schema for each; this was eventually solved by pulling the schema out of the workflow's template JSON and adding it to the "available" workflow's description.
• To pass parameters through the Subworkflow trigger, we use the passthrough method - incoming data is used when parameters are not explicitly set within the node.
• When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows are preferred when answering queries or completing tasks.

How to use
1. First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can use all workflows or filter in other ways.
2. Next, ensure these workflows have Subworkflow triggers with their input schema set. This is how the MCP server will run them.
3. Set the MCP server to "active", which turns on production mode and makes it available at the production URL.
4. Use this production URL in your MCP client. For Claude Desktop, see the instructions here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop.
5. There is a small learning curve which will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, i.e. research and report, rather than just a collection of unrelated tools.

Requirements
• n8n API key to filter for selected workflows.
• n8n workflows with Subworkflow triggers!
• Redis for memory and tracking the "available" workflows.
• MCP client or agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow
• If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the executeTool to use HTTP requests for webhooks.
• Managing available workflows helps if you have many workflows where some may be too similar for the agent. If this isn't a problem for you, feel free to remove the concept of "available" and let the agent discover and use all workflows!
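The Redis bookkeeping amounts to a handful of operations. A standalone sketch using the node-redis client; the key name and stored payload shape are assumptions, since in the template these operations live inside the add/remove/list workflow tools:

```js
import { createClient } from 'redis';

const redis = createClient();
await redis.connect();

const KEY = 'mcp:available_workflows'; // assumed key name

// "add workflow": mark a workflow as usable by the agent, storing the
// input schema pulled from its template JSON alongside the id.
await redis.hSet(KEY, 'wf_123', JSON.stringify({
  name: 'Research and report',
  inputSchema: { query: 'string' }, // from the Subworkflow trigger
}));

// "list workflows": what the agent sees when it asks what it may run.
const available = await redis.hGetAll(KEY);

// "remove workflow": take it back out of circulation.
await redis.hDel(KEY, 'wf_123');
```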
by n8n Team
This workflow sends a message to a Discord channel when a new row is added or an existing row is updated in a Google Sheet. The message will include all data rows in the Google Sheet.

Prerequisites
• Discord account and Discord credentials.
• Google account and Google credentials.

How it works
Using a Code node, we can use the retrieved Google Sheet data to build a custom message that is then sent to Discord (a sketch follows below). The message is sent to the Discord channel specified in the Discord node.

Setup
This workflow requires that you set up a Discord webhook and have an existing Google Sheet with data. See how to set up a Discord webhook here.
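A minimal sketch of what that Code node could look like, assuming the Google Sheets data arrives as one n8n item per row; the column names ('Name', 'Status') are hypothetical, so use your sheet's actual headers:

```js
// n8n Code node: turn all incoming sheet rows into one Discord message.
// Column names ('Name', 'Status') are hypothetical; use your sheet's headers.
const rows = $input.all().map((item) => item.json);

const lines = rows.map(
  (row, i) => `${i + 1}. ${row.Name ?? '?'} - ${row.Status ?? '?'}`
);

const content = ['📋 Google Sheet updated:', ...lines].join('\n');

// The Discord node reads this field to post the message.
return [{ json: { content } }];
```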