by Luciano Gutierrez
## Supabase AI Agent with RAG & Multi-Tenant CRUD

**Version:** 1.0.0 | **n8n Version:** 1.88.0+ | **Author:** Koresolucoes | **License:** MIT

### Description
A stateful AI agent workflow powered by Supabase and Retrieval-Augmented Generation (RAG). It provides persistent memory, dynamic CRUD operations, and multi-tenant data isolation for AI-driven applications such as customer support, task orchestration, and knowledge management.

**Key Features:**
- **RAG Integration:** Leverages OpenAI embeddings and Supabase vector search for context-aware responses.
- **Full CRUD:** Manages `agent_messages`, `agent_tasks`, `agent_status`, and `agent_knowledge` in real time.
- **Multi-Tenant Ready:** Supports per-user/organization data isolation via dynamic table names and webhooks.
- **Secure:** Role-based access control via Supabase Row Level Security (RLS).

### Use Cases
- **Customer Support Chatbots:** Persist conversation history and resolve queries using institutional knowledge.
- **Automated Task Management:** Track and update task statuses dynamically.
- **Knowledge Repositories:** Store and retrieve domain-specific information for AI agents.

### Instructions
1. **Import Template.** Go to *n8n > Templates > Import from File* and upload this workflow.
2. **Configure Credentials.** Add your Supabase and OpenAI API keys under *Settings > Credentials*.
3. **Set Up Multi-Tenancy (Optional).**
   - **Dynamic Webhook Path:** Replace the default webhook path with `/mcp/tool/supabase/:userId` to enable per-user routing.
   - **Table Names:** Use a Set node to dynamically generate table names (e.g., `agent_messages_{{userId}}`).
4. **Activate & Test.** Enable the workflow and send test requests to the webhook URL (a request sketch follows below).

### Tags
AI Agent, RAG, Supabase, CRUD, Multi-Tenant, OpenAI, Automation

### License
This template is licensed under the MIT License.
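A minimal sketch of a test request against the multi-tenant path described above. The instance URL, user ID, and body fields are assumptions for illustration; the exact payload depends on how the template's Webhook node is configured.

```python
import requests

# Placeholder values: instance URL, user ID, and body fields are not defined
# by the template itself and must match your own configuration.
N8N_BASE = "https://your-n8n-instance.com"
USER_ID = "user_123"

# Multi-tenant path from the instructions: /mcp/tool/supabase/:userId
url = f"{N8N_BASE}/webhook/mcp/tool/supabase/{USER_ID}"

payload = {
    "message": "What open tasks do I have?",  # example query for the agent
    "sessionId": "demo-session",              # hypothetical session identifier
}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```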
by Luciano Gutierrez
## Google Calendar AI Agent with Dynamic Scheduling

**Version:** 1.0.0 | **n8n Version:** 1.88.0+ | **Author:** Koresolucoes | **License:** MIT

### Description
An AI-powered workflow that automates Google Calendar operations using dynamic parameters and MCP (Model Control Plane) integration. It enables event creation, availability checks, updates, and deletions with timezone-aware scheduling [[1]][[2]][[8]].

**Key Features:**
- **Full Calendar CRUD:** Create, read, update, and delete events in Google Calendar.
- **Availability Checks:** Verify time slots using the AVALIABILITY_CALENDAR node with timezone support (e.g., America/Sao_Paulo).
- **AI-Driven Parameters:** Use `$fromAI()` to inject dynamic values such as `Start_Time`, `End_Time`, and `Description` [[3]][[4]].
- **MCP Integration:** Connects to an MCP server for centralized AI agent control [[5]][[6]].

### Use Cases
- **Automated Scheduling:** Book appointments based on AI-recommended time slots.
- **Meeting Coordination:** Sync calendar events with CRM/task management systems.
- **Resource Management:** Check room/equipment availability before event creation.

### Instructions
1. **Import Template.** Go to *n8n > Templates > Import from File* and upload this workflow.
2. **Configure Credentials.** Add Google Calendar OAuth2 credentials under *Settings > Credentials*. Ensure the calendar ID matches your target (e.g., the ODONTOLOGIA group calendar).
3. **Set Up Dynamic Parameters.** Use `$fromAI('Parameter_Name')` in nodes like CREATE_CALENDAR to inject AI-generated values (e.g., event descriptions).
4. **Activate & Test.** Enable the workflow and send test requests to the webhook path `/mcp/:tool/calendar` (a request sketch follows below).

### Tags
Google Calendar, Automation, MCP, AI Agent, Scheduling, CRUD

### License
This template is licensed under the MIT License.

### Notes
- Extend multi-tenancy by adding `:userId` to the webhook path (e.g., `/mcp/:userId/calendar`) [[7]].
- For timezone accuracy, always specify `options.timezone` in availability checks [[8]].
- Refer to n8n's Google Calendar docs for advanced field mappings.
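A minimal sketch of a test request to the `/mcp/:tool/calendar` path. The instance URL, the `create` tool segment, and the body fields are assumptions; match them to your Webhook node and to the parameters your agent actually injects via `$fromAI()`.

```python
import requests

# Placeholder instance URL and tool segment; adjust both to your deployment.
N8N_BASE = "https://your-n8n-instance.com"
url = f"{N8N_BASE}/webhook/mcp/create/calendar"  # ':tool' replaced with 'create'

# Hypothetical event parameters mirroring the Start_Time / End_Time / Description
# values the template injects with $fromAI().
payload = {
    "Start_Time": "2025-06-10T14:00:00-03:00",
    "End_Time": "2025-06-10T15:00:00-03:00",
    "Description": "Consultation - ODONTOLOGIA group calendar",
    "timezone": "America/Sao_Paulo",
}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```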
by Robert Breen
## Beginner AI Agent Duo: Lead-Qualifier Task Automator & Ecommerce Chatbot

**Status:** Ready for Use

Note: This template is built entirely with official n8n nodes; no community-node installation is required.

### Description
This template demonstrates two beginner-friendly AI-agent patterns that cover the most common use cases:

| Agent | Purpose | Flow Highlights |
|-------|---------|-----------------|
| Lead-Qualifier Task Automator | Classifies phone-call transcripts to decide if the caller is a good bulk-order lead. | Manual Trigger → Code (sample data) → AI Agent (GPT-4o-mini) → Structured Output Parser → Set (clean fields) |
| Ecommerce Chatbot | Answers customer questions about products, bulk pricing, shipping, and returns. | Chat Trigger (webhook) → AI Agent (GPT-4o-mini) with Memory → If node → Order-placed reply or no-op |

Both agents run on GPT-4o-mini and use n8n's LangChain-powered nodes for quick, low-code configuration.

### How to Install & Run
1. **Import the Workflow.** In n8n, go to *Workflows > Import from File* or *Paste JSON*, then save.
2. **Add Your OpenAI API Key.** Go to *Credentials > New > OpenAI API*, paste your key from <https://platform.openai.com>, and select this credential in both OpenAI Chat Model nodes.
3. **(Optional) Select a Different Model.** The default model is gpt-4o-mini. Change to GPT-4o, GPT-3.5-turbo, or any available model in each OpenAI node.
4. **Test the Lead-Qualifier Agent.** Click *Activate*, then press *Test workflow*. The Code node feeds four sample transcripts; the AI Agent returns JSON like:

```json
{
  "Name": "Jordan Lee",
  "Is Good Lead": "Yes",
  "Reasoning": "Customer requests 300 custom mugs, indicating a bulk order."
}
```

5. **Test the Ecommerce Chatbot.** Copy the Webhook URL from the *When chat message received* trigger and POST a payload like:

```json
{
  "message": "Hi, do you offer discounts if I buy 120 notebooks?"
}
```

The AI Agent replies with bulk-pricing info. If the customer confirms an order, it appends `*`; the If node then sends "Your order has been placed". A request sketch follows below.

### Customization Ideas
- **Refine Qualification Logic:** Edit the Task Agent's system prompt to match your own lead criteria.
- **Save Leads Automatically:** Add Google Sheets, Airtable, or a database node after the Set node.
- **Expand the Chatbot:** Connect inventory APIs, payment gateways, or CRM integrations.
- **Adjust Memory Length:** Change the *Simple Memory* node's window to retain more conversation context.

### Connect with Me
I'm Robert Breen, founder of Ynteractive, a consulting firm that helps businesses automate operations using n8n, AI agents, and custom workflows. I've helped clients build everything from intelligent chatbots to complex sales automations, and I'm always excited to collaborate or support new projects. If you found this workflow helpful or want to talk through an idea, I'd love to hear from you.

- Website: https://www.ynteractive.com
- YouTube: @ynteractivetraining
- LinkedIn: https://www.linkedin.com/in/robert-breen
- Email: rbreen@ynteractive.com
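A minimal sketch of that chatbot test call, using the payload shown above. The webhook URL is a placeholder; copy the real one from the *When chat message received* trigger, and note that depending on how the chat trigger is configured the expected field may differ from `message`.

```python
import requests

# Placeholder URL; copy the actual webhook URL from the chat trigger node.
url = "https://your-n8n-instance.com/webhook/REPLACE-WITH-CHAT-WEBHOOK-ID"

payload = {"message": "Hi, do you offer discounts if I buy 120 notebooks?"}

response = requests.post(url, json=payload, timeout=30)
response.raise_for_status()
print(response.text)  # bulk-pricing reply from the Ecommerce Chatbot agent
```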
by n8n Team
### Who this template is for
This template is for researchers, students, professionals, or content creators who need to quickly extract and summarize key insights from PDF documents using AI-powered analysis.

### Use case
Converting lengthy PDF documents into structured, digestible summaries organized by topic with key insights. This is particularly useful for processing research papers, reports, whitepapers, or any document where you need to quickly understand the main topics and extract actionable insights without reading the entire document.

### How this workflow works
1. **Document Upload:** Receives PDF files through a POST endpoint at `/ai_pdf_summariser`.
2. **File Validation:** Checks that the PDF is under 10 MB and has fewer than 20 pages to meet API limits.
3. **Content Extraction:** Extracts text content from the PDF file.
4. **AI Analysis:** Uses OpenAI's GPT-4o-mini to analyze the document and break it down into distinct topics.
5. **Insight Generation:** For each topic, generates 3 key insights with titles and detailed explanations.
6. **Format Response:** Converts the structured data into markdown format for easy reading.
7. **Return Results:** Provides the formatted summary along with document metadata (file hash).

### Set up steps
1. **Configure OpenAI API:** Set up your OpenAI credentials for the GPT-4o-mini model.
2. **Deploy Webhook:** The workflow automatically creates a POST endpoint at `/ai_pdf_summariser`.
3. **Test Upload:** Send a PDF file to the endpoint using a multipart/form-data request (a request sketch follows below).
4. **Adjust Limits:** Modify the file size (10 MB) and page count (20) validation limits if needed based on your requirements.
5. **Customize Prompts:** Update the system prompt in the Information Extractor node to change how topics and insights are generated.

The workflow includes comprehensive error handling for file validation failures (400 error) and processing errors (500 error), ensuring reliable operation even with problematic documents.
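A minimal sketch of a multipart upload to the `/ai_pdf_summariser` endpoint. The instance URL and the form field name `file` are assumptions; check the Webhook node's binary property name in your copy of the workflow.

```python
import requests

# Placeholder instance URL; the endpoint path comes from the template.
url = "https://your-n8n-instance.com/webhook/ai_pdf_summariser"

# Assumes the Webhook node reads the upload from a multipart field named "file".
with open("report.pdf", "rb") as pdf:
    response = requests.post(
        url,
        files={"file": ("report.pdf", pdf, "application/pdf")},
        timeout=120,
    )

if response.status_code == 400:
    print("Validation failed: file over 10 MB or more than 20 pages.")
elif response.status_code == 500:
    print("Processing error while summarising the document.")
else:
    response.raise_for_status()
    print(response.text)  # markdown summary plus document metadata
```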
by Dataki
**Edit 19/11/2024:** As explained in the workflow, the AI Agent with the original system prompt was not effective when using gpt-4o-mini. To address this, I optimized the prompt to work better with this model. You can find the prompts I've tested on this Notion page, and yes, there is one that works well with gpt-4o-mini.

### AI Agent to chat with your Search Console data, using OpenAI and Postgres
This AI Agent enables you to interact with your Search Console data through a chat interface. Each node is documented within the template, providing sufficient information for setup and usage. You will also need to configure Search Console OAuth credentials; follow this n8n documentation to set up the OAuth credentials.

### Important Notes
**Correctly configure scopes for Search Console API calls.** It's essential to configure the scopes correctly in your Google Search Console API OAuth2 credentials. Incorrect configuration can cause issues with the refresh token, requiring frequent reconnections. The template shows the configuration I use to avoid constant re-authentication. You'll need to add your `client_id` and `client_secret` from the Google Cloud Platform app you created to access your Search Console data.

**Configure authentication for the webhook.** Since the webhook will be publicly accessible, don't forget to set up authentication. I've used Basic Auth, but feel free to choose the method that best meets your security requirements. A request sketch follows below.

### Example of awesome things you can do with this AI Agent
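A minimal sketch of calling the chat webhook with Basic Auth, as recommended above. The webhook path, credentials, and body field are placeholders; match them to your own Webhook/Chat Trigger configuration.

```python
import requests

# Placeholder URL and credentials; use your own webhook path and Basic Auth values.
url = "https://your-n8n-instance.com/webhook/search-console-agent"
auth = ("webhook_user", "webhook_password")

# Hypothetical body field; adjust to whatever your trigger node expects.
payload = {"chatInput": "Which pages gained the most clicks in the last 28 days?"}

response = requests.post(url, json=payload, auth=auth, timeout=60)
response.raise_for_status()
print(response.json())
```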
by Jan Oberhauser
A simple API which queries the received country code via GraphQL and returns it.

Example URL: `https://n8n.example.com/webhook/1/webhook/webhook?code=DE`

The workflow:
1. Receives a country code from an incoming HTTP request
2. Reads data via GraphQL
3. Converts the data to JSON
4. Constructs the return string
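A minimal sketch of calling this endpoint, following the example URL above with your own instance hostname and webhook path substituted:

```python
import requests

# Substitute your own n8n hostname and the path shown on the Webhook node.
url = "https://n8n.example.com/webhook/1/webhook/webhook"

response = requests.get(url, params={"code": "DE"}, timeout=30)
response.raise_for_status()
print(response.text)  # the constructed return string for country code "DE"
```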
by Jonathan
**Task:** Create a simple API endpoint using the Webhook and Respond to Webhook nodes.

**Why:** You can prototype or replace a backend process with a single workflow.

**Main use cases:**
- Replace backend logic with a workflow
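A minimal sketch of calling such an endpoint once the workflow is active. The path and query parameter are placeholders; the response body is whatever the Respond to Webhook node is configured to return.

```python
import requests

# Placeholder path; use the production URL shown on your Webhook node.
url = "https://your-n8n-instance.com/webhook/my-simple-api"

response = requests.get(url, params={"name": "n8n"}, timeout=30)
response.raise_for_status()
print(response.status_code, response.text)  # body defined by Respond to Webhook
```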
by Don Jayamaha Jr
This workflow acts as a central API gateway for all technical indicator agents in the Binance Spot Market Quant AI system. It listens for incoming webhook requests and dynamically routes them to the correct timeframe-based indicator tool (15m, 1h, 4h, 1d). Designed to power multi-timeframe analysis at scale.

### What It Does
- Accepts requests via webhook with a token symbol and timeframe
- Forwards requests to the correct internal technical indicator tool
- Returns a clean JSON payload with RSI, MACD, BBANDS, EMA, SMA, and ADX
- Can be used directly or as a microservice by other agents

### Input Format
Webhook endpoint: `POST /webhook/indicators`

Body format:

```json
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m"
}
```

A request sketch using this format follows at the end of this listing.

### Routing Logic
| Timeframe | Routed To |
| --------- | -------------------------------- |
| 15m | Binance SM 15min Indicators Tool |
| 1h | Binance SM 1hour Indicators Tool |
| 4h | Binance SM 4hour Indicators Tool |
| 1d | Binance SM 1day Indicators Tool |

### Use Cases
| Use Case | Description |
| ----------------------------------------------- | ------------------------------------------------------ |
| Used by Binance Financial Analyst Tool | Automatically triggers all indicator tools in parallel |
| Integrated in Binance Quant AI System | Supports reasoning, signal generation, and summaries |
| Can be called independently for raw data access | Useful for dashboards or advanced analytics |

### Output Example

```json
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m",
  "rsi": 56.7,
  "macd": "Bearish Crossover",
  "bbands": "Stable",
  "ema": "Price above EMA",
  "adx": 19.4
}
```

### Prerequisites
Make sure all of the following workflows are installed and operational:
- Binance SM 15min Indicators Tool
- Binance SM 1hour Indicators Tool
- Binance SM 4hour Indicators Tool
- Binance SM 1day Indicators Tool
- OpenAI credentials (for any agent using LLM formatting)

### Licensing & Attribution
© 2025 Treasurium Capital Limited Company. All architectural routing logic and endpoint structuring is IP-protected. No unauthorized rebranding or resale permitted.

Need help? Connect on LinkedIn: Don Jayamaha
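A minimal sketch of the gateway call described above. Only the instance hostname is an assumption; the path and body fields come from the template's documented input format.

```python
import requests

# Placeholder hostname; the /webhook/indicators path comes from the template.
url = "https://your-n8n-instance.com/webhook/indicators"

payload = {"symbol": "DOGEUSDT", "timeframe": "15m"}

response = requests.post(url, json=payload, timeout=60)
response.raise_for_status()

indicators = response.json()
print(indicators["rsi"], indicators["macd"])  # e.g. 56.7, "Bearish Crossover"
```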
by Don Jayamaha Jr
A sentiment intelligence sub-agent for the Binance Spot Market Quant AI Agent. It aggregates crypto news from major sources, filters by token keyword (e.g., BTC, ETH), and produces a Telegram-ready summary including market sentiment and top headlines, powered by GPT-4o.

### Workflow Function
This tool performs the following steps:

| Step | Description |
| ------------------------ | ----------------------------------------------------------------------------- |
| Webhook Input | Accepts `{ "message": "symbol" }` via HTTP POST |
| Crypto Keyword Extractor | GPT model extracts the valid crypto symbol (e.g., "SOL", "DOGE", "ETH") |
| RSS News Aggregators | Pulls latest headlines from 9+ crypto sources (CoinDesk, Cointelegraph, etc.) |
| Merge & Filter Articles | Keeps only articles containing the specified token |
| Prompt Builder | Creates prompt for GPT with filtered headlines |
| GPT-4o Summarizer | Summarizes news into a 3-part response: Summary, Sentiment, Headline Links |
| Telegram Formatter | Converts GPT output into a Telegram-friendly message |
| Response Handler | Returns formatted message to the caller via webhook |

### Webhook Trigger Format

```json
{ "message": "ETH" }
```

This triggers a full execution of the workflow and returns output like:

```
ETH Sentiment: Neutral
• BlackRock's tokenized fund expands to Ethereum mainnet (CoinDesk)
• Ethereum fees remain high, analysts call for L2 migration (NewsBTC)
• Vitalik warns about centralized risks in staking (Cointelegraph)
```

A request sketch follows below.

### Installation Guide
1. **Import & Enable**
   - Load the .json into your n8n Editor
   - Enable the webhook trigger in the top-right corner
   - Ensure it's reachable via `POST /webhook/custom-path`
2. **Required Credentials**
   - OpenAI API Key (GPT-4o capable)
   - No API keys required for RSS feeds
3. **Connect to Quant Agent**
   - Add an HTTP Request node in your main AI agent
   - Point to this workflow's webhook with body `{ "message": "symbol" }`
   - Capture the response to include in your Telegram output

### Real Use Cases
| Scenario | Result |
| ---------------------------------- | ---------------------------------------------------------------- |
| BTC sentiment before a key event | Returns 8-12 filtered articles with bullish/neutral/bearish tone |
| Daily pulse for altcoins like DOGE | Shows relevant headlines, helpful for intraday trading setups |
| Telegram chatbot integration | Enables user to query sentiment via `/sentiment ETH` |
| Macro context for Quant AI outputs | Adds emotional/news context to technical-based trade decisions |

### Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

For support: LinkedIn, Don Jayamaha
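A minimal sketch of triggering this sub-agent from outside n8n. The hostname and webhook path are placeholders (the template only notes the trigger must be reachable at your chosen path); the body format comes from the template.

```python
import requests

# Placeholder hostname and path; use the path configured on the webhook trigger.
url = "https://your-n8n-instance.com/webhook/custom-path"

response = requests.post(url, json={"message": "ETH"}, timeout=120)
response.raise_for_status()
print(response.text)  # Telegram-ready sentiment summary for ETH
```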
by Giulio
This n8n workflow template allows you to create a CRUD endpoint that performs the following actions:
- Create a new record
- Get a record
- Get many records
- Update a record
- Delete a record

This template is connected with Airtable, but you can replace the Airtable nodes with anything you need to interact with (e.g. Postgres, MySQL, Notion, Coda...). The template uses the n8n Webhook node setting 'Allow Multiple HTTP Methods' to enable multiple HTTP methods on the same node.

### Features
- Just two nodes to create 5 endpoints
- Use it with Airtable or replace the Airtable nodes for your own customization
- Add your custom logic exploiting all n8n's possibilities

### Workflow Steps
- **Webhook node:** exposes the endpoints to get many records and create a new record.
- **Webhook (with ID) node:** exposes the endpoints to get, update, and delete a record. Due to an n8n limitation, this endpoint will have an additional code in the path (e.g. `https://my.app.n8n.cloud/webhook/580ccc56-f308-4b64-961d-38323501a170/customers/:id`). Keep this in mind when using these endpoints in your application.
- **Various Airtable nodes:** execute specific operations to interact with Airtable records.

### Getting Started
To deploy and use this template:
1. Import the workflow into your n8n workspace.
2. Customize the endpoint paths by tweaking the 'Path' parameters in the 'Webhook' and 'Webhook (with ID)' nodes (currently `customers`).
3. Set up your Airtable credentials by following this guide and customize the Airtable nodes by selecting your base, table, and the correct fields to update, or replace the Airtable nodes and connect the endpoint to any other service (e.g. Postgres, MySQL, Notion, Coda).

### How to use the workflow
1. Activate the workflow.
2. Connect your app to the endpoints (production URLs) to perform the various operations allowed by the workflow (a request sketch follows below).

Note that the Webhook nodes have two URLs, one for testing and one for production. The testing URL is activated when you click the 'Test workflow' button and can't be used for production. The production URL is available after you activate the workflow. More info here.

Feel free to get in touch with me if you have questions about this workflow.
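A minimal sketch of consuming the two endpoints from an application, using the `customers` path from the template. The base URLs, field names, record ID, and the exact HTTP verbs mapped to update/delete are assumptions; check which methods the 'Allow Multiple HTTP Methods' setting enables in your copy of the workflow.

```python
import requests

# Placeholder URLs; the record endpoint includes the extra path code mentioned above.
COLLECTION_URL = "https://my.app.n8n.cloud/webhook/customers"
RECORD_URL = "https://my.app.n8n.cloud/webhook/580ccc56-f308-4b64-961d-38323501a170/customers"

# Get many records
print(requests.get(COLLECTION_URL, timeout=30).json())

# Create a new record (field names depend on your Airtable table)
print(requests.post(COLLECTION_URL, json={"name": "Ada Lovelace"}, timeout=30).json())

# Get, update, and delete a single record (verbs assumed: GET / PUT / DELETE)
record_id = "rec123"  # hypothetical record ID
print(requests.get(f"{RECORD_URL}/{record_id}", timeout=30).json())
requests.put(f"{RECORD_URL}/{record_id}", json={"name": "Ada L."}, timeout=30)
requests.delete(f"{RECORD_URL}/{record_id}", timeout=30)
```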
by moosa
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

### Overview
This workflow enables a powerful AI-driven virtual assistant that dynamically responds to website queries using webhook input, Pinecone vector search, and OpenAI agents, all smartly routed based on the source website.

### How It Works
1. **Webhook Trigger.** The workflow starts with a Webhook node that receives query parameters:
   - `query`: The user's question
   - `userId`: Unique user identifier
   - `site`: Website identifier (e.g., `test_site`)
   - `page`: Page identifier (e.g., `homepage`, `pricing`)
2. **Smart Routing.** A Switch node directs the request to the correct AI agent based on the `site` value. Each AI agent uses:
   - An OpenAI GPT-4/3.5 model
   - A Pinecone vector store for context-aware answers
   - SQL-based memory for consistent multi-turn conversation
3. **Contextual AI Agent.** Each agent is customized per website using:
   - Site-specific Pinecone namespaces
   - Predefined system prompts to stay in scope
   - Webhook context including `page`, `site`, and `userId`
4. **Final Response.** The response is sent back to the originating website using the Respond to Webhook node. A request sketch follows below.

### Use Case
Ideal for multi-site platforms that want to serve tailored AI chat experiences per domain or page, whether it's support, content discovery, or interactive agents.

### Highlights
- Vector search using Pinecone for contextual responses
- Website-aware logic with Switch node routing
- No hardcoded API keys
- Modular agents for scalable multi-site support
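A minimal sketch of a website calling this assistant with the four query parameters described above. The hostname and webhook path are placeholders; the parameter names come from the template.

```python
import requests

# Placeholder hostname and path; the query parameters are those the Webhook node reads.
url = "https://your-n8n-instance.com/webhook/site-assistant"

params = {
    "query": "What plans do you offer?",
    "userId": "user_42",
    "site": "test_site",
    "page": "pricing",
}

response = requests.get(url, params=params, timeout=60)
response.raise_for_status()
print(response.text)  # answer produced by the site-specific AI agent
```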
by Joseph LePage
This n8n workflow creates an intelligent Telegram bot that processes multiple types of messages and provides automated responses using AI capabilities. The bot serves as a personal assistant that can handle text, voice messages, and images through a sophisticated processing pipeline.

### Core Components

**Message Reception and Validation**
- Implements webhook-based message reception for real-time processing
- Features a robust user validation system that verifies sender credentials
- Supports both testing and production webhook endpoints for development flexibility

**Message Processing Pipeline**
- Uses a smart router to detect and categorize incoming message types
- Processes three main message formats: text messages, voice recordings, and images with captions

**AI Integration**
- Leverages OpenAI's GPT-4 for message classification and processing
- Incorporates voice transcription capabilities for audio messages
- Features image analysis using the GPT-4 Vision API for processing visual content

### Technical Architecture

**Webhook Management**
- Maintains separate endpoints for testing and production environments
- Implements automatic webhook status monitoring
- Provides real-time webhook configuration updates

**Error Handling**
- Features comprehensive error detection and reporting
- Implements fallback mechanisms for unprocessable messages
- Provides user feedback for failed operations

**Message Classification System**
- Categorizes incoming messages into tasks and general conversation
- Implements separate processing paths for different message types
- Maintains context awareness across message processing

### Security Features

**User Authentication**
- Validates user credentials against predefined parameters
- Implements first name, last name, and user ID verification
- Restricts access to authorized users only

### Response System

**Intelligent Responses**
- Generates contextual responses based on message classification

A sketch of a simulated Telegram update for testing follows below.
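For local testing, you could simulate the kind of Telegram update the webhook receives and exercise the user-validation step. This is a sketch only: the webhook path is a placeholder, the payload follows the standard shape of a Telegram `message` update, and the name/ID values are hypothetical stand-ins for the predefined parameters the workflow validates against.

```python
import requests

# Placeholder webhook URL; in production, Telegram posts updates here directly.
url = "https://your-n8n-instance.com/webhook/telegram-assistant"

# Hypothetical Telegram update containing the fields the validation step checks
# (first name, last name, and user ID).
update = {
    "update_id": 100000001,
    "message": {
        "message_id": 1,
        "date": 1735689600,
        "chat": {"id": 123456789, "type": "private"},
        "from": {"id": 123456789, "is_bot": False, "first_name": "Jane", "last_name": "Doe"},
        "text": "Remind me to send the report tomorrow at 9am",
    },
}

response = requests.post(url, json=update, timeout=60)
print(response.status_code, response.text)
```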