by David Ashby
Complete MCP server exposing 1 Buy Marketing API operation to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Buy Marketing API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Buy Marketing API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com/buy/marketing/v1_beta
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)

🔧 Merchandised_Product (1 endpoint)
• GET /merchandised_product: Fetch Merchandised Products

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Buy Marketing API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
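As an illustration of those placeholders, a query parameter on an HTTP Request node carries an n8n expression like the one below, so the connected agent supplies the value at call time (the parameter name and description here are illustrative, not copied from this workflow):

```js
// Query parameter value inside the HTTP Request node (n8n expression syntax).
// The agent fills in the value on every call, guided by the description text.
{{ $fromAI('category_id', 'eBay category ID to fetch merchandised products for', 'string') }}
```

The description string doubles as guidance for the model, so concrete hints (what the ID refers to, expected format) make the agent's tool calls more reliable.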
by David Ashby
Complete MCP server exposing 1 Recommendation API operation to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Recommendation API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Recommendation API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)

🔧 Find (1 endpoint)
• POST /find: Get Promoted Listings Recommendations

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Recommendation API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 1 IP2WHOIS Domain Lookup API operation to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add IP2WHOIS Domain Lookup credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the IP2WHOIS Domain Lookup API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ip2whois.com/v2
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)

🔧 General (1 endpoint)
• GET /: Lookup WHOIS Data

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native IP2WHOIS Domain Lookup API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration (see the sketch below)
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
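For the Claude Desktop entry above, one common approach at the time of writing is to bridge the server's URL through the mcp-remote package. The exact config location and schema depend on your Claude Desktop version, so treat this as a hedged sketch; the server name and URL below are placeholders:

```json
{
  "mcpServers": {
    "ip2whois-n8n": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance/webhook/your-mcp-path"]
    }
  }
}
```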
by Ezema Kingsley Chibuzo
🧠 What It Does

This n8n workflow turns your Telegram bot into a smart, multi-modal AI assistant that accepts text, documents, images, and audio messages, interprets them using OpenAI models, and responds instantly with context-aware answers. It integrates a Supabase vector database to store document embeddings and retrieve relevant information before sending a prompt to OpenAI, enabling a full RAG experience.

💡 Why This Workflow?

Most support bots can only handle basic text input. This workflow:
- Supports multiple input formats (voice, documents, images, text)
- Dynamically extracts and processes data from uploaded files
- Implements RAG by combining user input with relevant memory or vector-based context
- Delivers more accurate, relevant, and human-like AI responses

👤 Who It's For

- Businesses looking to automate support using Telegram
- Freelancers or solopreneurs offering AI chatbots for businesses
- Creators building AI-powered bots for real use cases: customer support knowledge, legal or policy documents, long FAQs, project documentation, and product information retrieval
- Devs or analysts exploring AI + multi-format input + vector memory

⚙️ How It Works

🗂️ Knowledge Base Setup
Run the "Add to Supabase Vector DB" workflow manually to upload a document from your Google Drive and embed it into your vector database. This powers the Telegram chatbot's ability to answer questions using your content.

🔁 Telegram Message Routing
- The Telegram Trigger captures the user message (text, image, voice, or document)
- The Message Router routes input by type using a Switch node
- Each type is handled separately:
  - Voice → recording transcribed to text (.ogg, .mp3)
  - Image → image analyzed to text
  - Text → sent directly to the AI Agent (.txt)
  - Document → parsed accordingly (e.g., .docx to .txt)

📎 Document Type Routing
Before routing documents by type, the Supported Document File Types node first checks whether the file extension is allowed. If it is not supported, the workflow exits early with an error message, preventing unnecessary processing. Supported documents are then routed by the Document Router node and converted to text for further processing.

Supported file types: .jpg, .jpeg, .png, .webp, .pdf, .doc, .docx, .xls, .xlsx, .json, .xml

The text content is combined with stored memory and embedded knowledge using a RAG approach, enabling the AI to respond based on real uploaded data.

🧠 RAG via Supabase
- Uploaded documents are vectorized using OpenAI embeddings
- Embeddings are stored in Supabase with metadata
- On a new question, the chatbot:
  1. Extracts the question intent
  2. Queries Supabase for semantically similar chunks
  3. Ranks retrieved chunks to find the most relevant match
  4. Injects them into the prompt for OpenAI
- OpenAI generates a grounded response based on actual document content
- The response is sent to the Telegram user with content awareness (see the retrieval sketch after the setup steps)

🛠 How to Set It Up

1. Open n8n (cloud or your local/self-hosted instance)
2. Import the `.json` workflow file
3. Set up these credentials:
   - Google Drive API key
   - Telegram API (Bot Token) guide
   - OpenAI API
   - Supabase API key + environment
   - ConvertAPI API key
   - Postgres API key
   - Cohere API key
4. Add a custom AI agent prompt suited to your business that reflects your domain, tone, and purpose. This is very important: without it, your agent won't know how best to respond.
5. Activate the workflow
6. Start testing by sending a message or document to your Telegram bot
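To make the retrieval step concrete, here is a minimal sketch of "query similar chunks, then inject them into the prompt", assuming a `match_documents` RPC as in Supabase's standard pgvector setup. The function, table, and model names are assumptions for illustration, not taken from this workflow:

```js
import { createClient } from '@supabase/supabase-js';
import OpenAI from 'openai';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);
const openai = new OpenAI();

async function answerWithRag(question) {
  // Embed the user's question with the same model used at ingestion time.
  const { data: [{ embedding }] } = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: question,
  });

  // Query Supabase for the most semantically similar chunks.
  const { data: chunks } = await supabase.rpc('match_documents', {
    query_embedding: embedding,
    match_count: 5,
  });

  // Inject the retrieved chunks into the prompt so the answer is grounded.
  const context = chunks.map(c => c.content).join('\n---\n');
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: `Answer using only this context:\n${context}` },
      { role: 'user', content: question },
    ],
  });
  return completion.choices[0].message.content;
}
```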
by David Ashby
Complete MCP server exposing 4 Transportation Laws and Incentives API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Transportation Laws and Incentives credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Transportation Laws and Incentives API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://developer.nrel.gov/api/transportation-incentives-laws
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (4 total)

🔧 V1.{Output_Format} (1 endpoint)
• GET /v1.{output_format}: Return a full list of laws and incentives that match your query.

🔧 V1 (3 endpoints)
• GET /v1/category-list.{output_format}: Return the law categories for a given category type.
• GET /v1/pocs.{output_format}: Get the points of contact for a given jurisdiction.
• GET /v1/{id}.{output_format}: Fetch the details of a specific law given the law's ID.

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Transportation Laws and Incentives API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Rudi Afandi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow allows you to create and manage custom short URLs directly via Telegram, with all data stored in MongoDB and redirects handled efficiently via Nginx.

How it works

This flow provides a seamless URL-shortening experience:
- Create via Telegram: Send a long URL to your bot. It will ask whether you want a custom short code.
- Store in MongoDB: All long URLs and their corresponding short codes are securely stored in your MongoDB instance.
- Fast Redirects: When a user accesses a short URL, Nginx forwards the request to a dedicated n8n webhook, which then quickly redirects them to the original long URL (a standalone sketch of this logic follows after the setup steps).

Set up steps

This setup is straightforward, especially if you already have a running n8n instance and a VPS.
- Difficulty: Medium (basic n8n/VPS knowledge required)
- Estimated time: 15-30 minutes

1. n8n instance & VPS: Ensure n8n is running on your VPS (a small instance, e.g. 2 cores and 2 GB RAM, is sufficient).
2. Telegram bot: Create a new bot via @BotFather and get your Bot Token. Add it as a Telegram credential in n8n.
3. MongoDB database: Set up a MongoDB instance (either on your VPS or a cloud service like MongoDB Atlas). Create a database and a collection (e.g., url or short_urls), and add your MongoDB credentials in n8n. The stored documents follow this structure:

> [ {"_id": "686a11946a48b580d72d0397", "longUrl": "https://longurl.com/abcdefghijklm/", "shortUrl": "short-code"} ]

4. Domain/subdomain: Point a domain or subdomain (e.g., s.yourdomain.com) to your VPS IP address. This will be your short-URL base.
5. Nginx/Caddy configuration: Configure your web server (Nginx or Caddy) on the VPS to proxy requests from your short-URL domain to the n8n webhook for redirects. (A detailed Nginx config is provided as sticky notes in the redirect workflow.)
6. Workflow setup: Import both provided n8n workflows (Telegram URL Shortener Creator and URL Redirect Handler) and activate them. Crucial: set an environment variable in your n8n instance (or .env file) named SHORTENER_DOMAIN with the value of your short-URL domain (e.g., https://s.yourdomain.com).

Refer to the sticky notes inside the workflows for detailed node configurations and expressions.
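For orientation, the redirect handler boils down to a lookup plus a 301 response. Here is a minimal sketch of the same logic outside n8n, assuming the MongoDB document structure shown above; in the actual workflow, a Webhook + MongoDB + Respond to Webhook node chain does the equivalent (database and collection names are placeholders):

```js
import express from 'express';
import { MongoClient } from 'mongodb';

const app = express();
const client = await new MongoClient(process.env.MONGO_URL).connect();
const urls = client.db('shortener').collection('url');

// Nginx proxies https://s.yourdomain.com/<shortUrl> to this handler.
app.get('/:code', async (req, res) => {
  const doc = await urls.findOne({ shortUrl: req.params.code });
  if (!doc) return res.status(404).send('Short URL not found');
  res.redirect(301, doc.longUrl); // send the visitor to the original long URL
});

app.listen(3000);
```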
by Adnan
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

👥 Who is this for?

This workflow is designed for a variety of professionals who manage vendor relationships and data security. It is especially beneficial for:
- 🛡️ GRC (Governance, Risk, and Compliance) professionals: Streamline your risk assessment processes
- 🔒 Information security teams: Quickly evaluate the security posture of third-party vendors
- 📋 Procurement departments: Enhance due diligence when onboarding new service providers
- 🚀 Startup founders: Efficiently assess vendors without a dedicated security team

This tool is perfect for anyone looking to automate the manual review of vendor websites, policies, and company data. ✨

🎯 What problem is this workflow solving?

Manual vendor due diligence is a time-consuming process that can take hours for a single vendor. This workflow automates over 80% of these manual tasks, which typically include:
- 🔍 Finding and organizing basic vendor information
- 🏢 Researching the company's background
- 📄 Collecting links to key documents like privacy policies, terms of service, and trust pages
- 📖 Manually reviewing each document to extract risk-relevant information
- 📊 Compiling all findings into a formatted report or spreadsheet for record-keeping

By leveraging Gemini for structured parsing and web scraping with live internet data, this workflow frees you up to focus on critical analysis and final review. ⚡

⚙️ What this workflow does

This end-to-end automated n8n workflow performs the following steps:
1. 📝 Intake: Begins with a simple form to capture the vendor's name, the business use case, and the type of data they will handle
2. 🔎 Background research: Gathers essential background information on the company
3. ⚠️ Risk analysis: Conducts comprehensive research on various risk-related topics
4. 🔗 URL extraction: Finds and validates public URLs for privacy policies, security pages, and trust centers
5. 📈 Risk assessment: Generates a structured risk score and a detailed assessment based on the collected content and context (see the sketch after the setup steps)
6. 📤 Export: Exports the final results to a Google Sheet for easy access and record-keeping

🚀 Setup

To get started with this workflow, follow these steps:
1. 🔑 Configure credentials: Set up your API credentials for Gemini and Jina AI
2. 📊 Connect Google Sheets: Authenticate your Google Sheets account and configure the Sheet where you want to store the results
3. 🔗 Download the Google Sheet template for your assessment output from here
4. ⚙️ (Optional) Customize prompts: Adjust the prompts within the workflow to better suit your specific needs
5. 🎯 (Optional) Align risk framework: Modify the risk questions to align with your organization's internal vendor risk framework
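The exact schema lives in the workflow's prompts, but a structured risk assessment from the LLM typically arrives as a JSON string that gets flattened into one sheet row. A hypothetical sketch of that post-processing step in an n8n Code node (all field names below are illustrative assumptions, not the workflow's actual schema):

```js
// n8n Code node: parse Gemini's structured output and shape a sheet row.
const raw = $input.first().json.text;   // LLM response as a string
const assessment = JSON.parse(raw);     // expected to be valid JSON

return [{
  json: {
    vendor: assessment.vendor_name,
    riskScore: assessment.risk_score,   // e.g. 1 (low) to 5 (critical)
    dataSensitivity: assessment.data_sensitivity,
    privacyPolicyUrl: assessment.privacy_policy_url,
    summary: assessment.summary,
    assessedAt: new Date().toISOString(),
  },
}];
```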
by Fahmi Oktafian
This n8n workflow is a Telegram bot that allows users to either:
- Generate AI images using the Pollinations API, or
- Generate blog articles using Gemini AI

Users simply type `image your prompt` or `blog your title`, and the bot responds with either an AI-generated image or an article.

Who's it for

This template is ideal for:
- Content creators and marketers who want to generate visual and written content quickly
- Telegram bot developers looking for real-world AI integration
- Educators or students automating content workflows
- Anyone managing content pipelines using Google Sheets

What it does / How it works

Telegram Interaction
- Trigger Telegram Message: Listens for new messages or button clicks via Telegram
- Classify Telegram Input: JavaScript logic to classify input as /start, /help, normal text, or a callback
- Switch Input Type: Directs the flow based on the classification

Menu & Help
- Send Main Menu to User: Shows "Generate Image", "Blog Article", and "Help" options
- Switch Callback Selection: Routes based on the button pressed (image, blog, or help)
- Send Help Instructions: Sends Markdown instructions on how to use the bot

Input Validation
- Validate Command Format: Ensures input starts with `image` or `blog`
- Notify Invalid Input Format: If validation fails, informs the user of the correct format

Image Generator
- Prompt User for Image Description → when the user clicks "Generate Image"
- Detect Text-Based Input Type → detects whether the text is an image or blog request
- Switch Text Command Type → directs whether to generate an image or an article
- Show Typing for Image Generation → sends an "uploading photo..." typing status
- Build Image Generation URL → constructs the Pollinations API image URL from the prompt (see the sketch below)
- Download AI Image → makes an HTTP request to fetch the image
- Send Image Result to Telegram → sends the image to the user via Telegram
- Log Image Prompt to Google Sheets → logs the prompt, image URL, date, and user ID
- Upload Image to Google Drive → saves the image to a Google Drive folder

Blog Article Generator
- Prompt User for Blog Title → when the user clicks "Blog Article"
- Store Blog Prompt → saves the prompt for later use
- Log Blog Prompt to Google Sheets → writes the title + user ID to Google Sheets
- Send Article Style Options → offers: Formal, Casual, or News style
- Store Selected Article Style → updates the row with the chosen style in Google Sheets
- Fetch Last User Prompt → finds the latest prompt submitted by this user
- Extract Last Blog Prompt → extracts the row for use in the AI request
- Gemini Chat Wrapper → handles input into the LangChain node for AI processing
- Generate Article with Gemini → calls Gemini to create a three-paragraph blog post
- Parse Gemini Response → parses the JSON string to extract the title and content
- Send Article to Telegram → sends the blog article back to the user
- Log Final Article to Google Sheets → updates the row with the final content and timestamp

Requirements
- Telegram bot (via @BotFather)
- Pollinations API (free, public endpoint)
- Google Sheets & Drive (OAuth credential setup in n8n)
- Google Gemini / PaLM API key via LangChain
- Self-hosted or cloud n8n setup

Setup Instructions
1. Clone the workflow and import it into your n8n instance
2. Set credentials: Telegram API, Google Sheets OAuth, Google Drive OAuth, Gemini (via LangChain)
3. Replace: the Sheet ID with your own Google Sheet, the Folder ID on Google Drive, and chat_id placeholders if needed (use expressions instead)
4. Deploy and send /start in your Telegram bot

🔧 Customization Tips
- Edit the Gemini prompt to adjust article length or tone
- Add extra style buttons like "SEO", "Story", or "Academic"
- Add image post-processing (e.g., compression, renaming)
- Add error-catching logic (e.g., if the Pollinations image request fails)
- Store images with filenames based on timestamp/user

Security Considerations
- Use n8n credentials for all tokens (Telegram, Gemini, Sheets, Drive)
- Never hardcode your token inside HTTP nodes
- Do not expose real Google Sheet or Drive links in a shared version
- Use a Set node to collect all editable variables (like folder ID and sheet name)
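The "Build Image Generation URL" step is essentially a one-liner: Pollinations serves images directly from a prompt embedded in the URL path. A minimal Code-node sketch (the width/height query parameters are optional assumptions, and the input field name is illustrative):

```js
// n8n Code node: turn the user's prompt into a Pollinations image URL.
const prompt = $input.first().json.prompt;   // e.g. "a cat astronaut"
const url = `https://image.pollinations.ai/prompt/${encodeURIComponent(prompt)}?width=1024&height=1024`;
return [{ json: { imageUrl: url } }];
```

The following HTTP Request node then simply GETs `imageUrl` and passes the binary image on to Telegram.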
by Mark Shcherbakov
Video Guide

I prepared a detailed guide that illustrates the entire process of building an AI agent using Supabase and Google Drive within n8n workflows. Youtube Link

Who is this for?

This workflow is designed for developers, data scientists, and business users who wish to automate document management and enable AI-powered interactions over their stored files. It's especially beneficial for scenarios where users need to process, analyze, and retrieve information from uploaded documents rapidly.

What problem does this workflow solve?

Managing files across multiple platforms often involves tedious manual processes. This workflow facilitates automated file handling, making it easier for users to upload, parse, and interact with documents through an AI agent. It reduces redundancy and enhances the efficiency of data retrieval and management tasks.

What this workflow does

This workflow integrates Supabase storage with Google Drive and employs an AI agent to manage files effectively. The agent can:
- Upload files to Supabase storage and activate processes based on file changes in Google Drive
- Retrieve and parse documents, converting them into a structured format for easy querying
- Answer user queries based on saved document data

1. Data Collection: The workflow initially gathers files from Supabase storage, ensuring no duplicates are processed in the 'files' table (a dedup sketch follows after the setup steps).
2. File Handling: It processes files to be parsed based on their type, leveraging LlamaParse for effective data transformation.
3. Google Drive Integration: The workflow monitors a designated Google Drive folder to upload files automatically and refresh document records in the database with new data.
4. AI Interaction: A webhook is established to enable the AI agent to converse with users, facilitating queries and leveraging stored document knowledge.

Setup

1. Supabase storage setup: Create a private bucket in Supabase storage, modifying the default name in the URL, then upload your files using the provided upload options.
2. Database configuration: Establish the 'file' and 'document' tables in Supabase with the necessary fields, and execute any required SQL queries to enable vector matching features.
3. n8n workflow logic: Start with a manual trigger for the initial workflow segment, or consider alternative triggers like webhooks. Replace all relevant credentials across nodes with your own to ensure seamless operation.
4. File processing and Google Drive monitoring: Set up file processing to handle downloading and parsing files based on their types, and create triggers to monitor the designated Google Drive folder for file uploads and updates.
5. Integrate the AI agent: Configure the webhook for the AI agent to accept chat inputs while maintaining session context for enhanced user interactions, and use PostgreSQL to store user interactions and manage conversation states.
6. Testing and adjustments: Once everything is set up, run tests with the AI agent to validate its responses based on the documents in your database. Fine-tune the workflow and AI model as needed to achieve the desired performance.
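As an illustration of the deduplication step, here is roughly what "skip files already recorded in the files table" looks like in an n8n Code node backed by Supabase. Table and column names are assumptions based on the description above, not the workflow's actual schema:

```js
// n8n Code node (self-hosted, with @supabase/supabase-js available):
// keep only storage objects not yet present in the 'files' table.
const { createClient } = require('@supabase/supabase-js');
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);

const { data: known } = await supabase.from('files').select('name');
const knownNames = new Set((known ?? []).map(r => r.name));

// $input.all() holds the storage listing produced by the previous node.
return $input.all().filter(item => !knownNames.has(item.json.name));
```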
by Alex Kim
This workflow leverages n8n to perform automated Google Maps API queries and manage data efficiently in Google Sheets. It's designed to extract specific location data based on a given list of ZIP codes and categories.

Features
- Queries the Google Maps API for location data using predefined ZIP codes and subcategories
- Filters, de-duplicates, and organizes data into structured rows in Google Sheets
- Implements exponential-backoff retries to handle API rate limits
- Logs and updates statuses directly in Google Sheets for easy tracking

Prerequisites
- Google OAuth credentials: A configured Google Cloud project for Google Maps API and Sheets API access
- Google Sheets: A sheet with ZIP codes and categories defined (e.g., "AZ Zips")
- n8n setup: A running instance of n8n with credentials configured for Google OAuth

Setup Instructions

1. Prepare Google Sheets
- Add the ZIP codes to the "AZ Zips" sheet.
- Define subcategories in another sheet (e.g., "Google Maps Categories").
- Provide the sheet's URL in the Settings node of the workflow.

2. Configure API Access
- Set up Google OAuth credentials for the Maps and Sheets APIs in n8n.
- Ensure your API key has access to the places.searchText endpoint.

3. Workflow Customization
- Modify the textQuery parameters in the GMaps API node to match your query needs.
- Adjust trigger intervals as required (e.g., manual or scheduled execution).

4. Run the Workflow
- Execute the workflow manually or schedule periodic runs to keep your data updated.

Notes
- This workflow includes robust error handling that retries failed API calls with exponential backoff (sketched below).
- All data is organized and logged directly in Google Sheets for easy reference and updates.
- For more information or issues, feel free to reach out!
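For reference, the retry behavior amounts to the pattern below, shown as a standalone sketch against the Places API places:searchText endpoint. The delay values and field mask are illustrative; the workflow implements the same idea with n8n nodes:

```js
// Exponential backoff around the Places API text search.
async function searchTextWithBackoff(textQuery, apiKey, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch('https://places.googleapis.com/v1/places:searchText', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Goog-Api-Key': apiKey,
        'X-Goog-FieldMask': 'places.displayName,places.formattedAddress',
      },
      body: JSON.stringify({ textQuery }),
    });
    if (res.ok) return res.json();
    // Give up immediately on client errors other than rate limiting.
    if (res.status !== 429 && res.status < 500) throw new Error(`HTTP ${res.status}`);
    // Wait 1s, 2s, 4s, 8s, ... before retrying rate-limited or server errors.
    await new Promise(r => setTimeout(r, 1000 * 2 ** attempt));
  }
  throw new Error('Exhausted retries for places.searchText');
}

// e.g. await searchTextWithBackoff('coffee shops in 85001', process.env.MAPS_KEY);
```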
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This n8n workflow helps you build a fully automated SEO content engine using Scrapeless and AI. It's designed for teams running international websites, such as SaaS products, e-commerce platforms, or content-driven businesses, who want to grow targeted search traffic through high-conversion content without relying on manual research or hit-or-miss topics.

The flow runs in three key phases:

🔍 Phase 1: Topic Discovery
Automatically find high-potential long-tail keywords based on a seed keyword, using Google Trends via Scrapeless. Each keyword is analyzed for trend strength and categorized by priority (P0-P3) with the help of an AI agent.

🧠 Phase 2: Competitor Research
For each P0-P2 keyword, the flow performs a Google Search (via Deep SerpAPI) and extracts the top 3 organic results. Scrapeless then crawls each result to extract the full article content in clean Markdown. This gives you a structured, comparable view of how competitors are writing about each topic.

✍️ Phase 3: AI Article Generation
Using AI (OpenAI or another LLM), the workflow generates a complete SEO article draft, including:
- SEO title
- Slug
- Meta description
- Trend-based strategy summary
- Structured JSON-based article body with H2/H3 blocks

Finally, the article is stored in Supabase (or any other supported DB), making it ready for review, API-based publishing, or further automation.

Set up steps

This flow requires intermediate familiarity with n8n and API key setup. Full configuration may take 30-60 minutes.

✅ Prerequisites
- **Scrapeless** account (for Google Trends and web crawling)
- **LLM provider** (e.g., OpenAI or Claude)
- **Supabase** or **Google Sheets** (to store keywords & article output)

🧩 Required Credentials in n8n
- Scrapeless API key
- OpenAI (or other LLM) credentials
- Supabase or Google Sheets credentials

🔧 Setup Instructions (Simplified)
1. Input seed keyword: Edit the "Set Seed Keyword" node to define your niche, e.g., "project management".
2. Google Trends via Scrapeless: Use Scrapeless to retrieve "related queries" and their interest-over-time data.
3. Trend analysis with AI agent: The AI evaluates each keyword's trend strength and assigns a priority (P0-P3).
4. Filter & store keyword data: Group and sort keywords by priority, then store them in Google Sheets (as sketched below).
5. Competitor research: Use Deep SerpAPI to get the top 3 Google results, then crawl each with Scrapeless.
6. AI content generation: Feed the competitor content + trend data into the AI, and output a structured SEO blog article.
7. Store final article: Save the full article JSON (title, meta, slug, content) to Supabase.
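To illustrate step 4, here is a minimal Code-node sketch of grouping and sorting keywords by their assigned priority before storage. The `priority` and field names are assumptions about the AI agent's output, not the workflow's actual schema:

```js
// n8n Code node: sort keywords P0 first, and park P3 before competitor research.
const order = { P0: 0, P1: 1, P2: 2, P3: 3 };
const keywords = $input.all().map(item => item.json);

const actionable = keywords
  .filter(k => k.priority !== 'P3')   // only P0-P2 proceed to competitor research
  .sort((a, b) => order[a.priority] - order[b.priority]);

return actionable.map(k => ({ json: k }));
```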
by Gregor
Awork currently does not check for open subtasks or open dependencies when a task's status is set to done. This workflow offers a simple workaround that adds this functionality to Awork and notifies users when triggered. Multiple configuration options are available.

How it works
- Triggered via an Awork webhook call on task status changes
- If a task is marked as done, its subtasks and/or dependent tasks are checked for their status
- If unfinished tasks are found, the status is rolled back to the previous one and the user is notified (see the sketch below)

Set up steps
- Add the webhook call to Awork
- Configure Awork API credentials
- Set up the workflow configuration via the setup node, e.g., the user notification text, restricting checks to subtasks/dependencies, etc.
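The core check reduces to a few lines. A hypothetical sketch of the decision step in an n8n Code node, assuming previous nodes have fetched the task's subtasks and dependencies from the Awork API; the field names below are illustrative, not taken from Awork's API documentation:

```js
// n8n Code node: decide whether to roll back the "done" status.
const related = $input.all().map(item => item.json);

const unfinished = related.filter(t => t.statusType !== 'done');

if (unfinished.length > 0) {
  // Signal the rollback branch and build the user notification text.
  return [{
    json: {
      rollback: true,
      message: `Task cannot be completed, ${unfinished.length} item(s) still open: ` +
        unfinished.map(t => t.name).join(', '),
    },
  }];
}
return [{ json: { rollback: false } }];
```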