by David Ashby
Complete MCP server exposing 2 Analytics API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Analytics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Analytics API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Rate_Limit (1 endpoint)
• GET /rate_limit/: Retrieve Application Rate Limits

🔧 User_Rate_Limit (1 endpoint)
• GET /user_rate_limit/: Retrieve User Rate Limits

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Analytics API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
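For Claude Desktop, a minimal configuration sketch might look like the following. Since Claude Desktop launches MCP servers as local processes, one common pattern is to bridge to a remote URL with the mcp-remote proxy package; the package choice, server name, and URL below are assumptions, so substitute the actual webhook URL copied from your MCP trigger node.

```json
{
  "mcpServers": {
    "ebay-analytics": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance.com/mcp/<your-webhook-path>"]
    }
  }
}
```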
by David Ashby
Complete MCP server exposing 3 IPQualityScore API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add IPQualityScore API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the IPQualityScore API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://ipqualityscore.com/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (3 total)

🔧 Json (3 endpoints)
• GET /json/email/{YOUR_API_KEY_HERE}/{USER_EMAIL_HERE}: Email Validation
• GET /json/phone/{YOUR_API_KEY_HERE}/{USER_PHONE_HERE}: Phone Validation
• GET /json/url/{YOUR_API_KEY_HERE}/{URL_HERE}: Malicious URL Scanner

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native IPQualityScore API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
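To make the $fromAI() mechanism concrete, here is a hypothetical fragment of the Email Validation HTTP Request node's parameters. The key name and description passed to $fromAI() are illustrative, and the API-key placeholder is left exactly as it appears in the endpoint path above; the leading "=" marks the URL as an n8n expression.

```json
{
  "parameters": {
    "method": "GET",
    "url": "=https://ipqualityscore.com/api/json/email/{YOUR_API_KEY_HERE}/{{ $fromAI('email', 'Email address to validate', 'string') }}"
  }
}
```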
by Jimleuk
This n8n template demonstrates the beginnings of building your own n8n-powered WhatsApp chatbot! Under the hood, it utilises n8n's powerful AI features to handle different message types and uses an AI agent to respond to the user. A powerful tool for any use-case!

How it works
- The incoming WhatsApp Trigger provides a way to get messages into the workflow.
- The received message is extracted and sent through 1 of 4 branches for processing.
- Each processing branch uses AI to analyse, summarise or transcribe the message so that the AI agent can understand it. The supported types are text, image, audio (voice notes) and video.
- The AI Agent generates a response and uses a Wikipedia tool for more complex queries.
- Finally, the response message is sent back to the WhatsApp user using the WhatsApp node, as sketched after this section.

How to use
- Once you have set up and configured your WhatsApp account, activate the workflow to start processing messages.
- Good to know: large media files may negatively impact workflow performance.

Requirements
- WhatsApp Business account
- Google Gemini for the LLM. Gemini is used specifically because it can accept audio and video files, whereas at the time of writing many other providers, such as OpenAI's GPT models, do not.

Customising this workflow
- For performance reasons, consider detecting large audio and video files before sending them to the LLM. Pre-processing such files may allow your agent to perform better.
- Go beyond plain text and create rich, engaging customer experiences by responding with images, audio and video.
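For orientation, replying with text through the WhatsApp node corresponds to a WhatsApp Cloud API payload roughly like the sketch below; the recipient number is a placeholder and the node may expose these fields under different parameter names.

```json
{
  "messaging_product": "whatsapp",
  "recipient_type": "individual",
  "to": "<recipient-phone-number>",
  "type": "text",
  "text": {
    "body": "Here is a summary of the image you sent..."
  }
}
```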
by Sebastian/OptiLever
Tired of spending HOURS writing product descriptions that don't rank or convert? This could be your solution. This free Product Description Writer workflow for n8n uses a multi-agent AI system to turn your product list into conversion-focused, SEO-ready copy. It analyzes your product images, identifies key features, and writes optimized titles and descriptions for platforms like Shopify and Google Shopping. It can process your entire catalog in minutes, saving you countless hours of manual work.

This workflow is perfect for:
🛒 Shopify stores
🛒 Etsy sellers
🛒 Product managers
🛒 Digital marketers
🛒 Anyone who hates writing product copy manually!

How it works
This workflow automates the entire product description process in a few high-level steps:
1. Reads Your Products: The workflow starts by reading product data from your specified Google Sheet, including the product name, an image URL, and optional fields like brand voice or target market.
2. Analyzes Product Images: It downloads each product image and uses an AI vision model (GPT-4o-mini) to perform a detailed visual analysis, extracting objective information like materials, colors, features, and structure.
3. Writes Optimized Copy: The visual analysis and your original data are passed to two specialized AI agents. The first drafts a Shopify-optimized title and description, while the second refines it and generates additional SEO-focused copy for Google Merchant Center (see the output sketch after this section).
4. Updates Your Spreadsheet: The final, optimized product titles and descriptions for both Shopify and Google are automatically written back to the original Google Sheet.

Set up steps
Setting up this workflow takes only a few minutes. You will need to configure credentials for the following services:
- **Google Sheets**: To allow the workflow to read your product list and write back the results.
- **OpenAI**: To power the AI agents that analyze images and generate the copy.
Detailed instructions and customization tips are included in the sticky notes inside the workflow itself.

Benefits
- **Automated Vision-Based Copywriting**: Reduces manual description-writing time.
- **Multi-Channel Ready**: Outputs are optimized for both Shopify and Google Merchant Center standards.
- **Brand Alignment**: Uses optional user-provided draft descriptions and brand voice to maintain brand tone.
- **SEO and Conversion Focus**: Titles and descriptions are optimized for both search engines and consumer engagement.
- **Image-Centric Accuracy**: Uses actual product images for accurate attribute extraction, minimizing errors from missing or vague text data.

Tips & Customization
- To adjust brand voice or tone, modify the system prompts in the Shopify and GMC AI agents.
- To extend the workflow for scheduled runs, add a cron trigger or a Google Sheets "status column" filter.
- For QA/debugging, consider adding logging nodes to Slack or Discord, or export AI outputs to a review sheet before updating the main sheet.
- To improve Shopify or GMC field mappings, edit the final Google Sheets update node's column settings.
- For speed optimization, the batch size in the Loop Over Items node can be adjusted, but be mindful of API rate limits.
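As a purely illustrative sketch, the combined output that the two agents write back to the sheet might be shaped like this; the field names are assumptions, so map them to your own sheet's columns.

```json
{
  "shopify_title": "Handwoven Cotton Throw Blanket | Soft 50x60 Couch Throw",
  "shopify_description": "Add warmth and texture to any room with this handwoven, 100% cotton throw...",
  "gmc_title": "Handwoven Cotton Throw Blanket 50x60 - Cream",
  "gmc_description": "Machine-washable 100% cotton throw blanket with fringe detail, sized 50x60 inches..."
}
```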
by simonscrapes
Use Case
Research search engine rankings for SEO analysis:
- You need to track keyword rankings for your website
- You want to analyze competitor positions in search results
- You need data for SEO competition analysis
- You want to monitor SERP changes over time

What this Workflow Does
The workflow uses the ScrapingRobot API to fetch Google search results:
- Retrieves SERP data for your target keywords
- Captures URL rankings and page titles
- Processes up to 5000 searches with a free account
- Organizes results for SEO analysis

Setup
1. Create a ScrapingRobot account and get your API key
2. Add your ScrapingRobot API key to the HTTP Request node's GET SERP token parameter (see the request sketch after this section)
3. Either connect your keyword database (column name "Keyword") or use the "Set Keywords" node
4. Configure your preferred output database connection

How to Adjust it to Your Needs
- Modify the keyword source to pull from different databases
- Adjust the number of SERP results to capture
- Customize the output format for your reporting needs

More templates and n8n workflows >>> @simonscrapes
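For reference, the GET SERP call follows this general shape; the endpoint, module name, and parameter names below are assumptions rather than verified ScrapingRobot documentation, so check the current API docs before relying on them.

```json
{
  "method": "POST",
  "url": "https://api.scrapingrobot.com/?token=<YOUR_SCRAPINGROBOT_TOKEN>",
  "body": {
    "module": "GoogleScraper",
    "params": {
      "query": "{{ $json.Keyword }}"
    }
  }
}
```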
by simonscrapes
Use Case
Transform web pages into AI-friendly markdown format:
- You need to process webpage content for LLM analysis
- You want to extract both content and links from web pages
- You need clean, formatted text without HTML markup
- You want to respect API rate limits while crawling pages

What this Workflow Does
The workflow uses the Firecrawl.dev API to process webpages:
- Converts HTML content to markdown format
- Extracts all links from each webpage
- Handles API rate limiting automatically
- Processes URLs in batches from your database

Setup
1. Create a Firecrawl.dev account and get your API key
2. Add your Firecrawl API key to the HTTP Request node's Authorization header (see the request sketch after this section)
3. Connect your URL database to the input node (the column name must be "Page") or edit the array in "Example fields from data source"
4. Configure your preferred output database connection

How to Adjust it to Your Needs
- Modify the input source to pull URLs from different databases
- Adjust the rate-limiting parameters if needed
- Customize the output format for your specific use case

More templates and n8n workflows >>> @simonscrapes
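A minimal sketch of the request the HTTP Request node sends, assuming Firecrawl's v1 scrape endpoint and its markdown/links output formats; confirm the details against the current Firecrawl API reference.

```json
{
  "method": "POST",
  "url": "https://api.firecrawl.dev/v1/scrape",
  "headers": {
    "Authorization": "Bearer <YOUR_FIRECRAWL_API_KEY>",
    "Content-Type": "application/json"
  },
  "body": {
    "url": "{{ $json.Page }}",
    "formats": ["markdown", "links"]
  }
}
```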
by Mario
This is an add-on for the template "Check if workflows contain build-in nodes that are not of the latest version".

Purpose
This workflow highlights outdated nodes within all workflows of a single n8n instance and places an updated, preconfigured node right next to each one, so it can be swapped easily.

How it works
- The parent workflow checks the entire n8n instance for outdated nodes within all workflows and passes a list of those, alongside some metadata, to this workflow.
- This workflow then processes that data and updates the affected workflows:
  - Outdated nodes are renamed by prepending an emoji (default: ⚠️). This marker is also used in future checks to prevent double-processing.
  - The latest version of each outdated node is added to the workflow canvas (not wired up) behind the old one, slightly shifted in position (see the sketch after this section).
- An email is sent with a list of the modified workflows.

In the settings it is possible to define:
- which symbol/emoji should be prepended to outdated nodes
- whether to include only major node updates or all of them
- whether to add the new nodes to the canvas or not

Setup
1. Clone this template to your n8n instance.
2. Update the Settings node by setting at least the base URL of your n8n instance.
3. Set a recipient in the Gmail node.
4. Clone the parent template to your n8n instance and configure it as described in its description.
5. Add an "Execute Workflow" node to the end of the parent workflow and configure it so that it calls this workflow.

How to use
Execute the parent workflow and check your email inbox. All linked workflows should contain one or more updated nodes with an emoji prepended to their names.

Disclaimer
Beware that major updates can cause migrations of nodes to fail, since their structure can differ. Always compare the old nodes with the newly created ones to verify that all parameters still meet the requirements. Be careful when executing this workflow on a production environment, since it directly modifies your workflows. It is advisable to run this on your testing environment and migrate successfully tested workflows to your production environment using git or manually.
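To picture the result, a hypothetical fragment of an updated workflow's nodes array might look like this: the outdated node renamed with the marker emoji, and the latest version placed slightly offset behind it, not wired up. Node names, type versions, and positions are illustrative.

```json
[
  {
    "name": "⚠️ Set",
    "type": "n8n-nodes-base.set",
    "typeVersion": 1,
    "position": [600, 300]
  },
  {
    "name": "Set",
    "type": "n8n-nodes-base.set",
    "typeVersion": 3.4,
    "position": [620, 320]
  }
]
```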
by Agentick AI
This n8n workflow automates the process of collecting job and decision-maker data, crafting AI-generated referral messages, and drafting them in Gmail, all using a combination of Apify, Google Sheets, LLMs, and email APIs.

Use cases
- Auto-sourcing job postings from LinkedIn via Apify
- Identifying decision-makers at relevant companies
- Auto-drafting custom referral request messages using AI
- Exporting structured data to Google Sheets and drafting Gmail messages for outreach

Good to know
- You can customize the filtering logic to target specific cities or companies.
- Message creation uses the Gemini 2.0 Flash model and LangChain's output parser for structured messages.
- Email data is fetched using Anymailfinder, but it can be replaced with other providers like Hunter.io.
- The Gmail API drafts the message, but you need to enable Gmail API access from your Google Cloud console.

How it works
1. Trigger: A Schedule Trigger runs the automation daily.
2. Job Data Extraction: Apify pulls job listings using a predefined actor. The HTTP response is split and structured using the Split Out node.
3. Store Job Data: Job listings are saved to a Google Sheet. The node maps key fields like title, company, location, and poster info.
4. Decision-Maker Discovery: Another Apify actor pulls decision-maker data from LinkedIn. This is split and filtered (e.g., by city or company name).
5. Store Contacts: Contact details (name, title, location, etc.) are appended to another Google Sheet (n8n-sheet).
6. Message Generation: An LLM Chain uses Gemini 2.0 Flash to generate short, custom LinkedIn messages. The message respects rules like tone, length (<100 words), and personalization.
7. Parse & Merge AI Output: The output is structured using the Structured Output Parser and merged with the contact data (see the schema sketch after this section).
8. Save Final Messages: The final headline and body are stored back into Google Sheets (n8n-sheet).
9. Email Discovery: The Get Email IDs node hits the Anymailfinder API using the LinkedIn profile link.
10. Draft in Gmail: Using the Gmail API, the message is drafted in your inbox with subject and body auto-filled.

How to use
- Update the Apify actor inputs to specify roles, companies, or locations.
- Replace the Schedule Trigger with a webhook or form input if desired.
- Update the Google Sheets document and sheet name in the relevant nodes.
- Add your Gmail and Anymailfinder credentials in the n8n settings.

Requirements
- Google Sheets API access
- Gmail API access
- Apify account
- Gemini API key (via Google AI Studio)
- Anymailfinder (or an alternate email-discovery API)

Customizing this workflow
This framework is highly modular. You can:
- Add more filters for company size, role, or hiring urgency
- Use alternate LLMs (OpenAI, Claude, etc.)
- Switch output channels (Slack, WhatsApp, etc.)
- Plug in different CRM tools for follow-ups
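A minimal sketch of the JSON schema the Structured Output Parser might enforce, assuming the two fields the workflow stores (headline and body); adapt the descriptions to your own message rules.

```json
{
  "type": "object",
  "properties": {
    "headline": {
      "type": "string",
      "description": "Short subject line for the referral request"
    },
    "body": {
      "type": "string",
      "description": "Personalized referral message, under 100 words"
    }
  },
  "required": ["headline", "body"]
}
```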
by Mohan Gopal
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🤖 AI-Powered Document QA System using Webhook, Pinecone + OpenAI + n8n

This project demonstrates how to build a Retrieval-Augmented Generation (RAG) system using n8n and create a simple question-answer system, using a Webhook to connect with a user interface (created using Lovable):
🧾 Downloads PDF documents from Google Drive (contract documents, user manuals, HR policy documents, etc.)
📚 Converts them into vector embeddings using OpenAI
🔍 Stores and searches them in the Pinecone vector DB
💬 Allows natural-language querying of contracts using AI agents

📂 Flow 1: Document Loading & RAG Setup
This flow automates:
- Reading documents from a Google Drive folder
- Vectorizing them using text-embedding-3-small
- Uploading the vectors into Pinecone for later semantic search

🧱 Workflow Structure
A[Manual Trigger] --> B[Google Drive Search]
B --> C[Google Drive Download]
C --> D[Pinecone Vector Store]
D --> E[Default Data Loader]
E --> F[Recursive Character Text Splitter]
E --> G[OpenAI Embedding]

🪜 Steps
1. Manual Trigger: Kickstarts the workflow on demand for loading new documents.
2. Google Drive Search & Download: Google Drive node (Search: file/folder); downloads the PDF documents.
3. Recursive Text Splitter: Breaks long documents into overlapping chunks. Settings: Chunk Size: 1000, Chunk Overlap: 100.
4. OpenAI Embedding: Model: text-embedding-3-small, used for creating document vectors.
5. Pinecone Vector Store: Host: url, Index: index, Batch Size: 200. Pinecone settings: Type: Dense, Region: us-east-1, Mode: Insert Documents.

💬 Flow 2: Chat-Based Q&A Agent
This flow enables chat-style querying of the stored documents using OpenAI-powered agents with vector memory.

🧱 Workflow Diagram
A[Webhook (chat message)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]

🪜 Components
- Chat (Trigger): Receives incoming chat queries
- AI Agent node: Handles the query flow using: Chat Model: OpenAI GPT; Memory: Simple Memory; Tool: Question Answer with Vector Store
- Pinecone Vector Store: Connected via the same embedding index as Flow 1
- Embeddings: Ensure document chunks are retrievable using vector similarity
- Response node: Returns the final AI response to the user via the webhook

🌐 Flow 3: UI-Based Query with Lovable
This flow uses a web UI built with Lovable to query contracts directly from a form interface.

📥 Webhook Setup for Lovable
Webhook node: Method: POST, URL: url, Response: using the 'Respond to Webhook' node

🧱 Workflow Logic
A[Webhook (Lovable Form)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]
B --> H[Respond to Webhook]

💡 Lovable UI
Users can submit:
- Full Name
- Email
- Department
- Freeform Query: the user can enter any freeform query.
The data is sent via the webhook to n8n, which responds with the answer drawn from the contract content (see the example request body after this section).

🔍 Use Cases
- Contract querying for Legal/HR teams
- Procurement & vendor agreement QA
- Customer support automation (based on terms)
- RAG systems for private document knowledge

⚙️ Tools & Tech Stack

📌 Final Notes
- Pinecone Index: package1536, Dimension: 1536
- Chunk Size: 1000, Overlap: 100
- Embedding Model: text-embedding-3-small
Feel free to fork the workflow or request the full JSON export. Looking forward to your suggestions and improvements!
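For example, the Lovable form might POST a request body like the following to the webhook; the field keys shown here are assumptions, so match them to the field names your form actually sends.

```json
{
  "fullName": "Jane Doe",
  "email": "jane.doe@example.com",
  "department": "Legal",
  "query": "What is the termination notice period in the vendor contract?"
}
```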
by Mario
Purpose
This workflow allows you to import any workflow from a file or another n8n instance and map the credentials easily.

How it works
- A multi-page form setup guides you through the entire process.
- At the beginning you have two options:
  - Upload a workflow file (JSON)
  - Copy a workflow from a remote n8n instance
- If you choose the second option, you first get to choose one of your predefined remote instances (configured in the Settings node). The workflow then retrieves a list of all workflows using the n8n API, from which you can choose one.
- Now both initial options come together: the workflow file is processed.
- In parallel, all credentials of the current instance are retrieved using the Execute Command node.
- The next form page enables a mapping of all the credentials used in the workflow. The matching happens between the names of the original credentials and the ones available on the current instance (names are used because one workflow can contain different credentials of the same type). Every option then shows all available credentials of the same type. In addition, the user always has the choice to create a new credential on the fly.
- For every option set to create a new credential, an empty credential is created on the current instance using the n8n API. An emoji is appended to its name to indicate that it still needs to be populated.
- Finally, the workflow is updated with the new credential IDs and created on the current instance using the n8n API. The user is then told whether the process succeeded or not.

Setup
- Select your credentials in the nodes which require them.
- Configure your remote instance(s) in the Settings node. (You can skip this step if you only want to use the file upload feature.) Every instance is defined as an object with the keys name, apiKey and baseUrl; those instances are then wrapped inside an array. You can find an example described within a note on the workflow canvas, and a sketch after this section.

How to use
- Grab the (production) URL of the form from the first node.
- Open the URL and follow the instructions given in the multi-page form.

Disclaimer
Security: Beware that all credentials are decrypted and processed within the workflow. The API keys to other n8n instances are also stored within the workflow. This solution is primarily meant for transferring data between testing environments. For production use, consider the n8n Enterprise edition, which provides a reliable way to deploy workflows between different environments without the need for manual credential mapping.
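Based on the keys named above (name, apiKey, baseUrl), the instance list in the Settings node presumably looks something like this sketch; all values are placeholders, and the example note on the workflow canvas is authoritative.

```json
[
  {
    "name": "staging",
    "apiKey": "<N8N_API_KEY_FOR_STAGING>",
    "baseUrl": "https://staging.n8n.example.com"
  },
  {
    "name": "production",
    "apiKey": "<N8N_API_KEY_FOR_PRODUCTION>",
    "baseUrl": "https://n8n.example.com"
  }
]
```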
by Keith Rumjahn
Who's this for?
- If you own a website and need to analyze your Google Analytics data
- If you need to create an SEO report on which pages are getting the most traffic or how your Google search terms are performing
- If you want to grow your site based on suggestions from the data

Use case
Instead of hiring an SEO expert, I run this report weekly. It compares the data from this week to the week before:
- Views by country
- The top-performing pages
- Google Search Console performance

Watch the YouTube tutorial here
Get my SEO A.I. agent system here
Read my detailed case study here

How it works
The workflow gathers Google Analytics data for the past 7 days, then gathers the data for the week before for comparison. It does this 3 times to get views per country, engagement per page, and Google Search Console results for organic search. The Google Analytics nodes already have the correct dimensions and metrics configured (a request sketch follows this section). At the end, the data is passed to openrouter.ai for A.I. analysis. Finally, the result is saved to Baserow.

How to use this
1. Input your Google Analytics credentials
2. Input your property ID
3. Input your Openrouter.ai credentials
4. Input your Baserow credentials
You will need to create a Baserow database with the columns: Name, Country Views, Page Views, Search Report, Blog (the name of your blog).

Created by Rumjahn
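For a sense of what those preconfigured dimensions and metrics look like, a GA4 Data API runReport request for the country-views comparison might resemble the sketch below; the specific metric choices are assumptions, since the exact node configuration may differ.

```json
{
  "dateRanges": [
    { "startDate": "14daysAgo", "endDate": "8daysAgo", "name": "previous_week" },
    { "startDate": "7daysAgo", "endDate": "today", "name": "this_week" }
  ],
  "dimensions": [{ "name": "country" }],
  "metrics": [{ "name": "screenPageViews" }, { "name": "activeUsers" }]
}
```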
by Yaron Been
Automatically monitor and track funding rounds in the US Fintech and Healthtech sectors using the Crunchbase API, with daily updates pushed to Google Sheets for easy analysis and monitoring.

🚀 What It Does
- **Daily Monitoring**: Automatically checks for new funding rounds every day at 8 AM
- **Smart Filtering**: Focuses on US-based Fintech and Healthtech companies
- **Data Enrichment**: Extracts and formats key funding information
- **Automated Storage**: Pushes data to Google Sheets for easy access and analysis

🎯 Perfect For
- VC firms tracking investment opportunities
- Startup founders monitoring market activity
- Market researchers analyzing funding trends
- Business analysts tracking competitor funding

⚙️ Key Benefits
✅ Real-time funding round monitoring
✅ Focused industry tracking (Fintech & Healthtech)
✅ Automated data collection and organization
✅ Structured data output in Google Sheets
✅ Complete funding details including investors and amounts

🔧 What You Need
- Crunchbase API key (a request sketch follows this section)
- Google Sheets account
- n8n instance
- Basic spreadsheet setup

📊 Data Collected
- Company Name
- Industry
- Funding Round Type
- Announced Date
- Money Raised (USD)
- Investors
- Crunchbase URL

🛠️ Setup & Support
Quick Setup: Deploy in 30 minutes with our step-by-step configuration guide
📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Stay ahead of market movements with automated funding round tracking. Transform manual research into an efficient, automated process.
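To illustrate the daily query, a funding-round search might be posted along these lines, assuming Crunchbase's v4 search endpoint (POST /api/v4/searches/funding_rounds); the field and operator identifiers are best-effort assumptions, so verify them against the Crunchbase API documentation.

```json
{
  "field_ids": ["identifier", "announced_on", "investment_type", "money_raised", "investor_identifiers"],
  "query": [
    {
      "type": "predicate",
      "field_id": "announced_on",
      "operator_id": "gte",
      "values": ["<yesterday's date, e.g. 2025-01-01>"]
    }
  ],
  "limit": 50
}
```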