by Solomon
Learn how to build an MCP Server and Client in n8n with official nodes.

> ⚠ Requires n8n version 1.88.0 or higher.

In this example, we use Google Calendar and custom functions as two separate MCP Servers, demonstrating how to integrate both native and custom tools.

How it works
The AI Agent connects to two MCP Servers. Each MCP Trigger (Server) generates a URL exposing its tools. This URL is used by an MCP Client linked to the AI Agent.
Whenever you make changes to the tools, there’s no need to modify the MCP Client. It automatically keeps the AI Agent informed on how to use each tool, even if you change them over time. That’s the power of MCP 🙌 (A minimal sketch of a custom-function tool follows this description.)

Who is this template for
Anyone looking to use MCP with their AI Agents.

How to set up
Instructions are included within the workflow itself.

Check out my other templates 👉 https://n8n.io/creators/solomon/
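For concreteness, here is a minimal sketch of the kind of logic a custom-function tool might wrap, written in plain JavaScript (n8n's scripting language). The function name, parameter, and behaviour are illustrative assumptions, not the template's actual code.

```javascript
// Hypothetical custom-function tool: "how many days until a given date?"
// The AI Agent would call the tool with an ISO date string and get a number back.
function daysUntil(targetDateIso) {
  const target = new Date(targetDateIso);
  if (Number.isNaN(target.getTime())) {
    throw new Error(`Invalid date: ${targetDateIso}`);
  }

  const msPerDay = 24 * 60 * 60 * 1000;
  const diff = target.getTime() - Date.now();
  return Math.ceil(diff / msPerDay);
}

// Example call: the MCP Server receives the tool name and arguments chosen by
// the agent, runs the function, and returns the result to the MCP Client.
console.log(daysUntil("2025-12-31"));
```

Because the MCP Client reads the tool list from the server URL, adding or changing a function like this does not require touching the client side at all.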
by Nico Kowalczyk
Description:
This template facilitates the transfer of a folder, along with all its files and subfolders, within a Nextcloud instance. The Nextcloud user must have access to both the source and destination folders.
While Nextcloud allows folder movement, complications may arise when dealing with external storage that has rate limits. This workflow transfers each file individually to avoid exceeding those limits, which is particularly useful for setups involving rate-limited external storage.

How it works:
- Identify all files and subfolders within the specified source folder.
- Recursively search subfolders for additional files.
- Replicate the folder structure in the target folder.
- Individually move each identified file to the corresponding location in the target folder (see the sketch after this description).

Set up steps:
- Set Nextcloud credentials for all Nextcloud nodes involved in the process.
- Edit the trigger settings. Detailed instructions can be found within the respective trigger configuration.
- Initiate the workflow to start the folder transfer process.

Help
If you need assistance with applying this template, feel free to reach out to me. You can find additional information about me and my services here: https://nicokowalczyk.de/links
I have also produced a video where I explain the workflow and provide an example. You can find it here: https://youtu.be/K1kmG_Q_jRk

Cheers,
Nico Kowalczyk
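Under the hood, moving a single file in Nextcloud corresponds to a WebDAV MOVE request, which the workflow's Nextcloud nodes issue once per file. A rough standalone sketch of that per-file operation in JavaScript, with a placeholder host, user, and paths, could look like this:

```javascript
// Minimal sketch: move one file via Nextcloud's WebDAV API.
// Host, credentials, and paths are placeholders; the template itself uses
// n8n's Nextcloud node, which performs the equivalent call for each file.
const host = "https://cloud.example.com";
const user = "nextcloud-user";
const appPassword = "app-password";

async function moveFile(sourcePath, targetPath) {
  const auth = Buffer.from(`${user}:${appPassword}`).toString("base64");

  const response = await fetch(
    `${host}/remote.php/dav/files/${user}/${encodeURI(sourcePath)}`,
    {
      method: "MOVE",
      headers: {
        Authorization: `Basic ${auth}`,
        // WebDAV expects the full destination URL, not a relative path.
        Destination: `${host}/remote.php/dav/files/${user}/${encodeURI(targetPath)}`,
      },
    }
  );

  if (!response.ok) {
    throw new Error(`Move failed: ${response.status} ${response.statusText}`);
  }
}

// Moving files one at a time (instead of the whole folder) keeps each request
// small, which helps stay under external-storage rate limits.
moveFile("source-folder/report.pdf", "target-folder/report.pdf").catch(console.error);
```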
by Chris Carr
Split Test Agent Prompts with Supabase and OpenAI

Use Case
Oftentimes, it's useful to test different settings for a large language model in production against various metrics. Split testing is a good method for doing this.

What it Does
This workflow randomly assigns chat sessions to one of two prompts, the baseline and the alternative. The agent will use the same prompt for all interactions in that chat session.

How it Works
- When a message arrives, a table containing the session ID and which prompt to use is checked to see if the chat session already exists
- If it does not, the session ID is added to the table and a prompt is randomly assigned (see the sketch after this description)
- These values are then used to generate a response

Setup
- Create a table in Supabase called split_test_sessions. It needs the following columns: session_id (text) and show_alternative (bool)
- Add your Supabase, OpenAI, and PostgreSQL credentials
- Modify the Define Path Values node to set the baseline and alternative prompt values
- Activate the workflow and test by sending messages through n8n's inbuilt chat
- Experiment with different chat sessions to see both prompts in action

Next Steps
- Modify the workflow to test different LLM settings such as temperature
- Add a method to measure the efficacy of the two alternative prompts
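To make the assignment logic concrete, here is a sketch of the same idea written against the supabase-js client. The workflow implements this with n8n's Supabase and Postgres nodes rather than code, and the project URL and key below are placeholders.

```javascript
// Sketch of the session-assignment logic using supabase-js.
import { createClient } from "@supabase/supabase-js";

const supabase = createClient("https://your-project.supabase.co", "service-role-key");

async function getPromptVariant(sessionId) {
  // Look up the session; each chat session keeps the prompt it was first given.
  const { data: existing, error } = await supabase
    .from("split_test_sessions")
    .select("show_alternative")
    .eq("session_id", sessionId)
    .maybeSingle();
  if (error) throw error;

  if (existing) {
    return existing.show_alternative ? "alternative" : "baseline";
  }

  // New session: flip a coin and persist the assignment so later messages
  // in the same session reuse it.
  const showAlternative = Math.random() < 0.5;
  const { error: insertError } = await supabase
    .from("split_test_sessions")
    .insert({ session_id: sessionId, show_alternative: showAlternative });
  if (insertError) throw insertError;

  return showAlternative ? "alternative" : "baseline";
}
```

Persisting the coin flip per session is what keeps the test valid: a user never sees a mid-conversation prompt switch.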
by Miquel Colomer
This n8n workflow template uses uProc's "Get Email by Domain, Firstname and Lastname" tool to discover a professional email address, and then sends that email to a Telegram channel.

> ⚠️ Note: You must set up your uProc credentials (Email + API Key) from the Integration settings before running this workflow.

🚀 What It Does
- Uses user-provided data: first name, last name, and company domain
- Calls uProc to discover the most likely email address for that person
- Sends the discovered email and confidence level to a Telegram group

🛠️ Step-by-Step Setup
1. Add uProc Credentials: Go to the uProc integration page and copy your email and API key. Add them as credentials in your n8n instance.
2. Set Tool Parameters: Use the Set node to define:
   - firstname: First name of the person
   - lastname: Last name of the person
   - domain: Their company domain
3. Replace the Set Node (Optional): You can dynamically fetch the firstname, lastname, and domain from other sources such as Google Sheets, MySQL or Postgres, or webhook/form submissions.
4. Run the Workflow: Trigger the flow manually or integrate it with a larger automation.

🔍 uProc Parameters Explained
- domain: The company domain (e.g., uproc.io)
- firstname: First name of the person
- lastname (passed in the parameter named language): Last name of the person
- mode:
  - verify: Verifies the email in real time with the mail server
  - guess: Guesses based on the company's format (e.g., firstname.lastname@domain.com)

📦 uProc Response Fields
- email: Discovered email address
- confidence: Indicates if the result is verified or risky (e.g., catch-all)
- score: Reliability score from 0 (unreliable) to 99 (highly reliable)

📬 Notification via Telegram
After discovering the email, the result is sent to a specified Telegram channel with this format (see the formatting sketch after this description):
User Miquel Colomer has next email on uproc.io: contact@uproc.io (verified - 99)
Clicking the email allows you to send a message directly to the recipient.

🔐 Credentials Used
- uProc API – For discovering email addresses
- Telegram API – To send messages to a specific group/channel

✨ Customization Tips
- Loop over a list of people: Replace the Set node with a data source that contains multiple people.
- Filter by score or confidence before sending.
- Add additional outputs: You can send the data via Email, Slack, or save it to a database.
- Trigger automatically: Combine with a webhook or time-based trigger for automation.

❓Questions?
Template created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
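The Telegram notification text is assembled from the uProc response fields listed above. A minimal sketch of that formatting step in JavaScript (function name and field grouping are assumptions, not the template's actual node):

```javascript
// Sketch: build the Telegram notification text from the uProc response fields.
function buildNotification({ firstname, lastname, domain }, { email, confidence, score }) {
  return `User ${firstname} ${lastname} has next email on ${domain}: ${email} (${confidence} - ${score})`;
}

// Example with the values shown in the description:
const text = buildNotification(
  { firstname: "Miquel", lastname: "Colomer", domain: "uproc.io" },
  { email: "contact@uproc.io", confidence: "verified", score: 99 }
);
// -> "User Miquel Colomer has next email on uproc.io: contact@uproc.io (verified - 99)"
```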
by Łukasz
Who is it for
If you are a postmaster or you manage an email server, you can set up DKIM and SPF records to make spoofing your email address hard. On your domain you can also set up a DMARC record to receive XML reports from email providers (the rua tag). Those reports contain data on whether the email they received passed DKIM and SPF verification.
Since the DMARC reporting address is public, you will receive a lot of emails from email providers, not only when DKIM/SPF fail. There is no need for all of them - you probably only need to know when SPF/DKIM failed. So this script automatically parses all DMARC reports that come from email providers, but ONLY sends you a notification if SPF or DKIM failed - meaning that either someone is trying to spoof your email or your DKIM/SPF is improperly set up.

How it works
- monitors the postmaster mailbox for DMARC reports (rua)
- unpacks each report and parses the XML into JSON
- maps the JSON and formats fields for MySQL/MariaDB input
- inserts the data into the database
- sends a notification on DKIM or SPF failure (see the sketch after this description)

Remember to set up
- the email input mailbox
- notification channels for Slack and for email
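DMARC aggregate (rua) reports are XML documents in which each record carries a policy_evaluated block with the DKIM and SPF results. A rough sketch of the failure check, assuming the XML has already been parsed into a JavaScript object whose shape mirrors the aggregate-report schema (feedback > record), could be:

```javascript
// Sketch: flag DMARC records where DKIM or SPF evaluation failed.
function findFailures(report) {
  // A report may contain a single record or an array of records.
  const records = [].concat(report.feedback.record ?? []);

  return records.filter((rec) => {
    const evaluated = rec.row.policy_evaluated;
    return evaluated.dkim === "fail" || evaluated.spf === "fail";
  });
}

// Usage idea: "parsedReport" would be the JSON output of the XML-parsing step.
// const failures = findFailures(parsedReport);
// if (failures.length > 0) { /* send the Slack / email notification */ }
```

Filtering at this stage is what keeps the noise down: reports where every record passed both checks never trigger a notification.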
by David Ashby
Complete MCP server exposing 2 Mobility API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Mobility API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Mobility API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://developer.o2.cz/mobility/sandbox/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the sketch after this description)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Info (1 endpoint)
• GET /info: Retrieve Application Info
🔧 Transit (1 endpoint)
• GET /transit/{from}/{to}: Transit between basic residential units

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Mobility API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
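The $fromAI() placeholders mentioned above are n8n expressions that let the connected AI agent supply parameter values when it invokes a tool. As an illustration only, the URL field of the Transit endpoint's HTTP Request node could carry a value along these lines (the keys and descriptions are assumptions, not copied from the workflow):

```javascript
// Sketch: the kind of value the Transit HTTP Request node's URL field could hold.
// Inside n8n, the {{ ... }} parts are expressions; $fromAI(key, description, type)
// tells the connected AI agent to fill in that value at call time.
const transitUrlField =
  "https://developer.o2.cz/mobility/sandbox/api/transit/" +
  "{{ $fromAI('from', 'Origin basic residential unit', 'string') }}/" +
  "{{ $fromAI('to', 'Destination basic residential unit', 'string') }}";
```

The same pattern applies to the other MCP server templates below: every path, query, or body parameter is delegated to the agent through a $fromAI() placeholder, so no manual mapping is needed.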
by David Ashby
Complete MCP server exposing 2 topupsapi API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add topupsapi credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the topupsapi API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://polls.apiblueprint.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Questions (2 endpoints)
• GET /questions: Create Question 1
• POST /questions: Create a New Question

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native topupsapi API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 2 Negotiation API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Negotiation API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Negotiation API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Find_Eligible_Items (1 endpoint)
• GET /find_eligible_items: Find Eligible Listings
🔧 Send_Offer_To_Interested_Buyers (1 endpoint)
• POST /send_offer_to_interested_buyers: Send Discount Offer

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Negotiation API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 4 Seller Service Metrics API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Seller Service Metrics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Seller Service Metrics API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (4 total)
🔧 Customer_Service_Metric (1 endpoint)
• GET /customer_service_metric/{customer_service_metric_type}/{evaluation_type}: Get {Evaluation Type}
🔧 Seller_Standards_Profile (2 endpoints)
• GET /seller_standards_profile: Get Seller Standards Profile
• GET /seller_standards_profile/{program}/{cycle}: This call retrieves a single standards profile for the associated seller
🔧 Traffic_Report (1 endpoint)
• GET /traffic_report: Retrieve Listing Traffic Report

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Seller Service Metrics API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 2 IP Geolocation API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add IP Geolocation API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the IP Geolocation API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.bigdatacloud.net
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Data (2 endpoints)
• GET /data/ip-geolocation-full: IP Geolocation with Confidence Area and Hazard Report API
• GET /data/ip-geolocation-with-confidence: IP Geolocation with Confidence Area API

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native IP Geolocation API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 3 Search Services API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Search Services credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Search Services API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.archive.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (3 total)
🔧 Search (3 endpoints)
• GET /search/v1/fields: Fields that can be requested
• GET /search/v1/organic: Return relevance-based results from search queries
• GET /search/v1/scrape: Scrape search results from Internet Archive, allowing a scrolling cursor

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Search Services API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Fabian ZNTL
What it does
This workflow automatically processes incoming emails with intelligent AI classification, creating draft responses and sending Slack notifications based on email content.

How it works
- Monitors emails with the 'AI-Agent' label
- AI classification into categories: Inquiry, Support, Newsletter, Action Item
- Adds appropriate labels to emails automatically
- Creates draft replies for Support and Inquiry emails
- Sends Slack notifications for Action Items and Newsletter summaries

Setup Requirements
- Gmail OAuth2 credentials configured
- OpenAI API credentials (or other AI provider)
- Slack OAuth2 credentials (if notifications desired)
- Gmail labels created (see setup instructions below)

How to customize
- Modify classification categories in the AI Agent
- Adjust label mappings in the Parse Classification node (see the sketch after this description)
- Customize draft response templates
- Configure different Slack channels for different email types
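As a rough illustration of the mapping the Parse Classification step performs, here is a sketch written for an n8n Code node. It assumes the node is a Code node that receives the AI's category in a field called category; the label IDs are placeholders you would replace with your own Gmail label IDs.

```javascript
// Sketch: turn the AI's category into a Gmail label ID and routing flags.
// Label IDs and the input field name are placeholders, not the template's values.
const labelMap = {
  Inquiry: "Label_Inquiry",
  Support: "Label_Support",
  Newsletter: "Label_Newsletter",
  "Action Item": "Label_ActionItem",
};

return $input.all().map((item) => {
  const category = (item.json.category || "").trim();

  return {
    json: {
      ...item.json,
      category,
      labelId: labelMap[category] ?? null,
      // Draft replies only for Support and Inquiry emails.
      createDraft: category === "Support" || category === "Inquiry",
      // Slack notifications for Action Items and Newsletters.
      notifySlack: category === "Action Item" || category === "Newsletter",
    },
  };
});
```

Adding a new category then only requires extending labelMap and, if needed, the two routing conditions.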