by Nick Saraev
# Google Maps Email Scraper System

**Categories:** Lead Generation, Web Scraping, Business Automation

This workflow creates a completely free Google Maps email scraping system that extracts unlimited business emails without requiring expensive third-party APIs. Built entirely in n8n using simple HTTP requests and JavaScript, the system can generate thousands of targeted leads for any industry or location while operating at a 99% free cost structure.

## Benefits

- **Zero API Costs** - Operates entirely through free Google Maps scraping without expensive third-party services
- **Unlimited Lead Generation** - Extract emails from thousands of Google Maps listings across any industry
- **Geographic Targeting** - Search by specific cities, regions, or business types for precise lead targeting
- **Complete Automation** - From search query to organized email list with minimal manual intervention
- **Built-in Data Cleaning** - Automatic duplicate removal, filtering, and data validation
- **Scalable Processing** - Handle hundreds of businesses per search with intelligent rate limiting

## How It Works

1. **Google Maps Search Integration:** Uses strategic HTTP requests to Google Maps search URLs, processes queries like "Calgary + dentist" to extract business listings, and bypasses API restrictions through direct HTML scraping techniques.
2. **Intelligent URL Extraction:** Custom JavaScript regex patterns extract website URLs from Google Maps data, filter out irrelevant domains (Google, schema, static files), and return a clean list of actual business websites for processing.
3. **Smart Website Processing:** A loop-based architecture prevents IP blocking through intelligent batching, with built-in delays and redirect handling for reliable scraping. Each website is processed individually with error handling.
4. **Email Pattern Recognition:** Advanced regex patterns identify email addresses within website HTML, extracting contact, info, and administrative addresses while handling multiple email formats and validation patterns.
5. **Data Aggregation & Cleaning:** Automatically removes duplicate emails across all processed websites, filters out null entries and invalid email formats, and exports clean, organized email lists to Google Sheets. (A sketch of the extraction logic follows this list.)
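The template ships with its own Code nodes; as a rough illustration of the URL- and email-extraction steps above, here is a minimal n8n Code node sketch. The regex patterns, the domain blocklist, and the input field name (`data`) are assumptions for the example, not the template's original code.

```javascript
// Minimal n8n Code node sketch (not the template's original code).
// Input: items whose json.data holds raw HTML from the previous HTTP Request node.

const urlPattern = /https?:\/\/[a-zA-Z0-9._-]+\.[a-zA-Z]{2,}[^\s"'<>\\]*/g;
const emailPattern = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;

// Domains that appear in Google Maps HTML but are not business sites.
const blocklist = ['google.', 'gstatic.', 'schema.org', 'ggpht.', 'googleapis.'];

const results = [];
for (const item of $input.all()) {
  const html = item.json.data || '';

  // Step 2 of "How It Works": pull candidate website URLs out of the HTML.
  const urls = (html.match(urlPattern) || [])
    .filter((u) => !blocklist.some((b) => u.includes(b)));

  // Step 4: pull email addresses out of the page HTML.
  const emails = html.match(emailPattern) || [];

  // Step 5: deduplicate within this item before passing downstream.
  for (const email of [...new Set(emails)]) {
    results.push({ json: { email } });
  }
  for (const url of [...new Set(urls)]) {
    results.push({ json: { website: url } });
  }
}
return results;
```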
## Required Google Sheets Setup

Create a Google Sheet with these exact column headers:

- **Search Tracking Sheet:** `searches` - contains your search queries (e.g., "Calgary dentist", "Miami lawyers")
- **Email Results Sheet:** `emails` - contains extracted email addresses from all processed websites

**Setup Instructions:**

1. Create a Google Sheet with two tabs: "searches" and "emails"
2. Add your target search queries to the searches tab (one per row)
3. Connect Google Sheets OAuth credentials in n8n
4. Update the Google Sheets document ID in all sheet nodes

The workflow reads search queries from the first sheet and exports results to the second sheet automatically.

## Business Use Cases

- **Local Service Providers** - Find competitors and potential partners in specific geographic areas
- **B2B Sales Teams** - Generate targeted prospect lists for cold outreach campaigns
- **Marketing Agencies** - Build industry-specific lead databases for client campaigns
- **Real Estate Professionals** - Identify businesses in target neighborhoods for commercial opportunities
- **Franchise Development** - Research potential markets and existing competition
- **Market Research** - Analyze business density and contact information across regions

## Revenue Potential

This system transforms lead generation economics:

- $0 per lead vs. $2-5 per lead from paid databases
- Process 1,000+ leads daily without hitting API limits
- Sell as a service for $500-2,000 per industry/location
- Perfect for agencies offering lead generation to local businesses

**Difficulty Level:** Intermediate
**Estimated Build Time:** 1-2 hours
**Monthly Operating Cost:** $0 (completely free)

## Watch My Complete Build Process

Want to watch me build this entire system live from scratch? I walk through every single step - including the JavaScript code, regex patterns, error handling, and all the debugging that goes into creating a bulletproof scraping system.

🎥 Watch My Live Build: "Scrape Unlimited Leads WITHOUT Paying for APIs (99% FREE)"

This comprehensive tutorial shows the real development process - including writing custom JavaScript, handling rate limits, and building systems that actually work at scale without getting blocked.

## Set Up Steps

1. **Basic Workflow Architecture:** Set up a manual trigger for testing and the Google Sheets integration, configure the initial HTTP request node for Google Maps searches, and enable SSL ignore and response headers for reliable scraping.
2. **URL Extraction Code Setup:** Configure a JavaScript code node with custom regex patterns, set up input data processing from the Google Maps HTML responses, and implement URL filtering logic to remove irrelevant domains.
3. **Website Processing Pipeline:** Add a "Split in Batches" node for intelligent loop processing, configure HTTP request nodes with proper delays and redirect handling, and set up error handling for websites that can't be scraped.
4. **Email Extraction System:** Implement a JavaScript code node with email-specific regex patterns, configure email validation and format checking, and set up data aggregation for multiple emails per website.
5. **Data Cleaning & Export:** Configure filtering nodes to remove null entries and duplicates, set up a "Split Out" node to aggregate emails into a single list, and connect the Google Sheets integration for organized data export.
6. **Testing & Optimization:** Use limit nodes during testing to prevent IP blocking, test with small batches before scaling to full searches, and implement proxy integration for high-volume usage.

## Advanced Optimizations

Scale the system with:

- **Multi-Page Scraping:** Extract URLs from homepages, then scrape contact pages for more emails
- **Proxy Integration:** Add residential proxies for unlimited scraping without rate limits
- **Industry Templates:** Create pre-configured searches for different business types
- **Contact Information Expansion:** Extract phone numbers, addresses, and social media profiles
- **CRM Integration:** Automatically add leads to sales pipelines and marketing sequences

## Important Considerations

- **Rate Limiting:** Built-in delays prevent IP blocking during normal usage
- **Scalability:** For high-volume usage, consider proxy services for unlimited requests
- **Compliance:** Ensure proper usage rights for extracted contact information
- **Data Quality:** The system includes filtering, but manual verification is recommended for critical campaigns

## Check Out My Channel

For more advanced automation systems and business-building strategies that generate real revenue, explore my YouTube channel, where I share proven automation techniques used by successful agencies and entrepreneurs.
by Mario
## Purpose

This workflow snippet enables advanced error catching during retry attempts. In some cases you want to check whether an item exists first, so you can decide on the following actions. Some APIs do not provide an endpoint for this (e.g. Todoist: completed tasks), so you have to work with the error branch instead - only that approach does not combine well with the built-in retry functionality.

## How it works

- Instead of a node's built-in retry function, a custom loop is used to get more granular control between iterations
- If the main executed node fails, the error can be filtered for an expected error, which can trigger a separate action (see the sketch below)
- Retries only happen if an unexpected error occurred
- The workflow only stops once the defined number of retries is exceeded

## Setup

1. Copy the nodes into your existing workflow
2. Replace the "Replace me" placeholder with the node you want to apply the retry logic to
3. Follow the sticky notes for more instructions and optional settings
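As an illustration of the expected-error filter, here is a minimal Code node sketch. The error fields checked (`error.httpCode`, a 404 from an existence check) and the routing field are assumptions for the example, not part of the snippet itself.

```javascript
// Minimal sketch of an expected-error filter (assumed field names).
// Runs after the main node with "Continue (using error output)" enabled.

const MAX_RETRIES = 3; // the "defined amount of retries"

for (const item of $input.all()) {
  const error = item.json.error ?? {};
  const attempt = item.json.attempt ?? 1;

  if (error.httpCode === '404') {
    // Expected error, e.g. the item does not exist: take the separate action.
    item.json.route = 'expected';
  } else if (attempt < MAX_RETRIES) {
    // Unexpected error and retries remain: loop back for another attempt.
    item.json.route = 'retry';
    item.json.attempt = attempt + 1;
  } else {
    // Retry budget exceeded: let the workflow stop with the error.
    item.json.route = 'fail';
  }
}
return $input.all();
```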
by David Ashby
Complete MCP server exposing 4 Seller Service Metrics API operations to AI agents.

## ⚡ Quick Setup

> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator - all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Seller Service Metrics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works

This workflow converts the Seller Service Metrics API into an MCP-compatible interface for AI agents.

- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to `https://api.ebay.com{basePath}`
- **AI Expressions:** Automatically populate parameters via `$fromAI()` placeholders (see the expression sketch at the end of this description)
- **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (4 total)

**Customer_Service_Metric (1 endpoint)**
- `GET /customer_service_metric/{customer_service_metric_type}/{evaluation_type}`: Get {Evaluation Type}

**Seller_Standards_Profile (2 endpoints)**
- `GET /seller_standards_profile`: Get Seller Standards Profile
- `GET /seller_standards_profile/{program}/{cycle}`: Retrieves a single standards profile for the associated seller

**Traffic_Report (1 endpoint)**
- `GET /traffic_report`: Retrieve Listing Traffic Report

## 🤖 AI Integration

**Parameter Handling:** AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

**Response Format:** Native Seller Service Metrics API responses with full data structure
**Error Handling:** Built-in n8n HTTP request error management

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to your configuration
- **Cursor:** Add the MCP server SSE URL to your configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits

- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in `$fromAI()` expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
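To make the `$fromAI()` placeholders concrete: the URL field of the HTTP Request node behind `GET /seller_standards_profile/{program}/{cycle}` plausibly looks like the following n8n expression, where the AI agent supplies both path parameters at call time. The parameter descriptions here are illustrative, not copied from the template:

```
https://api.ebay.com{basePath}/seller_standards_profile/{{ $fromAI('program', 'Seller standards program to look up', 'string') }}/{{ $fromAI('cycle', 'Evaluation cycle, e.g. CURRENT', 'string') }}
```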
by Wildkick
# 🚀 Local Multi-LLM Testing & Performance Tracker

This workflow is perfect for developers, researchers, and data scientists benchmarking multiple LLMs with LM Studio. It dynamically fetches active models, tests prompts, and tracks metrics like word count, readability, and response time, logging results into Google Sheets. Easily adjust temperature 🔥 and top P 🎯 for flexible model testing.

**Level of Effort:** 🟢 Easy - Minimal setup with customizable options.

**Setup Steps:**
1. Install LM Studio and configure models.
2. Update the IP address so n8n can connect to LM Studio.
3. Create a Google Sheet for result tracking.

**Key Outcomes:**
- Benchmark LLM performance.
- Automate results in Google Sheets for easy comparison.

Version 1.0
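For a sense of what the metrics step involves, here is a minimal Code-node-style sketch that computes word count, a readability score, and response time. The input field names (`response`, `startTime`) and the choice of Flesch Reading Ease are assumptions for the example, not necessarily what the template uses.

```javascript
// Minimal metrics sketch (assumed field names, not the template's code).
// Input: one item per model response, with json.response (text) and json.startTime (ms).

function fleschReadingEase(text) {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.trim().split(/\s+/).filter(Boolean);
  // Crude syllable estimate: count vowel groups per word.
  const syllables = words.reduce(
    (sum, w) => sum + Math.max(1, (w.toLowerCase().match(/[aeiouy]+/g) || []).length),
    0,
  );
  const wordCount = Math.max(1, words.length);
  return 206.835 - 1.015 * (wordCount / sentences) - 84.6 * (syllables / wordCount);
}

return $input.all().map((item) => {
  const text = item.json.response || '';
  return {
    json: {
      ...item.json,
      wordCount: text.trim().split(/\s+/).filter(Boolean).length,
      readability: Math.round(fleschReadingEase(text) * 10) / 10,
      responseTimeMs: Date.now() - (item.json.startTime || Date.now()),
    },
  };
});
```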
by Tiartyos
# Voice Cloning Workflow - Zyphra Zonos API

## Who is this for?
This workflow is designed for developers, content creators, and businesses looking to automate high-quality voice synthesis using AI voice cloning technology.

## What problem does this solve?
It automates the process of generating natural-sounding speech from text using a sample voice file, eliminating the need for manual voice recording and providing consistent voice output for applications like audiobooks, virtual assistants, or content localization.

## What this workflow does
The workflow receives text and voice cloning parameters via webhook, reads a sample voice file from your storage, sends the data to Zyphra's Zonos API for voice synthesis, and saves the generated audio file to your specified output location.

## Prerequisites
You'll need:
- An API key from Zyphra (obtain from https://playground.zyphra.com/settings/api-keys)
- Account registration at https://playground.zyphra.com
- A sample voice file stored on accessible disk/cloud storage
- An n8n instance running with webhook capabilities

## Setup
1. Configure your Zyphra API key in the "Call Zyphra Clone API" node under Header Parameters (Name: X-API-Key, Value: your-api-key)
2. Ensure your sample voice files are accessible at the paths you'll specify
3. Test that the webhook endpoint is accessible

## Supported Audio Formats
The API supports multiple output formats through the mime_type parameter:
- **WebM** (default) - audio/webm
- **Ogg** - audio/ogg
- **WAV** - audio/wav
- **MP3** - audio/mp3 or audio/mpeg
- **MP4/AAC** - audio/mp4 or audio/aac

## Usage Example
**Endpoint:** `POST http://localhost:5678/webhook-test/voice-clone`
**Headers:** `Content-Type: application/json`
**Request Body:**

```json
{
  "text": "Hello there! This voice sounds just like the sample!",
  "speaking_rate": 18,
  "sample_voice_path": "/data/output/sampleVoice.wav",
  "output_path": "/data/output/",
  "language_iso_code": "en-us",
  "mime_type": "audio/wav",
  "model": "zonos-v0.1-transformer",
  "emotion": {
    "happiness": 0.8,
    "neutral": 0.3,
    "sadness": 0.05,
    "disgust": 0.05,
    "fear": 0.05,
    "surprise": 0.05,
    "anger": 0.05,
    "other": 0.5
  }
}
```

## Parameters

**Required Parameters**
- **text**: Text to synthesize into speech
- **sample_voice_path**: Path to your voice sample file
- **output_path**: Directory where generated audio will be saved

**Optional Parameters (with defaults)**
- **speaking_rate**: 15 - Speech speed
- **language_iso_code**: "en-us" - Language code
- **mime_type**: "audio/wav" - Output audio format
- **model**: "zonos-v0.1-transformer" - AI model to use
- **emotion**: Object with emotion levels (0-1 scale)
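Putting the endpoint, headers, and body together, a caller can trigger the workflow from Node.js like this (using the default localhost endpoint shown above and only the required parameters plus a couple of optionals):

```javascript
// Trigger the voice-clone webhook with a minimal payload.
const response = await fetch('http://localhost:5678/webhook-test/voice-clone', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    text: 'Hello there! This voice sounds just like the sample!',
    sample_voice_path: '/data/output/sampleVoice.wav',
    output_path: '/data/output/',
    mime_type: 'audio/wav',
  }),
});
console.log(response.status, await response.text());
```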
by Sagar
This template streamlines your AI avatar video automation workflow by connecting Google Sheets for voice-text and AI avatar video link storage, and using HTTP nodes to connect the Heygen API with an AI avatar/voice ID for automated video generation.

## Pre-requisites
Before setting up this workflow, ensure you have:
- A Google account with access to Google Sheets
- A Heygen account with API access enabled in the account's settings
- An n8n.io account with workflow access

## Setup Instructions

1. **Configure Data Source**
   - Create a Google Sheet with the following columns: Script/Voice Text and Final AI Avatar Video Link.
2. **Connect the Google Sheet**
   - Add your Google Sheets credentials in the "Google Sheet" node.
   - Specify the folder path where your columns are stored.
   - Configure the node to retrieve files based on filenames from your Google Sheet.
3. **Set Up HTTP Node 1 with the Heygen API**
   - Connect your Heygen API credentials.
   - Configure the node to generate the AI video based on the Script/Voice Text.
4. **Configure HTTP Node 2**
   - Connect the Heygen API credentials.
   - Set up the node to fetch the AI avatar video link. (A sketch of these calls follows below.)
5. **Write Back the Result**
   - Set up the second Google Sheets node to upload the final video link into the "Final AI Avatar Video Link" column.
6. **Workflow Automation Setup**
   - Configure the scheduler node to run at your preferred frequency.
   - Set up error handling to notify you of any posting failures.

## Execution Instructions
- After completing all connections, test the workflow.
- Monitor the execution in the n8n dashboard to ensure proper functioning.
- View the "Executions" tab to track successful runs and troubleshoot any errors.

This template saves hours of manual AI avatar video creation - use it to skip the daily manual effort.

## Details
Nodes used in the workflow:
- Manual Trigger node
- Google Sheets node 1
- HTTP node 1
- HTTP node 2
- Google Sheets node 2
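For orientation, here is a rough sketch of the kind of request HTTP Node 1 sends. The endpoint and body shape follow Heygen's v2 API as commonly documented, but every field name here is an assumption - verify against the current Heygen API documentation before relying on it:

```javascript
// Rough sketch of the video-generation call (verify fields against Heygen docs).
const response = await fetch('https://api.heygen.com/v2/video/generate', {
  method: 'POST',
  headers: {
    'X-Api-Key': 'YOUR_HEYGEN_API_KEY', // from your Heygen account settings
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    video_inputs: [
      {
        character: { type: 'avatar', avatar_id: 'YOUR_AVATAR_ID' },
        voice: {
          type: 'text',
          input_text: 'Script text from the Google Sheet row',
          voice_id: 'YOUR_VOICE_ID',
        },
      },
    ],
    dimension: { width: 1280, height: 720 },
  }),
});
const { data } = await response.json();
// HTTP Node 2 then polls Heygen with this id until the video link is ready.
console.log('video_id:', data.video_id);
```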
by David Olusola
# ⚙️ How It Works: LocalRAG.AI

> ⚠️ Note: This system only works for self-hosted n8n instances. It will not function on n8n.cloud or other remote setups.

LocalRAG.AI is a private, on-prem AI assistant that uses your own documents to answer questions intelligently. It combines LangChain, Ollama, Qdrant, and Postgres into a powerful AI pipeline - all running locally for maximum data privacy.

## 🔄 What It Does

1. Monitors your Google Drive folders for new or updated files.
2. Downloads the file, extracts the text, and prepares it.
3. Generates embeddings using your local Ollama model (e.g., LLaMA 3).
4. Stores them in Qdrant, your local vector database.
5. During a chat, it:
   - Uses vector search to retrieve relevant chunks.
   - Combines them with chat history stored in Postgres.
   - Responds via a LangChain AI agent using your local model.

## 🛠️ Setup Steps (Self-hosted Only)

1. Install and self-host n8n (e.g., via Docker).
2. Set up your Ollama instance locally and load your desired LLM (e.g., llama3).
3. Deploy Qdrant locally for vector storage.
4. Connect a Postgres DB to store chat history.
5. Create and import the workflow in n8n.
6. Authenticate Google Drive to monitor folders.
7. Connect credentials for Ollama, Qdrant, and Postgres in the n8n workflow.
8. Start chatting through the Webhook Trigger or a custom UI.

## 🧠 Perfect For

- Research teams handling confidential data
- Internal documentation Q&A
- AI chatbots that don't rely on OpenAI or the cloud
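To make the chat-time retrieval step concrete, here is a minimal sketch of what happens under the hood: embed the question with a local Ollama model, then vector-search Qdrant. The URLs (default local ports), the model name, and the collection name are assumptions for a default local setup, not values from the workflow:

```javascript
// Minimal RAG retrieval sketch for a default local Ollama + Qdrant setup.
const question = 'What does our security policy say about API keys?';

// 1. Embed the question with Ollama (default local port 11434).
const embedRes = await fetch('http://localhost:11434/api/embeddings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama3', prompt: question }),
});
const { embedding } = await embedRes.json();

// 2. Search the Qdrant collection (default local port 6333) for similar chunks.
const searchRes = await fetch('http://localhost:6333/collections/documents/points/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ vector: embedding, limit: 4, with_payload: true }),
});
const { result } = await searchRes.json();

// The retrieved chunks are then combined with Postgres chat history in the agent prompt.
console.log(result.map((hit) => hit.payload));
```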
by Davide
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'test_chatbot_elevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies.
by Mahmoud Ashraf
This workflow automatically creates in-depth, SEO-friendly Arabic articles based on any keyword you provide. It researches the topic, generates a full article outline, writes every section in Arabic, and saves the final article directly to your Notion workspace - all in a few clicks.

## How It Works

- **Step 1:** You submit a simple web form with your keyword and (optionally) an article title.
- **Step 2:** The workflow researches the topic using advanced AI, gathers trending questions from Google, and creates a detailed, structured outline.
- **Step 3:** Each section of the article is written in Arabic by AI, following best SEO practices and including real FAQs.
- **Step 4:** The completed article is automatically formatted and saved to your Notion database, ready for review or publishing.

## Setup Instructions

**What you need:**
- An OpenAI API key (for AI-powered writing and outline generation)
- An OpenRouter API key (for research via Perplexity/Sonar AI)
- A Notion account and Notion API integration (for saving articles)
- A DataForSEO account (for fetching Google "People Also Ask" questions)

**How to set up:**
1. Import the workflow into your n8n instance.
2. Connect your API credentials for OpenAI, OpenRouter, Notion, and (optionally) DataForSEO.
3. Update your Notion database ID in the workflow settings.
4. Deploy the workflow.
5. Fill out the web form to generate your first article.

**Setup time:** 10-20 minutes if you already have your accounts.

Tip: You can fully customize the outline and writing prompts for your target audience or topic. The workflow is modular - easy to adapt for different languages or content styles.
by A Z
Automatically scrape X (Twitter) for posts hiring specific roles (e.g., automation engineers, video editors, graphic designers), filter for true hiring intent with AI, deduplicate in Google Sheets, and alert via Telegram.

## What it does

- Pulls recent X/Twitter posts for multiple role keywords via Apify.
- Normalizes each post (text, author, links, location).
- Uses an AI Agent to keep only posts where the author is hiring (not self-promo).
- Checks Google Sheets for duplicates by URL before saving.
- Writes qualified posts to a sheet and sends a Telegram notification.

We are using n8n automation roles as the example here.

## How it works (Step by Step)

1. **Schedule Trigger** - Runs on an interval (currently every 12 hours).
2. **Scrape X/Twitter** - The Apify tweet-scraper fetches up to 50 latest posts for keywords like: n8n developer, looking for n8n, n8n expert, hire AI automation, looking for AI automation.
3. **Normalize Fields** - A Set node maps to: url, text, author.userName, author.url, author.location.
4. **AI Filter & Dedupe Check** - Accepts only clear hiring posts for n8n/AI automation roles (rejects self-promotion), and queries Google Sheets to see if the url already exists; duplicates are dropped.
5. **Gate** - An IF node passes only non-empty AI outputs.
6. **Parse JSON Safely** - A Code node extracts and validates JSON from the AI output (see the sketch below).
7. **Save to Google Sheets** - Appends or updates a row (matching on url).
8. **Telegram Alert** - Sends a message with the tweet URL, author, location, and text.

## Who it's for

Freelancers, agencies, and job seekers who want a steady radar of real hiring posts for their target roles.

## Customization Ideas

- Swap keywords to track other roles (video editors, designers, copywriters, etc.).
- Add Slack/Discord notifications.
- Extend the AI rules (e.g., different geographies or role scopes).
- Treat the sheet as a mini-CRM (status, outreach date, notes).
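Here is a minimal sketch of the "Parse JSON Safely" Code node. The AI output field name (`output`) and the required keys (`url`, `text`) are assumptions for the example; LLMs often wrap JSON in markdown fences or prose, so the sketch extracts the first JSON object before parsing and drops anything that still fails:

```javascript
// Minimal "Parse JSON Safely" sketch (assumed field names).
const results = [];
for (const item of $input.all()) {
  const raw = String(item.json.output ?? '');
  // Strip ```json fences, then grab the first {...} block.
  const match = raw.replace(/```(?:json)?/g, '').match(/\{[\s\S]*\}/);
  if (!match) continue;
  try {
    const parsed = JSON.parse(match[0]);
    if (parsed.url && parsed.text) {
      results.push({ json: parsed });
    }
  } catch (e) {
    // Invalid JSON from the model: skip the item rather than break the run.
  }
}
return results;
```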
by max e
Turn plain-language chat like "Tomorrow 9 AM: write blog post" into neatly organised Todoist tasks with GPT-4o and n8n - zero code.

# 🪄 Ultimate Personal Todoist Agent

Turn natural-language requests into perfectly-organized Todoist tasks - all on autopilot inside n8n.

> "Add Finish quarterly report by Friday afternoon" → the agent creates the task, sets the due date & priority, and even drops it into the right project. ✨

## 🌟 Why this workflow rocks

- **All-in-one Todoist super-powers** - create, update, complete, move, archive… every major Todoist endpoint is wired up (tasks, projects, sections, labels, comments).
- **LLM-powered intent detection** - an OpenAI model interprets plain-English (or emoji-filled!) messages so you don't have to remember slash-commands.
- **Minimal setup** - just two credentials and you're live.
- **Battle-tested building block** - use it as-is, or plug the Todoist Agent node into your own agents & chatbots.

## 🛠️ What you'll need

| Credential | Where it's used | How to set it up |
| --- | --- | --- |
| OpenAI API | Orchestrator & LLM nodes | Paste your OpenAI secret key into an OpenAI credential in n8n. |
| Todoist OAuth2 | Todoist node and HTTP Request node | Log in to Todoist from your browser to set up the credential in n8n. |

> That's it - no webhooks, no extra secrets.
> Tested with *gpt-4o-latest* - the fastest & most accurate model in our trials.

## ⚡ Quick-start (5 minutes)

1. Import the JSON template (hit ▶️ Try it out on the n8n template page or drag-drop the file into your canvas).
2. Select your credentials in the two credential dropdowns.
3. Click Test workflow.
4. In the sample Function node, tweak the message field (e.g. "Tomorrow at 9 am: write blog post").
5. Run → watch your new Todoist task appear.
6. (Optional) Swap the Function node for your favourite chat trigger (Telegram, Slack, WhatsApp, Discord, you name it).

Boom - your personal Todoist genie is alive! 🧞♂️

## 🧩 How it works (under the hood)

```
[Trigger / Chat message]
        │
        ▼
[🗂️ Orchestrator Agent]  ← OpenAI Chat Model + Short-term Memory
        │  ↳ Parses intent & entities
        ▼
[🤖 Todoist Agent]       ← 15+ Todoist endpoints
        │  ↳ Executes the right call (create, update, complete, etc.)
        ▼
[Done ✅]
```

The Orchestrator is an example. In production you can drop it and simply expose the Todoist Agent as a tool for any other agent workflow.

## 🎛️ Customising & extending

| Idea | How to do it |
| --- | --- |
| Notion / Sheets sync | After the Todoist Agent node, add a Notion or Google Sheets node to log completed items. |
| Voice commands | Swap the chat trigger for a Speech-to-Text node (e.g. Whisper). |

## 🤝 Need custom automations?

Want me to build or tweak something for you? → Email maxemelyanenko@gmail.com and let's make it happen!

## ⚠️ What's not included (yet)

- Shared projects & other Todoist Pro/Business endpoints.
- File attachments in the comments.
- Editing comments.

Pull requests welcome! 🙌
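For a feel of what the Todoist Agent ultimately does with a parsed request like "Tomorrow at 9 am: write blog post", here is a minimal sketch against Todoist's REST v2 task endpoint. The token placeholder and the chosen fields are illustrative; the agent node in the workflow picks the endpoint and values itself:

```javascript
// Minimal sketch: create a task via Todoist REST API v2.
const res = await fetch('https://api.todoist.com/rest/v2/tasks', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_TODOIST_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    content: 'Write blog post',
    due_string: 'tomorrow at 9am', // Todoist parses natural-language dates
    priority: 2,
  }),
});
console.log(await res.json()); // created task with id, url, due, ...
```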
by David Ashby
Complete MCP server exposing 2 CarbonDoomsDay API operations to AI agents.

## ⚡ Quick Setup

> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator - all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add CarbonDoomsDay credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works

This workflow converts the CarbonDoomsDay API into an MCP-compatible interface for AI agents.

- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to `https://api.carbondoomsday.com/api`
- **AI Expressions:** Automatically populate parameters via `$fromAI()` placeholders
- **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (2 total)

**Co2 (2 endpoints)**

- `GET /co2/`: Get CO2 Measurement by Date
- `GET /co2/{date}/`: CO2 measurements from the Mauna Loa observatory. This data is made available through the good work of the people at the Mauna Loa observatory. Their release notes say: "These data are made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights." We currently scrape the following sources: [co2_mlo_weekly.csv], [co2_mlo_surface-insitu_1_ccgg_DailyData.txt], and [weekly_mlo.csv]. We have daily CO2 measurements as far back as 1958. Learn about using pagination via [the 3rd party documentation].

[co2_mlo_weekly.csv]: https://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_mlo_weekly.csv
[co2_mlo_surface-insitu_1_ccgg_DailyData.txt]: ftp://aftp.cmdl.noaa.gov/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_DailyData.txt
[weekly_mlo.csv]: http://scrippsco2.ucsd.edu/sites/default/files/data/in_situ_co2/weekly_mlo.csv
[the 3rd party documentation]: http://www.django-rest-framework.org/api-guide/pagination/#pagenumberpagination

## 🤖 AI Integration

**Parameter Handling:** AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

**Response Format:** Native CarbonDoomsDay API responses with full data structure
**Error Handling:** Built-in n8n HTTP request error management

## 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to your configuration
- **Cursor:** Add the MCP server SSE URL to your configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits

- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in `$fromAI()` expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
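For reference, the underlying call for `GET /co2/{date}/` is a plain GET against the base URL above. A minimal sketch follows; the example date and its format are illustrative, and in the workflow the value would come from a `$fromAI('date')` placeholder:

```javascript
// Fetch the CO2 measurement for a single date from CarbonDoomsDay.
const res = await fetch('https://api.carbondoomsday.com/api/co2/1958-03-29/');
const measurement = await res.json();
console.log(measurement); // CO2 reading for that date at Mauna Loa
```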