by Todsaporn Sangboon
## 📈 How it works

This n8n workflow lets you interact directly with the Binance Spot Trading API to:

- Place **Limit Buy** and **Limit Sell** orders
- Place **Market Buy** and **Market Sell** orders
- Query **account info** and **open orders**
- **Cancel all open orders** for a specific symbol

All requests are signed using Binance's HMAC SHA256 signature method for secure trading.

## ⚙️ Setup Steps

1. Create Binance API credentials in n8n:
   - Go to **Credentials > New**
   - Choose **Binance API**
   - Add `api_key` and `api_secret`
   - Save as **Binance API**
2. Import this workflow into your n8n instance.
3. Update default values: in Set Parameter nodes such as **LimitBuy Parameter**, change `symbol` (e.g. `BTCUSDT`), `quantity`, and `price` as needed.
4. Run the workflow manually via the **Execute workflow** trigger.

## ✅ Notes

- The credential node is marked with instructions.
- HMAC signatures are automatically calculated before each request (see the sketch below).
- HTTP nodes are preconfigured for Binance API v3.
- 🔒 No API key or secret is included.
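For readers curious what the signing step does under the hood, here is a minimal sketch of Binance's HMAC SHA256 scheme in a Node-style Code node; the endpoint and order parameters are illustrative, not the workflow's exact values.

```typescript
// Minimal sketch of Binance's HMAC SHA256 request signing (assumed values).
// Binance requires the full query string to be signed with your api_secret
// and the hex digest appended as a `signature` parameter.
import { createHmac } from "node:crypto";

const apiSecret = process.env.BINANCE_API_SECRET!; // never hard-code secrets

const query = `symbol=BTCUSDT&side=BUY&type=MARKET&quantity=0.001&timestamp=${Date.now()}`;
const signature = createHmac("sha256", apiSecret).update(query).digest("hex");

// Final signed request: the api_key goes in the X-MBX-APIKEY header.
const url = `https://api.binance.com/api/v3/order?${query}&signature=${signature}`;
```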
by Angel Menendez
## Submission Overview for Voiceflow Demo Workflow

View the YouTube video for this workflow here.

### Who is this for?

This workflow is ideal for businesses and developers using Voiceflow to power AI voice chatbots. It benefits teams that want to enhance chatbot functionality through integrations with platforms like Zendesk, Google Calendar, and Airtable.

### What problem is this workflow solving?

The workflow addresses the need for seamless integration of chatbot interactions with backend systems. It automates customer service tasks such as ticket creation, meeting scheduling, and data reporting, reducing manual effort and enhancing efficiency.

### What does this workflow do?

- **Customer Lookup:** Checks the database for existing customers and returns relevant details or a "NOT_FOUND" status (an illustrative response shape follows below).
- **Zendesk Ticket Creation:** Automates the creation of support tickets for customer issues.
- **Meeting Scheduling:** Integrates with Google Calendar to provide availability and schedule meetings.
- **Transcript Reporting:** Aggregates interaction data and sends it to Airtable for analysis by the product team.

### Setup

1. Configure your Voiceflow chatbot to connect to this workflow via a webhook.
2. Set up the required integrations:
   - Zendesk API: for ticket creation.
   - Google Calendar API: for scheduling.
   - Airtable API: for storing transcripts.
3. Customize the workflow's nodes to match your use case, such as database fields or API endpoints.
4. Deploy the workflow on your n8n instance and test the integrations.

### How to customize this workflow to your needs

- Adjust database queries to match your customer data schema.
- Modify the Zendesk ticket payload to include additional fields or custom formats.
- Update Google Calendar configurations for different scheduling requirements.
- Add or remove Airtable fields based on the product team's analysis needs.

This template adheres to n8n's submission guidelines, ensuring clarity, relevance, and broad applicability for users in customer service, product development, and automation.
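To make the customer-lookup contract above concrete, here is a hedged sketch of a response shape the webhook might return to Voiceflow. Only the "NOT_FOUND" status string comes from the template itself; every field name is an assumption.

```typescript
// Hypothetical response shape for the Customer Lookup step.
// Only the "NOT_FOUND" status is taken from the template; all field
// names here are illustrative assumptions, not the actual schema.
interface CustomerLookupResponse {
  status: "FOUND" | "NOT_FOUND";
  customer?: {
    id: string;
    name: string;
    email: string;
  };
}

const notFound: CustomerLookupResponse = { status: "NOT_FOUND" };
```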
by David Ashby
## 🛠️ Pushover Tool MCP Server

Complete MCP server exposing all Pushover Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

### ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

### 🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Pushover Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Uses the official n8n Pushover Tool node with full error handling

### 📋 Available Operations (1 total)

Every possible Pushover Tool operation is included:

💬 Message (1 operation)
• Push a message

### 🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Pushover Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

### 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

### ✨ Benefits

• Complete Coverage: Every Pushover Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
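As an illustration of the $fromAI() placeholders mentioned above, the tool node's message field might look like the following; the key name and description are assumptions, since the actual node configuration ships with the workflow.

```typescript
// Illustrative n8n expression for the Pushover tool's message parameter.
// $fromAI(key, description, type) is n8n's placeholder that lets the
// connected AI agent supply the value at run time. The key name and
// description below are assumptions.
//
// In the node's "Message" field, as an n8n expression:
//
//   {{ $fromAI('message', 'Text of the push notification to send', 'string') }}
```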
by David Ashby
Complete MCP server exposing all Mandrill Tool operations to AI agents. Zero configuration needed - all 2 operations pre-built.

### ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

### 🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Mandrill Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Mandrill Tool node with full error handling

### 📋 Available Operations (2 total)

Every possible Mandrill Tool operation is included:

💬 Message (2 operations)
• Send a message based on a template
• Send a message based on HTML

### 🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Mandrill Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

### 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

### ✨ Benefits

• Complete Coverage: Every Mandrill Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
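For orientation, the "send a message based on a template" operation above corresponds to Mandrill's messages/send-template call, whose payload looks roughly like this sketch; the template slug and merge values are assumptions, and n8n's credential supplies the API key.

```typescript
// Rough shape of Mandrill's messages/send-template payload that the
// template-based operation maps onto. The template slug and merge
// values are illustrative assumptions.
const sendTemplateBody = {
  template_name: "welcome-email", // hypothetical template stored in Mandrill
  template_content: [],           // empty when the template lives in Mandrill
  message: {
    to: [{ email: "user@example.com", type: "to" }],
    global_merge_vars: [{ name: "FNAME", content: "Ada" }],
  },
};
```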
by Lucas
## 🎶 Add liked songs to a monthly playlist

> This workflow is a port of "Add saved songs to a monthly playlist" from IFTTT.

When you like a song, the workflow saves it to a monthly playlist. E.g.: it's June 2024 and I like a song; the workflow saves it to a playlist called "June '24". If this playlist does not exist, the workflow creates it for me.

### ⚙ How it works

Every 5 minutes, the workflow starts automatically and does three things:

1. Gets the last 10 songs you saved to "Liked Songs" (by clicking the heart in the app) and saves them in a NocoDB table (of course, the workflow avoids creating duplicates).
2. Checks whether the monthly playlist already exists; otherwise, it creates the playlist. The created playlist is also saved in NocoDB to avoid any problems.
3. Checks whether the monthly playlist contains all the songs liked this month by fetching them from NocoDB. Any songs that are missing are added to the playlist one by one.

You may wonder why NocoDB is needed. Over the last few weeks/months, I've had duplication problems in my playlists, and some playlists were created twice because Spotify wasn't returning all the information, only partial information. Having the database means I don't have to rely on Spotify's data but on my own, which is accurate and represents reality.

### 📝 Prerequisites

You need:

- Spotify API keys, which you can obtain by creating a Spotify application here: https://developer.spotify.com/dashboard
- A NocoDB API token

### 📚 Instructions

1. Create your Spotify API credential
2. Create your NocoDB credential
3. Populate all Spotify nodes with your credentials
4. Populate all NocoDB nodes with your credentials
5. Enjoy!

If you need help, feel free to ping me on the n8n Discord server or send me a DM at "LucasAlt".

### Show your support

- Share your workflow on X and mention @LucasCtrlAlt
- Consider buying me a coffee 😉
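As a small illustration of the naming scheme described above (e.g. "June '24"), the monthly playlist name could be derived like this in a Code node; the exact expression the workflow uses may differ.

```typescript
// Sketch of deriving the monthly playlist name, e.g. "June '24".
// The workflow's actual expression may differ; this is illustrative.
const now = new Date();
const month = now.toLocaleString("en-US", { month: "long" }); // "June"
const year = String(now.getFullYear()).slice(-2);             // "24"
const playlistName = `${month} '${year}`;                     // "June '24"
```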
by Mohamed Abdelwahab
Automates the process of generating, storing, and publishing engaging LinkedIn posts derived from books (PDFs) using AI and vector search.

## 🧠 Overview

This workflow:

1. Watches a Google Drive folder for new or updated book PDFs.
2. Extracts and embeds the content using OpenAI.
3. Stores the data in a Pinecone vector database.
4. Uses a LangChain agent to generate post ideas.
5. Creates concise LinkedIn posts with a hook, insight, and CTA.
6. Updates a Google Sheet and posts to LinkedIn.

## 🛠 Workflow Breakdown

### 📥 1. Google Drive Trigger

- **Trigger:** Watches a folder for new or updated PDF files.
- **Action:** Downloads the updated PDF.

### 📄 2. Extract and Embed Content

- **Extract from File:** Parses the PDF to extract text.
- **Text Splitter:** Breaks the text into chunks.
- **Embeddings (OpenAI):** Converts chunks into vector embeddings.
- **Pinecone Vector Store:** Saves the embeddings, using the book name as the namespace.

### 🧠 3. Post Idea Generation (LangChain Agent)

Uses a prompt to:

- Search the Pinecone DB
- Extract insights
- Format them into 5 LinkedIn post ideas, each with:
  - Hook
  - Insight
  - CTA

A **memory buffer** and a **structured output parser** are used for clean AI interaction.

### ✍️ 4. Post Creation

Each idea is:

- Split
- Rewritten with a GPT model prompt to match LinkedIn tone
- Styled to stay under 600 characters
- Given emojis, hashtags, and tone guidelines

### 📊 5. Google Sheet Integration

- Saves all generated posts to a Google Sheet.
- Marks the status: "published" or "no".

### 🔁 6. Scheduled Publishing

Every day, the workflow:

- Pulls an unpublished post
- Publishes it to LinkedIn
- Updates the post's status and timestamp in the Google Sheet

## ⚙️ Setup Guide

### 📂 Google Drive

- Create a folder for book PDFs
- Connect your Google Drive account to n8n
- Provide an access token with file read permission

### 📊 Google Sheets

- Create a Google Sheet with columns: `bookname`, `hook`, `insight`, `cta`, `postContent`, `published`, `date`
- Add credentials in n8n with read/write permission

### 🧠 Pinecone

- Set up a Pinecone project and index (`linkdenpost`)
- The namespace is auto-named from the book filename (see the sketch below)

### 🔑 API Credentials Required

- **OpenAI API** (for embeddings and post generation)
- **Pinecone API** (for vector storage and retrieval)
- **LinkedIn OAuth2** (to publish posts)
- **Google Drive & Sheets** credentials

## 🔁 Flow Summary

```mermaid
graph TD
  A[Google Drive Trigger] --> B[Download PDF]
  B --> C[Extract Text]
  C --> D[Text Splitter]
  D --> E[Create Embeddings]
  E --> F[Pinecone Vector Store]
  F --> G[LangChain Agent]
  G --> H["Structured Output (5 Post Ideas)"]
  H --> I[Split Ideas]
  I --> J["Format as LinkedIn Post (GPT)"]
  J --> K[Store in Google Sheet]
  L[Schedule Trigger] --> M[Get Unpublished Post]
  M --> N[Post to LinkedIn]
  N --> O[Mark as Published]
```

## 🧪 Prompt Example (Used in the LangChain Agent)

```
You are a content strategist. Search the Pinecone vector DB containing a book.
Generate 5 unique LinkedIn post ideas with:
- A Hook (curiosity driven)
- Insight (summary < 100 words)
- CTA ("Agree or disagree?", etc.)

Respond in structured JSON:
[
  { "Hook": "...", "Insight": "...", "CTA": "..." },
  ...
]
```

## ✅ Output Sample

```json
{
  "Hook": "Why your lab's results might be invalid 😱",
  "Insight": "ISO/IEC 17025 stresses that labs must plan and address risks to impartiality and validity.",
  "CTA": "Does your lab audit for these risks?"
}
```

## 📆 Schedule Control

- Uses a Schedule Trigger to post daily at a set time.
- Ensures automation with LinkedIn and accurate Google Sheet syncing.

## 📝 Notes

- Posts remain professional and concise for a LinkedIn audience
- Works with any PDF book
- Supports multi-book pipelines
- You can filter and tag books by filename or folder to segment post styles
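Since the Pinecone namespace is auto-named from the book filename, the derivation presumably looks something like this sketch; the filename and the stripping rule are assumptions.

```typescript
// Illustrative: deriving the Pinecone namespace from the uploaded file
// name, matching the "namespace auto-named from the book filename"
// behaviour described above. The filename and rule are assumptions.
const fileName = "atomic-habits.pdf";              // from the Drive trigger item
const namespace = fileName.replace(/\.pdf$/i, ""); // "atomic-habits"
```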
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

This advanced automation builds a fully autonomous SEO blog writer using n8n, Scrapeless, LLMs, and the Pinecone vector database. It's powered by a Retrieval-Augmented Generation (RAG) system that collects high-performing blog content, stores it in a vector store, and then generates new blog posts based on that knowledge, endlessly.

### Part 1: Build a Knowledge Base from Popular Blogs

- **Scrape existing articles** from a well-established writer (in this case, Mark Manson) using the Scrapeless node.
- **Extract content from blog pages** and store it in **Pinecone**, a powerful vector database that supports similarity search.
- Use **Gemini Embedding 001** or any other supported embedding model to encode blog content into vectors.
- **Result:** You'll have a searchable vector store of expert-level content, ready to be used for content generation and intelligent search.

### Part 2: SERP Analysis & AI Blog Generation

- Use Scrapeless' SERP node to fetch search results based on your keyword and search intent.
- Send the results to an LLM (like Gemini, OpenRouter, or OpenAI) to generate a keyword analysis report in Markdown, then convert it to HTML.
- Extract long-tail keywords, search intent insights, and content angles from this report.
- Feed everything into another LLM with access to your Pinecone-stored knowledge base, and generate a fully SEO-optimized blog post.

## Setup steps

### Prerequisites

- Scrapeless API key
- Pinecone account and index setup (see the sketch below)
- An embedding model (Gemini, OpenAI, etc.)
- n8n instance with the community node n8n-nodes-scrapeless installed

### Credential Configuration

- Add your Scrapeless and Pinecone credentials in n8n under the "Credentials" tab
- Choose embedding dimensions according to the model you use (e.g., 768 for Gemini Embedding 001)

## Key Highlights

- **Clones a real content creator:** Replicates knowledge and writing style from top-performing blog authors.
- **Auto-scrapes hundreds of blog posts** without being blocked.
- **Stores expert content** in a vector DB to build a reusable knowledge base.
- **Performs real-time SERP analysis** using Scrapeless to fetch and analyze search data.
- **Generates SEO blog drafts** using RAG with detailed keyword intelligence.
- **Output includes:** blog title, HTML summary report, long-tail keywords, and AI-written article body.

## RAG + SEO: The Future of Content Creation

This template combines:

- **AI reasoning** from large language models
- **Reliable data scraping** from Scrapeless
- **Scalable storage** via the Pinecone vector DB
- **Flexible orchestration** using n8n nodes

This is not just an automation; it's a full-stack SEO content machine that enables you to:

- Build a domain-specific knowledge base
- Run intelligent keyword research
- Generate traffic-ready content on autopilot

## 💡 Use Cases

- SaaS content teams cloning competitor success
- Affiliate marketers scaling high-traffic blog production
- Agencies offering automated SEO content services
- AI researchers building personal knowledge bots
- Writers automating first-draft generation with real-world tone
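If you are creating the Pinecone index yourself, a setup along these lines matches the dimension guidance above. It uses the official @pinecone-database/pinecone client; the index name, cloud, and region are assumptions.

```typescript
// Sketch of creating a Pinecone index whose dimension matches the
// embedding model (768 for Gemini Embedding 001, as noted above).
// The index name, cloud, and region here are assumptions.
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

await pc.createIndex({
  name: "seo-blog-knowledge", // hypothetical index name
  dimension: 768,             // must match your embedding model
  metric: "cosine",
  spec: { serverless: { cloud: "aws", region: "us-east-1" } },
});
```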
by Billy Christi
### Who is this for?

This workflow is perfect for:

- **Businesses and teams** who need an automated solution to organize, analyze, and retrieve insights from their internal documents.
- **Researchers** who want to quickly analyze and query large collections of research papers, reports, or datasets.
- **Customer support teams** looking to streamline access to product documentation and support resources.
- **Legal and compliance professionals** needing to reference and query legal documents with confidence.
- **AI enthusiasts and developers** wanting to implement Retrieval-Augmented Generation (RAG) systems without starting from scratch.

### What problem is this workflow solving?

Manually organizing, processing, and searching through documents can be time-consuming, error-prone, and inefficient. This workflow solves that by:

- **Automating document processing** from Google Drive, supporting multiple formats like PDFs, CSVs, and Google Docs.
- **Extracting, chunking, and enhancing document text**, preserving context and improving AI comprehension.
- **Storing vector embeddings** in a secure, scalable Supabase vector database, enabling semantic search and retrieval.
- **Providing an interactive AI chat interface** that allows users to ask natural language questions and get precise, document-based answers.

This means teams can quickly access relevant insights from their document repositories, boosting productivity and ensuring accurate information retrieval.

### Key Features

- 🚀 **End-to-End Document Processing:** From Google Drive upload detection to vector embedding and storage.
- 🔍 **Semantic Search & Retrieval:** Users can ask complex, natural-language questions and receive contextually relevant answers.
- 🤖 **AI-Powered Summaries & Metadata:** Automatically generates document titles and summaries using Google Gemini AI.
- 📝 **Smart Chunking & Contextual Enhancement:** Breaks documents into smart chunks with overlap, preserving context and table integrity.
- 🔐 **Secure & Scalable Vector Database:** Stores and retrieves embeddings in a Supabase vector store for fast, reliable searches.
- 💬 **Conversational AI Interface:** Uses OpenAI to power natural, accurate, and cost-effective AI chat interactions.

### How does this workflow work?

1. Monitors Google Drive for new files
2. Extracts text from PDFs and CSVs (or Google Docs, auto-converted)
3. Splits text into context-preserving chunks (see the sketch below)
4. Enhances chunk quality and stores embeddings in Supabase
5. Enables natural language search and AI-powered chat interactions with the stored documents

### Typical Use Cases

- 📚 Corporate Knowledge Base
- 🔬 Research Paper Analysis
- 📞 Customer Support Document Query
- ⚖️ Legal Document Review and Analysis
- 🔍 Internal Team Documentation Search

### Why You'll Love It

This workflow lets you build a scalable, searchable, and AI-powered document system without needing to write complex code or manage multiple systems. With it, you can:

- Stay organized with automated document processing.
- Deliver faster, more accurate answers to user queries.
- Reduce manual work and improve productivity.
- Gain a competitive edge with cutting-edge AI search capabilities.

### Setup Requirements

- An n8n instance with Google Drive, Supabase, OpenAI, and Gemini credentials configured.
- Access to a Supabase vector store for storing document embeddings.
- Configurable chunk size, overlap, and processing limits (default: 1000 characters per chunk, 20 chunks max).
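The chunking step above can be pictured as a simple sliding window. This sketch uses the template's 1000-character default chunk size, while the 200-character overlap is an assumed value.

```typescript
// Sliding-window chunker: fixed-size chunks with overlap so context
// carries across boundaries. 1000 mirrors the template's default chunk
// size; the 200-character overlap is an assumption.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```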
by Alfonso Corretti
### Who is this for? 🧑🏻🫱🏻🫲🏻🤖

Humans and Robots alike. This workflow can be used as a Chat Trigger as well as a Workflow Trigger. It takes a natural language request and generates a SQL query. The resulting `query` parameter will contain the query, and a `sqloutput` parameter will contain the results of executing that query.

### What's the use case?

This template is most useful paired with other workflows that extract e-mail information and store it in a structured Postgres table, and that use LLMs to understand inquiries about information contained in an e-mail inbox and formulate questions that need answering. Plus, the prompt can easily be adapted to formulate SQL queries over any kind of structured database.

### Privacy and Economics

As the LLM provider I'm using Ollama locally, as I consider my e-mail extremely sensitive information. As the model, phi4-mini does an excellent job balancing quality and efficiency.

### Setup

Upon running for the first time, this workflow automatically triggers a sub-section that reads all tables and extracts their schema into a local file. Then, either by chatting with the workflow in n8n's interface or by using it as a sub-workflow, you will get a `query` and a `sqloutput` response.

### Customizations

If you want to work with just one particular table yet keep edits at bay, append a condition to the "List all tables in a database" step, like so:

```sql
WHERE table_schema='public' AND table_name='my_emails_table_name'
```

To repurpose this workflow to work with any other data corpus in a structured database, inspect the AI Agent user and system prompts and edit them accordingly.
by David Ashby
Complete MCP server exposing 4 AWS Cost and Usage Report Service API operations to AI agents.

### ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add AWS Cost and Usage Report Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

### 🔧 How it Works

This workflow converts the AWS Cost and Usage Report Service API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://cur.{region}.amazonaws.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

### 📋 Available Operations (4 total)

• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.DeleteReportDefinition: Deletes the specified report.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.DescribeReportDefinitions: Lists the AWS Cost and Usage reports available to this account.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.ModifyReportDefinition: Allows you to programmatically update your report preferences.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.PutReportDefinition: Creates a new report using the description that you provide.

### 🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native AWS Cost and Usage Report Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

### 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

### ✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
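The odd-looking operation names above are simply values of the X-Amz-Target header: the CUR API speaks the AWS JSON 1.1 protocol, so every call is a POST to the regional endpoint with the operation named in that header, as this hedged sketch shows. SigV4 signing, which n8n's AWS credentials handle, is omitted here.

```typescript
// Sketch of an AWS JSON 1.1 call to the CUR API. The operation is named
// in the X-Amz-Target header rather than in the path. A real request
// must also carry AWS SigV4 signature headers; n8n's AWS credentials
// handle that, so signing is omitted from this sketch.
const response = await fetch("https://cur.us-east-1.amazonaws.com/", {
  method: "POST",
  headers: {
    "Content-Type": "application/x-amz-json-1.1",
    "X-Amz-Target": "AWSOrigamiServiceGatewayService.DescribeReportDefinitions",
  },
  body: JSON.stringify({}), // DescribeReportDefinitions takes an empty body
});
console.log(await response.json());
```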
by David Ashby
Complete MCP server exposing 8 Bulk WHOIS API operations to AI agents.

### ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Bulk WHOIS API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

### 🔧 How it Works

This workflow converts the Bulk WHOIS API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://localhost:5000
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

### 📋 Available Operations (8 total)

🔧 Batch (4 endpoints)
• GET /batch: Get your batches
• POST /batch: Create a batch. The batch is then processed until all provided items have been completed; at any time it can be fetched to report its current status, optionally with results (see the polling sketch below).
• DELETE /batch/{id}: Delete a batch
• GET /batch/{id}: Get a batch

🔧 Db (1 endpoint)
• GET /db: Query the domain database

🔧 Domains (3 endpoints)
• GET /domains/{domain}/check: Check domain availability
• GET /domains/{domain}/rank: Check domain rank (authority)
• GET /domains/{domain}/whois: WHOIS query for a domain

### 🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Bulk WHOIS API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

### 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

### ✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
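The batch endpoints above suggest a create-then-poll pattern. Here is a hedged sketch against the listing's localhost base URL; the request body shape and the returned field names (`id`, `status`) are assumptions.

```typescript
// Create-then-poll sketch for the /batch endpoints. The request body
// shape and the response fields (id, status) are assumptions; the base
// URL matches the one in this listing.
const base = "http://localhost:5000";

const created = await fetch(`${base}/batch`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ domains: ["example.com", "example.org"] }),
}).then((r) => r.json());

// Poll until the batch reports completion (field names assumed).
let batch;
do {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  batch = await fetch(`${base}/batch/${created.id}`).then((r) => r.json());
} while (batch.status !== "completed");
```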
by David Ashby
Complete MCP server exposing 11 hashlookup CIRCL API operations to AI agents.

### ⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add hashlookup CIRCL API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

### 🔧 How it Works

This workflow converts the hashlookup CIRCL API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://hashlookup.circl.lu
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

### 📋 Available Operations (11 total)

🔧 Bulk (2 endpoints)
• POST /bulk/md5: Bulk search MD5 hashes
• POST /bulk/sha1: Bulk search SHA1 hashes

🔧 Children (1 endpoint)
• GET /children/{sha1}/{count}/{cursor}: Return the children of a given SHA1. The number of elements to return and an offset must be given; if not set, the first 100 elements are returned. A cursor must be given to paginate; the starting cursor is 0.

🔧 Info (1 endpoint)
• GET /info: Get database info

🔧 Lookup (3 endpoints)
• GET /lookup/md5/{md5}: Look up an MD5 hash
• GET /lookup/sha1/{sha1}: Look up a SHA-1 hash
• GET /lookup/sha256/{sha256}: Look up a SHA-256 hash

🔧 Parents (1 endpoint)
• GET /parents/{sha1}/{count}/{cursor}: Return the parents of a given SHA1. The number of elements to return and an offset must be given; if not set, the first 100 elements are returned. A cursor must be given to paginate; the starting cursor is 0.

🔧 Session (2 endpoints)
• GET /session/create/{name}: Create a session key to keep search context. The session is attached to a name; after the session is created, the hashlookup_session header can be set to the session name.
• GET /session/get/{name}: Return the set of matching and non-matching hashes from a session.

🔧 Stats (1 endpoint)
• GET /stats/top: Get top queries

### 🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native hashlookup CIRCL API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

### 💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

### ✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
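The /children and /parents endpoints paginate with a count and a cursor starting at 0, so a walk over all children might look like this sketch; the example hash and the response field names are assumptions.

```typescript
// Cursor pagination over /children/{sha1}/{count}/{cursor}, starting at
// cursor 0 as the API describes. The example SHA1 and the `children`
// and `cursor` response field names are assumptions.
const sha1 = "ffffff53b8d4bbf6456844e4fd918d0ec7d10d9f"; // example hash
let cursor = "0";
do {
  const page = await fetch(
    `https://hashlookup.circl.lu/children/${sha1}/100/${cursor}`
  ).then((r) => r.json());
  // ...process page.children here (field name assumed)
  cursor = page.cursor ?? "";
} while (cursor && cursor !== "0");
```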