by Václav Čikl
Overview

This workflow automates the entire process of creating professional subtitle (.SRT) and synced lyrics (.LRC) files from audio recordings. Upload your vocal track, let Whisper AI transcribe it with precise timestamps, and GPT-5-nano will segment it into natural, singable lyric lines. An optional quality-control step lets you manually refine the output while maintaining perfect timestamp alignment.

Key Features

- **Whisper AI Transcription**: Word-level timestamps with multi-language support via ISO codes
- **Intelligent Segmentation**: GPT-5-nano formats transcriptions into natural lyric lines (2–8 words per line)
- **Quality Control Option**: Download, edit, and re-upload corrections with smart timestamp matching
- **Advanced Alignment**: A Levenshtein-distance algorithm preserves timestamps during manual edits
- **Dual Format Export**: Generates both .SRT (video subtitles) and .LRC (synced lyrics) files
- **No Storage Needed**: Files are generated in-memory for instant download
- **Multi-Language**: Supports many languages through the Whisper API

Use Cases

- Generate synced lyrics for music video releases on YouTube
- Create .LRC files for Musixmatch, Apple Music, and Spotify
- Prepare professional subtitles for social media content
- Batch process subtitle files for catalog releases
- Maintain consistent lyric formatting across artists
- Streamline content delivery for streaming platforms
- Speed up video editing workflows

Perfect For

- Musicians & artists
- Record labels
- Content creators

What You'll Need

Required Setup

- **OpenAI API Key** for Whisper transcription and GPT-5-nano segmentation

Recommended

- **Input Format**: MP3 audio files (max 25MB)
- **Content**: Clean vocal tracks work best (isolated vocals recommended, though full mixes still produce good results)
- **Languages**: Any language supported by Whisper (specify via ISO code)

How It Works

Automatic Mode (No Quality Check)

1. Upload your MP3 vocal track to the workflow
2. Transcription: Whisper AI processes the audio with word-level timestamps
3. Segmentation: GPT-5-nano formats the text into natural lyric lines
4. Generation: The workflow creates .SRT and .LRC files
5. Download your ready-to-use subtitle files

Manual Quality Control Mode

1. Upload your MP3 vocal track and enable the quality check
2. Transcription: Whisper AI processes the audio with timestamps
3. Initial Segmentation: GPT-5-nano creates a first draft
4. Download the .TXT file for review
5. Edit the lyrics in any text editor (keep the line structure intact)
6. Re-upload the corrected .TXT file
7. Smart Matching: A diff algorithm aligns your changes with the original timestamps
8. Download the final .SRT and .LRC files with perfect timing

Technical Details

- **Transcription API**: OpenAI Whisper (/v1/audio/transcriptions)
- **Segmentation Model**: GPT-5-nano with a custom lyric-focused prompt
- **System Prompt**: "You are helping with preparing song lyrics for musicians. Take the following transcription and split it into lyric-like lines. Keep lines short (2–8 words), natural for singing/rap phrasing, and do not change the wording."
- **Timestamp Matching**: Levenshtein distance + alignment algorithm
- **File Size Limit**: 25MB (n8n platform default)
- **Processing**: All in-memory, no disk storage
- **Cost**: Based on Whisper API usage (varies with audio length)

Output Formats

.SRT (SubRip Subtitle)

Standard format for:
- YouTube video subtitles
- Video editing software (Premiere, DaVinci Resolve, etc.)
- Media players (VLC, etc.)
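As a concrete illustration of the file-generation step, here is a minimal sketch in the style of an n8n Code node that converts segmented lines into .SRT text. The `lines`, `start`, `end`, and `text` field names are assumptions for illustration; the actual workflow's field names may differ.

```javascript
// Minimal sketch: build .SRT text from segmented lyric lines.
// Assumes each line carries start/end times in seconds (field names illustrative).
function toSrtTime(sec) {
  const ms = Math.round(sec * 1000);
  const h = String(Math.floor(ms / 3600000)).padStart(2, '0');
  const m = String(Math.floor((ms % 3600000) / 60000)).padStart(2, '0');
  const s = String(Math.floor((ms % 60000) / 1000)).padStart(2, '0');
  const f = String(ms % 1000).padStart(3, '0');
  return `${h}:${m}:${s},${f}`; // SRT uses a comma before the milliseconds
}

const lines = $json.lines; // e.g. [{ text: 'first lyric line', start: 1.2, end: 3.4 }, ...]
const srt = lines
  .map((l, i) => `${i + 1}\n${toSrtTime(l.start)} --> ${toSrtTime(l.end)}\n${l.text}`)
  .join('\n\n');

return [{ json: { srt } }];
```

The .LRC conversion follows the same pattern with `[mm:ss.xx]` prefixes instead of SRT cue blocks.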
.LRC (Lyric File)

Synced lyrics format for:
- Musixmatch
- Apple Music
- Spotify
- Music streaming services
- Audio players with lyrics display

Pro Tips

💡 For Best Results:
- Use isolated vocal tracks when possible (remove instrumentals)
- Ensure clear recordings with minimal background noise
- For quality-check edits, only modify the text content; don't change line breaks
- Test with shorter tracks first to optimize your workflow

⚙️ Customization Options:
- Adjust the GPT segmentation style by modifying the system prompt
- Add language detection or force specific languages in the Whisper settings
- Customize output file naming conventions in the final nodes
- Extend the workflow with additional format exports if needed

Workflow Components

1. Audio Input: Upload interface for MP3 files
2. Whisper Transcribe: OpenAI API call with timestamp extraction
3. Post-Processing: GPT-5-nano segmentation into lyric format
4. Routing Quality Check: Decision point for manual review
5. Timestamp Matching: Diff and alignment for corrected text
6. Subtitles Preparation: JSON formatting for both output types
7. File Generation: Convert to .SRT and .LRC formats
8. Download Nodes: Export final files

Template Author

Questions or need help with setup?
📧 Email: xciklv@gmail.com
💼 LinkedIn: https://www.linkedin.com/in/vaclavcikl/
by n8n Team
This workflow adds new HubSpot contacts to a Mailchimp email list.

Prerequisites
- HubSpot account and HubSpot credentials
- Mailchimp account and Mailchimp credentials

How it works
1. A Cron node triggers the workflow every day at 7:00.
2. The HubSpot node searches for newly created contacts.
3. The Mailchimp node creates a new contact in a given audience and adds a 'subscribed' status.
by n8n Team
This workflow creates a new contact in Mautic when a new customer is created in Shopify. By default, the workflow fills in the first name, last name, and email address; you can add any other fields you require.

Prerequisites
- Shopify account and Shopify credentials
- Mautic account and Mautic credentials

How it works
1. Triggers on a new customer in Shopify.
2. Sends the required data to Mautic to create a new contact.
by n8n Team
This workflow turns a light red when an update is made to a GitHub repository. By default, updates include pull requests, issues, and pushes, to name a few.

Prerequisites
- GitHub credentials
- Home Assistant credentials

How it works
1. Triggers on the 'On any update in repository' node.
2. Uses Home Assistant to turn on a light, then configures the light to turn red.
by n8n Team
This workflow provides a simple example of how to use itemMatching(itemIndex: Number) in the Code node to retrieve linked items from earlier in the workflow.
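For reference, a minimal sketch of the pattern this template demonstrates, written for a Code node in 'Run Once for All Items' mode; the node name 'Customer Datastore' is illustrative:

```javascript
// For each incoming item, fetch the item it was derived from in an
// earlier node and merge one of its fields back in.
const results = [];
for (let i = 0; i < items.length; i++) {
  const linked = $('Customer Datastore').itemMatching(i); // linked item for input index i
  results.push({ json: { ...items[i].json, email: linked.json.email } });
}
return results;
```

itemMatching is useful when an intermediate node (an aggregation, for example) broke the one-to-one pairing between items, because it follows n8n's item linking back to the correct source item instead of relying on index position.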
by Nskha
This n8n template provides a comprehensive solution for managing Key-Value (KV) pairs using Cloudflare's KV storage. It is designed to simplify interaction with Cloudflare's KV storage APIs, enabling users to perform a range of actions such as creating, reading, updating, and deleting namespaces and KV pairs.

Features

- **Efficient Management**: Handle multiple KV operations seamlessly.
- **User-Friendly**: Easy to use with pre-configured Cloudflare API credentials in n8n.
- **Customizable**: Flexible for integration into larger workflows (copy/paste your preferred part).

Prerequisites

- n8n workflow automation tool (version 1.19.0 or later)
- A Cloudflare account with access to KV storage
- Pre-configured Cloudflare API credentials in n8n

Workflow Overview

This workflow is divided into three main sections for ease of use:

1. Single Actions: Perform individual operations on KV pairs.
2. Bulk Actions: Handle multiple KV pairs simultaneously.
3. Specific Actions: Execute specific tasks like renaming namespaces.

Key Components

- **Manual Trigger**: Initiates the workflow.
- **Account Path Node**: Sets the path for account details, a prerequisite for all actions.
- **HTTP Request Nodes**: Facilitate interaction with Cloudflare's API for the various operations.
- **Sticky Notes**: Provide quick documentation links and brief descriptions of each node's function.

Usage

1. Setup Account Path: Input your Cloudflare account details in the 'Account Path' node. You can find your account path in your Cloudflare dashboard URL.
2. Choose an Action: Select the desired operation from the workflow.
3. Configure Nodes: Adjust parameters in the HTTP Request nodes as needed (each node contains a sticky note with a direct link to its documentation page).
4. Execute Workflow: Trigger the workflow manually to perform the selected operations.

Detailed Node Descriptions

I covered the full set of API calls for Cloudflare's KV product in this workflow.

API NODE: Delete KV
- **Type**: HTTP Request
- **Function**: Deletes a specified KV pair within a namespace.
- **Configuration**: Requires the namespace ID and KV pair name. It automatically fetches these details from the preceding "List KV-NMs" and "Set KV-NM Name" nodes.
- **Documentation**: Delete KV Pair API

API NODE: Create KV-NM
- **Type**: HTTP Request
- **Function**: Creates a new Key-Value Namespace.
- **Configuration**: Requires a title for the new namespace. Uses the account information provided by the "Account Path" node.
- **Documentation**: Create Namespace API

API NODE: Delete KV1
- **Type**: HTTP Request
- **Function**: Renames an existing Key-Value Namespace.
- **Configuration**: Requires the old namespace name and the new desired name, retrieved from the "KV to Rename" and "List KV-NMs" nodes.
- **Documentation**: Rename Namespace API

API NODE: Write KVs inside NM
- **Type**: HTTP Request
- **Function**: Writes multiple Key-Value pairs inside a specified namespace.
- **Configuration**: Requires a JSON array of key-value pairs along with their namespace identifier. Fetches the namespace ID from the "List KV-NMs" node.
- **Documentation**: Write Multiple KV Pairs API

API NODE: Read Value Of KV In NM
- **Type**: HTTP Request
- **Function**: Reads the value of a specific Key-Value pair in a namespace.
- **Configuration**: Requires the key's name and namespace ID, obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- **Documentation**: Read KV Pair API

API NODE: Read MD from Key
- **Type**: HTTP Request
- **Function**: Reads the metadata of a specific key in a namespace.
- **Configuration**: Like the "Read Value Of KV In NM" node, it requires the key's name and namespace ID, obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- **Documentation**: Read Metadata API

> The remaining nodes are documented inside the workflow with sticky notes explaining what to do.

Best Practices

- **Modular Use**: Extract specific parts of the workflow for isolated tasks.
- **Validation**: Ensure correct namespace and KV pair names before execution.
- **Security**: Rotate your Cloudflare API credentials regularly for secure access, and scope the API token to KV access only.

Keywords: Cloudflare KV, n8n workflow automation, API integration, key-value storage management.
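To make the "Write KVs inside NM" node above concrete, here is a minimal sketch of the bulk-write call it makes; the account and namespace IDs are placeholders, and the exact fields should be checked against Cloudflare's current KV docs:

PUT https://api.cloudflare.com/client/v4/accounts/{account_id}/storage/kv/namespaces/{namespace_id}/bulk

```json
[
  { "key": "greeting", "value": "hello" },
  { "key": "config:theme", "value": "dark", "expiration_ttl": 3600 }
]
```

Each array entry writes one KV pair; optional fields such as expiration_ttl control per-key expiry.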
by David Roberts
Sometimes you want to take a different action in your error workflow based on the data that was flowing through it. This template illustrates how to do that (more specifically, how to retrieve the data of a webhook node).

How it works
1. Use the 'n8n' node to fetch the data of the failed execution.
2. Parse that data to find webhook nodes and extract the data of the one that was executed.
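A minimal sketch of the parsing step, assuming the 'n8n' node fetched the failed execution with its data included; the exact execution-data shape can vary between n8n versions, so treat the paths below as illustrative:

```javascript
// Walk the failed execution's runData and collect the output items of any
// node whose name suggests a webhook. The template itself may inspect node
// types instead of names; this is one simple way to do it.
const execution = $json; // output of the 'n8n' node (Get an execution, with data)
const runData = execution.data?.resultData?.runData ?? {};

const webhookData = Object.entries(runData)
  .filter(([name]) => name.toLowerCase().includes('webhook'))
  .map(([name, runs]) => ({
    node: name,
    items: runs[0]?.data?.main?.[0] ?? [], // items emitted on the first run
  }));

return [{ json: { webhookData } }];
```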
by Le Nguyen
This template implements a recursive web crawler inside n8n. Starting from a given URL, it crawls linked pages up to a maximum depth (default: 3), extracts text and links, and returns the collected content via webhook.

🚀 How It Works

1) Webhook Trigger
Accepts a JSON body with a url field. Example payload:

```json
{ "url": "https://example.com" }
```

2) Initialization
Sets crawl parameters: url, domain, maxDepth = 3, and depth = 0. Initializes global static data (pending, visited, queued, pages).

3) Recursive Crawling
Fetches each page (HTTP Request), extracts body text and links (HTML node), then cleans and deduplicates the links. Filters out:
- External domains (only same-site links are followed)
- Anchors (#) and mailto/tel/javascript links
- Non-HTML files (.pdf, .docx, .xlsx, .pptx)

4) Depth Control & Queue
Tracks visited URLs, stops at maxDepth to prevent infinite loops, and uses SplitInBatches to loop over the queue.

5) Data Collection
Saves each crawled page (url, depth, content) into pages[]. When pending = 0, the results are combined.

6) Output
Responds via the Webhook node with:
- combinedContent (all pages concatenated)
- pages[] (array of individual results)

Large results are chunked when they exceed ~12,000 characters.

🛠️ Setup Instructions

1) Import Template
Load from n8n Community Templates.

2) Configure Webhook
Open the Webhook node and copy the Test URL (development) or Production URL (after deploy). You'll POST crawl requests to this endpoint.

3) Run a Test
Send a POST with JSON:

```bash
curl -X POST https://<your-n8n>/webhook/<id> \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com"}'
```

4) View Response
The crawler returns a JSON object containing combinedContent and pages[].

⚙️ Configuration

- **maxDepth**: Default 3. Adjust in the Init Crawl Params (Set) node.
- **Timeouts**: The HTTP Request node timeout is 5 seconds per request; increase if needed.
- **Filtering Rules**: Only same-domain links are followed (apex and www are treated as the same site). Skips anchors, mailto:, tel:, and javascript: links, as well as document links (.pdf, .docx, .xlsx, .pptx). You can tweak the regex and logic in the Queue & Dedup Links (Code) node, sketched at the end of this description.

📌 Limitations

- No JavaScript rendering (static HTML only)
- No authentication/cookies/session handling
- Large sites can be slow or hit timeouts; chunking mitigates response size

✅ Example Use Cases

- Extract text across your site for AI ingestion / embeddings
- SEO/content audits and internal link checks
- Build a lightweight page corpus for downstream processing in n8n

⏱️ Estimated Setup Time
~10 minutes (import → set webhook → test request)
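For reference, a minimal sketch of the filtering logic in the Queue & Dedup Links (Code) node, assuming the item carries url, domain, depth, and a links array, and that visited URLs are tracked in workflow static data:

```javascript
// Normalize, filter, and deduplicate extracted links before queueing them.
const staticData = $getWorkflowStaticData('global');
const visited = new Set(staticData.visited ?? []);
const skipExt = /\.(pdf|docx?|xlsx?|pptx?)$/i;
const sameSite = (host) => host.replace(/^www\./, '') === $json.domain;

const queued = [...new Set(($json.links ?? []).filter(Boolean))]
  .filter((href) => !/^(#|mailto:|tel:|javascript:)/i.test(href))
  .map((href) => { try { return new URL(href, $json.url); } catch { return null; } })
  .filter((u) => u && sameSite(u.hostname) && !skipExt.test(u.pathname))
  .map((u) => u.origin + u.pathname + u.search) // drop #fragments
  .filter((u) => !visited.has(u));

return queued.map((url) => ({ json: { url, depth: ($json.depth ?? 0) + 1 } }));
```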
by David Ashby
🛠️ NASA Tool MCP Server

Complete MCP server exposing all NASA Tool operations to AI agents. Zero configuration needed; all 15 operations are pre-built.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every NASA Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n NASA Tool node with full error handling

📋 Available Operations (15 total)

Every possible NASA Tool operation is included:

🔧 Asteroidneobrowse (1 operation)
• Get many asteroid NEOs

🔧 Asteroidneofeed (1 operation)
• Get an asteroid NEO feed

🔧 Asteroidneolookup (1 operation)
• Get an asteroid NEO lookup

🔧 Astronomypictureoftheday (1 operation)
• Get the astronomy picture of the day

🔧 Donkicoronalmassejection (1 operation)
• Get a DONKI coronal mass ejection

🔧 Donkihighspeedstream (1 operation)
• Get a DONKI high speed stream

🔧 Donkiinterplanetaryshock (1 operation)
• Get a DONKI interplanetary shock

🔧 Donkimagnetopausecrossing (1 operation)
• Get a DONKI magnetopause crossing

🔧 Donkinotifications (1 operation)
• Get DONKI notifications

🔧 Donkiradiationbeltenhancement (1 operation)
• Get a DONKI radiation belt enhancement

🔧 Donkisolarenergeticparticle (1 operation)
• Get a DONKI solar energetic particle

🔧 Donkisolarflare (1 operation)
• Get a DONKI solar flare

🔧 Donkiwsaenlilsimulation (1 operation)
• Get a DONKI WSA-Enlil simulation

🔧 Earthassets (1 operation)
• Get Earth assets

🔧 Earthimagery (1 operation)
• Get Earth imagery

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native NASA Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every NASA Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
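A note on the $fromAI() placeholders mentioned above: inside each tool node, parameters are filled with expressions like the one below, which the connected AI agent resolves at runtime (the parameter name and description here are illustrative):

```
{{ $fromAI('asteroidId', 'The reference ID of the asteroid NEO to look up', 'string') }}
```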
by David Ashby
Complete MCP server exposing 14 Domains-Index API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Domains-Index API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Domains-Index API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to /v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)

🔧 Domains (9 endpoints)
• GET /domains/search: Domains database search
• GET /domains/tld/{zone_id}: Get TLD records
• GET /domains/tld/{zone_id}/download: Download the whole dataset for a TLD
• GET /domains/tld/{zone_id}/search: Domains search for a TLD
• GET /domains/updates/added: Get added domains (latest if date not specified)
• GET /domains/updates/added/download: Download added domains (latest if date not specified)
• GET /domains/updates/deleted: Get deleted domains (latest if date not specified)
• GET /domains/updates/deleted/download: Download deleted domains (latest if date not specified)
• GET /domains/updates/list: List of updates

🔧 Info (5 endpoints)
• GET /info/api: GET /info/api
• GET /info/stat/: Returns overall statistics
• GET /info/stat/{zone}: Returns statistics for a specific zone
• GET /info/tld/: Returns overall TLD info
• GET /info/tld/{zone}: Returns statistics for a specific zone

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Domains-Index API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Rahi
Workflow: Track Email Campaign Engagement Analytics with Smartlead and Google Sheets

Automatically fetch lead-level email engagement analytics (opens, clicks, replies, unsubscribes, bounces) from Smartlead and update them in Google Sheets. Use this to keep a single, always-fresh source of truth for campaign performance and sequence effectiveness.

Summary

Pulls Smartlead campaign analytics on a schedule and writes them to a Google Sheet (append or update). Works with pagination, avoids duplicates via a stable key, and is ready for dashboards, pivots, or BI tools.

What This Workflow Does

- Collects campaign stats from Smartlead (per lead, per sequence).
- Handles pagination safely (offset/limit).
- Writes to Google Sheets using appendOrUpdate with a matching column to prevent duplicates.
- Can run on a schedule for near real-time analytics.

Node Structure Overview

| Step | Node | Purpose |
|---|---|---|
| 1️⃣ | Schedule Trigger | Starts the workflow on a cadence (e.g., hourly) |
| 2️⃣ | Code (Pagination Generator) | Emits {offset, limit} pairs (e.g., 0..9900, step 100) |
| 3️⃣ | Split in Batches | Sends each pagination pair to the API sequentially |
| 4️⃣ | HTTP Request (Smartlead) | GET /campaigns/{campaign_id}/statistics with offset/limit |
| 5️⃣ | Split Out | Turns the API data[] array into one item per lead record |
| 6️⃣ | Google Sheets (appendOrUpdate) | Upserts rows by stats_id into the EngagedLeads tab |
| 7️⃣ | Loop Back | Continues until all batches have been processed |

Step-by-Step Setup

Prerequisites
- Smartlead account + API key with access to campaign statistics.
- Google account + Google Sheets OAuth connected in n8n.

Create the Google Sheet
- Spreadsheet name: Email Analytics (can be anything).
- Tab name: EngagedLeads.
- Add these exact headers (first row): lead_name, lead_email, lead_category, sequence_number, stats_id, email_subject, sent_time, open_time, click_time, reply_time, open_count, click_count, is_unsubscribed, is_bounced

Configure the Schedule Trigger
- Choose a frequency (e.g., every 2 hours).
- If you're testing, set a single run or a short cadence.

Configure the Code Node (Pagination)
Emit N items like:
{ "offset": 0, "limit": 100 }
{ "offset": 100, "limit": 100 }
...
100 is a good default limit. For up to 10,000 records, generate 100 offsets (a sketch of this generator appears after this setup section).

Configure the Smartlead API Node
- Method: GET
- URL: https://server.smartlead.ai/api/v1/campaigns/{campaign_id}/statistics
- Query parameters:
  - api_key = <YOUR_SMARTLEAD_API_KEY>
  - offset = {{ $json.offset }}
  - limit = {{ $json.limit }}
- Map the response to JSON.

Split Out the Response
Use a Split Out (or similar) node to iterate over data[] so each lead record becomes one item.

Google Sheets Node (Append or Update)
- Operation: appendOrUpdate.
- Document: Your Email Analytics sheet.
- Sheet/Tab: EngagedLeads.
- Matching Column: stats_id.
- Map fields from the Smartlead response to sheet columns:
  - lead_name ← lead name (or composed from first/last if provided)
  - lead_email ← email
  - lead_category ← category/type if available
  - sequence_number ← sequence step number
  - stats_id ← stable identifier (e.g., Smartlead stats_id or message ID)
  - email_subject ← subject
  - sent_time, open_time, click_time, reply_time ← timestamps
  - open_count, click_count ← integers
  - is_unsubscribed, is_bounced ← booleans

If the same stats_id arrives again, the row is updated, not appended.

Test and Activate
- Run once manually to verify the API and sheet mapping.
- Check the sheet for new/updated rows.
- Activate the workflow to run automatically.
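Here is the pagination generator sketched as an n8n Code node; adjust total to your expected record count:

```javascript
// Emit {offset, limit} pairs covering up to 10,000 records in pages of 100.
const limit = 100;
const total = 10000; // adjust to your expected record count
const out = [];
for (let offset = 0; offset < total; offset += limit) {
  out.push({ json: { offset, limit } });
}
return out;
```

Each emitted item feeds one Smartlead API call via the Split in Batches loop described above.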
Smartlead API Reference (Used by This Workflow)

- **Endpoint**: GET https://server.smartlead.ai/api/v1/campaigns/{campaign_id}/statistics
- **Required query parameters**: api_key (string), offset (number), limit (number)
- **Typical response (trimmed example)**:

```json
{
  "data": [
    {
      "lead_name": "Jane Doe",
      "lead_email": "jane@example.com",
      "sequence_number": 2,
      "stats_id": "15b6ff3a-...-b2b9f343c2e1",
      "email_subject": "Quick intro",
      "sent_time": "2025-10-08T10:18:55.496Z",
      "open_time": "2025-10-08T10:20:10.000Z",
      "click_time": null,
      "reply_time": null,
      "open_count": 1,
      "click_count": 0,
      "is_unsubscribed": false,
      "is_bounced": false
    }
  ],
  "total": 1234
}
```

Google Sheets Structure (Recommended)

- Spreadsheet: Email Analytics
- Tab: EngagedLeads
- Columns: lead_name, lead_email, lead_category, sequence_number, stats_id, email_subject, sent_time, open_time, click_time, reply_time, open_count, click_count, is_unsubscribed, is_bounced
- Matching Column: stats_id (prevents duplicates and allows updates)

Customization Tips

- **Multiple Campaigns**: Duplicate the workflow and set a different {campaign_id}, and/or write results to a separate tab in your Google Sheet.
- **Batch Size**: Increase or decrease the limit value (e.g., 200) in your Code node for fewer or more API calls.
- **Filtering**: Add a Code or IF node to skip rows where is_bounced = true or is_unsubscribed = true.
- **Dashboards**: Create a new Dashboard tab in Google Sheets and visualize your data with built-in charts, or connect it to Looker Studio for advanced visualization.
- **Enrichment**: Join this dataset with your CRM data (e.g., HubSpot or Salesforce) using lead_email as a key to gain deeper customer insights.

Security and Publishing Notes

- **Do not hardcode** your Smartlead API key in the workflow export. Use n8n credentials or environment variables instead.
- When sharing the template publicly, replace sensitive values with placeholders like <YOUR_SMARTLEAD_API_KEY> and <YOUR_GOOGLE_SHEET_ID>.
- Keep your Google Sheet private unless you intentionally want to share it publicly.

Troubleshooting

- **No rows in Sheets**: Verify that the API response includes data[], confirm that the Split Out node is configured correctly, and check the field mappings.
- **Duplicates**: Ensure the Google Sheets node has its matching column set to stats_id.
- **Rate limits**: Increase the schedule interval, add a short Wait node between batches, or reduce the limit size.
- **Mapping errors**: Ensure that column names in Sheets exactly match your field mappings; they are case-sensitive.
- **Timezone differences**: Smartlead timestamps are in UTC. Convert them downstream if your local timezone differs.

Example Use Case

Run this workflow hourly to maintain a live, company-wide Email Engagement Sheet.

- **Sales teams** can monitor replies and active leads.
- **Marketing teams** can track open and click rates by sequence.
- **Operations** can export monthly summaries, with no Smartlead login required.

Tags
Smartlead, EmailMarketing, Automation, GoogleSheets, Analytics, CRM, MarketingOps
by David Ashby
Complete MCP server exposing 9 Api2Pdf (PDF Generation, Powered by AWS Lambda) API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Api2Pdf (PDF Generation, Powered by AWS Lambda) credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Api2Pdf (PDF Generation, Powered by AWS Lambda) API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://v2018.api2pdf.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (9 total)

🔧 Chrome (3 endpoints)
• POST /chrome/html: Convert raw HTML to PDF
• GET /chrome/url: Convert URL to PDF
• POST /chrome/url: Convert URL to PDF

🔧 Libreoffice (1 endpoint)
• POST /libreoffice/convert: Convert office document or image to PDF

🔧 Merge (1 endpoint)
• POST /merge: Merge multiple PDFs together

🔧 Wkhtmltopdf (3 endpoints)
• POST /wkhtmltopdf/html: Convert raw HTML to PDF
• GET /wkhtmltopdf/url: Convert URL to PDF
• POST /wkhtmltopdf/url: Convert URL to PDF

🔧 Zebra (1 endpoint)
• GET /zebra: Generate bar codes and QR codes with ZXING

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Api2Pdf API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.