by Khairul Muhtadin
This automated TLDW (Too Long; Didn't Watch) generator uses Decodo's scraping API to extract complete video transcripts and metadata, then uses Google Gemini to create intelligent summaries with key points, a chapter breakdown, tools mentioned, and actionable takeaways, eliminating hours of manual note-taking and video watching.

## Why Use This Workflow?
- **Time Savings:** Convert a 2-hour video into a readable 5-minute summary, reducing research time by 95%
- **Comprehensive Coverage:** Captures key points, chapters, tools, quotes, and actionable steps that manual notes often miss
- **Instant Accessibility:** Receive structured summaries directly in Telegram within 30-60 seconds of sharing a link
- **Multi-Language Support:** Process transcripts in multiple languages supported by YouTube's auto-caption system

## Ideal For
- **Content Creators & Researchers:** Quickly extract insights from competitor videos, educational content, or industry talks without watching hours of footage
- **Students & Educators:** Generate study notes from lecture recordings, online courses, or tutorial videos with chapter-based breakdowns
- **Marketing Teams:** Analyze competitor content strategies, extract tools and techniques mentioned, and identify trending topics across multiple videos
- **Busy Professionals:** Stay updated with conference talks, webinars, or industry updates by reading summaries instead of watching full recordings

## How It Works
1. **Trigger:** User sends any YouTube URL (youtube.com or youtu.be) to a configured Telegram bot
2. **Data Collection:** The workflow extracts the video ID and simultaneously fetches the full transcript and metadata (title, channel, views, duration, chapters, tags) via the Decodo API
3. **Processing:** Raw transcript data is extracted and cleaned, while metadata is parsed into structured fields including formatted statistics and chapter timestamps
4. **AI Processing:** Google Gemini Flash analyzes the transcript to generate a structured summary covering a one-line overview, key points, main topics by chapter, tools mentioned, target audience, practical takeaways, and notable quotes
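The video ID extraction in step 2 typically reduces to a single regular expression. The following Code-node sketch illustrates the idea; the input field (message.text) and the thrown error message are illustrative assumptions, not the template's exact implementation:

```javascript
// Illustrative n8n Code-node sketch (not the template's actual code):
// extract the 11-character YouTube video ID from a shared link.
const url = $json.message?.text ?? '';

// Handles youtube.com/watch?v=ID, youtube.com/shorts/ID, and youtu.be/ID forms.
const match = url.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  throw new Error('Not a YouTube URL');
}

return [{ json: { videoId: match[1], languageCode: 'en' } }];
```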
## Setup Guide

### Prerequisites
| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Telegram Bot API | Essential | Receives video links and delivers summaries |
| Decodo Scraper API | Essential | Extracts YouTube transcripts and metadata |
| Google Gemini API | Essential | AI-powered summary generation |

### Installation Steps
1. Import the JSON file into your n8n instance
2. Configure credentials:
   - **Telegram Bot API:** Create a bot via @BotFather on Telegram, obtain the API token, and configure it in n8n Telegram credentials
   - **Decodo API:** Sign up at the Decodo Dashboard, get your API key, and create an HTTP Header Auth credential with header name "Authorization" and value "Basic [YOUR_API_KEY]"
   - **Google Gemini API:** Obtain an API key from Google AI Studio and configure it in n8n Google Palm API credentials
3. Update environment-specific values:
   - In the "Alert Admin" node, replace YOUR_CHAT_ID with your personal Telegram user ID for error notifications
   - Optionally adjust the languageCode in the "Set: Video ID & Config" node (default: "en")
4. Customize settings: Modify the AI prompt in the "Generate TLDR" node to adjust summary structure and depth
5. Test execution: Send a YouTube link to your Telegram bot and verify you receive the "Processing..." notification, the video info card, and the formatted summary chunks

## Technical Details

### Workflow Logic
The workflow employs parallel processing for efficiency: the transcript and metadata are fetched simultaneously after video ID extraction. Once both API calls complete, the transcript feeds directly into Gemini AI while the metadata is parsed separately. The merge node combines the AI output with the structured metadata before splitting it into Telegram-friendly chunks. Error handling is isolated on a separate branch triggered by any node failure, formatting error details and alerting admins without disrupting the main flow.

### Customization Options
**Basic Adjustments:**
- **Language Selection:** Change languageCode from "en" to "id", "es", "fr", etc. to fetch transcripts in different languages (YouTube must have captions available)
- **Summary Style:** Edit the prompt in "Generate TLDR" to focus on specific aspects (e.g., "focus only on technical tools mentioned" or "create a summary for beginners")
- **Message Length:** Adjust maxCharsPerChunk (currently 4000) to create longer or shorter message splits based on preference

**Advanced Enhancements:**
- **Database Storage:** Add a Postgres/Airtable node after "Merge: Data + Summary" to archive all summaries with timestamps and user IDs for a searchable knowledge base (medium complexity)
- **Multi-Model Comparison:** Duplicate the "Generate TLDR" chain and connect GPT-4 or Claude, then merge the results to show different AI perspectives on the same video (high complexity)
- **Auto-Translation:** Insert a translation node after summary generation to deliver summaries in the user's preferred language automatically (medium complexity)

### Troubleshooting
Common issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| "Not a YouTube URL" error | URL format not recognized | Ensure the URL sent contains youtube.com or youtu.be |
| No transcript available | Video lacks captions or wrong language | Check that the video has auto-generated or manual captions; change languageCode to match the available options |
| Decodo API 401/403 error | Invalid or expired API key | Verify the API key in the HTTP Header Auth credential; regenerate it from the Decodo dashboard if needed |
| Error notifications not received | Wrong chat ID in Alert Admin node | Get your Telegram user ID from @userinfobot and update the node |

### Use Case Examples
**Scenario 1: Marketing Agency Competitive Analysis**
- **Challenge:** An agency needs to analyze 50+ competitor YouTube videos monthly to identify content strategies, tools used, and messaging angles; watching all videos would require 80+ hours
- **Solution:** Drop YouTube links into a shared Telegram group with the bot. Summaries are generated instantly, highlighting tools mentioned, key talking points, and target audience insights
- **Result:** Research time reduced from 80 hours to 6 hours monthly (93% time savings), with a searchable archive of all competitor content strategies

**Created by:** Khaisa Studio
**Category:** AI-Powered Automation
**Tags:** YouTube, AI, Telegram, Summarization, Content Analysis, Decodo, Gemini

Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
by Nskha
This n8n template provides a comprehensive solution for managing Key-Value (KV) pairs using Cloudflare's KV storage. It's designed to simplify interaction with Cloudflare's KV storage APIs, enabling users to perform a range of actions like creating, reading, updating, and deleting namespaces and KV pairs.

## Features
- **Efficient Management:** Handle multiple KV operations seamlessly.
- **User-Friendly:** Easy to use with pre-configured Cloudflare API credentials within n8n.
- **Customizable:** Flexible for integration into larger workflows (copy/paste your preferred part).

## Prerequisites
- n8n workflow automation tool (version 1.19.0 or later).
- A Cloudflare account with access to KV storage.
- Pre-configured Cloudflare API credentials in n8n.

## Workflow Overview
This workflow is divided into three main sections for ease of use:
1. **Single Actions:** Perform individual operations on KV pairs.
2. **Bulk Actions:** Handle multiple KV pairs simultaneously.
3. **Specific Actions:** Execute specific tasks like renaming namespaces.

## Key Components
- **Manual Trigger:** Initiates the workflow.
- **Account Path Node:** Sets the path for account details, a prerequisite for all actions.
- **HTTP Request Nodes:** Facilitate interaction with Cloudflare's API for various operations.
- **Sticky Notes:** Provide quick documentation links and brief descriptions of each node's function.

## Usage
1. **Setup Account Path:** Input your Cloudflare account details in the 'Account Path' node. You can find your account path in your Cloudflare dashboard URL.
2. **Choose an Action:** Select the desired operation from the workflow.
3. **Configure Nodes:** Adjust parameters in the HTTP request nodes as needed (each node contains a sticky note with a direct link to its documentation page).
4. **Execute Workflow:** Trigger the workflow manually to perform the selected operations.

## Detailed Node Descriptions
This workflow covers the full set of API calls for Cloudflare's KV product.

### API NODE: Delete KV
- **Type:** HTTP Request
- **Function:** Deletes a specified KV pair within a namespace.
- **Configuration:** This node requires the namespace ID and KV pair name. It automatically fetches these details from preceding nodes, specifically from the "List KV-NMs" and "Set KV-NM Name" nodes.
- **Documentation:** Delete KV Pair API

### API NODE: Create KV-NM
- **Type:** HTTP Request
- **Function:** Creates a new Key-Value Namespace.
- **Configuration:** Users need to input the title for the new namespace. This node uses the account information provided by the "Account Path" node.
- **Documentation:** Create Namespace API

### API NODE: Delete KV1
- **Type:** HTTP Request
- **Function:** Renames an existing Key-Value Namespace.
- **Configuration:** Requires the old namespace name and the new desired name. It retrieves these details from the "KV to Rename" and "List KV-NMs" nodes.
- **Documentation:** Rename Namespace API

### API NODE: Write KVs inside NM
- **Type:** HTTP Request
- **Function:** Writes multiple Key-Value pairs inside a specified namespace.
- **Configuration:** This node needs a JSON array of key-value pairs along with their namespace identifier. It fetches the namespace ID from the "List KV-NMs" node.
- **Documentation:** Write Multiple KV Pairs API
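For orientation, the bulk-write call that the "Write KVs inside NM" node issues looks roughly like the following fetch sketch. This is a minimal illustration assuming an API token with KV edit permission; the account ID, namespace ID, and sample pairs are placeholders:

```javascript
// Minimal sketch of Cloudflare's KV bulk-write endpoint (assumes an API
// token with KV edit permission; the IDs below are placeholders).
const ACCOUNT_ID = 'your-account-id';
const NAMESPACE_ID = 'your-namespace-id';

const response = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/storage/kv/namespaces/${NAMESPACE_ID}/bulk`,
  {
    method: 'PUT',
    headers: {
      'Authorization': 'Bearer YOUR_API_TOKEN',
      'Content-Type': 'application/json',
    },
    // Each entry is one key/value pair; metadata is optional per pair.
    body: JSON.stringify([
      { key: 'greeting', value: 'hello' },
      { key: 'plan', value: 'pro', metadata: { source: 'n8n' } },
    ]),
  }
);

console.log(await response.json()); // { success: true, ... } on success
```

In the workflow itself, the same request is built in an HTTP Request node, with the namespace ID supplied by the "List KV-NMs" node.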
### API NODE: Read Value Of KV In NM
- **Type:** HTTP Request
- **Function:** Reads the value of a specific Key-Value pair in a namespace.
- **Configuration:** Requires the key's name and namespace ID, which are obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- **Documentation:** Read KV Pair API

### API NODE: Read MD from Key
- **Type:** HTTP Request
- **Function:** Reads the metadata of a specific key in a namespace.
- **Configuration:** Similar to the "Read Value Of KV In NM" node, it needs the key's name and namespace ID, which are obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- **Documentation:** Read Metadata API

> The remaining nodes are documented inside the workflow, with sticky notes explaining what each one does.

## Best Practices
- **Modular Use:** Extract specific parts of the workflow for isolated tasks.
- **Validation:** Ensure correct namespace and KV pair names before execution.
- **Security:** Regularly rotate your Cloudflare API credentials, and scope the API token to KV access only.

Keywords: Cloudflare KV, n8n workflow automation, API integration, key-value storage management.
by Firecrawl
## What this does
Uses Firecrawl to scrape any company website and extract structured business signals from it. The enriched profile is automatically saved to Supabase. A self-hosted, free alternative to paid enrichment APIs like Apollo or Clay, powered by Firecrawl.

## How it works
1. **Webhook** receives a POST request with a url field (bare domain or full URL)
2. **Verify URL** node validates and normalizes the domain (a sketch of this normalization appears after the usage example below)
3. **Firecrawl** scrapes the target website and searches for additional company data
4. **AI Agent** (OpenRouter) extracts structured business signals from the scraped content
5. **Structured Output Parser** formats the result into a clean JSON profile
6. **Supabase** checks for duplicates before inserting, then saves the enriched profile
7. **Respond to Webhook** returns the enriched result (or a 422 error if the URL was invalid)

## Business signals extracted
Company name, industry, pricing model, free trial availability, employee size signal, funding stage, tech stack and integrations detected, target customer profile, trust signals (certifications, reviews, customer count), hiring status, and open roles count.

## Requirements
- Firecrawl API key
- OpenRouter API key (or swap for any OpenAI-compatible model)
- Supabase project (setup SQL provided below)

## Setup
1. Create a Supabase project and run the following SQL in the SQL editor:

```sql
CREATE TABLE lead_enrichment (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  created_at TIMESTAMPTZ DEFAULT now(),
  updated_at TIMESTAMPTZ DEFAULT now(),
  domain TEXT NOT NULL UNIQUE,
  company_name TEXT,
  industry TEXT,
  pricing_model TEXT,
  has_free_trial BOOLEAN,
  employee_signal TEXT,
  funding_stage TEXT,
  tech_stack TEXT[],
  integrations TEXT[],
  target_customer TEXT,
  trust_signals TEXT[],
  hiring BOOLEAN,
  open_roles_count INT,
  raw_scraped_text TEXT,
  enrichment_source TEXT DEFAULT 'firecrawl'
);

CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
  NEW.updated_at = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER set_updated_at
  BEFORE UPDATE ON lead_enrichment
  FOR EACH ROW EXECUTE FUNCTION update_updated_at();
```

2. Add your Firecrawl API key as a credential in n8n
3. Add your OpenRouter API key as a credential (or swap for any OpenAI-compatible provider)
4. Add your Supabase credentials (project URL + service role key)
5. Activate the workflow

## How to use
Send a POST request to the webhook URL:

```bash
curl -X POST https://your-n8n-instance/webhook/your-id \
  -H "Content-Type: application/json" \
  -d '{"url": "firecrawl.dev"}'
```
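For reference, the normalization performed by the Verify URL step might look like this. This is an illustrative sketch, not the node's actual code; it only assumes the documented behavior (accept a bare domain or full URL, reject invalid input so the workflow can answer with a 422):

```javascript
// Illustrative sketch of the Verify URL step: accept a bare domain or a
// full URL and normalize it down to a clean domain, or return null.
function normalizeDomain(input) {
  const raw = String(input ?? '').trim().toLowerCase();
  if (!raw) return null;

  try {
    // Prepend a scheme so bare domains like "firecrawl.dev" parse too.
    const url = new URL(raw.includes('://') ? raw : `https://${raw}`);
    const host = url.hostname.replace(/^www\./, '');
    // Require at least one dot so strings like "localhost" are rejected.
    return host.includes('.') ? host : null;
  } catch {
    return null; // the workflow responds with a 422 in this case
  }
}

console.log(normalizeDomain('https://www.firecrawl.dev/pricing')); // "firecrawl.dev"
console.log(normalizeDomain('firecrawl.dev'));                     // "firecrawl.dev"
console.log(normalizeDomain('not a url'));                         // null
```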
by David Roberts
Sometimes you want to take a different action in your error workflow based on the data that was flowing through it. This template illustrates how you can do that (more specifically, how you can retrieve the data of a webhook node).

## How it works
1. Use the 'n8n' node to fetch the data of the failed execution
2. Parse that data to find webhook nodes and extract the data of the one that was executed
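The parsing step might look like the Code-node sketch below. It is a sketch under assumptions about the execution JSON shape (workflowData.nodes plus resultData.runData keyed by node name); adapt the paths to whatever the 'n8n' node actually returns in your n8n version:

```javascript
// Sketch of step 2 (assumes the fetched execution object exposes
// workflowData.nodes and data.resultData.runData keyed by node name).
const execution = $json;

// Find all webhook nodes defined in the failed workflow.
const webhookNodes = (execution.workflowData?.nodes ?? [])
  .filter((node) => node.type === 'n8n-nodes-base.webhook');

const runData = execution.data?.resultData?.runData ?? {};

// Keep only the webhook node that actually ran, and pull its items.
const executed = webhookNodes.find((node) => runData[node.name]);
if (!executed) {
  return [{ json: { error: 'No executed webhook node found' } }];
}

const items = runData[executed.name][0]?.data?.main?.[0] ?? [];
return items.length ? items : [{ json: { error: 'Webhook ran but produced no items' } }];
```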
by Yaron Been
## Description
This workflow automatically extracts Amazon product reviews and identifies hidden friction signals that are costing you conversions. It helps ecommerce and product teams turn customer complaints into measurable revenue opportunities.

## Overview
This workflow uses Bright Data's Web Scraper API to collect Amazon reviews, then scans them for friction signals like delivery issues, return complaints, sizing problems, and product defects. AI classifies each friction signal by revenue impact, scores its severity, and prioritizes the most costly conversion leaks. Results are split into:
- **Checkout Optimization List**
- **Delivery & Returns Risk Report**

Both are logged into Google Sheets for immediate action.

## Tools Used
- **n8n:** Automation platform that orchestrates the workflow
- **Bright Data:** Scrapes Amazon product reviews at scale without getting blocked
- **OpenRouter:** AI-powered friction classification, revenue impact estimation, and prioritization
- **Google Sheets:** Logs checkout optimization opportunities, delivery risks, and errors

## How to Install
1. **Import the Workflow:** Download the .json file and import it into your n8n instance.
2. **Configure Bright Data:** Add your Bright Data API credentials to all Bright Data nodes.
3. **Configure OpenRouter:** Add your OpenRouter API key for AI friction analysis.
4. **Set Up Google Sheets:** Create a spreadsheet following the "Google Sheets Setup" sticky note inside the workflow, then connect each Google Sheets node to your document.
5. **Customize:** Edit the configuration node to define the target Amazon product URL, review scope, and analysis depth.

## Use Cases
- **Ecommerce Managers:** Find out exactly why customers are dropping off and fix the highest-impact issues first.
- **Product Teams:** Identify recurring product defects or sizing issues from real customer feedback at scale.
- **CX / Support Teams:** Spot delivery and returns patterns before they become widespread complaints.
- **Conversion Rate Optimization:** Prioritize checkout and UX improvements based on actual revenue impact data.
- **Competitive Analysis:** Analyze competitor product reviews to uncover weaknesses you can capitalize on.

## Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

## Tags
#n8n #automation #brightdata #webscraping #ecommerce #conversionrate #amazonreviews #customerfriction #productreviews #revenueoptimization #checkoutoptimization #deliveryissues #returnrates #cro #n8nworkflow #workflow #nocode #businessintelligence #customerexperience #productfeedback #reviewanalysis #ecommerceautomation #amazondata #sentimentanalysis #customerinsights
by Destiya Wijayanto
This template provides a set of MCP tools to manage personal budgets and expenses. These MCP tools can be integrated with any AI client that supports MCP integration.

## How it works
- It stores transaction records and the budget in a Google Sheet
- It warns you when an expense goes above budget (see the sketch after the setup steps)

## How to setup
1. Sign in with Google in the Google Sheets nodes
2. Copy the Google Sheet template (link available in the sticky note)
3. Point the Google Sheets nodes to the right sheet
4. Integrate with your AI client
5. Enjoy!!
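The over-budget warning can be as simple as comparing a category's running total against its budget row. A hypothetical Code-node sketch follows; the node name ('Get Transactions') and column names (category, amount, budget) are illustrative assumptions, not necessarily the template's exact sheet layout:

```javascript
// Hypothetical sketch of the over-budget check; node and column names
// are illustrative, not the template's exact layout.
const { category, budget } = $json;

const spent = $('Get Transactions').all()
  .map((item) => item.json)
  .filter((tx) => tx.category === category)
  .reduce((sum, tx) => sum + Number(tx.amount), 0);

return [{
  json: {
    category,
    spent,
    budget,
    warning: spent > budget
      ? `Expense total ${spent} is above the ${budget} budget for ${category}`
      : null,
  },
}];
```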
by David Ashby
## 🛠️ NASA Tool MCP Server
Complete MCP server exposing all NASA Tool operations to AI agents. Zero configuration needed - all 15 operations pre-built.

## ⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

## 🔧 How it Works
- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **Tool Nodes:** Pre-configured for every NASA Tool operation
- **AI Expressions:** Automatically populate parameters via $fromAI() placeholders (see the example at the end of this listing)
- **Native Integration:** Uses the official n8n NASA Tool node with full error handling

## 📋 Available Operations (15 total)
Every possible NASA Tool operation is included:
- **Asteroidneobrowse** (1 operation): Get many asteroid NEOs
- **Asteroidneofeed** (1 operation): Get an asteroid NEO feed
- **Asteroidneolookup** (1 operation): Get an asteroid NEO lookup
- **Astronomypictureoftheday** (1 operation): Get the astronomy picture of the day
- **Donkicoronalmassejection** (1 operation): Get a DONKI coronal mass ejection
- **Donkihighspeedstream** (1 operation): Get a DONKI high speed stream
- **Donkiinterplanetaryshock** (1 operation): Get a DONKI interplanetary shock
- **Donkimagnetopausecrossing** (1 operation): Get a DONKI magnetopause crossing
- **Donkinotifications** (1 operation): Get DONKI notifications
- **Donkiradiationbeltenhancement** (1 operation): Get a DONKI radiation belt enhancement
- **Donkisolarenergeticparticle** (1 operation): Get a DONKI solar energetic particle
- **Donkisolarflare** (1 operation): Get a DONKI solar flare
- **Donkiwsaenlilsimulation** (1 operation): Get a DONKI WSA-Enlil simulation
- **Earthassets** (1 operation): Get Earth assets
- **Earthimagery** (1 operation): Get Earth imagery

## 🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
- Resource IDs and identifiers
- Search queries and filters
- Content and data payloads
- Configuration options

Response format: Native NASA Tool API responses with full data structure
Error handling: Built-in n8n error management and retry logic

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to the configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **Other n8n Workflows:** Call MCP tools from any workflow
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Complete Coverage:** Every NASA Tool operation available
- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in $fromAI() expressions for all parameters
- **Production Ready:** Native n8n error handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
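To make the $fromAI() mechanism concrete: a tool-node parameter is typically an n8n expression like the one below, which the connected agent fills in at call time. The key name and description here are hypothetical, not taken from the actual workflow:

```javascript
// Illustrative tool-node parameter expression; 'asteroidId' and its
// description are hypothetical examples, not the workflow's own values.
{{ $fromAI('asteroidId', 'The NASA NEO reference ID to look up', 'string') }}
```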
by David Ashby
Complete MCP server exposing 14 Domains-Index API operations to AI agents.

## ⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Domains-Index API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the Domains-Index API into an MCP-compatible interface for AI agents.
- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to /v1
- **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
- **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (14 total)

🔧 Domains (9 endpoints)
- GET /domains/search: Domains database search
- GET /domains/tld/{zone_id}: Get TLD records
- GET /domains/tld/{zone_id}/download: Download the whole dataset for a TLD
- GET /domains/tld/{zone_id}/search: Domains search for a TLD
- GET /domains/updates/added: Get added domains, latest if date not specified
- GET /domains/updates/added/download: Download added domains, latest if date not specified
- GET /domains/updates/deleted: Get deleted domains, latest if date not specified
- GET /domains/updates/deleted/download: Download deleted domains, latest if date not specified
- GET /domains/updates/list: List of updates

🔧 Info (5 endpoints)
- GET /info/api: GET /info/api
- GET /info/stat/: Returns overall statistics
- GET /info/stat/{zone}: Returns statistics for a specific zone
- GET /info/tld/: Returns overall TLD info
- GET /info/tld/{zone}: Returns statistics for a specific zone

## 🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

Response format: Native Domains-Index API responses with full data structure
Error handling: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to the configuration
- **Cursor:** Add the MCP server SSE URL to the configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in $fromAI() expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 9 Api2Pdf (PDF Generation, Powered by AWS Lambda) API operations to AI agents.

## ⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Api2Pdf (PDF Generation, Powered by AWS Lambda) credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the Api2Pdf API into an MCP-compatible interface for AI agents.
- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to https://v2018.api2pdf.com
- **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
- **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (9 total)

🔧 Chrome (3 endpoints)
- POST /chrome/html: Convert raw HTML to PDF
- GET /chrome/url: Convert URL to PDF
- POST /chrome/url: Convert URL to PDF

🔧 Libreoffice (1 endpoint)
- POST /libreoffice/convert: Convert office document or image to PDF

🔧 Merge (1 endpoint)
- POST /merge: Merge multiple PDFs together

🔧 Wkhtmltopdf (3 endpoints)
- POST /wkhtmltopdf/html: Convert raw HTML to PDF
- GET /wkhtmltopdf/url: Convert URL to PDF
- POST /wkhtmltopdf/url: Convert URL to PDF

🔧 Zebra (1 endpoint)
- GET /zebra: Generate bar codes and QR codes with ZXING

## 🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

Response format: Native Api2Pdf API responses with full data structure
Error handling: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to the configuration
- **Cursor:** Add the MCP server SSE URL to the configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in $fromAI() expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 15 BulkSMS JSON REST API operations to AI agents.

## ⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add BulkSMS JSON REST API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the BulkSMS JSON REST API into an MCP-compatible interface for AI agents.
- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to https://api.bulksms.com/v1
- **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
- **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (15 total)

🔧 Blocked-Numbers (2 endpoints)
- GET /blocked-numbers: Block Phone Number
- POST /blocked-numbers: Create a blocked number

🔧 Credit (1 endpoint)
- POST /credit/transfer: Transfer Account Credits

🔧 Messages (5 endpoints)
- GET /messages: List Related Messages
- POST /messages: Send Messages
- GET /messages/send: Send message by simple GET or POST
- GET /messages/{id}: Show Message
- GET /messages/{id}/relatedReceivedMessages: List Related Messages

🔧 Profile (1 endpoint)
- GET /profile: Retrieve User Profile

🔧 Rmm (1 endpoint)
- POST /rmm/pre-sign-attachment: Generate Attachment Upload URL

🔧 Webhooks (5 endpoints)
- GET /webhooks: Update Webhook Settings
- POST /webhooks: Create a webhook
- DELETE /webhooks/{id}: Delete a webhook
- GET /webhooks/{id}: Read a webhook
- POST /webhooks/{id}: Update a webhook

## 🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

Response format: Native BulkSMS JSON REST API responses with full data structure
Error handling: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to the configuration
- **Cursor:** Add the MCP server SSE URL to the configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in $fromAI() expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 9 NPR Listening Service API operations to AI agents.

## ⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add NPR Listening Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the NPR Listening Service API into an MCP-compatible interface for AI agents.
- **MCP Trigger:** Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes:** Handle API calls to https://listening.api.npr.org
- **AI Expressions:** Automatically populate parameters via $fromAI() placeholders
- **Native Integration:** Returns responses directly to the AI agent

## 📋 Available Operations (9 total)

🔧 V2 (9 endpoints)
- GET /v2/aggregation/{aggId}/recommendations: Get a set of recommendations for an aggregation independent of the user's lis...
- GET /v2/channels: List Available Channels
- GET /v2/history: Get User Ratings History
- GET /v2/organizations/{orgId}/categories/{category}/recommendations: Get a list of recommendations from a category of content from an organization
- GET /v2/organizations/{orgId}/recommendations: Get a variety of details about an organization including various lists of rec...
- GET /v2/promo/recommendations: Get Recent Promo Audio
- POST /v2/ratings: Submit Media Ratings
- GET /v2/recommendations: Get User Recommendations
- GET /v2/search/recommendations: Get Search Recommendations

## 🤖 AI Integration
Parameter handling: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

Response format: Native NPR Listening Service API responses with full data structure
Error handling: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop:** Add the MCP server URL to the configuration
- **Cursor:** Add the MCP server SSE URL to the configuration
- **Custom AI Apps:** Use the MCP URL as a tool endpoint
- **API Integration:** Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup:** No parameter mapping or configuration needed
- **AI-Ready:** Built-in $fromAI() expressions for all parameters
- **Production Ready:** Native n8n HTTP request handling and logging
- **Extensible:** Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Matthew
## Automated Cold Email Personalization
This workflow automates the creation of highly personalized cold outreach emails by extracting lead data, scraping company websites, and leveraging AI to craft unique email components. It is ideal for sales teams, marketers, and business development professionals looking to scale their outreach efforts while maintaining a high degree of personalization.

## How It Works
1. **Generate Batches:** The workflow starts by generating a sequence of numbers, defining how many leads to process in batches.
2. **Scrape Lead Data:** It uses an external API (Apify) to pull comprehensive lead information, including contact details, company data, and social media links.
3. **Fetch Client Data:** The workflow then retrieves relevant client details from your Google Sheet based on the scraped data.
4. **Scrape Company Website:** The lead's company website is automatically scraped to gather content for personalization.
5. **Summarize Prospect Data:** An OpenAI model analyzes both the scraped website content and the individual's profile data to create concise summaries and identify unique angles for outreach.
6. **Craft Personalized Email:** A more advanced OpenAI model uses these summaries and specific instructions to generate the "icebreaker," "intro," and "value proposition" components of a personalized cold email.
7. **Update Google Sheet:** Finally, these generated email components are saved back into your Google Sheet, enriching your lead records for future outreach.

## Google Sheet Structure
Your Google Sheet must have the following exact column headers to ensure proper data flow:
- **Email** (unique identifier for each lead)
- **Full Name**
- **Headline**
- **LinkdIn**
- **cityName**
- **stateName**
- **company/cityName**
- **Country**
- **Company Name**
- **Website**
- **company/businessIndustry**
- **Keywords**
- **icebreaker** (will be populated by the workflow)
- **intro** (will be populated by the workflow)
- **value_prop** (will be populated by the workflow)

## Setup Instructions
1. **Add Credentials:** In n8n, add your OpenAI API key via the Credentials menu and connect your Google account for Google Sheets access. You will also need an Apify API key for the Scraper node.
2. **Configure Google Sheets Nodes:** Select the Client data and Add email data to sheet nodes. For each, choose your Google Sheets credential, select your spreadsheet and the specific sheet name, and ensure all column mappings match the "Google Sheet Structure" section above.
3. **Configure Apify Scraper Node:** Select the Scraper node, update the Authorization header with your Apify API token (Bearer KEY), and in the JSON body set the searchUrl to your Apollo link (or equivalent source URL for lead data).
4. **Configure OpenAI Nodes:** Select both the Summarising prospect data and Creating detailed email nodes and choose your OpenAI credential from the dropdown. In the Creating detailed email node's prompt, replace PUT YOUR COMPANY INFO HERE with your company's context and verify the target sector for the email generation.
5. **Verify Update Node:** On the final Add email data to sheet node, ensure the Operation is set to Append Or Update and the Matching Columns field is set to Email.

## Customization Options 💡
- **Trigger:** Change the When clicking 'Execute workflow' node to an automatic trigger, such as a Cron node for daily runs, or a Google Sheets trigger for when new rows are added.
- **Lead Generation:** Modify the Code node to change the number of leads processed per run (currently set to 50); see the sketch after this list.
- **Scraping Logic:** Adjust the Scraper node's parameters (e.g., count) or replace the Apify integration with another data source if needed.
- **AI Prompting:** Experiment with the prompts in the Summarising prospect data and Creating detailed email OpenAI nodes to refine the tone, style, length, or content focus of the generated summaries and emails.
- **AI Models:** Test different OpenAI models (e.g., gpt-3.5-turbo, gpt-4o) in the OpenAI nodes to find the optimal balance between cost, speed, and output quality.
- **Data Source/CRM:** Replace the Google Sheets nodes with integrations for your preferred CRM (e.g., HubSpot, Salesforce) or a database (e.g., PostgreSQL, Airtable) to manage your leads.
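For reference, a number-generating Code node of the kind described under Lead Generation can be as small as this. A sketch only; the template's actual node may emit a different field name:

```javascript
// Sketch of a batch-generating Code node: emit one item per lead slot.
// Change LEADS_PER_RUN to process more or fewer leads per execution.
const LEADS_PER_RUN = 50;

return Array.from({ length: LEADS_PER_RUN }, (_, i) => ({
  json: { batchNumber: i + 1 },
}));
```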