by Daniel Shashko
How it Works
This workflow automates competitive price intelligence using Bright Data's enterprise web scraping API. On a scheduled basis (default: daily at 9 AM), the system loops through configured competitor product URLs, triggers Bright Data's web scraper to extract real-time pricing data from each site, and intelligently compares competitor prices against your current pricing. The workflow handles the full scraping lifecycle: it sends scraping requests to Bright Data, waits for completion, fetches the scraped product data, and parses prices from various formats and website structures. All pricing data is automatically logged to Google Sheets for historical tracking and trend analysis. When a competitor's price drops below yours by more than the configured threshold (e.g., 10% cheaper), the system immediately sends detailed alerts via Slack and email to your pricing team with actionable intelligence (see the sketch after this section). At the end of each monitoring run, the workflow generates a comprehensive daily summary report that aggregates all competitor data, calculates average price differences, identifies the lowest and highest competitors, and provides a complete competitive landscape view. This eliminates hours of manual competitor research and enables data-driven pricing decisions in real time.

Who is this for?
- E-commerce businesses and online retailers needing automated competitive price monitoring
- Product managers and pricing strategists requiring real-time competitive intelligence
- Revenue operations teams managing dynamic pricing strategies across multiple products
- Marketplaces competing in price-sensitive categories where margins matter
- Any business that needs to track competitor pricing without manual daily checks

Setup Steps
Setup time: approx. 30-40 minutes (Bright Data configuration, credential setup, competitor URL configuration)

Requirements:
- Bright Data account with Web Scraper API access
- Bright Data API token (from the dashboard)
- Google account with a spreadsheet for price tracking
- Slack workspace with pricing channels
- SMTP email provider for alerts

1. Sign up for Bright Data and create a web scraping dataset (use the e-commerce template for product data)
2. Obtain your Bright Data API token and dataset ID from the dashboard
3. Configure these nodes:
   - Schedule Daily Check: set the monitoring frequency using a cron expression (default: 9 AM daily)
   - Load Competitor URLs: add the competitor product URLs array, configure your current price, set the alert threshold percentage
   - Loop Through Competitors: automatically handles multiple URLs (no configuration needed)
   - Scrape with Bright Data: Add Bright Data
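A minimal sketch of the price-comparison and threshold logic described above, written as it might appear in an n8n Code node. Field names such as competitorPrice, myPrice, and alertThresholdPercent are illustrative assumptions, not the template's actual node outputs.

```javascript
// Hypothetical n8n Code node: compare each scraped competitor price against your own.
// Input items are assumed to carry { competitor, competitorPrice } from the scraper,
// while myPrice and alertThresholdPercent come from the "Load Competitor URLs" step.
const myPrice = 49.99;             // assumption: your current price
const alertThresholdPercent = 10;  // assumption: alert when a competitor is >10% cheaper

return $input.all().map((item) => {
  const competitorPrice = parseFloat(
    String(item.json.competitorPrice).replace(/[^0-9.]/g, '') // strip currency symbols, e.g. "$1,299.00"
  );
  const diffPercent = ((myPrice - competitorPrice) / myPrice) * 100;

  return {
    json: {
      ...item.json,
      competitorPrice,
      diffPercent: Number(diffPercent.toFixed(2)),
      // Flag the item so a downstream IF node can route it to the Slack/email alert branch.
      undercut: diffPercent >= alertThresholdPercent,
    },
  };
});
```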
by koichi nagino
Overview
This workflow, "Mood Graph Studio," offers a comprehensive solution to track and visualize your emotional well-being. By simply inputting a single sentence about your mood, this template uses AI to perform a sentiment analysis, generates a visual graph via Wolfram Alpha, provides personalized feedback, and logs everything to Google Sheets. It is designed for anyone interested in mindfulness, self-reflection, or quantified self-tracking.

How It Works
The workflow is divided into two main API functionalities and a manual trigger for easy testing.

Analyze a Single Mood (/mood endpoint)
- An AI Agent (OpenAI) quantifies your mood text into valence (positivity) and energy (activity).
- A query is sent to Wolfram Alpha to generate a simple linear graph based on the score.
- A second AI Agent provides short, encouraging advice in Japanese.
- The complete entry is logged as a new row in Google Sheets.
- Returns a JSON response containing the full analysis and the graph image.

Generate Mood History Graph (/history endpoint)
- Retrieves historical mood data for a specified user from Google Sheets.
- A Code node formats the data into a time-series plot query.
- Wolfram Alpha generates a line graph visualizing the mood trend over time.
- The resulting graph is automatically posted to Slack.

How to Set Up
1. Credentials
You must add your own credentials for the following services in the respective nodes:
- OpenAI: used in both Chat Model nodes.
- Google Sheets: used in the "Log Mood" and "Get History" nodes.
- Slack: used in the "Send History" node.

2. Wolfram Alpha App ID
This workflow uses the HTTP Request node to call the Wolfram Alpha API.
- Get a free App ID from the Wolfram|Alpha Developer Portal.
- Paste your App ID into the appid parameter value in both Generate...Graph nodes.

3. Google Sheet Configuration
- Create a new Google Sheet.
- Paste the Sheet ID into the Document ID field in both Google Sheets nodes.
- Crucially, ensure the first row has the following headers exactly (case-sensitive): userId, moodText, valence, energy, createdAt, wolframQuery, feedback

How to Use
- For testing: use the Manual Trigger. Modify the sample text in the "Set Test Data" node and click "Execute Workflow" on the canvas.
- For production: activate the workflow and send POST requests to the Production URL of the Webhook nodes (see the example request below).
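A minimal example of calling the /mood webhook in production. It assumes the endpoint accepts a JSON body with userId and moodText fields; the exact field names depend on how the Webhook and "Set Test Data" nodes are configured, so treat them as placeholders.

```javascript
// Hypothetical client call to the production /mood webhook.
// Replace the URL with the Production URL shown on your Webhook node,
// and adjust field names to match your "Set Test Data" node.
const response = await fetch('https://your-n8n.example.com/webhook/mood', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    userId: 'user-123',
    moodText: 'Had a calm, productive morning but felt tired after lunch.',
  }),
});

const result = await response.json();
console.log(result); // expected: analysis (valence, energy), feedback text, and graph image info
```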
by 飯盛 正幹
Description
This workflow automates the process of finding new content ideas by scraping trending news and social media posts, analyzing them with AI, and delivering a summarized report to Slack. It is perfect for content marketers, social media managers, and strategists who spend hours researching trending topics manually.

Who is this for
- Content Marketers: to discover trending topics for blogs or newsletters.
- Social Media Managers: to keep up with competitor activity or industry news.
- Market Researchers: to monitor specific keywords or brands.

How it works
1. Schedule: The workflow runs automatically on a weekly schedule (default is Monday morning).
2. Data Collection: It uses Apify to scrape the latest news from Google Search and recent posts from specific Facebook pages.
3. Data Processing: The results are merged, and the top 5 most relevant items are selected to prevent information overload.
4. AI Analysis: An AI Agent (powered by OpenRouter/LLM) analyzes each article to classify it into a theme (e.g., Marketing, Technology, Strategy) and extracts 3 catchy keywords.
5. Notification: The analyzed insights, including the theme, keywords, summary, and original URL, are formatted and sent directly to Slack.

Requirements
- Apify Account: You need an API token and access to the Google Search Results Scraper and Facebook Posts Scraper actors.
- OpenRouter API Key: Used to power the AI analysis (can be swapped for OpenAI/Anthropic if preferred).
- Slack Account: To receive the notifications.

How to set up
1. Configure Credentials: Open the Workflow Configuration node and paste your Apify API Token and OpenRouter API Key. Connect your Slack account in the Slack node.
2. Adjust Apify Settings: In the Apify Google news node, change the search query (currently set to "Top News" in Japanese) to your desired topic. In the Apify Facebook node, update the startUrls to the Facebook pages you want to monitor.
3. Customize AI Prompt: (Optional) Open the AI Agent node to adjust the language or the specific themes you want the AI to classify.

How to customize
- Change the LLM: Replace the OpenRouter model with the OpenAI or Anthropic Chat Model node if you prefer those providers.
- Increase Data Volume: Adjust the "Limit 5 items" Code node to process more articles at once (mind your API usage limits); a sketch of this node follows this section.
- Change Destination: Replace the Slack node with Notion, Google Sheets, or Email to save the ideas elsewhere.

⚠️ Crucial Checklist Before Submission
The n8n team will reject templates that contain non-English text in the nodes. Please apply these changes to your workflow in the n8n editor before exporting the JSON for submission:
- Rename Nodes to English:
  - Cron トリガー → Schedule Trigger
  - Function: Slackメッセージ作成 → Format Slack Message
  - Slack: 企画ネタ投稿 → Slack Post
  - Function: LLMレスポンス整形 → Parse LLM Response
  - Merge Data (統合) → Merge Data
  - Function: データ抽出・5件制限 → Limit to 5 Items
  - Function: Googleデータ抽出 → Extract Google Data
  - Function: Facebookデータ抽出 → Extract FB Data
- Translate Code Comments & Prompts: Inside the Code nodes, ensure comments are in English (e.g., // Slackへの投稿メッセージを作成します → // Create message for Slack). Inside the AI Agent node, translate the System Prompt into English (e.g., "You are a professional content planner..." instead of "あなたはプロの..."). Even if you want the output in Japanese, the template default should usually be English, or clearly labeled as a Japanese template.
- Add the Mandatory Sticky Note: Add a yellow Sticky Note to the canvas, paste the "Description" text (from step 2 above) into it, and place it clearly next to the start of the workflow.
- Remove Hardcoded IDs: Ensure MASKED_USER_ID and MASKED_WEBHOOK_ID are cleared out or set to expressions that reference the user's setup.
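A minimal sketch of what the "Limit to 5 Items" Code node might look like after the merge step. It assumes each incoming item carries a title, url, and text field, which are illustrative names rather than the template's actual schema; adjust the limit constant to change the data volume as described above.

```javascript
// Hypothetical "Limit to 5 Items" Code node.
// Takes the merged Google News + Facebook items and keeps only the first N,
// so the downstream AI analysis stays within API usage limits.
const LIMIT = 5; // raise this to process more articles per run

const merged = $input.all();

// Keep only items that have something to analyze (a title or message text).
const usable = merged.filter((item) => item.json.title || item.json.text);

return usable.slice(0, LIMIT).map((item) => ({
  json: {
    title: item.json.title ?? '',
    url: item.json.url ?? '',
    source: item.json.source ?? 'unknown', // e.g. "google" or "facebook"
    text: item.json.text ?? '',
  },
}));
```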
by A Z
⚡ Quick Setup
1. Import this workflow into your n8n instance.
2. Add your Apify, Google Sheets, and Firecrawl credentials.
3. Activate the workflow to start your automated lead enrichment system.
4. Copy the webhook URL from the MCP trigger node.
5. Connect AI agents using the MCP URL.

🔧 How it Works
This solution combines two powerful workflows to deliver fully enriched, AI-ready business leads from Google Maps:
• Apify Google Maps Scraper Node: Collects business data and, if enabled, enriches each lead with contact details and social profiles.
• Leads Missing Enrichment: Any leads without contact or social info are automatically saved to a Google Sheet.
• Firecrawl & Code Node Workflow: A second workflow monitors the Google Sheet, crawls each business's website using Firecrawl, and extracts additional social media profiles or contact info using a Code node (see the sketch after this section).
• Personalization Logic: AI-powered nodes generate tailored outreach content for each enriched lead.
• Native Integration: The entire process is exposed as an MCP-compatible interface, returning enriched and personalized lead data directly to the AI agent.

📋 Available Operations
• Business Search: Find businesses on Google Maps by location, category, or keyword.
• Lead Enrichment: Automatically append contact details, social profiles, and other business info using Apify and Firecrawl.
• Personalized Outreach Generation: Create custom messages or emails for each lead.
• Batch Processing: Handle multiple leads in a single request.
• Status & Error Reporting: Get real-time feedback on processing, enrichment, and crawling.

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Search queries (location, keywords, categories)
• Enrichment options (contact, social, etc.)
• Personalization variables (name, business type, etc.)
Response Format: Returns fully enriched lead data and personalized outreach content in a structured format.
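A minimal sketch of the kind of extraction the Code node might perform on Firecrawl's crawled content. It assumes the previous node outputs the page body as item.json.content; that field name is an assumption for illustration, not the template's actual schema.

```javascript
// Hypothetical Code node: pull social profiles and emails out of crawled website content.
const SOCIAL_PATTERNS = {
  linkedin: /https?:\/\/(?:www\.)?linkedin\.com\/(?:company|in)\/[\w-]+/gi,
  facebook: /https?:\/\/(?:www\.)?facebook\.com\/[\w.-]+/gi,
  instagram: /https?:\/\/(?:www\.)?instagram\.com\/[\w.-]+/gi,
  twitter: /https?:\/\/(?:www\.)?(?:twitter|x)\.com\/[\w-]+/gi,
};
const EMAIL_PATTERN = /[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}/gi;

return $input.all().map((item) => {
  const content = String(item.json.content ?? ''); // assumption: crawled page text/HTML lives here

  const socials = {};
  for (const [network, pattern] of Object.entries(SOCIAL_PATTERNS)) {
    const matches = content.match(pattern) ?? [];
    if (matches.length) socials[network] = [...new Set(matches)];
  }

  const emails = [...new Set(content.match(EMAIL_PATTERN) ?? [])];

  return { json: { ...item.json, socials, emails } };
});
```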
by Khairul Muhtadin
This automated TLDW (Too Long; Didn't Watch) generator uses Decodo's scraping API to extract complete video transcripts and metadata, then uses Google Gemini 3 to create intelligent summaries with key points, a chapter breakdown, tools mentioned, and actionable takeaways, eliminating hours of manual note-taking and video watching.

Why Use This Workflow?
- Time Savings: Convert a 2-hour video into a readable 5-minute summary, reducing research time by 95%
- Comprehensive Coverage: Captures key points, chapters, tools, quotes, and actionable steps that manual notes often miss
- Instant Accessibility: Receive structured summaries directly in Telegram within 30-60 seconds of sharing a link
- Multi-Language Support: Process transcripts in multiple languages supported by YouTube's auto-caption system

Ideal For
- Content Creators & Researchers: Quickly extract insights from competitor videos, educational content, or industry talks without watching hours of footage
- Students & Educators: Generate study notes from lecture recordings, online courses, or tutorial videos with chapter-based breakdowns
- Marketing Teams: Analyze competitor content strategies, extract tools and techniques mentioned, and identify trending topics across multiple videos
- Busy Professionals: Stay updated with conference talks, webinars, or industry updates by reading summaries instead of watching full recordings

How It Works
1. Trigger: User sends any YouTube URL (youtube.com or youtu.be) to a configured Telegram bot
2. Data Collection: The workflow extracts the video ID and simultaneously fetches the full transcript and metadata (title, channel, views, duration, chapters, tags) via the Decodo API
3. Processing: Raw transcript data is extracted and cleaned, while metadata is parsed into structured fields including formatted statistics and chapter timestamps
4. AI Processing: Google Gemini Flash analyzes the transcript to generate a structured summary covering a one-line overview, key points, main topics by chapter, tools mentioned, target audience, practical takeaways, and notable quotes

Setup Guide
Prerequisites
| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Telegram Bot API | Essential | Receives video links and delivers summaries |
| Decodo Scraper API | Essential | Extracts YouTube transcripts and metadata |
| Google Gemini API | Essential | AI-powered summary generation |

Installation Steps
1. Import the JSON file to your n8n instance
2. Configure credentials:
   - Telegram Bot API: Create a bot via @BotFather on Telegram, obtain the API token, and configure it in n8n Telegram credentials
   - Decodo API: Sign up at the Decodo Dashboard, get your API key, and create an HTTP Header Auth credential with header name "Authorization" and value "Basic [YOUR_API_KEY]"
   - Google Gemini API: Obtain an API key from Google AI Studio and configure it in n8n Google Palm API credentials
3. Update environment-specific values:
   - In the "Alert Admin" node, replace YOUR_CHAT_ID with your personal Telegram user ID for error notifications
   - Optionally adjust the languageCode in the "Set: Video ID & Config" node (default: "en")
4. Customize settings: Modify the AI prompt in the "Generate TLDR" node to adjust summary structure and depth
5. Test execution: Send a YouTube link to your Telegram bot and verify you receive the "Processing..." notification, video info card, and formatted summary chunks

Technical Details
Workflow Logic
The workflow employs parallel processing for efficiency: the transcript and metadata are fetched simultaneously after video ID extraction. Once both API calls complete, the transcript feeds directly into Gemini AI while the metadata is parsed separately. A merge node combines the AI output with structured metadata before splitting it into Telegram-friendly chunks (see the sketch after this section). Error handling is isolated on a separate branch triggered by any node failure, formatting error details and alerting admins without disrupting the main flow.

Customization Options
Basic Adjustments:
- Language Selection: Change languageCode from "en" to "id", "es", "fr", etc. to fetch transcripts in different languages (YouTube must have captions available)
- Summary Style: Edit the prompt in "Generate TLDR" to focus on specific aspects (e.g., "focus only on technical tools mentioned" or "create a summary for beginners")
- Message Length: Adjust maxCharsPerChunk (currently 4000) to create longer or shorter message splits based on preference

Advanced Enhancements:
- Database Storage: Add a Postgres/Airtable node after "Merge: Data + Summary" to archive all summaries with timestamps and user IDs for a searchable knowledge base (medium complexity)
- Multi-Model Comparison: Duplicate the "Generate TLDR" chain and connect GPT-4 or Claude, then merge the results to show different AI perspectives on the same video (high complexity)
- Auto-Translation: Insert a translation node after summary generation to deliver summaries in the user's preferred language automatically (medium complexity)

Troubleshooting
Common Issues:
| Problem | Cause | Solution |
|---------|-------|----------|
| "Not a YouTube URL" error | URL format not recognized | Ensure the URL sent contains youtube.com or youtu.be |
| No transcript available | Video lacks captions or wrong language | Check that the video has auto-generated or manual captions; change languageCode to match the available options |
| Decodo API 401/403 error | Invalid or expired API key | Verify the API key in the HTTP Header Auth credential; regenerate it from the Decodo dashboard if needed |
| Error notifications not received | Wrong chat ID in Alert Admin node | Get your Telegram user ID from @userinfobot and update the node |

Use Case Examples
Scenario 1: Marketing Agency Competitive Analysis
- Challenge: The agency needs to analyze 50+ competitor YouTube videos monthly to identify content strategies, tools used, and messaging angles; watching all videos would require 80+ hours
- Solution: Drop YouTube links into a shared Telegram group with the bot. Summaries are generated instantly, highlighting tools mentioned, key talking points, and target audience insights
- Result: Research time reduced from 80 hours to 6 hours monthly (93% time savings), with a searchable archive of all competitor content strategies

Created by: Khaisa Studio
Category: AI-Powered Automation
Tags: YouTube, AI, Telegram, Summarization, Content Analysis, Decodo, Gemini
Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
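A minimal sketch of the Telegram chunking step described above, assuming the merged summary arrives as a single string in a field named summary (an assumption, not the template's actual field name). maxCharsPerChunk mirrors the 4000-character setting mentioned in the customization options; Telegram's hard limit is 4096 characters per message.

```javascript
// Hypothetical chunking logic: split a long summary into Telegram-friendly pieces,
// preferring to break on paragraph boundaries so formatting stays readable.
const maxCharsPerChunk = 4000; // adjust as described under "Message Length"

function splitIntoChunks(text) {
  const chunks = [];
  let current = '';

  for (const paragraph of text.split('\n\n')) {
    const candidate = current ? `${current}\n\n${paragraph}` : paragraph;

    if (candidate.length <= maxCharsPerChunk) {
      current = candidate;
      continue;
    }

    if (current) chunks.push(current);
    current = '';

    if (paragraph.length <= maxCharsPerChunk) {
      current = paragraph;
    } else {
      // A single paragraph longer than the limit gets hard-split.
      for (let i = 0; i < paragraph.length; i += maxCharsPerChunk) {
        chunks.push(paragraph.slice(i, i + maxCharsPerChunk));
      }
    }
  }

  if (current) chunks.push(current);
  return chunks;
}

// Each chunk becomes one item, so a downstream Telegram node sends one message per chunk.
const summary = String($input.first().json.summary ?? '');
return splitIntoChunks(summary).map((chunk, index) => ({ json: { chunk, index } }));
```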
by Nskha
This n8n template provides a comprehensive solution for managing Key-Value (KV) pairs using Cloudflare's KV storage. It is designed to simplify interaction with Cloudflare's KV storage APIs, enabling users to perform a range of actions like creating, reading, updating, and deleting namespaces and KV pairs.

Features
- Efficient Management: Handle multiple KV operations seamlessly.
- User-Friendly: Easy to use with pre-configured Cloudflare API credentials within n8n.
- Customizable: Flexible for integration into larger workflows (copy/paste your preferred part).

Prerequisites
- n8n workflow automation tool (version 1.19.0 or later).
- A Cloudflare account with access to KV storage.
- Pre-configured Cloudflare API credentials in n8n.

Workflow Overview
This workflow is divided into three main sections for ease of use:
1. Single Actions: Perform individual operations on KV pairs.
2. Bulk Actions: Handle multiple KV pairs simultaneously.
3. Specific Actions: Execute specific tasks like renaming namespaces.

Key Components
- Manual Trigger: Initiates the workflow.
- Account Path Node: Sets the path for account details, a prerequisite for all actions.
- HTTP Request Nodes: Facilitate interaction with Cloudflare's API for various operations.
- Sticky Notes: Provide quick documentation links and brief descriptions of each node's function.

Usage
1. Setup Account Path: Input your Cloudflare account details in the 'Account Path' node. You can find your account path in your Cloudflare dashboard URL.
2. Choose an Action: Select the desired operation from the workflow.
3. Configure Nodes: Adjust parameters in the HTTP Request nodes as needed (each node has a sticky note with a direct link to its documentation page).
4. Execute Workflow: Trigger the workflow manually to perform the selected operations.

Detailed Node Descriptions
This workflow covers the full set of API calls for Cloudflare's KV product.

API NODE: Delete KV
- Type: HTTP Request
- Function: Deletes a specified KV pair within a namespace.
- Configuration: This node requires the namespace ID and KV pair name. It automatically fetches these details from preceding nodes, specifically the "List KV-NMs" and "Set KV-NM Name" nodes.
- Documentation: Delete KV Pair API

API NODE: Create KV-NM
- Type: HTTP Request
- Function: Creates a new Key-Value Namespace.
- Configuration: Users need to input the title for the new namespace. This node uses the account information provided by the "Account Path" node.
- Documentation: Create Namespace API

API NODE: Delete KV1
- Type: HTTP Request
- Function: Renames an existing Key-Value Namespace.
- Configuration: Requires the old namespace name and the new desired name. It retrieves these details from the "KV to Rename" and "List KV-NMs" nodes.
- Documentation: Rename Namespace API

API NODE: Write KVs inside NM
- Type: HTTP Request
- Function: Writes multiple Key-Value pairs inside a specified namespace.
- Configuration: This node needs a JSON array of key-value pairs along with their namespace identifier. It fetches the namespace ID from the "List KV-NMs" node.
- Documentation: Write Multiple KV Pairs API

API NODE: Read Value Of KV In NM
- Type: HTTP Request
- Function: Reads the value of a specific Key-Value pair in a namespace.
- Configuration: Requires the key's name and namespace ID, which are obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- Documentation: Read KV Pair API

API NODE: Read MD from Key
- Type: HTTP Request
- Function: Reads the metadata of a specific key in a namespace.
- Configuration: Similar to the "Read Value Of KV In NM" node, it needs the key's name and namespace ID, which are obtained from the "Set KV-NM Name" and "List KV-NMs" nodes.
- Documentation: Read Metadata API

> The remaining nodes are documented inside the workflow with sticky notes on the canvas explaining what to do.

Best Practices
- Modular Use: Extract specific parts of the workflow for isolated tasks.
- Validation: Ensure correct namespace and KV pair names before execution.
- Security: Regularly update your Cloudflare API credentials for secure access, and make sure your API token only has access to KV.

Keywords: Cloudflare KV, n8n workflow automation, API integration, key-value storage management.
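For orientation, a minimal sketch of the kind of request the "Write KVs inside NM" node sends, based on Cloudflare's bulk-write endpoint. The account ID, namespace ID, and token are placeholders, and the exact path should be verified against the documentation link in that node's sticky note.

```javascript
// Hypothetical standalone version of the "Write KVs inside NM" call
// (in the template this is an HTTP Request node using n8n credentials instead of a raw token).
const accountId = 'YOUR_ACCOUNT_ID';     // from the Account Path node
const namespaceId = 'YOUR_NAMESPACE_ID'; // from the List KV-NMs node
const apiToken = 'YOUR_API_TOKEN';       // scoped to KV access only

const pairs = [
  { key: 'greeting', value: 'hello' },
  { key: 'feature_flag', value: 'true' },
];

const response = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${accountId}/storage/kv/namespaces/${namespaceId}/bulk`,
  {
    method: 'PUT',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(pairs),
  }
);

console.log(await response.json()); // Cloudflare responds with { success, errors, messages, ... }
```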
by David Roberts
Sometimes you want to take a different action in your error workflow based on the data that was flowing through the workflow that failed. This template illustrates how you can do that (more specifically, how you can retrieve the data received by a webhook node).

How it works
- Use the 'n8n' node to fetch the data of the failed execution
- Parse that data to find webhook nodes and extract the data of the one that was executed (a sketch of this parsing step follows below)
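A minimal sketch of the parsing step, assuming the preceding 'n8n' node returned the failed execution with its data included. The exact shape of the execution object can vary between n8n versions, so treat the property paths below as assumptions to verify against your own execution output.

```javascript
// Hypothetical Code node: find the webhook node that actually ran in the failed execution
// and return the data it received.
const execution = $input.first().json;

// Node definitions tell us which nodes are webhooks...
const nodes = execution.workflowData?.nodes ?? [];
const webhookNames = nodes
  .filter((node) => node.type === 'n8n-nodes-base.webhook')
  .map((node) => node.name);

// ...and runData tells us which of them actually executed.
const runData = execution.data?.resultData?.runData ?? {};
const executedWebhook = webhookNames.find((name) => runData[name]);

if (!executedWebhook) {
  return [{ json: { found: false } }];
}

// First run, first output branch, first item of that webhook node.
const webhookItem = runData[executedWebhook][0]?.data?.main?.[0]?.[0]?.json ?? {};

return [{ json: { found: true, webhookNode: executedWebhook, webhookData: webhookItem } }];
```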
by Destiya Wijayanto
This template provides a set of MCP tools to manage personal budgets and expenses. These MCP tools can be integrated with any AI client that supports MCP integration.

How it works
- It stores transaction records and the budget in a Google Sheet
- It gives a warning if an expense is above the budget (a sketch of this check follows below)

How to set up
1. Sign in with Google in the Google Sheets nodes
2. Copy the Google Sheet template (link available in the sticky note)
3. Point the Google Sheets nodes to the right sheet
4. Integrate with your AI client
5. Enjoy!
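A minimal sketch of the budget check described above, assuming the sheet rows carry an amount column and the budget is a single monthly figure. Both the field name and the hard-coded budget are illustrative assumptions, not the template's actual columns.

```javascript
// Hypothetical budget-warning check: sum the expenses read from the sheet and compare to the budget.
const monthlyBudget = 1500; // assumption: read from the budget sheet in the real workflow

const expenses = $input.all().map((item) => Number(item.json.amount) || 0);
const totalSpent = expenses.reduce((sum, amount) => sum + amount, 0);

const overBudget = totalSpent > monthlyBudget;

return [
  {
    json: {
      totalSpent,
      monthlyBudget,
      overBudget,
      // The MCP tool can surface this message directly to the AI client.
      message: overBudget
        ? `Warning: you have spent ${totalSpent}, which exceeds your budget of ${monthlyBudget}.`
        : `You have spent ${totalSpent} of your ${monthlyBudget} budget.`,
    },
  },
];
```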
by David Ashby
🛠️ NASA Tool MCP Server
Complete MCP server exposing all NASA Tool operations to AI agents. Zero configuration needed - all 15 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every NASA Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example after this section)
• Native Integration: Uses the official n8n NASA Tool node with full error handling

📋 Available Operations (15 total)
Every possible NASA Tool operation is included:
🔧 Asteroidneobrowse (1 operation) • Get many asteroid neos
🔧 Asteroidneofeed (1 operation) • Get an asteroid neo feed
🔧 Asteroidneolookup (1 operation) • Get an asteroid neo lookup
🔧 Astronomypictureoftheday (1 operation) • Get the astronomy picture of the day
🔧 Donkicoronalmassejection (1 operation) • Get a DONKI coronal mass ejection
🔧 Donkihighspeedstream (1 operation) • Get a DONKI high speed stream
🔧 Donkiinterplanetaryshock (1 operation) • Get a DONKI interplanetary shock
🔧 Donkimagnetopausecrossing (1 operation) • Get a DONKI magnetopause crossing
🔧 Donkinotifications (1 operation) • Get DONKI notifications
🔧 Donkiradiationbeltenhancement (1 operation) • Get a DONKI radiation belt enhancement
🔧 Donkisolarenergeticparticle (1 operation) • Get a DONKI solar energetic particle
🔧 Donkisolarflare (1 operation) • Get a DONKI solar flare
🔧 Donkiwsaenlilsimulation (1 operation) • Get a DONKI WSA-Enlil simulation
🔧 Earthassets (1 operation) • Get Earth assets
🔧 Earthimagery (1 operation) • Get Earth imagery

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native NASA Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every NASA Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
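For context, a typical $fromAI() placeholder inside one of the tool nodes looks roughly like the expressions below. The parameter names and descriptions here are illustrative; the pre-built nodes may phrase them differently.

```javascript
// Hypothetical examples of $fromAI() placeholders as they appear inside tool-node parameters.
// Each expression tells the MCP layer which value the calling AI agent must supply at runtime.

// Astronomy Picture of the Day → "Date" parameter:
//   {{ $fromAI('apodDate', 'Date of the picture to fetch, formatted YYYY-MM-DD', 'string') }}

// Asteroid NEO Lookup → "Asteroid ID" parameter:
//   {{ $fromAI('asteroidId', 'NASA SPK-ID of the asteroid to look up', 'string') }}
```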
by Onur
Amazon Product Scraper with Scrape.do & AI Enrichment
> This workflow is a fully automated Amazon product data extraction engine. It reads product URLs from a Google Sheet, uses Scrape.do to reliably fetch each product page's HTML without getting blocked, and then applies an AI-powered extraction process to capture key product details such as name, price, rating, review count, and description. All structured results are neatly stored back into a Google Sheet for easy access and analysis.

This template is designed for consistency and scalability, making it ideal for marketers, analysts, and e-commerce professionals who need clean product data at scale.

🚀 What does this workflow do?
- Reads Input URLs: Pulls a list of Amazon product URLs from a Google Sheet.
- Scrapes HTML Reliably: Uses Scrape.do to bypass Amazon's anti-bot measures, ensuring the page HTML is always retrieved successfully.
- Cleans & Pre-processes HTML: Strips scripts, styles, and unnecessary markup, isolating only relevant sections like title, price, ratings, and feature bullets.
- AI-Powered Data Extraction: A LangChain/OpenRouter GPT-4 node verifies and enriches key fields: product name, price, rating, reviews, and description.
- Stores Structured Results: Appends all extracted and verified product data to a results tab in Google Sheets.
- Batch & Loop Control: Handles multiple URLs efficiently with Split In Batches to process as many products as you need.

🎯 Who is this for?
- E-commerce Sellers & Dropshippers: Track competitor prices, ratings, and key product features automatically.
- Marketing & SEO Teams: Collect product descriptions and reviews to optimize campaigns and content.
- Analysts & Data Teams: Build accurate product databases without manual copy-paste work.

✨ Benefits
- High Success Rate: Scrape.do handles proxy rotation and CAPTCHA challenges automatically, outperforming traditional scrapers.
- AI Validation: LLM verification ensures data accuracy and fills in gaps when HTML elements vary.
- Full Automation: Runs on demand or on a schedule to keep product datasets fresh.
- Clean Output: Results are neatly organized in Google Sheets, ready for reporting or integration with other tools.

⚙️ How it Works
1. Manual or Scheduled Trigger: Start the workflow manually or via a cron schedule.
2. Input Source: Fetch URLs from a Google Sheet (TRACK_SHEET_GID).
3. Scrape with Scrape.do: Retrieve full HTML from each Amazon product page using your SCRAPEDO_TOKEN.
4. Clean & Pre-Extract: Strip irrelevant code and use regex to pre-extract key fields (see the sketch after this section).
5. AI Extraction & Verification: The LangChain GPT-4 model refines and validates product name, description, price, rating, and reviews.
6. Save Results: Append enriched product data to the results sheet (RESULTS_SHEET_GID).

📋 n8n Nodes Used
- Manual Trigger / Schedule Trigger
- Google Sheets (read & append)
- Split In Batches
- HTTP Request (Scrape.do)
- Code (clean & pre-extract HTML)
- LangChain LLM (OpenRouter GPT-4)
- Structured Output Parser

🔑 Prerequisites
- Active n8n instance.
- Scrape.do API token (bypasses Amazon anti-bot measures).
- Google Sheets with:
  - TRACK_SHEET_GID: tab containing product URLs.
  - RESULTS_SHEET_GID: tab for results.
- Google Sheets OAuth2 credentials shared with your service account.
- OpenRouter / OpenAI API credentials for the GPT-4 model.

🛠️ Setup
1. Import the workflow into your n8n instance.
2. Set workflow variables:
   - SCRAPEDO_TOKEN – your Scrape.do API key.
   - WEB_SHEET_ID – Google Sheet ID.
   - TRACK_SHEET_GID – sheet/tab name for input URLs.
   - RESULTS_SHEET_GID – sheet/tab name for results.
3. Configure credentials for Google Sheets and OpenRouter.
4. Map columns in the "add results" node to match your Google Sheet (e.g., name, price, rating, reviews, description).
5. Run or schedule: Start manually or configure a schedule for continuous data extraction.

This Amazon Product Scraper delivers fast, reliable, and AI-enriched product data, ensuring your e-commerce analytics, pricing strategies, or market research stay accurate and fully automated.
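A minimal sketch of what the "Clean & Pre-Extract" Code node might do before the AI step. The selectors and regexes below are illustrative guesses at common Amazon page markup (e.g., the productTitle element), not the template's exact implementation, and Amazon's HTML changes frequently.

```javascript
// Hypothetical clean & pre-extract step: strip noise from the scraped HTML,
// then pull rough values for the LLM node to verify and enrich.
return $input.all().map((item) => {
  let html = String(item.json.html ?? ''); // assumption: Scrape.do response body lives here

  // Remove scripts, styles, and comments to shrink the payload sent to the LLM.
  html = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<!--[\s\S]*?-->/g, '');

  // Rough pre-extraction; the LLM downstream validates and fills gaps.
  const titleMatch = html.match(/id="productTitle"[^>]*>([\s\S]*?)</i);
  const priceMatch = html.match(/class="a-offscreen"[^>]*>\s*([$£€][\d.,]+)/i);
  const ratingMatch = html.match(/([\d.]+) out of 5 stars/i);
  const reviewsMatch = html.match(/([\d,]+)\s+ratings/i);

  return {
    json: {
      url: item.json.url,
      cleanedHtml: html.slice(0, 50000), // cap the size passed to the LLM
      preExtracted: {
        name: titleMatch ? titleMatch[1].trim() : null,
        price: priceMatch ? priceMatch[1] : null,
        rating: ratingMatch ? Number(ratingMatch[1]) : null,
        reviewCount: reviewsMatch ? reviewsMatch[1].replace(/,/g, '') : null,
      },
    },
  };
});
```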
by David Ashby
Complete MCP server exposing 14 Domains-Index API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
1. Import this workflow into your n8n instance
2. Add Domains-Index API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Domains-Index API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to /v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (14 total)
🔧 Domains (9 endpoints)
• GET /domains/search: Domains Database Search
• GET /domains/tld/{zone_id}: Get TLD records
• GET /domains/tld/{zone_id}/download: Download Whole Dataset for TLD
• GET /domains/tld/{zone_id}/search: Domains Search for TLD
• GET /domains/updates/added: Get added domains, latest if date not specified
• GET /domains/updates/added/download: Download added domains, latest if date not specified
• GET /domains/updates/deleted: Get deleted domains, latest if date not specified
• GET /domains/updates/deleted/download: Download deleted domains, latest if date not specified
• GET /domains/updates/list: List of updates

🔧 Info (5 endpoints)
• GET /info/api: GET /info/api
• GET /info/stat/: Returns overall statistics
• GET /info/stat/{zone}: Returns statistics for a specific zone
• GET /info/tld/: Returns overall TLD info
• GET /info/tld/{zone}: Returns statistics for a specific zone

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Domains-Index API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 9 Api2Pdf - PDF Generation, Powered by AWS Lambda API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community
1. Import this workflow into your n8n instance
2. Add Api2Pdf - PDF Generation, Powered by AWS Lambda credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Api2Pdf - PDF Generation, Powered by AWS Lambda API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://v2018.api2pdf.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (9 total)
🔧 Chrome (3 endpoints)
• POST /chrome/html: Convert raw HTML to PDF
• GET /chrome/url: Convert URL to PDF
• POST /chrome/url: Convert URL to PDF

🔧 Libreoffice (1 endpoint)
• POST /libreoffice/convert: Convert office document or image to PDF

🔧 Merge (1 endpoint)
• POST /merge: Merge multiple PDFs together

🔧 Wkhtmltopdf (3 endpoints)
• POST /wkhtmltopdf/html: Convert raw HTML to PDF
• GET /wkhtmltopdf/url: Convert URL to PDF
• POST /wkhtmltopdf/url: Convert URL to PDF

🔧 Zebra (1 endpoint)
• GET /zebra: Generate bar codes and QR codes with ZXING

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Api2Pdf - PDF Generation, Powered by AWS Lambda API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow (a sample direct call to the underlying API follows below):
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
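To illustrate what the underlying /chrome/html endpoint does when an AI agent invokes that tool, here is a minimal sketch of a direct call. The request and response field names (html, fileName, pdf) are assumptions based on Api2Pdf's v2018 API and should be verified against their current documentation.

```javascript
// Hypothetical direct call to the Api2Pdf v2018 Chrome HTML-to-PDF endpoint,
// roughly equivalent to what the "POST /chrome/html" tool node performs for the AI agent.
const API2PDF_KEY = 'YOUR_API2PDF_KEY'; // placeholder

const response = await fetch('https://v2018.api2pdf.com/chrome/html', {
  method: 'POST',
  headers: {
    Authorization: API2PDF_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    html: '<h1>Invoice #1234</h1><p>Generated via n8n MCP tool.</p>',
    fileName: 'invoice-1234.pdf', // assumed option name
  }),
});

const result = await response.json();
console.log(result.pdf); // assumed: URL of the generated PDF on success
```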