by David Ashby
Complete MCP server exposing 4 AWS Cost and Usage Report Service API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator? It's all 100% free. Join the community.

1. Import this workflow into your n8n instance
2. Add AWS Cost and Usage Report Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the AWS Cost and Usage Report Service API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://cur.{region}.amazonaws.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (4 total)

• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.DeleteReportDefinition: Deletes the specified report.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.DescribeReportDefinitions: Lists the AWS Cost and Usage reports available to this account.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.ModifyReportDefinition: Allows you to programmatically update your report preferences.
• POST /#X-Amz-Target=AWSOrigamiServiceGatewayService.PutReportDefinition: Creates a new report using the description that you provide.
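Under the hood, each of these operations is a plain JSON-over-HTTP call selected by the X-Amz-Target header. A minimal Python sketch of the request shape the HTTP Request nodes assemble (the helper name, region, and MaxResults value are illustrative, and SigV4 signing is handled by the node's AWS credentials, not shown here):

```python
import json

def build_cur_request(operation: str, payload: dict, region: str = "us-east-1") -> dict:
    """Sketch of the HTTP request for one CUR operation.
    AWS endpoints require HTTPS; auth/signing is omitted on purpose."""
    return {
        "method": "POST",
        "url": f"https://cur.{region}.amazonaws.com/",
        "headers": {
            "Content-Type": "application/x-amz-json-1.1",
            # The operation is selected by this header, not by the URL path
            "X-Amz-Target": f"AWSOrigamiServiceGatewayService.{operation}",
        },
        "body": json.dumps(payload),
    }

req = build_cur_request("DescribeReportDefinitions", {"MaxResults": 5})
print(req["headers"]["X-Amz-Target"])
```

In the workflow itself, the payload values come from $fromAI() expressions rather than hard-coded dicts.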
🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native AWS Cost and Usage Report Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 8 Bulk WHOIS API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator? It's all 100% free. Join the community.

1. Import this workflow into your n8n instance
2. Add Bulk WHOIS API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Bulk WHOIS API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://localhost:5000
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (8 total)

🔧 Batch (4 endpoints)
• GET /batch: Get your batches
• POST /batch: Create a batch. The batch is then processed until all provided items have completed; it can be fetched at any time to get the current status, optionally with results.
• DELETE /batch/{id}: Delete a batch
• GET /batch/{id}: Get a batch

🔧 Db (1 endpoint)
• GET /db: Query the domain database

🔧 Domains (3 endpoints)
• GET /domains/{domain}/check: Check domain availability
• GET /domains/{domain}/rank: Check domain rank (authority)
• GET /domains/{domain}/whois: WHOIS query for a domain

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Bulk WHOIS API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
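As an illustration of how the path parameters supplied via $fromAI() map onto these endpoints, here is a hypothetical Python helper (the route table and function names are not part of the workflow itself):

```python
BASE = "http://localhost:5000"  # base URL used by this workflow's HTTP Request nodes

ROUTES = {
    "whois": "/domains/{domain}/whois",
    "check": "/domains/{domain}/check",
    "rank": "/domains/{domain}/rank",
    "get_batch": "/batch/{id}",
}

def endpoint(op: str, **params) -> str:
    """Fill path parameters into a route, the way $fromAI() values
    are interpolated into the node's URL expression."""
    return BASE + ROUTES[op].format(**params)

print(endpoint("whois", domain="example.com"))
```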
by David Ashby
Complete MCP server exposing 11 hashlookup CIRCL API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator? It's all 100% free. Join the community.

1. Import this workflow into your n8n instance
2. Add hashlookup CIRCL API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the hashlookup CIRCL API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://hashlookup.circl.lu
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (11 total)

🔧 Bulk (2 endpoints)
• POST /bulk/md5: Bulk search MD5 hashes
• POST /bulk/sha1: Bulk search SHA-1 hashes

🔧 Children (1 endpoint)
• GET /children/{sha1}/{count}/{cursor}: Return children of a given SHA-1. Takes the number of elements to return and a pagination cursor (starting at 0); if not set, the first 100 elements are returned.

🔧 Info (1 endpoint)
• GET /info: Get database info

🔧 Lookup (3 endpoints)
• GET /lookup/md5/{md5}: Look up an MD5 hash.
• GET /lookup/sha1/{sha1}: Look up a SHA-1 hash.
• GET /lookup/sha256/{sha256}: Look up a SHA-256 hash.

🔧 Parents (1 endpoint)
• GET /parents/{sha1}/{count}/{cursor}: Return parents of a given SHA-1. Takes the number of elements to return and a pagination cursor (starting at 0); if not set, the first 100 elements are returned.

🔧 Session (2 endpoints)
• GET /session/create/{name}: Create a session key to keep search context. The session is attached to a name; after the session is created, the hashlookup_session header can be set to the session name.
• GET /session/get/{name}: Return the set of matching and non-matching hashes from a session.

🔧 Stats (1 endpoint)
• GET /stats/top: Get top queries

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native hashlookup CIRCL API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
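For example, a caller can pre-compute a file's digest and build the lookup URL that the MCP tool ultimately requests; a small Python sketch (the helper name is illustrative):

```python
import hashlib

def md5_lookup_url(data: bytes) -> str:
    """Hash raw file bytes and build the matching hashlookup query URL."""
    digest = hashlib.md5(data).hexdigest()
    return f"https://hashlookup.circl.lu/lookup/md5/{digest}"

url = md5_lookup_url(b"hello")
print(url)  # ...lookup/md5/5d41402abc4b2a76b9719d911017c592
```

The SHA-1 and SHA-256 lookups work the same way with `hashlib.sha1` / `hashlib.sha256` and the corresponding path.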
by Ezema Kingsley Chibuzo
🧠 What It Does

This n8n workflow turns your Telegram bot into a smart, multi-modal AI assistant that accepts text, documents, images, and audio messages, interprets them using OpenAI models, and responds instantly with context-aware answers. It integrates a Supabase vector database to store document embeddings and retrieve relevant information before sending a prompt to OpenAI, enabling a full RAG experience.

💡 Why This Workflow?

Most support bots can only handle basic text input. This workflow:
• Supports multiple input formats (voice, documents, images, text)
• Dynamically extracts and processes data from uploaded files
• Implements RAG by combining user input with relevant memory or vector-based context
• Delivers more accurate, relevant, and human-like AI responses

👤 Who It's For

• Businesses looking to automate support using Telegram
• Freelancers or solopreneurs offering AI chatbots for businesses
• Creators building AI-powered bots for real use cases: customer-support knowledge, legal or policy documents, long FAQs, project documentation, and product information retrieval
• Devs or analysts exploring AI + multi-format input + vector memory

⚙️ How It Works

🗂️ Knowledge Base Setup
Run the "Add to Supabase Vector DB" workflow manually to upload a document from your Google Drive and embed it into your vector database. This powers the Telegram chatbot's ability to answer questions using your content.

🔁 Telegram Message Routing
• Telegram Trigger captures the user message (text, image, voice, document)
• Message Router routes input by type using a Switch node
• Each type is handled separately:
  - Voice → transcribe the recording to text (.ogg, .mp3)
  - Image → analyze the image to text
  - Text → sent directly to the AI Agent (.txt)
  - Document → parsed accordingly (e.g. .docx to .txt)

📎 Document Type Routing
Before routing documents by type, the Supported Document File Types node first checks if the file extension is allowed.
If a file is not supported, the flow exits early with an error message, preventing unnecessary processing. Supported documents are then routed using the Document Router node and converted to text for further processing.

Supported document file types: .jpg, .jpeg, .png, .webp, .pdf, .doc, .docx, .xls, .xlsx, .json, .xml

The text content is combined with stored memory and embedded knowledge using a RAG approach, enabling the AI to respond based on real uploaded data.

🧠 RAG via Supabase
• Uploaded documents are vectorized using OpenAI Embeddings.
• Embeddings are stored in Supabase with metadata.
• On new questions, the chatbot:
  - Extracts the question intent
  - Queries Supabase for semantically similar chunks
  - Ranks retrieved chunks to find the most relevant match
  - Injects them into the prompt for OpenAI
• OpenAI generates a grounded response based on actual document content.
• The response is sent to the Telegram user with content awareness.

🛠 How to Set It Up

1. Open n8n or your local/self-hosted instance.
2. Import the `.json` workflow file.
3. Set up these credentials: Google Drive API key, Telegram API (bot token), OpenAI API, Supabase API key + environment, ConvertAPI API key, Postgres API key, Cohere API key.
4. Add a custom AI agent prompt that reflects your business domain, tone, and purpose. This is very important: without it, your agent won't know how best to respond.
5. Activate the workflow.
6. Start testing by sending a message or document to your Telegram bot.
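The "rank retrieved chunks" step boils down to vector similarity. In this template Supabase does the ranking server-side, but a pure-Python sketch of the idea (the function names and the toy 2-dimensional embeddings are illustrative; real OpenAI embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_chunks(query_vec, chunks, k=3):
    """chunks: list of (text, embedding) pairs. Returns the k most
    similar texts, i.e. what gets injected into the OpenAI prompt."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

docs = [("refund policy", [0.9, 0.1]), ("shipping times", [0.1, 0.9])]
print(top_chunks([1.0, 0.0], docs, k=1))  # ['refund policy']
```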
by David Ashby
Complete MCP server exposing 4 Transportation Laws and Incentives API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and live Q&A sessions with a top verified n8n creator? It's all 100% free. Join the community.

1. Import this workflow into your n8n instance
2. Add Transportation Laws and Incentives credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Transportation Laws and Incentives API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://developer.nrel.gov/api/transportation-incentives-laws
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (4 total)

• GET /v1.{output_format}: Return a full list of laws and incentives that match your query.
• GET /v1/category-list.{output_format}: Return the law categories for a given category type.
• GET /v1/pocs.{output_format}: Get the points of contact for a given jurisdiction.
• GET /v1/{id}.{output_format}: Fetch the details of a specific law given the law's ID.
🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Transportation Laws and Incentives API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
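Each operation is a simple GET with the output format baked into the path and filters passed as query parameters. A hedged Python sketch of the URL construction (the helper name, the jurisdiction filter, and DEMO_KEY are illustrative; NREL requires an api_key on its endpoints):

```python
from urllib.parse import urlencode

BASE = "http://developer.nrel.gov/api/transportation-incentives-laws"

def laws_url(output_format: str = "json", **query) -> str:
    """Build the 'full list' query URL for GET /v1.{output_format}."""
    return f"{BASE}/v1.{output_format}?{urlencode(query)}"

print(laws_url(jurisdiction="CA", api_key="DEMO_KEY"))
```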
by Rudi Afandi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow allows you to create and manage custom short URLs directly via Telegram, with all data stored in MongoDB and redirects handled efficiently via Nginx.

How it works

This flow provides a seamless URL-shortening experience:
• Create via Telegram: Send a long URL to your bot. It will ask if you want a custom short code.
• Store in MongoDB: All long URLs and their corresponding short codes are securely stored in your MongoDB instance.
• Fast Redirects: When a user accesses a short URL, Nginx forwards the request to a dedicated n8n webhook, which then quickly redirects them to the original long URL.

Set up steps

This setup is straightforward, especially if you already have a running n8n instance and a VPS.
Difficulty: Medium (basic n8n/VPS knowledge required)
Estimated time: 15-30 minutes

1. n8n instance & VPS: Ensure you have n8n running on your VPS (e.g., 2 cores, 2 GB RAM).
2. Telegram bot: Create a new bot via @BotFather and get your bot token. Add this as a Telegram credential in n8n.
3. MongoDB database: Set up a MongoDB instance (either on your VPS or a cloud service like MongoDB Atlas). Create a database and a collection (e.g., url or short_urls), then add your MongoDB credentials in n8n. Here's the MongoDB document structure (JSON):

> [ {"_id": "686a11946a48b580d72d0397", "longUrl": "https://longurl.com/abcdefghijklm/", "shortUrl": "short-code"} ]

4. Domain/subdomain: Point a domain or subdomain (e.g., s.yourdomain.com) to your VPS IP address. This will be your short-URL base.
5. Nginx/Caddy configuration: Configure your web server (Nginx or Caddy) on the VPS to proxy requests from your short-URL domain to the n8n webhook for redirects. (A detailed Nginx config is provided as sticky notes in the redirect workflow.)
6. Workflow setup: Import both provided n8n workflows (Telegram URL Shortener Creator and URL Redirect Handler). Activate both workflows.
Crucial: Set an environment variable in your n8n instance (or .env file) named SHORTENER_DOMAIN with the value of your short-URL domain (e.g., https://s.yourdomain.com). Refer to the sticky notes inside the workflows for detailed node configurations and expressions.
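For illustration, here is one way the auto-generated short codes and MongoDB documents could be produced. The actual logic lives in the workflow's nodes, so the helper names below are hypothetical; only the document shape matches the JSON structure shown above:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # URL-safe base-62 alphabet

def make_short_code(length: int = 6) -> str:
    """Random fallback code; the bot also accepts user-chosen custom codes."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def to_document(long_url: str, code: str) -> dict:
    """Shape matching the MongoDB document structure above (_id omitted;
    MongoDB assigns it on insert)."""
    return {"longUrl": long_url, "shortUrl": code}

doc = to_document("https://longurl.com/abcdefghijklm/", make_short_code())
print(doc)
```

On redirect, the webhook simply looks up `shortUrl` and answers with a 301/302 to `longUrl`.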
by Adnan
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

👥 Who is this for?

This workflow is designed for a variety of professionals who manage vendor relationships and data security. It is especially beneficial for:
• 🛡️ GRC (Governance, Risk, and Compliance) professionals: Streamline your risk-assessment processes
• 🔒 Information security teams: Quickly evaluate the security posture of third-party vendors
• 📋 Procurement departments: Enhance due diligence when onboarding new service providers
• 🚀 Startup founders: Efficiently assess vendors without a dedicated security team

This tool is perfect for anyone looking to automate the manual review of vendor websites, policies, and company data. ✨

🎯 What problem is this workflow solving?

Manual vendor due diligence is a time-consuming process that can take hours for a single vendor. This workflow automates over 80% of these manual tasks, which typically include:
• 🔍 Finding and organizing basic vendor information
• 🏢 Researching the company's background
• 📄 Collecting links to key documents like privacy policies, terms of service, and trust pages
• 📖 Manually reviewing each document to extract risk-relevant information
• 📊 Compiling all findings into a formatted report or spreadsheet for record-keeping

By leveraging Gemini for structured parsing and web scraping with live internet data, this workflow frees you up to focus on critical analysis and final review.
⚙️ What this workflow does

This end-to-end automated n8n workflow performs the following steps:
1. 📝 Intake: Begins with a simple form to capture the vendor's name, the business use case, and the type of data they will handle
2. 🔎 Background Research: Gathers essential background information on the company
3. ⚠️ Risk Analysis: Conducts comprehensive research on various risk-related topics
4. 🔗 URL Extraction: Finds and validates public URLs for privacy policies, security pages, and trust centers
5. 📈 Risk Assessment: Generates a structured risk score and a detailed assessment based on the collected content and context
6. 📤 Export: Exports the final results to a Google Sheet for easy access and record-keeping

🚀 Setup

To get started with this workflow, follow these steps:
1. 🔑 Configure credentials: Set up your API credentials for Gemini and Jina AI
2. 📊 Connect Google Sheets: Authenticate your Google Sheets account and configure the sheet where you want to store the results
3. 🔗 Download the Google Sheet template for your assessment output from here
4. ⚙️ (Optional) Customize prompts: Adjust the prompts within the workflow to better suit your specific needs
5. 🎯 (Optional) Align risk framework: Modify the risk questions to align with your organization's internal vendor-risk framework
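Before appending rows to the Google Sheet, it helps to sanity-check the AI's structured output. A hypothetical Python sketch (the field names and the 1-5 score range are illustrative assumptions, not the template's exact schema):

```python
REQUIRED_FIELDS = {"vendor", "use_case", "data_type", "risk_score", "assessment"}

def validate_assessment(row: dict) -> dict:
    """Reject malformed LLM output before it reaches the spreadsheet.
    Field names and score range are assumptions for illustration."""
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not 1 <= row["risk_score"] <= 5:
        raise ValueError("risk_score out of range")
    return row

row = validate_assessment({
    "vendor": "Acme Cloud",
    "use_case": "email delivery",
    "data_type": "customer PII",
    "risk_score": 3,
    "assessment": "Moderate risk; SOC 2 report available.",
})
```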
by Davide
Voiceflow is a no-code platform that allows you to design, prototype, and deploy conversational assistants across multiple channels, such as chat, voice, and phone, with advanced logic and natural language understanding. It supports integration with APIs, webhooks, and even tools like Twilio for phone agents. It's perfect for building customer support agents, voice bots, or intelligent assistants.

This workflow connects n8n and Voiceflow with tools like Google Calendar, Qdrant (vector database), OpenAI, and an order-tracking API to power a smart, multi-channel conversational agent. There are 3 main webhook endpoints in n8n that Voiceflow interacts with:
• n8n_order: receives user input related to order tracking, queries an API, and responds with tracking status.
• n8n_appointment: processes appointment booking, reformats date input using OpenAI, and creates a Google Calendar event.
• n8n_rag: handles general product/service questions using a RAG (Retrieval-Augmented Generation) system backed by Google Drive document ingestion, a Qdrant vector store for search, and OpenAI models for context-based answers.

Each webhook is connected to a corresponding "Capture" block inside Voiceflow, which sends data to n8n and waits for the response.

How It Works

This n8n workflow integrates Voiceflow for chatbot/voice interactions, Google Calendar for appointment scheduling, and RAG for knowledge-based responses. Here's the flow:
• Trigger: Three webhooks (n8n_order, n8n_appointment, n8n_rag) receive inputs from Voiceflow (chat, voice, or phone calls). Each webhook routes requests to specific functions:
  - Order Tracking: Fetches order status via an external API.
  - Appointment Scheduling: Uses OpenAI to parse dates, creates Google Calendar events, and confirms via WhatsApp.
  - RAG System: Queries a Qdrant vector store (populated with Google Drive documents) to answer customer questions using GPT-4.
• AI Processing:
  - OpenAI chains convert natural-language dates to Google Calendar formats and generate responses.
  - RAG pipeline: embeds documents (via OpenAI), stores them in Qdrant, and retrieves context-aware answers.
  - Voiceflow integration: routes responses back to Voiceflow for multi-channel delivery (chat, voice, or phone).
• Outputs:
  - Confirmation messages (e.g., "Event created successfully").
  - Dynamic responses for orders, appointments, or product support.

Setup Steps

Prerequisites:
• Google Calendar & Drive OAuth credentials
• Qdrant vector database (hosted or cloud)
• OpenAI API key (for GPT-4 and embeddings)

Configuration:
1. Qdrant setup: Run the "Create collection" and "Refresh collection" nodes to initialize the vector store. Populate it with documents using the Google Drive → Qdrant pipeline (embeddings generated via OpenAI).
2. Voiceflow webhooks: Link Voiceflow's "Captures" to n8n's webhook URLs (n8n_order, n8n_appointment, n8n_rag).
3. Google Calendar: Authenticate the Google Calendar node and set event templates (e.g., summary, description).
4. RAG system: Configure the Qdrant vector store and OpenAI embeddings nodes. Adjust the Retrieve Agent's system prompt for domain-specific queries (e.g., electronics-store support).
5. Optional: Add Twilio for phone-agent capabilities, and customize the OpenAI prompts for tone and accuracy.

PS: You can import a Twilio number and assign it to your agent to turn it into a phone agent.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
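The appointment branch ultimately hands the Google Calendar node an event with ISO 8601 start/end times. A sketch of that payload shape (the helper name, 30-minute duration, and timezone are illustrative assumptions):

```python
from datetime import datetime, timedelta

def calendar_event(summary: str, start: datetime, minutes: int = 30) -> dict:
    """Shape of the event the Google Calendar node creates once the
    OpenAI chain has parsed the spoken date into a datetime."""
    end = start + timedelta(minutes=minutes)
    return {
        "summary": summary,
        "start": {"dateTime": start.isoformat(), "timeZone": "Europe/Rome"},
        "end": {"dateTime": end.isoformat(), "timeZone": "Europe/Rome"},
    }

event = calendar_event("Support call", datetime(2025, 3, 14, 10, 0))
print(event["end"]["dateTime"])  # 2025-03-14T10:30:00
```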
by Fahmi Oktafian
This n8n workflow is a Telegram bot that lets users either generate AI images using the Pollinations API or generate blog articles using Gemini AI. Users simply type `image your prompt` or `blog your title`, and the bot responds with an AI-generated image or article.

Who it's for

This template is ideal for:
• Content creators and marketers who want to generate visual and written content quickly
• Telegram bot developers looking for real-world AI integration
• Educators or students automating content workflows
• Anyone managing content pipelines using Google Sheets

What it does / How it works

Telegram Interaction
• Trigger Telegram Message: Listens for new messages or button clicks via Telegram
• Classify Telegram Input: JavaScript logic classifies input as /start, /help, normal text, or callback
• Switch Input Type: Directs the flow based on the classification

Menu & Help
• Send Main Menu to User: Shows "Generate Image", "Blog Article", and "Help" options
• Switch Callback Selection: Routes based on the button pressed (image, blog, or help)
• Send Help Instructions: Sends Markdown instructions on how to use the bot

Input Validation
• Validate Command Format: Ensures input starts with `image` or `blog`
• Notify Invalid Input Format: If validation fails, informs the user of the correct format

Image Generator
• Prompt User for Image Description → runs when the user clicks Generate Image
• Detect Text-Based Input Type → detects whether the text is an image or blog request
• Switch Text Command Type → directs whether to generate an image or an article
• Show Typing for Image Generation → sends an "uploading photo..." typing status
• Build Image Generation URL → constructs the Pollinations API image URL from the prompt
• Download AI Image → makes an HTTP request to get the image
• Send Image Result to Telegram → sends the image to the user via Telegram
• Log Image Prompt to Google Sheets → logs the prompt, image URL, date, and user ID
• Upload Image to Google Drive → saves the image to a Google Drive folder

Blog Article Generator
• Prompt User for Blog Title → runs when the user clicks Blog Article
• Store Blog Prompt → saves the prompt for later use
• Log Blog Prompt to Google Sheets → writes the title + user ID to Google Sheets
• Send Article Style Options → offers Formal, Casual, or News style
• Store Selected Article Style → updates the row with the chosen style in Google Sheets
• Fetch Last User Prompt → finds the latest prompt submitted by this user
• Extract Last Blog Prompt → extracts the row for use in the AI request
• Gemini Chat Wrapper → handles input into the LangChain node for AI processing
• Generate Article with Gemini → calls Gemini to create a 3-paragraph blog post
• Parse Gemini Response → parses the JSON string to extract the title and content
• Send Article to Telegram → sends the blog article result back to the user
• Log Final Article to Google Sheets → updates the row with the final content and timestamp

Requirements
• Telegram bot (via @BotFather)
• Pollinations API (free and public endpoint)
• Google Sheets & Drive (OAuth credential setup in n8n)
• Google Gemini / PaLM API key via LangChain
• Self-hosted or cloud n8n setup

Setup Instructions
1. Clone the workflow and import it into your n8n instance
2. Set credentials: Telegram API, Google Sheets OAuth, Google Drive OAuth, Gemini (via LangChain)
3. Replace the Sheet ID with your own Google Sheet, the folder ID on Google Drive, and chat_id placeholders if needed (use expressions instead)
4. Deploy and send /start to your Telegram bot

🔧 Customization Tips
• Edit the Gemini prompt to adjust article length or tone
• Add extra style buttons like "SEO", "Story", or "Academic"
• Add image post-processing (e.g., compression, renaming)
• Add error-catching logic (e.g., if a Pollinations image fails)
• Store images with filenames based on timestamp/user

Security Considerations
• Use n8n credentials for all tokens (Telegram, Gemini, Sheets, Drive)
• Never hardcode your token inside HTTP nodes
• Do not expose real Google Sheet or Drive links in a shared version
• Use a Set node to collect all editable variables (like folder ID and sheet name)
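The "Build Image Generation URL" step amounts to URL-encoding the prompt into Pollinations' public image endpoint. A Python sketch (the width/height query parameters are an assumption; check Pollinations' current docs before relying on them):

```python
from urllib.parse import quote

def pollinations_url(prompt: str, width: int = 1024, height: int = 1024) -> str:
    """Build the image URL the workflow's URL-builder node constructs."""
    return (f"https://image.pollinations.ai/prompt/{quote(prompt)}"
            f"?width={width}&height={height}")

print(pollinations_url("a cat in a spacesuit"))
```

The Download AI Image node then simply GETs this URL and receives the rendered image bytes.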
by scrapeless official
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This n8n workflow helps you build a fully automated SEO content engine using Scrapeless and AI. It's designed for teams running international websites, such as SaaS products, e-commerce platforms, or content-driven businesses, who want to grow targeted search traffic through high-conversion content without relying on manual research or hit-or-miss topics.

The flow runs in three key phases:

🔍 Phase 1: Topic Discovery
Automatically find high-potential long-tail keywords based on a seed keyword using Google Trends via Scrapeless. Each keyword is analyzed for trend strength and categorized by priority (P0-P3) with the help of an AI agent.

🧠 Phase 2: Competitor Research
For each P0-P2 keyword, the flow performs a Google Search (via Deep SerpAPI) and extracts the top 3 organic results. Scrapeless then crawls each result to extract the full article content in clean Markdown. This gives you a structured, comparable view of how competitors are writing about each topic.

✍️ Phase 3: AI Article Generation
Using AI (OpenAI or another LLM), the workflow generates a complete SEO article draft, including:
• SEO title
• Slug
• Meta description
• Trend-based strategy summary
• Structured JSON-based article body with H2/H3 blocks

Finally, the article is stored in Supabase (or any other supported DB), making it ready for review, API-based publishing, or further automation.

Set up steps

This flow requires intermediate familiarity with n8n and API key setup. Full configuration may take 30-60 minutes.

✅ Prerequisites
• Scrapeless account (for Google Trends and web crawling)
• LLM provider (e.g. OpenAI or Claude)
• Supabase or Google Sheets (to store keywords & article output)

🧩 Required Credentials in n8n
• Scrapeless API key
• OpenAI (or other LLM) credentials
• Supabase or Google Sheets credentials

🔧 Setup Instructions (Simplified)
1. Input seed keyword: Edit the "Set Seed Keyword" node to define your niche, e.g., "project management".
2. Google Trends via Scrapeless: Use Scrapeless to retrieve "related queries" and their interest-over-time data.
3. Trend analysis with AI agent: The AI evaluates each keyword's trend strength and assigns a priority (P0-P3).
4. Filter & store keyword data: Group and sort keywords by priority, then store them in Google Sheets.
5. Competitor research: Use Deep SerpAPI to get the top 3 Google results, then crawl each using Scrapeless.
6. AI content generation: Feed competitor content + trend data into the AI and output a structured SEO blog article.
7. Store final article: Save the full article JSON (title, meta, slug, content) to Supabase.
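The AI agent's P0-P3 categorization can be thought of as bucketing keywords by trend strength. A purely illustrative Python sketch (the thresholds and the `rising` flag are invented; the template's agent applies its own prompt-driven criteria):

```python
def priority(trend_score: float, rising: bool) -> str:
    """Bucket a keyword into P0 (act now) through P3 (ignore).
    Thresholds are hypothetical, chosen only to show the shape of the rule."""
    if trend_score >= 75 and rising:
        return "P0"
    if trend_score >= 50:
        return "P1"
    if trend_score >= 25:
        return "P2"
    return "P3"

keywords = [("project management software", 82, True),
            ("gantt chart history", 10, False)]
for kw, score, rising in keywords:
    print(kw, priority(score, rising))
```

Only P0-P2 keywords would then move on to the competitor-research phase, mirroring the filter step above.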
by Incrementors
LinkedIn & Indeed Job Scraper with Bright Data & Google Sheets Export

Overview

This n8n workflow automates the process of scraping job listings from both LinkedIn and Indeed simultaneously, combining the results, and exporting the data to Google Sheets for comprehensive job-market analysis. It integrates with Bright Data for professional web scraping and Google Sheets for data storage, and provides intelligent status monitoring with retry mechanisms.

Workflow Components

1. 📝 Trigger Input Form
• Type: Form Trigger
• Purpose: Initiates the workflow with user-defined job search criteria
• Input Fields: City (required), Job Title (required), Country (required), Job Type (optional dropdown: Full-Time, Part-Time, Remote, WFH, Contract, Internship, Freelance)
• Function: Captures user requirements to start the dual-platform job scraping process

2. 🧠 Format Input for APIs
• Type: Code Node (JavaScript)
• Purpose: Prepares and formats user input for both LinkedIn and Indeed APIs
• Processing: Standardizes location and job-title formats, creates API-specific input structures, generates custom output field configurations
• Function: Ensures compatibility with both Bright Data datasets

3. 🚀 Start Indeed Scraping
• Type: HTTP Request (POST)
• Purpose: Initiates Indeed job scraping via Bright Data
• Endpoint: https://api.brightdata.com/datasets/v3/trigger
• Parameters: Dataset ID: gd_lpfll7v5hcqtkxl6l; Include errors: true; Type: discover_new; Discover by: keyword; Limit per input: 2
• Custom Output Fields: jobid, company_name, job_title, description_text, location, salary_formatted, company_rating, apply_link, url, date_posted, benefits

4. 🚀 Start LinkedIn Scraping
• Type: HTTP Request (POST)
• Purpose: Initiates LinkedIn job scraping via Bright Data (parallel execution)
• Endpoint: https://api.brightdata.com/datasets/v3/trigger
• Parameters: Dataset ID: gd_l4dx9j9sscpvs7no2; Include errors: true; Type: discover_new; Discover by: keyword; Limit per input: 2
• Custom Output Fields: job_posting_id, job_title, company_name, job_location, job_summary, job_employment_type, job_base_pay_range, apply_link, url, job_posted_date, company_logo

5. 🔄 Check Indeed Status
• Type: HTTP Request (GET)
• Purpose: Monitors Indeed scraping job progress
• Endpoint: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
• Function: Checks if the Indeed dataset scraping is complete

6. 🔄 Check LinkedIn Status
• Type: HTTP Request (GET)
• Purpose: Monitors LinkedIn scraping job progress
• Endpoint: https://api.brightdata.com/datasets/v3/progress/{snapshot_id}
• Function: Checks if the LinkedIn dataset scraping is complete

7. ⏱️ Wait Nodes (60 seconds each)
• Type: Wait Node
• Purpose: Implements an intelligent polling mechanism
• Duration: 1 minute
• Function: Pauses the workflow before rechecking scraping status to prevent API overload

8. ✅ Verify Indeed Completion
• Type: IF Condition
• Purpose: Evaluates Indeed scraping completion status
• Condition: status === "ready"
• Logic: True: proceeds to data validation; False: loops back to the status check with a wait

9. ✅ Verify LinkedIn Completion
• Type: IF Condition
• Purpose: Evaluates LinkedIn scraping completion status
• Condition: status === "ready"
• Logic: True: proceeds to data validation; False: loops back to the status check with a wait

10. 📊 Validate Indeed Data
• Type: IF Condition
• Purpose: Ensures Indeed returned job records
• Condition: records !== 0
• Logic: True: proceeds to fetch Indeed data; False: skips Indeed data retrieval

11.
11. 📊 Validate LinkedIn Data
- **Type**: IF Condition
- **Purpose**: Ensures LinkedIn returned job records
- **Condition**: records !== 0
- **Logic**: True → proceeds to fetch LinkedIn data; False → skips LinkedIn data retrieval

12. 📥 Fetch Indeed Data
- **Type**: HTTP Request (GET)
- **Purpose**: Retrieves the final Indeed job listings
- **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
- **Format**: JSON
- **Function**: Downloads the completed Indeed job data

13. 📥 Fetch LinkedIn Data
- **Type**: HTTP Request (GET)
- **Purpose**: Retrieves the final LinkedIn job listings
- **Endpoint**: https://api.brightdata.com/datasets/v3/snapshot/{snapshot_id}
- **Format**: JSON
- **Function**: Downloads the completed LinkedIn job data

14. 🔗 Merge Results
- **Type**: Merge Node
- **Purpose**: Combines Indeed and LinkedIn job results
- **Mode**: Merge all inputs
- **Function**: Creates a unified dataset from both platforms

15. 📊 Save to Google Sheet
- **Type**: Google Sheets Node
- **Purpose**: Exports the combined job data for analysis
- **Operation**: Append rows
- **Target**: "Compare" sheet in the specified Google Sheet document
- **Data Mapping**: Job Title, Company Name, Location, Job Detail (description), Apply Link, Salary, Job Type, Discovery Input

Workflow Flow

    Input Form → Format APIs → [Indeed Trigger] + [LinkedIn Trigger]
                                      ↓                  ↓
                                Check Status       Check Status
                                      ↓                  ↓
                                  Wait 60s           Wait 60s
                                      ↓                  ↓
                                Verify Ready       Verify Ready
                                      ↓                  ↓
                               Validate Data      Validate Data
                                      ↓                  ↓
                                Fetch Indeed      Fetch LinkedIn
                                      ↓                  ↓
                                      └── Merge Results ─┘
                                              ↓
                                     Save to Google Sheet

Configuration Requirements

API Keys & Credentials
- **Bright Data API Key**: Required for both LinkedIn and Indeed scraping
- **Google Sheets OAuth2**: For data storage and export access
- **n8n Form Webhook**: For user input collection

Setup Parameters
- **Google Sheet ID**: Target spreadsheet identifier
- **Sheet Name**: "Compare" tab for job data export
- **Form Webhook ID**: User input form identifier
- **Dataset IDs**: Indeed gd_lpfll7v5hcqtkxl6l, LinkedIn gd_l4dx9j9sscpvs7no2

Key Features

Dual Platform Scraping
- Simultaneous LinkedIn and Indeed job searches
- Parallel processing for faster results
- Comprehensive job market coverage
- Platform-specific field extraction

Intelligent Status Monitoring
- Real-time scraping progress tracking
- Automatic retry mechanisms with 60-second intervals
- Data validation before processing
- Error handling and timeout management

Smart Data Processing
- Unified data format from both platforms
- Intelligent field mapping and standardization
- Duplicate detection and removal
- Rich metadata extraction

Google Sheets Integration
- Automatic data export and storage
- Organized comparison format
- Historical job search tracking
- Easy sharing and collaboration

Form-Based Interface
- User-friendly job search form
- Flexible job type filtering
- Multi-country support
- Real-time workflow triggering

Use Cases

Personal Job Search
- Comprehensive multi-platform job hunting
- Automated daily job searches
- Organized opportunity comparison
- Application tracking and management

Recruitment Services
- Client job search automation
- Market availability assessment
- Competitive salary analysis
- Bulk candidate sourcing

Market Research
- Job market trend analysis
- Salary benchmarking studies
- Skills demand assessment
- Geographic opportunity mapping

HR Analytics
- Competitor hiring intelligence
- Role requirement analysis
- Compensation benchmarking
- Talent market insights

Technical Notes
- **Polling Interval**: 60-second status checks for both platforms
- **Result Limiting**: Maximum of 2 jobs per input per platform
- **Data Format**: JSON with structured field mapping
- **Error Handling**: Comprehensive error tracking in all API requests
- **Retry Logic**: Automatic status rechecking until completion
- **Country Support**: Adaptable domain selection (indeed.com, fr.indeed.com)
- **Form Validation**: Required fields with optional job type filtering
- **Merge Strategy**: Combines all results from both platforms
- **Export Format**: Standardized Google Sheets columns for easy analysis

Sample Data Output

| Field | Description | Example |
|-------|-------------|---------|
| Job Title | Position title | "Senior Software Engineer" |
| Company Name | Hiring organization | "Tech Solutions Inc." |
| Location | Job location | "San Francisco, CA" |
| Job Detail | Full description | "We are seeking a senior developer..." |
| Apply Link | Direct application URL | "https://company.com/careers/123" |
| Salary | Compensation info | "$120,000 - $150,000" |
| Job Type | Employment details | "Full-time, Remote" |

Setup Instructions
1. Import Workflow: Copy the JSON configuration into n8n
2. Configure Bright Data: Add API credentials for both datasets
3. Setup Google Sheets: Create the target spreadsheet and configure OAuth
4. Update References: Replace placeholder IDs with your actual values
5. Test Workflow: Submit a test form and verify the data export
6. Activate: Enable the workflow and share the form URL with users

For any questions or support, please contact: info@incrementors.com or fill out this form: https://www.incrementors.com/contact-us/
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

This n8n-powered automation uses Bright Data's MCP Client to extract real-time data from a price-drop site listing Amazon products, including price changes and related product details. The extracted data is enriched with structured data transformation, content summarization, and sentiment analysis using the Google Gemini LLM. The Amazon Price Drop Intelligence Engine is designed for:
- **Ecommerce Analysts** who need timely updates on competitor pricing trends
- **Brand Managers** seeking to understand consumer sentiment around pricing
- **Data Scientists** building pricing models or enrichment pipelines
- **Affiliate Marketers** looking to optimize campaigns based on dynamic pricing
- **AI Developers** automating product intelligence pipelines

What problem is this workflow solving?

This workflow solves several key pain points:
- **Reliable Scraping**: Uses Bright Data MCP, a managed crawling platform that handles proxies, captchas, and site structure changes automatically.
- **Insight Generation**: Transforms unstructured HTML into structured data, and then into human-readable summaries, using the Google Gemini LLM.
- **Sentiment Context**: Goes beyond raw pricing data to reveal how customers feel about the price change, helping businesses and researchers measure consumer reaction.
- **Automated Reporting**: Aggregates and stores data for easy access and downstream automation (e.g., dashboards, notifications, pricing models).

What this workflow does

Scrape the price-drop site with Bright Data MCP
The workflow begins by scraping the targeted price-drop site for Amazon listings using Bright Data's Model Context Protocol (MCP).
You can configure this to target:

Structured Data Extraction
Once the HTML content is retrieved, Google Gemini is employed to parse and structure the product information (title, price, discount, brand, ratings).

Summarization & Sentiment Analysis
The extracted data is passed through an LLM chain to:
- Generate a concise summary of the product and its recent price movement
- Perform sentiment analysis on user reviews and public perception

Store the Results
- Save to disk for archiving or bulk processing
- Update a Google Sheet, making the data instantly shareable with your team or ready to integrate into a BI dashboard

Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup
1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access it through Vertex AI or a proxy).
6. In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP Server. Make sure to add the Bright Data API token in the Environments textbox as API_TOKEN=<your-token>

How to customize this workflow to your needs
- **Target different platforms**: Swap Amazon for Walmart, eBay, or any ecommerce source using Bright Data's flexible scraping infrastructure.
- **Enrich with more LLM tasks**: Add brand tone analysis, category classification, or competitive benchmarking using Gemini prompts.
- **Visualize output**: Pipe the Google Sheet to Looker Studio, Tableau, or Power BI.
- **Notification integrations**: Add Slack, Discord, or email notifications for price drop alerts.
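For reference, the MCP Client (STDIO) credential described in the Setup section typically reduces to a command-plus-environment pair along these lines. A sketch, not the exact n8n form: the package name and API_TOKEN variable come from the steps above, while the optional WEB_UNLOCKER_ZONE entry is an assumption that simply names the mcp_unlocker zone created earlier.

```json
{
  "command": "npx",
  "args": ["-y", "@brightdata/mcp"],
  "env": {
    "API_TOKEN": "<your-bright-data-api-token>",
    "WEB_UNLOCKER_ZONE": "mcp_unlocker"
  }
}
```

In the n8n credential form, the command and arguments go into the MCP Client (STDIO) fields and the env entries into the Environments textbox, one KEY=value per line.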