by CustomJS
# n8n Workflow: Automating Website Screenshots from Google Sheets

This n8n workflow captures screenshots of websites listed in a Google Sheet and saves them to Google Drive using the CustomJS PDF Toolkit (@custom-js/n8n-nodes-pdf-toolkit).

## Features
- **Monitors** a Google Sheet for new rows with website URLs.
- **Captures** screenshots of the websites using the CustomJS PDF Toolkit.
- **Uploads** the screenshots to a specified Google Drive folder.

## Notice
Community nodes can only be installed on self-hosted instances of n8n.

## Requirements
- **Self-hosted** n8n instance
- A Google Sheets document containing website URLs and titles
- A Google Drive folder to store the screenshots
- A CustomJS API key for website screenshots
- **n8n credentials** for Google Sheets and Google Drive

## Workflow Steps
1. **Google Sheets Trigger:** Monitors a specified sheet for new rows and extracts the URL and title from each row.
2. **Website Screenshot Node:** Uses the CustomJS PDF Toolkit to take a screenshot of the given URL.
3. **Google Drive Upload:** Saves the screenshot to a specific Google Drive folder, using the title column as the filename (a filename-sanitizing sketch follows at the end of this description).

## Setup Guide
1. **Connect Google Sheets.** Ensure your Google Sheet has a column named Url for website URLs and Name for website names, then set up Google Sheets credentials in n8n.
2. **Configure the CustomJS API.** Sign up at CustomJS, retrieve your API key from the profile page, and add it as n8n credentials.
3. **Set up Google Drive.** Create a folder in Google Drive to store screenshots, copy the folder ID, and set it in the Google Drive node in n8n.

## Perfect for
- **Website monitoring**
- **Generating visual archives of web pages**
- **Automating content curation**

This workflow streamlines the process of capturing and organizing website screenshots.
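If your sheet titles contain characters that Drive filenames handle poorly, a small Code node between the trigger and the upload can clean them up. This is a minimal sketch, assuming each row arrives with a Name (or Title) field as described in the setup guide; the field names are assumptions, so match them to your actual sheet headers.

```js
// n8n Code node (Run Once for All Items): derive a safe filename from the
// sheet's title column. The "Name"/"Title" field names follow the setup guide
// above but are assumptions; adjust them to your actual sheet headers.
return $input.all().map(item => {
  const title = item.json.Name || item.json.Title || 'untitled';
  // Strip characters that are awkward in filenames, then collapse whitespace.
  const safe = title.replace(/[\\/:*?"<>|]/g, '').trim().replace(/\s+/g, '-');
  return { json: { ...item.json, filename: `${safe}.png` } };
});
```

The `filename` field can then be referenced from the Google Drive node with an expression.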
by Ranjan Dailata
**Notice:** Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for
This n8n-powered automation uses Bright Data's MCP Client to extract real-time data from a price-drop site listing Amazon products, including price changes and related product details. The extracted data is enriched with structured data transformation, content summarization, and sentiment analysis using the Google Gemini LLM.

The Amazon Price Drop Intelligence Engine is designed for:
- **Ecommerce analysts** who need timely updates on competitor pricing trends
- **Brand managers** seeking to understand consumer sentiment around pricing
- **Data scientists** building pricing models or enrichment pipelines
- **Affiliate marketers** looking to optimize campaigns based on dynamic pricing
- **AI developers** automating product intelligence pipelines

## What problem is this workflow solving?
This workflow solves several key pain points:
- **Reliable scraping:** Uses Bright Data MCP, a managed crawling platform that handles proxies, captchas, and site structure changes automatically.
- **Insight generation:** Transforms unstructured HTML into structured data, and then into human-readable summaries, using the Google Gemini LLM.
- **Sentiment context:** Goes beyond raw pricing data to reveal how customers feel about a price change, helping businesses and researchers measure consumer reaction.
- **Automated reporting:** Aggregates and stores data for easy access and downstream automation (e.g., dashboards, notifications, pricing models).

## What this workflow does
1. **Scrape the price-drop site with Bright Data MCP.** The workflow begins by scraping a targeted price-drop site for Amazon listings using Bright Data's Model Context Protocol (MCP). The target site is configurable.
2. **Structured data extraction.** Once the HTML content is retrieved, Google Gemini parses and structures the product information (title, price, discount, brand, ratings).
3. **Summarization and sentiment analysis.** The extracted data is passed through an LLM chain to generate a concise summary of the product and its recent price movement, and to perform sentiment analysis on user reviews and public perception.
4. **Store the results.** Results are saved to disk for archiving or bulk processing, and a Google Sheet is updated, making them instantly shareable with your team or ready for a BI dashboard (a field-normalization sketch follows at the end of this description).

## Pre-conditions
- Knowledge of the Model Context Protocol (MCP) is essential; please read this blog post: model-context-protocol.
- A Bright Data account, set up as described in the Setup section below.
- A Google Gemini API key (visit Google AI Studio).
- The Bright Data MCP server, @brightdata/mcp, installed.
- The n8n-nodes-mcp community node installed.

## Setup
1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the MCP Client (STDIO) credentials to connect to the Bright Data MCP server as shown below.
Make sure to add your Bright Data API token in the Environments textbox above as `API_TOKEN=<your-token>`.

## How to customize this workflow to your needs
- **Target different platforms:** Swap Amazon for Walmart, eBay, or any ecommerce source using Bright Data's flexible scraping infrastructure.
- **Enrich with more LLM tasks:** Add brand tone analysis, category classification, or competitive benchmarking using Gemini prompts.
- **Visualize output:** Pipe the Google Sheet into Looker Studio, Tableau, or Power BI.
- **Notification integrations:** Add Slack, Discord, or email notifications for price-drop alerts.
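Before the Google Sheets node, it can help to coerce the LLM's extracted fields into consistent types. This is a sketch only: the field names (title, price, discount, brand, rating) mirror the extraction list above but are assumptions about what your Gemini prompt actually returns.

```js
// n8n Code node sketch: normalize extracted product JSON before writing to
// Google Sheets. Field names are assumptions; map them to your Gemini output.
return $input.all().map(item => {
  const p = item.json;
  return {
    json: {
      title: (p.title || '').trim(),
      brand: (p.brand || '').trim(),
      // Coerce "$1,299.99"-style strings into numbers for easier charting.
      price: Number(String(p.price ?? '').replace(/[^0-9.]/g, '')) || null,
      discountPct: Number(String(p.discount ?? '').replace(/[^0-9.]/g, '')) || null,
      rating: Number(p.rating) || null,
      scrapedAt: new Date().toISOString(),
    },
  };
});
```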
by Ranjan Dailata
## Who this is for
This workflow is designed for:
- **Marketing analysts, SEO specialists, and content strategists** who want automated intelligence on their online competitors.
- **Growth teams** that need quick insights from SERPs (Search Engine Results Pages) without manual data scraping.
- **Agencies** managing multiple clients' SEO presence and tracking competitive positioning in real time.

## What problem is this workflow solving?
Manual competitor research is time-consuming, fragmented, and often lacks actionable insights. This workflow automates the entire process by:
- Fetching SERP results from multiple search engines (Google, Bing, Yandex, DuckDuckGo) using Thordata's Scraper API.
- Using OpenAI GPT-4.1-mini to analyze, summarize, and extract keyword opportunities, topic clusters, and competitor weaknesses.
- Producing structured, JSON-based insights ready for dashboards or reports.

Essentially, it transforms raw SERP data into strategic marketing intelligence, saving hours of research time.

## What this workflow does
1. **Manual trigger.** Initiates the process on demand when you click "Execute Workflow."
2. **Set the input query.** The "Set Input Fields" node defines your search query, such as "Top SEO strategies for e-commerce in 2025."
3. **Multi-engine SERP fetching.** Four HTTP Request tools send the query to the Thordata Scraper API to retrieve results from Google, Bing, Yandex, and DuckDuckGo. Each uses Bearer authentication configured via the "Thordata SERP Bearer Auth Account."
4. **AI agent processing.** The LangChain AI Agent orchestrates the data flow, combining inputs and preparing them for structured analysis.
5. **SEO analysis.** The SEO Analyst node (powered by GPT-4.1-mini) parses SERP results into a structured schema, extracting competitor domains, page titles and content types, ranking positions, keyword overlaps, traffic share estimations, and strengths and weaknesses.
6. **Summarization.** The "Summarize the content" node distills complex data into a concise executive summary using GPT-4.1-mini.
7. **Keyword and topic extraction.** The "Keyword and Topic Analysis" node extracts primary and secondary keywords, topic clusters and content gaps, SEO strength scores, and competitor insights.
8. **Output formatting.** The Structured Output Parser ensures results are clean, validated JSON objects ready for further integration (e.g., Google Sheets, Notion, or dashboards).

## Setup
Prerequisites:
- **n8n Cloud or self-hosted instance**
- **Thordata Scraper API key** (for SERP data retrieval)
- **OpenAI API key** (for GPT-based reasoning)

Setup steps:
1. **Add credentials.** Go to Credentials → Add New → HTTP Bearer Auth and paste your Thordata API token. Add OpenAI API credentials for the GPT model.
2. **Import the workflow.** Copy the provided JSON or upload it into your n8n instance.
3. **Set the input.** In the "Set the Input Fields" node, replace the example query with your desired topic, e.g. "Google Search for Top SEO strategies for e-commerce in 2025."
4. **Execute.** Click "Execute Workflow" to run the analysis.

## How to customize this workflow to your needs
- **Modify the search query:** Change the search_query variable in the Set node to any target keyword or topic.
- **Change the AI model:** In the OpenAI Chat Model nodes, switch from gpt-4.1-mini to another model for better quality or lower cost.
- **Extend the analysis:** Edit the JSON schema in the "Information Extractor" nodes to include sentiment analysis of top pages, SERP volatility metrics, or content freshness indicators (a schema sketch follows below).
- **Export results:** Connect the output to **Google Sheets / Airtable** for analytics, **Notion / Slack** for team reporting, or a **webhook / database** for automated storage.

## Summary
This workflow creates an AI-powered competitor intelligence system inside n8n by blending:
- Real-time SERP scraping (Thordata)
- Automated AI reasoning (OpenAI GPT-4.1-mini)
- Structured data extraction (LangChain Information Extractors)
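As a concrete starting point for the extension above, here is one shape such a schema could take, written as a JavaScript object you could adapt and paste into the extractor node. Every field name here is an illustrative assumption, not the template's shipped schema.

```js
// Sketch of an extended extraction schema for the "Information Extractor"
// node. All field names are illustrative assumptions; rename to taste.
const extendedSchema = {
  type: 'object',
  properties: {
    primary_keywords: { type: 'array', items: { type: 'string' } },
    topic_clusters: { type: 'array', items: { type: 'string' } },
    content_gaps: { type: 'array', items: { type: 'string' } },
    sentiment_of_top_pages: { type: 'string', enum: ['positive', 'neutral', 'negative'] },
    serp_volatility: { type: 'number', description: '0-1, higher means more ranking churn' },
    content_freshness_days: { type: 'number', description: 'median age of top results' },
  },
  required: ['primary_keywords', 'topic_clusters'],
};
console.log(JSON.stringify(extendedSchema, null, 2));
```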
by Jimleuk
This template attempts to replicate OpenAI's DeepResearch feature which, at the time of writing, is only available to their Pro subscribers.

> An agent that uses reasoning to synthesize large amounts of online information and complete multi-step research tasks for you. (Source)

Though the inner workings of DeepResearch have not been made public, it is presumed the feature relies on the ability to deep-search the web, scrape web content, and invoke reasoning models to generate reports. All of which n8n is really good at! Using this workflow, n8n users can enjoy a variation of the DeepResearch experience for themselves and their teams at a fraction of the cost. Better yet, they can learn from and customise this Deep Research template for their businesses and/or organisations.

Check out the generated reports here: https://jimleuk.notion.site/19486dd60c0c80da9cb7eb1468ea9afd?v=19486dd60c0c805c8e0c000ce8c87acf

## How it works
- A form first captures the user's research query and how deep they'd like the researcher to go.
- Once submitted, a blank Notion page is created which will later hold the final report, and the researcher gets to work.
- The user's query goes through a recursive series of web searches and web scraping to collect data on the research topic and generate partial learnings.
- Once complete, all learnings are combined and given to a reasoning LLM to generate the final report.
- The report is then written to the placeholder Notion page created earlier.

## How to use
1. Duplicate this Notion database template and make sure all Notion-related nodes point to it.
2. Sign up for an APIFY.com API key for web search and scraping services.
3. Ensure you have access to OpenAI's o3-mini model. Alternatively, switch this out for the o1 series.
4. You must publish this workflow and ensure the form URL is publicly accessible.

## On depth & breadth configuration
For more detailed reports, increase depth and breadth, but be warned: the workflow will take exponentially longer and cost more to complete (a back-of-envelope cost model follows at the end of this description). The recommended defaults are usually good enough.
- Depth=1 & Breadth=2 takes about 5-10 minutes.
- Depth=1 & Breadth=3 takes about 15-20 minutes.
- Depth=3 & Breadth=5 can take 2+ hours!

## Customising this workflow
- I deliberately chose not to use AI-powered scrapers like Firecrawl as I felt these were quite costly and quotas would be quickly exhausted. However, feel free to switch to whichever web search and scraping services suit your environment.
- Maybe you decide not to source the web at all, and data collection comes from internal documents instead. This template gives you the freedom to change that.
- Experiment with different reasoning/thinking models such as DeepSeek and Google's Gemini 2.0.
- Finally, the LLM prompts could definitely be improved. Refine them to fit your use case.

## Credits
This template is largely based on the work of David Zhang (dzhng) and his open-source implementation of Deep Research: https://github.com/dzhng/deep-research
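To see why runtime grows so fast, note that a recursive researcher fans out roughly `breadth` new searches per branch at each level, so total queries are about breadth + breadth² + ... + breadth^depth. The exact fan-out in this template may differ; this is only a back-of-envelope model consistent with the timings listed above.

```js
// Rough cost model for recursive research: queries per run given depth and
// breadth, assuming each branch spawns `breadth` sub-searches per level.
function estimatedQueries(depth, breadth) {
  let total = 0;
  let layer = 1;
  for (let d = 0; d < depth; d++) {
    layer *= breadth;   // branches at this level
    total += layer;     // accumulate searches across all levels
  }
  return total;
}
console.log(estimatedQueries(1, 2)); // 2   -> roughly the 5-10 min case
console.log(estimatedQueries(1, 3)); // 3   -> roughly the 15-20 min case
console.log(estimatedQueries(3, 5)); // 155 -> the 2+ hour case
```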
by Luciano Gutierrez
# Healthcare Clinic Assistant with WhatsApp and Telegram Integration

Version: 1.1.0
n8n Version: 1.88.0+
License: MIT

## 📋 Description
A comprehensive and modular automation workflow designed for healthcare clinics. It manages patient communication, appointment scheduling, confirmations, rescheduling, internal tasks, and media processing by integrating WhatsApp, Telegram, Google Calendar, and Google Tasks, combined with AI-powered agents for maximum efficiency. This system guarantees proactive communication with patients, streamlined internal clinic management, and consistent data synchronization across platforms.

## 🌟 Key Features
- 🤖 **AI-powered specialized agents:** Distinct agents handle WhatsApp patient support, appointment confirmations, and internal rescheduling tasks.
- 📱 **Omnichannel communication:** Handles patient interactions via WhatsApp and staff commands via Telegram.
- 📅 **Google Calendar appointment management:** Full synchronization for creating, updating, canceling, and confirming appointments.
- 📋 **Task management with Google Tasks:** Manages shopping lists and administrative tasks efficiently through staff Telegram requests.
- 🔔 **Automated appointment reminders:** A daily-triggered system proactively sends WhatsApp confirmations to patients with next-day appointments.
- 🖼️ **Intelligent media processing:** Transcribes audio, extracts text from images, and processes documents using OpenAI and OpenRouter AI models.
- 🛡️ **Escalation to human support:** Automatically detects sensitive or urgent cases and escalates them to a human agent when needed.

## 🏥 Use Cases
- **Patient communication:** Respond to inquiries and schedule, reschedule, and confirm appointments seamlessly via WhatsApp.
- **Internal clinic operations:** Allow staff to modify appointments or add shopping-list reminders directly from Telegram.
- **Appointment confirmation system:** Automatically contacts patients one day before their appointments for confirmation or rescheduling.
- **Task and reminder management:** Keeps clinic operations organized through automatic task management with Google Tasks.

## 🛠️ Technical Implementation

### WhatsApp Patient Interaction Flow
1. **Webhook reception:** Incoming WhatsApp messages are captured via an Evolution API webhook.
2. **Message classification:** Messages are intelligently routed by content type (text, image, audio, document).
3. **Media content processing:** Audio is downloaded, converted, and transcribed via OpenAI Whisper; images are analyzed and their text/descriptions extracted with the OpenAI Vision model.
4. **Patient request handling:** A specialized WhatsApp assistant responds appropriately using AI prompts.
5. **Outbound message formatting:** Ensures messages comply with WhatsApp format standards.
6. **Message delivery:** Sends responses back via the Evolution API.

### Telegram Staff Management Flow
1. **Telegram webhook reception:** Captures messages from authorized staff accounts.
2. **Internal assistant processing:** Appointment rescheduling identifies and updates appointments through MCP Google Calendar; task creation adds new entries to the clinic's shopping list using Google Tasks.
3. **Notifications and confirmations:** Sends confirmations back to staff through Telegram.

### Appointment Reminder System
1. **Daily trigger activation:** Fires every weekday at 08:00 AM.
2. **Calendar scraping:** Lists the next day's appointments from Google Calendar (a date-window sketch follows at the end of this description).
3. **Patient contact:** Sends a WhatsApp confirmation message for each appointment.
4. **Response management:** Routes confirmation or rescheduling replies to the appropriate agents.

## ⚙️ Setup Instructions
1. **Import the workflow:** n8n → Workflows → Import from File → upload this JSON file.
2. **Configure credentials:**
   - Evolution API (WhatsApp communication)
   - Telegram Bot API (staff communication)
   - Google Calendar OAuth2 (appointment management)
   - Google Tasks OAuth2 (task management)
   - OpenAI and OpenRouter APIs (AI agents)
   - PostgreSQL database (chat memory)
3. **Set sensitive variables.** Replace placeholder values:
   - {sua instância aqui} → Evolution API instance name
   - {número_whatsapp} → WhatsApp numbers
   - {url_do_servidor} → server URLs
   - {a sua apikey aqui} → API keys
   - {seu_calendario} → Google Calendar ID
4. **Customize AI prompts.** Adjust system prompts to fit your clinic's tone, service style, and patient communication guidelines. Set clinic operating hours, escalation rules, and cancellation procedures in the AI prompts.
5. **Activate and test.** Simulate patient messages via WhatsApp, test Telegram commands from staff members, and validate daily appointment reminders using the scheduled trigger.

## 🏷️ Tags
Healthcare, Clinic Management, WhatsApp Integration, Telegram Bot, Appointment Scheduling, Google Calendar, Google Tasks, AI Agents, n8n Automation

## 📚 Technical Notes
- PostgreSQL is used for persistent chat memory across sessions.
- Multiple AI models are used: OpenAI GPT-4.1-nano, OpenAI GPT-4.1-mini, and Google Gemini 2.0 and 2.5.
- Full media content processing is supported (audio, image, text).
- Compliant escalation workflows ensure patient safety and proper handoff to human staff when necessary.
- All sensitive patient data is securely stored inside calendar event descriptions for easy retrieval by agents.

## 📜 License
This workflow is provided under the MIT License. Feel free to adapt and customize it for your clinic's specific needs.
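For reference, computing the "next day" window that the reminder system queries Google Calendar for might look like the sketch below. This is an assumption about how such a node could be wired, not the template's exact implementation; in particular, time-zone handling (clinic-local vs. server time) is left to you.

```js
// n8n Code node sketch: build tomorrow's time window for a Google Calendar
// "Get Many" query. timeMin/timeMax follow the Google Calendar API convention;
// reference them from the calendar node with expressions.
const now = new Date();
const start = new Date(now);
start.setDate(now.getDate() + 1);
start.setHours(0, 0, 0, 0);       // tomorrow, 00:00:00 local time
const end = new Date(start);
end.setHours(23, 59, 59, 999);    // tomorrow, end of day
return [{ json: { timeMin: start.toISOString(), timeMax: end.toISOString() } }];
```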
by Jez
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Uncover new business leads with this AI-powered Prospect Discovery Agent! Given a business type and location, this n8n workflow acts as a specialized intelligent assistant that uses multiple search strategies to identify a list of potential prospect companies and their websites.

Stop manually trawling through search results! This agent automates the initial phase of lead generation by:
- Understanding your target business profile (type, location, keywords).
- Strategically using web search tools (Brave Search, Google Gemini Search) to find relevant businesses.
- Performing quick validations to confirm relevance.
- Returning a clean, structured JSON list of prospect names and their website URLs.

## How it works
The workflow is built around an AI agent powered by Google Gemini. The agent is equipped with these tools:
- **Brave Web Search:** For broad initial sourcing of potential business candidates.
- **Google Gemini Search:** For advanced, context-aware discovery and finding businesses mentioned in various online sources.
- **Brave Local Search (selective):** For quick verification of local presence or finding website URLs for identified names.
- **Jina AI Web Page Scraper (very selective):** For extremely rapid relevance checks on uncertain websites by scanning page content for keywords.

The agent's system prompt guides it to use these tools efficiently to build a list of prospects without getting bogged down in deep research on any single one at this discovery stage.

## Use Cases
- **Lead generation:** Automatically generate lists of potential clients based on industry and location.
- **Market research:** Identify key players or types of businesses in a specific geographical area.
- **Sales development:** Provide SDRs with initial lists of companies to research further.
- **Called as a sub-workflow:** Designed to be easily integrated as a "tool" into more complex orchestrating AI agents (e.g., a BNI Pitch Planner that first needs to identify who to target).

## Setup
1. Import the workflow.
2. Configure credentials. You'll need n8n credentials for:
   - Google Gemini (for the chat model and the Gemini Search / Vertex AI Search tool)
   - Brave Search (e.g., via Smithery MCP, or adapt if you have direct API access)
   - Jina AI (for the web scraper)
   Assign these to the respective nodes.
3. Review the system prompt. The prospect_discovery_agent node contains a detailed system prompt; you can fine-tune it to adjust its search strategies or the strictness of its matching.

## Inputs
This workflow is triggered by an "Execute Workflow Trigger" node (prospect_discovery_workflow). It expects the following inputs:
- business_type (string): e.g., "artisan bakery"
- location_query (string): e.g., "Portland, Oregon"
- desired_num_prospects (number): e.g., 5
- additional_keywords (string, optional): e.g., "organic, gluten-free"

## To use (as a sub-workflow/tool)
This workflow is typically called by another n8n workflow (e.g., using a "Tool Workflow" node from the LangChain nodes). The calling workflow provides the inputs listed above (a sketch of such an input item follows below). The Prospect Discovery workflow then executes, and its final node (the prospect_discovery_agent) outputs a JSON array of found prospects, like:

```json
[
  { "business_name": "Rose Petal Bakery", "website_url": "https://rosepetalbakerypdx.com" },
  { "business_name": "The Daily Bread Artisans", "website_url": "https://dailybreadpdx.com" }
]
```

If no prospects are found, it returns an empty array [].
This template provides a powerful and focused tool for automating the initial stages of prospect identification.
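If you are wiring this up as a sub-workflow, a Code node in the calling workflow could prepare the input item like so. The keys match the documented inputs above; the example values are only placeholders.

```js
// Sketch of the input item a calling workflow might pass to the Execute
// Workflow / Tool Workflow node. Keys match the documented inputs above.
return [{
  json: {
    business_type: 'artisan bakery',
    location_query: 'Portland, Oregon',
    desired_num_prospects: 5,
    additional_keywords: 'organic, gluten-free', // optional
  },
}];
```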
by Marcial Ambriz
Remixed from Solomon's "Backup your workflows to GitHub" work. Check out his templates.

## How it works
This workflow backs up your workflows to GitHub:
- It uses the n8n API node to export all workflows.
- It then loops over the data and checks GitHub to see whether a file exists for the workflow's ID.
- Once checked, it will update the file on GitHub if it exists, create a new file if it doesn't, or ignore it if the content is the same (sketched below).
- In addition, it also checks whether any workflows have been deleted from n8n. If a workflow no longer exists in n8n, the corresponding file is removed from the repository to keep everything in sync.

## Who is this for?
People who want to back up their workflows outside the server, whether for safety or to migrate to another server.
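The decision at the heart of the loop might look like the following Code-node sketch. The `githubFile` and `workflow` field names are illustrative assumptions about how the fetched repo file and the n8n API export are merged into one item; the template's actual node wiring may differ.

```js
// Sketch of the per-workflow create/update/skip decision. Field names
// (githubFile, workflow) are assumptions, not the template's exact shape.
return $input.all().map(item => {
  const { githubFile, workflow } = item.json;
  let action;
  if (!githubFile) {
    action = 'create';   // no file for this workflow ID yet
  } else if (githubFile.content === JSON.stringify(workflow, null, 2)) {
    action = 'skip';     // unchanged, ignore
  } else {
    action = 'update';   // content drifted, push a new version
  }
  return { json: { id: workflow.id, action } };
});
```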
by I versus AI
WATCH THE n8n STARTER GUIDE 👇

This template is featured in the n8n Starter Guide series. The template is free, but it comes with two additional PDFs and a Quick Start video if you grab the full download pack on Gumroad.

## How it works
This template is a visual map of many useful n8n nodes. It groups nodes like triggers, AI tools, and app connectors onto the canvas. Explore the sections to learn about different nodes and easily copy them into your own workflows. It acts as a handy visual reference guide.

## Set up steps
• Setup takes about 5 minutes.
• Import the template into your n8n instance.
• Explore the node categories visually on the canvas.
• A Quick Start video is included in the download pack, along with a prompts PDF and a PDF linking to other awesome n8n templates here on the n8n template gallery.
by Polina Medvedieva
This workflow automates the process of discovering and extracting APIs from various services, followed by generating custom schemas. It works in three distinct stages: research, extraction, and schema generation, with each stage tracking progress in a Google Sheet.

🙏 Jim Le deserves major kudos for helping to build this sophisticated three-stage workflow, which cleverly automates API documentation processing using a smart combination of web scraping, vector search, and LLM technologies.

## How it works
**Stage 1 - Research:**
- Fetches pending services from a Google Sheet
- Uses Google search to find API documentation
- Employs Apify for web scraping to filter relevant pages
- Stores webpage contents and metadata in Qdrant (vector database)
- Updates progress status in the Google Sheet (pending, ok, or error)

**Stage 2 - Extraction:**
- Processes services that completed research successfully
- Queries the vector store to identify products and offerings
- Runs further queries for relevant API documentation
- Uses Gemini (LLM) to extract API operations
- Records extracted operations in the Google Sheet
- Updates progress status (pending, ok, or error)

**Stage 3 - Generation:**
- Takes services with successful extraction
- Retrieves all API operations from the database
- Combines and groups operations into a custom schema (a grouping sketch follows at the end of this description)
- Uploads the final schema to Google Drive
- Updates the final status in the sheet with the file location

## Ideal for
- Development teams needing to catalog multiple APIs
- API documentation initiatives
- Creating standardized API schema collections
- Automating API discovery and documentation

## Accounts required
- Google account (for Sheets and Drive access)
- Apify account (for web scraping)
- Qdrant database
- Gemini API access

## Set up instructions
1. Prepare your Google Sheets document with the services information. Here's an example Google Sheet; you can copy it and change or remove the values under the columns. Also make sure to update the Google Sheets nodes with the correct Google Sheet ID.
2. Configure Google Sheets OAuth2 credentials, the required third-party services (Apify, Qdrant), and Gemini.
3. Ensure proper permissions for Google Drive access.
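For Stage 3, grouping the extracted operations into one schema object per service could look like the sketch below. The input shape (service, product, operation, method, path, description) is an assumption about what Stage 2 records, not the template's exact columns.

```js
// n8n Code node sketch for Stage 3: group extracted API operations by product
// into a single schema object. Input field names are assumptions.
const ops = $input.all().map(i => i.json);
const byProduct = {};
for (const op of ops) {
  const key = op.product || 'default';
  (byProduct[key] ??= []).push({
    name: op.operation,
    method: op.method,
    path: op.path,
    description: op.description,
  });
}
return [{ json: { service: ops[0]?.service, products: byProduct } }];
```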
by Cheng Siong Chin
## How It Works
Automated daily scraping of competitor websites, pricing pages, job postings, and press releases feeds into an AI analyzer powered by OpenAI or Claude, which extracts trends, insights, and key metrics. The analyzed results are stored in Google Sheets for historical tracking and longitudinal comparison, and are automatically shared with stakeholders via email or WhatsApp. This workflow eliminates manual competitive research, enabling product teams and executives to monitor market movements efficiently and make timely, data-driven decisions (a prompt-assembly sketch follows at the end of this description).

## Setup Steps
1. Configure the daily schedule trigger.
2. Define competitor URLs using workflow variables.
3. Add and configure the OpenAI or Claude API key.
4. Authenticate Google Sheets using the appropriate credentials.
5. Connect Gmail and WhatsApp Business API tokens.
6. Test each scraper individually before enabling full automation.

## Prerequisites
OpenAI/Claude API account, Google Sheets access, Gmail account, WhatsApp Business API, web scraping enabled

## Use Cases
Market share monitoring, pricing strategy updates, competitor talent-movement tracking, industry trend reports

## Customization
Modify scrape targets, adjust the AI prompt for different metrics, change alert frequency

## Benefits
Real-time competitive visibility, 15+ hours/week of manual research saved, faster decision-making
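One way to feed the scraped pages into the analyzer is a Code node that assembles a single prompt before the OpenAI/Claude node. This is a sketch under assumptions: the `url`/`text` fields and the output keys (trends, insights, key_metrics) are illustrative, not the template's actual names.

```js
// n8n Code node sketch: build the analyzer prompt from the day's scraped
// pages. `url`/`text` are assumed scraper outputs; each page is truncated so
// the combined prompt stays within the model's context budget.
const pages = $input.all().map(i => i.json);
const context = pages
  .map(p => `SOURCE: ${p.url}\n${String(p.text || '').slice(0, 4000)}`)
  .join('\n\n---\n\n');
const prompt =
  `You are a competitive intelligence analyst. From the sources below, ` +
  `extract pricing changes, new job postings, and press announcements. ` +
  `Return JSON with keys: trends, insights, key_metrics.\n\n${context}`;
return [{ json: { prompt } }];
```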
by Cyril Nicko Gaspar
# 📌 HubSpot Lead Enrichment with Bright Data MCP

This template enables natural-language-driven automation using Bright Data's MCP tools, triggered directly by new leads in HubSpot. It dynamically selects and executes the right tool based on lead context, powered by AI and configurable in n8n.

## ❓ What Problem Does This Solve?
Manual lead enrichment is slow, inconsistent, and drains valuable time. This solution automates the process using a no-code workflow that connects HubSpot, Bright Data MCP, and an AI agent, without requiring scripts or technical skills. Perfect for marketing, sales, and RevOps teams.

## 🧰 Prerequisites
To use this template, you'll need:
- A self-hosted or cloud instance of n8n
- A Bright Data MCP API token
- A valid OpenAI API key (or compatible AI model)
- A HubSpot account
- Either a Private App token or OAuth credentials for HubSpot
- Basic familiarity with n8n workflows

## ⚙️ Setup Instructions

### 1. Set Up Authentication in HubSpot

🔐 **Option 1: Use a Private App Token (simple setup)**
1. Log in to your HubSpot account.
2. Navigate to Settings → Integrations → Private Apps.
3. Create a new Private App with the following scopes:
   - crm.objects.contacts.read
   - crm.objects.contacts.write
   - crm.schemas.contacts.read
   - crm.objects.companies.read (optional)
4. Copy the access token.
5. In n8n, create a HubSpot App Token credential and paste the token into the field.
6. Back in the HubSpot Private App settings, set up a webhook: copy the URL from your workflow's Webhook node and paste it there.

🔁 **Option 2: Use OAuth (advanced + secure)**
1. In HubSpot, go to Settings → Integrations → Apps → Create App.
2. Set your redirect URL to match your n8n OAuth2 redirect path.
3. Choose scopes like:
   - crm.objects.companies.read
   - crm.objects.contacts.read
   - crm.objects.deals.read
   - crm.schemas.companies.read
   - crm.schemas.contacts.read
   - crm.schemas.deals.read
   - crm.objects.contacts.write (conditionally required)
4. Note the Client ID and Client Secret.
5. Copy the App ID and the developer API key.
6. In n8n, create a HubSpot Developer API credential and paste in the information from the previous steps.
7. Attach these credentials to the HubSpot node in n8n.

### 2. Set Up and Obtain the API Token and Other Details from Bright Data
In your Bright Data account, obtain the following:
- API token
- Web Unlocker zone name (optional)
- Browser API username and password, as a colon-separated string (optional)

### 3. Host an SSE Server from the STDIO Command
The methods below let you receive Server-Sent Events (SSE) from Bright Data MCP via a local Supergateway or Smithery.

**Method 1: Run Supergateway in a separate web service (recommended)**
This method works for both the cloud version and self-hosted n8n. Sign up with any cloud service of your choice (DigitalOcean, Heroku, Hetzner, Render, etc.).

For NPM-based installation:
1. Create a new web service.
2. Choose Node.js as the runtime environment and set up a custom server without a repository.
3. In your server's settings (environment variables or a .env file), add:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_AUTH=optional_browser_auth
```

4. Paste the following as the start command:

```
npx -y supergateway --stdio "npx -y @brightdata/mcp" --port 8000 --baseUrl http://localhost:8000 --ssePath /sse --messagePath /message
```

5. Deploy it and copy the web server URL, then append /sse to it. Your SSE server should now be accessible at https://your_server_url/sse.

For Docker-based installation:
1. Create a new web service.
2. Choose Docker as the runtime environment.
3. Set up your Docker environment by pulling the necessary images or creating a custom Dockerfile.
4. In your server's settings (environment variables or a .env file), add:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_ZONE=optional_browser_zone_name
```

5. Use the following Docker command to run Supergateway:

```
docker run -it --rm -p 8000:8000 supercorp/supergateway \
  --stdio "npx -y @brightdata/mcp" \
  --port 8000
```

6. Deploy it and copy the web server URL, then append /sse to it. Your SSE server should now be accessible at https://your_server_url/sse.

For more installation guidance, please refer to https://github.com/supercorp-ai/supergateway.git.

**Method 2: Run Supergateway in the same web service as the n8n instance**
This method only works for self-hosted n8n.

a. Set the required environment variables. In your server's settings (environment variables or a .env file), add:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_ZONE=optional_browser_zone_name
```

b. Run Supergateway in the background:

```
npx -y supergateway --stdio "npx -y @brightdata/mcp" --port 8000 --baseUrl http://localhost:8000 --ssePath /sse --messagePath /message
```

Run the command above in the cloud shell or set it as a pre-deploy command. Your SSE server should now be accessible at http://localhost:8000/sse.

For more installation guidance, please refer to https://github.com/supercorp-ai/supergateway.git.

**Method 3: Configure via Smithery.ai (easiest)**
If you don't want additional setup and want to test right away, visit https://smithery.ai/server/@luminati-io/brightdata-mcp/tools to:
- Sign up (if you are new to Smithery)
- Create an API key
- Define environment variables via a profile
- Retrieve your SSE server HTTP URL

### 4. Connect Google Sheets to n8n
Ensure your Google Sheet:
- Contains columns like row_id, first_name, last_name, email, and status
- Is shared with your n8n service account (or connected via OAuth)

In n8n:
- Add a Google Sheets Trigger node
- Set it to watch for new rows in your lead sheet

### 5. Import and Configure the n8n Workflow
1. Import the provided JSON workflow into n8n.
2. Update the nodes with your credentials:
   - HubSpot: add your API key or connect via OAuth
   - Google Sheets Trigger: link to your actual sheet
   - OpenAI node: add your API key
   - Bright Data tool execution: add your Bright Data token and SSE URL

## 🔄 How It Works
1. A new contact appears in HubSpot, or a new row is added to the Google Sheet.
2. n8n triggers the workflow.
3. The AI agent classifies the task (e.g., "Find LinkedIn", "Get company info").
4. The relevant MCP tool is called.
5. Results are appended back to the sheet or routed to another destination.
6. Rerun a specific record by setting its status to "needs more enrichment" or leaving it blank (see the sketch at the end of this description).

## 🧩 Use Cases
- **B2B lead enrichment:** Add missing fields (title, domain, social profiles)
- **Email intelligence:** Validate and enrich based on email
- **Market research:** Pull company or contact data on demand
- **CRM auto-fill:** Push enriched leads to tools like HubSpot or Salesforce

## 🛠️ Customization
- **Prompt tuning:** Adjust how the AI interprets input data
- **Column mapping:** Customize which fields to pull from the sheet
- **Tool logic:** Add retries, fallback tools, or confidence-based routing
- **Destination output:** Integrate with CRMs, Slack, or webhook endpoints

## ✅ Summary
This template turns a Google Sheet into an AI-powered lead enrichment engine.
By combining Bright Data's tools with a natural-language AI agent, your team can automate repetitive tasks and scale lead operations without writing code. Just add a row, and let the workflow do the rest.
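The re-run behaviour described above could be implemented with a small filter before the enrichment agent. This is a sketch only: the `status` column name matches the sheet layout in step 4, but the exact trigger values are assumptions.

```js
// n8n Code node sketch: keep only rows whose status requests (more)
// enrichment. Column name and trigger values are assumptions.
return $input.all().filter(item => {
  const status = (item.json.status || '').trim().toLowerCase();
  return status === '' || status === 'needs more enrichment';
});
```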
by Hunyao
Note: This workflow assumes you already have your product's Amazon reviews saved in a Google Sheet. If you still need those reviews, run my Amazon Reviews Scraper workflow first, then plug the resulting spreadsheet into this template.

## What it does
Transforms any draft Google Doc into multiple high-converting sales pages. It blends Alex Hormozi's value-stacking tactics with persona targeting based on Maslow's hierarchy of needs, using your own customer reviews for proof and voice of customer (VOC).

## Perfect for
- Growth and creative strategists
- Freelance copywriters and agencies
- Founders sharpening offers and funnels

## Apps used
Google Sheets, Google Docs, LangChain OpenRouter LLM

## How it works
1. A Form Trigger collects Drive folder IDs, the base copy URL, and options.
2. The workflow fetches the draft copy and the product feature doc.
3. It samples reviews, extracts VOC insights, and maps them to Maslow needs.
4. The LLM drafts headlines and hooks following Hormozi's $100M Offers principles.
5. Personas drive tone, objections, and urgency in each copy variant.
6. A loop writes one Google Doc per variant in your chosen folder.
7. Customer analysis docs are saved to a second folder for reuse.

## Setup
1. Share two Drive folders and copy their IDs (the text after folders/ in the URL; a sketch for extracting it programmatically follows below).
2. Paste each ID into Customer Analysis Folder ID and Advertorial Copy Folder ID.
3. Provide File Name, Base copy (Google Docs URL), and Product Feature/USPs Doc.
4. Optional: Reviews Sheet URL, Number of reviews to use, Target City.
5. Set the Number of Copies you need (1-20).
6. Add Google Docs OAuth2 and Google Sheets OAuth2 credentials in n8n.

If you have any questions about running the workflow, feel free to reach out to me on my YouTube channel: https://www.youtube.com/@lifeofhunyao
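If you'd rather paste full Drive URLs than raw IDs, a tiny helper can pull out the folder ID (the text after folders/, as noted in setup step 1). The example URL below is a placeholder.

```js
// Sketch: extract a Google Drive folder ID from a pasted folder URL.
function driveFolderId(url) {
  const match = String(url).match(/folders\/([a-zA-Z0-9_-]+)/);
  return match ? match[1] : null;
}
console.log(driveFolderId('https://drive.google.com/drive/folders/1AbC_dEfG?usp=sharing'));
// -> "1AbC_dEfG"
```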