by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

**What you'll find:**

1️⃣ **Naive Sequential Chaining**: the simplest but least efficient approach, connecting LLM nodes in a direct sequence. Easy for beginners to set up, but it becomes unwieldy and slow as your chain grows.

2️⃣ **Agent-Based Processing with Memory**: process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.

3️⃣ **Parallel Processing for Maximum Speed**: split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context.

**Setup Instructions:**

- **API Credentials**: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can change the model in each Anthropic Chat Model node, or pick an entirely different LLM.
- **For Cloud Users**: If you use the parallel processing method (section 3), replace `{{ $env.WEBHOOK_URL }}` in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
- **Test Data**: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
- **Customization**: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows! Follow me on LinkedIn for more tips on AI automation and n8n workflows!
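As a taste of why the parallel approach wins on latency, here is a minimal sketch (plain TypeScript, outside n8n) of what section 3's fan-out amounts to; the model ID and request shape follow Anthropic's public Messages API, and the prompts are placeholders:

```typescript
// Standalone sketch of the parallel fan-out pattern from section 3:
// each prompt becomes an independent Anthropic Messages API call, and
// Promise.all runs them concurrently. Assumes ANTHROPIC_API_KEY is set.
const prompts = [
  "Summarize the article in one sentence.",
  "List three key takeaways.",
  "Suggest a catchy title.",
];

async function callClaude(prompt: string): Promise<string> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY!,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-7-sonnet-20250219",
      max_tokens: 1024,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.content[0].text;
}

// All requests are in flight at once, so total latency is roughly that
// of the slowest single call, not the sum of all calls.
const results = await Promise.all(prompts.map(callClaude));
console.log(results);
```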
by Elay Guez
**Daily Economic News Brief for Israel (Hebrew, RTL, GPT-4o)**

**Overview**

Stay ahead of the curve with this AI-powered workflow that delivers a daily economic summary tailored for professionals tracking the Israeli economy. At 8:00 PM Israel Time, this workflow:

- Retrieves the latest articles from Calcalist and Mako via RSS
- Filters duplicates and irrelevant stories
- Uses OpenAI's GPT-4o to identify the 5 most important stories of the day
- Summarizes each article in concise, readable Hebrew
- Generates a fully styled, responsive HTML email (with proper RTL layout)
- Sends it to your inbox using your preferred SMTP email provider

Perfect for economists, analysts, investors, or policymakers who want an actionable and personalized news digest: no distractions, no fluff.

**Setup Instructions**

Estimated setup time: 10 minutes

Required credentials:
- OpenAI API Key
- SMTP credentials (for email delivery)

Steps:
1. Import this template into your n8n instance.
2. Add your OpenAI API Key under credentials.
3. Configure the SMTP Email node with: Host (e.g. smtp.gmail.com), Port (465 or 587), Username (your email), and Password (app-specific password or login).
4. Set your target email address in the last node.
5. (Optional) Customize the GPT prompt to adjust tone or audience (e.g. general public, policy makers).
6. Activate the workflow and receive daily updates straight to your inbox.

**Customization Tips**

- Change the RSS sources to pull from other Hebrew or international news websites
- Modify the summarization prompt to fit different sectors (e.g. tech, health, politics)
- Add integrations like Notion, Airtable, or Telegram for logging or distribution
- Apply your branding to the HTML output (logos, footer, colors)

**Why Use This?**

This is more than a news digest. It's an intelligent economic assistant that filters noise, highlights what matters, and keeps you informed automatically. You can set it up in 10 minutes and benefit every single day.
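For the duplicate-filtering step, an n8n Code node along the following lines is one way to do it (a sketch; the `link`/`title` field names follow the RSS Read node's usual output). For the RTL layout itself, the essential piece is `dir="rtl"` and `lang="he"` on the email's root HTML element.

```typescript
// Sketch of a dedup step for an n8n Code node: drops articles whose
// link (or, failing that, title) has already been seen in this run.
const seen = new Set();
const unique = [];

for (const item of $input.all()) {
  const key = String(item.json.link ?? item.json.title ?? "")
    .trim()
    .toLowerCase();
  if (key && !seen.has(key)) {
    seen.add(key);
    unique.push(item);
  }
}

return unique;
```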
by PollupAI
**Who is this for?**

This workflow is ideal for individuals focused on nutrition tracking, meal planning, or diet optimization, whether you're a health-conscious individual, a fitness coach, or a developer working on a healthtech app. It also fits well for anyone who wants to capture their meal data via voice or text, without manually entering everything into a spreadsheet.

**What problem is this workflow solving?**

Manually logging meals and breaking down their nutritional content is time-consuming and often skipped. This workflow automates that process using Telegram for input, OpenAI for natural language understanding, and Google Sheets for structured tracking. It enables users to record meals by typing or sending voice messages, which are transcribed, analyzed for nutrients, and automatically stored for tracking and review.

**What this workflow does**

This n8n automation lets users send either a text or voice message to a Telegram bot describing their meal. The workflow then:

1. Receives the Telegram message
2. Checks if it's a voice message
   - If yes: downloads the audio file and transcribes it using OpenAI
   - If no: uses the text input directly
3. Sends the meal description to OpenAI to extract a structured list of ingredients and nutritional details
4. Parses and stores the results in Google Sheets
5. Responds via Telegram with a personalized confirmation message

A testing interface also allows you to simulate prompts and view structured outputs for development or debugging.

**Setup**

1. Create a Telegram bot via BotFather and note the API token.
2. Create an empty Google Sheet and store the sheet ID in the environment.
3. Set up your OpenAI credentials in the n8n credential manager.
4. Customize the "List of Ingredients and Nutrients" node with your prompt if needed.
5. (Optional) Use the "Testing" section to simulate messages and refine outputs before going live.

**How to customize this workflow to your needs**

- Enhance prompts in the OpenAI node to improve the structure and accuracy of responses.
- Add new fields in the Google Sheet, and corresponding logic in the parser, if you want more detail.
- Adjust the Telegram response to provide motivational feedback, dietary tips, or summaries.
- Upgrade to the "Pro" version mentioned in the contact section for USDA database integration and complete nutrient breakdowns.

This is a lightweight, AI-powered meal logging automation that transforms voice or text into actionable nutrition data, making healthy eating easier and more data-driven. See my other workflows here
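To make the parsing step concrete, here is a sketch of what the Code node between OpenAI and Google Sheets might look like. The JSON schema (`meal`, `ingredients` with per-ingredient macros) is an assumption set entirely by your prompt, not a fixed contract of the workflow:

```typescript
// Sketch of the parsing step for an n8n Code node. Assumes the OpenAI
// node was prompted to return JSON shaped like:
// {"meal": "...", "ingredients": [{"name", "calories", "protein_g", ...}]}
const raw =
  $input.first().json.message?.content ?? $input.first().json.text;
const parsed = JSON.parse(raw);

// Emit one item per ingredient, i.e. one spreadsheet row each.
return parsed.ingredients.map((ing) => ({
  json: {
    meal: parsed.meal,
    ingredient: ing.name,
    calories: ing.calories,
    protein_g: ing.protein_g,
    carbs_g: ing.carbs_g,
    fat_g: ing.fat_g,
    logged_at: new Date().toISOString(),
  },
}));
```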
by Yaron Been
🔍 **Competitor Review Scraper & Ad Copy Generator (Trustpilot + Bright Data + GPT-4o-mini)**

📌 **Who It's For**

Marketers, business owners, and agencies looking to:
- Analyze competitor pain points
- Generate high-impact Facebook ad copy
- Automate manual data processing

🧩 **How It Works**

This n8n-based workflow combines Bright Data, Google Sheets, and OpenAI to scrape, process, and transform Trustpilot reviews into ready-to-use ad copy.

🔹 **Step-by-Step Breakdown**

1. **Trigger (Manual Form Submission)**: Inputs required: the competitor's Trustpilot URL and a review timeframe (30d, 3m, 6m, 12m).
2. **Fetch Reviews**: Calls Bright Data's Dataset API with the URL and timeframe, and polls until the snapshot is ready.
3. **Retrieve & Store**: Extracts all reviews and saves them into a structured Google Sheet.
4. **Filter & Aggregate**: Filters to only 1–2 star reviews and summarizes common negative feedback.
5. **Generate Ad Copy**: Sends the summary to OpenAI GPT-4o-mini, which produces 3 variations of ad copy targeting the pain points.
6. **Distribute Insights**: Sends the ad copy and summary via email to the marketing team.

✅ **Requirements**

- OpenAI account (for GPT-4o-mini)
- Google Sheets: copy this sheet: https://docs.google.com/spreadsheets/d/1Zi758ds2_aWzvbDYqwuGiQNaurLgs-leS9wjLWWlbUU/edit?gid=0#gid=0
- Bright Data account

⚙️ **Setup Instructions**

**Step 1: Google Sheets**
- Copy the Google Sheets template above
- Do not change the column headers

**Step 2: n8n Credential Setup**
- Google Sheets: OAuth2
- Bright Data: Authorization header
- OpenAI: API key for GPT-4o-mini

**Step 3: Import Workflow**
- Import the .json file into n8n
- Configure your sheet + dataset ID
- Adjust GPT prompts as needed

**Step 4: Run the Workflow**
- Trigger via form
- Receive ad copy + review insights via email

🧠 **Tips & Best Practices**

- Bright Data snapshots may take time; polling is handled for you
- Focusing on 1–2 star reviews yields the most actionable pain points
- You can customize the GPT-4o-mini prompts for tone or vertical

💬 **Support & Feedback**

Need help or customization?
📧 Email: Yaron@nofluff.online
📺 YouTube: @YaronBeen
🔗 LinkedIn: linkedin.com/in/yaronbeen
📚 Bright Data Docs: docs.brightdata.com/introduction
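The filter-and-aggregate step (step 4 above) reduces to a few lines in a Code node. A sketch, with `rating` and `review_text` as assumed field names to be matched against your snapshot columns:

```typescript
// Sketch of the filter-and-aggregate step for an n8n Code node.
// Field names `rating` and `review_text` are assumptions; align them
// with the columns of your Bright Data snapshot / Google Sheet.
const negatives = $input
  .all()
  .filter((item) => Number(item.json.rating) <= 2);

// One combined blob of negative feedback for the GPT-4o-mini prompt.
const combined = negatives
  .map((item, i) => `${i + 1}. (${item.json.rating} stars) ${item.json.review_text}`)
  .join("\n");

return [{ json: { review_count: negatives.length, reviews: combined } }];
```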
by Francis Njenga
**Detailed Description**

The ToDo App workflow is designed to streamline task management through Telegram and Google Tasks integration. It allows users to create, update, and manage tasks via Telegram messages, leveraging AI capabilities to enhance user interaction. The expected outcome is a seamless experience where users can manage their tasks efficiently without needing to switch between applications.

**Who is this for?**

This workflow is intended for:
- **Individuals** looking for an efficient way to manage their tasks directly from Telegram.
- **Teams** that require a collaborative task management solution integrated with Google Tasks.
- **Developers** interested in automating task management processes using n8n and Telegram.

**What problem does this workflow solve?**

Managing tasks can often be cumbersome, especially when switching between different applications. This workflow addresses the following problems:
- **Fragmented Task Management**: Users can manage tasks directly from Telegram, reducing the need to switch to Google Tasks.
- **Inefficient Communication**: By integrating AI, users can interact with the task management system in a conversational manner, making it more intuitive.
- **Task Updates**: Users can easily update task statuses and details through simple messages, enhancing productivity.

**What this workflow does**

The ToDo App workflow performs the following functions:
1. **Incoming Message Handling**: Listens for messages sent to a Telegram bot.
2. **Task Creation**: Allows users to create new tasks based on their messages.
3. **Task Updates**: Users can update existing tasks by sending specific commands.
4. **Task Retrieval**: Retrieves today's and upcoming tasks from Google Tasks.
5. **Voice Note Transcription**: Supports voice messages, converting them into text for task management.
6. **AI Assistance**: Utilizes an AI agent to assist users in managing their tasks effectively.

**Setup**

Prerequisites. Before setting up the workflow, ensure you have the following:
- **n8n Account**: Sign up for an n8n account if you don't have one.
- **Telegram Bot**: Create a Telegram bot and obtain the API token.
- **Google Tasks API**: Set up the Google Tasks API and obtain OAuth2 credentials.
- **OpenAI API Key**: Sign up for OpenAI and obtain an API key for the AI functionalities.

Setup process: upload the JSON for this workflow and set up authentication for the different tools.

**How to customize this workflow**

To adapt the ToDo App workflow to different needs, consider the following customizations:
- **Change Task Management Platform**: If you prefer a different task management tool, replace the Google Tasks nodes with your preferred service's API.
- **Modify AI Responses**: Adjust the AI agent's system message to change how it interacts with users.
- **Add Additional Commands**: Expand the workflow with more commands for different task management functionalities (e.g., deleting tasks).
- **Integrate Other Messaging Platforms**: To use a different messaging service, replace the Telegram nodes with the appropriate nodes for that service.

**Conclusion**

The ToDo App workflow provides a powerful solution for managing tasks through Telegram, enhancing productivity and user experience. By following the setup instructions and customization options, users can tailor the workflow to meet their specific needs, making task management more efficient and accessible.
by Jimleuk
This n8n template demonstrates how to build your own Qdrant MCP server to extend its functionality beyond that of the official implementation. This implementation exposes other cool API features from Qdrant, such as the facet search, grouped search, and recommendations APIs. With this, we can build an easily customisable and maintainable Qdrant MCP server for business intelligence.

This MCP example is based off an official MCP reference implementation, which can be found here: https://github.com/qdrant/mcp-server-qdrant

**How it works**

- An MCP server trigger is used and connected to 5 custom workflow tools. We're using custom workflow tools as there are quite a few nodes required for each task.
- We use a mix of n8n-supported Qdrant nodes for simple operations such as inserting documents and similarity search, and the HTTP node to hit the Qdrant API directly for facet search, group search, and recommendations.
- We use "Edit Field" and "Aggregate" nodes to return suitable responses to the MCP client.

**How to use**

This Qdrant MCP server allows any compatible MCP client to manage a Qdrant collection by supporting select and create operations. You will need to have a collection available before you can use this server; use the prerequisite manual steps to get started!

Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop

Try the following queries in your MCP client:
- "Can you help me list the available companies in the collection?"
- "What do customers say about product deliveries from company X?"
- "What do customers of company X and company Y say about product ease of use?"

**Requirements**

- Qdrant for the vector store. This can be a cloud-hosted instance or one you self-host internally.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**

- Depending on what queries you'll receive, adjust the tool inputs to make it easier for the agent to set the right parameters.
- Not interested in reviews? The techniques shared in this template can be used for other types of collections.
- Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
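For context, here is a hedged sketch of two of the raw Qdrant endpoints the HTTP nodes call; the collection name `reviews` and the payload field `metadata.company` are placeholders for your own schema:

```typescript
// Hedged sketch of two raw Qdrant REST calls behind the HTTP nodes.
const QDRANT_URL = "http://localhost:6333";
const headers = { "content-type": "application/json" }; // add "api-key" for cloud

// Facet search (Qdrant 1.12+): count distinct values of a payload field,
// e.g. to answer "list the available companies in the collection".
const facets = await fetch(`${QDRANT_URL}/collections/reviews/facet`, {
  method: "POST",
  headers,
  body: JSON.stringify({ key: "metadata.company", limit: 20 }),
}).then((r) => r.json());

// Recommendation API: "more points like these, fewer like those",
// using existing point IDs as positive/negative examples.
const recs = await fetch(`${QDRANT_URL}/collections/reviews/points/recommend`, {
  method: "POST",
  headers,
  body: JSON.stringify({ positive: [1, 5], negative: [7], limit: 5 }),
}).then((r) => r.json());

console.log(facets.result, recs.result);
```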
by Samir Saci
Tags: EU Legislation, Sustainability, Automation, Web Scraping, OpenAI, Google Sheets, Policy Monitoring, Climate

**Context**

Hey! I'm Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. We use AI, automation, and data to support sustainable business practices for small, medium, and large companies. This workflow is part of our broader initiative to monitor and act on sustainability legislation in Europe.

> How do you know if new EU laws will impact your business's sustainability goals?

This n8n workflow automatically scrapes the EU Parliament's legislative portal to find and flag procedures related to environmental sustainability.

📬 For business inquiries, feel free to connect with me on LinkedIn

**Who is this template for?**

This workflow is useful for:
- **Sustainability consultants** monitoring legal frameworks
- **NGOs and researchers** tracking environmental regulations
- **Companies** aligning with **CSRD** or **EU Green Deal** objectives
- **Policy analysts** looking for automation tools

**What does it do?**

This n8n workflow:
- 🌐 Scrapes the EU Parliament legislative portal for yesterday's entries
- 🧠 Uses OpenAI to classify whether each procedure is related to sustainability
- 🗂️ Filters out irrelevant items
- 📊 Saves the results in a Google Sheet
- ✅ Creates a Google Task for each relevant file to review the legislation

**How it works**

1. Trigger manually or on a schedule
2. Scrape HTML blocks for scheduled debates
3. Parse each procedure to extract the title, committee, rapporteur, and PDF link
4. Call GPT-4-turbo to check whether the topic matches the sustainability criteria
5. Filter responses based on "yes" or "no"
6. Store valid items in Google Sheets
7. Generate tasks in Google Tasks

The AI only flags procedures that directly impact the environment, circular economy, or pollution control.

**What do I need to get started?**

You'll need:
- A Google Sheet connected to your n8n instance
- An OpenAI account with GPT-4 access
- A Google Tasks list

**Follow the Guide!**

Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to monitor sustainability regulations in Europe.

🎥 Watch My Tutorial

**Notes**

- The AI filters are strict; you can customise the system prompt to match your needs
- This is ideal for tracking legislative risk for climate regulations
- This workflow was built using n8n version 1.85.4
- Submitted: April 21, 2025
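The yes/no gate in step 5 is tiny in practice. A sketch for an n8n Code node, assuming the classification lands in a field named `classification` (rename to match your node output):

```typescript
// Sketch of the "yes"/"no" gate. Assumes the classification prompt
// instructs GPT-4-turbo to answer strictly "yes" or "no", and that the
// verdict is stored in a field named `classification`.
return $input.all().filter((item) =>
  String(item.json.classification ?? "")
    .trim()
    .toLowerCase()
    .startsWith("yes")
);
```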
by Alfonso Corretti
**Who is this for?**

Everyone! Did you dream of asking an AI "what hotel did I stay in for holidays last summer?" or "what were my marks last semester like?" Dream no more, as vector similarity searches and this workflow are the foundations to make it possible (as long as the information appears in your e-mails 😅).

**100% Local and Open Source!**

This workflow is designed to run on locally-hosted open-source software: Ollama as the LLM provider, nomic-embed-text as the embeddings model, and pgvector as the vector database engine, on top of Postgres.

**Structured AND Vectorized**

This workflow combines structured and semantic search on your e-mail. No need for enterprise setups! Leverage the convenience of n8n and open source to get a bleeding-edge solution.

**Setup**

1. You will need a pgvector database with embeddings for all your email. Use my other template, Gmail to Vector Embeddings with PGVector and Ollama, to set it up in a breeze!
2. Make a copy of my Email Assistant: Convert Natural Language to SQL Queries with Phi4-mini and PostgreSQL; you will need it for structured searches.
3. Install this template and modify the "Call the SQL composer Workflow" step to point at your copy of the SQL workflow.
4. Adjust the rest of the necessary steps: Telegram Trigger, AI Chat model, AI Embeddings...
5. Activate the workflow and chat away!
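Under the hood, the semantic half of the search boils down to two calls: embed the question with Ollama, then rank e-mails by cosine distance in pgvector. A standalone sketch, with assumed table and column names that should mirror whatever the companion embedding template created:

```typescript
// Standalone sketch: semantic e-mail search with Ollama + pgvector.
// Table and column names are assumptions for illustration.
import { Client } from "pg";

const question = "what hotel did I stay in for holidays last summer?";

// Embed the question locally with Ollama's nomic-embed-text.
const emb = await fetch("http://localhost:11434/api/embeddings", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ model: "nomic-embed-text", prompt: question }),
}).then((r) => r.json());

const db = new Client({ connectionString: process.env.DATABASE_URL });
await db.connect();

// `<=>` is pgvector's cosine distance operator; smallest = most similar.
const { rows } = await db.query(
  `SELECT subject, snippet
     FROM email_embeddings
    ORDER BY embedding <=> $1::vector
    LIMIT 5`,
  [JSON.stringify(emb.embedding)]
);
console.log(rows);
await db.end();
```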
by Ranjan Dailata
**Who is this for?**

Indeed Data Scraper & Summarization with Airtable, Bright Data and Google Gemini is an automated workflow that extracts company profile information from Indeed using Bright Data Web Unlocker, transforms the data using Google Gemini's LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.

This workflow is tailored for:
- Recruiters and HR teams who want quick summaries of companies listed on Indeed.
- Market researchers and analysts needing structured insights into businesses.
- Founders, investors, and consultants scouting potential competitors, partners, or clients.
- No-code enthusiasts looking to automate data extraction and enrichment pipelines without manual scraping or parsing.

**What problem is this workflow solving?**

Manually gathering structured information about companies on Indeed is time-consuming and inconsistent. Pages vary in structure, and extracting clean, digestible summaries can require technical scraping expertise. This workflow automates:
- Extracting company data from Indeed reliably using Bright Data Web Unlocker.
- Cleaning and summarizing the extracted content using the Google Gemini LLM.
- Storing structured insights directly in Airtable for easy access and further workflows.

It eliminates manual research, saves hours, and produces AI-enhanced, easily searchable records.

**What this workflow does**

1. Triggers on demand.
2. Pulls company page URLs from Airtable.
3. Scrapes content from each Indeed company profile using Bright Data Web Unlocker.
4. Sends the raw HTML to Google Gemini for extraction and summarization.
5. Sends the summarized data to other platforms via a webhook notification mechanism.

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials for Bright Data. The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. In n8n, configure an Airtable Personal Access Token account under Credentials.
6. Update the Webhook Notifier with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

This workflow is built to be flexible, whether you're a company, a market researcher, an entrepreneur, or a data analyst. Here's how you can adapt it to fit your specific use case:
- **Extend the scraper**: Modify the Bright Data targets to pull job listings, salaries, or employee reviews via the Airtable data source.
- **Customize the summary prompt**: Ask Gemini to extract different attributes, such as hiring trends or practices.
- **Route the output to different destinations**: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
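For reference, the scraping call behind the HTTP Request node looks roughly like the sketch below; the endpoint and body shape follow Bright Data's /request API for Web Unlocker as I understand it, and the zone name and company URL are placeholders:

```typescript
// Hedged sketch of the Web Unlocker call behind the HTTP Request node.
// The token comes from the Header Auth credential set up above.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`,
    "content-type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                     // your Web Unlocker zone
    url: "https://www.indeed.com/cmp/Example", // company URL from Airtable
    format: "raw",                             // return the raw HTML
  }),
});
const html = await res.text(); // hand this off to Gemini for summarization
```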
by Ranjan Dailata
**Disclaimer**

This template is only available on n8n self-hosted as it makes use of the community node for the MCP Client.

**Who is this for?**

The Extract, Transform LinkedIn Data with Bright Data MCP Server & Google Gemini workflow is an automated solution that scrapes LinkedIn content via the Bright Data MCP Server and then transforms the response using a Gemini LLM. The final output is sent via webhook notification and also persisted on disk.

This workflow is tailored for:
- Data analysts who require structured LinkedIn datasets for analytics and reporting.
- Marketing and sales teams looking to enrich lead databases, track company updates, and identify market trends.
- Recruiters and talent acquisition specialists who want to automate candidate sourcing and company research.
- AI developers integrating real-time professional data into intelligent applications.
- Business intelligence teams needing current and comprehensive LinkedIn data to drive strategic decisions.

**What problem is this workflow solving?**

Gathering structured and meaningful information from the web is traditionally slow, manual, and error-prone. This workflow solves that with:
- Reliable web scraping using the Bright Data MCP Server's LinkedIn tools.
- LinkedIn person and company scraping via AI agents set up with the Bright Data MCP Server tools.
- Data extraction and transformation with the Google Gemini LLM.
- Persistence of the LinkedIn person and company info to disk.
- A webhook notification carrying the LinkedIn person and company info.

**What this workflow does**

1. Trigger: start manually.
2. Input URL(s): specify the LinkedIn person and company URLs.
3. Web scraping (Bright Data): use Bright Data's MCP Server LinkedIn tools to extract the person and company data.
4. Data transformation & aggregation: use the Google LLM to handle the data transformation.
5. Store / output: save the results to disk and also send a webhook notification.

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and must do the necessary setup as described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone called mcp_unlocker by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>.
7. Update the LinkedIn person and company URLs in the workflow.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path used to persist the output on disk.

**How to customize this workflow to your needs**

- **Different inputs**: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- **Data extraction**: Modify the LinkedIn Data Extractor node with a suitable prompt to format the data as you wish.
- **Outputs**: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
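For orientation, an MCP client configuration for this server typically takes the following shape (as documented for @brightdata/mcp; in n8n's MCP Client (STDIO) credentials the same three pieces map to the Command, Arguments, and Environments fields, and the token value is a placeholder):

```json
{
  "mcpServers": {
    "bright-data": {
      "command": "npx",
      "args": ["@brightdata/mcp"],
      "env": {
        "API_TOKEN": "<your-bright-data-api-token>"
      }
    }
  }
}
```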
by Ranjan Dailata
**Disclaimer**

This template is only available on n8n self-hosted as it makes use of the community node for the MCP Client.

**Who is this for?**

The Scrape Web Data with Bright Data and MCP Automated AI Agent workflow is built for professionals who need to automate large-scale, intelligent data extraction by utilizing the Bright Data MCP Server and Google Gemini. This solution is ideal for:
- Data analysts who require structured, enriched datasets for analysis and reporting.
- Marketing researchers seeking fresh market intelligence from dynamic web sources.
- Product managers who want competitive product and feature insights from various websites.
- AI developers aiming to feed web data into downstream machine learning models.
- Growth hackers looking for high-quality data to fuel campaigns, research, or strategic targeting.

**What problem is this workflow solving?**

Manually scraping websites, cleaning raw HTML data, and generating useful insights from it is slow, error-prone, and hard to scale. This workflow solves these problems by:
- Automating complex web data extraction through Bright Data's MCP Server.
- Reducing the human effort needed for cleaning, parsing, and analyzing unstructured web content.
- Allowing seamless integration into further automation processes.

**What this workflow does**

1. Trigger: start manually.
2. Input URL(s): specify the URL to scrape.
3. Web scraping (Bright Data): use Bright Data's MCP Server tools to scrape the web data in markdown and HTML formats.
4. Store / output: save the results to disk and also send a webhook notification.

**Setup**

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone called mcp_unlocker by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>.
7. Update the target URL(s) in the workflow.
8. Update the Webhook HTTP Request node with the webhook endpoint of your choice.
9. Update the file name and path used to persist the output on disk.

**How to customize this workflow to your needs**

- **Different inputs**: Instead of static URLs, accept URLs dynamically via webhook or form submissions.
- **Outputs**: Update the webhook endpoints to send the response to Slack channels, Airtable, Notion, CRM systems, etc.
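As an illustration of the final store/notify step, here is a standalone TypeScript sketch (the file path and webhook URL are placeholders for your own settings):

```typescript
// Standalone sketch of the final store/notify fan-out: write the
// scraped markdown to disk, then fire a webhook notification.
import { writeFile } from "node:fs/promises";

async function persistAndNotify(markdown: string): Promise<void> {
  const file = `/tmp/scrape-${Date.now()}.md`;
  await writeFile(file, markdown, "utf8");

  await fetch("https://example.com/webhook/scrape-done", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ file, chars: markdown.length }),
  });
}
```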
by Ranjan Dailata
**Disclaimer**

This template is only available on n8n self-hosted as it makes use of the community node for the MCP Client.

**Who is this for?**

The Chat Conversations with Bright Data MCP Search Engines & Google Gemini workflow is designed for users who need real-time, AI-enhanced conversations powered by live search engine results. This workflow is tailored for:
- Data analysts who want live, search-based data fused with AI reasoning.
- Marketing researchers seeking up-to-the-minute market or competitor insights via conversational AI.
- Product managers exploring user needs, market trends, and competitor analysis in real time.
- AI developers building dynamic applications that combine live search data with intelligent conversation agents.
- Growth hackers who need fast, conversational research tools for campaign ideation, outreach, or content creation.

**What problem is this workflow solving?**

Traditional chatbots and AI systems often rely on static, outdated data. This workflow enables AI agents to fetch live search engine data and converse intelligently about it, making interactions dynamic, accurate, and highly contextual. It closes the major gaps of:
- Outdated knowledge: regular chatbots lack up-to-date information from live web searches.
- Manual search fatigue: manually searching for information and interpreting it is time-consuming.
- Context bridging: connecting search results into meaningful, conversational replies requires human-level reasoning.

**What this workflow does**

1. Accepts a user's conversational query input.
2. Triggers a search request to Bright Data's MCP Search Engines API (Google, Bing, etc.) based on the query.
3. Waits for the search task to complete.
4. Retrieves real-time search results.
5. Feeds the search results and original question into Google Gemini.
6. Generates a human-like, contextually accurate AI response combining live information and conversational flow.
7. Outputs the response back into a chat app.

**Pre-conditions**

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and must do the necessary setup as described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine. Also complete the "Account Setup" described at the @brightdata/mcp URL.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data Web Unlocker API token into the Environments textbox as API_TOKEN=<your-token>.
7. Update the HTTP Request for Webhook Notification node to send the webhook notification for chat responses.

**How to customize this workflow to your needs**

- **Change search engines**: Add or remove the search engine MCP tools based on the Bright Data MCP Server updates.
- **Expand outputs**: Send AI chat responses to Slack, Discord, custom chat UIs, WhatsApp, or CRM systems.
- **Keep an audit trail**: Store conversation logs in a database (PostgreSQL, MongoDB, etc.) for future audits or training, as sketched below.
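A sketch of that optional audit-log customization, with an assumed `chat_log` table and column layout:

```typescript
// Sketch of the optional audit log: append each chat turn to Postgres.
// Table and column names are assumptions for illustration.
import { Client } from "pg";

async function logTurn(sessionId: string, question: string, answer: string) {
  const db = new Client({ connectionString: process.env.DATABASE_URL });
  await db.connect();
  await db.query(
    `INSERT INTO chat_log (session_id, question, answer, created_at)
     VALUES ($1, $2, $3, NOW())`,
    [sessionId, question, answer]
  );
  await db.end();
}
```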