by Jimleuk
This n8n template demonstrates how to build a simple YouTube MCP server to look up videos on YouTube and download their transcripts for research purposes. YouTube videos are a great source of new and updated information on a variety of cutting-edge developments, but they're not always simple to understand and lengthy videos may take too much time. Using this MCP server, you can extract transcripts and feed them to your AI agents, which can then break the content down into manageable learnings and insights.

How it works
A MCP server trigger is used and connected to 3 custom workflow tools: Youtube Search, Youtube Transcripts and Usage Reports.
Both YouTube tools use an external scraping service called APIFY.com. This is my preference as it's a much simpler interface and there are no rate limits.
The Youtube Search tool fetches 10 results based on the user's query.
The Youtube Transcripts tool downloads the subtitles from one or more given URLs.
The Usage Reports tool pulls in your monthly APIFY.com spending and limits as a way to check your account.

How to use
This Apify YouTube MCP server allows any compatible MCP client to research YouTube videos for any desired topic. An Apify account is, however, required to connect and use the service.
Connect your MCP client by following the n8n guidelines here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
Alternatively, connect any n8n AI agent with the MCP Client tool.
Try the following queries in your MCP client:
"What is MCP?"
"How can I use MCP in n8n?"
"How can I use Apify's official MCP server?"

Requirements
APIFY.com for YouTube scraping. This is a paid service but there is a $5 free tier which is ample for this template.
MCP client or agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow
Add as many APIFY.com actors as required for your use case or users. Consider using Apify's official MCP server for 4000+ available tools.
Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
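As a rough illustration of what one of the custom workflow tools does under the hood, here is a minimal sketch of running an Apify actor synchronously and reading back its dataset items. The actor ID and input fields below are placeholders, not values taken from this template; check the actor's page on apify.com for its real input schema.

```javascript
// Minimal sketch: run an Apify actor synchronously and return its dataset items.
// The actor ID and input fields are placeholders - adjust them to the actor you use.
const APIFY_TOKEN = 'YOUR_APIFY_TOKEN';
const actorId = 'some-author~youtube-scraper'; // hypothetical actor ID

async function searchYoutube(query) {
  const url = `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items?token=${APIFY_TOKEN}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ searchQueries: [query], maxResults: 10 }), // hypothetical input schema
  });
  if (!response.ok) throw new Error(`Apify request failed: ${response.status}`);
  return response.json(); // array of result items for the MCP tool to return
}
```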
by Samir Saci
Tags: Accessibility, SEO, Blogging, Marketing, Automation, AI, Web Auditing

Context
Hey! I’m Samir, a Supply Chain Engineer and Data Scientist from Paris, and the founder of LogiGreen Consulting. In my personal blog, I share insights on how to use AI, automation, and data analytics to improve logistics, operations, and digital sustainability practices.
> Have you heard about accessibility?
In this workflow, I use n8n to improve the quality of alternative texts for images on my personal website.
📬 For business inquiries, you can connect with me on LinkedIn

Who is this template for?
This workflow is for:
Bloggers and website owners who want to improve accessibility
SEO professionals looking to boost page performance
Web developers and product teams automating web audits

What does it do?
This n8n workflow:
🔍 Downloads the HTML of a blog or web page
🖼️ Extracts all <img> tags and their alt attributes
📉 Detects missing or too-short alt texts
🤖 Sends those images to GPT-4o (with vision) to generate new alt descriptions
📄 Saves the results into a Google Sheet, updating the alt text when needed

How it works
Set a page URL using the Set node
Download the HTML content
Extract image src and alt using a Code node (see the sketch below)
Store results in a Google Sheet
Filter images with altLength < 50
Send the image URL to GPT-4o
Update the Google Sheet with the newly generated newAlt text
The AI alt texts are concise, descriptive, and accessibility-compliant.

What do I need to get started?
You’ll need:
A Google Sheet to store the audit results
An OpenAI account with GPT-4o access

Follow the Guide!
Follow the sticky notes in the workflow or check my tutorial to configure each node and start using AI to improve the accessibility of your website.
🎥 Watch My Tutorial

Notes
GPT-generated alt texts are limited to ~125–150 characters for best results
Use this to comply with WCAG and improve Google indexing
Easily adapt it to audit multiple domains or e-commerce catalogues
This workflow was built using n8n version 1.85.4
Submitted: April 21, 2025
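A minimal sketch of what the extraction Code node might look like. It assumes the previous node put the raw page HTML into a field called data; that field name is an assumption, so adjust it to match your HTTP Request node's output.

```javascript
// n8n Code node sketch: extract src and alt from every <img> tag in the HTML.
// Assumes the HTML is in item.json.data - change the field name if needed.
const html = $input.first().json.data;

const results = [];
const imgRegex = /<img\b[^>]*>/gi;

for (const tag of html.match(imgRegex) || []) {
  const src = (tag.match(/src\s*=\s*["']([^"']*)["']/i) || [])[1] || '';
  const alt = (tag.match(/alt\s*=\s*["']([^"']*)["']/i) || [])[1] || '';
  results.push({ json: { src, alt, altLength: alt.length } });
}

return results;
```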
by Alex Kim
Automatically convert documents from Google Drive into vector embeddings using OpenAI, LangChain, and PGVector — fully automated through n8n.

⚙️ What It Does
This workflow monitors a Google Drive folder for new files, supports multiple file types (PDF, TXT, JSON), and processes them into vector embeddings using OpenAI’s text-embedding-3-small model. These embeddings are stored in a Postgres database using the PGVector extension, making them query-ready for semantic search or RAG-based AI agents. After successful processing, files are moved to a separate “vectorized” folder to avoid duplication.

💡 Use Cases
Powering Retrieval-Augmented Generation (RAG) AI agents
Semantic search across private documents
AI assistant knowledge ingestion
Automated document pipelines for indexing or classification

🧠 Workflow Highlights
Trigger Options: Manual or Scheduled (3 AM daily by default)
Supported File Types: PDF, TXT, JSON
Embedding Stack: LangChain Text Splitter, OpenAI Embeddings, PGVector
Deduplication: Files are moved after processing
License: CC BY-NC-SA 4.0
Author: AlexK1919

🛠 What You’ll Need
Google Drive OAuth2 credentials (connected to the Search Folder, Download File, and Move File nodes)
OpenAI API key (used in the Embeddings OpenAI node)
Postgres + PGVector database (connected in the Postgres PGVector Store node)

🔧 Step-by-Step Setup Instructions
Create Google OAuth2 credentials in n8n and connect them to all Google Drive nodes.
Set your source folder ID in the Search Folder node — this is where incoming files are placed.
Set your processed folder ID in the Move File node — files will be moved here after vectorization.
Ensure you have a PGVector-enabled Postgres instance and enter the table name and collection in the Postgres PGVector Store node.
Add your OpenAI credentials to the Embeddings OpenAI node and select text-embedding-3-small.
Optional: Activate the Schedule Trigger node to run daily or configure your own schedule.
Run manually by triggering “When clicking ‘Test workflow’” for on-demand ingestion.

🧩 Customization Tips
Want to support more file types or enhance the pipeline? (See the routing sketch below.)
Add new extractors: Use Extract from File with other formats like DOCX, Markdown, or HTML.
Refine logic by file type: The Switch node routes files to the correct extraction method based on MIME type (application/pdf, text/plain, application/json).
Pre-process with OCR: Add an OCR step before extraction to handle scanned PDFs or images.
Add filters: Enhance the Search Folder or Switch node logic to skip specific files or folders.

📄 License
This workflow is available under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to use, adapt, and share this workflow for non-commercial purposes under the terms of this license. Full license details: https://creativecommons.org/licenses/by-nc-sa/4.0/
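For illustration only, here is a sketch of the kind of MIME-type routing decision the Switch node makes, written as an n8n Code node. The template itself uses a Switch node, and the binary field name below is an assumption.

```javascript
// Illustration: the same MIME routing the Switch node performs, as a Code node.
// Assumes the downloaded file's MIME type is available on item.binary.data.
const items = $input.all();

return items.map((item) => {
  const mimeType = item.binary?.data?.mimeType || '';
  let route;
  if (mimeType === 'application/pdf') route = 'pdf';
  else if (mimeType === 'text/plain') route = 'text';
  else if (mimeType === 'application/json') route = 'json';
  else route = 'unsupported'; // skip or handle separately
  item.json.route = route;
  return item;
});
```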
by Davide
This workflow streamlines the process of publishing posts (image or video) to multiple social media platforms using a unified form and a third-party API service called Upload-Post. The automation starts with a form trigger, allowing users to submit content (text and media) through a simple frontend interface. Users select the platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), choose the profile name, write a caption, and upload a photo or video.

How It Works
Automates cross-platform social media posting via Upload-Post, handling both images (JPEG) and videos (MP4). Here’s the process:
Trigger: A form submission captures user inputs:
Platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads).
Account (pre-configured profile name).
Caption and file (image/video).
Optional Facebook Page ID for targeted posting.
Routing: The "Video or Photo?" Switch node checks the file’s MIME type:
Image: routes to the "Post photo" HTTP node (uploads via the upload_photos API).
Video: routes to the "Post video" HTTP node (uploads via the upload API).
API Integration: Both nodes send data to Upload-Post.com’s API, including:
Caption, account name, platform, and file binary.
Facebook Page ID (if provided).
Success/Failure Handling: The "Result Photo/Video" nodes parse the API response. (A request sketch is shown below.)

Setup Steps
Prerequisites:
Upload-Post.com API key: get it from the API Keys dashboard. The free tier allows 10 uploads/month.
Configuration:
API Authentication: In the HTTP Request nodes (Post photo / Post video), set the Authorization header:
Name: Authorization
Value: Apikey YOUR_API_KEY_HERE
Form Customization: Adjust the "On form submission" node to:
Add/remove platforms (e.g., YouTube when approved).
Modify file type restrictions (default: .jpg, .mp4).
Account Mapping: Ensure the "Account" field matches profiles configured in Upload-Post.com (e.g., test1, test2).
Facebook Page Integration:
Optional: Add a Facebook Page ID field for page-specific posts.
Testing:
Submit test forms with images/videos.
Verify API responses and success/failure messages.
Optional Enhancements:
Add error logging (e.g., save failed attempts to a database).
Extend to YouTube once API support is confirmed.

Key Features
Multi-Platform: Post to 6+ social networks simultaneously.
User-Friendly: Simple form interface for non-technical users.
Error Handling: Clear feedback for success/failure cases.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
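A minimal sketch of the photo-upload request the "Post photo" HTTP Request node performs. The Apikey Authorization scheme comes from the template; the base URL and form field names are assumptions, so verify them against Upload-Post.com's API documentation before relying on them.

```javascript
// Sketch of the "Post photo" call. Base URL and field names are assumptions.
import { readFile } from 'node:fs/promises';

async function postPhoto({ apiKey, account, platform, caption, filePath }) {
  const form = new FormData();
  form.append('user', account);       // profile name configured in Upload-Post
  form.append('platform[]', platform); // e.g. "instagram"
  form.append('title', caption);
  form.append('photos[]', new Blob([await readFile(filePath)]), 'photo.jpg');

  const response = await fetch('https://api.upload-post.com/api/upload_photos', {
    method: 'POST',
    headers: { Authorization: `Apikey ${apiKey}` }, // header format used by the template
    body: form,
  });
  return response.json(); // parsed by the "Result Photo/Video" nodes
}
```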
by Jimleuk
This n8n template demonstrates how any organisation can quickly and easily build and offer MCP servers to their customers or internal staff to improve productivity. This MCP example uses PayCaptain.com and shows how to create an MCP server which can search for and update employee data.

How it works
A MCP server trigger is used and connected to 3 custom workflow tools: Search Employee, Get Employee by ID and Update Employee.
Each tool makes calls to the PayCaptain API to perform its respective task. Extra care is taken to strip out sensitive data and ensure we're not sharing too much (see the sketch below).
The Update Employee tool also guards against updating fields which should preferably remain read-only. When you control the MCP server, you can determine the behaviour of the tool.
Finally, a Google Sheets node is used to log all operations for later audit. This adds a tiny bit of latency but is recommended if sensitive data is being accessed.

How to use
This MCP server allows any compatible MCP client to manage their PayCaptain employee database. You will need a PayCaptain account and developer key to use it.
Connect your MCP client by following the n8n guidelines here - https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
Try the following queries in your MCP client:
"When did Sarah start her employment at the company?"
"Does Jack work Wednesdays or Fridays?"
"Please update Tracy's NI number to ABCD123456"

Requirements
PayCaptain account and developer key.
Google Sheets to log actions for later audit.
MCP client or agent for usage, such as Claude Desktop - https://claude.ai/download

Customising this workflow
Add or remove employee attributes as required for your use case.
If Google Sheets is too slow, consider an API call to a faster service to log calls to the MCP server.
Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
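A minimal sketch of the "strip sensitive data" step: return only an allow-list of employee fields before handing results back to the MCP client. The field names below are hypothetical, not taken from the PayCaptain API, so adapt them to the real response schema.

```javascript
// n8n Code node sketch: allow-list the employee fields exposed to the MCP client.
// Field names are hypothetical placeholders.
const ALLOWED_FIELDS = ['employeeId', 'firstName', 'lastName', 'startDate', 'workingDays'];

return $input.all().map((item) => {
  const safe = {};
  for (const field of ALLOWED_FIELDS) {
    if (field in item.json) safe[field] = item.json[field];
  }
  return { json: safe }; // anything not allow-listed is dropped
});
```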
by Ranjan Dailata
Who this is for?
Extract & Summarize Yelp Business Review is an automated workflow that extracts Yelp business reviews using Bright Data Web Unlocker, processes and formats the raw data, summarizes it using Google Gemini's LLM, and forwards the concise summary with the review response to a specified webhook endpoint.
This workflow is tailored for:
Local SEO specialists who need structured insights from Yelp reviews to optimize listings.
Business owners wanting quick summaries of what customers love or complain about.
Reputation managers who monitor brand sentiment and identify customer pain points.
Data analysts & researchers extracting Yelp review patterns at scale.
AI product builders needing clean Yelp review data as input for their LLMs or recommender systems.

What problem is this workflow solving?
Yelp reviews are rich in customer sentiment but messy to work with manually. This workflow solves:
The pain of scraping Yelp review content manually.
The challenge of building structured data with the summary.
The need for structured outputs suitable for analysis, reports, or AI input.

What this workflow does
This automated pipeline does the following:
Bright Data Integration: Queries Yelp and scrapes business listing data using Bright Data's Web Unlocker (see the request sketch below).
Structured Data Formatting: Formats the Yelp review data into a structured JSON response.
Google Gemini Summarization: Sends the cleaned reviews to Google Gemini for a concise summary.
Output Delivery: Returns the structured response with the concise summary over the webhook endpoint.

Setup
Sign up at Bright Data.
Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
Update the Yelp business review URL and the Bright Data zone by navigating to the Set Yelp URL with the Bright Data Zone node.
Update the Webhook Notifier for the merged response node with the webhook endpoint of your choice.

How to customize this workflow to your needs
This workflow is built to be flexible - whether you’re a market researcher, entrepreneur, or data analyst. Here's how you can adapt it to fit your specific use case:
Target specific business categories: Update the Yelp business review input to scrape different businesses like gyms, salons etc.
Limit reviews: Add filters by description, location, or page range to get the top reviews.
Tweak the data extraction node: Update the Structured Data Extractor node's output parser to build the JSON response with the appropriate fields or attributes.
Tweak the summarization prompt: Modify the Gemini prompt to generate a more comprehensive summary.
Send output to other destinations: Replace the webhook URL to forward output to:
Google Sheets
Airtable
Slack or Discord
Custom API endpoints
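A minimal sketch of the Web Unlocker call the workflow makes for a Yelp page. The endpoint and body shape follow Bright Data's Web Unlocker API as commonly documented, but treat them as assumptions and confirm against your zone's documentation; the zone name is a placeholder.

```javascript
// Sketch of a Bright Data Web Unlocker request for a Yelp business review page.
async function fetchYelpPage({ token, zone, yelpUrl }) {
  const response = await fetch('https://api.brightdata.com/request', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`, // the Web Unlocker token configured in Header Auth
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone,           // e.g. "web_unlocker1" - your Web Unlocker zone name (placeholder)
      url: yelpUrl,   // the Yelp business review URL set in the workflow
      format: 'raw',  // return the raw page body for downstream formatting
    }),
  });
  if (!response.ok) throw new Error(`Bright Data request failed: ${response.status}`);
  return response.text();
}
```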
by Mykolas Bartkus
What This Workflow Does
This n8n workflow reads backlinks from a Google Sheet, sends each one to the DataForSEO On-Page API, and checks:
Whether the backlink is still live on the target page
Whether it's dofollow or nofollow
Whether it's missing (i.e., lost)
The result is then written back to the same Google Sheet under a Status column.

Step-by-Step Setup Instructions
Add your DataForSEO and Google Sheets credentials in n8n
Make sure your Google Sheet has these columns: Backlink URL, Landing page, and Status
Click the Test Workflow button to check a batch of backlinks

Workflow Breakdown
Trigger: Manual test start
Read Data: Pulls backlink URLs and target pages from Google Sheets
Format URLs: Extracts the domain from each URL
Send POST Request to DataForSEO: Triggers a crawl of the backlink URL (see the sketch below)
Wait 20 seconds: Allows the crawl to finish
Fetch Link Results: Retrieves backlink data from DataForSEO
Validate Backlink: Checks whether the backlink is live and whether it's dofollow
Update Google Sheets: Logs the status as Live, Lost, or Lost (Nofollow)
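A minimal sketch of the crawl request sent to DataForSEO's On-Page API. DataForSEO uses HTTP Basic auth (login and password); the exact payload fields are an assumption based on the public API documentation, so verify them before use.

```javascript
// Sketch of the DataForSEO On-Page task_post call that starts a crawl
// of the page hosting the backlink. Payload fields are assumptions.
async function startCrawl({ login, password, backlinkUrl }) {
  const auth = Buffer.from(`${login}:${password}`).toString('base64');
  const response = await fetch('https://api.dataforseo.com/v3/on_page/task_post', {
    method: 'POST',
    headers: {
      Authorization: `Basic ${auth}`,
      'Content-Type': 'application/json',
    },
    // One task per backlink URL; a single crawled page is enough to check the link.
    body: JSON.stringify([{ target: backlinkUrl, max_crawl_pages: 1 }]),
  });
  const data = await response.json();
  return data.tasks?.[0]?.id; // task id used later to fetch the link results
}
```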
by Amjid Ali
Automate Digital Delivery After PayPal Purchase Using n8n
A Complete Step-by-Step Guide to Seamless Template Delivery
Built by Amjid Ali – SyncBricks
Deliver personalized files instantly after PayPal transactions using n8n – without writing a single backend line.

🚀 What This n8n Workflow Does
This automation template helps you automatically deliver a digital product (such as an n8n template or JSON file) to customers who pay via PayPal — within seconds.
You can:
Automatically extract customer info
Identify what was purchased
Send a clean, branded email with the product file
Promote your other courses, books, and tools

📦 Use Case Example
Product: AI-Powered Social Media Content Generator & Publisher
When a customer buys this product through PayPal, this automation:
Listens for a successful payment event
Fetches order details via API
Sends an HTML email with the template attached
Promotes your other offerings with embedded links

🔧 Prerequisites
You’ll need:
An n8n instance (self-hosted or n8n Cloud)
A PayPal developer account
PayPal OAuth2 credentials configured in n8n
Your product hosted as a downloadable .json file (Oracle, Dropbox, GitHub, etc.)
SMTP email credentials in n8n

🧠 Step-by-Step Setup
1. Webhook Trigger
Node: Webhook
Listens for a POST request from PayPal’s webhook for PAYMENT.CAPTURE.COMPLETED events.
📌 Add the webhook to your PayPal Developer App > Webhooks.
2. Wait
Node: Wait
Adds a brief delay to ensure the payment is completely processed before continuing.
3. Filter Event Type
Node: Switch
Processes only when the event is PAYMENT.CAPTURE.COMPLETED.
4. Fetch Order Details
Node: HTTP Request
Retrieves the order information from PayPal's Orders API.
URL format: https://api.paypal.com/v2/checkout/orders/{{ order_id }}
5. Extract Email & Product Info
Node: Set
Extracts first name, last name, email address, and the purchased item name.
6. Identify Product Purchased
Node: Switch
Checks if the product is “AI-Powered Social Media Content Generator & Publisher”.
7. Download Workflow File
Node: HTTP Request
Fetches the hosted workflow JSON from object storage (Oracle in this case).
8. Convert to Downloadable File
Node: Code
Converts the JSON content into a binary file and attaches it (see the sketch below).
9. Send Custom Email
Node: Send Email
Sends a rich HTML email to the buyer with:
Their name
The file attachment
Product name
Helpful resource links:
📘 Mastering n8n Course on Udemy
📖 Step-by-Step Guide (n8n Book)
🎓 n8n Video Tutorials (Free Course)
☁️ Sign up for n8n Cloud – Use code AMJID10
🎥 YouTube Video Walkthrough

📚 Additional Learning Resources
🚀 My Full Automation Suite
Explore more and master n8n with these resources:
🎓 Mastering n8n (Full Udemy Course)
📕 Get Your Step-by-Step Guide (n8n Book)
🎥 Get Step-by-Step Tutorials (Video Course)
☁️ Sign up for n8n Cloud
💡 Templates, Tools, and More
📺 YouTube Channel – SyncBricks

🙋 Need Help or Customization?
Reach out!
Email: amjid@amjidali.com
LinkedIn: linkedin.com/in/amjidali
Website: syncbricks.com
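A minimal sketch of what the Convert to Downloadable File Code node might contain. It assumes the downloaded workflow JSON sits on the incoming item's json property and that the prepareBinaryData helper is available in your n8n version; the file name is an illustrative placeholder.

```javascript
// n8n Code node sketch: turn the downloaded workflow JSON into a binary
// attachment for the Send Email node. Field and file names are assumptions.
const item = $input.first();
const workflowJson = JSON.stringify(item.json, null, 2);

const binary = await this.helpers.prepareBinaryData(
  Buffer.from(workflowJson, 'utf-8'),
  'ai-social-media-generator.json', // placeholder file name
  'application/json'
);

return [{ json: { fileName: 'ai-social-media-generator.json' }, binary: { data: binary } }];
```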
by Xavier
This workflow creates nested Google Drive folders from a path string (like Projects/Clients/Reports). It automatically handles the necessary folder lookups and creation steps required by Google Drive, then outputs the final folder's ID for immediate use.

How it works
This workflow streamlines the creation of nested folders in Google Drive:
Input: Provide a root_folder_id and a path (e.g., Projects/Clients/Reports) as input.
Path Parsing: The workflow splits the path into individual folder names (based on the / separator).
Iterative Check & Create: Loops through each part of your path (a sketch of this loop is shown below):
Searches within the current parent folder (starting with the root_folder_id) for a subfolder matching the name.
If found: retrieves the existing folder's ID to use as the parent for the next iteration.
If not found: creates a new folder with that name inside the current parent folder and uses the new folder's ID as the parent for the next iteration.
Output: Returns the Google Drive folder ID of the very last folder in the specified path (e.g., the ID for Reports in the example above). This ID can then be directly used in subsequent n8n nodes to upload files, create documents, or perform other actions within that specific folder.

Set up steps
Setting up this workflow requires configuring the connection to Google Drive and knowing where to start creating folders:
Connect Google Drive Account: Ensure you have a Google Drive credential configured in your n8n instance, then link your credentials in the workflow: there are 2 Google Drive nodes that need to be updated.
Identify Starting Folder ID: Determine the Google Drive folder ID where your nested structure should begin. You can either use the root of your Google Drive or a specific folder:
To use the root of Google Drive, simply set root_folder_id to root (also called "My Drive" in the UI).
To use a specific folder, open the folder in a web browser and look at the URL. The folder ID is in the last part of the URL: https://drive.google.com/drive/folders/THIS_IS_THE_FOLDER_ID.
Prepare Inputs for Execution: When running the workflow (or triggering it), you will need to provide:
google_drive_folder_id -> this is the root folder ID you identified in step 2.
desired_path -> this is the path you want to create (e.g., Projects/Clients/Reports).
You can call this workflow from your other workflows with an Execute Workflow node, passing these two fields as inputs.
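For illustration, here is the check-and-create loop expressed directly against the Google Drive v3 REST API. The actual workflow uses Google Drive nodes rather than raw HTTP calls, and this sketch assumes you already have an OAuth2 access token.

```javascript
// Sketch of the iterative check & create logic against the Drive v3 API.
async function ensurePath(accessToken, rootFolderId, path) {
  const headers = {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json',
  };
  let parentId = rootFolderId;

  for (const name of path.split('/').filter(Boolean)) {
    // Look for an existing subfolder with this name under the current parent.
    const q = encodeURIComponent(
      `'${parentId}' in parents and name = '${name}' and ` +
      `mimeType = 'application/vnd.google-apps.folder' and trashed = false`
    );
    const search = await fetch(`https://www.googleapis.com/drive/v3/files?q=${q}`, { headers });
    const { files } = await search.json();

    if (files && files.length > 0) {
      parentId = files[0].id; // reuse the existing folder
    } else {
      // Create the folder and descend into it.
      const create = await fetch('https://www.googleapis.com/drive/v3/files', {
        method: 'POST',
        headers,
        body: JSON.stringify({
          name,
          mimeType: 'application/vnd.google-apps.folder',
          parents: [parentId],
        }),
      });
      parentId = (await create.json()).id;
    }
  }
  return parentId; // ID of the deepest folder in the path
}
```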
by PollupAI
Who is this for?
This workflow is designed for Customer Satisfaction Managers (CSM), sales professionals, and operations managers who need to automate the analysis of client transcripts, save summarized notes to HubSpot, and route relevant feedback to the appropriate departments via email.

What problem is this workflow solving? / Use Case
Manually processing client conversations, extracting key insights, and distributing them to the right teams is time-consuming and error-prone. This workflow automates:
Transcript analysis using AI (OpenAI) to identify relevant content.
HubSpot integration to log meeting notes against client records.
Email routing to ensure feedback reaches the correct departments (e.g., support, sales, product, admin).

What this workflow does
Input Transcript: Accepts a client conversation transcript (e.g., from emails, calls, or chats).
HubSpot Sync: Searches for the client’s HubSpot ID using their email, then uploads a summarized version of the conversation as meeting notes.
AI-Powered Routing: Uses an OpenAI model to analyze the transcript and categorize content by department, then triggers emails (via Gmail) to route feedback to the relevant teams (see the sketch below).
Form Completion: Ends the workflow with optional user confirmation.

Setup
Prerequisites:
n8n instance (cloud or self-hosted).
HubSpot API credentials (for contact lookup and notes upload).
OpenAI API key (for transcript analysis).
Gmail account (for sending emails).
Configuration:
Replace placeholder nodes (e.g., HubSpot, OpenAI, Gmail) with your authenticated accounts.
Define email templates and recipient addresses for routing.
Adjust the OpenAI prompt to match your categorization criteria (e.g., "support," "billing").

How to customize this workflow to your needs
Transcript Sources: Extend the workflow to pull transcripts from other sources (e.g., Zoom, Slack).
Departments: Modify the routing logic to include additional teams or conditions.
Notifications: Add Slack/MS Teams alerts for urgent feedback.
Error Handling: Introduce retries or fallback actions for failed HubSpot/Gmail steps.
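A minimal sketch of the routing step: map the categories returned by the AI analysis to department inboxes before the Gmail node sends the emails. The category names, field names, and addresses are hypothetical, so adjust them to match your OpenAI prompt and your organisation.

```javascript
// n8n Code node sketch: turn AI categories into one outgoing email per department.
// Categories, field names, and addresses below are hypothetical placeholders.
const ROUTES = {
  support: 'support@example.com',
  sales: 'sales@example.com',
  product: 'product@example.com',
  admin: 'admin@example.com',
};

// Assumes the previous node produced { categories: ["support", ...], summary: "..." }.
const { categories = [], summary = '' } = $input.first().json;

return categories
  .filter((category) => category in ROUTES)
  .map((category) => ({
    json: {
      to: ROUTES[category],
      subject: `Client feedback for ${category}`,
      body: summary,
    },
  }));
```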
by Luciano Gutierrez
Instagram Auto-Comment Responder with AI Agent Integration
Version: 1.1.0 ‧ n8n Version: 1.88.0+ ‧ License: MIT

A fully automated workflow for managing and responding to Instagram comments using AI agents. Designed to improve engagement and save time, this system listens for new Instagram comments, verifies and filters them, fetches relevant post data, processes valid messages with a natural language AI, and posts context-aware replies directly on the original post.

Key Features
💬 AI-Driven Engagement: Intelligent responses to comments via a GPT-powered agent.
✅ Webhook Verification: Handles the Instagram webhook handshake to ensure secure integration.
📦 Data Extraction: Maps incoming payload fields (user ID, username, message text, media ID) for processing.
🚫 Self-Comment Filtering: Automatically skips comments made by the account owner to prevent loops.
📡 Post Data Retrieval: Fetches the media’s id and caption from the Graph API (v22.0) before generating a reply.
🧠 Natural Language Processing: Uses a custom system prompt to maintain brand tone and context.
🔁 Automated Replies: Posts the AI-generated message back to the comment thread using Instagram’s API.
🧩 Modular Architecture: Clear separation of steps via sticky notes and dedicated HTTP Request and Agent nodes.

Use Cases
Social Media Automation: Keep followers engaged 24/7 with instant, relevant replies.
Community Building: Maintain a consistent voice and tone across all interactions.
Brand Reputation Management: Ensure no valid comment goes unanswered.
AI Customer Support: Triage simple questions and direct followers to resources or support.

Technical Implementation
Webhook Verification
Node: Webhook + Respond to Webhook
Echoes hub.challenge to confirm subscription and secure incoming events.
Data Extraction
Node: Set
Maps payload fields into structured variables: conta.id, usuario.id, usuario.name, usuario.message.id, usuario.message.text, usuario.media.id, endpoint.
User Validation
Node: Filter
Skips processing if conta.id equals usuario.id (self-comments).
Post Data Retrieval
Node: HTTP Request (Get post data)
GET https://graph.instagram.com/v22.0/{{ $json.usuario.media.id }}?fields=id,caption&access_token={{ credentials }}
Captures the media’s caption for richer context in replies.
AI Response Generation
Nodes: AI Agent + OpenRouter Chat Model
Uses a detailed system prompt with:
Profile persona (expert in AI & automations, friendly tone).
Input data (username, comment text, post caption).
Filtering logic (spam, praise, questions, vague comments).
Returns either the reply text or [IGNORE] for irrelevant content.
Posting the Reply
Node: HTTP Request (Post comment)
POST {{ $json.endpoint }}/{{ $json.usuario.message.id }}/replies with message={{ $json.output }}
Sends the AI answer back under the original comment (a request sketch follows below).

Instructions for Setup
Import Workflow
In n8n > Workflows > Import from File, upload the provided .json template.
Configure Credentials
Instagram Graph API (Header Auth or FacebookGraphApi) with instagram_basic, instagram_manage_comments scopes.
OpenRouter/OpenAI API key for the AI agent.
Customize System Prompt
Edit the AI Agent’s prompt to adjust brand tone, language (Brazilian Portuguese), length, or emoji usage.
Test & Activate
Publish a test comment on an Instagram post.
Verify each node’s execution, ensuring the webhook, filter, data extraction, HTTP requests, and AI Agent respond as expected.
Extend & Monitor
Add sentiment analysis or lead capture nodes as needed.
Monitor execution logs for errors or rate-limit events.
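A minimal sketch of the reply call the Post comment HTTP Request node performs, following the endpoint pattern shown above. It assumes a valid Instagram access token; the variable names are illustrative.

```javascript
// Sketch of posting the AI-generated reply under the original comment.
async function replyToComment({ accessToken, commentId, message }) {
  const url = `https://graph.instagram.com/v22.0/${commentId}/replies`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ message, access_token: accessToken }),
  });
  if (!response.ok) throw new Error(`Reply failed: ${response.status}`);
  return response.json(); // contains the id of the newly created reply
}
```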
Tags: Social Media • Instagram Automation • Webhook Verification • AI Agent • HTTP Request • Auto Reply • Community Management
by Ranjan Dailata
Who this is for?
Extract & Summarize Indeed Company Info is an automated workflow that extracts Indeed company profile information using Bright Data Web Unlocker, transforms it using Google Gemini’s LLM, and forwards the transformed response with the summary to a specified webhook for downstream use.
This workflow is tailored for:
Recruiters and HR teams looking to assess companies quickly during talent sourcing.
Job seekers researching potential employers and needing summarized company insights.
Market researchers and analysts monitoring competitor or industry players.

What problem is this workflow solving?
Searching and evaluating company profiles on Indeed manually can be time-consuming and inefficient, especially when dealing with large volumes of companies. Manually browsing, copying, and summarizing company descriptions, reviews, and ratings from Indeed hinders productivity and limits real-time insights.
This workflow solves this by:
Automating the extraction of company details from Indeed using Bright Data Web Unlocker.
Summarizing the raw data using Google Gemini's language model for a quick, human-readable overview (see the sketch below).
Sending the transformed response with the summary to a chosen endpoint, like Slack, Notion, Airtable, or a custom webhook.

What this workflow does
This automated pipeline does the following:
Scrapes Indeed company profile pages (e.g., ratings, description, reviews) using Bright Data’s Web Unlocker.
Transforms the scraped content into structured JSON using n8n’s built-in tools.
Summarizes and extracts meaningful insights using Google Gemini's large language model.
Forwards the summarized, formatted response to a specified webhook or app for real-time access, storage, or analysis.

Setup
Sign up at Bright Data.
Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication).
In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
Update the search query and Bright Data zone by navigating to the Set Indeed Search Query node.
Update the Webhook Notifier with the webhook endpoint of your choice.

How to customize this workflow to your needs
This workflow is built to be flexible - whether you're a company recruiter, a market researcher, an entrepreneur, or a data analyst. Here’s how you can adapt it to fit your specific use case:
Changing the data source: Replace the Indeed search input with other job or business listing platforms if needed (e.g., Glassdoor, Crunchbase).
Refining the LLM prompt: Tailor the Gemini prompt to transform or summarize the Indeed company information in a specific format.
Routing the output to different destinations: Send summaries or the transformed response to Google Sheets, Airtable, or CRMs like HubSpot or Salesforce.
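For illustration, here is a sketch of a direct Gemini summarization call over REST. The workflow itself uses n8n's Google Gemini node rather than raw HTTP, and the model name below is only an example; pick whatever model your account supports.

```javascript
// Sketch of summarizing scraped Indeed company info with the Gemini REST API.
async function summarizeCompanyInfo(apiKey, scrapedText) {
  const url =
    'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent' +
    `?key=${apiKey}`;
  const response = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{
        parts: [{
          text: `Summarize this Indeed company profile in a short, human-readable overview:\n\n${scrapedText}`,
        }],
      }],
    }),
  });
  const data = await response.json();
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
}
```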