by InfraNodus
This template finds the content gaps in PDF documents using the InfraNodus knowledge graph / GraphRAG text representation and then generates ideas / questions / AI prompts that bridge those gaps by optimizing the knowledge graph's structure. Simply upload several PDF files (research papers, corporate or market reports, etc.) and generate an idea in seconds.

The template is useful for:
- generating ideas / questions for research
- generating content ideas based on competitors' discourse
- finding blind spots in any discourse and generating ideas that address them
- avoiding the generic bias of LLM models and focusing on what's important in your particular context

**What are Content Gaps and Knowledge Graphs?**

Knowledge graphs represent any text as a network: the main concepts are the nodes, and their co-occurrences are the connections between them. Based on this representation, we build a graph and apply network science metrics to rank the most important nodes (concepts) that serve as the crossroads of meaning, as well as the main topical clusters they connect. Naturally, some of the clusters will be disconnected, with gaps between them. These are the topics (groups of concepts) that exist in this context (the documents you uploaded) but are not very well connected. Addressing those gaps shows you which groups of concepts you could connect with your own ideas. This is exactly what InfraNodus does: it builds the structure, finds the gaps, then uses the built-in AI to generate research questions and ideas that bridge those gaps.

**How it works**

1. Step 1: You upload your PDF files using an online web form, which you can run from n8n or even make publicly available.
2. Steps 2-4: The documents are processed with the Code and PDF to Text nodes to extract plain text from them.
3. Step 5: The text is sent to the InfraNodus GraphRAG node, which creates a knowledge graph, identifies structural gaps in that graph, and uses built-in AI to generate ideas or research questions / prompts (if you use the InfraNodus question module instead).
4. Step 6: The ideas are shown to the user in the same web form.

Optionally, you can hook this template into your own workflow and send the generated idea / question to your own AI model / agent for further processing. If you'd like to sync this workflow to PDF files in a Google Drive folder, you can copy our Google Drive PDF processing workflow for n8n.

**How to use**

You need an InfraNodus GraphRAG API account and key to use this workflow.
- Create an InfraNodus account.
- Get the API key at https://infranodus.com/api-access and create a Bearer authorization key.
- Add this key to the InfraNodus GraphRAG HTTP node(s) you use in this workflow (a hedged sketch of the authenticated call is shown at the end of this description). You do not need any OpenAI keys for this to work.
- Optionally, you can change the settings in Step 4 of this workflow and force it to always use the biggest gap it identifies.

**Requirements**

- An InfraNodus account and API key

Note: an OpenAI key is not required. You have direct access to the InfraNodus AI with the API key.

**Customizing this workflow**

You can use this same workflow with a Telegram bot or Slack (to be notified of the summaries and ideas). You can also hook up automated social media content creation workflows at the end of this template, so you can generate posts that are relevant (covering the important topics in your niche) but also novel (because they connect them in a new way).
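For orientation, here is a minimal, hypothetical sketch of the authenticated call the InfraNodus GraphRAG HTTP node makes. The endpoint path and request body below are placeholders, not the real API surface; take the actual path and parameters from https://infranodus.com/api-access. Only the Bearer-token pattern and the idea of posting the extracted PDF text are what this sketch illustrates.

```typescript
// Placeholder endpoint: replace with the path from your InfraNodus API access page.
const ENDPOINT = "https://infranodus.com/api/<your-endpoint>";

async function requestGapIdeas(apiKey: string, extractedText: string) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // the Bearer key created in your InfraNodus account
      "Content-Type": "application/json",
    },
    // Hypothetical body: the plain text extracted from the uploaded PDFs
    body: JSON.stringify({ text: extractedText }),
  });
  return res.json(); // graph structure, gaps, and generated ideas / questions
}
```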
Check out our n8n templates for more ideas at https://n8n.io/creators/infranodus/

The full tutorial with a conceptual explanation is available at https://support.noduslabs.com/hc/en-us/articles/20454382597916-Beat-Your-Competition-Target-Their-Content-Gaps-with-this-n8n-Automation-Workflow

There is also a video introduction to InfraNodus that explains how knowledge graphs and content gaps work.

For support and help with this workflow, please contact us at https://support.noduslabs.com
by Kanaka Kishore Kandregula
**Daily Magento 2 Stock Check Automation**

This workflow identifies SKUs with low inventory per source and sends daily alerts via:
- 📬 Gmail (HTML email)
- 💬 Slack (formatted text message)

This automation empowers store owners and operations teams to stay ahead of inventory issues by proactively monitoring stock levels across all Magento 2 sources. By receiving early alerts for low-stock products, businesses can restock before items sell out—ensuring continuous product availability, reducing missed sales opportunities, and maintaining customer trust. Avoiding stockouts not only protects your brand reputation but also keeps your store competitive by preventing customers from turning to competitors due to unavailable items. Timely restocking leads to higher fulfillment rates, improved customer satisfaction, and ultimately stronger revenue and long-term loyalty.

✅ **Features**
- Filters out configurable, virtual, and downloadable products
- Uses Magento 2 MSI stock per source
- Customizable thresholds (default: ≤10 overall or ≤5 per source; see the filter-logic sketch below)
- HTML-formatted email report
- Slack notification with a code-formatted message
- Runs daily via Cron (08:50 AM)
- No need for any 3rd-party modules
- One-time setup

🔑 **Credentials Used**
- HTTP Request (Magento 2 REST API using Bearer Token)
- Gmail (OAuth2)
- Slack (OAuth2 or Webhook)

📊 **Tags**
Magento, Inventory, MSI, Stock Alert, Ecommerce, Slack, Gmail, Automation

📂 **Category**
E-commerce → Magento 2 (Adobe Commerce)

👤 **Author**
Kanaka Kishore Kandregula, Certified Magento 2 Developer
https://gravatar.com/kmyprojects
https://www.linkedin.com/in/kanakakishore
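As an illustration of the threshold logic described above, here is a rough TypeScript sketch of the kind of filter an n8n Code node could apply to Magento 2 MSI source items. The field names follow common Magento REST conventions but are assumptions here; check them against your actual API responses and the template's own node configuration.

```typescript
// Hypothetical low-stock filter sketch; field names are assumptions.
interface SourceItem {
  sku: string;
  source_code: string; // MSI source identifier
  quantity: number;
  type_id: string;     // "simple", "configurable", "virtual", "downloadable", ...
}

const OVERALL_THRESHOLD = 10;   // total quantity across all sources
const PER_SOURCE_THRESHOLD = 5; // quantity at a single source

function findLowStock(items: SourceItem[]): SourceItem[] {
  // Skip product types that don't carry physical stock
  const stocked = items.filter(
    (i) => !["configurable", "virtual", "downloadable"].includes(i.type_id)
  );

  // Sum quantity per SKU across all sources
  const totals = new Map<string, number>();
  for (const i of stocked) {
    totals.set(i.sku, (totals.get(i.sku) ?? 0) + i.quantity);
  }

  // Keep rows that breach either the overall or the per-source threshold
  return stocked.filter(
    (i) =>
      (totals.get(i.sku) ?? 0) <= OVERALL_THRESHOLD ||
      i.quantity <= PER_SOURCE_THRESHOLD
  );
}
```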
by Chad McGreanor
**Overview**

This workflow automates LinkedIn posts using OpenAI. The prompts are stored in the workflow and can be customized to fit your needs. The workflow combines a Schedule Trigger, a Code node that determines the day of the week (no posting Friday through Sunday), a prompts node that sets your OpenAI prompts, and a random selection of a prompt so that your content does not look repetitive. All of that is sent to the OpenAI API, a random time is selected, the final LinkedIn post is sent to your Telegram for approval, and once approved the workflow waits for the correct time slot and then posts to your LinkedIn account using the LinkedIn node. A sketch of the day-of-week and prompt-selection logic is shown after this description.

**How it works**

1. Run or schedule the workflow in n8n. The automation can be triggered manually or on a custom schedule (excluding weekends if needed). Customize the prompts in the Prompt node to suit your needs.
2. A random LinkedIn post prompt is selected. Pre-written prompts are rotated to keep content fresh and non-repetitive.
3. OpenAI generates the LinkedIn post. The prompt is sent to OpenAI via the API, and the result is returned in clean, ready-to-use form.
4. You receive the draft via Telegram. The post is sent to Telegram for quick approval or review.
5. The post is scheduled or published via the LinkedIn connector. Once approved, the workflow delays until the target time, then sends the content to LinkedIn.

**What's needed**

An OpenAI API key, a LinkedIn account, and a Telegram account. For Telegram you will need to configure the bot service.

**Step-by-step: Telegram approval for your workflow**

A. Set up a Telegram bot
1. Open Telegram and search for "@BotFather".
2. Start a chat and type /newbot to create a bot.
3. Give your bot a name and a unique username (e.g., YourApprovalBot).
4. Copy the API token that BotFather gives you.

B. Add your bot to a private chat (with you)
1. Find your bot in Telegram and click "Start" to activate it.
2. Send a test message (like "hello") so the chat is created.

C. Get your user ID
1. Search for "userinfobot" in Telegram.
2. Type /start and it will reply with your Telegram user ID.

**OpenAI powers the LinkedIn post creation**

Add your OpenAI API key:
1. Log in to your OpenAI Platform account: https://platform.openai.com/.
2. Go to API keys and create a new secret key.
3. In n8n, create a new "OpenAI API" credential, paste your API key, and give it a name.
4. Apply the credential to the OpenAI Message node.

Connect your LinkedIn account to the LinkedIn node and select your account from the LinkedIn dropdown box.
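For reference, this is a minimal sketch of the day-of-week check and random prompt selection described above, written as plain TypeScript in the style of an n8n Code step. The prompt texts are placeholders; the actual prompts live in the workflow's Prompt node.

```typescript
// Placeholder prompts; replace with the ones stored in your Prompt node.
const prompts: string[] = [
  "Write a LinkedIn post about a lesson learned from a recent project.",
  "Share a practical automation tip for busy professionals.",
  "Post a short opinion on a current trend in your industry.",
];

function selectPrompt(date: Date): { prompt: string } | null {
  const day = date.getDay(); // 0 = Sunday, 5 = Friday, 6 = Saturday
  // No posting Friday through Sunday
  if (day === 0 || day === 5 || day === 6) return null;
  // Rotate prompts at random so the generated content doesn't look repetitive
  const prompt = prompts[Math.floor(Math.random() * prompts.length)];
  return { prompt };
}

console.log(selectPrompt(new Date()));
```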
by Ficky
**Build a Redis-Powered CRUD App with HTML Frontend**

This workflow demonstrates how to use n8n to build a complete, self-contained CRUD (Create, Read, Update, Delete) application without relying on any external server or hosting. It not only acts as the backend, handling all CRUD operations through Webhook endpoints, but also serves a fully functional HTML Single Page Application (SPA) directly via a webhook response. Redis is used as a lightweight data store, providing fast and simple key-value storage with auto-incremented IDs.

Because both the frontend (HTML app) and backend (API endpoints) are managed entirely within a single n8n workflow, you can quickly prototype or deploy small tools without additional infrastructure. This approach is ideal for:
- Rapidly creating no-code or low-code applications
- Running fully browser-based tools served directly from n8n
- Teaching or demonstrating n8n + Redis integration in a single workflow

**Features**
- Add new item with auto-incremented ID
- Edit existing item
- Delete specific item
- Reset all data (clear storage and reset the auto-increment ID)
- Single HTML frontend for demonstration (no framework required)

**Setup Instructions**

1. Prerequisites
Before importing and running the workflow, make sure you have:
- A running n8n instance (self-hosted or cloud)
- A running Redis server (local or remote)

2. API Path Setup
For the REST API, use a consistent path. For example, if you choose items as the path:
- 2a. Get All Items: Method GET, Endpoint items
- 2b. Add Item: Method POST, Endpoint items
- 2c. Edit Item: Method PUT, Endpoint items
- 2d. Delete Item: Method DELETE, Endpoint items
- 2e. Reset Items: Method POST, Endpoint items-reset

3. Configure the API URL
Set the API URL in the SET API URL node. Use your n8n webhook URL, for example: https://yourn8n.com/webhook/items

4. Run the HTML App
Once everything is set:
- Open the webhook URL for the HTML app in a browser.
- The CRUD interface will load and connect to the API endpoints automatically.
- You can now add, edit, delete, or reset items directly from the web interface.

**Workflows**

1. Render the HTML CRUD App
This webhook serves a self-contained HTML Single Page Application (SPA) for basic CRUD operations. The HTML content is returned directly in the webhook response. This setup is ideal for lightweight, browser-based tools without external hosting.
How to use:
- Open the webhook URL in a browser.
- The CRUD interface will load and connect to the data source via API calls.
- Before using, make sure to edit the api_url in the SET API URL node to match your webhook endpoint.

2a. REST API: Get All Items
This webhook handles retrieving all saved items from Redis. Each item is returned with its corresponding ID and associated data (e.g., name). This endpoint is used by the HTML CRUD App to display the full list of items.
- **Method**: GET
- **Function**: Fetches all items stored in Redis and returns them as a JSON array

2b. REST API: Add Item
This webhook handles the Add Item functionality. This endpoint is typically called by the HTML CRUD App when adding a new item (see the sketch after this description).
- **Method**: POST
- **Request Body**: { "name": "item name" }
- **Function**: Generates an auto-incremented ID using Redis and saves the data under that ID

2c. REST API: Edit Item
This webhook handles updating an existing item in Redis.
- **Method**: PUT
- **Request Body**: { "id": 1, "name": "Updated Item Name" }
- **Function**: Finds the item by the given id and updates its data in Redis

2d. REST API: Delete Item
This webhook handles deleting a specific item from Redis.
- **Method**: DELETE
- **Request Body**: { "id": 1 }
- **Function**: Removes the item with the given id from Redis

2e. REST API: Reset Items
This webhook handles resetting all data in the application.
- **Method**: POST
- **Function**: Deletes all stored items from Redis and resets the auto-increment ID by deleting its data in Redis
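A minimal sketch of the add-item pattern the workflow relies on: Redis INCR to generate the auto-incremented ID, then a plain SET for the item itself. The key names (items:next_id, item:&lt;id&gt;) are assumptions for illustration, not the workflow's actual keys.

```typescript
import { createClient } from "redis";

// Sketch only: auto-incremented ID via INCR, item stored as a JSON string.
async function addItem(name: string) {
  const client = createClient({ url: "redis://localhost:6379" });
  await client.connect();

  const id = await client.incr("items:next_id");          // auto-incremented ID
  await client.set(`item:${id}`, JSON.stringify({ id, name }));

  await client.quit();
  return { id, name };
}

addItem("First item").then(console.log);
```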
by Romain Jouhannet
**Linear Project/Issue Status and End Date to Productboard Feature Sync**

Sync project and issue data between Linear and Productboard to keep teams aligned. This workflow updates Productboard features with the status and end date from Linear projects, or the due date from Linear issues. It ensures consistent data and sends a Slack notification whenever changes are made.

**Features**
- Listens for updates in Linear projects/issues.
- Maps Linear statuses to Productboard feature statuses (an illustrative mapping is sketched below).
- Updates Productboard feature details, including timeframe.
- Sends a Slack notification summarizing the updates.

**Setup**
1. Linear credentials: Add your Linear API credentials in n8n.
2. Productboard credentials: Configure the Productboard API credentials in n8n.
3. Linear projects or issues: Select the Linear project(s) or issue(s) you want to monitor for updates.
4. Productboard custom field: Create a custom field in Productboard named "Linear". This field should store the URL of the Linear project or issue you want to sync. Retrieve the UUID of the custom field in Productboard and set it up in the "Get Productboard Feature ID" node.
5. Slack notification: Update the Slack node with the desired Slack channel ID.
6. Activate the workflow: Enable the workflow to automatically sync data when triggered by updates in Linear.
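For illustration only, this is the kind of status mapping the workflow performs. The actual Linear and Productboard status names depend on your workspaces and on the template's own mapping node; the values below are placeholders.

```typescript
// Placeholder mapping: adjust keys/values to your Linear and Productboard setups.
const statusMap: Record<string, string> = {
  planned: "Planned",
  started: "In progress",
  completed: "Done",
  canceled: "Archived",
};

function mapLinearStatus(linearStatus: string): string {
  // Fall back to a default Productboard status when no mapping exists
  return statusMap[linearStatus.toLowerCase()] ?? "Candidate";
}
```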
by Karam Ghazzi
**Description** 📄

Turn your Slack workspace into a smart, AI-powered HelpDesk using this workflow. This automation listens to Slack messages and uses an AI assistant (powered by OpenAI or any other LLM) to respond to employee questions about HR, IT, or internal policies by referencing your internal documentation (such as the Policy Handbook). If the answer isn't available, it can optionally email the relevant department (HR or IT) and ask them to update the handbook. It remembers recent messages per user, cleans up intermediate responses to keep Slack threads tidy, and ensures your team gets consistent and helpful answers—without manually searching docs or escalating simple questions. Perfect for growing teams who want to streamline internal support using n8n, Slack, and AI.

**How it works** 🛠️

This workflow turns n8n into a Slack-based HelpDesk assistant powered by AI. It listens to Slack messages using the Events API, detects whether a real user is asking a question, and responds using OpenAI (or another LLM of your choice). Here's how it works step by step (a sketch of the event-filtering logic follows this description):
1. Webhook Trigger: The workflow starts when a message is posted in Slack via the Events API. It filters out any messages from bots to avoid loops.
2. Identify the User: It fetches the full Slack profile of the user who posted the message and stores their name.
3. Send Receipt Message: An initial message is sent to the user saying, "I'm on it!", confirming their request is being processed.
4. AI Response Handling: The message is processed using the OpenAI chat model (GPT-4o by default). Before responding, it checks whether the query matches any HR or IT policy from the Policy Handbook. If the question can't be answered based on internal data, it can optionally alert the HR or IT department via Gmail (after user confirmation).
5. Memory Retention: It keeps track of the last 5 interactions per user using Simple Memory, so it remembers previous context in a Slack conversation.
6. Cleanup and Final Reply: It deletes the initial receipt message and sends a final, clean response to the user.

**How to use** 🚀

1. Clone the workflow: Download or import the JSON workflow into your n8n instance.
2. Connect your credentials:
   - Slack API (for messaging)
   - Google Sheets API (for department contact info)
   - Google Docs API (for the Policy Handbook)
   - Gmail API (optional, for notifying departments)
   - OpenAI or another AI model
3. Slack setup: Set up a Slack App and enable the Events API. Subscribe to message events and point them to the webhook URL generated by the workflow.
4. Customize responses: Edit the initial and final Slack message nodes if you want to personalize the wording. Swap out the LLM (ChatGPT) with your preferred model in the AI Agent node.
5. Adjust AI behavior: Tune the prompt logic in the "AI Agent" node if you want the AI to behave differently or access different data sources.
6. Expand memory or integrations: Use external databases to store longer histories. Integrate with tools like Asana, Notion, or CRM platforms for further automation.

**Requirements** 📋
- n8n (self-hosted or cloud)
- Slack developer account & app
- OpenAI (or any LLM provider)
- Google Sheets with department contact details
- Google Docs containing the Policy Handbook
- Gmail account (optional, for email alerts)
- Knowledge of Slack Events API setup
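As a rough sketch of the webhook-stage checks described above, the snippet below shows how a Slack Events API payload can be screened: answer Slack's one-time URL-verification challenge and drop anything posted by a bot so the workflow never replies to itself. The payload fields follow Slack's documented event structure, trimmed here for illustration.

```typescript
interface SlackEventPayload {
  type: string;          // "url_verification" | "event_callback"
  challenge?: string;
  event?: {
    type: string;        // e.g. "message"
    user?: string;
    bot_id?: string;     // present when a bot posted the message
    subtype?: string;    // "bot_message" for bot posts
    text?: string;
    channel?: string;
  };
}

function handleSlackEvent(payload: SlackEventPayload) {
  // Slack sends this once when you register the webhook URL
  if (payload.type === "url_verification") {
    return { challenge: payload.challenge };
  }

  const event = payload.event;
  // Ignore bot messages (including this workflow's own replies) to avoid loops
  if (!event || event.bot_id || event.subtype === "bot_message") {
    return null;
  }

  // A real user message: hand it to the AI agent step
  return { user: event.user, text: event.text, channel: event.channel };
}
```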
by Marian Tcaciuc
**Manage Calendar with Voice & Text Commands using GPT-4, Telegram & Google Calendar**

This n8n workflow transforms your Telegram bot into a personal AI calendar assistant, capable of understanding both voice and text commands in Romanian, and of managing your Google Calendar using the GPT-4 model via LangChain. Whether you want to create, update, fetch, or delete events, you can simply speak or write your request to your Telegram bot and the assistant takes care of the rest.

🚀 **Features**
- Voice command support using Telegram voice messages (.ogg)
- Transcription using OpenAI Whisper (a sketch of this call follows this description)
- Natural language understanding with GPT-4 via LangChain
- Google Calendar integration: ✅ Create Events, 🔁 Update Events, ❌ Delete Events, 📅 Fetch Events
- Responses sent back via Telegram

🛠️ **Step-by-Step Setup Instructions**

1. Create a Telegram bot
   - Go to @BotFather on Telegram.
   - Send /newbot and follow the instructions.
   - Save the bot token.
2. Configure the Telegram Trigger node
   - Paste the Telegram token into the Telegram Trigger and Telegram nodes.
   - Set updates to ["message"].
3. Set up OpenAI credentials
   - Get an OpenAI API key from https://platform.openai.com
   - Create a credential in n8n for OpenAI. This is used for both transcription and AI reasoning.
4. Set up Google Calendar
   - In Google Cloud Console: enable the Google Calendar API, set up OAuth2 credentials, and add your n8n redirect URI (usually https://yourdomain/rest/oauth2-credential/callback).
   - Create a credential in n8n using Google Calendar OAuth2.
   - Grant access to your calendar (e.g., the "Family" calendar).

⚙️ **Customization Options**

🗣️ Change language or locale: The transcription node uses "en" for English. Change it to another locale if needed.
✏️ Edit prompt: You can modify the prompt in the AI Agent node to include your name, work schedule, or specific behavior expectations.
📆 Change calendar logic: Adjust time ranges or filters in the Get Events node, or add custom logic before Create Event (e.g., validation, conflict checks).

📚 **Helpful Tips**
- Make sure n8n has HTTPS enabled to receive Telegram updates.
- Test the flow first using only text, then voice.
- Use AI memory or vector stores (like Supabase) if you want context-aware planning in the future.
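Below is a minimal sketch of the transcription step, assuming the Telegram voice message (.ogg) has already been downloaded into a buffer. It calls OpenAI's audio transcription endpoint with the whisper-1 model; adjust the `language` hint to match the locale you use with the bot.

```typescript
// Sketch only: transcribe a downloaded Telegram voice note with OpenAI Whisper.
async function transcribeVoice(audio: Buffer, apiKey: string): Promise<string> {
  const form = new FormData();
  form.append("file", new Blob([audio], { type: "audio/ogg" }), "voice.ogg");
  form.append("model", "whisper-1");
  form.append("language", "ro"); // or "en", matching the workflow's locale setting

  const res = await fetch("https://api.openai.com/v1/audio/transcriptions", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });

  const data = (await res.json()) as { text: string };
  return data.text;
}
```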
by Naveen Choudhary
**Who is this for?**

Marketing agencies, sales teams, lead generation specialists, and business development professionals who need to build comprehensive business databases with contact information for outreach campaigns across any industry.

**What problem is this workflow solving?**

Finding businesses and their contact details manually is time-consuming and inefficient. This workflow automates the entire process of discovering businesses through Google Maps and extracting their digital contact information from websites, saving hours of manual research.

**What this workflow does**

This automated workflow runs every 30 minutes to:
1. Scrape business data from Google Maps using Apify's Google Places crawler
2. Save basic business information (name, address, phone, website) to Google Sheets
3. Filter businesses that have websites
4. Scrape each business's website content using Firecrawl
5. Extract contact information including emails, LinkedIn, Facebook, Instagram, and Twitter profiles (a sketch of this extraction step follows this description)
6. Store all extracted data in organized Google Sheets for easy access and follow-up

**Setup**

Required services:
- Google Sheets account with OAuth2 setup
- Apify account with API access for Google Places scraping
- Firecrawl account with API access for website scraping

Pre-setup:
1. Copy this Google Sheet
2. Configure your Apify and Firecrawl API credentials in n8n
3. Set up the Google Sheets OAuth2 connection
4. Update the Google Sheet ID in all Google Sheets nodes

Quick start: The workflow includes detailed sticky notes explaining each phase. Simply configure your API credentials and Google Sheet, then activate the workflow.

**How to customize this workflow to your needs**
- **Change search criteria**: Modify the Apify scraping parameters to target different business types (restaurants, gyms, salons, etc.) or locations
- **Adjust schedule**: Change the trigger interval from 30 minutes to your preferred frequency
- **Add more contact fields**: Extend the extraction code to find additional contact information like WhatsApp or Telegram
- **Filter criteria**: Modify the filter conditions to target businesses with specific characteristics
- **Batch size**: Adjust the batch processing to handle more or fewer websites simultaneously

Perfect for lead generation, competitor research, and building targeted marketing lists across any industry or business type.
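To illustrate the contact-extraction step, here is a rough TypeScript sketch that pulls emails and social profile links out of scraped website text. The patterns are simple illustrations, not the template's exact code, and will need tuning for edge cases.

```typescript
// Sketch only: extract emails and social links from scraped page text.
function extractContacts(pageText: string) {
  const emails =
    pageText.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) ?? [];

  const socialPatterns: Record<string, RegExp> = {
    linkedin: /https?:\/\/(?:www\.)?linkedin\.com\/[^\s"')]+/gi,
    facebook: /https?:\/\/(?:www\.)?facebook\.com\/[^\s"')]+/gi,
    instagram: /https?:\/\/(?:www\.)?instagram\.com\/[^\s"')]+/gi,
    twitter: /https?:\/\/(?:www\.)?(?:twitter|x)\.com\/[^\s"')]+/gi,
  };

  const socials: Record<string, string[]> = {};
  for (const [name, pattern] of Object.entries(socialPatterns)) {
    socials[name] = [...new Set(pageText.match(pattern) ?? [])]; // de-duplicate
  }

  return { emails: [...new Set(emails)], ...socials };
}
```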
by Miquel Colomer
This n8n workflow template automates the process of finding LinkedIn profiles for a person based on their name and company. It scrapes Google search results via Bright Data, parses the results with GPT-4o-mini, and delivers a personalized follow-up email with insights and suggested outreach steps.

🚀 **What It Does**
- Accepts a user-submitted form with a person's full name and company.
- Performs a Google search using Bright Data to find LinkedIn profiles and company data (an illustrative query builder is sketched after this description).
- Uses GPT-4o-mini to parse HTML results and identify matching profiles.
- Filters and selects the most relevant LinkedIn entry.
- Analyzes the data to generate a buyer persona and follow-up strategy.
- Sends a styled email with insights and outreach steps.

🛠️ **Step-by-Step Setup**
1. Deploy the form trigger to accept person data (name, position, company).
2. Build a Google search query from user input.
3. Scrape search results using Bright Data.
4. Extract HTML content using the HTML node.
5. Use GPT-4o-mini to parse LinkedIn entries and company insights.
6. Filter for matches based on user input.
7. Merge relevant data and generate personalized outreach content.
8. Send the email to a predefined address.
9. Show a final confirmation message to the user.

🧠 **How It Works: Workflow Overview**
- **Trigger:** When User Completes Form
- **Search:** Edit Url LinkedIn, Get LinkedIn Entry on Google, Extract Body and Title, Parse Google Results
- **Matching:** Extract Parsed Results, Filter, Limit, IF LinkedIn Profile is Found?
- **Fallback:** Form Not Found if no match
- **Company Lookup:** Edit Company Search, Get Company on Google, Parse Results, Split Out
- **Content Generation:** Merge, Create a Followup for Company and Person
- **Email Delivery:** Send Email, Form Email Sent

📨 **Final Output**
An HTML-styled email (using Tailwind CSS) with:
- Matched LinkedIn profile
- Company insights
- Persona-based outreach strategy

🔐 **Credentials Used**
- **BrightData account** for scraping Google search results
- **OpenAI account** for GPT-4o-mini-powered parsing and content generation
- **SMTP account** for sending follow-up emails

❓ **Questions?**
Template and node created by Miquel Colomer and n8nhackers. Need help customizing or deploying? Contact us for consulting and support.
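For illustration, this is how a search query like the one built in the "Edit Url LinkedIn" step might be assembled from the form fields. The exact URL parameters the template uses may differ; this sketch only shows the site-restricted query pattern.

```typescript
// Sketch only: build a Google "X-ray" query restricted to LinkedIn profiles.
function buildLinkedInSearchUrl(fullName: string, company: string): string {
  const query = `site:linkedin.com/in "${fullName}" "${company}"`;
  return `https://www.google.com/search?q=${encodeURIComponent(query)}`;
}

console.log(buildLinkedInSearchUrl("Jane Doe", "Acme Corp"));
```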
by Abdullah Maftah
**Auto Source LinkedIn Candidates with GPT-4 Boolean Search & Google X-ray**

**How It Works**
1. User input: The user pastes a job description or ideal candidate specifications into the workflow.
2. Boolean search string generation: OpenAI processes the input and generates a precise LinkedIn Boolean search string formatted as: site:linkedin.com/in ("Job Title" AND "Skill1" AND "Skill2"). This search string is optimized to find relevant LinkedIn profiles matching the provided criteria.
3. Google Sheet creation: A new Google Sheet is automatically created within a specified document to store extracted LinkedIn profile URLs.
4. Google search execution: The workflow sends a search request to Google using an HTTP node with the generated Boolean string.
5. Iterative search & data extraction: The workflow retrieves the first 10 results from Google. If the desired number of LinkedIn profiles has not been reached, the workflow loops, fetching the next set of 10 results until the If node's condition is met (a sketch of this loop follows this description).
6. Data storage: The workflow extracts LinkedIn profile URLs from the search results and saves them to the newly created Google Sheet for further review.

**Setup Steps**

1. API key configuration
- Under "Credentials", add your OpenAI API key from your OpenAI account settings. This key is used to generate the LinkedIn Boolean search string.

2. Adjust search parameters
- Navigate to the "If" node and update the condition to define the desired number of LinkedIn profiles to extract. The default is 50, but you can set it to any number based on your needs.

3. Establish the Google Sheets connection
- Connect your **Google Sheets account** to the workflow.
- **Create a document** to store the sourced LinkedIn profiles.
- The workflow automatically creates a new sheet for each new search, so no manual setup is needed.

4. Authenticate Google search
- **Google search requires authentication** for better results. Use the Cookie-Editor browser extension to export your header string and enable authenticated Google searches within the workflow.

5. Run the workflow
- **Execute** the workflow and monitor the **Google Sheet** for newly added LinkedIn profiles.

**Benefits**
✅ Automates profile sourcing, reducing manual search time.
✅ Generates precise LinkedIn Boolean search strings tailored to job descriptions.
✅ Extracts and saves LinkedIn profiles efficiently for recruitment efforts.

This solution leverages OpenAI and advanced search techniques to enhance your talent sourcing process, making it faster and more accurate! 🚀
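As a rough sketch of the paginated search-and-extract loop described above: request Google results 10 at a time (via the `start` parameter) and collect linkedin.com/in URLs until the target count is reached. In the template this is done through an HTTP node with your exported cookie header; the plain fetch below is illustrative only, and scraping Google results is subject to their terms.

```typescript
// Sketch only: paginate Google results and collect LinkedIn profile URLs.
async function collectProfiles(booleanQuery: string, target = 50): Promise<string[]> {
  const profiles = new Set<string>();

  for (let start = 0; profiles.size < target && start < 200; start += 10) {
    const url =
      `https://www.google.com/search?q=${encodeURIComponent(booleanQuery)}&start=${start}`;
    const html = await fetch(url).then((r) => r.text());

    // Pull profile URLs (www or country subdomains) out of the result HTML
    for (const match of html.matchAll(
      /https:\/\/[a-z]{2,3}\.linkedin\.com\/in\/[A-Za-z0-9\-_%]+/g
    )) {
      profiles.add(match[0]);
    }
  }

  return [...profiles].slice(0, target);
}
```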
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**
The DNB Company Search & Extract workflow is designed for professionals who need to gather structured business intelligence from Dun & Bradstreet (DNB). It is ideal for:
- Market Researchers
- B2B Sales & Lead Generation Experts
- Business Analysts
- Investment Analysts
- AI Developers Building Financial Knowledge Graphs

**What problem is this workflow solving?**
Gathering business information from the DNB website usually involves manual browsing, copying company details, and organizing them in spreadsheets. This workflow automates the entire data collection pipeline, from searching DNB via Google and scraping the relevant pages to structuring the data and saving it in usable formats.

**What this workflow does**
This workflow performs automated search, scraping, and structured extraction of DNB company profiles using Bright Data's MCP search agents and OpenAI's 4o mini model. Here's what it includes:
- Set Input Fields: Provide search_query and webhook_notification_url.
- Bright Data MCP Client (Search): Performs a Google search for the DNB company URL.
- Markdown Scrape from DNB: Scrapes the company page using Bright Data and returns it as markdown.
- OpenAI LLM Extraction: Transforms markdown into clean structured data and extracts business information (company name, size, address, industry, etc.). A hypothetical record shape is sketched after this description.
- Webhook Notification: Sends the structured response to your provided webhook.
- Save to Disk: Persists the structured data locally for logging or auditing.

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

**Setup**
1. Set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
6. In n8n, configure the OpenAI account credentials.
7. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=&lt;your-token&gt;.
8. Update the Set input fields for search_query and webhook_notification_url.
9. Update the file name and path to persist on disk.

**How to customize this workflow to your needs**
- **Search Engine**: The default is Google, but you can change the MCP client engine to Bing or Yandex if needed.
- **Company Scope**: Modify the search query logic for niche filtering, e.g., "biotech startups site:dnb.com".
- **Structured Fields**: Customize the LLM prompt to extract additional fields like CEO name, revenue, or ratings.
- **Integrations**: Push output to Notion, Airtable, or CRMs like HubSpot using additional n8n nodes.
- **Formatting**: Convert output to PDF or CSV using built-in File and Spreadsheet nodes.
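For orientation, here is a hypothetical shape for the structured record this workflow extracts from a DNB company page, along with the kind of webhook POST the notification step makes. The field names are illustrative only; the exact fields come from your own LLM extraction prompt.

```typescript
// Hypothetical record shape; adjust fields to match your extraction prompt.
interface DnbCompanyRecord {
  companyName: string;
  address: string;
  industry: string;
  companySize?: string;
  website?: string;
  sourceUrl: string; // the DNB page the data was scraped from
}

// The webhook notification step would then POST something like this:
async function notify(webhookUrl: string, record: DnbCompanyRecord) {
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
}
```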
by Ranjan Dailata
**Who this is for**
The TrustPilot SaaS Product Review Tracker is designed for product managers, SaaS growth teams, customer experience analysts, and marketing teams who need to extract, summarize, and analyze customer feedback at scale from TrustPilot. This workflow is tailored for:
- **Product Managers** - Monitoring feedback to drive feature improvements
- **Customer Support & CX Teams** - Identifying sentiment trends or recurring issues
- **Marketing & Growth Teams** - Leveraging testimonials and market perception
- **Data Analysts** - Tracking competitor reviews and benchmarking
- **Founders & Executives** - Wanting aggregated insights into customer satisfaction

**What problem is this workflow solving?**
Manually monitoring, extracting, and summarizing TrustPilot reviews is time-consuming, fragmented, and hard to scale across multiple SaaS products. This workflow automates that process, from unlocking the data behind anti-bot layers to summarizing and storing customer insights, enabling teams to respond faster, spot trends, and make data-backed product decisions.

This workflow solves:
- The challenge of scraping protected review data (using Bright Data Web Unlocker)
- The need for structured insights from unstructured review content
- The lack of automated delivery to storage and alerting systems like Google Sheets or webhooks

**What this workflow does**
- Extract TrustPilot Reviews: Uses Bright Data Web Unlocker to bypass anti-bot protections and pull markdown-based content from product review pages (a hedged request sketch appears at the end of this description)
- Convert Markdown to Text: Leverages a basic LLM chain to clean and convert scraped markdown into plain text
- Structured Information Extraction: Uses OpenAI GPT-4o via the Information Extractor node to extract fields like product name, review date, rating, and reviewer sentiment
- Summarization Chain: Generates concise summaries of overall review sentiment and themes using OpenAI
- Merge & Aggregate Output: Consolidates individual extracted records into a structured batch output
- Outbound Data Delivery:
  - Google Sheets: Appends summary and structured review data
  - Write to Disk: Persists raw and processed content locally
  - Webhook Notification: Sends a real-time alert with summarized insights

**Pre-conditions**
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.
**How to customize this workflow to your needs**

Target multiple products:
- Configure the Bright Data input URL dynamically for different SaaS product TrustPilot URLs
- Loop through a product list and run parallel jobs for each

Customize extraction fields:
- Update the prompt in the Information Extractor to include: review title, response from company, specific feature mentions, competitor references

Tune summarization style:
- **Change tone**: executive summary, customer pain-point focus, or marketing quote extract
- **Enable sentiment aggregation** (e.g., 30% negative, 50% neutral, 20% positive)

Expand output destinations:
- Push to Notion, Airtable, or CRM tools using additional webhook nodes
- Generate and send PDF reports (via PDFKit or HTML-to-PDF nodes)
- Schedule summary digests via Gmail or Slack
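As referenced above, this is a hedged sketch of the kind of request the Bright Data Web Unlocker step makes to pull a protected TrustPilot page. The endpoint and body fields follow Bright Data's commonly documented request API but should be verified against your own account and zone configuration; in the workflow itself this is handled by an HTTP Request node with Header Authentication.

```typescript
// Sketch only: unlock a TrustPilot page via Bright Data Web Unlocker.
async function fetchTrustpilotPage(token: string, zone: string, url: string): Promise<string> {
  const res = await fetch("https://api.brightdata.com/request", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,   // the Web Unlocker token from your zone
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      zone,         // your Web Unlocker zone name
      url,          // the TrustPilot review page to unlock
      format: "raw" // return the page body as-is
    }),
  });
  return res.text();
}
```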