by Akash Kankariya
Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics, all in one streamlined process.

Perfect for:
- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

**How It Works**
1. **Input Your Search Term**: Simply enter any keyword or topic you want to research.
2. **Select Video Format**: Choose between short, medium, or long videos.
3. **Choose Number of Videos**: Define how many videos to analyze in bulk.
4. **Automatic Data Fetch**: The workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API.
5. **Performance Scoring**: Videos are scored on engagement rate, with easy-to-understand labels ranging from HOLY HELL (viral) to Dead (see the scoring sketch at the end of this description).
6. **Export to Google Sheets**: All data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

**Setup Instructions**
Google API Key
- Get your YouTube Data API key from the Google Developers Console.
- Add it securely in the n8n credentials manager (do not hardcode it).

Google Sheets Setup
- Create a Google Sheet to store your results (a template link is provided).
- Share the sheet with the Google account used in n8n.
- Update the workflow with your sheet's Document ID and Sheet Name if needed.

Run the Workflow
- Trigger the form webhook via browser or POST call.
- Enter the search term, format, and number of videos.
- Let it process, then check your Google Sheet for insights!

**Features**
- Bulk fetches the latest and top-viewed YouTube videos.
- Intelligent video performance scoring with quick-scan labels.
- Organizes data into Google Sheets with thumbnail previews.
- Easy-to-customize search parameters via an intuitive form.
- Fully automated, with no manual API calls needed.

**Get Started Today!**
Boost your YouTube content strategy and stay ahead with this viral video research automation! Try it now on your n8n instance and tap into the world of viral content like a pro.
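A minimal sketch of the performance-scoring step as an n8n Code node, assuming each item carries a YouTube Data API video resource with a `statistics` object; the template's actual formula and label thresholds may differ, so the numbers below are placeholders.

```js
// Score each video by engagement rate (interactions per view) and attach a
// quick-scan label. Thresholds here are illustrative placeholders.
return $input.all().map((item) => {
  const s = item.json.statistics ?? {};
  const views = Number(s.viewCount ?? 0);
  const likes = Number(s.likeCount ?? 0);
  const comments = Number(s.commentCount ?? 0);

  const engagement = views > 0 ? (likes + comments) / views : 0;

  let label = 'Dead';
  if (engagement > 0.08) label = 'HOLY HELL (viral)';
  else if (engagement > 0.04) label = 'Great';
  else if (engagement > 0.01) label = 'Average';

  return { json: { ...item.json, engagement, label } };
});
```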
by Ranjan Dailata
**Notice**
Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**
Recipe Recommendation Engine with Bright Data MCP & OpenAI is a powerful automated workflow that combines Bright Data's MCP for scraping trending or regional recipe data with OpenAI GPT-4o mini to generate personalized recipe recommendations. This automated workflow is designed for:
- **Food Bloggers & Culinary Creators**: who want to automate the extraction and curation of recipes from across the web to generate content, compile cookbooks, or publish newsletters.
- **Nutritionists & Health Coaches**: who need structured recipe data to analyze ingredients, calories, and nutrition for personalized meal planning or dietary tracking.
- **AI/ML Engineers & Data Scientists**: building models that classify cuisines, predict recipes from ingredients, or generate dynamic meal suggestions using clean, structured datasets.
- **Grocery & Meal Kit Platforms**: that aim to extract recipes to power recommendation engines, ingredient lists, or personalized meal plans.
- **Recipe Aggregator Startups**: looking to scale recipe data collection, filtering, and standardization across diverse cooking websites with minimal human intervention.
- **Developers**: integrating cooking features into apps or digital assistants that offer recipe recommendations, step-by-step cooking instructions, or nutritional insights.

**What problem is this workflow solving?**
This workflow solves:
- Automated recipe data extraction from any public URL
- AI-driven structured data extraction
- Scalable looped crawling and processing
- Real-time notifications and data persistence

**What this workflow does**
1. **Set Recipe Extract URL**: Configure the recipe website URL in the input node, and set your Bright Data zone name and authentication.
2. **Paginated Data Extract**: Triggers a paginated extraction across multiple pages (recipe listing, index, or search pages) and returns a list of recipe links for processing.
3. **Loop Over Items**: Loops through the array of recipe links; each link is passed individually to the scraping engine.
4. **Bright Data MCP Client (Per Recipe)**: Scrapes each individual recipe page using scrape_as_html, bypassing common anti-bot protections via Bright Data Web Unlocker.
5. **Structured Recipe Data Extract (via OpenAI GPT-4o mini)**: Converts raw HTML to clean text using an LLM preprocessing node, then uses OpenAI GPT-4o mini to extract structured data.
6. **Webhook Notification**: Pushes the structured recipe data to your configured webhook endpoint as a JSON payload, ideal for Slack, internal APIs, or dashboards (see the sketch after the setup steps).
7. **Save Response to Disk**: Saves the structured recipe JSON to the local file system.

**Pre-conditions**
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the OpenAI account credentials.
5. Make sure to set the fields in the Set the Recipe Extract URL node.
6. Remember to set the webhook_url to receive a webhook notification with the recipe response.
7. Set the desired local path in the Write the structured content to disk node to save the recipe response.
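A hypothetical sketch of normalizing the GPT-4o mini output before the webhook notification, written as an n8n Code node; every field name here (name, cuisine, ingredients, steps, sourceUrl) is an assumption for illustration, not the template's actual schema.

```js
// Normalize the structured extraction so the webhook always receives a
// predictable payload, even when the LLM omits a field. Field names are
// assumed for illustration.
return $input.all().map((item) => {
  const r = item.json;
  return {
    json: {
      name: r.name ?? 'Untitled recipe',
      cuisine: r.cuisine ?? 'unknown',
      ingredients: Array.isArray(r.ingredients) ? r.ingredients : [],
      steps: Array.isArray(r.steps) ? r.steps : [],
      sourceUrl: r.sourceUrl ?? null,
    },
  };
});
```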
**How to customize this workflow to your needs**
You can tailor the Recipe Recommendation Engine workflow to better fit your specific use case by modifying the following key components:
1. **Input Fields Node**: Update the Recipe URL to target specific cuisine sites or recipe types (e.g., vegan, keto, regional dishes).
2. **LLM Configuration**: Swap out the OpenAI GPT-4o mini model for another provider (like Google Gemini) if you prefer, and modify the structured data prompt to extract whichever custom fields you need.
3. **Webhook Notification**: Configure the Webhook Notification node to point to your preferred integration (e.g., Slack, Discord, internal APIs).
4. **Storage Destination**: Change the Save to Disk node to store the structured recipe data in:
   - A cloud bucket (S3, GCS, Azure Blob, etc.)
   - A database (MongoDB, PostgreSQL, Firestore)
   - Google Sheets or Airtable for spreadsheet-style access
by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

What you'll find:
1. **Naive Sequential Chaining**: The simplest but least efficient approach, connecting LLM nodes in a direct sequence. Easy to set up for beginners but becomes unwieldy and slow as your chain grows.
2. **Agent-Based Processing with Memory**: Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.
3. **Parallel Processing for Maximum Speed**: Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context (see the fan-out sketch below).

**Setup Instructions**
- **API Credentials**: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can modify the model in each Anthropic Chat Model node, or pick an entirely different LLM.
- **For Cloud Users**: If using the parallel processing method (section 3), replace {{ $env.WEBHOOK_URL }} in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
- **Test Data**: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
- **Customization**: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows! Follow me on LinkedIn for more tips on AI automation and n8n workflows!
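A minimal sketch of the fan-out idea behind section 3, written as an n8n Code node: one item holding an array of prompts becomes separate items, so a downstream HTTP Request node fires one call per prompt in parallel. The `prompts` field and the example questions are assumptions, not the template's exact data shape.

```js
// Fan out: emit one item per prompt so downstream nodes run per-prompt.
const prompts = $input.first().json.prompts ?? [
  'Summarize the article in one sentence.',
  'List three key takeaways.',
  'Suggest a catchy title.',
];
return prompts.map((prompt) => ({ json: { prompt } }));
```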
by David Levesque
**Dropbox Folder Monitoring Workflow**
As we don't have (yet?) a Dropbox "Watch new files" or "Watch folder" node, I created this central workflow to do it.

**How it works**
- Triggered by a Dropbox webhook.
- I respond immediately to Dropbox to avoid the webhook being disabled.
- Then I add/duplicate one branch per monitored folder, according to my needs.

In my case, I need to monitor several folders, like "vocal notes to process", "transcriptions to LinkedIn posts" or "quotes to add". This workflow shows 2 types of folder monitoring:
- **Way #1**: Each file in the monitored folder calls a sub-workflow.
- **Way #2**: We get all files from the monitored folder and compare them to a database. If a file is not listed in the DB, I assume it's a new one.

**Way #1 - We get all files from the monitored folder**
- I set a variable folder_to_watch to indicate which folder to monitor. This step is here just to be consistent and allow setting the folder path only once in this branch.
- I list the folder files.
- We keep only files (excluding folders).
- Then I call the specialized sub-workflow.

**Way #2 - We want only new files from the monitored folder**
- I set a variable folder_to_watch to indicate which folder to monitor.
- I list the folder files and keep only files.
- Meanwhile, I query my DB to get the known files for this folder (I send the query (folder_to_watch,eq,{{ $json.folder_to_watch }}) to NocoDB).
- Now I can exclude old files and keep only new ones by merging; I compare on the Dropbox file id, as the file could have been renamed by the user (see the sketch at the end of this description).
- I add the new file to the DB to be sure to recognize it next time, saving the JSON Dropbox data:

```json
{
  "id": "{{ $json.id }}",
  "name": "{{ $json.name }}",
  "lastModifiedClient": "{{ $json.lastModifiedClient }}",
  "lastModifiedServer": "{{ $json.lastModifiedServer }}",
  "rev": "{{ $json.rev }}",
  "contentSize": {{ $json.contentSize }},
  "type": "{{ $json.type }}",
  "contentHash": "{{ $json.contentHash }}",
  "pathLower": "{{ $json.pathLower }}",
  "pathDisplay": "{{ $json.pathDisplay }}",
  "isDownloadable": {{ $json.isDownloadable }}
}
```

- And now I can call my sub-workflow :)

**My DB**
Column details:
- folder_to_watch
- data (json/text)
- timestamp
- file_id (Dropbox file ID, to ease future searches)

**My vision**
- I have only one workflow in my n8n that monitors Dropbox folders/files.
- This workflow calls the required sub-workflow specialized for the task.
- I will have as many branches as I have folders to monitor (if I have 5 different folders to watch, I will have 5 branches and 5 sub-workflows).
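A minimal sketch of the Way #2 "keep only new files" comparison as an n8n Code node. The node names 'List Folder Files' and 'Get Known Files' are assumptions; adapt them to your own workflow. The comparison is on the Dropbox file id, since a file may have been renamed by the user.

```js
// Keep only Dropbox entries whose id is not already in the DB rows
// fetched for this folder.
const dropboxFiles = $('List Folder Files').all().map((i) => i.json);
const knownIds = new Set($('Get Known Files').all().map((i) => i.json.file_id));

return dropboxFiles
  .filter((f) => !knownIds.has(f.id))
  .map((f) => ({ json: f }));
```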
by Adam Bertram
**LintGuardian: Automated PR Linting with n8n & AI**

**What It Does**
LintGuardian is an n8n workflow template that automates code quality enforcement for GitHub repositories. When a pull request is created, the workflow automatically analyzes the changed files, identifies linting issues, fixes them, and submits a new PR with corrections. This eliminates manual code style reviews, reduces back-and-forth comments, and lets your team focus on functionality rather than formatting.

**How It Works**
The workflow is triggered by a GitHub webhook when a PR is created. It fetches all changed files from the PR using the GitHub API, processes them through an AI-powered linting service (Google Gemini), and automatically generates fixes. The AI agent then creates a new branch with the corrected files and submits a "linting fixes" PR against the original branch. Developers can review and merge these fixes with a single click, keeping code consistently formatted with minimal effort.

**Prerequisites**
To use this template, you'll need:
- **n8n instance**: either self-hosted or on n8n.cloud
- **GitHub repository**: where you want to enforce linting standards
- **GitHub Personal Access Token**: with permissions for repo access (repo, workflow, admin:repo_hook)
- **Google AI API Key**: for the Gemini language model that powers the linting analysis
- **GitHub webhook**: configured to send PR creation events to your n8n instance

**Setup Instructions**
1. Import the template into your n8n instance.
2. Configure credentials:
   - Add your GitHub Personal Access Token under Credentials → GitHub API
   - Add your Google AI API key under Credentials → Google Gemini API
3. Update repository information:
   - Locate the "Set Common Fields" code node at the beginning of the workflow
   - Change the gitHubRepoName and gitHubOrgName values to match your repository:

```js
const commonFields = {
  'gitHubRepoName': 'your-repo-name',
  'gitHubOrgName': 'your-org-name'
}
```

4. Configure the webhook: create a file named .github/workflows/lint-guardian.yml in your repository, replacing the URL in the "Trigger n8n Workflow" step with your own webhook URL:

```yaml
name: Lint Guardian
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  trigger-linting:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger n8n Workflow
        uses: fjogeleit/http-request-action@v1
        with:
          url: 'https://your-n8n-instance.com/webhook/1da5a6e1-9453-4a65-bbac-a1fed633f6ad'
          method: 'POST'
          contentType: 'application/json'
          data: |
            {
              "pull_request_number": ${{ github.event.pull_request.number }},
              "repository": "${{ github.repository }}",
              "branch": "${{ github.event.pull_request.head.ref }}",
              "base_branch": "${{ github.event.pull_request.base.ref }}"
            }
          preventFailureOnNoResponse: true
```

5. Customize linting rules (optional):
   - Modify the AI Agent's system message to specify your team's linting preferences
   - Adjust file handling if you have specific file types to focus on or ignore

**Security Considerations**
When creating your GitHub Personal Access Token, remember to:
- Choose the minimal permissions needed (repo, workflow, admin:repo_hook)
- Set an appropriate expiration date
- Treat your token like a password and store it securely
- Consider using GitHub's fine-grained personal access tokens for a more limited scope

As the GitHub documentation notes: "Personal access tokens are like passwords, and they share the same inherent security risks."
**Extending the Template**
You can enhance this workflow by:
- Adding Slack notifications when linting fixes are submitted
- Creating custom linting rules specific to your team's needs
- Expanding it to handle different types of code quality checks
- Adding approval steps for more controlled environments

This template provides an excellent starting point that you can customize to fit your team's exact workflow and code style requirements.
by Naveen Choudhary
This workflow automatically enriches company domain lists with comprehensive business information scraped from ZoomInfo, organizing the data in Google Sheets for sales teams and researchers.

**Who's it for**
- **Sales teams** building prospect databases with accurate company information
- **Marketing professionals** researching target companies for outreach campaigns
- **Business development teams** qualifying leads with revenue and employee data
- **Researchers** collecting structured company data for market analysis
- **Lead generation specialists** enriching domain lists with contact details

**How it works**
The workflow processes unprocessed domains from a Google Sheet, searches for their ZoomInfo profiles using the Serper API (see the query sketch at the end of this description), scrapes the company pages through the Oxylabs proxy service, and extracts structured business data. Each domain is marked as processed to prevent duplicates, and the workflow includes proper rate limiting to respect API limits.

**What it does**
1. Loads unprocessed domains from your Google Sheets database
2. Searches ZoomInfo using targeted queries via the Serper API for each domain
3. Validates search results and extracts relevant ZoomInfo profile URLs
4. Scrapes company pages using Oxylabs to bypass anti-scraping protection
5. Extracts structured data including company details, address, revenue, and employee count
6. Updates Google Sheets with enriched company information
7. Tracks processing status to prevent reprocessing the same domains

**Requirements**
- **Serper API account** with search credits (Get API key)
- **Oxylabs subscription** for the web scraping proxy service (Sign up here)
- **Google Sheets API access** with OAuth2 authentication
- **Google Sheets template**: make a copy of the template sheet with pre-configured columns

**How to set up**
1. Make a copy of the Google Sheets template to your Google Drive.
2. Configure API credentials in the respective HTTP Request nodes:
   - Add the Serper API key in the search node
   - Set up the Oxylabs username/password in the scraping node
3. Set up Google Sheets authentication using OAuth2.
4. Update the Google Sheets document ID in all Google Sheets nodes to point to your copied template.
5. Add your domain list to the sheet with the 'processed' column empty or false.
6. Run the workflow using the manual trigger.

**How to customize the workflow**
- **Search query modification**: Update the search query in the Serper node for a different geographic focus (currently set for the Czech Republic)
- **Data extraction fields**: Modify the Google Sheets column mapping to include/exclude specific company data points
- **Rate limiting**: Adjust wait times between requests to match your API rate limits
- **Batch processing**: Configure the split batch size for processing domains in smaller groups
- **Error handling**: Customize the continue-on-error settings based on your data quality requirements
- **Scheduling**: Replace the Manual Trigger with a Schedule Trigger for automated daily/weekly runs

**Output data includes**
- Complete company name and official address
- Phone numbers and contact information
- Revenue figures and employee headcount
- Industry classifications and business categories
- LinkedIn company profile URLs
- Geographic location details (city, state, country, postal code)
- Processing status tracking for workflow management

Note: This workflow includes comprehensive error handling to ensure domains are always marked as processed, preventing infinite loops while maintaining data integrity. Rate limiting is built in to respect API quotas and avoid service interruptions.
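A minimal sketch of building the targeted ZoomInfo search sent to Serper, as an n8n Code node feeding the HTTP Request node; it assumes each item carries a `domain` column from the Google Sheet, and the template's actual query string may differ.

```js
// Build one Serper search payload per domain, scoped to zoominfo.com.
return $input.all().map((item) => ({
  json: {
    q: `site:zoominfo.com "${item.json.domain}"`,
    num: 5, // a handful of results is enough to find the profile URL
  },
}));
```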
by Elegant Biztech
**Automated QuickBooks Invoice to Custom PDF & Email**

Tired of the standard, boring invoices from QuickBooks Online? This workflow completely automates the process of creating beautiful, custom-branded PDF invoices and emailing them directly to your clients, saving you time and elevating your brand's professionalism. The moment you create an invoice in QuickBooks, this workflow triggers, fetches all the necessary data, and generates a lavish, multi-page-aware PDF invoice complete with your company logo and signature.

**Key Features**
- **Fully Automated**: Runs instantly when a new invoice is created in QuickBooks.
- **Custom Branding**: Automatically fetches your company logo and signature from a URL to place on the invoice.
- **Modern & Professional Design**: Uses a premium, multi-column HTML template that is clean, easy to read, and far superior to the default QBO templates.
- **Multi-Page Ready**: If an invoice has many line items, the template will intelligently create multiple pages and add a "Page X of Y" footer automatically.
- **Smart Layout**: The totals and summary block are designed to never break across pages, ensuring a professional look no matter the length.
- **Automatic Emailing**: The final PDF is attached to a beautifully formatted email and sent directly to the customer's email address on file.

**Prerequisites**
Before you start, you will need a few things:
- A running n8n instance.
- A QuickBooks Online account with API access.
- A running Gotenberg instance. This is a powerful, open-source tool for converting HTML to PDF, and this workflow is designed to connect to its API.
- Publicly accessible URLs for your company logo and signature image (e.g., hosted on your website or a service like Imgur).

**Setup Guide**
Follow these steps carefully to configure the workflow for your own use. Nodes that need your attention are marked with a [!!] prefix.

**Step 1: Configure the QuickBooks Webhook**
The workflow starts with a webhook. You need to tell QuickBooks to send information to this webhook.
1. Open the [!!] Listen for New QuickBooks Invoice node. You will see a Webhook URL; copy the Production URL.
2. Go to your QuickBooks Developer dashboard, select your app, and navigate to the Webhooks section.
3. Paste the n8n URL into the Endpoint URL field and select the Invoice event to subscribe to.

**Step 2: Connect Your QuickBooks Account**
1. Open the [!!] Get Invoice Data from QuickBooks node.
2. In the "Credentials" field, select your existing QuickBooks Online credentials or create a new set.

**Step 3: Add Your Branding**
1. Open the [!!] Fetch Company Logo Image node and replace the placeholder in the URL field with the public URL of your company's logo.
2. Open the [!!] Fetch Company Signature Image node and replace the placeholder in the URL field with the public URL of your signature image.

**Step 4: Update the PDF Generation Service**
1. Open the [!!] Generate PDF via Gotenberg node.
2. In the URL field, replace the placeholder http://YourGotenBergInstanceURL/... with the real URL of your running Gotenberg instance (see the sketch below for the kind of call this node makes).

**Step 5: Configure Your Email**
1. Open the [!!] Email PDF Invoice to Customer node.
2. In the "Credentials" field, select your SMTP or email service credentials.
3. Customize the From Email and Subject fields. You can also edit the HTML email body to match your company's tone of voice.

**Step 6: Activate Your Workflow**
You're all set! Save the workflow and activate it using the toggle at the top-right of the screen. Now, when you create a new invoice in QuickBooks, this automation will handle the rest.
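A minimal sketch of the HTML-to-PDF call the Gotenberg node makes, assuming a Gotenberg 7+ instance and a Node 18+ runtime: the rendered invoice HTML is posted as a multipart file named index.html to Gotenberg's Chromium conversion route. The host and `invoiceHtml` content are placeholders.

```js
// Convert rendered invoice HTML to a PDF via Gotenberg's Chromium route.
const invoiceHtml = '<html><body><h1>Invoice #1001</h1></body></html>'; // placeholder

const form = new FormData();
form.append('files', new Blob([invoiceHtml], { type: 'text/html' }), 'index.html');

const res = await fetch('http://your-gotenberg-host:3000/forms/chromium/convert/html', {
  method: 'POST',
  body: form,
});
const pdf = Buffer.from(await res.arrayBuffer()); // the finished PDF bytes
```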
**A Note from the Creator**
Thank you for using this workflow! I believe that professional and automated invoicing is a cornerstone of a great business. This tool was designed to save you time and help you put your best foot forward with every client interaction. If you have any questions or need assistance, feel free to reach out.
- **Website**: https://www.elegantbiztech.com/
- **Email**: sales@elegantbiztech.com
by Lucía Maio Brioso
**Who is this for?**
This workflow is for anyone with two YouTube channels who wants to copy playlists from one to the other, with no technical skills required. Whether you're a content creator, hobbyist, educator, or just someone managing multiple channels, this workflow helps you save time and avoid the manual work of recreating playlists video by video.

**What problem is this workflow solving?**
YouTube doesn't provide an option to transfer or duplicate playlists between accounts or channels. That means if you want the same playlists in two places, you're stuck:
- Creating new playlists manually
- Searching for each video again
- Copy-pasting links one by one

This workflow automates the entire process for you, accurately, quickly, and with no manual work.

**What this workflow does**
- Retrieves all playlists from a source YouTube channel (excluding private ones)
- For each playlist:
  - Gets all its videos
  - Filters out private or unavailable videos (see the sketch at the end of this description)
  - Creates a new playlist in the target channel with the same title
  - Adds the videos to the new playlist
- Continues smoothly even if some videos fail to copy (e.g., if they're restricted or deleted)

**Setup**
1. Create two YouTube OAuth2 credentials in n8n:
   - One for your source channel
   - One for your target channel
2. Assign the credentials to the correct nodes as indicated in the sticky notes:
   - Source nodes → source credentials
   - Target nodes → target credentials
3. Click "Test workflow" to run it.

> Note: If you have many playlists or videos, you may hit YouTube's API quota. You can request a quota increase in your Google Cloud Console if needed.

**How to customize this workflow to your needs**
- **Copy only specific playlists**: Use a Filter node after the playlist fetch to include only certain titles or IDs.
- **Change the title of the copied playlists**: Modify the title in the Create playlist node (e.g., add "(Copy)" or a prefix).
- **Automate it regularly**: Replace the Manual Trigger with a Cron node if you want to run this periodically.
- **Test safely**: If you're unsure, use a secondary channel as your test target before applying changes to your main account.
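A minimal sketch of the private/unavailable filter as an n8n Code node, assuming items are YouTube playlistItems resources; titles such as "Deleted video" and "Private video" are how the API surfaces removed entries, but your node's output shape may differ.

```js
// Drop playlist entries that are private, deleted, or otherwise unavailable
// before copying them into the target channel.
return $input.all().filter((item) => {
  const status = item.json.status?.privacyStatus;
  const title = item.json.snippet?.title ?? '';
  return status !== 'private' && title !== 'Deleted video' && title !== 'Private video';
});
```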
by M Shehroz Sajjad
**What problem does it solve?**
Manual candidate screening is time-consuming and inconsistent. This workflow automates initial interviews, providing 24/7 availability, consistent questioning, and objective assessments for every candidate.

**Who is it for?**
- HR teams handling high-volume recruiting
- Small businesses without dedicated recruiters
- Companies scaling their hiring process
- Remote-first organizations needing asynchronous screening

**What this workflow does**
Creates AI interviewers from job descriptions that conduct natural conversations with candidates via BeyondPresence Agents. Automatically analyzes interviews and saves structured assessments to Google Sheets.

**Setup**
1. Copy the template sheet: BeyondPresence HR Interview System Template
2. Add credentials:
   - BeyondPresence API Key
   - OpenAI API
   - Google Sheets
3. Configure the webhook in the BeyondPresence dashboard: https://[your-n8n-instance]/webhook/beyondpresence-hr-interviews
4. Paste a job description and run setup
5. Share the generated link with candidates

**How it works**
1. **Agent Creation**: Converts the job description into a conversational AI interviewer
2. **Interview Conduct**: Candidates chat naturally with the AI via a shared link
3. **Webhook Trigger**: Completed interviews are sent to n8n
4. **AI Analysis**: OpenAI evaluates responses against the job requirements (a hypothetical output sketch appears below)
5. **Results Storage**: Assessments are saved to Google Sheets with scores and recommendations

**Resources**
- Google Sheets Template
- BeyondPresence Documentation
- Webhook Setup Guide

**Example Use Case**
A tech startup screens 200 applicants for an engineering role. It creates an AI interviewer in 2 minutes and sends the link to all candidates. It receives structured assessments within 24 hours, identifying the top 20 candidates for human interviews, reducing initial screening time from 2 weeks to 2 days.
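A purely hypothetical sketch of the structured assessment shape the AI analysis step might emit before it is appended to the sheet; every field name and value here is an illustration, not the template's actual schema.

```js
// Illustrative assessment row; adapt field names to your sheet's columns.
return [{
  json: {
    candidate: 'Jane Doe',
    interviewedAt: '2025-01-15T10:30:00Z',
    overallScore: 8.2, // e.g., 0-10, judged against the job description
    strengths: ['clear communication', 'relevant project experience'],
    concerns: ['limited exposure to distributed systems'],
    recommendation: 'advance to human interview',
  },
}];
```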
by Ranjan Dailata
**Who this is for**
The Crunchbase B2B Lead Discovery Pipeline is designed for sales teams, B2B marketers, business analysts, and data operations teams who need a reliable way to extract, structure, and summarize company information from Crunchbase to fuel lead generation and market intelligence. This workflow is ideal for:
- **Sales Development Reps (SDRs)**: needing structured leads from Crunchbase
- **Marketing Analysts**: generating segmented outreach lists
- **Growth Teams**: identifying trending B2B startups
- **RevOps Teams**: automating company research pipelines
- **Data Teams**: consolidating insights into Google Sheets for dashboards

**What problem is this workflow solving?**
Manual extraction of company data from Crunchbase is time-consuming, inconsistent, and often lacks the contextual summary required for sales enablement or growth targeting. This workflow automates the extraction, transformation, summarization, and delivery of Crunchbase company data into structured formats, making it instantly usable for B2B targeting and analysis. It solves:
- The difficulty of scaling lead discovery from Crunchbase
- The need to summarize raw textual content for quick insights
- The lack of integration between web scraping, LLM processing, and storage

**What this workflow does**
- **Markdown to Textual Data Extractor**: Takes raw scraped markdown from Crunchbase and converts it into readable plain text using a basic LLM chain
- **Structured Data Extraction**: Applies a parsing model (OpenAI) to extract structured fields such as company name, funding rounds, industry tags, location, and founding year (see the schema sketch at the end of this description)
- **Summarization Chain**: Generates an executive summary from the raw Crunchbase text using a summarization prompt template
- **Send to Google Sheets**: Adds the structured data and summary to a Google Sheet for team access and further processing
- **Persist to Disk**: Saves both raw and structured data files locally for archiving or further use
- **Webhook Notification**: Sends a structured payload with lead insights to a webhook endpoint (e.g., Slack, CRM, internal tools)

**Pre-conditions**
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

**How to customize this workflow to your needs**
- **LLM Prompt Customization**: Modify the extraction prompt to include additional fields like revenue, social links, or the leadership team, and adjust the summarization tone (e.g., executive summary, sales-focused snapshot, or marketing digest).
- **File Persistence**: Store raw markdown, extracted JSON, and summary text separately for audit/debug purposes.
- **Webhook Notification**: Connect to a CRM (e.g., HubSpot, Salesforce) via webhook to automatically create leads, or send Slack notifications to alert sales reps when a new high-potential company is discovered.
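A sketch of a JSON Schema you might hand to the structured extraction step, covering the fields this description names; the template's actual schema and prompt may differ.

```js
// Candidate schema for the OpenAI parsing step; field names mirror the
// fields listed above (company name, funding rounds, industry tags,
// location, founding year), but are otherwise assumptions.
const schema = {
  type: 'object',
  properties: {
    company_name: { type: 'string' },
    industry_tags: { type: 'array', items: { type: 'string' } },
    location: { type: 'string' },
    founding_year: { type: 'integer' },
    funding_rounds: {
      type: 'array',
      items: {
        type: 'object',
        properties: {
          round: { type: 'string' },  // e.g. "Series A"
          amount: { type: 'string' }, // kept as text, e.g. "$12M"
          date: { type: 'string' },
        },
      },
    },
  },
  required: ['company_name'],
};
return [{ json: { schema } }];
```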
by Trung Tran
**VoiceScribe AI: Telegram Audio Message Auto Transcription with OpenAI Whisper**

> Automatically transcribe Telegram voice messages and store them as structured logs in Google Sheets, while backing up the audio in Google Drive.

**Who's it for**
Journalists, content creators, or busy professionals who often record voice memos or short interviews on the go, and anyone who wants to turn voice recordings into searchable, structured notes.

**How it works / What it does**
1. A user sends a voice message to a Telegram bot.
2. n8n checks whether the message is an audio voice note.
3. If valid, it downloads the audio file and:
   - Transcribes it using OpenAI Whisper (or your LLM of choice).
   - Uploads the original audio to Google Drive for safekeeping.
4. The transcript and audio metadata are merged.
5. The workflow:
   - Logs the data into a Google Sheet.
   - Sends a formatted confirmation message to the user via Telegram.
6. If the input is not audio, the bot politely informs the user that only voice messages are accepted.

**Features**
- Accepts only Telegram voice messages.
- Transcribes via OpenAI Whisper.
- Logs DateTime, Duration, Transcript, and Audio URL to Google Sheets.
- Sends a user feedback message via Telegram with download and transcript links.

**How to set up**
Prerequisites:
- Telegram Bot connected to n8n (via Telegram Trigger)
- Google Drive & Google Sheets credentials configured
- OpenAI or Whisper API credentials (for transcription)

Steps:
1. **Telegram Trigger**: Start the flow when a new message is sent to your bot.
2. **Check Message Type**: Use a conditional node to confirm it's a voice message.
3. **Download Voice Message**: Download the .oga file from Telegram.
4. **Transcribe Audio**: Send the binary audio to OpenAI Whisper or your transcription service.
5. **Upload to Google Drive**: Back up the original audio file.
6. **Merge Outputs**: Combine the transcription with the Drive metadata.
7. **Transform to Row Format**: Prepare structured JSON for Google Sheets (see the sketch at the end of this description).
8. **Append to Google Sheet**: Store the transcript log (DateTime, Duration, Transcript, AudioURL).
9. **Send Confirmation to User**: Inform the user via Telegram with their transcript and download link.
10. **Unsupported Message Handler**: Reply to users who send non-audio messages.

**Example Output in Google Sheet**

| DateTime | Duration | Transcript | AudioURL |
|----------|----------|------------|----------|
| 2025-08-07T13:12:19Z | 27 | The Outlet Activation project is ... | https://drive.google.com/uc?id=xxxx&export=download |

**How to customize the workflow**
- Swap Whisper for Deepgram, AssemblyAI, or another provider.
- Add speaker name detection or prompt-based tagging via GPT.
- Route transcripts into Notion, Airtable, or CRM systems.
- Add multi-language support or summarization steps.

**Requirements**

| Component | Required |
|-----------|----------|
| Telegram API | Yes |
| Google Drive API | Yes |
| Google Sheets API | Yes |
| OpenAI Whisper API | Yes |
| n8n Cloud or Self-hosted | Yes |

Created with ❤️ using n8n
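A minimal sketch of the Transform to Row Format step as an n8n Code node, assuming the merged item carries the Whisper text, the Telegram voice duration, and the Drive file id; all field names here are assumptions about the merge output.

```js
// Shape one sheet row per merged item (transcript + Drive metadata).
return $input.all().map((item) => ({
  json: {
    DateTime: new Date().toISOString(),
    Duration: item.json.duration ?? 0, // seconds, from the Telegram voice note
    Transcript: item.json.text ?? '',  // Whisper transcription output
    AudioURL: `https://drive.google.com/uc?id=${item.json.driveFileId}&export=download`,
  },
}));
```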
by Ranjan Dailata
**Who this is for**
The TrustPilot SaaS Product Review Tracker is designed for product managers, SaaS growth teams, customer experience analysts, and marketing teams who need to extract, summarize, and analyze customer feedback at scale from TrustPilot. This workflow is tailored for:
- **Product Managers**: monitoring feedback to drive feature improvements
- **Customer Support & CX Teams**: identifying sentiment trends or recurring issues
- **Marketing & Growth Teams**: leveraging testimonials and market perception
- **Data Analysts**: tracking competitor reviews and benchmarking
- **Founders & Executives**: wanting aggregated insights into customer satisfaction

**What problem is this workflow solving?**
Manually monitoring, extracting, and summarizing TrustPilot reviews is time-consuming, fragmented, and hard to scale across multiple SaaS products. This workflow automates that process, from unlocking the data behind anti-bot layers to summarizing and storing customer insights, enabling teams to respond faster, spot trends, and make data-backed product decisions. It solves:
- The challenge of scraping protected review data (using Bright Data Web Unlocker)
- The need for structured insights from unstructured review content
- The lack of automated delivery to storage and alerting systems like Google Sheets or webhooks

**What this workflow does**
- **Extract TrustPilot Reviews**: Uses Bright Data Web Unlocker to bypass anti-bot protections and pull markdown-based content from product review pages
- **Convert Markdown to Text**: Leverages a basic LLM chain to clean and convert scraped markdown into plain text
- **Structured Information Extraction**: Uses OpenAI GPT-4o via the Information Extractor node to extract fields like product name, review date, rating, and reviewer sentiment
- **Summarization Chain**: Generates concise summaries of overall review sentiment and themes using OpenAI
- **Merge & Aggregate Output**: Consolidates individual extracted records into a structured batch output
- **Outbound Data Delivery**:
  - Google Sheets: appends summary and structured review data
  - Write to Disk: persists raw and processed content locally
  - Webhook Notification: sends a real-time alert with summarized insights

**Pre-conditions**
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.
**How to customize this workflow to your needs**
- **Target Multiple Products**: Configure the Bright Data input URL dynamically for different SaaS product TrustPilot URLs, or loop through a product list and run parallel jobs for each.
- **Customize Extraction Fields**: Update the prompt in the Information Extractor to include the review title, the response from the company, specific feature mentions, or competitor references.
- **Tune Summarization Style**: Change the tone (executive summary, customer pain-point focus, or marketing quote extract), or enable sentiment aggregation (e.g., 30% negative, 50% neutral, 20% positive; see the sketch below).
- **Expand Output Destinations**: Push to Notion, Airtable, or CRM tools using additional webhook nodes, generate and send PDF reports (via PDFKit or HTML-to-PDF nodes), or schedule summary digests via Gmail or Slack.
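A minimal sketch of sentiment aggregation as an n8n Code node, assuming each extracted review item carries a `sentiment` field with values like "positive" / "neutral" / "negative"; adapt the field name to your extractor's actual output.

```js
// Aggregate per-review sentiment into the percentage split mentioned above.
const reviews = $input.all().map((i) => i.json);
const counts = { positive: 0, neutral: 0, negative: 0 };
for (const r of reviews) {
  const s = String(r.sentiment ?? 'neutral').toLowerCase();
  if (s in counts) counts[s] += 1;
}
const total = reviews.length || 1;
const percentages = Object.fromEntries(
  Object.entries(counts).map(([k, v]) => [k, Math.round((v / total) * 100)])
);
return [{ json: { totalReviews: reviews.length, ...percentages } }];
```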