by Eduard
This workflow demonstrates three distinct approaches to chaining LLM operations using Claude 3.7 Sonnet. Connect to any section to experience the differences in implementation, performance, and capabilities.

What you'll find:

1️⃣ **Naive Sequential Chaining**: The simplest but least efficient approach, connecting LLM nodes in a direct sequence. Easy for beginners to set up, but it becomes unwieldy and slow as your chain grows.

2️⃣ **Agent-Based Processing with Memory**: Process a list of instructions through a single AI Agent that maintains conversation history. This structured approach provides better context management while keeping your workflow organized.

3️⃣ **Parallel Processing for Maximum Speed**: Split your prompts and process them simultaneously for much faster results. Ideal when you need to run multiple independent tasks without shared context. (A plain-Python sketch of this trade-off follows after the setup list.)

Setup instructions:

- **API credentials**: Configure your Anthropic API key in the credentials manager. This workflow uses Claude 3.7 Sonnet, but you can change the model in each Anthropic Chat Model node, or pick an entirely different LLM.
- **For cloud users**: If you use the parallel processing method (section 3), replace `{{ $env.WEBHOOK_URL }}` in the "LLM steps - parallel" HTTP Request node with your n8n instance URL.
- **Test data**: The workflow fetches content from the n8n blog by default. You can modify this part to use different content or another data source.
- **Customization**: Each section contains a set of example prompts. Modify the "Initial prompts" nodes to change the questions asked to the LLM.

Compare these methods to understand the trade-offs between simplicity, speed, and context management in your AI workflows! Follow me on LinkedIn for more tips on AI automation and n8n workflows!
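To make the trade-off concrete outside of n8n, here is a minimal Python sketch of approaches 1 and 3 against Anthropic's Messages API. It assumes the httpx library, an ANTHROPIC_API_KEY environment variable, and a model ID you should verify; it illustrates the pattern and is not part of the workflow itself.

```python
import asyncio
import os

import httpx

API_URL = "https://api.anthropic.com/v1/messages"
HEADERS = {
    "x-api-key": os.environ["ANTHROPIC_API_KEY"],
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
}
PROMPTS = ["Summarize the article.", "List key takeaways.", "Suggest a title."]


async def ask(client: httpx.AsyncClient, prompt: str) -> str:
    resp = await client.post(API_URL, headers=HEADERS, json={
        "model": "claude-3-7-sonnet-20250219",  # assumed model ID; adjust as needed
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
    resp.raise_for_status()
    return resp.json()["content"][0]["text"]


async def sequential() -> list[str]:
    # Approach 1: each call waits for the previous one (simple but slow).
    async with httpx.AsyncClient(timeout=60) as client:
        return [await ask(client, p) for p in PROMPTS]


async def parallel() -> list[str]:
    # Approach 3: independent prompts run concurrently (fastest, no shared context).
    async with httpx.AsyncClient(timeout=60) as client:
        return await asyncio.gather(*(ask(client, p) for p in PROMPTS))


if __name__ == "__main__":
    print(asyncio.run(parallel()))
```

With three independent prompts, the parallel variant finishes in roughly the time of the slowest single call, while the sequential variant pays for all three in a row, which is exactly the difference you will observe between sections 1 and 3 of the workflow.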
by Tiartyos
Voice Cloning Workflow - Zyphra Zonos API

**Who is this for?** This workflow is designed for developers, content creators, and businesses looking to automate high-quality voice synthesis using AI voice-cloning technology.

**What problem does this solve?** It automates generating natural-sounding speech from text using a sample voice file, eliminating manual voice recording and providing consistent voice output for applications like audiobooks, virtual assistants, or content localization.

**What this workflow does:** The workflow receives text and voice-cloning parameters via webhook, reads a sample voice file from your storage, sends the data to Zyphra's Zonos API for voice synthesis, and saves the generated audio file to your specified output location.

**Prerequisites** - you'll need:
- API key from Zyphra (obtain from https://playground.zyphra.com/settings/api-keys)
- Account registration at https://playground.zyphra.com
- Sample voice file stored on accessible disk/cloud storage
- n8n instance running with webhook capabilities

**Setup**
1. Configure your Zyphra API key in the "Call Zyphra Clone API" node under Header Parameters (Name: X-API-Key, Value: your-api-key)
2. Ensure your sample voice files are accessible at the paths you'll specify
3. Test that the webhook endpoint is accessible

**Supported audio formats** - the API supports multiple output formats through the mime_type parameter:
- **WebM** (default) - audio/webm
- **Ogg** - audio/ogg
- **WAV** - audio/wav
- **MP3** - audio/mp3 or audio/mpeg
- **MP4/AAC** - audio/mp4 or audio/aac

**Usage example** (see the Python client after this section)
Endpoint: POST http://localhost:5678/webhook-test/voice-clone
Headers: Content-Type: application/json
Request body:
```json
{
  "text": "Hello there! This voice sounds just like the sample!",
  "speaking_rate": 18,
  "sample_voice_path": "/data/output/sampleVoice.wav",
  "output_path": "/data/output/",
  "language_iso_code": "en-us",
  "mime_type": "audio/wav",
  "model": "zonos-v0.1-transformer",
  "emotion": {
    "happiness": 0.8,
    "neutral": 0.3,
    "sadness": 0.05,
    "disgust": 0.05,
    "fear": 0.05,
    "surprise": 0.05,
    "anger": 0.05,
    "other": 0.5
  }
}
```

**Parameters**

Required:
- **text**: Text to synthesize into speech
- **sample_voice_path**: Path to your voice sample file
- **output_path**: Directory where the generated audio will be saved

Optional (with defaults):
- **speaking_rate**: 15 - Speech speed
- **language_iso_code**: "en-us" - Language code
- **mime_type**: "audio/wav" - Output audio format
- **model**: "zonos-v0.1-transformer" - AI model to use
- **emotion**: Object with emotion levels (0-1 scale)
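For convenience, here is a minimal Python client for the webhook above, using the exact request body from the usage example. It assumes the default local n8n URL and that the workflow replies once the audio has been written to output_path.

```python
import requests

payload = {
    "text": "Hello there! This voice sounds just like the sample!",
    "speaking_rate": 18,
    "sample_voice_path": "/data/output/sampleVoice.wav",
    "output_path": "/data/output/",
    "language_iso_code": "en-us",
    "mime_type": "audio/wav",
    "model": "zonos-v0.1-transformer",
    "emotion": {"happiness": 0.8, "neutral": 0.3, "sadness": 0.05,
                "disgust": 0.05, "fear": 0.05, "surprise": 0.05,
                "anger": 0.05, "other": 0.5},
}

resp = requests.post(
    "http://localhost:5678/webhook-test/voice-clone",
    json=payload,
    timeout=120,  # synthesis can take a while for longer texts
)
resp.raise_for_status()
# The workflow itself saves the generated audio under output_path;
# the response body here is whatever your Respond to Webhook node returns.
print(resp.status_code, resp.text)
```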
by Wikus Bergh
**Who is this for?** This template is ideal for n8n administrators, automation engineers, and DevOps teams who want to maintain bidirectional synchronization between their n8n workflows and GitHub repositories. It helps teams keep their workflow backups up to date and ensures consistency between their n8n instance and version control system.

**What problem is this workflow solving?** Managing workflow versions across n8n and GitHub can become complex when changes happen in both places. This workflow solves that by automatically synchronizing workflows bidirectionally, ensuring that the most recent version is always available in both systems without manual intervention or version conflicts.

**What this workflow does:**
- Runs on a weekly schedule (every Monday) to check for synchronization needs.
- Fetches all workflows from your n8n instance and compares them with GitHub repository files.
- Identifies workflows that exist only in n8n and uploads them to GitHub as JSON backups.
- Identifies workflows that exist only in GitHub and creates them in your n8n instance.
- For workflows that exist in both places, compares timestamps and syncs the most recent version (see the sketch after this section):
  - If the n8n version is newer → updates GitHub with the latest workflow
  - If the GitHub version is newer → updates n8n with the latest workflow
- Automatically handles file naming, encoding/decoding, and commit messages with timestamps.

**Setup:**
1. Connect GitHub: Configure GitHub API credentials in the GitHub nodes. Note: use a GitHub Personal Access Token (classic) with repo permissions to read and write workflow files.
2. Connect n8n API: Provide your n8n API credentials in the n8n nodes. Check this doc.
3. Configure GitHub details in the Set GitHub Details node:
   - github_account_name: Your GitHub username or organization
   - github_repo_name: The repository name where workflows should be stored
   - repo_workflows_path: The folder path in your repo (e.g., workflows or n8n-workflows)
4. Adjust the schedule: Modify the Schedule Trigger if you want a different sync frequency (currently set to weekly on Mondays).
5. Test the workflow: Run it manually first to ensure all connections and permissions are working correctly.

**How to customize this workflow to your needs:**
- **Change sync frequency**: Modify the Schedule Trigger to run daily, hourly, or on-demand.
- **Add filtering**: Extend the Filter node to exclude certain workflows (e.g., test workflows, templates).
- **Add notifications**: Insert Slack, email, or webhook notifications to report sync results.
- **Implement conflict resolution**: Add custom logic for handling workflows with the same timestamp.
- **Add workflow validation**: Include checks to validate workflow JSON before syncing.
- **Branch management**: Modify to sync to different branches or create pull requests instead of direct commits.
- **Backup retention**: Add logic to maintain multiple versions or archive old workflows.

**Key features:**
- **Bidirectional sync**: Handles changes from both n8n and GitHub
- **Timestamp-based conflict resolution**: Always keeps the most recent version
- **Automatic file naming**: Converts workflow names to valid filenames
- **Base64 encoding/decoding**: Properly handles JSON workflow data
- **Comprehensive comparison**: Uses dataset comparison to identify differences
- **Automated commits**: Includes timestamps in commit messages for traceability

This automated synchronization workflow provides a robust backup and version-control solution for n8n workflows, ensuring your automation assets are always safely stored and consistently available across environments.
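The timestamp comparison at the heart of the sync fits in a few lines. This is an illustrative Python sketch of the decision rule only; function and field names are assumptions, not the actual node logic.

```python
from datetime import datetime


def sync_decision(n8n_updated_at: str | None, github_committed_at: str | None) -> str:
    """Return which side should be updated for one workflow."""
    if not n8n_updated_at and not github_committed_at:
        return "in_sync"                # nothing to compare
    if n8n_updated_at and not github_committed_at:
        return "upload_to_github"       # exists only in n8n
    if github_committed_at and not n8n_updated_at:
        return "create_in_n8n"          # exists only in GitHub
    n8n_ts = datetime.fromisoformat(n8n_updated_at)
    gh_ts = datetime.fromisoformat(github_committed_at)
    if n8n_ts > gh_ts:
        return "update_github"          # n8n version is newer
    if gh_ts > n8n_ts:
        return "update_n8n"             # GitHub version is newer
    return "in_sync"                    # identical timestamps


print(sync_decision("2024-06-03T10:00:00", "2024-06-01T09:00:00"))  # update_github
```

The "same timestamp" branch is where the template currently does nothing, which is exactly the point where the "Implement conflict resolution" customization above would plug in.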
by Joseph LePage
This n8n workflow template is designed to integrate a DeepSeek AI agent with Telegram, incorporating long-term memory capabilities for personalized and context-aware responses. Here's a detailed breakdown:

**Core Features**
- **Telegram Integration**: Uses a webhook to receive messages from Telegram users; validates user identity and message content before processing.
- **AI-Powered Responses**: Employs DeepSeek's AI models for conversational interactions, with memory capabilities to personalize responses based on past interactions.
- **Error Handling**: Sends an error message if the input cannot be processed.

**Model Options** 🧠
- **DeepSeek-V3 Chat**: Handles general conversational tasks.
- **DeepSeek-R1 Reasoning**: Provides advanced reasoning capabilities for complex queries.
- **Memory Buffer Window**: Maintains session context for ongoing conversations.

**Quick Setup** 🛠️
1. Telegram webhook configuration: Set up a webhook using the Telegram Bot API:
   https://api.telegram.org/bot{my_bot_token}/setWebhook?url={url_to_send_updates_to}
   Replace {my_bot_token} with your bot's token and {url_to_send_updates_to} with your n8n webhook URL. Verify the webhook setup using:
   https://api.telegram.org/bot{my_bot_token}/getWebhookInfo
   (A small helper script for both calls follows after this section.)
2. DeepSeek API configuration: Base URL: https://api.deepseek.com. Obtain your API key from the DeepSeek platform.

**Implementation Details** 🔧
- **User Validation**: The workflow validates the user's first name, last name, and ID using data from incoming Telegram messages. Only authorized users proceed to the next steps.
- **Message Routing**: Routes messages based on their type (text, audio, or image) using a switch node, ensuring appropriate handling for each message format.
- **AI Agent Interaction**: Processes text input using the DeepSeek-V3 or DeepSeek-R1 models. Customizable system prompts define the AI's behavior and rules, ensuring user-centric and context-aware responses.
- **Memory Management**: Retrieves long-term memories stored in Google Docs to enhance personalization, and saves new memories based on user interactions, ensuring continuity across sessions.
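The two Bot API calls from the setup section can be scripted once instead of pasted into a browser. A small helper sketch, assuming the requests library and a BOT_TOKEN environment variable; the n8n webhook URL is a placeholder to replace with your own.

```python
import os

import requests

BOT_TOKEN = os.environ["BOT_TOKEN"]
N8N_WEBHOOK_URL = "https://your-n8n-instance.com/webhook/telegram-bot"  # placeholder

base = f"https://api.telegram.org/bot{BOT_TOKEN}"

# Register the webhook so Telegram forwards updates to n8n.
print(requests.get(f"{base}/setWebhook", params={"url": N8N_WEBHOOK_URL}, timeout=30).json())

# Verify the registration; "url" in the response should match N8N_WEBHOOK_URL.
print(requests.get(f"{base}/getWebhookInfo", timeout=30).json())
```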
by Ranjan Dailata
**Notice**: Community nodes can only be installed on self-hosted instances of n8n.

**Who this is for**: This workflow automates the real-time extraction of job descriptions and salary information from job listing pages using Bright Data MCP and analyzes content using OpenAI GPT-4o mini.

This workflow is ideal for:
- **Recruiters & HR Tech Startups**: Automate job data collection from public listings
- **Market Intelligence Teams**: Analyze compensation trends across companies or geographies
- **Job Boards & Aggregators**: Power search results with structured, enriched listings
- **AI Workflow Builders**: Extend to other career platforms or automate resume-job match analysis
- **Analysts & Researchers**: Track hiring signals and salary benchmarks in real time

**What problem is this workflow solving?** Traditional scraping of job portals can be challenging due to cluttered content, anti-scraping measures, and inconsistent formatting. Manually analyzing salary ranges and job descriptions is tedious and error-prone.

This workflow solves the problem by:
- Simulating user behavior using the Bright Data MCP Client to bypass anti-scraping systems
- Extracting structured, clean job data in Markdown format
- Using OpenAI GPT-4o mini to analyze and extract precise salary details and refined job descriptions
- Merging and formatting the result for easy consumption
- Delivering the final output via webhook, Google Sheets, or the file system

**What this workflow does (components & flow)**

Input nodes:
- job_search_url: The job listing or search result URL
- job_role: The title or role being searched for (used in logging/formatting)

MCP Client operations:
- MCP Salary Data Extractor: Simulates browser behavior and scrapes salary-related content (if available)
- MCP Job Description Extractor: Extracts the full job description as structured Markdown content

OpenAI GPT-4o mini nodes:
- Salary Information Extractor: Uses GPT-4o mini to detect, clean, and standardize salary range data (if any)
- Job Description Refiner: Extracts role responsibilities, qualifications, and benefits from unstructured text
- Company Information Extractor: Uses Bright Data MCP and GPT-4o mini to extract the company information

Merge node: Combines the refined job description and extracted salary information into a unified JSON response object.

Aggregate node: Aggregates the job description and salary information into a single JSON response object.

Final output handling: the output is delivered in three different formats depending on your downstream needs:
- **Save to disk**: Output stored with a filename that includes the timestamp and job role
- **Google Sheet update**: Adds a new row with job role, salary, summary, and link
- **Webhook notification**: Pushes the merged response to an external system

**Pre-conditions**
- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and the setup described in the Setup section below.
- You need an OpenAI API key.
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install the n8n-nodes-mcp community node

**Setup**
1. Set up n8n locally with MCP servers by following the n8n-nodes-mcp instructions.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel.
5. In n8n, configure the OpenAI account credentials.
6. In n8n, configure the MCP Client (STDIO) credentials for the Bright Data MCP Server, setting the Environments field to API_TOKEN=<your-token> with your Bright Data API token. (A sketch of the equivalent stdio handshake follows at the end of this section.)

**How to customize this workflow to your needs**

Modify the input source:
- Change job_search_url to point to any job board or aggregator
- Customize job_role to reflect the type of jobs being analyzed

Tweak LLM prompts (optional):
- Refine the GPT-4o mini prompts to extract additional fields like benefits, tech stacks, or remote eligibility

Change the output format:
- Customize the merged object to output JSON, CSV, or Markdown based on downstream needs
- Add additional destinations (e.g., Slack, Airtable, Notion) via n8n nodes
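For context, the MCP Client (STDIO) credential essentially spawns the Bright Data MCP server as a subprocess and speaks JSON-RPC over stdio. Here is a minimal sketch of that handshake using the official mcp Python SDK, assuming Node.js and the @brightdata/mcp package are available; the printed tool names depend on what the server advertises.

```python
import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Same command/env pairing the n8n credential uses under the hood.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@brightdata/mcp"],
    env={"API_TOKEN": os.environ["BRIGHT_DATA_API_TOKEN"]},  # value from API_TOKEN=<your-token>
)


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            # The scraping tools listed here are what the workflow's
            # MCP Client nodes invoke for job pages.
            print([t.name for t in tools.tools])


asyncio.run(main())
```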
by Mark Shcherbakov
**Video Guide**: I prepared a comprehensive guide detailing how to create a Smart Agent that automates meeting task management by analyzing transcripts, generating tasks in Airtable, and scheduling follow-ups when necessary. Youtube Link

**Who is this for?** This workflow is ideal for project managers, team leaders, and business owners looking to enhance productivity during meetings. It is particularly helpful for those who need to convert discussions into actionable items swiftly and effectively.

**What problem does this workflow solve?** Managing action items from meetings can often lead to missed tasks and poor follow-up. This automation alleviates that issue by automatically generating tasks from meeting transcripts, keeping everyone informed about their responsibilities and streamlining communication.

**What this workflow does:** The workflow leverages n8n to create a Smart Agent that listens for completed meeting transcripts, processes them using AI, and generates tasks in Airtable. Key functionalities include:
- Capturing completed meeting events through webhooks.
- Extracting relevant meeting details such as transcripts and participants using API calls.
- Generating structured tasks from meeting discussions and sending notifications to clients.

In particular:
- Webhook integration: Listens for meeting completion events to trigger subsequent actions.
- API requests for data: Pulls necessary details like transcripts and participant information from Fireflies.
- Task and notification generation: Automatically creates tasks in Airtable and notifies clients of their responsibilities.

**Setup (n8n workflow)**
1. Configure the webhook: Set up a webhook to capture meeting completion events and integrate it with Fireflies.
2. Retrieve meeting content: Use GraphQL API requests to extract meeting details and transcripts, ensuring appropriate authentication through Bearer tokens (see the sketch after this section).
3. AI processing setup: Define system messages for AI tasks and configure connections to the AI chat model (e.g., OpenAI's GPT) to process transcripts.
4. Task creation logic: Create structured tasks based on AI output, ensuring necessary details are captured and records are created in Airtable.
5. Client notifications: Use an email node to notify clients about their tasks, ensuring communications are client-specific.
6. Scheduling follow-up calls: Set up Google Calendar events if follow-up meetings are required, populating details from the original meeting context.
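As an illustration of step 2, the Fireflies transcript fetch is a single GraphQL POST with a Bearer token. The endpoint below is Fireflies' public GraphQL API, but the specific fields queried are assumptions to verify against their schema documentation.

```python
import os

import requests

# Illustrative query; confirm field names against the Fireflies GraphQL docs.
query = """
query Transcript($id: String!) {
  transcript(id: $id) {
    title
    sentences { speaker_name text }
  }
}
"""

resp = requests.post(
    "https://api.fireflies.ai/graphql",
    headers={"Authorization": f"Bearer {os.environ['FIREFLIES_API_KEY']}"},
    json={"query": query, "variables": {"id": "your-meeting-id"}},  # placeholder ID
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["transcript"]["title"])
```

In the workflow this is done by an HTTP Request node; the Bearer token lives in the node's header credentials rather than in code.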
by Jay Emp0
🤖 **MCP Personal Assistant Workflow**

**Description**: This workflow integrates multiple productivity tools into a single AI-powered assistant using n8n, acting as a centralized control hub to receive and execute tasks across Google Calendar, Gmail, Google Drive, LinkedIn, Twitter, and more.

✅ **Key Capabilities**
- **AI Agent + Tool Use**: Built using n8n's AI Agent and MCP system, enabling intelligent multi-step reasoning.
- **Tool Integration**:
  - Google Calendar: schedule, update, delete events
  - Gmail: search, draft, send emails
  - Google Drive: manage files and folders
  - LinkedIn & Twitter: post updates, send DMs
  - Utility tools: fetch date/time, search URLs
- **Discord Input**: Accepts prompts via n8n_discord_trigger_bot (repo link)

🛠 **Setup Instructions**
1. Timezone configuration: Go to Settings > Default Timezone in n8n and set it to your local timezone (e.g., Asia/Jakarta). Ensure all Date & Time nodes explicitly use the same zone to avoid UTC-related bugs.
2. Tool authentication: Replace all OAuth credentials for Gmail, Google Drive, Google Calendar, Twitter, and LinkedIn. Use your own accounts when copying this workflow.
3. Platform adaptability: While designed for Discord, you can replace the Discord trigger with any other chat or webhook service, e.g., Telegram, Slack, WhatsApp Webhook, n8n Form Trigger, etc.

📦 **Strengths**
- Great for document retrieval, email summarization, calendar scheduling, and social posting.
- Reduces the need for tab-switching across multiple platforms.
- Tested with a comprehensive checklist across categories like Calendar, Gmail, Google Drive, Twitter, LinkedIn, utility tools, and cross-tool actions. (Refer to the discordGPT prompt checklist for prompt coverage.)

⚠️ **Limitations**
- ❌ Binary uploads: AI agents and the MCP server currently struggle with binary payloads. Uploading files to Gmail, Google Drive, or LinkedIn may fail due to format serialization issues. Binary operations (upload/post) are under development and will be fixed in future iterations.
- ❌ Date bugs: If timezone settings are incorrect, event times may default to UTC, leading to misaligned calendar events.

🔬 **Testing**: Use the provided prompt checklist for full coverage of:
- ✅ Core feature flows
- ✅ Edge cases (e.g., invalid dates, nonexistent users)
- ✅ Cross-tool chains (e.g., Google Drive → Gmail → LinkedIn)

✅ **MCP Assistant Test Prompt Checklist**

📅 Google Calendar
- [x] "Schedule a meeting with Alice tomorrow at 10am, and send an invite to alice@wonderland.com"
- [x] "Create an event called 'Project Sync' on Friday at 3pm with Bob and Charlie."
- [x] "Update the time of my call with James to next Monday at 2pm."
- [x] "Delete my meeting with Marketing next Wednesday."
- [x] "What is my schedule tomorrow?"

📧 Gmail
- [x] "Show me unread emails from this week."
- [x] "Search for emails with subject: invoice"
- [x] "Reply to the latest email from john@company.com saying 'Thanks, noted!'"
- [x] "Draft an email to info@a16z.com with subject 'Emp0 Fundraising' and draft the body of the email with an investment opportunity in Emp0; scrape this site https://Emp0.com to get to know more about emp0.com"
- [x] "Send an email to hi@cursor.com with subject 'Feature request' and cc sales@cursor.com"
- [ ] "Send an email to recruiting@openai.com, write about how you like their product and want to apply for a job there, and attach my latest CV from Google Drive"

🗂 Google Drive
- [ ] "Upload the PDF you just sent me to my Google Drive."
- [x] "Create a folder called 'July Reports' inside Emp0 shared drive."
- [x] "Move the file named 'Q2_Review.pdf' to 'Reports/2024/Q2'."
- [x] "Share the folder 'Investor Decks' with info@a16z.com as viewer."
- [ ] "Download the file 'Wayne_Li_CV.pdf' and attach it in Discord."
- [x] "Search for a file named 'Invoice May' in my Google Drive."

🖼 LinkedIn
- [x] "Think of a random and inspiring quote. Post a text update on LinkedIn with the quote and end with a question so people will answer and increase engagement"
- [ ] "Post this Google Drive image to LinkedIn with the caption: 'Team offsite snapshots!'"
- [x] "Summarize the contents of this workflow and post it on LinkedIn with the original url https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/"

🐦 Twitter
- [x] "Tweet: 'AI is eating operations. Fast.'"
- [x] "Send a DM to @founderguy: 'Would love to connect on what you’re building.'"
- [x] "Search Twitter for keyword: 'founder advice'"

🌐 Utilities
- [x] "What time is it now?"
- [ ] "Download this PDF: https://ontheline.trincoll.edu/images/bookdown/sample-local-pdf.pdf"
- [x] "Search this URL and summarize important tech updates today: https://techcrunch.com/feed/"

📎 Discord Attachments
- [ ] "Take the image I just uploaded and post it to LinkedIn."
- [ ] "Get the file from my last message and upload it to Google Drive."

🧪 Edge Cases
- [x] "Schedule a meeting on Feb 30."
- [x] "Send a DM to @user_that_does_not_exist"
- [ ] "Download a 50MB PDF and post it to LinkedIn"
- [x] "Get the latest tweet from my timeline and email it to myself."

🔗 Cross-tool Flows
- [ ] "Get the latest image from my Google Drive and post it on LinkedIn with the caption 'Another milestone hit!'"
- [ ] "Find the latest PDF report in Google Drive and email it to investor@vc.com."
- [ ] "Download an image from this link and upload it to my Google Drive: https://example.com/image.png"
- [ ] "Get the most recent attachment from my inbox and upload it to Google Drive."

Run each of these in isolated test cases. For cross-tool flows, verify binary serialization integrity.

🧠 **Why Use This Workflow?** This is an always-on personal assistant that can:
- Process natural language input
- Handle multi-step logic
- Execute commands across 6+ platforms
- Be extended with more tools and memory

If you want to interact with all your work tools from a single prompt, this is your base to start.

📎 **Repo & Credits**: Discord bot trigger: n8n_discord_trigger_bot. Creator: Jay (Emp₀)
by Lucas Peyrin
**How it works**

This template is a powerful, reusable utility for managing stateful, long-running processes. It allows a main workflow to be paused indefinitely at "checkpoints" and then be resumed by external, asynchronous events. This pattern is essential for complex automations; I often call it the "Async Portal" or "Teleport" pattern.

The template consists of two distinct parts:
- **The Main Process (top flow)**: This represents your primary business logic. It starts, performs some actions, and then calls the Portal to register itself before pausing at a Wait node (a "Checkpoint").
- **The Async Portal (bottom flow)**: This is the state-management engine. It uses Workflow Static Data as persistent memory to keep track of all paused processes. When an external event (like a new chat message or an approval webhook) comes in with a specific session_id, the Portal looks up the corresponding paused workflow and "teleports" the new data to it by calling its unique resume_url. (A plain-Python reduction of this bookkeeping follows at the end of this section.)

This architecture allows you to build sophisticated systems where state is managed centrally while your main business logic remains clean and easy to follow.

**When to use this pattern**: this advanced utility is ideal for:
- **Chatbots**: Maintaining conversation history and context across multiple user messages.
- **Human-in-the-loop processes**: Pausing a workflow to wait for a manager's approval from an email link or a form submission.
- **Multi-day sequences**: Building user onboarding flows or drip campaigns that need to pause for hours or days between steps.
- Any process that needs to wait for an **unpredictable external event** without timing out.

**Set up steps**: this template is a utility designed to be copied into your own projects; the workflow itself is a live demonstration of how to use it.
1. Copy the Async Portal: In your own project, copy the entire Async Portal (the bottom flow, starting with the "A. Entry: Receive Session Info" trigger) into your workflow. This will be your state-management engine.
2. Register your main process: At the beginning of your main workflow, use an Execute Workflow node to call the Portal's trigger. You must pass it a unique session_id for the process and the resume_url from a Wait node.
3. Add checkpoints: Place Wait nodes in your main workflow wherever you need the process to pause and wait for an external event.
4. Trigger the Portal: Configure your external triggers (e.g., your chatbot's webhook) to call the Portal's entry trigger, not your main workflow's trigger. You must pass the same session_id so the Portal knows which paused process to resume.

To see it in action, follow the detailed instructions in the "How to Test This Workflow" sticky note on the canvas.
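Reduced to plain Python, the Portal's bookkeeping is a session map plus a POST to the stored resume_url. This sketch is illustrative only: n8n's Workflow Static Data plays the role of the SESSIONS dict, and the function names are not taken from the workflow.

```python
import requests

SESSIONS: dict[str, str] = {}  # session_id -> resume_url of the paused workflow


def register(session_id: str, resume_url: str) -> None:
    # Called when the main process reaches a checkpoint (Wait node):
    # remember where this session can be resumed.
    SESSIONS[session_id] = resume_url


def handle_external_event(session_id: str, payload: dict) -> None:
    # Called by the external trigger (chat message, approval webhook, ...).
    resume_url = SESSIONS.pop(session_id, None)
    if resume_url is None:
        raise KeyError(f"no paused process for session {session_id!r}")
    # "Teleport" the event data into the paused workflow by hitting its resume URL.
    requests.post(resume_url, json=payload, timeout=30).raise_for_status()
```

The design choice worth noting: because the map is keyed by session_id, the external trigger never needs to know which Wait node a process is parked at; it only needs the session identifier.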
by David Ashby
Complete MCP server exposing 2 CarbonDoomsDay API operations to AI agents.

⚡ **Quick Setup**

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add CarbonDoomsDay credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 **How it Works**

This workflow converts the CarbonDoomsDay API into an MCP-compatible interface for AI agents.
- MCP Trigger: Serves as your server endpoint for AI agent requests
- HTTP Request Nodes: Handle API calls to https://api.carbondoomsday.com/api
- AI Expressions: Automatically populate parameters via $fromAI() placeholders
- Native Integration: Returns responses directly to the AI agent

📋 **Available Operations (2 total)**

🔧 Co2 (2 endpoints), with a quick Python sanity check after this section:
- GET /co2/: List CO2 measurements (paginated)
- GET /co2/{date}/: Get a CO2 measurement by date

These endpoints serve CO2 measurements from the Mauna Loa observatory. This data is made available through the good work of the people at the Mauna Loa observatory. Their release notes say: "These data are made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights." The API currently scrapes the following sources:
- co2_mlo_weekly.csv: https://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_mlo_weekly.csv
- co2_mlo_surface-insitu_1_ccgg_DailyData.txt: ftp://aftp.cmdl.noaa.gov/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_DailyData.txt
- weekly_mlo.csv: http://scrippsco2.ucsd.edu/sites/default/files/data/in_situ_co2/weekly_mlo.csv

Daily CO2 measurements are available as far back as 1958. Learn about pagination via the 3rd-party documentation: http://www.django-rest-framework.org/api-guide/pagination/#pagenumberpagination

🤖 **AI Integration**

Parameter handling: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

Response format: Native CarbonDoomsDay API responses with full data structure
Error handling: Built-in n8n HTTP request error management

💡 **Usage Examples**

Connect this MCP server to any AI agent or workflow:
- Claude Desktop: Add the MCP server URL to your configuration
- Cursor: Add the MCP server SSE URL to your configuration
- Custom AI apps: Use the MCP URL as a tool endpoint
- API integration: Direct HTTP calls to MCP endpoints

✨ **Benefits**
- Zero setup: No parameter mapping or configuration needed
- AI-ready: Built-in $fromAI() expressions for all parameters
- Production-ready: Native n8n HTTP request handling and logging
- Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
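Before wiring up an agent, you can sanity-check the upstream API directly. A minimal sketch of the two wrapped operations, assuming the date path segment uses ISO YYYY-MM-DD format (an assumption to verify against the API).

```python
import requests

BASE = "https://api.carbondoomsday.com/api"

# Operation 1: paginated list of CO2 measurements.
latest_page = requests.get(f"{BASE}/co2/", timeout=30).json()

# Operation 2: a single measurement by date (date format assumed ISO).
one_day = requests.get(f"{BASE}/co2/2020-01-15/", timeout=30).json()

print(one_day)
```

If both calls return JSON, the MCP wrapper only needs the agent to fill in the {date} path parameter via $fromAI().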
by Ranjan Dailata
**Who this is for**: The TrustPilot SaaS Product Review Tracker is designed for product managers, SaaS growth teams, customer experience analysts, and marketing teams who need to extract, summarize, and analyze customer feedback at scale from TrustPilot.

This workflow is tailored for:
- **Product Managers**: Monitoring feedback to drive feature improvements
- **Customer Support & CX Teams**: Identifying sentiment trends or recurring issues
- **Marketing & Growth Teams**: Leveraging testimonials and market perception
- **Data Analysts**: Tracking competitor reviews and benchmarking
- **Founders & Executives**: Wanting aggregated insights into customer satisfaction

**What problem is this workflow solving?** Manually monitoring, extracting, and summarizing TrustPilot reviews is time-consuming, fragmented, and hard to scale across multiple SaaS products. This workflow automates that process, from unlocking the data behind anti-bot layers to summarizing and storing customer insights, enabling teams to respond faster, spot trends, and make data-backed product decisions.

This workflow solves:
- The challenge of scraping protected review data (using Bright Data Web Unlocker)
- The need for structured insights from unstructured review content
- The lack of automated delivery to storage and alerting systems like Google Sheets or webhooks

**What this workflow does**
1. Extract TrustPilot reviews: Uses Bright Data Web Unlocker to bypass anti-bot protections and pull markdown-based content from product review pages (see the request sketch after the setup section)
2. Convert Markdown to text: Leverages a basic LLM chain to clean and convert scraped markdown into plain text
3. Structured information extraction: Uses OpenAI GPT-4o via the Information Extractor node to extract fields like product name, review date, rating, and reviewer sentiment
4. Summarization chain: Generates concise summaries of overall review sentiment and themes using OpenAI
5. Merge & aggregate output: Consolidates individual extracted records into a structured batch output
6. Outbound data delivery:
   - Google Sheets: Appends summary and structured review data
   - Write to Disk: Persists raw and processed content locally
   - Webhook notification: Sends a real-time alert with summarized insights

**Pre-conditions**
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

**Setup**
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.
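For reference, the unlocker call the workflow performs boils down to one authenticated POST. A hedged sketch against Bright Data's request endpoint: the zone name is whatever you created above, and the data_format option for markdown output is an assumption to verify against the Web Unlocker API docs.

```python
import os

import requests

resp = requests.post(
    "https://api.brightdata.com/request",
    headers={"Authorization": f"Bearer {os.environ['BRIGHT_DATA_TOKEN']}"},
    json={
        "zone": "web_unlocker1",  # your Web Unlocker zone name from setup step 2
        "url": "https://www.trustpilot.com/review/example-saas.com",  # placeholder target
        "format": "raw",          # return the page body directly
        "data_format": "markdown",  # assumed option for markdown output
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.text[:500])  # markdown content that feeds the LLM chain
```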
**How to customize this workflow to your needs**

Target multiple products:
- Configure the Bright Data input URL dynamically for different SaaS product TrustPilot URLs
- Loop through a product list and run parallel jobs for each

Customize extraction fields by updating the prompt in the Information Extractor to include:
- Review title
- Response from company
- Specific feature mentions
- Competitor references

Tune the summarization style:
- **Change tone**: executive summary, customer pain-point focus, or marketing quote extract
- **Enable sentiment aggregation** (e.g., 30% negative, 50% neutral, 20% positive)

Expand output destinations:
- Push to Notion, Airtable, or CRM tools using additional webhook nodes
- Generate and send PDF reports (via PDFKit or HTML-to-PDF nodes)
- Schedule summary digests via Gmail or Slack
by Jah coozi
**Universal Digital Device Support Assistant**

Transform any device manual into an intelligent AI assistant that provides 24/7 support for your users. This template works with ANY household appliance, electronic device, or technical equipment.

🎯 **Use Cases**
- **Manufacturers**: Provide instant support for your products
- **Support Teams**: Reduce ticket volume with AI-powered answers
- **Smart Homes**: Centralized help for all devices
- **Personal Use**: Never lose a manual again

✨ **Features**
- **Universal Compatibility**: Works with any device type
- **Multi-Language Support**: Serve global customers
- **Intelligent Search**: Semantic understanding of user queries
- **Context Awareness**: Remembers conversation history
- **Easy Setup**: Just upload your manual and go

🛠️ **What's Included**
1. Webhook Endpoint: Receive user queries via API
2. AI Agent: Processes questions intelligently
3. Vector Database: Stores and searches manuals
4. Memory System: Maintains conversation context
5. Upload Pipeline: Easy manual ingestion

📋 **Setup Instructions**
1. Add your credentials:
   - OpenAI API key (or alternative LLM)
   - Pinecone API key (or alternative vector DB)
2. Upload device manuals:
   - Use the manual upload trigger
   - Paste manual text or upload a PDF
   - The system automatically indexes the content
3. Configure the webhook:
   - Set your preferred endpoint path
   - Enable CORS if needed
   - Deploy and share the URL (see the example client call after this section)
4. Optional customization:
   - Adjust chunk size for your content
   - Modify system prompts for your brand
   - Add additional tools or integrations

🔧 **Supported Devices (Examples)**
- Kitchen appliances (ovens, dishwashers, coffee machines)
- Home entertainment (TVs, sound systems, gaming consoles)
- Smart home devices (thermostats, cameras, lights)
- Computer equipment (printers, routers, monitors)
- Power tools & garden equipment
- Medical devices
- And many more!

🌐 **Integration Options**
- Embed in your website
- Connect to chat platforms
- Mobile app integration
- Voice assistant compatibility
- Email support automation

📈 **Benefits**
- Reduce support costs by 70%
- Available 24/7 in multiple languages
- Consistent, accurate responses
- Scales infinitely
- Improves with usage

🔐 **Privacy & Security**
- Your data stays in your control
- Can be deployed on-premise
- GDPR-compliant architecture
- No data sharing between devices

💡 **Pro Tips**
- Upload manuals in sections for better accuracy
- Include troubleshooting guides and FAQs
- Add model numbers and specifications
- Regular updates keep content fresh

Start providing world-class device support today!
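Once deployed, querying the assistant is a single POST to your webhook. An example client call, assuming a /webhook/device-support path and a JSON reply; both are placeholders to adapt to your configuration.

```python
import requests

resp = requests.post(
    "https://your-n8n-instance.com/webhook/device-support",  # your endpoint path
    json={
        "sessionId": "user-123",  # stable per user, so conversation memory persists
        "question": "My dishwasher shows error E24. What does it mean?",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```

Reusing the same sessionId across calls is what lets the memory system carry context from one question to the next.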
by Guido X Jansen
**Introduction**

Manual LinkedIn data collection is time-consuming, error-prone, and results in inconsistent data quality across CRM/database records.

This workflow is great for organizations that struggle with:
- Incomplete contact records with only LinkedIn URLs but missing profile details
- Hours spent manually copying LinkedIn information into databases
- Inconsistent data formats due to copy-paste from LinkedIn (emojis, styled text, special characters)
- Outdated profile information that doesn't reflect current roles/companies
- No systematic way to enrich contacts at scale

**Primary Users**
- Sales & Marketing Teams
- Event Organizers & Conference Managers (for event materials)
- Recruitment & HR Professionals
- CRM Administrators

**Specific Problems Addressed**
- Data completeness: Automatically fills missing profile fields (headline, bio, skills, experience)
- Data quality: Sanitizes problematic characters that break databases/exports (see the sketch at the end of this section)
- Time efficiency: Reduces hours of manual data entry to automated monthly updates
- Error handling: Gracefully manages invalid/deleted LinkedIn profiles
- Scalability: Processes multiple profiles in batch without manual intervention
- Standardization: Ensures consistent data format across all records

**Cost**

Each URL scraped by Apify costs $0.01 to get all the data above. Apify charges per scrape, regardless of how much data or how many fields you extract/use.

**Setup Instructions**

Prerequisites:
- n8n instance: Access to a running n8n instance (self-hosted or cloud)
- NocoDB account: Database with a table containing LinkedIn URLs
- Apify account: Free or paid account for LinkedIn scraping

Required fields in the NocoDB table:

Input: a single LinkedIn URL (NocoDB field name: LinkedIn)

Output: first/last/full name, e-mail, bio, headline, profile pic URL, current role, country, skills, current employer, employer URL, experiences (all previous jobs), personal website, publications (articles). NocoDB field names:
- linkedin_full_name
- linkedin_first_name
- linkedin_headline
- linkedin_email
- linkedin_bio
- linkedin_profile_pic
- linkedin_current_role
- linkedin_current_company
- linkedin_country
- linkedin_skills
- linkedin_company_website
- linkedin_experiences
- linkedin_personal_website
- linkedin_publications
- linkedin_scrape_error_reason
- linkedin_scrape_last_attempt
- linkedin_scrape_status
- linkedin_last_modified

Technically you also need an Id field, but that is always there so no need to add it :)

**n8n Setup**
1. Import the workflow: Copy the workflow JSON from the template. In n8n, click "Add workflow" → "Import from JSON", paste the workflow, and click "Import".
2. Configure the NocoDB connection: Click on any NocoDB node in the workflow, add new credentials → "NocoDB Token account", and enter your NocoDB API token (found in NocoDB → User Settings → API Tokens). Update the projectId and table parameters in all NocoDB nodes.
3. Set up the Apify integration: Create an Apify account at apify.com and generate an API token (Settings → Integrations → API). In the workflow, update the Apify token in the "Get Scraper Results" node and configure HTTP Query Auth credentials with your token.
4. Map your database fields: Review the "Transform & Sanitize Data" node and update the field mappings to match your NocoDB table structure. Ensure these fields exist in your table: LinkedIn (URL field); linkedin_headline, linkedin_full_name, linkedin_bio, etc.; linkedin_scrape_status, linkedin_last_modified.
5. Configure the filter: In the "Get Guests with LinkedIn" node, adjust the filter to match your requirements. Default: (LinkedIn,isnot,null)~and(linkedin_headline,is,null)
6. Test the workflow: Click "Execute Workflow" with the Manual Trigger, monitor execution for any errors, and verify data is properly updated in NocoDB.
7. Activate the automated schedule: Configure the Schedule Trigger node (default: monthly), toggle the workflow to "Active", and monitor executions in the n8n dashboard.

**Customization Options**
1. Data source modifications:
   - Different database: Replace NocoDB nodes with Airtable, Google Sheets, or PostgreSQL
   - Multiple tables: Add parallel branches to process different contact tables
   - Custom filters: Modify the WHERE clause to target specific record subsets
2. Enrichment fields:
   - Add fields: Include additional LinkedIn data like education, certifications, or recommendations
   - Remove fields: Simplify by removing unnecessary fields (publications, skills)
   - Custom transformations: Add business logic for field calculations or formatting
3. Scheduling options:
   - Frequency: Change from monthly to daily, weekly, or hourly
   - Time-based: Set specific times for different timezones
   - Event-triggered: Replace with a webhook trigger for on-demand processing
4. Error handling enhancement:
   - Notifications: Add email/Slack nodes to alert on failures
   - Retry logic: Implement wait-and-retry for temporary failures
   - Logging: Add database logging for audit trails
5. Data quality rules:
   - Validation: Add IF nodes to validate data before updates
   - Duplicate detection: Check for existing records before creating new ones
   - Data standardization: Add custom sanitization rules for industry-specific needs
6. Integration extensions:
   - CRM sync: Add nodes to push data to Salesforce, HubSpot, or Pipedrive
   - AI enhancement: Use OpenAI to summarize bios or extract key skills
   - Image processing: Download and store profile pictures locally
7. Performance optimization:
   - Batch size: Adjust the number of profiles processed per run
   - Rate limiting: Add delays between API calls to avoid limits
   - Parallel processing: Split large datasets across multiple workflow executions
8. Compliance additions:
   - GDPR compliance: Add consent checking before processing
   - Data retention: Implement automatic cleanup of old records
   - Audit logging: Track who accessed what data and when

These customizations allow the workflow to adapt from simple contact enrichment to complex data pipeline scenarios across various industries and use cases.
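To illustrate the kind of cleanup the "Transform & Sanitize Data" node performs on copy-pasted LinkedIn text (styled Unicode letters, emojis, stray whitespace), here is a hedged Python sketch; the actual node's rules may differ.

```python
import re
import unicodedata


def sanitize(text: str) -> str:
    # Normalize styled Unicode (e.g. mathematical-bold letters often used in
    # LinkedIn headlines) down to their plain-letter equivalents.
    text = unicodedata.normalize("NFKD", text)
    # Drop emoji and other high-codepoint symbols; keep ordinary printable text
    # (the 0x2500 cutoff is an illustrative threshold, not a standard).
    text = "".join(ch for ch in text if ch.isprintable() and ord(ch) < 0x2500)
    # Collapse whitespace runs introduced by copy-paste.
    return re.sub(r"\s+", " ", text).strip()


print(sanitize("Growth 🚀 Lead  |  𝗕𝟮𝗕 SaaS"))  # -> "Growth Lead | B2B SaaS"
```

Rules like these are what keep exports and downstream database fields from breaking on decorative characters, which is the "Data quality" problem listed above.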