by Ranjan Dailata
## Who this is for

The Crunchbase B2B Lead Discovery Pipeline is designed for sales teams, B2B marketers, business analysts, and data operations teams who need a reliable way to extract, structure, and summarize company information from Crunchbase to fuel lead generation and market intelligence.

This workflow is ideal for:
- **Sales Development Reps (SDRs)** - Needing structured leads from Crunchbase
- **Marketing Analysts** - Generating segmented outreach lists
- **Growth Teams** - Identifying trending B2B startups
- **RevOps Teams** - Automating company research pipelines
- **Data Teams** - Consolidating insights into Google Sheets for dashboards

## What problem is this workflow solving?

Manual extraction of company data from Crunchbase is time-consuming, inconsistent, and often lacks the contextual summary required for sales enablement or growth targeting. This workflow automates the extraction, transformation, summarization, and delivery of Crunchbase company data into structured formats, making it instantly usable for B2B targeting and analysis.

It solves:
- The difficulty of scaling lead discovery from Crunchbase
- The need to summarize raw textual content for quick insights
- The lack of integration between web scraping, LLM processing, and storage

## What this workflow does

- **Markdown to Textual Data Extractor**: Takes raw scraped markdown from Crunchbase and converts it into readable plain text using a basic LLM chain
- **Structured Data Extraction**: Applies a parsing model (OpenAI) to extract structured fields such as company name, funding rounds, industry tags, location, and founding year
- **Summarization Chain**: Generates an executive summary from the raw Crunchbase text using a summarization prompt template
- **Send to Google Sheets**: Adds the structured data and summary to a Google Sheet for team access and further processing
- **Persist to Disk**: Saves both raw and structured data files locally for archiving or further use
- **Webhook Notification**: Sends a structured payload with lead insights to a webhook endpoint (e.g., Slack, CRM, internal tools)

## Pre-conditions

- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token (a request sketch appears at the end of this description).
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

## How to customize this workflow to your needs

- **LLM Prompt Customization**: Modify the extraction prompt to include additional fields such as revenue, social links, or the leadership team; adjust the summarization tone (e.g., executive summary, sales-focused snapshot, or marketing digest)
- **File Persistence**: Store raw markdown, extracted JSON, and summary text separately for audit/debug
- **Webhook Notification**: Connect to a CRM (e.g., HubSpot, Salesforce) via webhook to automatically create leads, or send Slack notifications to alert sales reps when a new high-potential company is discovered
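For reference, the Web Unlocker call behind setup steps 2-3 reduces to a single authenticated POST. This is a minimal sketch assuming Bright Data's current `/request` endpoint; the zone name and target URL are placeholders to replace with your own.

```python
# Hypothetical illustration of the Bright Data Web Unlocker request that the
# workflow's HTTP node performs. Zone name and URL are placeholders.
import requests

BRIGHT_DATA_TOKEN = "XXXXXXXXXXXXXX"  # your Web Unlocker token (placeholder)

response = requests.post(
    "https://api.brightdata.com/request",
    headers={"Authorization": f"Bearer {BRIGHT_DATA_TOKEN}"},
    json={
        "zone": "web_unlocker1",  # assumed zone name; use your own
        "url": "https://www.crunchbase.com/organization/example",  # example target
        "format": "raw",          # return the raw page body
    },
    timeout=120,
)
response.raise_for_status()
page_body = response.text  # fed into the Markdown-to-text LLM chain
```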
by Trung Tran
# 🎙️ VoiceScribe AI: Telegram Audio Message Auto Transcription with OpenAI Whisper

> Automatically transcribe Telegram voice messages and store them as structured logs in Google Sheets, while backing up the audio in Google Drive.

## 🧑‍💼 Who's it for

- Journalists, content creators, or busy professionals who often record voice memos or short interviews on the go.
- Anyone who wants to turn voice recordings into searchable, structured notes.

## ⚙️ How it works / What it does

1. A user sends a voice message to a Telegram bot.
2. n8n checks whether the message is an audio voice note.
3. If valid, it downloads the audio file and:
   - Transcribes it using OpenAI Whisper (or your LLM of choice).
   - Uploads the original audio to Google Drive for safekeeping.
4. The transcript and audio metadata are merged.
5. The workflow:
   - Logs the data into a Google Sheet.
   - Sends a formatted confirmation message to the user via Telegram.
6. If the input is not audio, the bot politely informs the user that only voice messages are accepted.

## ✅ Features

- Accepts only Telegram voice messages.
- Transcribes via OpenAI Whisper.
- Logs DateTime, Duration, Transcript, and Audio URL to Google Sheets.
- Sends a feedback message to the user via Telegram with a download + transcript link.

## 🚀 How to set up

Prerequisites:
- Telegram bot connected to n8n (via Telegram Trigger)
- Google Drive & Google Sheets credentials configured
- OpenAI or Whisper API credentials (for transcription)

Steps:
1. **Telegram Trigger** - Start the flow when a new message is sent to your bot.
2. **Check Message Type** - Use a conditional node to confirm it's a voice message.
3. **Download Voice Message** - Download the .oga file from Telegram.
4. **Transcribe Audio** - Send the binary audio to OpenAI Whisper or your transcription service (see the sketch at the end of this description).
5. **Upload to Google Drive** - Back up the original audio file.
6. **Merge Outputs** - Combine the transcription with the Drive metadata.
7. **Transform to Row Format** - Prepare structured JSON for Google Sheets.
8. **Append to Google Sheet** - Store the transcript log (DateTime, Duration, Transcript, AudioURL).
9. **Send Confirmation to User** - Inform the user via Telegram with their transcript and download link.
10. **Unsupported Message Handler** - Reply to users who send non-audio messages.

## 📄 Example Output in Google Sheet

| DateTime | Duration | Transcript | AudioURL |
|----------|----------|------------|----------|
| 2025-08-07T13:12:19Z | 27 | Dự án Outlet Activation là... | https://drive.google.com/uc?id=xxxx&export=download |

## 🧠 How to customize the workflow

- Swap Whisper for Deepgram, AssemblyAI, or other providers.
- Add speaker-name detection or prompt-based tagging via GPT.
- Route transcripts into Notion, Airtable, or CRM systems.
- Add multi-language support or summarization steps.

## 📦 Requirements

| Component | Required |
|-----------|----------|
| Telegram API | ✅ |
| Google Drive API | ✅ |
| Google Sheets API | ✅ |
| OpenAI Whisper API | ✅ |
| n8n Cloud or Self-hosted | ✅ |

Created with ❤️ using n8n
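For reference, the transcription step (step 4 above) boils down to one API call. A minimal sketch assuming the OpenAI Python SDK; the file path is a placeholder for the .oga file downloaded from Telegram.

```python
# A minimal sketch of the transcription step; the n8n node performs the
# equivalent HTTP call under the hood.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("voice_message.oga", "rb") as audio_file:  # .oga file from Telegram (placeholder path)
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

print(transcript.text)  # plain-text transcript logged to Google Sheets
```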
by Arunava
This n8n workflow automates replying to Google Play Store reviews using AI. It analyzes each review's sentiment and tone and posts a human-like response — saving time for indie devs, founders, and PMs managing multiple apps.

## 💡 Use Cases

- Respond to reviews at scale without sounding robotic
- Prioritize negative-sentiment feedback
- Maintain a consistent tone and support messaging
- Free up time for teams to focus on product instead of ops

## 🧠 How it works

1. Uses the Play Store API to fetch new app reviews
2. Filters out reviews that have already been replied to
3. Analyzes sentiment using OpenAI GPT-4o
4. Passes the sentiment and review context to an AI Agent node that crafts a reply
5. Posts replies to the Play Store via the Google API (a hedged sketch appears at the end of this description)
6. (Optional) Logs the reply to Slack for visibility

## 🛠️ Setup Instructions

(Sticky notes are included in the workflow.)

1. **HTTPS Node**
   - Replace the package name with your app's package ID
   - Add Google Service Account credentials → create one in the Google Cloud Console with access to Play Console → add it to the n8n Credential Manager
2. **OpenAI Node**
   - Add your OpenAI API key → GPT-4o or GPT-4o mini supported → customize the model or instructions if needed
3. **AI Agent Node**
   - Modify the prompt to reflect your app name, tone, and feature set → e.g., polite, witty, casual, support-friendly, etc. → you can add reply conditions or logic for different types of reviews
4. **Slack Node (Optional)**
   - Configure Slack Webhook or OAuth credentials if you want reply logs → otherwise, delete the node to simplify the workflow

## ⚡ Requirements

- Google Play Developer Console access
- Google Cloud project with a service account
- OpenAI account (GPT-4o or mini)
- (Optional) Slack workspace & app for logging

🙌 Don't want to set this up yourself? I'll do it for you. Just drop me an email: imarunavadas@gmail.com

Let's automate the boring stuff so you can focus on growth. 🚀
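For reference, the reply step boils down to one call against the Android Publisher v3 REST API. A hedged sketch using direct REST; the package name, review ID, and access token are placeholders, and real runs mint the token from the service account.

```python
# Hedged sketch of the review-reply call the workflow makes via the
# Android Publisher v3 API. All identifiers below are placeholders.
import requests

ACCESS_TOKEN = "ya29...."       # OAuth token minted from the service account (placeholder)
PACKAGE = "com.example.myapp"   # your app's package ID (placeholder)
REVIEW_ID = "gp:AOqpTO..."      # review ID returned by reviews.list (placeholder)

resp = requests.post(
    f"https://androidpublisher.googleapis.com/androidpublisher/v3/"
    f"applications/{PACKAGE}/reviews/{REVIEW_ID}:reply",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"replyText": "Thanks for the feedback! A fix ships in the next release."},
    timeout=30,
)
resp.raise_for_status()
```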
by Ranjan Dailata
## Who this is for

The TrustPilot SaaS Product Review Tracker is designed for product managers, SaaS growth teams, customer experience analysts, and marketing teams who need to extract, summarize, and analyze customer feedback at scale from TrustPilot.

This workflow is tailored for:
- **Product Managers** - Monitoring feedback to drive feature improvements
- **Customer Support & CX Teams** - Identifying sentiment trends or recurring issues
- **Marketing & Growth Teams** - Leveraging testimonials and market perception
- **Data Analysts** - Tracking competitor reviews and benchmarking
- **Founders & Executives** - Seeking aggregated insights into customer satisfaction

## What problem is this workflow solving?

Manually monitoring, extracting, and summarizing TrustPilot reviews is time-consuming, fragmented, and hard to scale across multiple SaaS products. This workflow automates that process, from unlocking the data behind anti-bot layers to summarizing and storing customer insights, enabling teams to respond faster, spot trends, and make data-backed product decisions.

This workflow solves:
- The challenge of scraping protected review data (using Bright Data Web Unlocker)
- The need for structured insights from unstructured review content
- The lack of automated delivery to storage and alerting systems like Google Sheets or webhooks

## What this workflow does

- **Extract TrustPilot Reviews**: Uses Bright Data Web Unlocker to bypass anti-bot protections and pull markdown-based content from product review pages
- **Convert Markdown to Text**: Leverages a basic LLM chain to clean and convert scraped markdown into plain text
- **Structured Information Extraction**: Uses OpenAI GPT-4o via the Information Extractor node to extract fields like product name, review date, rating, and reviewer sentiment
- **Summarization Chain**: Generates concise summaries of overall review sentiment and themes using OpenAI
- **Merge & Aggregate Output**: Consolidates individual extracted records into a structured batch output
- **Outbound Data Delivery**:
  - Google Sheets – Appends summary and structured review data
  - Write to Disk – Persists raw and processed content locally
  - Webhook Notification – Sends a real-time alert with summarized insights

## Pre-conditions

- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.

## How to customize this workflow to your needs

- **Target Multiple Products**: Configure the Bright Data input URL dynamically for different SaaS product TrustPilot URLs; loop through a product list and run parallel jobs for each
- **Customize Extraction Fields**: Update the prompt in the Information Extractor to include the review title, the response from the company, specific feature mentions, or competitor references
- **Tune Summarization Style**: Change the tone (executive summary, customer pain-point focus, or marketing quote extract); enable sentiment aggregation, e.g., 30% negative, 50% neutral, 20% positive (see the sketch at the end of this description)
- **Expand Output Destinations**: Push to Notion, Airtable, or CRM tools using additional webhook nodes; generate and send PDF reports (via PDFKit or HTML-to-PDF nodes); schedule summary digests via Gmail or Slack
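If you enable sentiment aggregation (see the customization section above), the logic is a simple tally. A minimal sketch, assuming each extracted review item carries a `sentiment` field produced by the Information Extractor; the field name and labels are illustrative.

```python
# Minimal sketch of sentiment aggregation, e.g. inside an n8n Code node.
# Sample data stands in for the extractor's per-review output.
from collections import Counter

reviews = [
    {"sentiment": "positive"}, {"sentiment": "negative"},
    {"sentiment": "neutral"}, {"sentiment": "positive"},
]

counts = Counter(r["sentiment"].lower() for r in reviews)
total = sum(counts.values())
breakdown = {label: round(100 * n / total) for label, n in counts.items()}
print(breakdown)  # e.g. {'positive': 50, 'negative': 25, 'neutral': 25}
```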
by A Z
Automatically scrape Meta Threads for posts hiring specific roles (e.g., automation engineers, video editors, graphic designers), filter for true hiring intent, deduplicate, and send alerts. We use automation roles as the example here.

## What it does

This workflow continuously scans Threads for fresh posts mentioning the roles you care about. It uses AI to filter out self-promotion and service ads, keeping only posts where the author is hiring. Qualified posts are saved to Google Sheets for tracking and sent to Telegram for instant alerts. It's ideal for freelancers, agencies, and job seekers who want a steady radar of opportunities.

## How it works (step by step)

1. **Schedule trigger** – Runs on a set interval (e.g., every 12 hours).
2. **Scrape Threads posts** – Fetches recent posts for multiple keywords (e.g., "n8n expert", "hire video editor", "graphic designer") via Apify.
3. **Merge results** – Combines posts into a single stream.
4. **Normalize fields** – Maps raw data into clean fields: text, author, URL, timestamp, profile link.
5. **AI filter** – Uses an AI Agent to:
   - Accept only posts where someone is hiring (rejects "hire me" style self-promo).
   - Apply simple geography rules (e.g., allow US, UK, UAE, CA; pass unknowns).
   - Exclude roles outside your scope.
6. **Deduplication** – Checks Google Sheets to skip posts already seen (see the sketch at the end of this description).
7. **Save to Google Sheets** – Writes qualified posts with full details.
8. **Telegram alerts** – Sends you the matched post instantly so you can act.

## Who it's for

- **Freelancers**: Get first dibs on gigs before others spot them.
- **Agencies**: Build a client pipeline by tracking hiring signals.
- **Job seekers**: Spot hidden opportunities in your target field.

## Customization ideas

- Swap keywords to monitor roles you care about (e.g., "UI/UX designer", "motion graphics editor", "copywriter").
- Add Slack or Discord notifications instead of Telegram.
- Expand the geo rules to match your region.
- Use Sheets as a CRM—add columns for status, outreach date, etc.
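The deduplication step (step 6) reduces to a set-membership check against previously saved rows. A minimal sketch of the idea as it might run in an n8n Code node; the field names and sample data are illustrative.

```python
# Minimal dedup sketch: drop any scraped post whose URL already exists in the
# sheet. `seen_rows` stands in for rows fetched by a Google Sheets read node.
seen_rows = [{"url": "https://www.threads.net/@acme/post/abc123"}]
scraped_posts = [
    {"url": "https://www.threads.net/@acme/post/abc123", "text": "Hiring!"},
    {"url": "https://www.threads.net/@beta/post/def456", "text": "Looking for an n8n expert"},
]

seen_urls = {row["url"] for row in seen_rows}
new_posts = [p for p in scraped_posts if p["url"] not in seen_urls]
print(new_posts)  # only the previously unseen post survives
```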
by A Z
Automatically scrape X (Twitter) for posts hiring specific roles (e.g., automation engineers, video editors, graphic designers), filter for true hiring intent with AI, deduplicate in Google Sheets, and alert via Telegram.

## What it does

- Pulls recent X/Twitter posts for multiple role keywords via Apify.
- Normalizes each post (text, author, links, location).
- Uses an AI Agent to keep only posts where the author is hiring (not self-promo).
- Checks Google Sheets for duplicates by URL before saving.
- Writes qualified posts to a sheet and sends a Telegram notification.

We use n8n automation roles as the example here.

## How it works (step by step)

1. **Schedule Trigger** – Runs on an interval (currently every 12 hours).
2. **Scrape X/Twitter** – The Apify tweet-scraper fetches up to 50 of the latest posts for keywords like: n8n developer, looking for n8n, n8n expert, hire AI automation, looking for AI automation.
3. **Normalize Fields** – A Set node maps to: url, text, author.userName, author.url, author.location.
4. **AI Filter & Dedupe Check** – Accepts only clear hiring posts for n8n/AI automation roles (rejects self-promotion) and queries Google Sheets to see whether the url already exists; duplicates are dropped.
5. **Gate** – An IF node passes only non-empty AI outputs.
6. **Parse JSON Safely** – A Code node extracts and validates JSON from the AI output (see the sketch at the end of this description).
7. **Save to Google Sheets** – Appends/updates a row (matching on url).
8. **Telegram Alert** – Sends a message with the tweet URL, author, location, and text.

## Who it's for

Freelancers, agencies, and job seekers who want a steady radar of real hiring posts for their target roles.

## Customization ideas

- Swap keywords to track other roles (video editors, designers, copywriters, etc.).
- Add Slack/Discord notifications.
- Extend the AI rules (e.g., different geographies or role scopes).
- Treat the sheet as a mini-CRM (status, outreach date, notes).
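The "Parse JSON Safely" step (step 6) guards against models wrapping their JSON in prose or markdown fences. A minimal sketch of that logic; the exact Code node implementation in the workflow may differ.

```python
# Minimal sketch of safe JSON extraction from an LLM response.
import json
import re

def extract_json(ai_output: str):
    """Return the first JSON object found in the model output, or None."""
    match = re.search(r"\{.*\}", ai_output, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

raw = 'Here is the result:\n```json\n{"url": "https://x.com/1", "hiring": true}\n```'
print(extract_json(raw))  # {'url': 'https://x.com/1', 'hiring': True}
```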
by Jimleuk
This n8n template is one of a 3-part series exploring use cases for clustering vector embeddings:

- Survey Insights
- Customer Insights
- Community Insights

This template demonstrates the Survey Insights scenario, where survey participant responses can be quickly grouped by similarity and an AI agent can generate insights on those groupings. With this workflow, researchers can save days or even weeks of work breaking down cohorts of participants and identifying frequently mentioned positives and negatives.

Sample output: https://docs.google.com/spreadsheets/d/e/2PACX-1vT6m8XH8JWJTUAfwojc68NAUGC7q0lO7iV738J7aO5fuVjiVzdTRRPkMmT1C4N8TwejaiT0XrmF1Q48/pubhtml#

## How it works

1. All survey questions and responses are imported from a Google Sheet.
2. Responses are then inserted into a Qdrant collection, carefully tagged with the question and survey metadata.
3. For each question, all relevant responses are put through a clustering algorithm using the Python Code node (see the sketch at the end of this template). The Qdrant points are returned in clustered groups.
4. Each group is looped over to fetch the payloads of its points, which are fed to the AI agent to summarise and generate insights from.
5. The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the researcher.

## Requirements

- Survey data and format as shown in the attached Google Sheet.
- Qdrant vector store for storing embeddings.
- OpenAI account for embeddings and LLM.

## Customising the Template

- Adjust the clustering parameters to suit your data: use more clusters for open-ended questions and fewer clusters when responses are multiple choice.
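For orientation, the clustering step (step 3 above) looks roughly like this. A hedged sketch assuming embeddings have already been fetched from Qdrant as (id, vector) pairs; KMeans and the cluster count are illustrative choices, not the template's exact parameters.

```python
# Hedged sketch of clustering Qdrant points inside a Python Code node.
# Sample 2-D vectors stand in for real embedding vectors.
import numpy as np
from sklearn.cluster import KMeans

points = [  # stand-ins for Qdrant points of one survey question
    ("p1", [0.10, 0.90]), ("p2", [0.12, 0.88]),
    ("p3", [0.95, 0.05]), ("p4", [0.90, 0.10]),
]
ids = [pid for pid, _ in points]
vectors = np.array([vec for _, vec in points])

labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(vectors)

groups = {}  # cluster label -> list of point IDs
for pid, label in zip(ids, labels):
    groups.setdefault(int(label), []).append(pid)
print(groups)  # e.g. {0: ['p1', 'p2'], 1: ['p3', 'p4']}
```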
by Incrementors
# 🛒 Lead Workflow: Yelp & Trustpilot Scraping + OpenAI Analysis via BrightData

> Description: Automated lead generation workflow that scrapes business data from Yelp and Trustpilot based on location and category, analyzes credibility, and sends personalized outreach emails using AI.

> ⚠️ Important: This template requires a self-hosted n8n instance to run.

## 📋 Overview

This workflow provides an automated lead generation solution that identifies high-quality prospects from Yelp and Trustpilot, analyzes their credibility through reviews, and sends personalized outreach emails. Perfect for digital marketing agencies, sales teams, and business development professionals.

## ✨ Key Features

- 🎯 **Smart Location Analysis**: AI breaks down cities into sub-locations for comprehensive coverage
- 🛍 **Yelp Integration**: Scrapes business details using BrightData's Yelp dataset
- ⭐ **Trustpilot Verification**: Validates business credibility through review analysis
- 📊 **Data Storage**: Automatically saves results to Google Sheets
- 🤖 **AI-Powered Outreach**: Generates personalized emails using Claude AI
- 📧 **Automated Sending**: Sends emails directly through Gmail integration

## 🔄 How It Works

1. **User Input**: Submit location, country, and business category through a form
2. **AI Location Analysis**: Gemini AI identifies sub-locations within the specified area
3. **Yelp Scraping**: BrightData extracts business information from multiple locations (a hedged sketch of the dataset trigger call appears at the end of this description)
4. **Data Processing**: Cleans and stores business details in Google Sheets
5. **Trustpilot Verification**: Scrapes reviews and company details for a credibility check
6. **Email Generation**: Claude AI creates personalized outreach messages
7. **Automated Outreach**: Sends emails to qualified prospects via Gmail

## 📊 Data Output

| Field | Description | Example |
|-------|-------------|---------|
| Company Name | Business name from Yelp/Trustpilot | Best Local Restaurant |
| Website | Company website URL | https://example-restaurant.com |
| Phone Number | Business contact number | (555) 123-4567 |
| Email | Business email address | demo@example.com |
| Address | Physical business location | 123 Main St, City, State |
| Rating | Overall business rating | 4.5/5 |
| Categories | Business categories/tags | Restaurant, Italian, Fine Dining |

## 🚀 Setup Instructions

⏱️ Estimated setup time: 10–15 minutes

### Prerequisites

- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- BrightData account with Yelp and Trustpilot datasets
- Google Gemini API access
- Anthropic API key for Claude
- Gmail account for sending emails

### Step 1: Import the Workflow

1. Copy the JSON workflow code
2. In n8n: Workflows → + Add workflow → Import from JSON
3. Paste the JSON and click Import

### Step 2: Configure Google Sheets Integration

1. Create two Google Sheets:
   - Yelp data: Name, Categories, Website, Address, Phone, URL, Rating
   - Trustpilot data: Company Name, Email, Phone Number, Address, Rating, Company About
2. Copy the Sheet IDs from their URLs
3. In n8n: Credentials → + Add credential → Google Sheets OAuth2 API
4. Complete the OAuth setup and test the connection
5. Update all Google Sheets nodes with your Sheet IDs

### Step 3: Configure BrightData

1. Set up BrightData credentials in n8n
2. Replace the API token with: BRIGHT_DATA_API_KEY
3. Verify dataset access:
   - Yelp dataset: gd_lgugwl0519h1p14rwk
   - Trustpilot dataset: gd_lm5zmhwd2sni130p
4. Test the connections

### Step 4: Configure AI Models

- **Google Gemini (Location Analysis)**: Add Google Gemini API credentials and configure the model: models/gemini-1.5-flash
- **Claude AI (Email Generation)**: Add Anthropic API credentials and configure the model: claude-sonnet-4-20250514

### Step 5: Configure Gmail Integration

1. Set up Gmail OAuth2 credentials in n8n
2. Update the "Send Outreach Email" node
3. Test email sending

### Step 6: Test & Activate

1. Activate the workflow
2. Test with sample data:
   - Country: United States
   - Location: Dallas
   - Category: Restaurants
3. Verify data appears in Google Sheets
4. Check that emails are generated and sent

## 📖 Usage Guide

### Starting a Lead Generation Campaign

1. Access the form trigger URL
2. Enter your target criteria:
   - Country: target country
   - Location: city or region
   - Category: business type (e.g., restaurants)
3. Submit the form to start the process

### Monitoring Results

- **Yelp Data Sheet**: View scraped business information
- **Trustpilot Sheet**: Review credibility data
- **Gmail Sent Items**: Track outreach emails sent

## 🔧 Customization Options

### Modifying Email Templates

Edit the "AI Generate Email Content" node to customize:
- Email tone and style
- Services mentioned
- Call-to-action messages
- Branding elements

### Adjusting Data Filters

- Modify rating thresholds
- Set minimum review counts
- Add geographic restrictions
- Filter by business size

### Scaling the Workflow

- Increase batch sizes
- Add delays between requests
- Use parallel processing
- Add error handling

## 🚨 Troubleshooting

### Common Issues & Solutions

1. **BrightData Connection Failed**
   - Cause: Invalid API credentials or dataset access
   - Solution: Verify credentials and dataset permissions
2. **No Data Extracted**
   - Cause: Invalid location or changed page structure
   - Solution: Verify location names and test other categories
3. **Gmail Authentication Issues**
   - Cause: Expired OAuth tokens
   - Solution: Re-authenticate and check permissions
4. **AI Model Errors**
   - Cause: API quota exceeded or invalid keys
   - Solution: Check usage limits and API keys

### Performance Optimization

- **Rate Limiting**: Add delays
- **Error Handling**: Retry failed requests
- **Data Validation**: Check for malformed data
- **Memory Management**: Process in smaller batches

## 📈 Use Cases & Examples

1. **Digital Marketing Agency Lead Generation**
   - Goal: Find businesses needing marketing
   - Target: Restaurants, retail stores
   - Approach: Focus on well-rated businesses with a weak online presence
2. **B2B Sales Prospecting**
   - Goal: Find software solution clients
   - Target: Growing businesses
   - Approach: Focus on recent positive reviews
3. **Partnership Development**
   - Goal: Find complementary businesses
   - Target: Established businesses
   - Approach: Focus on reputation and satisfaction scores

## ⚡ Performance & Limits

### Expected Performance

- **Processing Time**: 5–10 minutes/location
- **Data Accuracy**: 90%+
- **Success Rate**: 85%+
- **Daily Capacity**: 100–500 leads

### Resource Usage

- **API Calls**: ~10–20 per business
- **Storage**: Minimal (Google Sheets)
- **Execution Time**: 3–8 minutes/10 businesses
- **Network Usage**: ~5–10 MB/business

## 🤝 Support & Community

### Getting Help

- **n8n Community Forum**: community.n8n.io
- **Docs**: docs.n8n.io
- **BrightData Support**: via dashboard

### Contributing

- Share improvements
- Report issues and suggestions
- Create industry-specific variations
- Document best practices

> 🔒 Privacy & Compliance: Ensure GDPR/CCPA compliance. Always respect robots.txt and the terms of service of scraped sites.

## 🎯 Ready to Generate Leads!

This workflow provides a complete solution for automated lead generation and outreach. Customize it to fit your needs and start building your pipeline today!

For any questions or support, please contact: 📧 info@incrementors.com or fill out this form: Contact Us
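For reference, triggering a Bright Data dataset collection (as in the Yelp scraping step) reduces to a single API call. A hedged sketch following Bright Data's Datasets v3 trigger endpoint; the input schema shown for the Yelp dataset is an assumption, so check your dataset's documented fields.

```python
# Hedged sketch of triggering a Bright Data dataset collection.
# The API key and the input record are placeholders.
import requests

BRIGHT_DATA_API_KEY = "..."  # your Bright Data API token (placeholder)

resp = requests.post(
    "https://api.brightdata.com/datasets/v3/trigger",
    params={"dataset_id": "gd_lgugwl0519h1p14rwk", "include_errors": "true"},
    headers={"Authorization": f"Bearer {BRIGHT_DATA_API_KEY}"},
    json=[{"location": "Dallas, TX", "keyword": "Restaurants"}],  # assumed input schema
    timeout=60,
)
resp.raise_for_status()
snapshot_id = resp.json().get("snapshot_id")  # poll this ID to fetch results
print(snapshot_id)
```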
by Ranjan Dailata
## Notice

Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

The Legal Case Research Extractor is a powerful automated workflow designed for legal tech teams, researchers, law firms, and data scientists focused on transforming unstructured legal case data into actionable, structured insights.

This workflow is tailored for:
- **Legal Researchers** automating case law data mining
- **Litigation Support Teams** handling large volumes of case records
- **LawTech Startups** building AI-powered legal research assistants
- **Compliance Analysts** extracting case-specific insights
- **AI Developers** working on legal NLP, summarization, and search engines

## What problem is this workflow solving?

Legal case data is often locked in semi-structured or raw HTML formats, scattered across jurisdiction-specific websites. Manually extracting and processing this data is tedious and inefficient.

This workflow automates:
- Extraction of legal case data via Bright Data's MCP infrastructure
- Parsing of HTML into clean, readable text using the Google Gemini LLM
- Structuring and delivering the output through a webhook and file storage

## What this workflow does

- **Input**: The Set the Legal Case Research URL node sets the legal case URL for the data extraction.
- **Bright Data MCP Data Extractor**: The Bright Data MCP Client For Legal Case Research node performs the legal case extraction via the Bright Data MCP tool scrape_as_html.
- **Case Extractor**: A Google Gemini based Case Extractor produces a paginated list of cases.
- **Loop through Legal Case URLs**: Receives a collection of legal case links to process; each URL represents a different case from a target legal website.
- **Bright Data MCP Scraping**: Uses Bright Data's scrape_as_html MCP mode to retrieve the raw HTML content of each legal case.
- **Google Gemini LLM Extraction**: Transforms raw HTML into clean, structured text and performs additional information extraction if required (e.g., case summary, court, jurisdiction).
- **Webhook Notification**: Sends extracted legal case content to a configurable webhook URL, enabling downstream processing or storage in legal databases.
- **Binary Conversion & File Persistence**: Converts the structured text to binary format and saves the final response to disk for archival or further processing.

## Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post - model-context-protocol.
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp.
- You need to install n8n-nodes-mcp.

## Setup

1. Set up n8n locally with MCP servers by following n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect to the MCP Client (STDIO) account with the Bright Data MCP Server. Make sure to copy the Bright Data API token into the Environments textbox as API_TOKEN=<your-token>. (A sketch of the equivalent MCP client call appears at the end of this description.)

## How to customize this workflow to your needs

- **Target New Legal Portals**: Modify the legal case input URLs to scrape different state or federal case databases.
- **Customize LLM Extraction**: Modify the prompt to extract specific fields (case number, plaintiff, case summary, outcome, legal precedents, etc.); add a summarization step if needed.
- **Enhance Loop Handling**: Integrate with a Google Sheet or an API to dynamically fetch case URLs; add error-handling logic to skip failed cases and log them.
- **Improve Security & Compliance**: Redact sensitive information before sending via webhook; store processed case data in encrypted cloud storage.
- **Output Formats**: Save as PDF, JSON, or Markdown; enable output to cloud storage (S3, Google Drive) or legal document management systems.
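For orientation, here is roughly what the MCP Client (STDIO) node does when it invokes scrape_as_html. A hedged sketch using the official Python MCP SDK (`pip install mcp`); the case URL is a placeholder, and in practice the n8n node handles this plumbing for you.

```python
# Hedged sketch of calling the Bright Data MCP server's scrape_as_html tool
# over STDIO, mirroring the MCP Client (STDIO) credential setup above.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "@brightdata/mcp"],
    env={"API_TOKEN": "<your-token>"},  # Bright Data API token (placeholder)
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "scrape_as_html",
                {"url": "https://example-court-portal.gov/case/12345"},  # placeholder URL
            )
            print(result.content)  # raw HTML handed to the Gemini extraction step

asyncio.run(main())
```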
by Jay Emp0
# 🤖 MCP Personal Assistant Workflow

## Description

This workflow integrates multiple productivity tools into a single AI-powered assistant using n8n, acting as a centralized control hub to receive and execute tasks across Google Calendar, Gmail, Google Drive, LinkedIn, Twitter, and more.

## ✅ Key Capabilities

- **AI Agent + Tool Use**: Built using n8n's AI Agent and MCP system, enabling intelligent multi-step reasoning.
- **Tool Integration**:
  - Google Calendar: schedule, update, delete events
  - Gmail: search, draft, send emails
  - Google Drive: manage files and folders
  - LinkedIn & Twitter: post updates, send DMs
  - Utility tools: fetch date/time, search URLs
- **Discord Input**: Accepts prompts via n8n_discord_trigger_bot (repo link).

## 🛠 Setup Instructions

1. **Timezone Configuration**:
   - Go to Settings > Default Timezone in n8n and set it to your local timezone (e.g., Asia/Jakarta).
   - Ensure all Date & Time nodes explicitly use the same zone to avoid UTC-related bugs (a short illustration appears at the end of this description).
2. **Tool Authentication**: Replace all OAuth credentials for Gmail, Google Drive, Google Calendar, Twitter, and LinkedIn. Use your own accounts when copying this workflow.
3. **Platform Adaptability**: While designed for Discord, you can replace the Discord trigger with any other chat or webhook service, e.g., Telegram, Slack, WhatsApp Webhook, or the n8n Form Trigger.

## 📦 Strengths

- Great for document retrieval, email summarization, calendar scheduling, and social posting.
- Reduces the need for tab-switching across multiple platforms.
- Tested with a comprehensive checklist across categories including Calendar, Gmail, Google Drive, Twitter, LinkedIn, utility tools, and cross-tool actions. (Refer to the discordGPT prompt checklist below for prompt coverage.)

## ⚠️ Limitations

- ❌ **Binary Uploads**: AI agents & the MCP server currently struggle with binary payloads. Uploading files to Gmail, Google Drive, or LinkedIn may fail due to format serialization issues. Binary operations (upload/post) are under development and will be fixed in future iterations.
- ❌ **Date Bugs**: If timezone settings are incorrect, event times may default to UTC, leading to misaligned calendar events.

## 🔬 Testing

Use the provided prompt checklist for full coverage of:
- ✅ Core feature flows
- ✅ Edge cases (e.g., invalid dates, nonexistent users)
- ✅ Cross-tool chains (e.g., Google Drive → Gmail → LinkedIn)

## ✅ MCP Assistant Test Prompt Checklist

### 📅 Google Calendar

- [x] "Schedule a meeting with Alice tomorrow at 10am and send an invite to alice@wonderland.com"
- [x] "Create an event called 'Project Sync' on Friday at 3pm with Bob and Charlie."
- [x] "Update the time of my call with James to next Monday at 2pm."
- [x] "Delete my meeting with Marketing next Wednesday."
- [x] "What is my schedule tomorrow?"

### 📧 Gmail

- [x] "Show me unread emails from this week."
- [x] "Search for emails with subject: invoice"
- [x] "Reply to the latest email from john@company.com saying 'Thanks, noted!'"
- [x] "Draft an email to info@a16z.com with subject 'Emp0 Fundraising' and draft the body of the email with an investment opportunity in Emp0; scrape this site https://Emp0.com to get to know more about emp0.com"
- [x] "Send an email to hi@cursor.com with subject 'Feature request' and cc sales@cursor.com"
- [ ] "Send an email to recruiting@openai.com, write about how you like their product and want to apply for a job there, and attach my latest CV from Google Drive"

### 🗂 Google Drive

- [ ] "Upload the PDF you just sent me to my Google Drive."
- [x] "Create a folder called 'July Reports' inside the Emp0 shared drive."
- [x] "Move the file named 'Q2_Review.pdf' to 'Reports/2024/Q2'."
- [x] "Share the folder 'Investor Decks' with info@a16z.com as viewer."
- [ ] "Download the file 'Wayne_Li_CV.pdf' and attach it in Discord."
- [x] "Search for a file named 'Invoice May' in my Google Drive."

### 🖼 LinkedIn

- [x] "Think of a random and inspiring quote. Post a text update on LinkedIn with the quote and end with a question so people will answer and increase engagement"
- [ ] "Post this Google Drive image to LinkedIn with the caption: 'Team offsite snapshots!'"
- [x] "Summarize the contents of this workflow and post it on LinkedIn with the original url https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/"

### 🐦 Twitter

- [x] "Tweet: 'AI is eating operations. Fast.'"
- [x] "Send a DM to @founderguy: 'Would love to connect on what you're building.'"
- [x] "Search Twitter for keyword: 'founder advice'"

### 🌐 Utilities

- [x] "What time is it now?"
- [ ] "Download this PDF: https://ontheline.trincoll.edu/images/bookdown/sample-local-pdf.pdf"
- [x] "Search this URL and summarize important tech updates today: https://techcrunch.com/feed/"

### 📎 Discord Attachments

- [ ] "Take the image I just uploaded and post it to LinkedIn."
- [ ] "Get the file from my last message and upload it to Google Drive."

### 🧪 Edge Cases

- [x] "Schedule a meeting on Feb 30."
- [x] "Send a DM to @user_that_does_not_exist"
- [ ] "Download a 50MB PDF and post it to LinkedIn"
- [x] "Get the latest tweet from my timeline and email it to myself."

### 🔗 Cross-tool Flows

- [ ] "Get the latest image from my Google Drive and post it on LinkedIn with the caption 'Another milestone hit!'"
- [ ] "Find the latest PDF report in Google Drive and email it to investor@vc.com."
- [ ] "Download an image from this link and upload it to my Google Drive: https://example.com/image.png"
- [ ] "Get the most recent attachment from my inbox and upload it to Google Drive."

Run each of these in isolated test cases. For cross-tool flows, verify binary serialization integrity.

## 🧠 Why Use This Workflow?

This is an always-on personal assistant that can:
- Process natural language input
- Handle multi-step logic
- Execute commands across 6+ platforms
- Be extended with more tools and memory

If you want to interact with all your work tools from a single prompt, this is your base to start from.

## 📎 Repo & Credits

- Discord bot trigger: n8n_discord_trigger_bot
- Creator: Jay (Emp₀)
by Lucas Perret
## Who this is for

This workflow is for salespeople who want to follow up with their leads quickly and efficiently.

## What this workflow does

This workflow starts every time a new reply is received in lemlist. It then classifies the response using OpenAI (a sketch of this step appears at the end of this description) and creates the correct follow-up task. The follow-up tasks currently include:
- Slack alerts for each new reply
- Tagging interested leads in lemlist
- Unsubscribing leads when they request it

The Slack alerts include:
- Lead email address
- Sender email address
- Reply type (positive, not interested, etc.)
- A preview of the reply

## Setup

To set this template up, simply follow the sticky-note steps in it.

## How to customize this workflow to your needs

- Adjust the follow-up tasks to your needs
- Change the Slack notification to your needs
- ...
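For reference, the classification step reduces to one model call. A minimal sketch assuming the OpenAI Python SDK; the category labels are illustrative, not the template's exact ones.

```python
# Minimal sketch of classifying a lemlist reply with OpenAI.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reply_text = "Thanks, this looks interesting - can you send pricing?"

completion = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": (
            "Classify the sales reply into exactly one of: "
            "interested, not_interested, unsubscribe, out_of_office."
        )},
        {"role": "user", "content": reply_text},
    ],
)
print(completion.choices[0].message.content)  # e.g. "interested"
```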
by Kumar Shivam
# Complete AI Product Description Generator

Transforms product images into high-converting copy with GPT-4o Vision + Claude 3.5.

The Shopify AI Product Description Factory is a production-grade n8n workflow that converts product images and metadata into refined, SEO-aware descriptions—fully automated and region-agnostic. It blends GPT-4o vision for visible attribute extraction, Claude 3.5 Sonnet for premium copy, Perplexity research for verified brand context, and Google Sheets for orchestration and audit trails, plus automated daily sales analytics enrichment. Link-header pagination and structured output enforcement ensure reliable scale.

To refine it for your use case, connect via my profile @connect.

## Key Advantages

- **Vision-first copywriting**: Uses gpt-4o to identify only visible physical attributes (closure, heel, materials, sole) from product images—no guesses.
- **Premium copy generation**: anthropic/claude-3.5-sonnet crafts concise, benefit-led descriptions with consistent tone, length control, and clean formatting.
- **Research-assisted accuracy**: perplexityTool verifies vendor/brand context from official sources to avoid speculation or fabricated claims.
- **Pagination you can trust**: Automates Shopify REST pagination via Link headers and persists page_info for resumable runs.
- **Google Sheets orchestration**: Centralized staging, status tracking, and QA in Products, with ProcessingState for batch/page markers and Error_log for diagnostics.
- **Bulletproof error feedback**: errorTrigger + AI diagnosis logs clear non-technical and technical explanations to Error_log for fast recovery.
- **Automated sales analytics**: Daily sales tracking automatically captures and enriches total sales data for comprehensive business intelligence and performance monitoring.

## How It Works

### Intake and filtering

- httpRequest fetches /admin/api/2024-04/products.json?limit=200&{page_info}
- code filters only items with:
  - An image present
  - Empty body_html
  - The currSeas:SS2025 tag
- Extracts tag metadata such as x-styleCode, country_of_origin, and gender when available

### Pagination controller

- code parses Link headers for rel="next" and extracts page_info (a hedged sketch appears after this section)
- googleSheets updates ProcessingState with page_info_next and increments the batch number for resumable polling

### Generation pipeline

- googleSheets pulls rows with Status = Ready for AI Description; limit throttles the batch size
- openAi Analyze image (model gpt-4o) returns strictly visible features
- lmChatOpenRouter (Claude 3.5) composes the SEO description, optionally blending verified vendor context from perplexityTool
- outputParserStructured guarantees strict JSON: product_id, product_title (normalized), generated_description, status
- googleSheets writes results back to Products for review/publish

### Sales analytics enrichment

- **Schedule Trigger** runs daily at 2:01 PM to capture the previous day's sales
- httpRequest fetches paid orders from the Shopify REST API with date-range filtering
- splitOut and summarize nodes calculate total daily sales
- Automatic Google Sheets logging with date stamps and totals
- Zero-sale days are properly recorded for complete analytics continuity

### Reliability and insight

- errorTrigger routes failures to an AI agent that explains the root cause and appends a concise note to Error_log.
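For reference, the pagination controller's core logic can be sketched as follows, assuming an n8n Code node or an equivalent script. The shop domain and access token are placeholders; the Link-header format follows Shopify's REST cursor pagination.

```python
# Hedged sketch of extracting page_info from Shopify's Link header for
# resumable pagination. Shop domain and token are placeholders.
import re
import requests

SHOP = "your-store.myshopify.com"                  # placeholder
HEADERS = {"X-Shopify-Access-Token": "shpat_..."}  # placeholder token

def next_page_info(link_header):
    """Extract page_info from the rel="next" entry of a Shopify Link header."""
    if not link_header:
        return None
    match = re.search(r'<[^>]*[?&]page_info=([^&>]+)[^>]*>;\s*rel="next"', link_header)
    return match.group(1) if match else None

resp = requests.get(
    f"https://{SHOP}/admin/api/2024-04/products.json",
    params={"limit": 200},
    headers=HEADERS,
    timeout=60,
)
page_info = next_page_info(resp.headers.get("Link"))
print(page_info)  # persisted to ProcessingState as page_info_next (None = done)
```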
## What's Inside (Node Map)

### Data + API

- httpRequest (Shopify REST 2024-04 for products and orders)
- googleSheets (multiple sheet operations)
- googleSheetsTool (error logging)

### AI models

- openAi (gpt-4o vision analysis)
- lmChatOpenRouter (anthropic/claude-3.5-sonnet for content generation)
- **AI Agent** (intelligent error diagnosis)

### Analytics & Processing

- splitOut (order data processing)
- summarize (sales totals calculation)
- set nodes (data field mapping)

### Tools and guards

- perplexityTool (brand research)
- outputParserStructured (JSON validation)
- memoryBufferWindow (conversation context)

### Control & Scheduling

- scheduleTrigger (multiple time-based triggers)
- cron (periodic execution)
- limit (batch size control)
- if (conditional logic)
- code (custom filtering and pagination logic)

### Observability

- errorTrigger + AI diagnosis to Error_log
- Processing state tracking
- Sales analytics logging

## Content & Compliance Rules

- **Locale-agnostic copy**; brand voice is configurable per store
- **Only image-verifiable attributes** (no guesses); clean HTML suitable for Shopify themes
- Optional normalization rules (e.g., color/branding cleanup, title sanitization)
- Style code inclusion supported when x-styleCode is present
- Gender-aware content generation when a gender tag is present
- **Strict JSON output** and schema consistency for safe downstream publishing

## Setup Steps

### Core integrations

- **Shopify Access Token**: Products read + Orders read (REST 2024-04)
- **OpenAI API**: gpt-4o vision
- **OpenRouter API**: Claude Sonnet (3.5)
- **Perplexity API**: vendor/market verification via perplexityTool
- **Google Sheets OAuth**: Products, ProcessingState, Error_log, sales analytics

### Configure sheets

- **ProcessingState** with fields: batch number, page_info_next
- **Products** with: Product ID, Product Title, Product Type, Vendor, Image url, Status, country of origin, x_style_code, gender, Generated Description
- **Error_log** with: timestamp, Reason of Error
- **Sales Analytics Sheet** with: Date, Total Sales

## Workflow Capabilities

- **Discovery and staging**: Auto-paginate Shopify; stage eligible products in Sheets with reasons and timestamps.
- **Vision-grounded copywriting**: Descriptions reflect only visible attributes plus verified brand context; concise, mobile-friendly structure with gender-aware tone.
- **Metadata awareness**: Auto-injects x-styleCode, country_of_origin, and gender when present; natural SEO for brand and product type.
- **Sales intelligence**: Automated daily sales tracking with Melbourne timezone support; handles zero-sale days and maintains complete historical records (a hedged sketch follows below).
- **Error analytics**: Layman + technical diagnosis logged to Error_log to shorten MTTR.
- **Safe output**: Structured JSON via outputParserStructured for predictable row updates.
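For orientation, the sales analytics enrichment (httpRequest → splitOut → summarize) can be sketched as a fetch-and-sum over yesterday's paid orders. The shop domain and token are placeholders, and the workflow's node-level details may differ.

```python
# Hedged sketch of the daily sales step: fetch yesterday's paid orders from
# the Shopify REST API and sum their totals, using the Melbourne timezone
# mentioned above for the daily cutoff.
from datetime import date, datetime, time, timedelta
from zoneinfo import ZoneInfo
import requests

SHOP = "your-store.myshopify.com"                  # placeholder
HEADERS = {"X-Shopify-Access-Token": "shpat_..."}  # placeholder token
MEL = ZoneInfo("Australia/Melbourne")

yesterday = date.today() - timedelta(days=1)
start = datetime.combine(yesterday, time.min, tzinfo=MEL)
end = datetime.combine(yesterday, time.max, tzinfo=MEL)

resp = requests.get(
    f"https://{SHOP}/admin/api/2024-04/orders.json",
    params={
        "status": "any",
        "financial_status": "paid",
        "created_at_min": start.isoformat(),
        "created_at_max": end.isoformat(),
        "limit": 250,
    },
    headers=HEADERS,
    timeout=60,
)
orders = resp.json().get("orders", [])
total = sum(float(o["total_price"]) for o in orders)  # 0.0 on zero-sale days
print(yesterday.isoformat(), total)  # row appended to the sales sheet
```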
## Credentials Required

- **Shopify Access Token** (Products + Orders read permissions)
- **OpenAI API Key** (GPT-4o vision)
- **OpenRouter API Key** (Claude Sonnet)
- **Perplexity API Key**
- **Google Sheets OAuth**

## Ideal For

- **E-commerce teams** scaling compliant, on-brand product copy with comprehensive sales insights
- **Agencies and SEO specialists** standardizing image-grounded descriptions with performance tracking and analytics
- **Stores** needing resumable pagination, auditable content operations, and automated daily sales reporting in Sheets

## Advanced Features

- **Dual-workflow architecture**: content generation + sales analytics in one system
- Link-header pagination with page_info persistence in ProcessingState
- Title/content normalization (e.g., color removal) configurable per brand
- **Gender-aware copywriting** based on product tags
- Memory windows (memoryBufferWindow) to keep multi-step prompts consistent
- **Melbourne timezone support** for accurate daily sales cutoffs
- **Zero-sales handling** ensures complete analytics continuity
- Structured Output enforcement for downstream safety
- **AI-powered error diagnosis** with technical and layman explanations

## Time & Scheduling (Universal)

The workflow includes two independent schedules:
- **Content Generation**: Every 5 minutes (configurable) for product processing
- **Sales Analytics**: Daily at 2:01 PM Melbourne time for the previous day's sales

For globally distributed teams, schedule triggers and timestamps can be standardized on UTC to avoid regional drift.

## Pro Tip

Start with small batches (limit set to 10 or fewer) to validate both the copy generation and sales tracking flows. The workflow handles the two operations independently: content generation failures won't affect sales analytics and vice versa. Monitor the Error_log sheet for any issues and use the ProcessingState sheet to track pagination progress.