by Trung Tran
🎙️ VoiceScribe AI: Telegram Audio Message Auto Transcription with OpenAI Whisper

> Automatically transcribe Telegram voice messages and store them as structured logs in Google Sheets, while backing up the audio in Google Drive.

🧑‍💼 Who's it for

Journalists, content creators, or busy professionals who often record voice memos or short interviews on the go. Anyone who wants to turn voice recordings into searchable, structured notes.

⚙️ How it works / What it does

1. A user sends a voice message to a Telegram bot.
2. n8n checks whether the message is an audio voice note.
3. If valid, it downloads the audio file and:
   - Transcribes it using OpenAI Whisper (or your LLM of choice).
   - Uploads the original audio to Google Drive for safekeeping.
4. The transcript and audio metadata are merged. The workflow then:
   - Logs the data into a Google Sheet.
   - Sends a formatted confirmation message to the user via Telegram.
5. If the input is not audio, the bot politely informs the user that only voice messages are accepted.

✅ Features

- Accepts only Telegram voice messages.
- Transcribes via OpenAI Whisper.
- Logs DateTime, Duration, Transcript, and Audio URL to Google Sheets.
- Sends a feedback message via Telegram with a download link and the transcript.

🚀 How to set up

Prerequisites

- Telegram bot connected to n8n (via Telegram Trigger)
- Google Drive & Google Sheets credentials configured
- OpenAI or Whisper API credentials (for transcription)

Steps

1. Telegram Trigger: Start the flow when a new message is sent to your bot.
2. Check Message Type: Use a conditional node to confirm it's a voice message.
3. Download Voice Message: Download the .oga file from Telegram.
4. Transcribe Audio: Send the binary audio to OpenAI Whisper or your transcription service.
5. Upload to Google Drive: Back up the original audio file.
6. Merge Outputs: Combine the transcription with the Drive metadata.
7. Transform to Row Format: Prepare structured JSON for Google Sheets (see the sketch below).
8. Append to Google Sheet: Store the transcript log (DateTime, Duration, Transcript, AudioURL).
9. Send Confirmation to User: Inform the user via Telegram with their transcript and download link.
10. Unsupported Message Handler: Reply to users who send non-audio messages.

📄 Example Output in Google Sheet

| DateTime             | Duration (s) | Transcript                          | AudioURL                                            |
|----------------------|--------------|-------------------------------------|-----------------------------------------------------|
| 2025-08-07T13:12:19Z | 27           | The Outlet Activation project is... | https://drive.google.com/uc?id=xxxx&export=download |

🧠 How to customize the workflow

- Swap Whisper for Deepgram, AssemblyAI, or other providers.
- Add speaker name detection or prompt-based tagging via GPT.
- Route transcripts into Notion, Airtable, or CRM systems.
- Add multi-language support or summarization steps.

📦 Requirements

| Component                | Required |
|--------------------------|----------|
| Telegram API             | ✅       |
| Google Drive API         | ✅       |
| Google Sheets API        | ✅       |
| OpenAI Whisper API       | ✅       |
| n8n Cloud or Self-hosted | ✅       |

Created with ❤️ using n8n
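The Transform to Row Format step (step 7) is a natural fit for a small Code node. Below is a minimal sketch, assuming the transcription node returns a `text` field, the Drive node returns a file `id`, and the voice note's `duration` is still available from the Telegram trigger; the node names and field names are assumptions, so match them to your actual workflow.

```javascript
// n8n Code node: merge the transcript and Drive metadata into one sheet row.
// Node names ('Transcribe Audio', etc.) and field names are assumptions;
// check the actual output of your Whisper and Drive nodes.
const transcript = $('Transcribe Audio').first().json.text ?? '';
const fileId = $('Upload to Google Drive').first().json.id;
const duration = $('Telegram Trigger').first().json.message?.voice?.duration ?? 0;

return [{
  json: {
    DateTime: new Date().toISOString(),   // e.g. 2025-08-07T13:12:19Z
    Duration: duration,                   // seconds, from Telegram metadata
    Transcript: transcript,
    AudioURL: `https://drive.google.com/uc?id=${fileId}&export=download`,
  },
}];
```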
by Ranjan Dailata
Who this is for

The TrustPilot SaaS Product Review Tracker is designed for product managers, SaaS growth teams, customer experience analysts, and marketing teams who need to extract, summarize, and analyze customer feedback at scale from TrustPilot.

This workflow is tailored for:

- **Product Managers** - Monitoring feedback to drive feature improvements
- **Customer Support & CX Teams** - Identifying sentiment trends or recurring issues
- **Marketing & Growth Teams** - Leveraging testimonials and market perception
- **Data Analysts** - Tracking competitor reviews and benchmarking
- **Founders & Executives** - Wanting aggregated insights into customer satisfaction

What problem is this workflow solving?

Manually monitoring, extracting, and summarizing TrustPilot reviews is time-consuming, fragmented, and hard to scale across multiple SaaS products. This workflow automates the entire process, from unlocking the data behind anti-bot layers to summarizing and storing customer insights, enabling teams to respond faster, spot trends, and make data-backed product decisions.

This workflow solves:

- The challenge of scraping protected review data (using Bright Data Web Unlocker)
- The need for structured insights from unstructured review content
- The lack of automated delivery to storage and alerting systems like Google Sheets or webhooks

What this workflow does

- Extract TrustPilot Reviews: Uses Bright Data Web Unlocker to bypass anti-bot protections and pull markdown-based content from product review pages
- Convert Markdown to Text: Leverages a basic LLM chain to clean and convert scraped markdown into plain text
- Structured Information Extraction: Uses OpenAI GPT-4o via the Information Extractor node to extract fields like product name, review date, rating, and reviewer sentiment
- Summarization Chain: Generates concise summaries of overall review sentiment and themes using OpenAI
- Merge & Aggregate Output: Consolidates individual extracted records into a structured batch output
- Outbound Data Delivery:
  - Google Sheets: Appends summary and structured review data
  - Write to Disk: Persists raw and processed content locally
  - Webhook Notification: Sends a real-time alert with summarized insights

Pre-conditions

- You need a Bright Data account and must complete the setup described in the "Setup" section below.
- You need an OpenAI account.

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.
How to customize this workflow to your needs

Target Multiple Products

- Configure the Bright Data input URL dynamically for different SaaS product TrustPilot URLs
- Loop through a product list and run parallel jobs for each

Customize Extraction Fields

Update the prompt in the Information Extractor to include:

- Review title
- Response from company
- Specific feature mentions
- Competitor references

Tune Summarization Style

- **Change tone**: executive summary, customer pain-point focus, or marketing quote extract
- **Enable sentiment aggregation** (e.g., 30% negative, 50% neutral, 20% positive), as shown in the sketch below

Expand Output Destinations

- Push to Notion, Airtable, or CRM tools using additional webhook nodes
- Generate and send PDF reports (via PDFKit or HTML-to-PDF nodes)
- Schedule summary digests via Gmail or Slack
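If you enable sentiment aggregation, a small Code node placed after the Information Extractor can compute the percentages. A minimal sketch, assuming each extracted item carries a `sentiment` field with values like positive/neutral/negative (adjust the field name and labels to whatever your extraction prompt produces):

```javascript
// Aggregate sentiment labels from the Information Extractor output.
// The `sentiment` field name and its label values are assumptions.
const counts = { positive: 0, neutral: 0, negative: 0 };

for (const item of $input.all()) {
  const label = (item.json.sentiment || 'neutral').toLowerCase();
  if (label in counts) counts[label] += 1;
}

const total = Object.values(counts).reduce((a, b) => a + b, 0) || 1;
const pct = (n) => Math.round((n / total) * 100);

return [{
  json: {
    total,
    positive: `${pct(counts.positive)}%`,
    neutral: `${pct(counts.neutral)}%`,
    negative: `${pct(counts.negative)}%`,
  },
}];
```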
by Autonomous Work
This workflow exports every table in a base as its own CSV, saves the files in a time-stamped folder in Amazon S3, pings you on Slack, and optionally prunes older copies. You get an automated weekly backup that is easy to inspect or re-import as needed. You can easily swap the S3 node for the storage provider of your choice.

How it works

Weekly Backup

- Schedule trigger fires weekly
- Sets and formats the week identifier, e.g. 2025-W12 (see the sketch below)
- Creates a folder in the S3 bucket named for the week
- Loops through all tables in the Airtable base, creating CSVs and uploading them to the new path
- Slack message is sent on completion

Monthly Prune

- Schedule trigger fires weekly
- Sets a cut-off date 4 weeks in the past
- Lists folders in S3
- Deletes all folders more than 4 weeks old

Setup Steps

1. Clone the workflow
2. Swap in your credentials for Airtable, AWS, and Slack
3. Ensure the AWS credential has an appropriate IAM policy to manage the bucket & objects
4. Set the workflow to "Active"
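The week label drives the S3 folder naming. Here is a minimal sketch of that step as a Code node, assuming you want ISO-8601 week numbering; the workflow's Set node may instead use a Luxon expression (n8n ships Luxon, where `$now.toFormat("kkkk-'W'WW")` produces the same label).

```javascript
// Compute an ISO-8601 week label like "2025-W12" for the backup folder name.
function isoWeekLabel(date = new Date()) {
  // Work in UTC to avoid timezone edge cases.
  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
  // ISO weeks belong to the year of their Thursday.
  const day = d.getUTCDay() || 7;           // Mon=1 ... Sun=7
  d.setUTCDate(d.getUTCDate() + 4 - day);   // move to the week's Thursday
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((d - yearStart) / 86400000 + 1) / 7);
  return `${d.getUTCFullYear()}-W${String(week).padStart(2, '0')}`;
}

return [{ json: { weekFolder: isoWeekLabel() } }];
```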
by DataMinex
📊 Real-Time Flight Data Analytics Bot with Dynamic Chart Generation via Telegram

🚀 Template Overview

This advanced n8n workflow creates an intelligent Telegram bot that transforms raw CSV flight data into stunning, interactive visualizations. Users can generate professional charts on demand through a conversational interface, making data analytics accessible to anyone via messaging.

Key Innovation: Combines real-time data processing, the Chart.js visualization engine, and Telegram's messaging platform to deliver instant business intelligence insights.

🎯 What This Template Does

Transform your flight booking data into actionable insights with four powerful visualization types:

- 📈 **Bar Charts**: Top 10 busiest airlines by flight volume
- 🥧 **Pie Charts**: Flight duration distribution (Short/Medium/Long-haul)
- 🍩 **Doughnut Charts**: Price range segmentation with average pricing
- 📊 **Line Charts**: Price trend analysis across flight durations

Each chart includes auto-generated insights, percentages, and key business metrics delivered instantly to users' phones.

🏗️ Technical Architecture

Core Components

- Telegram Webhook Trigger: Captures user interactions and button clicks
- Smart Routing Engine: Conditional logic for command detection and chart selection
- CSV Data Pipeline: File reading → parsing → JSON transformation
- Chart Generation Engine: JavaScript-powered data processing with Chart.js
- Image Rendering Service: QuickChart API for high-quality PNG generation
- Response Delivery: Binary image transmission back to Telegram

Data Flow Architecture

User Input → Command Detection → CSV Processing → Data Aggregation → Chart Configuration → Image Generation → Telegram Delivery

🛠️ Setup Requirements

Prerequisites

- **n8n instance** (self-hosted or cloud)
- **Telegram Bot Token** from @BotFather
- **CSV dataset** with flight information
- **Internet connectivity** for the QuickChart API

Dataset Source

This template uses the Airlines Flights Data dataset from GitHub:
🔗 Dataset: Airlines Flights Data by Rohit Grewal

Required Data Schema

Your CSV file should contain these columns:

airline,flight,source_city,departure_time,arrival_time,duration,price,class,destination_city,stops

File Structure

/data/
└── flights.csv (download from the GitHub dataset above)

⚙️ Configuration Steps

1. Telegram Bot Setup
   - Create a new bot via @BotFather on Telegram
   - Copy your bot token
   - Configure the Telegram Trigger node with your token
   - Set the webhook URL in your n8n instance

2. Data Preparation
   - Download the dataset from Airlines Flights Data
   - Upload the CSV file to /data/flights.csv in your n8n instance
   - Ensure UTF-8 encoding
   - Verify the column headers match the dataset schema
   - Test file accessibility from n8n

3. Workflow Activation
   - Import the workflow JSON
   - Configure all Telegram nodes with your bot token
   - Test the /start command
   - Activate the workflow

🔧 Technical Implementation Details

Chart Generation Process

Bar Chart Logic:

```javascript
// Aggregate airline counts
const airlineCounts = {};
flights.forEach(flight => {
  const airline = flight.airline || 'Unknown';
  airlineCounts[airline] = (airlineCounts[airline] || 0) + 1;
});

// Generate Chart.js configuration
const chartConfig = {
  type: 'bar',
  data: { labels, datasets },
  options: { responsive: true, plugins: {...} }
};
```

Dynamic Color Schemes:

- Bar Charts: Professional blue gradient palette
- Pie Charts: Duration-based color coding (light→dark blue)
- Doughnut Charts: Price-tier specific colors (green→purple)
- Line Charts: Trend-focused red gradient with smooth curves

Performance Optimizations

- Efficient Data Processing: Single-pass aggregations with O(n) complexity
- Smart Caching: QuickChart handles image caching automatically
- Minimal Memory Usage: Stream processing for large datasets
- Error Handling: Graceful fallbacks for missing data fields

Advanced Features

Auto-Generated Insights:

- Statistical calculations (percentages, averages, totals)
- Trend analysis and pattern detection
- Business intelligence summaries
- Contextual recommendations

User Experience Enhancements:

- Reply keyboards for easy navigation
- Visual progress indicators
- Error recovery mechanisms
- Mobile-optimized chart dimensions (800x600px)

📈 Use Cases & Business Applications

Airlines & Travel Companies

- **Fleet Analysis**: Monitor airline performance and market share
- **Pricing Strategy**: Analyze competitor pricing across routes
- **Operational Insights**: Track duration patterns and efficiency

Data Analytics Teams

- **Self-Service BI**: Enable non-technical users to generate reports
- **Mobile Dashboards**: Access insights anywhere via Telegram
- **Rapid Prototyping**: Quick data exploration without complex tools

Business Intelligence

- **Executive Reporting**: Instant charts for presentations
- **Market Research**: Compare industry trends and benchmarks
- **Performance Monitoring**: Track KPIs in real time

🎨 Customization Options

Adding New Chart Types

1. Create a new Switch condition
2. Add the corresponding data processing node
3. Configure the Chart.js options
4. Update the user interface menu

Data Source Extensions

- Replace CSV with database connections
- Add real-time API integrations
- Implement data refresh mechanisms
- Support multiple file formats

Visual Customizations

```javascript
// Custom color palette
backgroundColor: ['#your-colors'],

// Advanced styling
borderRadius: 8,
borderSkipped: false,

// Animation effects
animation: { duration: 2000, easing: 'easeInOutQuart' }
```

🔒 Security & Best Practices

Data Protection

- Validate CSV input format
- Sanitize user inputs
- Implement rate limiting
- Secure file access permissions

Error Handling

- Graceful degradation for API failures
- User-friendly error messages
- Automatic retry mechanisms
- Comprehensive logging

📊 Expected Outputs

Sample Generated Insights

- "✈️ Vistara leads with 350+ flights, capturing 23.4% market share"
- "📈 Long-haul flights dominate at 61.1% of total bookings"
- "💰 Budget category (₹0-10K) represents 47.5% of all bookings"
- "📊 Average prices peak at ₹14K for 6-8 hour duration flights"

Performance Metrics

- **Response Time**: <3 seconds for chart generation
- **Image Quality**: 800x600px high-resolution PNG
- **Data Capacity**: Handles 10K+ records efficiently
- **Concurrent Users**: Scales with n8n instance capacity

🚀 Getting Started

1. Download the workflow JSON
2. Import it into your n8n instance
3. Configure Telegram bot credentials
4. Upload your flight data CSV
5. Test with the /start command
6. Deploy and share with your team

💡 Pro Tips

- **Data Quality**: Clean data produces better insights
- **Mobile First**: Charts are optimized for mobile viewing
- **Batch Processing**: Handles large datasets efficiently
- **Extensible Design**: Easy to add new visualization types

Ready to transform your data into actionable insights? Import this template and start generating professional charts in minutes! 🚀
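For reference, the hand-off from a Chart.js config to QuickChart can be as small as building one URL, which an HTTP Request node then downloads as a PNG for the Telegram send step. A minimal sketch, assuming the public quickchart.io endpoint and the 800x600 dimensions mentioned above; the example aggregation values are illustrative.

```javascript
// Turn a Chart.js config into a QuickChart PNG URL.
const airlineCounts = { Vistara: 350, IndiGo: 300 }; // example output of the aggregation step

const chartConfig = {
  type: 'bar',
  data: {
    labels: Object.keys(airlineCounts),
    datasets: [{ label: 'Flights', data: Object.values(airlineCounts) }],
  },
};

const url =
  'https://quickchart.io/chart' +
  '?width=800&height=600&format=png' +
  `&c=${encodeURIComponent(JSON.stringify(chartConfig))}`;

return [{ json: { chartUrl: url } }];
```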
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

The Legal Case Research Extractor is a powerful automated workflow designed for legal tech teams, researchers, law firms, and data scientists focused on transforming unstructured legal case data into actionable, structured insights.

This workflow is tailored for:

- Legal Researchers automating case law data mining
- Litigation Support Teams handling large volumes of case records
- LawTech Startups building AI-powered legal research assistants
- Compliance Analysts extracting case-specific insights
- AI Developers working on legal NLP, summarization, and search engines

What problem is this workflow solving?

Legal case data is often locked in semi-structured or raw HTML formats, scattered across jurisdiction-specific websites. Manually extracting and processing this data is tedious and inefficient.

This workflow automates:

- Extraction of legal case data via Bright Data's powerful MCP infrastructure
- Parsing of HTML into clean, readable text using the Google Gemini LLM
- Structuring and delivering the output through a webhook and file storage

What this workflow does

- Input: The Set the Legal Case Research URL node sets the legal case URL for the data extraction.
- Bright Data MCP Data Extractor: The Bright Data MCP Client For Legal Case Research node performs the legal case extraction via the Bright Data MCP tool scrape_as_html.
- Case Extractor: A Google Gemini based Case Extractor produces a paginated list of cases.
- Loop through Legal Case URLs: Receives a collection of legal case links to process; each URL represents a different case from a target legal website.
- Bright Data MCP Scraping: Utilizes Bright Data's scrape_as_html MCP mode to retrieve the raw HTML content of each legal case.
- Google Gemini LLM Extraction: Transforms raw HTML into clean, structured text and performs additional information extraction if required (e.g., case summary, court, jurisdiction).
- Webhook Notification: Sends the extracted legal case content to a configurable webhook URL, enabling downstream processing or storage in legal databases.
- Binary Conversion & File Persistence: Converts the structured text to binary format and saves the final response to disk for archival or further processing.

Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is highly essential. Please read this blog post: model-context-protocol
- You need a Bright Data account and must complete the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio
- You need to install the Bright Data MCP Server @brightdata/mcp
- You need to install n8n-nodes-mcp

Setup

1. Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp.
2. Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) Api account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect with the MCP Client (STDIO) account with the Bright Data MCP Server as shown below.
Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>

How to customize this workflow to your needs

Target New Legal Portals

- Modify the legal case input URLs to scrape from different state or federal case databases

Customize LLM Extraction

- Modify the prompt to extract specific fields: case number, plaintiff, case summary, outcome, legal precedents, etc.
- Add a summarization step if needed

Enhance Loop Handling

- Integrate with a Google Sheet or API to dynamically fetch case URLs
- Add error handling logic to skip failed cases and log them (see the sketch below)

Improve Security & Compliance

- Redact sensitive information before sending via webhook
- Store processed case data in encrypted cloud storage

Output Formats

- Save as PDF, JSON, or Markdown
- Enable output to cloud storage (S3, Google Drive) or legal document management systems
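For the loop-handling customization, a Code node wrapper like the one below keeps one failed case from aborting the whole run. This is a minimal sketch, assuming each incoming item carries a `caseUrl` field and that the per-case work happens inside the try block; in practice you would adapt it to where your scrape and extraction nodes sit in the loop.

```javascript
// Skip failed cases instead of failing the run; collect errors for a log.
const results = [];
const failures = [];

for (const item of $input.all()) {
  const caseUrl = item.json.caseUrl; // assumed field name
  try {
    // ... per-case processing would go here (scrape, extract, etc.) ...
    results.push({ json: { caseUrl, status: 'ok' } });
  } catch (err) {
    failures.push({ caseUrl, error: err.message });
  }
}

if (failures.length) {
  // Surface the failures as one extra item so a later node can log them.
  results.push({ json: { status: 'failed_cases', failures } });
}

return results;
```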
by Khairul Muhtadin
Tesseract - Money Mate Workflow Description

Disclaimer: This template requires the n8n-nodes-tesseractjs community node, which is only available on self-hosted n8n instances. You'll need a self-hosted n8n setup to use this workflow.

Who is this for?

This workflow is designed for individuals, freelancers, or small business owners who want an easy way to track expenses using Telegram. It's ideal for anyone looking to digitize receipts, whether from photos or text messages, using free tools and without advanced technical skills.

What problem does this workflow solve?

Manually entering receipt details into a spreadsheet or app is time-consuming and prone to mistakes. This workflow automates the process by extracting information from receipt images or text messages sent via Telegram, categorizing expenses, and sending back a clear, formatted summary. It saves time, reduces errors, and makes expense tracking effortless.

What this workflow does

The workflow listens for messages sent to a Telegram bot, which can be either text descriptions of expenses or photos of receipts. If a photo is sent, Tesseract (an open-source text recognition tool) extracts the text. If text is sent, it's processed directly. An AI model (LLaMA via OpenRouter) analyzes the input, categorizes it into expense types (e.g., Food & Beverages, Household, Transport), and creates a structured summary including store name, date, items, total, and category. The summary is then sent back to the user's Telegram chat.

Setup Instructions

Follow these step-by-step instructions to set up the workflow. No advanced technical knowledge is required, but you'll need a self-hosted n8n instance.

1. Set Up a Self-Hosted n8n Instance
   - If you don't have n8n installed, follow the n8n self-hosting guide to set it up. You can use platforms like Docker or a cloud provider (e.g., DigitalOcean, AWS).
   - Ensure your n8n instance is running and accessible via a web browser.

2. Install the Tesseract Community Node
   - In your n8n instance, go to Settings > Community Nodes in the sidebar.
   - Click Install a Community Node, then enter n8n-nodes-tesseractjs in the search bar.
   - Click Install and wait for confirmation. This node enables receipt image processing.
   - If you encounter issues, check the n8n community nodes documentation for troubleshooting.

3. Create a Telegram Bot
   - Open Telegram and search for @BotFather to start a new bot.
   - Send /start to BotFather, then /newbot to create your bot. Follow the prompts to name your bot (e.g., "MoneyMateBot").
   - BotFather will provide a Bot Token (e.g., 23872837287:ExampleExampleExample). Copy this token.
   - In n8n, go to Credentials > Add Credential, select Telegram API, and paste the token. Name the credential (e.g., "MoneyMateBot") and save.

4. Set Up OpenRouter for AI Processing
   - Sign up for a free account at OpenRouter.
   - In your OpenRouter dashboard, generate an API Key under the API section.
   - In n8n, go to Credentials > Add Credential, select OpenRouter API, and paste the API key. Name it (e.g., "OpenRouter Account") and save.
   - The free tier of OpenRouter's LLaMA model is sufficient for this workflow.

5. Import and Configure the Workflow
   - Download the workflow JSON file (provided separately or copy from the source).
   - In n8n, go to Workflows > Import Workflow and upload the JSON file.
   - Open the imported workflow ("Tesseract - Money Mate").
   - Ensure the Telegram Trigger and Send Expense Summary nodes use the Telegram credential you created.
   - Ensure the AI Analyzer node uses the OpenRouter credential.
   - Save the workflow.
6. Test the Workflow
   - Activate the workflow by toggling the Active switch in n8n.
   - In Telegram, find your bot (e.g., @MoneyMateBot) and send /start.
   - Test with a sample input (see "Example Inputs" below).
   - Check the n8n workflow execution panel to ensure data flows correctly. If errors occur, double-check credentials and node connections.

7. Activate for Continuous Use
   - Once tested, keep the workflow active in n8n. Your bot will now process any text or image sent to it via Telegram.

Example Inputs/Formats

To help the workflow process your data accurately, use clear and structured inputs. Below are examples of valid inputs.

Text Input Example

Send a message to your Telegram bot like this:

Bought coffee at Starbucks, Jalan Sudirman, yesterday. Total Rp 50,000. 2 lattes, each Rp 25,000.

Expected Output:

hello [Your Name] Ini Rekap Belanjamu
📋 Store: Starbucks
📍 Location: Jalan Sudirman
📅 Date: 2025-05-26
🛒 Items:
- Latte: Rp 25,000
- Latte: Rp 25,000
💸 Total: Rp 50,000
📌 Category: Food & Beverages

Image Input Example

Upload a photo of a receipt to your Telegram bot. The receipt should contain:

- Store name (e.g., "Alfamart")
- Address (e.g., "Jl. Gatot Subroto, Jakarta")
- Date and time (e.g., "27/05/2025 14:00")
- Items with prices (e.g., "Bread Rp 15,000", "Milk Rp 20,000")
- Total amount (e.g., "Total: Rp 35,000")

Expected Output:

hello [Your Name] Ini Rekap Belanjamu
📋 Store: Alfamart
📍 Location: Jl. Gatot Subroto, Jakarta
📅 Date: 2025-05-27 14:00
🛒 Items:
- Bread: Rp 15,000
- Milk: Rp 20,000
💸 Total: Rp 35,000
📌 Category: Household

Tips for Images:

- Ensure the receipt is well lit and the text is readable.
- Avoid blurry or angled photos for better Tesseract accuracy.

How to Customize This Workflow

- **Change Expense Categories**: In the AI Categorizer node, edit the prompt to include custom categories (e.g., add "Entertainment" or "Utilities" to the list: Food & Beverages, Household, Transport).
- **Modify Response Format**: In the Format Summary Message node, adjust the JavaScript code to change how the summary looks (e.g., add emojis, reorder fields). A sketch follows this list.
- **Save to a Database**: Add a node (e.g., Google Sheets or PostgreSQL) after the Format Summary Message node to store summaries.
- **Support Other Languages**: In the AI Categorizer node, update the prompt to handle additional languages (e.g., Spanish, Mandarin) by specifying them in the instructions.
- **Add Error Handling**: Enhance the Check Invalid Input node to catch more edge cases, like invalid dates.

All Free, End-to-End

This workflow is 100% free! It leverages:

- **Telegram Bot API**: Free via BotFather.
- **Tesseract**: Open-source text recognition.
- **LLaMA via OpenRouter**: Free tier available for AI processing.

Enjoy automating your expense tracking without any cost!

Made by: khmuhtadin
Need a custom? Contact me on LinkedIn or Web
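The Format Summary Message customization mentioned above boils down to string-building in a Code node. Here is a minimal sketch, assuming the AI analyzer step returns a parsed object with store, location, date, items, total, and category fields; the field names are assumptions, so match them to your parser output.

```javascript
// Build the Telegram summary text from the parsed receipt object.
// Assumed shape: { userName, store, location, date, items: [{ name, price }], total, category }
const r = $input.first().json;
const name = r.userName || 'there';

const itemLines = (r.items || [])
  .map((it) => `- ${it.name}: Rp ${it.price}`)
  .join('\n');

const message = [
  `hello ${name} Ini Rekap Belanjamu`,
  `📋 Store: ${r.store}`,
  `📍 Location: ${r.location}`,
  `📅 Date: ${r.date}`,
  `🛒 Items:`,
  itemLines,
  `💸 Total: Rp ${r.total}`,
  `📌 Category: ${r.category}`,
].join('\n');

return [{ json: { message } }];
```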
by Daniel Shashko
Note: This template is for self-hosted n8n instances only

You can use this workflow to fully automate website content monitoring and change detection on a weekly basis, even when there's no native node for scraping or structured comparison. It uses an AI-powered scraper and structured data extraction, and integrates Google Sheets, Drive, Docs, and email for seamless tracking and reporting.

Main Use Cases

- Monitor and report changes to websites (e.g., pricing, content, headings, FAQs) over time
- Automate web audits, compliance checks, or competitive benchmarking
- Generate detailed change logs and share them automatically with stakeholders

How it works

The workflow operates as a scheduled process, organized into these stages:

1. Initialization & Configuration
   Triggers weekly (or manually) and initializes key variables: Google Drive folder, spreadsheet IDs, notification emails, and test mode.

2. Input Retrieval
   Reads the list of URLs to be monitored from a Google Sheet.

3. Web Scraping & Structuring
   For each URL, an AI agent uses Bright Data's scrape_as_markdown tool to extract the full web page content. The workflow then parses this content into a well-structured JSON, capturing elements like metadata, headings, pricing, navigation, calls to action, contacts, banners, and FAQs.

4. Saving Current Week's Results
   The structured JSON is saved to Google Drive as the current week's snapshot for each monitored URL. The Google Sheet is updated with file references for traceability.

5. Comparison with Previous Snapshot
   If a prior week's file exists, it is downloaded and parsed. The workflow compares the current and previous JSON snapshots, detecting and categorizing all substantive content changes (e.g., new/updated plans, FAQ edits, contact info modifications). Optionally, in test mode, mock changes are introduced for demo and validation purposes.

6. Change Report Generation & Delivery
   A rich Markdown-formatted changelog is generated, summarizing the detected changes, and then converted to HTML. The changelog is uploaded to Google Docs and linked back to the tracking sheet. An HTML email with the full report and relevant links is sent to recipients.

Summary Flow:

1. Schedule/workflow trigger → initialize variables
2. Read URL list from spreadsheet
3. For each URL:
   - Scrape & structure as JSON
   - Save to Drive, update tracking sheet
   - If a previous week exists: download & parse it, compare, generate changelog
   - Convert to HTML, save to Docs, update Sheet
4. Email results

Benefits:

- Fully automated website change tracking with end-to-end reporting
- Adaptable and extensible for any set of monitored pages and content types
- Easy integration with Google Workspace tools for collaboration and storage
- Minimal manual intervention required after initial setup
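The comparison stage (stage 5) is essentially a recursive diff over two JSON snapshots. A minimal sketch of the core walk, assuming both snapshots follow the same key structure; the actual workflow goes further and categorizes the changes, but the idea is the same. The `previousSnapshot`/`currentSnapshot` field names are assumptions.

```javascript
// Recursively diff two JSON snapshots and collect changed paths.
function diffSnapshots(prev, curr, path = '', changes = []) {
  const keys = new Set([...Object.keys(prev ?? {}), ...Object.keys(curr ?? {})]);
  for (const key of keys) {
    const p = path ? `${path}.${key}` : key;
    const a = prev?.[key];
    const b = curr?.[key];
    if (typeof a === 'object' && a !== null && typeof b === 'object' && b !== null) {
      diffSnapshots(a, b, p, changes); // descend into nested sections
    } else if (JSON.stringify(a) !== JSON.stringify(b)) {
      changes.push({ path: p, before: a ?? null, after: b ?? null });
    }
  }
  return changes;
}

const previous = $input.first().json.previousSnapshot; // assumed field names
const current = $input.first().json.currentSnapshot;

return [{ json: { changes: diffSnapshots(previous, current) } }];
```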
by Thapani Sawaengsri
Description

This workflow automates compliance validation between a policy/procedure and a corresponding uploaded document. It leverages an AI agent to determine whether the content of the document aligns with the expectations outlined in the provided procedure or policy.

How It Works

1. Document Upload
   - A document (e.g., PDF) is uploaded via an HTTP Request Webhook.
   - The content is processed into vector embeddings using a Qdrant vector store and an embedding model.

2. Procedure Submission
   - A policy/procedure text and description are submitted via a second HTTP Request Webhook.
   - These serve as the basis for evaluating the uploaded document.

3. AI-Based Validation
   The AI agent receives:
   - The uploaded document (via vector embeddings)
   - The submitted procedure/policy text
   - The description/context

   It returns a structured compliance analysis including:
   - Summary of Compliance (sections that align with policy)
   - Summary of Non-Compliance (gaps or missing elements)
   - Supporting Text Citations (document evidence)
   - Confidence Level (0–100 score based on evidence quality)

Setup Instructions

Pre-Conditions / Requirements

An n8n instance running with access to:
- Qdrant (for vector storage)
- An embedding model (e.g., OpenAI, HuggingFace, or a local model)
- Optional: Microsoft Graph or another storage system for document retrieval

Workflow Setup

HTTP Request Node 1: Document Upload
- Accepts binary document files (PDF, DOCX, etc.).
- Extracts text, generates embeddings, and stores them in Qdrant.
- Returns a spDocumentId for reference.

HTTP Request Node 2: Procedure Submission
- Accepts a JSON payload with:

{
  "procedure": "Policy or procedure text",
  "description": "Brief context or objective",
  "spDocumentId": "ID of the uploaded document"
}

- Links the procedure to the previously uploaded document.

Order of Operations

1. Upload the document.
2. Submit the procedure referencing the same spDocumentId.
3. The AI agent evaluates compliance and returns the results.

Example Input & Output

Example Input: Document Upload (Webhook 1)

Request: binary file upload (example_policy.pdf)

Response:

{ "spDocumentId": "12345" }

Example Input: Procedure Submission (Webhook 2)

{
  "procedure": "All financial records must be retained for 7 years.",
  "description": "Retention policy compliance validation",
  "spDocumentId": "12345"
}

Example Output: AI Compliance Validation

{
  "compliance_summary": "The document includes a 7-year retention requirement for invoices and payroll records.",
  "non_compliance_summary": "No reference to retention of vendor contracts.",
  "citations": [
    { "text": "Invoices will be stored for 7 years.", "page": 4 }
  ],
  "confidence": 87
}
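From a caller's perspective, the order of operations above is just two HTTP requests. A minimal client sketch in JavaScript (Node 18+ ESM), assuming hypothetical webhook paths /webhook/upload-document and /webhook/submit-procedure on your n8n host; substitute your real webhook URLs and upload field name.

```javascript
// Step 1: upload the document; step 2: submit the procedure with the returned ID.
import { readFile } from 'node:fs/promises';

const BASE = 'https://your-n8n-host'; // hypothetical host

const form = new FormData();
form.append('file', new Blob([await readFile('example_policy.pdf')]), 'example_policy.pdf');

const uploadRes = await fetch(`${BASE}/webhook/upload-document`, { method: 'POST', body: form });
const { spDocumentId } = await uploadRes.json(); // e.g. "12345"

const validateRes = await fetch(`${BASE}/webhook/submit-procedure`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    procedure: 'All financial records must be retained for 7 years.',
    description: 'Retention policy compliance validation',
    spDocumentId,
  }),
});

console.log(await validateRes.json()); // compliance_summary, citations, confidence...
```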
by Akshay
Overview

This project is an AI-powered WhatsApp virtual receptionist built using n8n, designed to handle both text and voice-based customer messages automatically. The workflow integrates Google Gemini, Pinecone, and the WhatsApp Business API to provide intelligent, context-aware responses that feel natural and professional.

How It Works

1. Message Detection
   The workflow begins when a message arrives on WhatsApp. It identifies whether the message is text or voice and routes it accordingly.

2. Voice Message Handling
   - Audio messages are securely downloaded from WhatsApp.
   - The files are converted to Base64 format and sent to the Gemini API for transcription.
   - The transcribed text is then passed to the AI Agent for further processing.

3. AI Agent Processing
   The LangChain AI Agent acts as the brain of the system. It uses:
   - **Google Gemini Chat Model** for natural language understanding and response generation.
   - **Pinecone Vector Store** to retrieve company-specific information and product data.
   - **Memory Buffer** to remember the last 20 user messages, ensuring context-aware responses.

   The agent also follows a set of custom communication rules: replying only in approved languages, skipping greetings, and focusing on direct, helpful, and professional responses (e.g., product recommendations, support, or guidance).

4. Knowledge Retrieval
   The AI Agent connects to a Pinecone database containing detailed company data, such as product catalogs or service FAQs. Using Gemini-generated embeddings, it retrieves the most relevant information for each user query.

5. Response Delivery
   Once the AI Agent prepares the response, it is instantly sent back to the user via WhatsApp, completing the conversational loop.

Who It's For

This system is ideal for businesses seeking to automate their customer communication through WhatsApp. It's especially valuable for:

- **Product-based companies** with frequent customer inquiries.
- **Service providers** offering 24/7 customer assistance or quote requests.
- **SMBs** looking to scale their communication without hiring additional staff.

Tech Stack & Requirements

- **n8n**: Workflow automation and orchestration.
- **WhatsApp Cloud API**: For sending and receiving messages.
- **Google Gemini (PaLM)**: For LLM-based transcription and response generation.
- **Pinecone**: Vector database for product and service knowledge retrieval.
- **LangChain Integration**: For connecting memory, vector store, and reasoning tools.
- **Custom Business Rules**: Configurable within the AI Agent node to manage tone, style, and workflow behavior.

Key Features

- Handles both text and voice messages seamlessly.
- Responds in multiple languages, including English.
- Maintains conversation memory per user session.
- Retrieves accurate company-specific information using vector search.
- Fully automated, with customizable behavior for different industries or use cases.

Setup Instructions

1. Prerequisites

Before importing the workflow, ensure you have:

- An active n8n instance (self-hosted or n8n Cloud).
- **WhatsApp Cloud API credentials** from Meta.
- **Google Gemini API key** with model access (for chat and transcription).
- **Pinecone API key** with a preconfigured vector index containing your company data.

2. Environment Setup

- Install all required credentials under Settings → Credentials in n8n.
- Add environment variables (if applicable) for keys like:

GOOGLE_API_KEY=your_google_gemini_key
PINECONE_API_KEY=your_pinecone_key
WHATSAPP_ACCESS_TOKEN=your_whatsapp_token

3. Pinecone Configuration

- Create a Pinecone index named, for example, products-index.
- Upload company documents or product details as vector embeddings using Gemini or LangChain utilities.
- Adjust the retrieval limit in the Pinecone node settings for broader or narrower search responses.

4. WhatsApp API Configuration

- Set up a WhatsApp Business Account via the Meta Developer Dashboard.
- Create a webhook endpoint URL (n8n's public URL) to receive WhatsApp messages.
- Use the WhatsApp Trigger Node to capture messages in real time.

5. AI Agent Customization

You can personalize how the AI behaves by editing the system prompt inside the AI Agent node:

- Modify tone, response length, or product focus.
- Add new "rules" for language preferences or conversation flow.
- Include links or custom text output (e.g., quotation formats, product catalog messages).

6. Handling Voice Messages

- Ensure your WhatsApp Business Account has media message permissions enabled.
- Verify the HTTP Request node that connects to the Gemini API for transcription is correctly authenticated (a sketch of that call follows below).
- You can adjust the transcription model or prompt if you prefer shorter, keyword-based outputs.

7. Testing

- Send both text and voice messages from a test WhatsApp number.
- Check response time and message formatting.
- Use n8n's execution logs to debug errors (especially for media downloads or API credentials).

Customization Options

🧩 AI Behavior

- Modify the AI Agent's system message to adapt tone and personality (e.g., sales-oriented, support-driven).
- Update the memory length (default: last 20 messages) for longer or shorter conversations.

🌍 Multi-language Support

- Add or remove allowed languages in the rules section of the AI Agent node.
- For multilingual businesses, duplicate the AI Agent path and route messages by language detection.

📦 Industry Adaptation

- Swap the Pinecone dataset to suit different industries: retail, hospitality, logistics, etc.
- Replace product data with FAQs, customer records, or support documentation.
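The voice-handling step (Base64 audio into Gemini) can be sketched as a single HTTP call. This is a sketch only: the model name, endpoint version, and JSON field names below are assumptions based on the public Gemini REST API, so verify them against the current Gemini docs and your HTTP Request node settings.

```javascript
// Transcribe a downloaded WhatsApp voice note with the Gemini REST API.
// `audioBuffer` is the binary from the media-download step (a Node Buffer).
async function transcribeVoiceNote(audioBuffer) {
  const audioBase64 = audioBuffer.toString('base64');

  const res = await fetch(
    'https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent' +
      `?key=${process.env.GOOGLE_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        contents: [{
          parts: [
            { text: 'Transcribe this voice message verbatim.' },
            { inline_data: { mime_type: 'audio/ogg', data: audioBase64 } },
          ],
        }],
      }),
    },
  );

  const data = await res.json();
  return data.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
}
```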
by Don Jayamaha Jr
A fully autonomous HTX Spot Market AI Agent (Huobi AI Agent) built using GPT-4o and Telegram. This workflow is the primary interface, orchestrating all internal reasoning, trading logic, and output formatting.

⚙️ Core Features

- 🧠 LLM-Powered Intelligence: Built on GPT-4o with advanced reasoning
- ⏱️ Multi-Timeframe Support: 15m, 1h, 4h, and 1d indicator logic
- 🧩 Self-Contained Multi-Agent Workflow: No external subflows required
- 🧮 Real-Time HTX Market Data: Live spot price, volume, 24h stats, and order book
- 📲 Telegram Bot Integration: Interact via chat or schedule
- 🔄 Autonomous Runs: Support for webhook, schedule, or Telegram triggers

📥 Input Examples

| User Input    | Agent Action                                  |
| ------------- | --------------------------------------------- |
| btc           | Returns 15m + 1h analysis for BTC             |
| eth 4h        | Returns 4-hour swing data for ETH             |
| bnbusdt today | Full day snapshot with technicals + 24h stats |

🖥️ Telegram Output Sample

📊 BTC/USDT Market Summary
💰 Price: $62,400
📉 24h Stats: High $63,020 | Low $60,780 | Volume: 89,000 BTC
📈 1h Indicators:
• RSI: 68.1 → Overbought
• MACD: Bearish crossover
• BB: Tight squeeze forming
• ADX: 26.5 → Strengthening trend
📉 Support: $60,200
📈 Resistance: $63,800

🛠️ Setup Instructions

1. Create your Telegram bot using @BotFather
2. Add the bot token to your n8n Telegram credentials
3. Add your GPT-4o or OpenAI-compatible key under HTTP credentials in n8n
4. (Optional) Add your HTX API credentials if expanding to authenticated endpoints
5. Deploy this main workflow using:
   - ✅ Webhook (HTTP Request Trigger)
   - ✅ Telegram messages
   - ✅ Cron / scheduled automation

🎥 Live Demo

🧠 Internal Architecture

| Component           | Role                                                     |
| ------------------- | -------------------------------------------------------- |
| 🔄 Telegram Trigger | Entry point for external or manual signal                |
| 🧠 GPT-4o           | Symbol + timeframe extraction + strategy generation      |
| 📊 Data Collector   | Internal tools fetch price, indicators, order book, etc. |
| 🧮 Reasoning Layer  | Merges everything into a trading signal summary          |
| 💬 Telegram Output  | Sends formatted HTML report via Telegram                 |

📌 Use Case Examples

| Scenario                               | Outcome                                                 |
| -------------------------------------- | ------------------------------------------------------- |
| Auto-run every 4 hours                 | Sends new HTX signal summary to Telegram                |
| Human requests "eth 1h"                | Bot replies with real-time 1h chart-based summary       |
| System-wide trigger from another agent | Invokes webhook and returns response to parent workflow |

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
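The real-time market data feed can come from HTX's public REST API, which needs no authentication for spot tickers. A minimal sketch, assuming the public api.huobi.pro endpoint and the merged 24h ticker response shape; verify the host and fields against the current HTX docs before relying on it.

```javascript
// Fetch HTX 24h spot stats for a symbol, e.g. "btcusdt".
async function htxTicker(symbol) {
  const res = await fetch(`https://api.huobi.pro/market/detail/merged?symbol=${symbol}`);
  const body = await res.json();
  if (body.status !== 'ok') throw new Error(`HTX error: ${body['err-msg']}`);

  const t = body.tick;
  return {
    price: t.close,      // last trade price
    high24h: t.high,
    low24h: t.low,
    volume24h: t.amount, // base-currency volume
  };
}

console.log(await htxTicker('btcusdt'));
```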
by Trung Tran
Try It Out: HireMind – AI-Driven Resume Intelligence Pipeline!

This n8n template demonstrates how to automate resume screening and evaluation using AI to improve candidate processing and reduce manual HR effort. A smart and reliable resume screening pipeline for modern HR teams.

This workflow combines Google Drive (JD & CV storage), OpenAI (GPT-4-based evaluation), Google Sheets (position mapping + result log), and Slack/SendGrid integrations for real-time communication. Automatically extract, evaluate, and track candidate applications with clarity and consistency.

How it works

1. A candidate submits their application using a form that includes name, email, CV (PDF), and a selected job role.
2. The CV is uploaded to Google Drive for record-keeping and later reference.
3. The Profile Analyzer Agent reads the uploaded resume, extracts structured candidate information, and transforms it into a standardized JSON format using GPT-4 and a custom output parser.
4. The corresponding job description PDF is automatically retrieved via a Google Sheet lookup based on the selected job role.
5. The HR Expert Agent evaluates the candidate profile against the job description using another GPT-4 model, generating a structured assessment that includes strengths, gaps, and an overall recommendation.
6. The evaluation result is parsed and formatted for output. The evaluation score is used to mark the candidate as qualified or unqualified; based on that, an email is sent to the applicant or a message is sent to the hiring team for the next step in the process.
7. The final evaluation result is stored in a Google Sheet for long-term tracking and reporting.

Google Drive structure

├── jd                      # Google Drive folder to store your JDs (PDF)
│   ├── Backend_Engineer.pdf
│   ├── Azure_DevOps_Lead.pdf
│   └── ...
│
├── cv                      # Google Drive folder where the workflow uploads candidate resumes
│   ├── John_Doe_DevOps.pdf
│   ├── Jane_Smith_FullStack.pdf
│   └── ...
│
├── Positions (sample: https://docs.google.com/spreadsheets/d/1pW0muHp1NXwh2GiRvGVwGGRYCkcMR7z8NyS9wvSPYjs/edit?usp=sharing)
│                           # 📋 Mapping table: Job Role ↔ Job Description (link)
│   └── Columns:
│       - Job Role
│       - Job Description File URL (PDF in jd/)
│
└── Evaluation form (Google Sheet)  # ✅ Final AI evaluation results

How to use

1. Set up credentials and integrations:
   - Connect your OpenAI account (GPT-4 API).
   - Enable Google Cloud APIs:
     - Google Sheets API (for reading job roles and saving evaluation results)
     - Google Drive API (for storing CVs and job descriptions)
   - Set up SendGrid (to send email responses to candidates).
   - Connect Slack (to send messages to the hiring team).

2. Prepare your Google Drive structure:
   - Create a root folder, then inside it create:
     - /jd → store all job descriptions in PDF format
     - /cv → this is where candidate CVs will be uploaded automatically
   - Create a Google Sheet named Positions with the following structure:

| Job Role                    | Job Description Link                 |
|-----------------------------|--------------------------------------|
| Azure DevOps Engineer       | https://drive.google.com/xxx/jd1.pdf |
| Full-Stack Developer (.NET) | https://drive.google.com/xxx/jd2.pdf |

3. Update your application form:
   - Use the built-in form, or connect your own (e.g., Typeform, Tally, Webflow, etc.)
   - Ensure the Job Role dropdown matches exactly the roles in the Positions sheet.

4. Run the AI workflow. When a candidate submits the form:
   - Their CV is uploaded to the /cv folder.
   - The job role is used to match the JD from /jd.
   - The Profile Analyzer Agent extracts candidate info from the CV.
   - The HR Expert Agent evaluates the candidate against the matched JD using GPT-4 (see the schema sketch below).

5. Distribute and store results:
   - Store the evaluation results in the Evaluation form Google Sheet.
   - Optionally notify your team:
     - ✉️ Send an email to the candidate using SendGrid
     - 💬 Send a Slack message to the hiring team with a summary and next steps

Requirements

- OpenAI GPT-4 account for both the Profile Analyzer and HR Expert Agents
- Google Drive account (for storing CVs and the evaluation sheet)
- Google Sheets API credentials (for the JD source and evaluation results)

Need Help?

Join the n8n Discord or ask in the n8n Forum!

Happy Hiring! 🚀
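The HR Expert Agent's structured assessment works best with an explicit output schema for the parser. Below is a minimal sketch of what that JSON shape could look like; the field names are illustrative assumptions, so align them with your Structured Output Parser configuration and prompt.

```javascript
// Example schema for the HR Expert Agent's Structured Output Parser.
// Field names are illustrative; match them to your prompt and parser.
const evaluationSchema = {
  type: 'object',
  properties: {
    candidateName: { type: 'string' },
    jobRole: { type: 'string' },
    score: { type: 'number', description: 'Overall fit, 0-100' },
    strengths: { type: 'array', items: { type: 'string' } },
    gaps: { type: 'array', items: { type: 'string' } },
    recommendation: { type: 'string', enum: ['qualified', 'unqualified'] },
    summary: { type: 'string' },
  },
  required: ['candidateName', 'score', 'recommendation'],
};

// A downstream IF node can then branch on score/recommendation,
// e.g. score >= 70 → Slack message to the hiring team, else → SendGrid email.
```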
by Pinecone
Try it out

This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and Pinecone Assistant. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data, without the need to train your own LLM.

What is Pinecone Assistant?

Pinecone Assistant allows you to build production-grade chat and agent-based applications quickly. It abstracts away the complexities of implementing retrieval-augmented generation (RAG) systems by managing the chunking, embedding, storage, query planning, vector search, model orchestration, and reranking for you.

Prerequisites

- A Pinecone account and API key
- A GCP project with the Google Drive API enabled and configured
  - Note: when setting up the OAuth consent screen, skip steps 8-10 if running on localhost
- An OpenAI account and API key

Setup

1. Create a Pinecone Assistant in the Pinecone Console here
   - Name your Assistant n8n-assistant and create it in the United States region
   - If you use a different name or region, update the related nodes to reflect these changes
   - No need to configure a chat model or Assistant instructions

2. Set up your Google Drive OAuth2 API credential in n8n
   - In the File added node -> Credential to connect with, select Create new credential
   - Set the Client ID and Client Secret from the values generated in the prerequisites
   - Set the OAuth Redirect URL from the n8n credential in the Google Cloud Console (instructions)
   - Name this credential Google Drive account so that other nodes reference it

3. Set up the Pinecone API key credential in n8n
   - In the Upload file to assistant node -> PineconeApi section, select Create new credential
   - Paste your Pinecone API key into the API Key field

4. Set up the Pinecone MCP Bearer auth credential in n8n
   - In the Pinecone Assistant node -> Credential for Bearer Auth section, select Create new credential
   - Set the Bearer Token field to the Pinecone API key used in the previous step

5. Set up the OpenAI credential in n8n
   - In the OpenAI Chat Model node -> Credential to connect with, select Create new credential
   - Set the API Key field to your OpenAI API key

6. Add your files to a Drive folder named n8n-pinecone-demo in the root of your My Drive
   - If you use a different folder name, you'll need to update the Google Drive triggers to reflect that change

7. Activate the workflow or test it with a manual execution to ingest the documents

8. Chat with your docs!

Ideas for customizing this workflow

- Customize the System Message on the AI Agent node to your use case to indicate what kind of knowledge is stored in Pinecone Assistant
- Change the top_k value of results returned from the Assistant by adding "and should set a top_k of 3" to the System Message to help manage token consumption
- Configure the Context Window Length in the Conversation Memory node
- Swap out the Conversation Memory node for one that is more persistent
- Make the chat node publicly available or create your own chat interface that calls the chat webhook URL (a sketch follows below)

Need help?

You can find help by asking in the Pinecone Discord community, asking on the Pinecone Forum, or filing an issue on this repo.
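If you build your own chat interface, it only needs to POST to the workflow's chat webhook. A minimal client sketch, assuming the payload shape n8n's Chat Trigger typically expects (chatInput plus a stable sessionId so the Conversation Memory works per user); the URL is a placeholder for your own instance, so verify the exact payload against your trigger's settings.

```javascript
// Send one message to the n8n chat webhook and print the reply.
const CHAT_WEBHOOK_URL = 'https://your-n8n-host/webhook/<chat-trigger-id>/chat'; // placeholder

async function ask(question, sessionId) {
  const res = await fetch(CHAT_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      action: 'sendMessage',
      sessionId,          // keeps Conversation Memory per user
      chatInput: question,
    }),
  });
  const data = await res.json();
  return data.output ?? data; // AI Agent replies are typically under `output`
}

console.log(await ask('What does our onboarding doc say about SSO?', 'demo-session-1'));
```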