by Murtaja Ziad
An n8n workflow designed to shorten URLs using the Dub.co API. How it works: It shortens a URL using the Dub.co API, with the ability to use custom domains and projects. It updates the existing shortened URL if the slug has already been used. Estimated Time: Around 15 minutes. Requirements: A Dub.co account. Configuration: Configure the "API Auth" node to add your Dub.co API key, project slug, and the long URL. There are a few extras you can configure as well; click the "API Auth" node and fill in the fields. Detailed Instructions: Sticky notes within the workflow provide extensive setup information and guidance. Keywords: n8n workflow, dub.co, dub.sh, url shortener, short urls, short links
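As a rough illustration, the call the workflow makes can be thought of as a single authenticated POST to Dub's link-creation endpoint. The sketch below builds such a request object; the exact endpoint path, query parameter, and body field names are assumptions for illustration, so verify them against the "API Auth" node and Dub's API docs rather than treating this as the template's actual configuration.

```javascript
// Hypothetical sketch of the request the workflow sends to Dub.co.
// Endpoint, auth header, and field names are assumptions, not confirmed config.
function buildDubRequest({ apiKey, projectSlug, longUrl, slug, domain }) {
  return {
    method: 'POST',
    url: `https://api.dub.co/links?projectSlug=${encodeURIComponent(projectSlug)}`,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: {
      url: longUrl,                    // the long URL to shorten
      ...(slug ? { key: slug } : {}),  // optional custom slug
      ...(domain ? { domain } : {}),   // optional custom domain
    },
  };
}

const req = buildDubRequest({
  apiKey: 'dub_xxx',
  projectSlug: 'my-project',
  longUrl: 'https://example.com/very/long/path',
  slug: 'promo',
});
```

In the workflow itself, this shape corresponds to what the HTTP request node assembles from the fields you fill in on the "API Auth" node.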
by DataMinex
📊 Real-Time Flight Data Analytics Bot with Dynamic Chart Generation via Telegram 🚀 Template Overview This advanced n8n workflow creates an intelligent Telegram bot that transforms raw CSV flight data into stunning, interactive visualizations. Users can generate professional charts on-demand through a conversational interface, making data analytics accessible to anyone via messaging. Key Innovation: Combines real-time data processing, Chart.js visualization engine, and Telegram's messaging platform to deliver instant business intelligence insights. 🎯 What This Template Does Transform your flight booking data into actionable insights with four powerful visualization types: 📈 Bar Charts: Top 10 busiest airlines by flight volume 🥧 Pie Charts: Flight duration distribution (Short/Medium/Long-haul) 🍩 Doughnut Charts: Price range segmentation with average pricing 📊 Line Charts: Price trend analysis across flight durations Each chart includes auto-generated insights, percentages, and key business metrics delivered instantly to users' phones.
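The pie chart's duration bucketing can be sketched as a single pass over the parsed CSV rows. The 3-hour and 6-hour cutoffs below are illustrative assumptions; the template's own Code node may use different thresholds.

```javascript
// Sketch of the Short/Medium/Long-haul bucketing for the pie chart.
// Thresholds (3h, 6h) are assumed for illustration.
function bucketByDuration(flights) {
  const buckets = { 'Short-haul': 0, 'Medium-haul': 0, 'Long-haul': 0 };
  for (const f of flights) {
    const hours = parseFloat(f.duration) || 0;
    if (hours < 3) buckets['Short-haul'] += 1;
    else if (hours < 6) buckets['Medium-haul'] += 1;
    else buckets['Long-haul'] += 1;
  }
  return buckets;
}

const sample = [
  { duration: '2.17' },
  { duration: '5.83' },
  { duration: '12.25' },
  { duration: '8.5' },
];
const counts = bucketByDuration(sample);
```

The resulting counts map directly onto the pie chart's labels and data arrays.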
🏗️ Technical Architecture Core Components Telegram Webhook Trigger: Captures user interactions and button clicks Smart Routing Engine: Conditional logic for command detection and chart selection CSV Data Pipeline: File reading → parsing → JSON transformation Chart Generation Engine: JavaScript-powered data processing with Chart.js Image Rendering Service: QuickChart API for high-quality PNG generation Response Delivery: Binary image transmission back to Telegram Data Flow Architecture User Input → Command Detection → CSV Processing → Data Aggregation → Chart Configuration → Image Generation → Telegram Delivery 🛠️ Setup Requirements Prerequisites n8n instance (self-hosted or cloud) Telegram Bot Token from @BotFather CSV dataset with flight information Internet connectivity for QuickChart API Dataset Source This template uses the Airlines Flights Data dataset from GitHub: 🔗 Dataset: Airlines Flights Data by Rohit Grewal Required Data Schema Your CSV file should contain these columns: airline,flight,source_city,departure_time,arrival_time,duration,price,class,destination_city,stops File Structure /data/ └── flights.csv (download from GitHub dataset above) ⚙️ Configuration Steps 1. Telegram Bot Setup Create a new bot via @BotFather on Telegram Copy your bot token Configure the Telegram Trigger node with your token Set webhook URL in your n8n instance 2. Data Preparation Download the dataset from Airlines Flights Data Upload the CSV file to /data/flights.csv in your n8n instance Ensure UTF-8 encoding Verify column headers match the dataset schema Test file accessibility from n8n 3.
Workflow Activation Import the workflow JSON Configure all Telegram nodes with your bot token Test the /start command Activate the workflow 🔧 Technical Implementation Details Chart Generation Process Bar Chart Logic: // Aggregate airline counts const airlineCounts = {}; flights.forEach(flight => { const airline = flight.airline || 'Unknown'; airlineCounts[airline] = (airlineCounts[airline] || 0) + 1; }); // Generate Chart.js configuration const chartConfig = { type: 'bar', data: { labels, datasets }, options: { responsive: true, plugins: {...} } }; Dynamic Color Schemes: Bar Charts: Professional blue gradient palette Pie Charts: Duration-based color coding (light→dark blue) Doughnut Charts: Price-tier specific colors (green→purple) Line Charts: Trend-focused red gradient with smooth curves Performance Optimizations Efficient Data Processing: Single-pass aggregations with O(n) complexity Smart Caching: QuickChart handles image caching automatically Minimal Memory Usage: Stream processing for large datasets Error Handling: Graceful fallbacks for missing data fields Advanced Features Auto-Generated Insights: Statistical calculations (percentages, averages, totals) Trend analysis and pattern detection Business intelligence summaries Contextual recommendations User Experience Enhancements: Reply keyboards for easy navigation Visual progress indicators Error recovery mechanisms Mobile-optimized chart dimensions (800x600px) 📈 Use Cases & Business Applications Airlines & Travel Companies Fleet Analysis: Monitor airline performance and market share Pricing Strategy: Analyze competitor pricing across routes Operational Insights: Track duration patterns and efficiency Data Analytics Teams Self-Service BI: Enable non-technical users to generate reports Mobile Dashboards: Access insights anywhere via Telegram Rapid Prototyping: Quick data exploration without complex tools Business Intelligence Executive Reporting: Instant charts for presentations Market
Research: Compare industry trends and benchmarks Performance Monitoring: Track KPIs in real-time 🎨 Customization Options Adding New Chart Types Create new Switch condition Add corresponding data processing node Configure Chart.js options Update user interface menu Data Source Extensions Replace CSV with database connections Add real-time API integrations Implement data refresh mechanisms Support multiple file formats Visual Customizations // Custom color palette backgroundColor: ['#your-colors'], // Advanced styling borderRadius: 8, borderSkipped: false, // Animation effects animation: { duration: 2000, easing: 'easeInOutQuart' } 🔒 Security & Best Practices Data Protection Validate CSV input format Sanitize user inputs Implement rate limiting Secure file access permissions Error Handling Graceful degradation for API failures User-friendly error messages Automatic retry mechanisms Comprehensive logging 📊 Expected Outputs Sample Generated Insights "✈️ Vistara leads with 350+ flights, capturing 23.4% market share" "📈 Long-haul flights dominate at 61.1% of total bookings" "💰 Budget category (₹0-10K) represents 47.5% of all bookings" "📊 Average prices peak at ₹14K for 6-8 hour duration flights" Performance Metrics Response Time: <3 seconds for chart generation Image Quality: 800x600px high-resolution PNG Data Capacity: Handles 10K+ records efficiently Concurrent Users: Scales with n8n instance capacity 🚀 Getting Started Download the workflow JSON Import into your n8n instance Configure Telegram bot credentials Upload your flight data CSV Test with /start command Deploy and share with your team 💡 Pro Tips Data Quality: Clean data produces better insights Mobile First: Charts are optimized for mobile viewing Batch Processing: Handles large datasets efficiently Extensible Design: Easy to add new visualization types Ready to transform your data into actionable insights? Import this template and start generating professional charts in minutes! 🚀
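The QuickChart step in the architecture above amounts to URL-encoding a Chart.js configuration into a GET request against quickchart.io. A minimal sketch, with width/height matching the 800x600 charts mentioned in the template:

```javascript
// Build a QuickChart image URL from a Chart.js config object.
// QuickChart renders the config server-side and returns a PNG.
function buildQuickChartUrl(chartConfig, width = 800, height = 600) {
  const c = encodeURIComponent(JSON.stringify(chartConfig));
  return `https://quickchart.io/chart?width=${width}&height=${height}&c=${c}`;
}

const url = buildQuickChartUrl({
  type: 'bar',
  data: { labels: ['Vistara', 'Indigo'], datasets: [{ data: [350, 300] }] },
});
```

Fetching the resulting URL (for example with an n8n HTTP Request node set to binary response) yields the PNG that gets sent back to Telegram.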
by Ranjan Dailata
Notice Community nodes can only be installed on self-hosted instances of n8n. Who this is for The Legal Case Research Extractor is a powerful automated workflow designed for legal tech teams, researchers, law firms, and data scientists focused on transforming unstructured legal case data into actionable, structured insights. This workflow is tailored for: Legal Researchers automating case law data mining Litigation Support Teams handling large volumes of case records LawTech Startups building AI-powered legal research assistants Compliance Analysts extracting case-specific insights AI Developers working on legal NLP, summarization, and search engines What problem is this workflow solving? Legal case data is often locked in semi-structured or raw HTML formats, scattered across jurisdiction-specific websites. Manually extracting and processing this data is tedious and inefficient. This workflow automates: Extraction of legal case data via Bright Data's powerful MCP infrastructure Parsing of HTML into clean, readable text using Google Gemini LLM Structuring and delivering the output through webhook and file storage What this workflow does Input Set the Legal Case Research URL node is responsible for setting the legal case URL for the data extraction. Bright Data MCP Data Extractor Bright Data MCP Client For Legal Case Research node is responsible for the legal case extraction via the Bright Data MCP tool - scrape_as_html Case Extractor Google Gemini based Case Extractor is responsible for producing a paginated list of cases Loop through Legal Case URLs Receives a collection of legal case links to process Each URL represents a different case from a target legal website Bright Data MCP Scraping Utilizes Bright Data’s scrape_as_html MCP mode Retrieves raw HTML content of each legal case Google Gemini LLM Extraction Transforms raw HTML into clean, structured text Performs additional information extraction if required (e.g., case summary, court, jurisdiction etc.) 
Webhook Notification Sends extracted legal case content to a configurable webhook URL Enables downstream processing or storage in legal databases Binary Conversion & File Persistence Converts the structured text to binary format Saves the final response to disk for archival or further processing Pre-conditions Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post - model-context-protocol You need to have a Bright Data account and do the necessary setup as mentioned in the Setup section below. You need to have a Google Gemini API key. Visit Google AI Studio You need to install the Bright Data MCP Server @brightdata/mcp You need to install the n8n-nodes-mcp Setup Please make sure to set up n8n locally with MCP Servers by navigating to n8n-nodes-mcp Please make sure to install the Bright Data MCP Server @brightdata/mcp on your local machine. Sign up at Bright Data. Create a Web Unlocker proxy zone called mcp_unlocker on the Bright Data control panel. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or proxy). In n8n, configure the credentials to connect with the MCP Client (STDIO) account with the Bright Data MCP Server as shown below. Make sure to copy the Bright Data API_TOKEN within the Environments textbox above as API_TOKEN=<your-token>
Add a summarization step if needed Enhance Loop Handling Integrate with a Google Sheet or API to dynamically fetch case URLs Add error handling logic to skip failed cases and log them Improve Security & Compliance Redact sensitive information before sending via webhook Store processed case data in encrypted cloud storage Output Formats Save as PDF, JSON, or Markdown Enable output to cloud storage (S3, Google Drive) or legal document management systems
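The webhook notification step described above boils down to packaging the LLM-extracted fields into a JSON payload per case. The field names in this sketch are illustrative assumptions, not the template's actual schema; adapt them to whatever your Gemini prompt extracts.

```javascript
// Hypothetical payload shape for the webhook notification step.
// Field names are illustrative, not the template's actual schema.
function buildCasePayload(caseUrl, extracted) {
  return {
    source_url: caseUrl,
    extracted_at: new Date().toISOString(),
    case_summary: extracted.summary || '',
    court: extracted.court || 'Unknown',
    jurisdiction: extracted.jurisdiction || 'Unknown',
  };
}

const payload = buildCasePayload('https://example-court.gov/case/123', {
  summary: 'Contract dispute resolved in favor of plaintiff.',
  court: 'District Court',
});
```

Defaulting missing fields (rather than dropping them) keeps the downstream legal database schema stable even when the LLM cannot extract a value.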
by Khairul Muhtadin
Tesseract - Money Mate Workflow Description Disclaimer: This template requires the n8n-nodes-tesseractjs community node, which is only available on self-hosted n8n instances. You’ll need a self-hosted n8n setup to use this workflow. Who is this for? This workflow is designed for individuals, freelancers, or small business owners who want an easy way to track expenses using Telegram. It’s ideal for anyone looking to digitize receipts—whether from photos or text messages—using free tools, without needing advanced technical skills. What problem does this workflow solve? Manually entering receipt details into a spreadsheet or app is time-consuming and prone to mistakes. This workflow automates the process by extracting information from receipt images or text messages sent via Telegram, categorizing expenses, and sending back a clear, formatted summary. It saves time, reduces errors, and makes expense tracking effortless. What this workflow does The workflow listens for messages sent to a Telegram bot, which can be either text descriptions of expenses or photos of receipts. If a photo is sent, Tesseract (an open-source text recognition tool) extracts the text. If text is sent, it’s processed directly. An AI model (LLaMA via OpenRouter) analyzes the input, categorizes it into expense types (e.g., Food & Beverages, Household, Transport), and creates a structured summary including store name, date, items, total, and category. The summary is then sent back to the user’s Telegram chat. Setup Instructions Follow these step-by-step instructions to set up the workflow. No advanced technical knowledge is required, but you’ll need a self-hosted n8n instance. Set Up a Self-Hosted n8n Instance: If you don’t have n8n installed, follow the n8n self-hosting guide to set it up. You can use platforms like Docker or a cloud provider (e.g., DigitalOcean, AWS). Ensure your n8n instance is running and accessible via a web browser. 
Install the Tesseract Community Node: In your n8n instance, go to Settings > Community Nodes in the sidebar. Click Install a Community Node, then enter n8n-nodes-tesseractjs in the search bar. Click Install and wait for confirmation. This node enables receipt image processing. If you encounter issues, check the n8n community nodes documentation for troubleshooting. Create a Telegram Bot: Open Telegram and search for @BotFather to start a new bot. Send /start to BotFather, then /newbot to create your bot. Follow the prompts to name your bot (e.g., “MoneyMateBot”). BotFather will provide a Bot Token (e.g., 23872837287:ExampleExampleExample). Copy this token. In n8n, go to Credentials > Add Credential, select Telegram API, and paste the token. Name the credential (e.g., “MoneyMateBot”) and save. Set Up OpenRouter for AI Processing: Sign up for a free account at OpenRouter. In your OpenRouter dashboard, generate an API Key under the API section. In n8n, go to Credentials > Add Credential, select OpenRouter API, and paste the API key. Name it (e.g., “OpenRouter Account”) and save. The free tier of OpenRouter’s LLaMA model is sufficient for this workflow. Import and Configure the Workflow: Download the workflow JSON file (provided separately or copy from the source). In n8n, go to Workflows > Import Workflow and upload the JSON file. Open the imported workflow (“Tesseract - Money Mate”). Ensure the Telegram Trigger and Send Expense Summary nodes use the Telegram credential you created. Ensure the AI Analyzer node uses the OpenRouter credential. Save the workflow. Test the Workflow: Activate the workflow by toggling the Active switch in n8n. In Telegram, find your bot (e.g., @MoneyMateBot) and send /start. Test with a sample input (see “Example Inputs” below). Check the n8n workflow execution panel to ensure data flows correctly. If errors occur, double-check credentials and node connections. Activate for Continuous Use: Once tested, keep the workflow active in n8n. 
Your bot will now process any text or image sent to it via Telegram. Example Inputs/Formats To help the workflow process your data accurately, use clear and structured inputs. Below are examples of valid inputs: Text Input Example: Send a message to your Telegram bot like this: Bought coffee at Starbucks, Jalan Sudirman, yesterday. Total Rp 50,000. 2 lattes, each Rp 25,000. Expected Output: hello [Your Name] Ini Rekap Belanjamu 📋 Store: Starbucks 📍 Location: Jalan Sudirman 📅 Date: 2025-05-26 🛒 Items: Latte: Rp 25,000 Latte: Rp 25,000 💸 Total: Rp 50,000 📌 Category: Food & Beverages Image Input Example: Upload a photo of a receipt to your Telegram bot. The receipt should contain: Store name (e.g., "Alfamart") Address (e.g., "Jl. Gatot Subroto, Jakarta") Date and time (e.g., "27/05/2025 14:00") Items with prices (e.g., "Bread Rp 15,000", "Milk Rp 20,000") Total amount (e.g., "Total: Rp 35,000") Expected Output: hello [Your Name] Ini Rekap Belanjamu 📋 Store: Alfamart 📍 Location: Jl. Gatot Subroto, Jakarta 📅 Date: 2025-05-27 14:00 🛒 Items: Bread: Rp 15,000 Milk: Rp 20,000 💸 Total: Rp 35,000 📌 Category: Household Tips for Images: Ensure the receipt is well-lit and text is readable. Avoid blurry or angled photos for better Tesseract accuracy. How to Customize This Workflow Change Expense Categories: In the AI Categorizer node, edit the prompt to include custom categories (e.g., add "Entertainment" or "Utilities" to the list: Food & Beverages, Household, Transport). Modify Response Format: In the Format Summary Message node, adjust the JavaScript code to change how the summary looks (e.g., add emojis, reorder fields). Save to a Database: Add a node (e.g., Google Sheets or PostgreSQL) after the Format Summary Message node to store summaries. Support Other Languages: In the AI Categorizer node, update the prompt to handle additional languages (e.g., Spanish, Mandarin) by specifying them in the instructions.
Add Error Handling: Enhance the Check Invalid Input node to catch more edge cases, like invalid dates. All Free, End-to-End This workflow is 100% free! It leverages: Telegram Bot API: Free via BotFather. Tesseract: Open-source text recognition. LLaMA via OpenRouter: Free tier available for AI processing. Enjoy automating your expense tracking without any cost! Made by: khmuhtadin Need a custom workflow? Contact me on LinkedIn or Web
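For reference, the Format Summary Message node mentioned above could look roughly like the following. This is a hypothetical sketch based on the example outputs shown earlier; the actual node's field names and code may differ.

```javascript
// Hypothetical sketch of the Format Summary Message node's code,
// reconstructed from the example outputs (field names are assumptions).
function formatSummary(name, r) {
  const items = r.items.map(i => `${i.name}: Rp ${i.price}`).join('\n');
  return [
    `hello ${name}`,
    'Ini Rekap Belanjamu',
    `📋 Store: ${r.store}`,
    `📍 Location: ${r.location}`,
    `📅 Date: ${r.date}`,
    '🛒 Items:',
    items,
    `💸 Total: Rp ${r.total}`,
    `📌 Category: ${r.category}`,
  ].join('\n');
}

const msg = formatSummary('Andi', {
  store: 'Alfamart',
  location: 'Jl. Gatot Subroto, Jakarta',
  date: '2025-05-27 14:00',
  items: [{ name: 'Bread', price: '15,000' }, { name: 'Milk', price: '20,000' }],
  total: '35,000',
  category: 'Household',
});
```

Reordering the array entries or changing the emoji prefixes is all it takes to customize the summary layout.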
by Gofive
Template: Create an AI Knowledge Base Chatbot with Google Drive and OpenAI GPT (Venio/Salesbear) 📋 Template Overview This comprehensive n8n workflow template creates an intelligent AI chatbot that automatically transforms your Google Drive documents into a searchable knowledge base. The chatbot uses OpenAI's GPT models to provide accurate, context-aware responses based exclusively on your uploaded documents, making it perfect for customer support, internal documentation, and knowledge management systems. 🎯 What This Template Does Automated Knowledge Processing Real-time Document Monitoring: Automatically detects when files are added or updated in your designated Google Drive folder Intelligent Document Processing: Converts PDFs, text files, and other documents into searchable vector embeddings Smart Text Chunking: Breaks down large documents into optimally-sized chunks for better AI comprehension Vector Storage: Creates a searchable knowledge base that the AI can query for relevant information AI-Powered Chat Interface Webhook Integration: Receives questions via HTTP requests from any external platform (Venio/Salesbear) Contextual Responses: Maintains conversation history for natural, flowing interactions Source-Grounded Answers: Provides responses based strictly on your document content, preventing hallucinations Multi-platform Support: Works with any chat platform that can send HTTP requests 🔧 Pre-conditions and Requirements Required API Accounts and Permissions 1. Google Drive API Access Google Cloud Platform account Google Drive API enabled OAuth2 credentials configured Read access to your target Google Drive folder 2. OpenAI API Account Active OpenAI account with API access Sufficient API credits for embeddings and chat completions API key with appropriate permissions 3. n8n Instance n8n cloud account or self-hosted instance Webhook functionality enabled Ability to install community nodes (LangChain nodes) 4.
Target Chat Platform (Optional) API credentials for your chosen chat platform Webhook capability or API endpoints for message sending Required Permissions Google Drive: Read access to folder contents and file downloads OpenAI: API access for text-embedding-ada-002 and gpt-4o-mini models External Platform: API access for sending/receiving messages (if integrating with existing chat systems) 🚀 Detailed Workflow Operation Phase 1: Knowledge Base Creation File Monitoring: Two trigger nodes continuously monitor your Google Drive folder for new files or updates Document Discovery: When changes are detected, the workflow searches for and identifies the modified files Content Extraction: Downloads the actual file content from Google Drive Text Processing: Uses LangChain's document loader to extract text from various file formats Intelligent Chunking: Splits documents into overlapping chunks (configurable size) for optimal AI processing Vector Generation: Creates embeddings using OpenAI's text-embedding-ada-002 model Storage: Stores vectors in an in-memory vector store for instant retrieval Phase 2: Chat Interaction Question Reception: Webhook receives user questions in JSON format Data Extraction: Parses incoming data to extract chat content and session information AI Processing: AI Agent analyzes the question and determines relevant context Knowledge Retrieval: Searches the vector store for the most relevant document sections Response Generation: OpenAI generates responses based on found content and conversation history Authentication: Validates the request using token-based authentication Response Delivery: Sends the answer back to the originating platform 📚 Usage Instructions After Setup Adding Documents to Your Knowledge Base Upload Files: Simply drag and drop documents into your configured Google Drive folder Supported Formats: PDFs, TXT, DOC, DOCX, and other text-based formats Automatic Processing: The workflow will automatically detect and process new files
within minutes Updates: Modify existing files, and the knowledge base will automatically update Integrating with Your Chat Platform Webhook URL: Use the generated webhook URL to send questions POST https://your-n8n-domain/webhook/your-custom-path Content-Type: application/json { "body": { "Data": { "ChatMessage": { "Content": "What are your business hours?", "RoomId": "user-123-session", "Platform": "web", "User": { "CompanyId": "company-456" } } } } } Response Format: The chatbot returns structured responses that your platform can display Testing Your Chatbot Initial Test: Send a simple question about content you know exists in your documents Context Testing: Ask follow-up questions to test conversation memory Edge Cases: Try questions about topics not in your documents to verify appropriate responses Performance: Monitor response times and accuracy 🎨 Customization Options System Message Customization Modify the AI Agent's system message to match your brand and use case: You are a [YOUR_BRAND] customer support specialist. You provide helpful, accurate information based on our documentation. Always maintain a [TONE] tone and [SPECIFIC_GUIDELINES]. Response Behavior Customization Tone and Voice: Adjust from professional to casual, formal to friendly Response Length: Configure for brief answers or detailed explanations Fallback Messages: Customize what the bot says when it can't find relevant information Language Support: Adapt for different languages or technical terminologies Technical Configuration Options Document Processing Chunk Size: Adjust from 1000 to 4000 characters based on your document complexity Overlap: Modify overlap percentage for better context preservation File Types: Add support for additional document formats AI Model Configuration Model Selection: Switch between gpt-4o-mini (cost-effective) and gpt-4 (higher quality) Temperature: Adjust creativity vs. factual accuracy (0.0 to 1.0) Max Tokens: Control response length limits Memory and Context Conversation Window: Adjust how many previous messages to remember Session Management: Configure session timeout and user identification Context Retrieval: Tune how many document chunks to consider per query Integration Customization Authentication Methods Token-based: Default implementation with bearer tokens API Key: Simple API key validation OAuth: Full OAuth2 implementation for secure access Custom Headers: Validate specific headers or signatures Response Formatting JSON Structure: Customize response format for your platform Markdown Support: Enable rich text formatting in responses Error Handling: Define custom error messages and codes 🎯 Specific Use Case Examples Customer Support Chatbot Scenario: E-commerce company with product documentation, return policies, and FAQ documents Setup: Upload product manuals, policy documents, and common questions to Google Drive Customization: Professional tone, concise answers, escalation triggers for complex issues Integration: Website chat widget, mobile app, or customer portal Internal HR Knowledge Base Scenario: Company HR department with employee handbook, policies, and procedures Setup: Upload HR policies, benefits information, and procedural documents Customization: Friendly but professional tone, detailed policy explanations Integration: Internal Slack bot, employee portal, or HR ticketing system Technical Documentation Assistant Scenario: Software company with API documentation, user guides, and troubleshooting docs Setup: Upload API docs, user manuals, and technical specifications Customization: Technical tone, code examples, step-by-step instructions Integration: Developer portal, support ticket system, or documentation website Educational Content Helper Scenario: Educational institution with course materials, policies, and student resources Setup: Upload syllabi, course content, academic policies, and
student guides Customization: Helpful and encouraging tone, detailed explanations Integration: Learning management system, student portal, or mobile app Healthcare Information Assistant Scenario: Medical practice with patient information, procedures, and policy documents Setup: Upload patient guidelines, procedure explanations, and practice policies Customization: Compassionate tone, clear medical explanations, disclaimer messaging Integration: Patient portal, appointment system, or mobile health app 🔧 Advanced Customization Examples Multi-Language Support // In Edit Fields node, detect language and route accordingly const language = $json.body.Data.ChatMessage.Language || 'en'; const systemMessage = { 'en': 'You are a helpful customer support assistant...', 'es': 'Eres un asistente de soporte al cliente útil...', 'fr': 'Vous êtes un assistant de support client utile...' }; Department-Specific Routing // Route questions to different knowledge bases based on department const department = $json.body.Data.ChatMessage.Department; const vectorStoreKey = `vector_store_${department}`; Advanced Analytics Integration // Track conversation metrics const analytics = { userId: $json.body.Data.ChatMessage.User.Id, timestamp: new Date().toISOString(), question: $json.body.Data.ChatMessage.Content, response: $json.response, responseTime: $json.processingTime }; 📊 Performance Optimization Tips Document Management Optimal File Size: Keep documents under 10MB for faster processing Clear Structure: Use headers and sections for better chunking Regular Updates: Remove outdated documents to maintain accuracy Logical Organization: Group related documents in subfolders Response Quality System Message Refinement: Regularly update based on user feedback Context Tuning: Adjust chunk size and overlap for your specific content Testing Framework: Implement systematic testing for response accuracy User Feedback Loop: Collect and analyze user satisfaction data Cost Management Model Selection: Use gpt-4o-mini for cost-effective responses Caching Strategy: Implement response caching for frequently asked questions Usage Monitoring: Track API usage and set up alerts Batch Processing: Process multiple documents efficiently 🛡️ Security and Compliance Data Protection Document Security: Ensure sensitive documents are properly secured Access Control: Implement proper authentication and authorization Data Retention: Configure appropriate data retention policies Audit Logging: Track all interactions for compliance Privacy Considerations User Data: Minimize collection and storage of personal information Session Management: Implement secure session handling Compliance: Ensure adherence to relevant privacy regulations Encryption: Use HTTPS for all communications 🚀 Deployment and Scaling Production Readiness Environment Variables: Use environment variables for sensitive configurations Error Handling: Implement comprehensive error handling and logging Monitoring: Set up monitoring for workflow health and performance Backup Strategy: Ensure document and configuration backups Scaling Considerations Load Testing: Test with expected user volumes Rate Limiting: Implement appropriate rate limiting Database Scaling: Consider external vector database for large-scale deployments Multi-Instance: Configure for multiple n8n instances if needed 📈 Success Metrics and KPIs Quantitative Metrics Response Accuracy: Percentage of correct answers Response Time: Average time from question to answer User Satisfaction: Rating scores and feedback Usage Volume: Questions per day/week/month Cost Efficiency: Cost per interaction Qualitative Metrics User Feedback: Qualitative feedback on response quality Use Case Coverage: Percentage of user needs addressed Knowledge Gaps: Identification of missing information Conversation Quality: Natural flow and context understanding
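The overlapping chunking strategy described in the knowledge-base phase can be sketched with a simple character-based splitter. The template itself uses LangChain's text splitter, which is more sophisticated (it prefers sentence and paragraph boundaries), but the core idea of the 1000-character chunks with overlap is this:

```javascript
// Simplified character-based chunker with overlap. The template's LangChain
// splitter is smarter about boundaries; this shows the core mechanics only.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step back by `overlap` to preserve context
  }
  return chunks;
}

const chunks = chunkText('a'.repeat(2500), 1000, 200);
```

The overlap means each chunk repeats the tail of its predecessor, so a sentence that straddles a chunk boundary is still retrievable as a whole from at least one chunk.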
by Cheng Siong Chin
How It Works This workflow automates business intelligence reporting by aggregating data from multiple sources, processing it through AI models, and delivering formatted dashboards via email. Designed for business analysts, operations managers, and executive teams, it solves the challenge of manually compiling metrics from disparate systems into coherent reports. The system triggers on schedule or webhook, extracting data from Google Sheets, databases, and APIs. Raw data flows through transformation nodes that calculate KPIs, generate trend analyses, and create visualizations. AI models (OpenAI) provide natural language insights and anomaly detection. Results populate multiple dashboard templates—executive summary, departmental metrics, and detailed analytics—each tailored to specific stakeholder needs. Formatted reports are automatically distributed via Gmail with embedded charts and actionable recommendations. This eliminates hours of manual data gathering, reduces reporting errors, and ensures stakeholders receive timely, consistent insights. Setup Steps Configure Google Sheets credentials and specify source spreadsheet IDs Set up database connections (PostgreSQL, MySQL) with read-only access Add OpenAI API key for GPT-4 analytics and narrative generation Set Gmail OAuth credentials for automated email delivery Define recipient lists for each dashboard type (executive, departmental, detailed) Customize dashboard templates with company branding and preferred KPIs Prerequisites Active Google Workspace account with Sheets and Gmail access. Use Cases Automated weekly executive dashboards with YoY comparisons. Customization Modify dashboard templates to match corporate branding standards. Benefits Reduces report preparation time by 80% through full automation.
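The YoY comparison in the executive-dashboard use case reduces to a percentage-change calculation of the kind the workflow's transformation nodes perform. A minimal sketch:

```javascript
// Year-over-year percentage change, as computed in a KPI transformation node.
function yoyChange(current, previous) {
  if (previous === 0) return null; // change from a zero base is undefined
  return ((current - previous) / previous) * 100;
}

const revenueYoY = yoyChange(120000, 100000); // 20% growth
```

Guarding the zero-base case matters in practice: a metric that was zero last year would otherwise produce Infinity and break the formatted report.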
by Ranjan Dailata
Disclaimer Please note - This workflow is only available on self-hosted n8n, as it makes use of the Decodo community node for web scraping This n8n workflow automates the process of scraping, analyzing, and summarizing Amazon product reviews using Decodo’s Amazon Scraper, OpenAI GPT-4.1-mini, and Google Sheets for seamless reporting. It turns messy, unstructured customer feedback into actionable product insights — all without manual review reading. Who this is for This workflow is designed for: E-commerce product managers who need consolidated insights from hundreds of reviews. Brand analysts and marketing teams performing sentiment or trend tracking. AI and data engineers building automated review intelligence pipelines. Sellers and D2C founders who want to monitor customer satisfaction and pain points. Product researchers performing market comparison or competitive analysis. What problem this workflow solves Reading and analyzing hundreds or thousands of Amazon reviews manually is inefficient and subjective. This workflow automates the entire process — from data collection to AI summarization — enabling teams to instantly identify customer pain points, trends, and strengths. Specifically, it: Eliminates manual review extraction from product pages. Generates comprehensive and abstract summaries using GPT-4.1-mini. Centralizes structured insights into Google Sheets for visualization or sharing. Helps track product sentiment and emerging issues over time. What this workflow does Here’s a breakdown of the automation process: Set Input Fields Define your Amazon product URL, geo region, and desired file name. Decodo Amazon Scraper Fetches real-time product reviews from the Amazon product page, including star ratings and AI-generated summaries. Extract Reviews Node Extracts raw customer reviews and Decodo’s AI summary into a structured JSON format.
4. Perform Review Analysis (GPT-4.1-mini): Uses OpenAI GPT-4.1-mini to create two key summaries:
   - Comprehensive Review: A detailed summary that captures sentiment, recurring themes, and product pros/cons.
   - Abstract Review: A concise executive summary that captures the overall essence of user feedback.
5. Persist Structured JSON: Saves the raw and AI-enriched data to a local file for reference.
6. Append to Google Sheets: Uploads both the original reviews and AI summaries into a Google Sheet for ongoing analysis, reporting, or dashboard integration.

Outcome: You get a structured, AI-enriched dataset of Amazon product reviews — summarized, searchable, and easy to visualize.

Setup
Pre-requisite: If you are new to Decodo, please sign up at visit.decodo.com. Make sure to install the n8n community node for Decodo.

Step 1 — Import the Workflow
Open n8n and import the JSON workflow template. Ensure the following credentials are configured:
- Decodo account → Decodo API Key
- OpenAI account → OpenAI API Key
- Google Sheets account → Connected via OAuth

Step 2 — Input Product Details
In the Set node, replace:
- amazon_url → your product link (e.g., https://www.amazon.com/dp/B0BVM1PSYN)
- geo → your region (e.g., US, India)
- file_name → output file name (optional)

Step 3 — Connect Google Sheets
Link your desired Google Sheet for data storage. Ensure the sheet columns match:
- product_reviews
- all_reviews

Step 4 — Run the Workflow
Click Execute Workflow. Within seconds, your Amazon product reviews will be fetched, summarized by AI, and logged into Google Sheets.

How to customize this workflow
You can tailor this workflow for different use cases:
- **Add Sentiment Analysis** — Add another GPT node to classify reviews as positive, neutral, or negative.
- **Multi-Language Reviews** — Include a language detection node before summarization.
- **Send Alerts** — Add a Slack or Gmail node to notify when negative sentiment exceeds a threshold.
- **Store in Database** — Replace Google Sheets with MySQL, Postgres, or Notion nodes.
- **Visualization Layer** — Connect your Google Sheet to Looker Studio or Power BI for dynamic dashboards.
- **Alternative AI Models** — Swap GPT-4.1-mini with Gemini 1.5 Pro, Claude 3, or Mistral for experimentation.

Summary
This workflow transforms the tedious process of reading hundreds of Amazon reviews into a streamlined AI-powered insight engine. By combining Decodo’s scraping precision, OpenAI’s summarization power, and Google Sheets’ accessibility, it enables continuous review monitoring. In one click, it delivers comprehensive and abstract AI summaries, ready for your next product decision meeting or market strategy session.
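The Extract Reviews step described earlier can be sketched as an n8n-style Code-node transform. The Decodo response shape used here is a hedged assumption for illustration; check the actual node output in n8n, as the real field names may differ. The output keys match the Google Sheet columns the workflow expects (product_reviews, all_reviews).

```javascript
// Sketch of the "Extract Reviews" transform. The Decodo response shape
// assumed here ({ results: [{ content: { reviews, ai_summary } }] }) is
// illustrative only — inspect the real node output before relying on it.
function extractReviews(decodoResponse) {
  const content = (decodoResponse.results && decodoResponse.results[0]
    && decodoResponse.results[0].content) || {};
  const reviews = (content.reviews || []).map(r => ({
    rating: r.rating != null ? r.rating : null,
    title: r.title || "",
    text: r.text || "",
  }));
  return {
    product_reviews: content.ai_summary || "",
    all_reviews: reviews.map(r => `${r.rating}★ ${r.title}: ${r.text}`).join("\n"),
  };
}

const out = extractReviews({
  results: [{ content: {
    ai_summary: "Mostly positive; battery complaints recur.",
    reviews: [{ rating: 5, title: "Great", text: "Works well." }],
  }}],
});
```

The returned object can be appended directly to the Google Sheet, one column per key.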
by Pinecone
Try it out
This n8n workflow template lets you chat with your Google Drive documents (.docx, .json, .md, .txt, .pdf) using OpenAI and the Pinecone vector database. It retrieves relevant context from your files in real time so you can get accurate, context-aware answers about your proprietary data—without the need to train your own LLM.

Not interested in chunking and embedding your own data or figuring out which search method to use? Try our n8n quickstart for Pinecone Assistant here, or check out the full workflow to chat with your Google Drive documents here.

Prerequisites
- A Pinecone account
- A GCP project with the Google Drive API enabled and configured
- An OpenAI account and API key
- A Cohere account and API key

Setup
1. Create a Pinecone index in the Pinecone Console here:
   - Name your index n8n-dense-index
   - Select OpenAI's text-embedding-3-small
   - Set the Dimension to 1536
   - Leave everything else as default
   - If you use a different index name, update the related nodes to reflect this change
2. Use the Connect to Pinecone button to authenticate to Pinecone, or if you self-host n8n, create a Pinecone credential and add your Pinecone API key directly.
3. Set up your Google Drive OAuth2 API, OpenAI, and Cohere credentials in n8n.
4. Download these files and add them to a Drive folder named n8n-pinecone-demo in the root of your My Drive:
   - https://docs.pinecone.io/release-notes/2022.md
   - https://docs.pinecone.io/release-notes/2023.md
   - https://docs.pinecone.io/release-notes/2024.md
   - https://docs.pinecone.io/release-notes/2025.md
   - https://docs.pinecone.io/release-notes/2026.md
5. Activate the workflow or test it with a manual execution to ingest the documents.
6. Enter the chat prompts to chat with the Pinecone release notes:
   - What support does Pinecone have for MCP?
   - When was fetch by metadata released?

Ideas for customizing this workflow
- Use your own data and adjust the chunking strategy
- Update the AI Agent System Message to reflect how the Pinecone Vector Store Tool will be used.
  Be sure to include info on what data can be retrieved using that tool.
- Update the Pinecone Vector Store Tool Description to reflect what data you are storing in the Pinecone index

Need help?
You can find help by asking in the Pinecone Discord community or filing an issue on this repo.
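If you adjust the chunking strategy, the core idea can be sketched as a simple fixed-size chunker with overlap. This is an illustrative stand-in for the workflow's text-splitter node, not its exact implementation or settings.

```javascript
// Minimal fixed-size chunker with character overlap — an illustrative
// stand-in for the workflow's text-splitter node, not its real settings.
function chunkText(text, chunkSize = 800, overlap = 100) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
    start += chunkSize - overlap;                // step back by the overlap
  }
  return chunks;
}

const chunks = chunkText("a".repeat(2000), 800, 100);
// 3 chunks covering [0,800), [700,1500), [1400,2000)
```

Each chunk would then be embedded with text-embedding-3-small and upserted into the n8n-dense-index; overlap helps keep sentences that straddle a boundary retrievable from either side.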
by Davide
This workflow implements an AI-powered design and prototyping assistant that integrates Telegram, Google Gemini, and Google Stitch (MCP) to enable conversational UI generation and project management. Supported actions include:
- Creating new design projects
- Retrieving existing projects
- Listing projects and screens
- Fetching individual screens
- Generating new UI screens directly from text descriptions

Key Advantages
1. ✅ Conversational Design Workflow
Design and UI prototyping can be driven entirely through natural language. Users can create screens, explore layouts, or manage projects simply by chatting, without opening design tools.
2. ✅ Tight Integration with Google Stitch
By leveraging the Stitch MCP API, the workflow provides direct access to structured design capabilities such as screen generation, project management, and UI exploration, avoiding manual API calls or custom scripting.
3. ✅ Intelligent Tool Selection
The AI agent does not blindly call APIs. It first analyzes the user request, determines the required level of fidelity and intent, and then selects the most appropriate Stitch function or combination of functions.
4. ✅ Multi-Channel Support
The workflow supports both generic chat triggers and Telegram, making it flexible for internal tools, demos, or production chatbots.
5. ✅ Security and Access Control
Telegram access is restricted to a specific user ID, and execution only happens when a dedicated command is used. This prevents accidental or unauthorized usage.
6. ✅ Context Awareness with Memory
The inclusion of conversational memory allows the agent to maintain context across interactions, enabling iterative design discussions rather than isolated commands.
7. ✅ Production-Ready Output Formatting
Responses are automatically converted into Telegram-compatible HTML, ensuring clean, readable, and well-formatted messages without manual post-processing.
8.
✅ Extensible and Modular Architecture
The workflow is highly modular: additional Stitch tools, AI models, or communication channels can be added with minimal changes, making it future-proof and easy to extend.

How It Works
This workflow functions as a Telegram-powered AI agent that leverages Google Stitch's MCP (Model Context Protocol) tools for design, UI generation, and product prototyping. It combines conversational AI, tool-based actions, and web search capabilities.

Trigger & Authorization: The workflow is activated by an incoming message from a configured Telegram bot. A code node first checks the sender's Telegram User ID against a hardcoded value (xxx) to restrict access. Only authorized users can proceed.

Command Parsing: An IF node filters messages, allowing the agent to proceed only if the message text starts with the command /stitch. This ensures the agent is only invoked intentionally.

Query Preparation: The /stitch prefix is stripped from the message text, and the cleaned query, along with the user's ID (used as a session identifier), is passed to the main agent.

AI Agent Execution: The core "Google Stitch Agent" node is an LLM-powered agent (using Google Gemini) equipped with:
- Tools: Access to several Google Stitch MCP functions (create_project, get_project, list_projects, list_screens, get_screen, generate_screen_from_text) and a Perplexity web search tool.
- Memory: A conversation buffer window to maintain context within a session.
- System Prompt: Instructs the agent to intelligently select and use the appropriate Stitch tools based on the user's design-related request (e.g., generating screens from text, managing projects). It is directed to use web search when necessary for additional context.

Response Processing & Delivery: The agent's text output (in Markdown) is passed through another LLM chain ("From MD to HTML") that converts it to Telegram-friendly HTML. Finally, the formatted response is sent back to the user via the Telegram bot.
Set Up Steps
To make this workflow operational, you need to configure credentials and update specific nodes:

Telegram Bot Configuration:
- In the "Code" node (id: 08bfae9e...), replace xxx in the condition $input.first().json.message.from.id !== xxx with your actual Telegram User ID. This ensures only you can trigger the agent.
- Ensure the "Telegram Trigger" and "Send a text message" nodes have valid Telegram Bot credentials configured.

Google Stitch API Setup:
- Obtain an API key from Google Stitch.
- Configure the HTTP Header Auth credential named "Google Stitch" (referenced by all MCP tool nodes: Create Project, Get Project, etc.). Set the Header Auth with:
  - Name: X-Goog-Api-Key
  - Value: Your actual Google Stitch API Key (YOUR-API-KEY)

AI Model & Tool Credentials:
- Verify the credentials for the Google Gemini Chat Model nodes are correctly set up for API access.
- Verify the credentials for the Perplexity API node ("Search on web") are configured if web search functionality is required.

Activation:
Once all credentials are configured, set the workflow to Active. The Telegram webhook will be registered, and the workflow will listen for authorized messages containing the /stitch command.

👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on Linkedin.
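The authorization and command-parsing steps above can be sketched as a plain function. This is a hedged illustration of what the workflow's Code and IF nodes do together; the real workflow splits this across nodes and uses n8n expressions like $input.first().json.message.from.id.

```javascript
// Sketch of the guard logic: allow only one Telegram user ID, require
// a /stitch prefix, then strip it and build the agent input. A plain
// function for illustration — the workflow uses Code and IF nodes.
const AUTHORIZED_USER_ID = 123456789; // placeholder: your Telegram User ID

function handleMessage(message) {
  if (message.from.id !== AUTHORIZED_USER_ID) return null; // unauthorized sender
  if (!message.text.startsWith("/stitch")) return null;    // not the trigger command
  return {
    sessionId: String(message.from.id),                    // session identifier
    query: message.text.slice("/stitch".length).trim(),    // cleaned query
  };
}

const ok = handleMessage({ from: { id: 123456789 }, text: "/stitch create a login screen" });
// → { sessionId: "123456789", query: "create a login screen" }
const blocked = handleMessage({ from: { id: 42 }, text: "/stitch hi" });
// → null
```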
by Davide
This workflow creates an AI-powered chatbot that generates custom songs through an interactive conversation, then uploads the results to Google Drive. It transforms n8n into a complete AI music production pipeline by combining:
- Conversational AI
- Structured data validation
- Tool orchestration
- An external music generation API
- Cloud automation

It demonstrates a powerful hybrid architecture: LLM Agent + Tools + API + Storage + Async Control Flow.

Key Advantages
1. ✅ Fully Automated AI Music Production
From idea → to lyrics → to full generated track → to cloud storage. All handled automatically.
2. ✅ Conversational UX
Users don’t need technical knowledge. The AI collects missing information step-by-step.
3. ✅ Smart Tool Selection
The agent dynamically chooses the Songwriter tool (for original lyrics) or the Search tool (for existing lyrics). This makes the system adaptive and intelligent.
4. ✅ Structured & Error-Safe Design
Strict JSON schema enforcement, output parsing and validation, and cleanup of malformed LLM responses reduce the failure rate dramatically.
5. ✅ Asynchronous API Handling
Uses webhook-based resume, handles long-running AI generation, and supports multiple song outputs. Scalable and production-ready.
6. ✅ Modular & Extensible
The architecture allows switching the LLM provider, changing the music API, adding new tools (e.g., cover art generation), and supporting different vocal styles or languages.
7. ✅ Memory-Enabled Conversations
Uses buffer memory (last 10 messages) to maintain conversational context and continuity.
8. ✅ Automatic File Management
Generated songs are automatically downloaded, properly renamed, and stored in Google Drive. No manual file handling required.

How it Works
Here's the flow:

User Interaction: The workflow starts with a chat trigger that receives user messages. A "Music Producer Agent" powered by Google Gemini engages with the user conversationally to gather all necessary song parameters.
Data Collection: The agent collects four essential pieces of information:
- Song title
- Musical style (genre)
- Lyrics (prompt): either generated by calling the "Songwriter" tool or searched online via the "Search songs" tool
- Negative tags (styles/elements to avoid)

Validation & Formatting: The collected data passes through an IF condition checking for valid JSON format, then a Code node parses and cleans the JSON output. A "Fix Json Structure" node ensures proper formatting with strict rules (no line breaks, no double quotes).

Song Generation: The formatted data is sent to the Kie.ai API (HTTP Request node), which generates the actual music track. The workflow includes a callback URL for asynchronous processing.

Wait & Retrieve: A Wait node pauses execution until the Kie.ai API sends a webhook callback with the generated songs. The "Get songs" node then retrieves the song data.

Process Results: The response is split out, and a Loop Over Items node processes each generated song individually. For each song, the workflow:
- Downloads the audio file via HTTP request
- Uploads it to a specified Google Drive folder with a timestamped filename

Setup steps
API Credentials (3 required):
- Google Gemini (PaLM) API: Configure in the two Gemini Chat Model nodes
- Gemini Search API: Set up in the "Search songs" tool node
- Kie AI Bearer Token: Add in the HTTP Request nodes (Create song and Get songs)

Google Drive Configuration:
- Authenticate Google Drive OAuth2 in the "Upload song" node
- Verify/modify the folder ID if needed
- Ensure the Drive has proper write permissions

Webhook Setup:
- The Wait node has a webhook ID that needs to be publicly accessible
- Configure this URL in your Kie.ai API settings as the callback endpoint

Optional Customizations:
- Adjust the AI agent prompts in the "Music Producer Agent" and "Songwriter" nodes
- Modify song generation parameters in the Kie.ai API call (styleWeight, weirdnessConstraint, etc.)
- Update the Google Drive folder path for song storage
- Change the vocal gender or other music generation settings in the "Create song" node

Testing: Activate the workflow and start a chat session to test song generation with sample requests like "Write a pop song about summer" or "Find lyrics for 'Bohemian Rhapsody' and make it in rock style".

👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on Linkedin.
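The Validation & Formatting step can be sketched as follows. This is a hedged illustration of the kind of cleanup the "Fix Json Structure" step performs (strip code fences, parse, then flatten line breaks and inner double quotes in string values); the field names and exact rules are assumptions, not the node's actual code.

```javascript
// Sketch of the "Fix Json Structure" cleanup described above. The field
// names (title, style, prompt, negativeTags) are illustrative assumptions.
function cleanSongPayload(llmOutput) {
  const stripped = llmOutput.replace(/```(?:json)?/g, "").trim(); // drop code fences
  const data = JSON.parse(stripped);
  // Enforce the strict rules: no line breaks, no double quotes in values.
  const sanitize = s => String(s).replace(/\r?\n/g, " ").replace(/"/g, "'").trim();
  return {
    title: sanitize(data.title || ""),
    style: sanitize(data.style || ""),
    prompt: sanitize(data.prompt || ""),
    negativeTags: sanitize(data.negativeTags || ""),
  };
}

const payload = cleanSongPayload(
  '```json\n{"title": "Summer \\"Vibes\\"", "style": "pop", "prompt": "line one\\nline two", "negativeTags": "metal"}\n```'
);
// payload.title → "Summer 'Vibes'", payload.prompt → "line one line two"
```

The sanitized object is what the HTTP Request node would then send to the music generation API.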
by Jonas Frewert
Blog Post Content Creation (Multi-Topic with Brand Research, Google Drive, and WordPress)

Description
This workflow automates the full lifecycle of generating and publishing SEO-optimized blog posts from a list of topics. It reads topics (and optionally brands) from Google Sheets, performs brand research, generates a structured HTML article via AI, converts it into an HTML file for Google Drive, publishes a draft post on WordPress, and repeats this for every row in the sheet. When the final topic has been processed, a single Slack message is sent to confirm completion and share links.

How It Works
1. Input from Google Sheets
A Google Sheets node reads rows containing at least:
- Brand (optional, can be defaulted)
- Blog Title or Topic
A Split In Batches node iterates through the rows one by one so each topic is processed independently.
2. Configuration
The Configuration node maps each row’s values into:
- Brand
- Blog Title
These values are used consistently across brand research, content creation, file naming, and WordPress publishing.
3. Brand Research
A Language Model Chain node calls an OpenRouter model to gather background information about the brand and its services. The brand context is used as input for better, on-brand content generation.
4. Content Creation
A second Language Model Chain node uses the brand research and the blog title or topic to generate a full-length, SEO-friendly blog article. Output is clean HTML with:
- Exactly one `<h1>` at the top
- Structured `<h2>` and `<h3>` headings
- Semantic tags only
- No inline CSS
- No `<html>` or `<body>` wrappers
- No external resources
5. HTML Processing
A Code node in JavaScript:
- Strips any markdown-style code fences around the HTML
- Normalizes paragraph breaks
- Builds a safe file name from the blog title
- Encodes the HTML as a binary file payload
6. Upload to Google Drive
A Google Drive node uploads the generated HTML file to a specified folder. Each topic creates its own HTML file, named after the blog title.
7.
Publish to WordPress
An HTTP Request node calls the WordPress REST API to create a post. The post content is the generated HTML, and the title comes from the Configuration node. By default, the post is created with status draft (can be changed to publish if desired).
8. Loop Control and Slack Notification
After each topic is processed (Drive upload and WordPress draft), the workflow loops back to Split In Batches to process the next row. When there are no rows left, an IF node detects that the loop has finished. Only then is a single Slack message sent to:
- Confirm that all posts have been processed
- Share links to the last generated Google Drive file and WordPress post

Integrations Used
- OpenRouter - AI models for brand research and SEO content generation
- Google Sheets - Source of topics and (optionally) brands
- Google Drive - Storage for generated HTML files
- WordPress REST API - Blog post creation (drafts or published posts)
- Slack - Final summary notification when the entire batch is complete

Ideal Use Case
- Content teams and agencies managing a queue of blog topics in a spreadsheet
- Marketers who want a hands-off pipeline from topic list to WordPress drafts
- Teams who need generated HTML files stored in Drive for backup, review, or reuse
- Any workflow where automation should handle the heavy lifting and humans only review the final drafts

Setup Instructions
Google Sheets
- Create a sheet with columns like Brand and Blog Title or Topic.
- In the Get Blog Topics node, set the sheet ID and range to match your sheet.
- Add your Google Sheets credentials in n8n.
OpenRouter (LLM)
- Add your OpenRouter API key as credentials.
- In the OpenRouter Chat Model nodes, select your preferred models and options if you want to customize behavior.
Google Drive
- Add Google Drive credentials.
- Update the folder ID in the Upload file node to your target directory.
WordPress
- In the Publish to WordPress node, replace the example URL with your site’s REST API endpoint.
- Configure authentication (for example, Application Passwords or Basic Auth).
- Adjust the status field (draft or publish) to match your desired workflow.
Slack
- Add Slack OAuth credentials.
- Set the channel ID in the Slack node where the final summary message should be posted.
Run the Workflow
- Click Execute Workflow.
- The workflow will loop through every row in the sheet, generating content, saving HTML files to Drive, and creating WordPress posts.
- When all rows have been processed, a single Slack notification confirms completion.
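The HTML Processing step can be sketched as a plain function. This is a hedged illustration of the Code node's logic; the real node additionally emits an n8n binary payload, and its exact regexes may differ.

```javascript
// Sketch of the "HTML Processing" Code node: strip markdown-style code
// fences, normalize paragraph breaks, build a safe file name from the
// blog title. (The real node also encodes the HTML as a binary payload;
// here the pieces are returned plainly so the logic is easy to follow.)
function processHtml(rawHtml, blogTitle) {
  const html = rawHtml
    .replace(/```(?:html)?/g, "")        // strip code fences
    .replace(/\n{3,}/g, "\n\n")          // normalize paragraph breaks
    .trim();
  const fileName = blogTitle
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")         // keep only URL/file-safe characters
    .replace(/^-+|-+$/g, "") + ".html";
  return { html, fileName };
}

const out = processHtml(
  "```html\n<h1>Why SEO Matters</h1>\n\n\n<p>Intro.</p>\n```",
  "Why SEO Matters!"
);
// out.fileName → "why-seo-matters.html"
```

In the actual workflow, the returned HTML would be converted to binary data for the Google Drive upload node, with fileName used as the Drive file name.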
by Thapani Sawaengsri
Description
This workflow automates compliance validation between a policy/procedure and a corresponding uploaded document. It leverages an AI agent to determine whether the content of the document aligns with the expectations outlined in the provided procedure or policy.

How It Works
Document Upload
A document (e.g., PDF) is uploaded via an HTTP Request Webhook. The content is processed into vector embeddings using a Qdrant vector store and an embedding model.
Procedure Submission
A policy/procedure text and description are submitted via a second HTTP Request Webhook. These serve as the basis for evaluating the uploaded document.
AI-Based Validation
The AI agent receives:
- The uploaded document (via vector embeddings)
- The submitted procedure/policy text
- The description/context
It returns a structured compliance analysis including:
- Summary of Compliance (sections that align with policy)
- Summary of Non-Compliance (gaps or missing elements)
- Supporting Text Citations (document evidence)
- Confidence Level (0–100 score based on evidence quality)

Setup Instructions
Pre-Conditions / Requirements
An n8n instance running with access to:
- Qdrant (for vector storage)
- An embedding model (e.g., OpenAI, HuggingFace, or a local model)
- Optional: Microsoft Graph or another storage system for document retrieval

Workflow Setup
HTTP Request Node 1: Document Upload
- Accepts binary document files (PDF, DOCX, etc.).
- Extracts text, generates embeddings, and stores them in Qdrant.
- Returns a spDocumentId for reference.
HTTP Request Node 2: Procedure Submission
- Accepts a JSON payload with:
{
  "procedure": "Policy or procedure text",
  "description": "Brief context or objective",
  "spDocumentId": "ID of the uploaded document"
}
- Links the procedure to the previously uploaded document.

Order of Operations
Step 1: Upload the document.
Step 2: Submit the procedure referencing the same spDocumentId.
Step 3: The AI agent evaluates compliance and returns results.
Example Input & Output

Example Input: Document Upload (Webhook 1)
Request: Binary file upload (example_policy.pdf)
Response:
{
  "spDocumentId": "12345"
}

Example Input: Procedure Submission (Webhook 2)
{
  "procedure": "All financial records must be retained for 7 years.",
  "description": "Retention policy compliance validation",
  "spDocumentId": "12345"
}

Example Output: AI Compliance Validation
{
  "compliance_summary": "The document includes a 7-year retention requirement for invoices and payroll records.",
  "non_compliance_summary": "No reference to retention of vendor contracts.",
  "citations": [
    {
      "text": "Invoices will be stored for 7 years.",
      "page": 4
    }
  ],
  "confidence": 87
}
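A client submitting the procedure payload above can sanity-check it before calling Webhook 2. This small helper is an illustrative sketch and not part of the workflow itself; the field names match the payload shown in the examples.

```javascript
// Illustrative client-side validator for the Webhook 2 payload shown
// above — not part of the workflow, just a pre-flight sanity check.
function buildProcedurePayload({ procedure, description, spDocumentId }) {
  const fields = { procedure, description, spDocumentId };
  for (const [key, value] of Object.entries(fields)) {
    if (typeof value !== "string" || value.trim() === "") {
      throw new Error(`Missing or empty field: ${key}`);
    }
  }
  return JSON.stringify(fields); // ready to POST to the procedure webhook
}

const body = buildProcedurePayload({
  procedure: "All financial records must be retained for 7 years.",
  description: "Retention policy compliance validation",
  spDocumentId: "12345",
});
```

Remember the order of operations: the document upload must happen first so that the spDocumentId referenced here actually exists.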