by Trung Tran
# 🎙️ VoiceScribe AI: Telegram Audio Message Auto Transcription with OpenAI Whisper

> Automatically transcribe Telegram voice messages and store them as structured logs in Google Sheets, while backing up the audio in Google Drive.

## 🧑‍💼 Who's it for

Journalists, content creators, or busy professionals who often record voice memos or short interviews on the go. Anyone who wants to turn voice recordings into searchable, structured notes.

## ⚙️ How it works / What it does

1. A user sends a voice message to a Telegram bot.
2. n8n checks whether the message is an audio voice note.
3. If valid, it downloads the audio file and:
   - Transcribes it using OpenAI Whisper (or your LLM of choice).
   - Uploads the original audio to Google Drive for safekeeping.
4. The transcript and audio metadata are merged.
5. The workflow then:
   - Logs the data into a Google Sheet.
   - Sends a formatted confirmation message to the user via Telegram.
6. If the input is not audio, the bot politely informs the user that only voice messages are accepted.

## ✅ Features

- Accepts only Telegram voice messages.
- Transcribes via OpenAI Whisper.
- Logs DateTime, Duration, Transcript, and Audio URL to Google Sheets.
- Sends a feedback message via Telegram with the download and transcript link.

## 🚀 How to set up

### Prerequisites

- Telegram bot connected to n8n (via Telegram Trigger)
- Google Drive and Google Sheets credentials configured
- OpenAI or Whisper API credentials (for transcription)

### Steps

1. **Telegram Trigger**: start the flow when a new message is sent to your bot.
2. **Check Message Type**: use a conditional node to confirm it's a voice message.
3. **Download Voice Message**: download the .oga file from Telegram.
4. **Transcribe Audio**: send the binary audio to OpenAI Whisper or your transcription service.
5. **Upload to Google Drive**: back up the original audio file.
6. **Merge Outputs**: combine the transcription with the Drive metadata.
7. **Transform to Row Format**: prepare structured JSON for Google Sheets (see the sketch at the end of this template).
8. **Append to Google Sheet**: store the transcript log (DateTime, Duration, Transcript, AudioURL).
9. **Send Confirmation to User**: inform the user via Telegram with their transcript and download link.
10. **Unsupported Message Handler**: reply to users who send non-audio messages.

## 📄 Example Output in Google Sheet

| DateTime | Duration | Transcript | AudioURL |
|-----------------------|----------|--------------------------------------------|------------------------------------------------------------|
| 2025-08-07T13:12:19Z | 27 | Dự án Outlet Activation là... | https://drive.google.com/uc?id=xxxx&export=download |

## 🧠 How to customize the workflow

- Swap Whisper for Deepgram, AssemblyAI, or another provider.
- Add speaker name detection or prompt-based tagging via GPT.
- Route transcripts into Notion, Airtable, or CRM systems.
- Add multi-language support or summarization steps.

## 📦 Requirements

| Component | Required |
|---------------------|----------|
| Telegram API | ✅ |
| Google Drive API | ✅ |
| Google Sheets API | ✅ |
| OpenAI Whisper API | ✅ |
| n8n Cloud or Self-hosted | ✅ |

Created with ❤️ using n8n
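Below is a minimal sketch of the Transform to Row Format step (step 7 above), written as an n8n Code node. The field names `transcript`, `duration`, and `webViewLink` are assumptions about the merged Whisper and Google Drive outputs; rename them to match your actual nodes.

```javascript
// n8n Code node ("Run Once for All Items"): build one Sheets row per
// voice note. Input field names are assumed and should be adjusted.
return $input.all().map((item) => {
  const { transcript, duration, webViewLink } = item.json;
  return {
    json: {
      DateTime: new Date().toISOString(),
      Duration: duration ?? 0,        // voice note length in seconds
      Transcript: transcript ?? '',
      AudioURL: webViewLink ?? '',    // Drive link to the backed-up audio
    },
  };
});
```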
by Ranjan Dailata
## Who this is for

The TrustPilot SaaS Product Review Tracker is designed for product managers, SaaS growth teams, customer experience analysts, and marketing teams who need to extract, summarize, and analyze customer feedback at scale from TrustPilot.

This workflow is tailored for:

- **Product Managers**: monitoring feedback to drive feature improvements
- **Customer Support & CX Teams**: identifying sentiment trends or recurring issues
- **Marketing & Growth Teams**: leveraging testimonials and market perception
- **Data Analysts**: tracking competitor reviews and benchmarking
- **Founders & Executives**: wanting aggregated insights into customer satisfaction

## What problem is this workflow solving?

Manually monitoring, extracting, and summarizing TrustPilot reviews is time-consuming, fragmented, and hard to scale across multiple SaaS products. This workflow automates that process, from unlocking the data behind anti-bot layers to summarizing and storing customer insights, enabling teams to respond faster, spot trends, and make data-backed product decisions.

This workflow solves:

- The challenge of scraping protected review data (using Bright Data Web Unlocker)
- The need for structured insights from unstructured review content
- The lack of automated delivery to storage and alerting systems like Google Sheets or webhooks

## What this workflow does

1. **Extract TrustPilot Reviews**: uses Bright Data Web Unlocker to bypass anti-bot protections and pull markdown-based content from product review pages
2. **Convert Markdown to Text**: leverages a basic LLM chain to clean and convert scraped markdown into plain text
3. **Structured Information Extraction**: uses OpenAI GPT-4o via the Information Extractor node to extract fields like product name, review date, rating, and reviewer sentiment
4. **Summarization Chain**: generates concise summaries of overall review sentiment and themes using OpenAI
5. **Merge & Aggregate Output**: consolidates individual extracted records into a structured batch output
6. **Outbound Data Delivery**:
   - Google Sheets: appends summary and structured review data
   - Write to Disk: persists raw and processed content locally
   - Webhook Notification: sends a real-time alert with summarized insights

## Pre-conditions

- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. In n8n, configure the Google Sheets credentials with your own account. Follow this documentation: Set Google Sheet Credential.
5. In n8n, configure the OpenAI account credentials.
6. Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
7. Set the desired local path in the Write a file to disk node to save the responses.
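For orientation, here is a hedged sketch of the Web Unlocker call the workflow makes through an HTTP Request node. The zone name, token, and URL are placeholders, and you should verify the request shape against Bright Data's current Web Unlocker API documentation.

```javascript
// Hedged sketch: fetch a TrustPilot page as markdown via Web Unlocker.
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer XXXXXXXXXXXXXX',   // your Web Unlocker token
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',                    // your Web Unlocker zone name
    url: 'https://www.trustpilot.com/review/example-saas.com',
    format: 'raw',
    data_format: 'markdown',                  // request markdown-based content
  }),
});
const markdown = await response.text();
```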
## How to customize this workflow to your needs

**Target multiple products**
- Configure the Bright Data input URL dynamically for different SaaS product TrustPilot URLs
- Loop through a product list and run parallel jobs for each

**Customize extraction fields**
- Update the prompt in the Information Extractor to include:
  - Review title
  - Response from company
  - Specific feature mentions
  - Competitor references

**Tune summarization style**
- Change tone: executive summary, customer pain-point focus, or marketing quote extract
- Enable sentiment aggregation (e.g., 30% negative, 50% neutral, 20% positive); see the sketch below

**Expand output destinations**
- Push to Notion, Airtable, or CRM tools using additional webhook nodes
- Generate and send PDF reports (via PDFKit or HTML-to-PDF nodes)
- Schedule summary digests via Gmail or Slack
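A minimal sketch of the sentiment aggregation idea as an n8n Code node. It assumes each extracted item carries a `sentiment` field of 'positive' | 'neutral' | 'negative' from the Information Extractor; adjust to your schema.

```javascript
// Count sentiment labels across all reviews and express them as percentages.
const reviews = $input.all().map((item) => item.json);
const counts = { positive: 0, neutral: 0, negative: 0 };
for (const review of reviews) {
  if (counts[review.sentiment] !== undefined) counts[review.sentiment] += 1;
}
const total = reviews.length || 1;   // avoid division by zero
return [{
  json: Object.fromEntries(
    Object.entries(counts).map(([label, n]) => [
      label,
      `${Math.round((n / total) * 100)}%`,   // e.g., "30% negative"
    ])
  ),
}];
```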
by Murtaja Ziad
An n8n workflow designed to shorten URLs using the Dub.co API.

**How it works:**
- It shortens a URL using the Dub.co API, with the ability to use custom domains and projects.
- It updates the existing shortened URL if the slug has already been used.

**Estimated time:** around 15 minutes.

**Requirements:** a Dub.co account.

**Configuration:** configure the "API Auth" node to add your Dub.co API key, project slug, and the long URL. There are a few extras you can configure as well; click the "API Auth" node and fill in the fields. A hedged request sketch follows below.

**Detailed instructions:** sticky notes within the workflow provide extensive setup information and guidance.

**Keywords:** n8n workflow, dub.co, dub.sh, url shortener, short urls, short links
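A hedged sketch of the link-creation call behind this workflow. The endpoint and body fields follow Dub's public API as I understand it; the domain, slug, and key values are placeholders to replace with your own, and the update-on-existing-slug behavior is handled by the workflow's own logic.

```javascript
// Create (or, in the workflow, update) a short link via the Dub.co API.
const response = await fetch('https://api.dub.co/links', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer DUB_API_KEY',   // your Dub.co API key
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    url: 'https://example.com/very/long/url',
    domain: 'dub.sh',   // or your custom domain
    key: 'my-slug',     // desired slug; the workflow updates the existing
                        // link when this slug is already taken
  }),
});
const link = await response.json();
```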
by Davide
This workflow simulates an AI-powered phone agent with two main functions:

- 📅 **Appointment Booking**: it can schedule appointments directly into Google Calendar.
- 🧠 **RAG-based Information Retrieval**: it provides answers using a Retrieval-Augmented Generation (RAG) system. For example, it can respond to questions about store opening hours, return policies, or product details.

The guide also explains how to purchase a dedicated phone number (with a +1 prefix) and link it to the AI agent. This setup is cost-effective, as it uses a free $10 credit to operate without additional charges at the beginning.

## ✨ Advantages

- 🕐 **24/7 Availability**: the AI agent can answer calls and assist customers at any time.
- 🤖 **Automation**: it reduces the workload on human staff by handling repetitive tasks like appointment scheduling and FAQ responses.
- 🔌 **Easy Integration**: built with n8n, it's flexible and customizable for various platforms and tools.
- 💸 **Low-cost Setup**: using the free credit, businesses can get started without an upfront investment.

## 📦 Use Cases

- 🛍 **E-commerce**: answer common product questions or order inquiries.
- 🏬 **Retail Stores**: provide store hours, address info, and return policies.
- 🍽 **Restaurants**: take reservations or share menu information.
- 💼 **Service Providers**: book appointments or consultations.
- 📞 **Any Local Business**: offer phone support without needing a live operator.

## How It Works

**Appointment Booking**
1. The workflow captures call events (e.g., call_ended or call_analyzed) and extracts key details (transcript, caller info, duration, etc.).
2. Using OpenAI, it summarizes the conversation and parses structured data (e.g., names, contact info, dates).
3. For scheduling, it converts user-provided dates into Google Calendar-compatible formats and creates events automatically.

**RAG-Based Information Retrieval**
1. When a query is received (e.g., store hours, product details), the workflow retrieves relevant information from a Qdrant vector store.
2. An AI agent processes the query using the retrieved data and responds via a webhook, ensuring accurate, context-aware answers.

## Set Up Steps

1. **Prepare the Qdrant vector store**
   - Create/refresh a Qdrant collection via HTTP requests (see the sketch at the end of this template).
   - Upload and vectorize documents (e.g., from Google Drive) using OpenAI embeddings.
2. **Configure the RetellAI agent**
   - Sign up for RetellAI, create an agent, and set the webhook URLs (n8n_call for call events, n8n_rag_function for RAG queries).
   - Purchase a Twilio phone number and link it to the agent.
3. **n8n workflow setup**
   - Connect the OpenAI, Qdrant, Google Calendar, and Telegram nodes with credentials.
   - Customize prompts for summarization, date parsing, and RAG responses.
   - Test the workflow to ensure data flows from call events → processing → actions (e.g., calendar bookings, Telegram alerts).
4. **Deploy**
   - Trigger the workflow via RetellAI webhooks during calls.
   - Monitor outputs (e.g., call summaries in Telegram, calendar events).

Note: Replace placeholders (e.g., QDRANTURL, COLLECTION, CHAT_ID) with actual values.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
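A minimal sketch of the collection-creation request from set-up step 1, using Qdrant's REST API. The host, API key, and collection name stand in for the QDRANTURL and COLLECTION placeholders above, and 1536 assumes OpenAI-sized embeddings.

```javascript
// Create (or recreate) a Qdrant collection sized for OpenAI embeddings.
const QDRANT_URL = 'https://your-qdrant-host:6333';
const COLLECTION = 'store_knowledge';

await fetch(`${QDRANT_URL}/collections/${COLLECTION}`, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json',
    'api-key': 'QDRANT_API_KEY',   // omit if your instance has no auth
  },
  body: JSON.stringify({
    vectors: { size: 1536, distance: 'Cosine' },   // embedding dim + metric
  }),
});
```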
by Aditya Gaur
## Who is this template for?

This template is designed for teams who need to automate data retrieval from SharePoint lists using n8n. It is ideal for users who want to authenticate via OAuth and then use the token to access SharePoint API endpoints, pulling list data directly into n8n.

## How it works

The template first generates an OAuth token using the Microsoft OAuth API. This token is then used to authenticate requests to the SharePoint List API, allowing the workflow to fetch data from a specified SharePoint list. By following the n8n workflow, the user can configure the necessary credentials and endpoints to automate SharePoint data access securely. A hedged sketch of both requests follows below.

## Setup steps

1. Replace {tenant_id}, {client_id}, and {client_secret} with your Azure AD details for OAuth authentication.
2. Specify the SharePoint list API endpoint in the template (under the "SharePoint List Fetch" node).
3. Configure the SharePoint list URL and make adjustments for specific data fields if necessary.
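A hedged sketch of the two calls, assuming a client-credentials grant and the Microsoft Graph lists endpoint; your template may instead target the SharePoint REST API directly, and the {site-id}/{list-id} values are illustrative. The other placeholders match the setup steps above.

```javascript
// 1) Get an OAuth token from Azure AD (client credentials flow).
const tokenRes = await fetch(
  'https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      grant_type: 'client_credentials',
      client_id: '{client_id}',
      client_secret: '{client_secret}',
      scope: 'https://graph.microsoft.com/.default',
    }),
  }
);
const { access_token } = await tokenRes.json();

// 2) Fetch SharePoint list items with the token.
const listRes = await fetch(
  'https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items?expand=fields',
  { headers: { Authorization: `Bearer ${access_token}` } }
);
const items = await listRes.json();
```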
by DataMinex
# 📊 Real-Time Flight Data Analytics Bot with Dynamic Chart Generation via Telegram

## 🚀 Template Overview

This advanced n8n workflow creates an intelligent Telegram bot that transforms raw CSV flight data into stunning, interactive visualizations. Users can generate professional charts on demand through a conversational interface, making data analytics accessible to anyone via messaging.

**Key innovation:** combines real-time data processing, the Chart.js visualization engine, and Telegram's messaging platform to deliver instant business intelligence insights.

## 🎯 What This Template Does

Transform your flight booking data into actionable insights with four powerful visualization types:

- 📈 **Bar Charts**: top 10 busiest airlines by flight volume
- 🥧 **Pie Charts**: flight duration distribution (short/medium/long-haul)
- 🍩 **Doughnut Charts**: price range segmentation with average pricing
- 📊 **Line Charts**: price trend analysis across flight durations

Each chart includes auto-generated insights, percentages, and key business metrics delivered instantly to users' phones.

## 🏗️ Technical Architecture

### Core Components

1. **Telegram Webhook Trigger**: captures user interactions and button clicks
2. **Smart Routing Engine**: conditional logic for command detection and chart selection
3. **CSV Data Pipeline**: file reading → parsing → JSON transformation
4. **Chart Generation Engine**: JavaScript-powered data processing with Chart.js
5. **Image Rendering Service**: QuickChart API for high-quality PNG generation (see the sketch at the end of this template)
6. **Response Delivery**: binary image transmission back to Telegram

### Data Flow Architecture

User Input → Command Detection → CSV Processing → Data Aggregation → Chart Configuration → Image Generation → Telegram Delivery

## 🛠️ Setup Requirements

### Prerequisites

- **n8n instance** (self-hosted or cloud)
- **Telegram bot token** from @BotFather
- **CSV dataset** with flight information
- **Internet connectivity** for the QuickChart API

### Dataset Source

This template uses the Airlines Flights Data dataset from GitHub:
🔗 Dataset: Airlines Flights Data by Rohit Grewal

### Required Data Schema

Your CSV file should contain these columns:

```
airline,flight,source_city,departure_time,arrival_time,duration,price,class,destination_city,stops
```

### File Structure

```
/data/
└── flights.csv (download from the GitHub dataset above)
```

## ⚙️ Configuration Steps

### 1. Telegram Bot Setup

1. Create a new bot via @BotFather on Telegram
2. Copy your bot token
3. Configure the Telegram Trigger node with your token
4. Set the webhook URL in your n8n instance

### 2. Data Preparation

1. Download the dataset from Airlines Flights Data
2. Upload the CSV file to /data/flights.csv in your n8n instance
3. Ensure UTF-8 encoding
4. Verify the column headers match the dataset schema
5. Test file accessibility from n8n
### 3. Workflow Activation

1. Import the workflow JSON
2. Configure all Telegram nodes with your bot token
3. Test the /start command
4. Activate the workflow

## 🔧 Technical Implementation Details

### Chart Generation Process

**Bar chart logic:**

```javascript
// Aggregate airline counts (`flights` holds the parsed CSV rows)
const airlineCounts = {};
flights.forEach(flight => {
  const airline = flight.airline || 'Unknown';
  airlineCounts[airline] = (airlineCounts[airline] || 0) + 1;
});

// Derive labels/datasets for the top 10 airlines by flight volume
const top = Object.entries(airlineCounts)
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10);
const labels = top.map(([airline]) => airline);
const datasets = [{ label: 'Flights', data: top.map(([, count]) => count) }];

// Generate Chart.js configuration
const chartConfig = {
  type: 'bar',
  data: { labels, datasets },
  options: { responsive: true, plugins: { /* title, legend, etc. */ } }
};
```

**Dynamic color schemes:**
- Bar charts: professional blue gradient palette
- Pie charts: duration-based color coding (light → dark blue)
- Doughnut charts: price-tier-specific colors (green → purple)
- Line charts: trend-focused red gradient with smooth curves

### Performance Optimizations

- **Efficient data processing**: single-pass aggregations with O(n) complexity
- **Smart caching**: QuickChart handles image caching automatically
- **Minimal memory usage**: stream processing for large datasets
- **Error handling**: graceful fallbacks for missing data fields

### Advanced Features

**Auto-generated insights:**
- Statistical calculations (percentages, averages, totals)
- Trend analysis and pattern detection
- Business intelligence summaries
- Contextual recommendations

**User experience enhancements:**
- Reply keyboards for easy navigation
- Visual progress indicators
- Error recovery mechanisms
- Mobile-optimized chart dimensions (800x600 px)

## 📈 Use Cases & Business Applications

### Airlines & Travel Companies
- **Fleet Analysis**: monitor airline performance and market share
- **Pricing Strategy**: analyze competitor pricing across routes
- **Operational Insights**: track duration patterns and efficiency

### Data Analytics Teams
- **Self-Service BI**: enable non-technical users to generate reports
- **Mobile Dashboards**: access insights anywhere via Telegram
- **Rapid Prototyping**: quick data exploration without complex tools

### Business Intelligence
- **Executive Reporting**: instant charts for presentations
- **Market Research**: compare industry trends and benchmarks
- **Performance Monitoring**: track KPIs in real time

## 🎨 Customization Options

### Adding New Chart Types

1. Create a new Switch condition
2. Add a corresponding data processing node
3. Configure the Chart.js options
4. Update the user interface menu

### Data Source Extensions

- Replace CSV with database connections
- Add real-time API integrations
- Implement data refresh mechanisms
- Support multiple file formats

### Visual Customizations

```javascript
// Custom color palette
backgroundColor: ['#your-colors'],

// Advanced styling
borderRadius: 8,
borderSkipped: false,

// Animation effects
animation: {
  duration: 2000,
  easing: 'easeInOutQuart'
}
```

## 🔒 Security & Best Practices

### Data Protection
- Validate the CSV input format
- Sanitize user inputs
- Implement rate limiting
- Secure file access permissions

### Error Handling
- Graceful degradation for API failures
- User-friendly error messages
- Automatic retry mechanisms
- Comprehensive logging

## 📊 Expected Outputs

### Sample Generated Insights

- "✈️ Vistara leads with 350+ flights, capturing 23.4% market share"
- "📈 Long-haul flights dominate at 61.1% of total bookings"
- "💰 Budget category (₹0-10K) represents 47.5% of all bookings"
- "📊 Average prices peak at ₹14K for 6-8 hour duration flights"

### Performance Metrics

- **Response Time**: <3 seconds for chart generation
- **Image Quality**: 800x600 px high-resolution PNG
- **Data Capacity**: handles 10K+ records efficiently
- **Concurrent Users**: scales with n8n instance capacity

## 🚀 Getting Started

1. Download the workflow JSON
2. Import it into your n8n instance
3. Configure the Telegram bot credentials
4. Upload your flight data CSV
5. Test with the /start command
6. Deploy and share with your team

## 💡 Pro Tips

- **Data Quality**: clean data produces better insights
- **Mobile First**: charts are optimized for mobile viewing
- **Batch Processing**: handles large datasets efficiently
- **Extensible Design**: easy to add new visualization types

Ready to transform your data into actionable insights? Import this template and start generating professional charts in minutes! 🚀
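A hedged sketch of the image-rendering step from the Technical Architecture section: posting a Chart.js config to QuickChart's /chart endpoint and reading the PNG back as binary. The sample labels and counts are illustrative only.

```javascript
// Render a Chart.js config to PNG via QuickChart; the resulting buffer
// can be sent back through n8n's Telegram node (sendPhoto).
const chartConfig = {
  type: 'bar',
  data: {
    labels: ['Vistara', 'Air India'],                      // illustrative
    datasets: [{ label: 'Flights', data: [350, 280] }],    // illustrative
  },
};

const res = await fetch('https://quickchart.io/chart', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ chart: chartConfig, width: 800, height: 600, format: 'png' }),
});
const png = Buffer.from(await res.arrayBuffer());   // binary image payload
```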
by Ranjan Dailata
## Notice

Community nodes can only be installed on self-hosted instances of n8n.

## Who this is for

The Legal Case Research Extractor is a powerful automated workflow designed for legal tech teams, researchers, law firms, and data scientists focused on transforming unstructured legal case data into actionable, structured insights.

This workflow is tailored for:

- **Legal Researchers** automating case law data mining
- **Litigation Support Teams** handling large volumes of case records
- **LawTech Startups** building AI-powered legal research assistants
- **Compliance Analysts** extracting case-specific insights
- **AI Developers** working on legal NLP, summarization, and search engines

## What problem is this workflow solving?

Legal case data is often locked in semi-structured or raw HTML formats, scattered across jurisdiction-specific websites. Manually extracting and processing this data is tedious and inefficient.

This workflow automates:

- Extraction of legal case data via Bright Data's powerful MCP infrastructure
- Parsing of HTML into clean, readable text using the Google Gemini LLM
- Structuring and delivering the output through a webhook and file storage

## What this workflow does

1. **Input**: the Set the Legal Case Research URL node sets the legal case URL for the data extraction.
2. **Bright Data MCP Data Extractor**: the Bright Data MCP Client For Legal Case Research node performs the legal case extraction via the Bright Data MCP tool scrape_as_html.
3. **Case Extractor**: the Google Gemini-based Case Extractor produces a paginated list of cases.
4. **Loop through Legal Case URLs**: receives a collection of legal case links to process; each URL represents a different case from a target legal website.
5. **Bright Data MCP Scraping**: uses Bright Data's scrape_as_html MCP mode to retrieve the raw HTML content of each legal case.
6. **Google Gemini LLM Extraction**: transforms raw HTML into clean, structured text and performs additional information extraction if required (e.g., case summary, court, jurisdiction).
7. **Webhook Notification**: sends extracted legal case content to a configurable webhook URL, enabling downstream processing or storage in legal databases.
8. **Binary Conversion & File Persistence**: converts the structured text to binary format and saves the final response to disk for archival or further processing.

## Pre-conditions

- Knowledge of the Model Context Protocol (MCP) is essential. Please read this blog post: model-context-protocol.
- You need a Bright Data account and the setup described in the Setup section below.
- You need a Google Gemini API key. Visit Google AI Studio.
- You need to install the Bright Data MCP Server @brightdata/mcp.
- You need to install n8n-nodes-mcp.

## Setup

1. Set up n8n locally with MCP servers by navigating to n8n-nodes-mcp.
2. Install the Bright Data MCP Server @brightdata/mcp on your local machine.
3. Sign up at Bright Data.
4. Create a Web Unlocker proxy zone called mcp_unlocker in the Bright Data control panel: navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
5. In n8n, configure the Google Gemini(PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
6. In n8n, configure the credentials to connect the MCP Client (STDIO) account with the Bright Data MCP Server as shown below.
Make sure to copy the Bright Data API_TOKEN into the Environments textbox above as API_TOKEN=<your-token>. A hedged credential sketch follows after this section.

## How to customize this workflow to your needs

**Target new legal portals**
- Modify the legal case input URLs to scrape from different state or federal case databases.

**Customize LLM extraction**
- Modify the prompt to extract specific fields: case number, plaintiff, case summary, outcome, legal precedents, etc.
- Add a summarization step if needed.

**Enhance loop handling**
- Integrate with a Google Sheet or an API to dynamically fetch case URLs.
- Add error-handling logic to skip failed cases and log them.

**Improve security & compliance**
- Redact sensitive information before sending via webhook.
- Store processed case data in encrypted cloud storage.

**Output formats**
- Save as PDF, JSON, or Markdown.
- Enable output to cloud storage (S3, Google Drive) or legal document management systems.
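For reference, a hedged sketch of the MCP Client (STDIO) credential values described above, expressed as a plain object for illustration. The command and args follow the usual @brightdata/mcp install pattern, and the token is a placeholder.

```javascript
// Illustrative MCP Client (STDIO) credential values in n8n.
const mcpStdioCredential = {
  command: 'npx',
  args: '-y @brightdata/mcp',
  environments: 'API_TOKEN=<your-token>',   // your Bright Data API token
};
```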
by Autonomous Work
This workflow exports every table in a base as its own CSV, saves the files in a time-stamped folder in Amazon S3, pings you on Slack, and optionally prunes older copies. You get an automated weekly backup that is easy to inspect or re-import as needed. You can easily swap the S3 node for the storage provider of your choice.

## How it works

**Weekly Backup**
1. A Schedule trigger fires weekly.
2. Sets and formats the week identifier, e.g., 2025-W12 (see the sketch at the end of this template).
3. Creates a folder in the S3 bucket named after the week.
4. Loops through all tables in the Airtable base, creating CSVs and uploading them to the new path.
5. A Slack message is sent on completion.

**Monthly Prune**
1. A Schedule trigger fires weekly.
2. Sets a cut-off date 4 weeks in the past.
3. Lists folders in S3.
4. Deletes all folders more than 4 weeks old.

## Setup Steps

1. Clone the workflow.
2. Swap in credentials for Airtable, AWS, and Slack.
3. Ensure the AWS credential has an appropriate IAM policy to manage the bucket and objects.
4. Set the workflow to "Active".
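A minimal sketch of the week-formatting step, assuming an n8n Code node and ISO week numbering for folder names like 2025-W12:

```javascript
// Compute the ISO week label (e.g., "2025-W12") used as the S3 folder name.
function isoWeekLabel(date = new Date()) {
  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
  const day = d.getUTCDay() || 7;           // Sunday counts as 7 in ISO weeks
  d.setUTCDate(d.getUTCDate() + 4 - day);   // shift to the ISO week's Thursday
  const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((d - yearStart) / 86400000 + 1) / 7);
  return `${d.getUTCFullYear()}-W${String(week).padStart(2, '0')}`;
}

return [{ json: { weekFolder: isoWeekLabel() } }];
```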
by Khairul Muhtadin
# Tesseract - Money Mate

## Workflow Description

Disclaimer: This template requires the n8n-nodes-tesseractjs community node, which is only available on self-hosted n8n instances. You'll need a self-hosted n8n setup to use this workflow.

## Who is this for?

This workflow is designed for individuals, freelancers, or small business owners who want an easy way to track expenses using Telegram. It's ideal for anyone looking to digitize receipts, whether from photos or text messages, using free tools and without needing advanced technical skills.

## What problem does this workflow solve?

Manually entering receipt details into a spreadsheet or app is time-consuming and prone to mistakes. This workflow automates the process by extracting information from receipt images or text messages sent via Telegram, categorizing expenses, and sending back a clear, formatted summary. It saves time, reduces errors, and makes expense tracking effortless.

## What this workflow does

1. The workflow listens for messages sent to a Telegram bot, which can be either text descriptions of expenses or photos of receipts.
2. If a photo is sent, Tesseract (an open-source text recognition tool) extracts the text. If text is sent, it's processed directly.
3. An AI model (LLaMA via OpenRouter) analyzes the input, categorizes it into expense types (e.g., Food & Beverages, Household, Transport), and creates a structured summary including store name, date, items, total, and category.
4. The summary is then sent back to the user's Telegram chat.

## Setup Instructions

Follow these step-by-step instructions to set up the workflow. No advanced technical knowledge is required, but you'll need a self-hosted n8n instance.

1. **Set up a self-hosted n8n instance**
   - If you don't have n8n installed, follow the n8n self-hosting guide to set it up. You can use platforms like Docker or a cloud provider (e.g., DigitalOcean, AWS).
   - Ensure your n8n instance is running and accessible via a web browser.
2. **Install the Tesseract community node**
   - In your n8n instance, go to Settings > Community Nodes in the sidebar.
   - Click Install a Community Node, then enter n8n-nodes-tesseractjs in the search bar.
   - Click Install and wait for confirmation. This node enables receipt image processing.
   - If you encounter issues, check the n8n community nodes documentation for troubleshooting.
3. **Create a Telegram bot**
   - Open Telegram and search for @BotFather to start a new bot.
   - Send /start to BotFather, then /newbot to create your bot. Follow the prompts to name your bot (e.g., "MoneyMateBot").
   - BotFather will provide a Bot Token (e.g., 23872837287:ExampleExampleExample). Copy this token.
   - In n8n, go to Credentials > Add Credential, select Telegram API, and paste the token. Name the credential (e.g., "MoneyMateBot") and save.
4. **Set up OpenRouter for AI processing**
   - Sign up for a free account at OpenRouter.
   - In your OpenRouter dashboard, generate an API key under the API section.
   - In n8n, go to Credentials > Add Credential, select OpenRouter API, and paste the API key. Name it (e.g., "OpenRouter Account") and save.
   - The free tier of OpenRouter's LLaMA model is sufficient for this workflow.
5. **Import and configure the workflow**
   - Download the workflow JSON file (provided separately or copied from the source).
   - In n8n, go to Workflows > Import Workflow and upload the JSON file.
   - Open the imported workflow ("Tesseract - Money Mate").
   - Ensure the Telegram Trigger and Send Expense Summary nodes use the Telegram credential you created.
   - Ensure the AI Analyzer node uses the OpenRouter credential.
   - Save the workflow.
6. **Test the workflow**
   - Activate the workflow by toggling the Active switch in n8n.
   - In Telegram, find your bot (e.g., @MoneyMateBot) and send /start.
   - Test with a sample input (see "Example Inputs" below).
   - Check the n8n workflow execution panel to ensure data flows correctly. If errors occur, double-check credentials and node connections.
7. **Activate for continuous use**
   - Once tested, keep the workflow active in n8n. Your bot will now process any text or image sent to it via Telegram.

## Example Inputs/Formats

To help the workflow process your data accurately, use clear and structured inputs. Below are examples of valid inputs.

**Text input example**: send a message to your Telegram bot like this:

Bought coffee at Starbucks, Jalan Sudirman, yesterday. Total Rp 50,000. 2 lattes, each Rp 25,000.

Expected output:

hello [Your Name] Ini Rekap Belanjamu
📋 Store: Starbucks
📍 Location: Jalan Sudirman
📅 Date: 2025-05-26
🛒 Items:
- Latte: Rp 25,000
- Latte: Rp 25,000
💸 Total: Rp 50,000
📌 Category: Food & Beverages

**Image input example**: upload a photo of a receipt to your Telegram bot. The receipt should contain:

- Store name (e.g., "Alfamart")
- Address (e.g., "Jl. Gatot Subroto, Jakarta")
- Date and time (e.g., "27/05/2025 14:00")
- Items with prices (e.g., "Bread Rp 15,000", "Milk Rp 20,000")
- Total amount (e.g., "Total: Rp 35,000")

Expected output:

hello [Your Name] Ini Rekap Belanjamu
📋 Store: Alfamart
📍 Location: Jl. Gatot Subroto, Jakarta
📅 Date: 2025-05-27 14:00
🛒 Items:
- Bread: Rp 15,000
- Milk: Rp 20,000
💸 Total: Rp 35,000
📌 Category: Household

Tips for images: ensure the receipt is well lit and the text is readable. Avoid blurry or angled photos for better Tesseract accuracy.

## How to Customize This Workflow

- **Change expense categories**: in the **AI Categorizer** node, edit the prompt to include custom categories (e.g., add "Entertainment" or "Utilities" to the list: Food & Beverages, Household, Transport).
- **Modify the response format**: in the **Format Summary Message** node, adjust the JavaScript code to change how the summary looks (e.g., add emojis, reorder fields); see the sketch at the end of this template.
- **Save to a database**: add a node (e.g., Google Sheets or PostgreSQL) after the **Format Summary Message** node to store summaries.
- **Support other languages**: in the **AI Categorizer** node, update the prompt to handle additional languages (e.g., Spanish, Mandarin) by specifying them in the instructions.
- **Add error handling**: enhance the **Check Invalid Input** node to catch more edge cases, like invalid dates.

## All Free, End-to-End

This workflow is 100% free! It leverages:

- **Telegram Bot API**: free via BotFather.
- **Tesseract**: open-source text recognition.
- **LLaMA via OpenRouter**: free tier available for AI processing.

Enjoy automating your expense tracking without any cost!

Made by: khmuhtadin
Need a custom workflow? Contact me on LinkedIn or the Web.
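A minimal sketch of the Format Summary Message Code node mentioned above. All field names (store, location, items, total, category) are assumptions about the AI analyzer's structured output; match them to your own schema.

```javascript
// Build the Telegram summary text from the structured expense data.
const data = $input.first().json;
const itemLines = (data.items ?? [])
  .map((item) => `- ${item.name}: Rp ${item.price}`)
  .join('\n');

const summary = [
  `hello ${data.userName ?? 'there'} Ini Rekap Belanjamu`,
  `📋 Store: ${data.store}`,
  `📍 Location: ${data.location}`,
  `📅 Date: ${data.date}`,
  `🛒 Items:\n${itemLines}`,
  `💸 Total: Rp ${data.total}`,
  `📌 Category: ${data.category}`,
].join('\n');

return [{ json: { summary } }];
```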
by Daniel Shashko
Note: This template is for self-hosted n8n instances only.

You can use this workflow to fully automate website content monitoring and change detection on a weekly basis, even when there's no native node for scraping or structured comparison. It uses an AI-powered scraper and structured data extraction, and integrates Google Sheets, Drive, Docs, and email for seamless tracking and reporting.

## Main Use Cases

- Monitor and report changes to websites (e.g., pricing, content, headings, FAQs) over time
- Automate web audits, compliance checks, or competitive benchmarking
- Generate detailed change logs and share them automatically with stakeholders

## How it works

The workflow operates as a scheduled process, organized into these stages:

1. **Initialization & Configuration**: triggers weekly (or manually) and initializes key variables: the Google Drive folder, spreadsheet IDs, notification emails, and test mode.
2. **Input Retrieval**: reads the list of URLs to be monitored from a Google Sheet.
3. **Web Scraping & Structuring**: for each URL, an AI agent uses Bright Data's scrape_as_markdown tool to extract the full web page content. The workflow then parses this content into well-structured JSON, capturing elements like metadata, headings, pricing, navigation, calls to action, contacts, banners, and FAQs.
4. **Saving the Current Week's Results**: the structured JSON is saved to Google Drive as the current week's snapshot for each monitored URL. The Google Sheet is updated with file references for traceability.
5. **Comparison with the Previous Snapshot**: if a prior week's file exists, it is downloaded and parsed. The workflow compares the current and previous JSON snapshots, detecting and categorizing all substantive content changes (e.g., new/updated plans, FAQ edits, contact info modifications). Optionally, in test mode, mock changes are introduced for demo and validation purposes. A simple diff sketch follows at the end of this template.
6. **Change Report Generation & Delivery**: a rich Markdown-formatted changelog is generated, summarizing the detected changes, and then converted to HTML. The changelog is uploaded to Google Docs and linked back to the tracking sheet. An HTML email with the full report and relevant links is sent to recipients.

## Summary Flow

1. Schedule/workflow trigger → initialize variables
2. Read the URL list from the spreadsheet
3. For each URL:
   - Scrape and structure as JSON
   - Save to Drive, update the tracking sheet
   - If a previous week exists: download and parse it, compare, generate a changelog
   - Convert to HTML, save to Docs, update the Sheet
   - Email results

## Benefits

- Fully automated website change tracking with end-to-end reporting
- Adaptable and extensible for any set of monitored pages and content types
- Easy integration with Google Workspace tools for collaboration and storage
- Minimal manual intervention required after initial setup
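A minimal sketch of the snapshot comparison from stage 5, assuming both weekly snapshots are plain JSON objects. The real workflow categorizes changes by content type, which this simple recursive diff does not attempt.

```javascript
// Recursively compare two JSON snapshots and collect added, removed,
// and changed fields with their dotted paths.
function diffSnapshots(previous, current, path = '') {
  const changes = [];
  const keys = new Set([...Object.keys(previous), ...Object.keys(current)]);
  for (const key of keys) {
    const where = path ? `${path}.${key}` : key;
    const before = previous[key];
    const after = current[key];
    if (before === undefined) changes.push({ where, type: 'added', after });
    else if (after === undefined) changes.push({ where, type: 'removed', before });
    else if (typeof before === 'object' && typeof after === 'object' && before && after)
      changes.push(...diffSnapshots(before, after, where));   // recurse into nested objects
    else if (before !== after) changes.push({ where, type: 'changed', before, after });
  }
  return changes;
}
```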
by Jonas Frewert
# Blog Post Content Creation (Multi-Topic with Brand Research, Google Drive, and WordPress)

## Description

This workflow automates the full lifecycle of generating and publishing SEO-optimized blog posts from a list of topics. It reads topics (and optionally brands) from Google Sheets, performs brand research, generates a structured HTML article via AI, converts it into an HTML file for Google Drive, publishes a draft post on WordPress, and repeats this for every row in the sheet. When the final topic has been processed, a single Slack message is sent to confirm completion and share links.

## How It Works

### 1. Input from Google Sheets

- A Google Sheets node reads rows containing at least:
  - Brand (optional, can be defaulted)
  - Blog Title or Topic
- A Split In Batches node iterates through the rows one by one so each topic is processed independently.

### 2. Configuration

The Configuration node maps each row's values into:
- Brand
- Blog Title

These values are used consistently across brand research, content creation, file naming, and WordPress publishing.

### 3. Brand Research

A Language Model Chain node calls an OpenRouter model to gather background information about the brand and its services. The brand context is used as input for better, on-brand content generation.

### 4. Content Creation

A second Language Model Chain node uses the brand research and the blog title or topic to generate a full-length, SEO-friendly blog article. Output is clean HTML with:

- Exactly one `<h1>` at the top
- Structured `<h2>` and `<h3>` headings
- Semantic tags only
- No inline CSS
- No `<html>` or `<body>` wrappers
- No external resources

### 5. HTML Processing

A Code node in JavaScript (see the sketch at the end of this template):

- Strips any markdown-style code fences around the HTML
- Normalizes paragraph breaks
- Builds a safe file name from the blog title
- Encodes the HTML as a binary file payload

### 6. Upload to Google Drive

A Google Drive node uploads the generated HTML file to a specified folder. Each topic creates its own HTML file, named after the blog title.

### 7. Publish to WordPress

An HTTP Request node calls the WordPress REST API to create a post. The post content is the generated HTML, and the title comes from the Configuration node. By default, the post is created with status draft (this can be changed to publish if desired).

### 8. Loop Control and Slack Notification

After each topic is processed (Drive upload and WordPress draft), the workflow loops back to Split In Batches to process the next row. When there are no rows left, an IF node detects that the loop has finished. Only then is a single Slack message sent to:

- Confirm that all posts have been processed
- Share links to the last generated Google Drive file and WordPress post

## Integrations Used

- OpenRouter: AI models for brand research and SEO content generation
- Google Sheets: source of topics and (optionally) brands
- Google Drive: storage for generated HTML files
- WordPress REST API: blog post creation (drafts or published posts)
- Slack: final summary notification when the entire batch is complete

## Ideal Use Case

- Content teams and agencies managing a queue of blog topics in a spreadsheet
- Marketers who want a hands-off pipeline from topic list to WordPress drafts
- Teams who need generated HTML files stored in Drive for backup, review, or reuse
- Any workflow where automation should handle the heavy lifting and humans only review the final drafts

## Setup Instructions

1. **Google Sheets**
   - Create a sheet with columns like Brand and Blog Title or Topic.
   - In the Get Blog Topics node, set the sheet ID and range to match your sheet.
   - Add your Google Sheets credentials in n8n.
2. **OpenRouter (LLM)**
   - Add your OpenRouter API key as credentials.
   - In the OpenRouter Chat Model nodes, select your preferred models and options if you want to customize behavior.
3. **Google Drive**
   - Add Google Drive credentials.
   - Update the folder ID in the Upload file node to your target directory.
4. **WordPress**
   - In the Publish to WordPress node, replace the example URL with your site's REST API endpoint.
   - Configure authentication (for example, Application Passwords or Basic Auth).
   - Adjust the status field (draft or publish) to match your desired workflow.
5. **Slack**
   - Add Slack OAuth credentials.
   - Set the channel ID in the Slack node where the final summary message should be posted.
6. **Run the workflow**
   - Click Execute Workflow. The workflow will loop through every row in the sheet, generating content, saving HTML files to Drive, and creating WordPress posts. When all rows have been processed, a single Slack notification confirms completion.
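A hedged sketch of the HTML Processing Code node from step 5 of How It Works. The input field name (`text`) and the node reference (`Configuration`) are assumptions to adjust to your workflow.

```javascript
// Clean the AI-generated HTML and hand it off as a binary file payload.
const raw = $input.first().json.text ?? '';

// Strip markdown-style code fences around the HTML, if present,
// and normalize paragraph breaks.
const html = raw
  .replace(/^```(?:html)?\s*/i, '')
  .replace(/\s*```$/, '')
  .replace(/\n{3,}/g, '\n\n');

// Build a safe file name from the blog title.
const title = $('Configuration').first().json['Blog Title'] ?? 'post';
const fileName =
  title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '') + '.html';

// Return the HTML as base64-encoded binary for the Google Drive node.
return [{
  json: { fileName },
  binary: {
    data: {
      data: Buffer.from(html, 'utf8').toString('base64'),
      mimeType: 'text/html',
      fileName,
    },
  },
}];
```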
by Thapani Sawaengsri
## Description

This workflow automates compliance validation between a policy/procedure and a corresponding uploaded document. It leverages an AI agent to determine whether the content of the document aligns with the expectations outlined in the provided procedure or policy.

## How It Works

1. **Document Upload**
   - A document (e.g., PDF) is uploaded via an HTTP Request webhook.
   - The content is processed into vector embeddings using a Qdrant vector store and an embedding model.
2. **Procedure Submission**
   - A policy/procedure text and description are submitted via a second HTTP Request webhook.
   - These serve as the basis for evaluating the uploaded document.
3. **AI-Based Validation**
   - The AI agent receives:
     - The uploaded document (via vector embeddings)
     - The submitted procedure/policy text
     - The description/context
   - It returns a structured compliance analysis including:
     - Summary of Compliance (sections that align with the policy)
     - Summary of Non-Compliance (gaps or missing elements)
     - Supporting Text Citations (document evidence)
     - Confidence Level (a 0–100 score based on evidence quality)

## Setup Instructions

### Pre-Conditions / Requirements

An n8n instance running with access to:

- Qdrant (for vector storage)
- An embedding model (e.g., OpenAI, HuggingFace, or a local model)
- Optional: Microsoft Graph or another storage system for document retrieval

### Workflow Setup

**HTTP Request Node 1: Document Upload**
- Accepts binary document files (PDF, DOCX, etc.).
- Extracts text, generates embeddings, and stores them in Qdrant.
- Returns a spDocumentId for reference.

**HTTP Request Node 2: Procedure Submission**
- Accepts a JSON payload with:

```json
{
  "procedure": "Policy or procedure text",
  "description": "Brief context or objective",
  "spDocumentId": "ID of the uploaded document"
}
```

- Links the procedure to the previously uploaded document.

### Order of Operations

1. Upload the document.
2. Submit the procedure referencing the same spDocumentId.
3. The AI agent evaluates compliance and returns the results.

## Example Input & Output

**Example input: document upload (Webhook 1)**

Request: binary file upload (example_policy.pdf)

Response:

```json
{ "spDocumentId": "12345" }
```

**Example input: procedure submission (Webhook 2)**

```json
{
  "procedure": "All financial records must be retained for 7 years.",
  "description": "Retention policy compliance validation",
  "spDocumentId": "12345"
}
```

**Example output: AI compliance validation**

```json
{
  "compliance_summary": "The document includes a 7-year retention requirement for invoices and payroll records.",
  "non_compliance_summary": "No reference to retention of vendor contracts.",
  "citations": [
    { "text": "Invoices will be stored for 7 years.", "page": 4 }
  ],
  "confidence": 87
}
```