by Olaf Titel
# Setup & Instructions — fluidX: Create Session, Analyze & Notify

**Goal:** This workflow demonstrates the full fluidX THE EYE integration — starting a live session, inviting both the customer (via SMS) and the service agent (via email), and then accessing the media (photos and videos) created during the session. Captured images are automatically analyzed with AI, uploaded to external storage (such as Google Drive), and a media summary for the session is generated at the end.

The agent receives an email with a link to join the live session. The customer receives an SMS with a link to start sharing their camera. Once both are connected, the agent can view the live feed, and the system automatically stores uploaded images and videos in Google Drive. When the session ends, the workflow collects all media and creates a complete AI-powered session summary (stored and updated in Google Drive). Below is an example screenshot from the customer's phone.

## Prerequisites
- Developer account: https://live.fluidx.digital (activate the **TEST plan**, €0)
- API docs (Swagger): fluidX.digital API

## 🔐 Required Credentials

1. **fluidX API key (HTTP Header Auth)**
   - Credential name in n8n: fluidx API key
   - Header name: x-api-key
   - Header value: YOUR_API_KEY
2. **SMTP account (for outbound email)**
   - Credential name in n8n: SMTP account
   - Configure host, port, username, and password according to your provider
   - Enable TLS/SSL as required
3. **Google Drive account**
   - Used to store photos and videos and to automatically update the session summary files.
4. **OpenAI API (for AI analysis & summary)**
   - Used in the Analyze Images (AI) and Generate Summary parts of the workflow.
   - Credential type: OpenAI
   - Credential name (suggested): OpenAI account
   - API Key: your OpenAI API key
   - Model: e.g. gpt-4.1, gpt-4o, or similar (choose in the OpenAI node settings)

## ⚙️ Configuration (in the "Set Config" node)
- BASE_URL: https://live.fluidx.digital
- company / project / billingcode / sku: adjust as needed
- emailAgent: set before running (empty in template)
- phoneNumberUser: set before running (empty in template)

## Flow Overview
Form Trigger → Create Session → Set Session Vars → Send SMS (User) → Send Email (Agent) → Monitor Media → Analyze Images (AI) → Upload Files to Google Drive → Generate Summary → Update Summary File

The workflow starts automatically when a form submission is received. Users enter the customer's phone number and the agent's email, and the system creates a new fluidX THE EYE session. As media is uploaded during the session, the workflow automatically retrieves, stores, analyzes, and summarizes it — providing a complete end-to-end automation example for remote inspection, support, or field-service use cases.

## Notes
- Do not store real personal data inside the template.
- Manage API keys and secrets via n8n Credentials or environment variables.
- Log out of https://live.fluidx.digital in the agent's browser before testing, to ensure a clean invite flow and session creation.
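For orientation, here is a minimal sketch of what an authenticated fluidX API call looks like with the x-api-key header described above. The endpoint path (`/api/session`) and payload field names are assumptions for illustration only; consult the Swagger docs at fluidX.digital for the actual contract.

```javascript
// Build a request to create a fluidX session, authenticated via x-api-key.
// NOTE: the path "/api/session" and the body fields are hypothetical —
// check the Swagger reference for the real endpoint and schema.
const BASE_URL = "https://live.fluidx.digital";

function buildCreateSessionRequest(apiKey, config) {
  return {
    url: `${BASE_URL}/api/session`, // hypothetical path
    method: "POST",
    headers: {
      "x-api-key": apiKey, // header name from the credential setup above
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      company: config.company,
      project: config.project,
      billingcode: config.billingcode,
      sku: config.sku,
    }),
  };
}

const req = buildCreateSessionRequest("YOUR_API_KEY", {
  company: "acme", project: "inspections", billingcode: "demo", sku: "test",
});
console.log(req.headers["x-api-key"]); // → YOUR_API_KEY
```

In n8n this is what the HTTP Request node does for you once the HTTP Header Auth credential is attached; the sketch only shows where the header and Set Config values end up.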
by Avkash Kakdiya
## How it works
This workflow runs daily to review all active deals and evaluate their likelihood of closing successfully. It enriches deal data with recent engagement activity and applies AI-based behavioral scoring to predict conversion probability. High-risk or stalled deals are flagged automatically, actionable alerts are sent to the sales team, and all analysis is logged for forecasting and tracking.

## Step-by-step

**Trigger and fetch deals**
- Schedule Trigger – Runs the workflow automatically at a fixed time each day.
- Get Active Deals from HubSpot – Retrieves all open, non-closed deals with key properties.
- Formatting Data – Normalizes deal fields such as value, stage, age, contacts, and activity dates.

**Enrich deals with engagement data**
- If – Filters only active deals for further processing.
- Loop Over Items – Processes each deal individually.
- HTTP Request – Fetches engagement associations for the current deal.
- Get an engagement – Retrieves detailed engagement records from HubSpot.
- Extracts Data – Structures engagement content, timestamps, and metadata for analysis.

**Analyze risk, alert, and store results**
- OpenAI Chat Model – Provides the language model used for analysis.
- AI Agent – Evaluates behavioral signals, predicts conversion probability, and recommends actions.
- Format Data – Parses AI output into structured, machine-readable fields.
- Filter Alerts Needed – Identifies deals that need immediate attention.
- Send Slack Alert – Sends detailed alerts for high-risk or stalled deals.
- Append or update row in sheet – Logs analysis results into Google Sheets for reporting.

## Why use this?
- Automatically identify high-risk deals before they stall or fail
- Give sales teams clear, data-driven next actions instead of raw CRM data
- Improve forecasting accuracy with AI-powered probability scoring
- Maintain a historical deal health log for audits and performance reviews
- Reduce manual pipeline reviews while increasing response speed
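The "Formatting Data" step above can be sketched as a small Code-node function. The HubSpot property names used here (`amount`, `dealstage`, `createdate`, `notes_last_updated`) and the 14-day staleness threshold are assumptions; adjust them to your portal's properties and business rules.

```javascript
// Normalize a raw HubSpot deal into comparable fields for scoring.
// Property names and the staleness threshold are assumptions — tune to your CRM.
function normalizeDeal(deal, now = new Date()) {
  const dayMs = 24 * 60 * 60 * 1000;
  const created = new Date(deal.properties.createdate);
  const lastActivity = new Date(deal.properties.notes_last_updated);
  return {
    id: deal.id,
    value: Number(deal.properties.amount) || 0,
    stage: deal.properties.dealstage,
    ageDays: Math.floor((now - created) / dayMs),
    daysSinceActivity: Math.floor((now - lastActivity) / dayMs),
    // simple staleness flag the alert filter can act on
    stalled: (now - lastActivity) / dayMs > 14,
  };
}

const example = normalizeDeal(
  {
    id: "1",
    properties: {
      amount: "5000",
      dealstage: "qualifiedtobuy",
      createdate: "2024-06-01",
      notes_last_updated: "2024-05-20",
    },
  },
  new Date("2024-06-15")
);
console.log(example.stalled); // → true (no activity for 26 days)
```

The AI Agent then sees clean numeric signals (age, inactivity, value) instead of raw CRM strings, which makes its behavioral scoring more consistent.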
by Madame AI
# AI Image Remix & Design Bot for Telegram with BrowserAct & Gemini

This workflow transforms your Telegram bot into an intelligent creative assistant. It can chat conversationally, fetch trending image prompts from PromptHero for inspiration, or perform a deep "remix" of any photo you upload by analyzing its composition and regenerating it with high-fidelity prompt engineering.

## Target Audience
Digital artists, designers, content creators, and hobbyists looking for AI-assisted inspiration and image generation.

## How it works
1. Traffic Control: The workflow starts with a Telegram Trigger and immediately splits traffic: new messages go one way, while interactive button clicks (like "Regenerate") go another.
2. Intent Classification: An AI Agent analyzes text inputs to decide whether the user wants to "Chat" (small talk) or "Start" a creative session (fetch inspiration).
3. Inspiration Mode: If "Start" is detected, BrowserAct scrapes trending prompts from PromptHero and saves them to a Google Sheet.
4. Visual Forensics: If the user uploads an image, an AI Vision Agent (using OpenRouter/Gemini) analyzes it in extreme detail (lighting, composition, subjects) and saves the description.
5. Master Prompt Engineering: Specialized AI Agents expand these inputs (either scraped prompts or image descriptions) into massive, detailed prompts using the "Rule of Multiplication."
6. Production: Google Gemini generates the new image, which is sent back to Telegram with interactive buttons to "Regenerate" or move to the "Next" idea.

⚠️ **Complex Workflow**: This workflow is complex. Please follow along with the tutorial video.

## How to set up
1. Configure Credentials: Connect your Telegram, Google Sheets, BrowserAct, Google Gemini, and OpenRouter accounts in n8n.
2. Prepare BrowserAct: Ensure the Image Remix & Design Bot template is saved in your BrowserAct account.
3. Set up the Google Sheet: Create a Google Sheet with four tabs: PromptHero, Current State, UserImage, and Current Image.
4. Connect the Sheet: Open all Google Sheets nodes in the workflow and paste your spreadsheet ID.
5. Configure Telegram: Ensure your bot is created via BotFather and the API token is added to the Telegram credentials.
6. Activate: Turn on the workflow.

## Requirements
- BrowserAct account with the **Image Remix & Design Bot** template
- Telegram account (Bot Token)
- Google Sheets account
- Google Gemini account
- OpenRouter account (or compatible LLM credentials)

## How to customize the workflow
- Change Art Style: Modify the system prompt in the Generate Image agents to enforce a specific style (e.g., "Cyberpunk," "Watercolor," or "Photorealistic").
- Add More Sources: Update the BrowserAct template to scrape prompts from other sites like Civitai or the Midjourney feed.
- Switch Image Model: Replace the Gemini image-generation node with Stable Diffusion or DALL-E 3 if you prefer different aesthetics.

## Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

## Workflow Guidance and Showcase Video
How To Create Stateful n8n Workflows | AI Image Remix Bot with n8n & BrowserAct & Telegram 🎨
by Automate With Marc
# 🎨 Instagram Carousel & Caption Generator on Autopilot (GPT-5 + Nano Banana + Blotato + Google Sheets)

## Description
Watch the full step-by-step tutorial on YouTube: https://youtu.be/id22R7iBTjo

Disclaimer (self-hosted requirement): This template assumes you have valid API credentials for OpenAI, Wavespeed/Nano Banana, Blotato, and Google. If using n8n Self-Hosted, ensure HTTPS access and credentials are set in your instance.

## How It Works
1. Chat Trigger – Receives a topic/idea (e.g. "5 best podcast tips").
2. Image Prompt Generator (GPT-5) – Creates 5 prompts using the "Hook → Problem → Insight → Solution → CTA" framework.
3. Structured Output Parser – Formats output into a JSON array.
4. Generate Images (Nano Banana) – Converts prompts into high-quality visuals.
5. Wait for Render – Ensures image generation completes.
6. Fetch Rendered Image URLs – Retrieves image links.
7. Upload to Blotato – Hosts and prepares images for posting.
8. Collect Media URLs – Gathers all uploaded image URLs.
9. Log to Google Sheets – Stores image URLs + timestamps for tracking.
10. Caption Generator (GPT-5) – Writes an SEO-friendly caption.
11. Merge Caption + Images – Combines the data.
12. Post Carousel (Blotato) – Publishes directly to Instagram.

## Step-by-Step Setup Instructions

**1) Prerequisites**
- n8n (Cloud or Self-Hosted)
- OpenAI API Key (GPT-5)
- Wavespeed API Key (Nano Banana)
- Blotato API credentials (connected to Instagram)
- Google Sheets OAuth credentials

**2) Add Credentials in n8n**
- OpenAI: Settings → Credentials → Add "OpenAI API"
- Wavespeed: HTTP Header Auth (e.g. Authorization: Bearer <API_KEY>)
- Blotato: Add "Blotato API"
- Google Sheets: Add "Google Sheets OAuth2 API"

**3) Configure & Test**
- Run with an idea like "Top 5 design hacks".
- Check generated images, caption, and logged sheet entry.
- Confirm posting works via Blotato.

**4) Optional**
- Add a Schedule Trigger for weekly automation.
- Insert a Slack approval loop before posting.

## Customization Guide
- ✏️ Change design style: Modify adjectives in the Image Prompt Generator.
- 📑 Adjust number of slides: Change the Split node loop count.
- 💬 Tone of captions: Edit the Caption Generator's system prompt.
- ⏱️ Adjust render wait time: If image generation takes longer, increase the Wait node duration from 30 seconds to 60 seconds or more.
- 🗂️ Log extra data: Add columns in Google Sheets for campaign or topic.
- 🔁 Swap posting tool: Replace Blotato with your scheduler or email node.

## Requirements
- OpenAI API key (GPT-5 or compatible)
- Wavespeed API key (Nano Banana)
- Blotato API credentials
- Google Sheets OAuth credentials
- n8n account (Cloud or Self-Hosted)
by Vinay Gangidi
# Cash Reconciliation with AI

This template automates daily cash reconciliation by comparing your open invoices against bank statement transactions. Instead of manually scanning statements line by line, the workflow uses AI to:
- Match transactions to invoices and assign confidence scores
- Flag unapplied or review-needed payments
- Produce a reconciliation table with clear metrics (match %, unmatched count, etc.)

The end result: faster cash application, fewer errors, and better visibility into your cash flow.

## Good to know
- Each AI transaction-match call will consume credits from your OpenAI account. Check OpenAI pricing for costs.
- OCR is used to extract data from PDF bank statements, so you'll need a Mistral OCR API key.
- This workflow assumes invoices are stored in an Excel or CSV file. You may need to tweak column names to match your file headers.

## How it works
- Import files: The workflow pulls your invoice file (Excel/CSV) and daily bank statement (from OneDrive, Google Drive, or local storage).
- Extract and normalize data: OCR is applied to bank statements if needed. Both data sources are cleaned and aligned into comparable formats.
- AI matching: The AI agent compares statement transactions against invoice records, assigns a confidence score, and flags items that require manual review.
- Reconciliation output: A ready-made table shows matched invoices (with amounts and confidence), unmatched items, and summary stats.

## How to use
- Start with the manual trigger node to test the flow. Once validated, replace it with a schedule trigger to run daily.
- Adjust thresholds (like date tolerances or amount variances) in the code nodes to fit your business rules.
- Review the reconciliation table each day; most of the work is automated, you just handle the exceptions.

## Requirements
- OpenAI API key
- Mistral OCR API key (for PDF bank statements)
- Microsoft OneDrive API key and Microsoft Excel API key
- Access to your invoice file (Excel/CSV) and daily bank statement source

## Setup steps
1. Connect accounts: Enter your API keys (OpenAI, Mistral OCR, OneDrive, Excel).
2. Configure input nodes: Point the Excel/CSV node to your invoice file. Connect the Get Bank Statement node to your statement storage.
3. Configure AI agent: Add your OpenAI API credentials to the AI node.
4. Customize if needed: Update column mappings if your file uses different headers. Adjust matching thresholds and tolerance logic.
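The "thresholds and tolerance logic" mentioned above can be sketched as a deterministic pre-match that runs before the AI agent. Field names (`amount`, `date`, `dueDate`) and the default tolerances are assumptions, not the template's exact code; tune them to your own business rules.

```javascript
// Deterministic first-pass matcher: pair a statement transaction with an
// open invoice when amounts agree within amountTol and dates fall within
// dateTolDays. Anything unmatched is left for the AI agent / manual review.
// Field names and thresholds are illustrative assumptions.
function matchTransaction(txn, invoices, { amountTol = 0.01, dateTolDays = 5 } = {}) {
  const dayMs = 24 * 60 * 60 * 1000;
  for (const inv of invoices) {
    const amountOk = Math.abs(txn.amount - inv.amount) <= amountTol;
    const daysApart = Math.abs(new Date(txn.date) - new Date(inv.dueDate)) / dayMs;
    if (amountOk && daysApart <= dateTolDays) {
      return { invoiceId: inv.id, confidence: 0.95 }; // exact-amount, near-date match
    }
  }
  return { invoiceId: null, confidence: 0 }; // unmatched: flag for review
}
```

Running this pass first keeps the expensive AI matching calls for the genuinely ambiguous cases, which also reduces OpenAI credit consumption.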
by Gabriela Macovei
# WhatsApp Receipt OCR & Data Extraction Suite

Categories: Accounting Automation • OCR Processing • AI Data Extraction • Business Tools

This workflow transforms WhatsApp into a fully automated receipt-processing system using advanced OCR, multi-model AI parsing, and structured data storage. By combining LlamaParse, Claude (OpenRouter), Gemini, Google Sheets, and Twilio, it eliminates manual data entry and delivers instant, reliable receipt digitization for any business.

## What This Workflow Does
When a user sends a receipt photo or PDF via WhatsApp, the automation:
1. Receives the file through Twilio WhatsApp
2. Uploads and parses it with LlamaParse (high-res OCR + invoice preset)
3. Extracts structured data using Claude + Gemini + a strict JSON parser
4. Cleans and normalizes the data (dates, ABN, vendor, tax logic)
5. Uploads the receipt to Google Drive
6. Logs the extracted fields into a Google Sheet
7. Replies to the user on WhatsApp with the extracted details
8. Asks for confirmation via quick-reply buttons
9. Updates the Google Sheet based on user validation

The result is a fast, scalable, hands-free system for converting raw receipt photos into clean, structured accounting data.

## Key Benefits
- No friction for users: receipts are submitted simply by sending a WhatsApp message.
- High-accuracy OCR: LlamaParse extracts text, tables, totals, vendors, tax, and ABN with impressive reliability.
- Enterprise-grade data validation: complex logic ensures the correct interpretation of GST, included taxes, or unidentified tax amounts.
- Multi-model extraction: Claude and Gemini both analyse the OCR output for more reliable results; there is one primary LLM and a secondary one.
- Hands-off accounting: every receipt becomes a standardized row in Google Sheets.
- Two-way WhatsApp communication: users can confirm or reject extracted data instantly.
- Scalable architecture: perfect for businesses handling dozens or thousands of receipts monthly.

## How It Works (Technical Overview)

**1. Twilio → Webhook Trigger**
The workflow starts when a WhatsApp message containing a media file hits your Twilio webhook.

**2. Initial Google Sheets Logging**
The MessageSid is appended to your tracking sheet to ensure every receipt is traceable.

**3. LlamaParse OCR**
The file is sent to LlamaParse with the invoice preset, high-resolution OCR, and table extraction enabled. The workflow checks job completion before moving further.

**4. LLM Data Extraction**
The OCR markdown is analyzed using:
- Claude Sonnet 4.5 (via OpenRouter)
- Gemini 2.5 Pro
- A strict structured JSON output parser
- Custom JS cleanup logic

The system extracts:
- Vendor
- Cost
- Tax (with multi-rule Australian GST logic)
- Currency
- Date (parsed + normalized)
- ABN (validated and digit-normalized)

**5. Google Drive Integration**
The uploaded receipt is stored, shared, and linked back to the record in Sheets.

**6. Google Sheets Update**
Fields are appended/updated following a clean schema: Vendor, Cost, Tax, Date, Currency, ABN, public Drive link, and Status (Confirmed / Not confirmed).

**7. User Response Flow**
The user receives a summary of extracted data via WhatsApp. Buttons allow them to approve or reject its accuracy, and the Google Sheet updates accordingly.

## Target Audience
This workflow is ideal for:
- Accounting & bookkeeping firms
- Outsourced finance departments
- Small businesses tracking expenses
- Field workers submitting receipts
- Automation agencies offering DFY systems
- CFOs wanting real-time expense visibility

## Use Cases
- Expense reconciliation
- Automated bookkeeping
- Receipt digitization & compliance
- Real-time employee expense submission
- Multi-client automation at accounting agencies

## Required Integrations
- Twilio WhatsApp (Business API number + webhook)
- LlamaParse API
- OpenRouter (Claude Sonnet)
- Google Gemini API
- Google Drive
- Google Sheets

## Setup Instructions (High-Level)
1. Import the n8n workflow.
2. Connect your Twilio WhatsApp account.
3. Add API credentials for: LlamaParse, OpenRouter, Google Gemini, Google Drive, Google Sheets.
4. Create your target Google Sheet.
5. Configure your WhatsApp webhook URL in Twilio.
6. Test with a sample receipt.

## Why This System Works
- Users send receipts using a tool they already use daily (WhatsApp).
- LlamaParse provides state-of-the-art OCR for low-quality receipts.
- Using multiple LLMs drastically increases accuracy for vendor, ABN, and tax extraction.
- Advanced normalization logic ensures data is clean and accounting-ready.
- Google Sheets enables reliable storage, reporting, and future integrations.
- End-to-end automation replaces hours of manual work with instant processing.

## Watch My Complete Build Process
Want to see exactly how I built this entire system from scratch? I walk through the complete development process on my YouTube channel.
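The "validated and digit-normalized" ABN step mentioned in the extraction section can be done deterministically, independent of the LLMs, because the Australian Taxation Office publishes the ABN checksum: strip non-digits, subtract 1 from the first digit, multiply by fixed weights, and check that the sum is divisible by 89. This sketch is not the template's exact cleanup code, just the standard algorithm:

```javascript
// Validate an ABN using the published ATO weighted-checksum rule.
const ABN_WEIGHTS = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19];

function isValidAbn(raw) {
  const digits = String(raw).replace(/\D/g, ""); // digit-normalize first
  if (digits.length !== 11) return false;
  const sum = digits
    .split("")
    .map(Number)
    .map((d, i) => (i === 0 ? d - 1 : d) * ABN_WEIGHTS[i])
    .reduce((a, b) => a + b, 0);
  return sum % 89 === 0;
}

console.log(isValidAbn("51 824 753 556")); // a commonly cited valid test ABN → true
```

Validating the ABN in code catches LLM transcription slips (a single misread digit almost always breaks the checksum) before the row reaches Google Sheets.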
by Muhammad Asadullah
# Daily Blog Automation Workflow

Fully automated blog creation system using n8n + AI Agents + Image Generation

## Overview
This workflow automates the entire blog creation pipeline—from topic research to final publication. Three specialized AI agents collaborate to produce publication-ready blog posts with custom images, all saved directly to your Supabase database.

## How It Works

### 1. Research Agent (Topic Discovery)
- **Triggers**: Runs on a schedule (default: daily at 4 AM)
- **Process**:
  - Fetches existing blog titles from Supabase to avoid duplicates
  - Uses Google Search + RSS feeds to identify trending topics in your niche
  - Scrapes competitor content to find content gaps
  - Generates detailed topic briefs with SEO keywords, search intent, and differentiation angles
- **Output**: Comprehensive research document with SERP analysis and content strategy

### 2. Writer Agent (Content Creation)
- **Triggers**: Receives research from Agent 1
- **Process**:
  - Writes the full blog article based on the research brief
  - Follows strict SEO and readability guidelines (no AI fluff, natural tone, actionable content)
  - Structures content with proper HTML markup
  - Includes key sections: hook, takeaways, frameworks, FAQs, CTAs
  - Places image placeholders with mock URLs (https://db.com/image_1, etc.)
- **Output**: Complete JSON object with title, slug, excerpt, tags, category, and full HTML content

### 3. Image Prompt Writer (Visual Generation)
- **Triggers**: Receives blog content from Agent 2
- **Process**:
  - Analyzes blog content to determine the number and type of images needed
  - Generates detailed 150-word prompts for each image (feature image + content images)
  - Creates prompts optimized for the Nano-Banana image model
  - Names each image descriptively for SEO
- **Output**: Structured prompts for 3-6 images per blog post

### 4. Image Generation Pipeline
- **Process**:
  - Loops through each image prompt
  - Generates images via the Nano-Banana API (Wavespeed.ai)
  - Downloads and converts images to PNG
  - Uploads to a Supabase storage bucket
  - Generates permanent signed URLs
  - Replaces mock URLs in the HTML with real image URLs
- **Output**: Blog HTML with all images embedded

### 5. Publication
- Final blog post saved to the Supabase blogs table as a draft
- Ready for immediate publishing or review

## Key Features
- ✅ Duplicate Prevention: Checks existing blogs before researching new topics
- ✅ SEO Optimized: Natural language, proper heading structure, keyword integration
- ✅ Human-Like Writing: No robotic phrases, varied sentence structure, actionable advice
- ✅ Custom Images: Generated specifically for each blog's content
- ✅ Fully Structured: JSON output with all metadata (tags, category, excerpt, etc.)
- ✅ Error Handling: Automatic retries with wait periods between agent calls
- ✅ Tool Integration: Google Search, URL scraping, RSS feeds for research

## Setup Requirements

### 1. API Keys Needed
- **Google Gemini API**: For Gemini 2.5 Pro/Flash models (content generation/writing)
- **Groq API (optional)**: For the Kimi-K2-Instruct model (research/writing)
- **Serper.dev API**: For Google Search (2,500 free searches/month)
- **Wavespeed.ai API**: For Nano-Banana image generation
- **Supabase Account**: For database and image storage

### 2. Supabase Setup
- Create a blogs table with fields: title, slug, excerpt, category, tags, featured_image, status, featured, content
- Create a storage bucket for blog images
- Configure the bucket as public or use signed URLs

### 3. Workflow Configuration
Update these placeholders:
- **RSS Feed URLs**: Replace [your website's rss.xml] with your site's RSS feed
- **Storage URLs**: Update Supabase storage paths in the "Upload object" and "Generate presigned URL" nodes
- **API Keys**: Add your credentials to all HTTP Request nodes
- **Niche/Brand**: Customize the Research Agent system prompt with your industry keywords
- **Writing Style**: Adjust the Writer Agent prompt for your brand voice

## Customization Options

### Change Image Provider
Replace the "nano banana" node with:
- Gemini Imagen 3/4
- DALL-E 3
- Midjourney API
- Any Wavespeed.ai model

### Adjust Schedule
Modify the "Schedule Trigger" to run:
- Multiple times daily
- Specific days of the week
- On-demand via webhook

### Alternative Research Tools
Replace Serper.dev with:
- Perplexity API (included as an alternative node)
- Custom web scraping
- Different search providers

## Output Format
```json
{
  "title": "Your SEO-Optimized Title",
  "slug": "your-seo-optimized-title",
  "excerpt": "Compelling 2-3 sentence summary with key benefits.",
  "category": "Your Category",
  "tags": ["tag1", "tag2", "tag3", "tag4"],
  "author_name": "Your Team Name",
  "featured": false,
  "status": "draft",
  "content": "...complete HTML with embedded images..."
}
```

## Performance Notes
- **Average runtime**: 15-25 minutes per blog post
- **Cost per post**: ~$0.10-0.30 (depending on API usage)
- **Image generation**: 10-15 seconds per image with Nano-Banana
- **Retry logic**: Automatically handles API timeouts with 5-15 minute wait periods

## Best Practices
1. Review Before Publishing: The workflow saves posts with "draft" status for human review
2. Monitor API Limits: Track Serper.dev searches and image generation quotas
3. Test Custom Prompts: Adjust the Research/Writer prompts to match your brand
4. Image Quality: Review generated images; regenerate if needed
5. SEO Validation: Check slugs and meta descriptions before going live

## Workflow Architecture
3 main phases:
1. Research → Writer → Image Prompts (sequential AI agent chain)
2. Image Generation → Upload → URL Replacement (loop-based processing)
3. Final Assembly → Database Insert (single save operation)

Error handling:
- Wait nodes between agents prevent rate limiting
- Retry logic on agent failures (max 2 retries)
- Conditional checks ensure content quality before proceeding

Result: Hands-free blog publishing that maintains quality while saving 3-5 hours per post.
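The URL-replacement step in the image pipeline can be sketched as a single regex substitution: the Writer Agent's mock placeholders (`https://db.com/image_1`, `https://db.com/image_2`, …) are swapped for the real signed Supabase URLs produced by the upload loop. The placeholder pattern comes from the template; the shape of the `signedUrls` array is an assumption for illustration.

```javascript
// Replace the Writer Agent's mock image URLs with real signed URLs.
// signedUrls[0] corresponds to image_1, signedUrls[1] to image_2, and so on.
// Placeholders without a matching upload are left intact rather than broken.
function embedImageUrls(html, signedUrls) {
  return html.replace(
    /https:\/\/db\.com\/image_(\d+)/g,
    (match, n) => signedUrls[Number(n) - 1] ?? match
  );
}

const html = '<img src="https://db.com/image_1"><img src="https://db.com/image_2">';
console.log(embedImageUrls(html, ["https://example.supabase.co/a.png"]));
```

Leaving unmatched placeholders untouched makes a partial image-generation failure visible at review time instead of silently producing broken `<img>` tags.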
by SOLOVIEVA ANNA
## Who this is for
- Users who frequently receive images or documents via LINE or email
- Teams needing automatic OCR + AI summarization
- Anyone who wants hands-free document processing and structured storage

## How it works
1. Triggers: LINE Webhook and Gmail IMAP Trigger capture incoming messages or emails.
2. Source Tagging: Inputs are tagged as LINE or EMAIL for later branching.
3. File Handling: Files are uploaded to Google Drive and converted for analysis.
4. OCR: An AI vision model extracts all readable text from the document image.
5. AI Summarization: A text model produces a concise summary.
6. Logging: The summary is appended to Google Sheets for record-keeping.
7. Email Drafting: A Gmail Draft is generated containing the OCR text and summary.

## How to set up
1. Connect your LINE, Gmail, OpenAI, and Google Drive/Sheets credentials.
2. Update folder IDs, sheet names, and authentication fields as needed.
3. Optional: customize the summarization instructions.

## Customization ideas
- Add translation or classification steps
- Modify the output format for Slack/Notion
- Store files in date-based Drive folders
by Ehsan
# Analyze food ingredients from Telegram photos using Gemini and Airtable

## 🛡️ Personal Ingredient Bodyguard
Turn your Telegram bot into an intelligent food safety scanner. This workflow analyzes photos of ingredient labels sent via Telegram, extracts the text using AI, and cross-references it against your personal database of "Good" and "Bad" ingredients in Airtable.

It solves the problem of manually reading tiny, complex labels for allergies or dietary restrictions. Whether you are Vegan, Halal, allergic to nuts, or just avoiding specific additives, this workflow acts as a strict, personalized bodyguard for your diet. It even features a customizable "Persona" (like a Sarcastic Bodyguard) to make safety checks fun.

## 🎯 Who is it for?
- People with specific dietary restrictions (Vegan, Gluten-free, Keto)
- Individuals with food allergies (Nuts, Dairy, Shellfish)
- Special dietary observers (Halal, Kosher)
- Health-conscious shoppers avoiding specific additives (e.g., E120, Aspartame)

## 🚀 How it works
1. Trigger: You send a photo of a product label to your Telegram Bot.
2. Fetch Rules: The workflow retrieves your active "Watchlist" (ingredients to avoid/prefer) and "Persona" settings from Airtable.
3. Vision & Logic: It uses an AI Vision model to extract text from the image (OCR) and Google Gemini to analyze the text against your strict veto rules (e.g., "Safe" only if ZERO bad items are found).
4. Response: The bot replies instantly on Telegram with a Safe/Unsafe verdict, highlighting detected ingredients using HTML formatting.
5. Log: The result is saved back to Airtable for your records.

## ⚙️ How to set up
This workflow relies on a specific Airtable structure to function as the "Brain."

**Set up Airtable**
- Sign up for Airtable: Click here
- Copy the required Base: Click here to copy the "Ingredients Brain" base
- Connect Airtable to n8n (5-min guide): Watch Tutorial

**Set up Telegram**
- Message @BotFather on Telegram to create a new bot and get your API Token.
- Add your Telegram credentials in n8n.

**Configure AI**
- Add your Google Gemini API credentials.
- Note on OCR: This template is configured to use a local LLM for OCR to save costs (via the OpenAI-compatible node). If you do not have a local model running, simply swap the "OpenAI Chat Model" node for a standard GPT-4o or Gemini Vision node.

## 📋 Requirements
- n8n (Cloud or Self-hosted)
- Airtable account (free tier works)
- Telegram account
- Google Gemini API Key
- Local LLM (optional, for free OCR) OR an OpenAI/Gemini key (for standard cloud vision)

## 🎨 How to customize
- Change the Persona: Go to the "Preferences" table in Airtable to change the bot's personality (e.g., "Helpful Nutritionist") and output language.
- Update Ingredients: Add or remove items in the "Watchlist" table. Mark them as "Good Stuff" or "Bad Stuff" and set Status to "Active".
- Adjust Sensitivity: The AI prompt in the "AI Agent" node is set to strict "Veto" mode (Bad overrides Good). You can modify the system prompt to change this logic.

## ⚠️ Disclaimer
This tool is for informational purposes only.
- Not Medical Advice: Do not rely on this for life-threatening allergies.
- AI Limitations: OCR can misread text, and AI can hallucinate.
- Verify: Always double-check the physical product label.

Use at your own risk.
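The strict "Veto" rule described above (any Bad item overrides everything Good) is enforced by the AI Agent's prompt in the template, but the same decision can be expressed deterministically. This sketch shows the rule only; the watchlist field names (`name`, `type`, `status`) mirror the Airtable columns described above and the simple substring match is an assumption:

```javascript
// Strict veto: a product is Safe only if ZERO active "Bad Stuff" items
// appear in the extracted ingredient text. Any bad hit wins over good hits.
function verdict(ingredientText, watchlist) {
  const text = ingredientText.toLowerCase();
  const hits = watchlist.filter(
    (w) => w.status === "Active" && text.includes(w.name.toLowerCase())
  );
  const bad = hits.filter((w) => w.type === "Bad Stuff").map((w) => w.name);
  return bad.length > 0
    ? { safe: false, detected: bad } // veto: any bad item makes it Unsafe
    : { safe: true, detected: hits.map((w) => w.name) };
}
```

A deterministic check like this is a useful guardrail alongside the LLM verdict, since OCR noise or hallucination can otherwise flip a Safe/Unsafe call.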
by vinci-king-01
# Meeting Notes Distributor – Mailchimp and MongoDB

This workflow automatically converts raw meeting recordings or written notes into concise summaries, stores them in MongoDB for future reference, and distributes the summaries to all meeting participants through Mailchimp. It is ideal for teams that want to keep everyone aligned without manual copy-and-paste or email chains.

## Pre-conditions/Requirements

**Prerequisites**
- n8n instance (self-hosted or cloud)
- Audio transcription service or written notes available via HTTP endpoint
- MongoDB database (cloud or self-hosted)
- Mailchimp account with an existing Audience list

**Required Credentials**
- MongoDB – Connection string with insert permission
- Mailchimp API Key – To send campaigns
- (Optional) HTTP Service Auth – If your transcription/notes endpoint is secured

**Specific Setup Requirements**

| Component | Example Value | Notes |
|------------------|--------------------------------------------|-----------------------------------------------------|
| MongoDB Database | meeting_notes | Database in which summaries will be stored |
| Collection Name | summaries | Collection automatically created if it doesn’t exist|
| Mailchimp List | Meeting Participants | Audience list containing participant email addresses|
| Notes Endpoint | https://example.com/api/meetings/{id} | Returns raw transcript or note text (JSON) |

## How it works

Key steps:
- Schedule Trigger: Fires daily (or on-demand) to check for new meeting notes.
- HTTP Request: Downloads raw notes or transcript from your endpoint.
- Code Node: Uses an AI or custom function to generate a concise summary.
- If Node: Skips processing if the summary already exists in MongoDB.
- MongoDB: Inserts the new summary document.
- Split in Batches: Splits participants into Mailchimp-friendly batch sizes.
- Mailchimp: Sends personalized summary emails to each participant.
- Wait: Ensures rate limits are respected between Mailchimp calls.
- Merge: Consolidates success/failure results for logging or alerting.

## Set up steps
Setup time: 15-25 minutes
1. Clone the workflow: Import or copy the JSON into your n8n instance.
2. Configure Schedule Trigger: Set the cron expression (e.g., every weekday at 18:00).
3. Set HTTP Request URL: Replace the placeholder with your transcription/notes endpoint. Add auth headers if needed.
4. Add MongoDB Credentials: Enter your connection string in the MongoDB node.
5. Customize Summary Logic: Open the Code node to tweak summarization length, language, or model.
6. Mailchimp Credentials: Supply your API key and select the correct Audience list.
7. Map Email Fields: Ensure participant emails are supplied from transcription metadata or an external source.
8. Test Run: Execute once manually to verify the MongoDB insert and email delivery.
9. Activate Workflow: Enable the workflow so it runs on its defined schedule.

## Node Descriptions

Core workflow nodes:
- Schedule Trigger – Initiates the workflow at predefined intervals.
- HTTP Request – Retrieves the latest meeting data (transcript or notes).
- Code – Generates a summarized version of the meeting content.
- If – Checks MongoDB for duplicates to avoid re-sending.
- MongoDB – Stores finalized summaries for archival and audit.
- SplitInBatches – Breaks the participant list into manageable chunks.
- Mailchimp – Sends summary emails via campaigns or transactional messages.
- Wait – Pauses between batches to honor Mailchimp rate limits.
- Merge – Aggregates success/failure responses for logging.

Data flow:
- Schedule Trigger → HTTP Request → Code → If
- If the summary is new: MongoDB → SplitInBatches → Mailchimp → Wait
- Merge collates all results

## Customization Examples

**1. Change Summary Length**

```javascript
// Inside the Code node. summarize() stands in for your own helper or
// external AI call — n8n does not provide one built in.
const rawText = items[0].json.text;
const maxSentences = 5; // adjust to 3, 7, etc.
items[0].json.summary = summarize(rawText, maxSentences);
return items;
```

**2. Personalize Mailchimp Subject**

```javascript
// In the Set node before Mailchimp (note the template-literal backticks)
items[0].json.subject = `Recap: ${items[0].json.meetingTitle} – ${new Date().toLocaleDateString()}`;
return items;
```

## Data Output Format
The workflow outputs structured JSON data:

```json
{
  "meetingId": "abc123",
  "meetingTitle": "Quarterly Planning",
  "summary": "Key decisions on roadmap, budget approvals...",
  "participants": ["alice@example.com", "bob@example.com"],
  "mongoInsertId": "65d9278fa01e3f94b1234567",
  "mailchimpBatchIds": ["2024-01-01T12:00:00Z#1", "2024-01-01T12:01:00Z#2"]
}
```

## Troubleshooting

Common issues:
- Mailchimp rate-limit errors – Increase the Wait node delay or reduce the batch size.
- Duplicate summaries – Ensure the If node correctly queries MongoDB using the meeting ID as a unique key.

Performance tips:
- Keep batch sizes under 500 to stay well within Mailchimp limits.
- Offload AI summarization to external services if Code node execution time is high.

Pro tips:
- Store full transcripts in MongoDB GridFS for future reference.
- Use environment variables in n8n for all API keys to simplify workflow export/import.
- Add a notifier (e.g., Slack node) after Merge to alert admins on failures.

This is a community template provided "as-is" without warranty. Always validate the workflow in a test environment before using it in production.
by Cheng Siong Chin
How It Works

This workflow automates veterinary clinic operations and client communications for animal hospitals and veterinary practices managing appointments, inventory, and patient care. It solves the dual challenge of maintaining medical supply levels while delivering personalized pet care updates and appointment coordination.

The system processes scheduled inventory data through AI-powered quality validation and restocking recommendations, then branches into two intelligent pathways: supplier coordination via email for replenishment, and client engagement through personalized appointment reminders, follow-up care instructions, and satisfaction surveys distributed via email and messaging platforms. This eliminates manual inventory tracking, reduces appointment no-shows, and ensures consistent post-visit care communication.

Setup Steps

1. Configure a webhook or schedule trigger for the veterinary management system's inventory data sync
2. Add AI model API keys for inventory quality validation
3. Connect the supplier email system with template configurations for automated purchase orders
4. Set up client communication channels with appointment and care instruction templates
5. Integrate the customer database for pet records and appointment history

Prerequisites

- Veterinary practice management software with API/webhook capabilities
- AI service API access

Use Cases

- Multi-location veterinary hospitals coordinating inventory across sites

Customization

- Modify AI prompts for species-specific care instructions

Benefits

- Reduces supply management time by 75% and prevents critical medication stockouts
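The two pathways described above amount to a routing decision on each validated inventory item. A sketch of what that branch might look like (the field names `quantity` and `reorderThreshold` are assumptions for illustration, not the template's actual schema):

```javascript
// Hypothetical branch logic after AI inventory validation:
// low stock routes to supplier coordination, everything else
// continues to the client-engagement pathway.
function routeInventoryItem(item) {
  if (item.quantity <= item.reorderThreshold) {
    return 'supplier-email'; // replenishment / purchase order
  }
  return 'client-engagement'; // reminders, follow-ups, surveys
}
```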
by Rajeet Nair
Overview

This workflow enables GDPR-compliant document processing by detecting, masking, and securely handling personally identifiable information (PII) before AI analysis. It ensures that sensitive data is never exposed to AI systems by replacing it with tokens, while still allowing controlled re-injection of original values when permitted. The workflow also maintains full audit logs for compliance and traceability.

How It Works

1. **Document Upload & Configuration** – Receives documents via webhook and initializes configuration such as document ID, thresholds, and database tables.
2. **Text Extraction** – Extracts raw text from uploaded documents for processing.
3. **Multi-Detector PII Detection** – Detects emails, phone numbers, ID numbers, and addresses using regex and AI-based detection.
4. **PII Aggregation & Conflict Resolution** – Merges detections, resolves overlaps, removes duplicates, and builds a unified PII map.
5. **Tokenization & Vault Storage** – Replaces sensitive data with secure tokens and stores the original values in a database vault.
6. **Masking & Validation** – Generates masked text and verifies that all PII has been successfully removed before AI processing.
7. **AI Processing (Masked Data)** – Processes the document using AI while preserving tokens to prevent exposure of sensitive information.
8. **Re-Injection Controller** – Determines which fields are allowed to restore original PII based on permissions.
9. **Secure Retrieval & Restoration** – Retrieves original values from the vault and restores them only where permitted.
10. **Audit Logging** – Stores metadata, detected PII types, and re-injection events for compliance tracking.
11. **Error Handling & Alerts** – Blocks processing and triggers alerts if masking fails or compliance rules are violated.
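The regex side of the detection and tokenization steps can be sketched as follows. The patterns and the `[[TYPE_N]]` token format are assumptions for illustration; the workflow also uses AI-based detection, which is not shown here:

```javascript
// Sketch only: detect a couple of PII types by regex, replace each match
// with a token, and keep the token -> original mapping for the vault.
const PATTERNS = {
  EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  PHONE: /\+?\d[\d\s().-]{7,}\d/g,
};

function maskPII(text) {
  const vault = {}; // token -> original value (persisted to the DB vault)
  let counter = 0;
  let masked = text;
  for (const [type, pattern] of Object.entries(PATTERNS)) {
    masked = masked.replace(pattern, (match) => {
      const token = `[[${type}_${++counter}]]`;
      vault[token] = match;
      return token;
    });
  }
  return { masked, vault };
}
```

The validation step then only has to confirm that the detector patterns no longer match anything in `masked` before the text is handed to the AI.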
Setup Instructions

1. Activate the webhook and upload a document (PDF or another supported file type)
2. Configure AI credentials (Anthropic / OpenAI)
3. Set database credentials for the PII vault and audit logs
4. Adjust detection thresholds and compliance settings if needed
5. Execute the workflow and review the outputs and logs

Use Cases

- GDPR-compliant document processing pipelines
- Secure AI document analysis with PII protection
- Automated redaction and tokenization systems
- Financial, legal, or healthcare document processing
- Privacy-first AI workflows for sensitive data

Requirements

- n8n (latest version recommended)
- Anthropic or OpenAI API credentials
- PostgreSQL (or a compatible database) for the vault and audit logs
- Input documents (PDF or text-based files)
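The re-injection controller described in this workflow can be sketched as a permission filter over tokens. This assumes tokens of the form `[[TYPE_N]]` and a vault mapping tokens back to original values; both are illustrative assumptions, not the template's exact format:

```javascript
// Sketch only: restore original values solely for token types the
// permission controller allows; everything else stays masked.
function reinject(maskedText, vault, allowedTypes) {
  return maskedText.replace(/\[\[([A-Z]+)_\d+\]\]/g, (token, type) =>
    allowedTypes.includes(type) && vault[token] !== undefined
      ? vault[token]
      : token
  );
}
```

Each restoration would additionally be written to the audit log so re-injection events remain traceable.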