by Intuz
This n8n template from Intuz provides a complete solution to automate a powerful, AI-driven "Chat with your PDF" bot on Telegram. It uses Retrieval-Augmented Generation (RAG) to let users upload documents, which are then indexed into a vector database so the bot can answer questions based only on the provided content.

## Who's this workflow for?

- Researchers & students
- Legal & compliance teams
- Business analysts & financial advisors
- Anyone needing to quickly find information within large documents

## How it works

This workflow has two primary functions: indexing a new document and answering questions about it.

### 1. Uploading & indexing a document

1. A user sends a PDF file to the Telegram bot.
2. n8n downloads the document, extracts the text, and splits it into small, manageable chunks.
3. Using Google Gemini, each text chunk is converted into a numerical representation (an "embedding").
4. These embeddings are stored in a Pinecone vector database, making the document's content searchable.
5. The bot sends a confirmation message to the user that the document has been successfully saved.

### 2. Asking a question (RAG)

1. A user sends a regular text message (a question) to the bot.
2. n8n converts the user's question into an embedding using Google Gemini.
3. It then searches the Pinecone database for the text chunks from the uploaded PDF that are most relevant to the question.
4. These relevant chunks (the "context") are sent to the Gemini chat model along with the original question.
5. Gemini generates an accurate answer based only on the provided context and sends it back to the user in Telegram.

## Key requirements to use this template

1. **n8n instance & required nodes:** An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
2. **Telegram account:** A Telegram bot created via the BotFather, along with its API token.
3. **Google Gemini AI account:** A Google Cloud account with the Vertex AI API enabled and an associated API key.
4. **Pinecone account:** A Pinecone account with an API key. You must have a vector index created in Pinecone. For use with Google Gemini's embedding-001 model, the index must be configured with 768 dimensions.

## Setup instructions

1. **Telegram configuration:** In the "Telegram Message Trigger" node, create a new credential and add your Telegram bot's API token. Do the same for the "Telegram Response" and "Telegram Response about Database" nodes.
2. **Pinecone configuration:** In both "Pinecone Vector Store" nodes, create a new credential and add your Pinecone API key. In the "Index" field of both nodes, enter the name of your pre-configured Pinecone index (e.g., telegram).
3. **Google Gemini configuration:** In all three Google Gemini nodes (Embeddings Google Gemini, Embeddings Google Gemini1, and Google Gemini Chat Model), create a new credential and add your Google Gemini (PaLM) API key.
4. **Activate and use:** Save the workflow and toggle the "Active" switch to ON. To use it, first send a PDF document to your bot and wait for the confirmation message. Then you can start asking questions about the content of that PDF.

## Connect with us

- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get started: https://n8n.partnerlinks.io/intuz
- For custom workflow automation, click here: Get Started
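The retrieval step above can be pictured with a small sketch: Pinecone ranks stored chunk embeddings by cosine similarity to the question embedding and returns the top matches. The vectors below are toy 3-dimensional examples; Gemini's embedding-001 model actually produces 768-dimensional vectors, and Pinecone performs this search server-side.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored chunks against the question embedding and keep the top k.
function topKChunks(questionEmbedding, chunks, k = 3) {
  return chunks
    .map((c) => ({ ...c, score: cosineSimilarity(questionEmbedding, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The top-k chunks become the "context" passed to the chat model alongside the original question.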
by Jameson Kanakulya
# Automated Content Page Generator with AI, Tavily Research, and Supabase Storage

> ⚠️ **Self-Hosted Disclaimer:** This template requires a self-hosted n8n installation and external service credentials (OpenAI, Tavily, Google Drive, NextCloud, Supabase). It cannot run on n8n Cloud due to dependency requirements.

## Overview

Transform simple topic inputs into professional, multi-platform content automatically. This workflow combines AI-powered content generation with intelligent research and seamless storage integration to create website content, blog articles, and landing pages optimized for different audiences.

## Key Features

- **Automated Research**: Uses Tavily's advanced search to gather relevant, up-to-date information
- **Multi-Platform Content**: Generates optimized content for websites, blogs, and landing pages
- **Image Management**: Downloads from Google Drive and uploads to NextCloud with public URL generation
- **Database Integration**: Stores all content in Supabase for easy retrieval
- **Error Handling**: Built-in error management workflow for reliability
- **Content Optimization**: AI-driven content strategy with trend analysis and SEO optimization

## Required Services & APIs

- **n8n**: Self-hosted instance (required)
- **OpenAI**: GPT-4 API access for content generation
- **Tavily**: Research API for content discovery
- **Google Drive**: Image storage and retrieval
- **Google Sheets**: Content input and workflow triggering
- **NextCloud**: Image hosting and public URL generation
- **Supabase**: Database storage for generated content

## Setup Instructions

### Prerequisites

Before setting up this workflow, ensure you have:

- A self-hosted n8n installation
- API credentials for all required services
- A database table created in Supabase

### Step 1: Service Account Configuration

**OpenAI Setup**

- Create an OpenAI account at platform.openai.com
- Generate an API key from the API Keys section
- In n8n, create new OpenAI credentials using your API key
- Test the connection to ensure GPT-4 access

**Tavily Research Setup**

- Sign up at tavily.com
- Get your API key from the dashboard
- Add Tavily credentials in n8n
- Configure search depth to "advanced" for best results

**Google Services Setup**

- Create a Google Cloud project
- Enable the Google Drive API and Google Sheets API
- Create OAuth2 credentials
- Configure Google Drive and Google Sheets credentials in n8n
- Share your input spreadsheet with the service account

**NextCloud Setup**

- Install NextCloud or use a hosted solution
- Create an application password for API access
- Configure NextCloud credentials in n8n
- Create an /images/ folder for content storage

**Supabase Setup**

- Create a Supabase project at supabase.com
- Create a table with the following structure:

```sql
CREATE TABLE works (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  content TEXT NOT NULL,
  image_url TEXT,
  category TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
```

- Get the project URL and service key from settings
- Configure Supabase credentials in n8n

### Step 2: Google Sheets Input Setup

Create a Google Sheets document with the following columns:

- **TITLE**: Topic or title for content generation
- **IMAGE_URL**: Google Drive sharing URL for the associated image

Example format:

| TITLE | IMAGE_URL |
|-------|-----------|
| AI Chatbot Implementation | https://drive.google.com/file/d/your-file-id/view |
| Digital Marketing Trends 2024 | https://drive.google.com/file/d/another-file-id/view |

### Step 3: Workflow Import and Configuration

1. Import the workflow JSON into your n8n instance
2. Configure all credential connections:
   - Link OpenAI credentials to the "OpenAI_GPT4_Model" node
   - Link Tavily credentials to the "Tavily_Research_Agent" node
   - Link Google credentials to the "Google_Sheets_Trigger" and "Google_Drive_Image_Downloader" nodes
   - Link NextCloud credentials to the "NextCloud_Image_Uploader" and "NextCloud_Public_URL_Generator" nodes
   - Link Supabase credentials to the "Supabase_Content_Storage" node
3. Update the Google Sheets Trigger node:
   - Set your spreadsheet ID in the documentId field
   - Configure the polling frequency (default: every minute)
4. Test each node connection individually before activating

### Step 4: Error Handler Setup (Optional)

The workflow references an error handler workflow (GWQ4UI1i3Z0jp3GF). Either:

- Create a simple error notification workflow with this ID
- Remove the error handling references if not needed
- Update the workflow ID to match your error handler

### Step 5: Workflow Activation

1. Save all node configurations
2. Test the workflow with a sample row in your Google Sheet
3. Verify content generation and storage in Supabase
4. Activate the workflow for continuous monitoring

## How It Works

### Workflow Process

1. **Trigger**: Google Sheets monitors for new rows with content topics
2. **Research**: Tavily searches for 3 relevant articles about the topic
3. **Content Generation**: The AI agent creates multi-platform content (website, blog, landing page)
4. **Content Cleaning**: Text processing removes formatting artifacts
5. **Image Processing**: Downloads the image from Google Drive and uploads it to NextCloud
6. **URL Generation**: Creates public sharing links for images
7. **Storage**: Saves the final content package to the Supabase database

### Content Output Structure

Each execution generates:

- **Optimized Title**: SEO-friendly, platform-appropriate headline
- **Multi-Platform Content**:
  - Website content (professional, authority-building)
  - Blog content (educational, SEO-optimized)
  - Landing page content (conversion-focused)
- **Category Classification**: Automated content categorization
- **Image Assets**: Processed and publicly accessible images

## Customization Options

### Content Strategy Modification

- Edit the AI agent's system message to change the content style
- Adjust character limits for different platform requirements
- Modify category classifications for your industry

### Research Parameters

- Change the Tavily search depth (basic, advanced)
- Adjust the number of research sources (1-10)
- Modify the search topic focus

### Storage Configuration

- Update the Supabase table structure for additional fields
- Change the NextCloud folder organization
- Modify image naming conventions

## Troubleshooting

### Common Issues

**Workflow not triggering:**

- Check Google Sheets permissions
- Verify polling frequency settings
- Ensure the spreadsheet format matches the requirements

**Content generation errors:**

- Verify your OpenAI API key and credits
- Check GPT-4 model access
- Review the system message formatting

**Image processing failures:**

- Confirm Google Drive sharing permissions
- Check NextCloud storage space and permissions
- Verify that the file formats are supported

**Database storage issues:**

- Validate the Supabase table structure
- Check API key permissions
- Review the field mapping in the storage node

### Performance Optimization

- Adjust the polling frequency based on your content volume
- Monitor API usage to stay within limits
- Consider batch processing for high-volume scenarios

## Support and Updates

This template is designed for self-hosted n8n environments and requires technical setup. For issues:

- Check the n8n community forums
- Review service-specific documentation
- Test individual nodes in isolation
- Monitor execution logs for detailed error information
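The IMAGE_URL column in Step 2 holds Google Drive sharing links of the form `https://drive.google.com/file/d/FILE_ID/view`. A Code node can pull out the file ID before the downloader node fetches the file; this is a hedged sketch (the function name is illustrative, not a node from the template):

```javascript
// Extract the file ID from a Google Drive sharing URL.
// Returns null when the URL does not match the /file/d/<id> pattern.
function extractDriveFileId(url) {
  const match = url.match(/\/file\/d\/([a-zA-Z0-9_-]+)/);
  return match ? match[1] : null;
}
```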
by Trung Tran
# 📘 Code of Conduct Q&A Slack Chatbot Powered by RAG

> Empower employees to instantly access and understand the company's Code of Conduct via a Slack chatbot, powered by Retrieval-Augmented Generation (RAG) and LLMs.

## 🧑‍💼 Who's it for

This workflow is designed for:

- **HR and compliance teams** who want to automate policy-related inquiries
- **Employees** who want quick answers to Code of Conduct questions directly inside Slack
- **Startups or enterprises** that need internal compliance self-service tools powered by AI

## ⚙️ How it works / What it does

This RAG-powered Slack chatbot answers user questions based on your uploaded Code of Conduct PDF using GPT-4 and embedded document chunks. Here's the flow:

1. **Receive a message from Slack**: A webhook triggers when a message is posted in Slack.
2. **Check that it's a valid query**: Filters out non-user messages (e.g., bot mentions).
3. **Run the agent with RAG**: Uses GPT-4 with the Query Data Tool to retrieve relevant document chunks and returns a well-formatted, context-aware answer.
4. **Send the response to Slack**: Fetches user info and posts the answer back in the same channel.

**Document upload flow:** HR can upload the PDF Code of Conduct file. It's parsed, chunked, embedded using OpenAI, and stored for future query retrieval. A backup copy is saved to Google Drive.

## 🛠️ How to set up

**Prepare your environment:**

- Slack bot token & webhook configured (sample Slack app manifest: https://wisestackai.s3.ap-southeast-1.amazonaws.com/slack_bot_manifest.json)
- OpenAI API key (for GPT-4 & embeddings)
- Google Drive credentials (optional, for backup)

**Upload the Code of Conduct PDF:**

- Use the designated node to upload your document (sample file: https://wisestackai.s3.ap-southeast-1.amazonaws.com/20220419-ingrs-code-of-conduct-policy-en.pdf)
- This triggers chunking → embedding → data store.

**Deploy the chatbot:**

- Host the webhook and connect it to your Slack app.
- Share the command format with employees (e.g., @CodeBot Can I accept gifts from partners?)
**Monitor and iterate:**

- Improve the chunk size or embedding model if queries aren't accurate.
- Review unanswered queries to enhance coverage.

## 📋 Requirements

- n8n (self-hosted or Cloud)
- Slack app (with chat:write, users:read, and commands scopes)
- OpenAI account (embedding + GPT-4 access)
- Google Drive integration (for backups)
- Uploaded Code of Conduct in PDF format

## 🧩 How to customize the workflow

| What to Customize | How to Do It |
|-----------------------------|------------------------------------------------------------------------------|
| 🔤 Prompt style | Edit the System & User prompts inside the Code Of Conduct Agent node |
| 📄 Document types | Upload additional policy PDFs and tag them differently in metadata |
| 🤖 Agent behavior | Tune the GPT temperature or swap in a different LLM |
| 💬 Slack interaction | Customize message formats or trigger phrases |
| 📁 Data store engine | Swap to Pinecone, Weaviate, Supabase, etc., depending on the use case |
| 🌐 Multilingual support | Preprocess text and support locale detection via Slack metadata |
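The "check that it's a valid query" step can be sketched as a small predicate over the Slack Events API payload. This is a minimal illustration with an assumed event shape, not the template's exact node code; the field names (`bot_id`, `subtype`, `text`) follow Slack's documented message event format:

```javascript
// Return true only for non-empty messages written by a human user.
// Bot-authored messages carry a bot_id or the bot_message subtype.
function isValidUserQuery(event) {
  if (!event || event.type !== 'message') return false;
  if (event.bot_id || event.subtype === 'bot_message') return false;
  return typeof event.text === 'string' && event.text.trim().length > 0;
}
```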
by Trung Tran
# Multi-Agent Architecture: Free Bootstrap Template for Beginners

A free template to learn and reuse a multi-agent architecture in n8n. The company metaphor: a CEO (orchestrator) delegates to Marketing, Operations, and Finance to produce a short sales-season plan, export it to PDF, and share it.

## Who's it for

- Builders who want a clear, minimal pattern for multi-agent orchestration in n8n.
- Teams demoing or teaching agent collaboration with one coordinator + three specialists.
- Anyone needing a repeatable template to generate plans from multiple "departments".

## How it works / What it does

1. **Trigger (Manual)**: Click Execute workflow to start.
2. **Edit Fields**: Provide brief inputs (company, products, dates, constraints, channels, goals).
3. **CEO Agent (Orchestrator)**: Reads the brief, calls the 3 tool agents once, merges results, and resolves conflicts.
4. **Marketing Agent**: Proposes top campaigns, channels, and a content calendar.
5. **Operations Agent**: Outlines inventory/staffing readiness, fulfillment steps, and risks.
6. **Finance Agent**: Suggests pricing/discounts, budget split, and targets.
7. **Compose Document**: The CEO produces Markdown; a node converts it to a Google Doc → PDF.
8. **Share**: Upload the PDF to Slack (or Drive) for review.

## Outputs

- **Markdown plan** with sections (Summary, Timeline, Marketing, Ops, Pricing, Risks, Next Actions).
- **Compact JSON** for automation (campaigns, budget, dates, actions).
- **PDF** file for stakeholders.

## How to set up

**Add credentials:**

- OpenAI (or your LLM provider) for all agents.
- Google (Drive/Docs) to create the document and export the PDF.
- Slack (optional) to upload/share the PDF.

**Map nodes (suggested):**

- When clicking 'Execute workflow' → Edit Fields (form with: company, products, audience, start_date, end_date, channels, constraints, metrics).
- CEO Agent (AI Tool node) → calls the Marketing Agent, Operations Agent, and Finance Agent (AI Tool nodes).
- Configure metadata (doc title from company + date window).
- Create document file (Google Docs API) with the CEO's Markdown.
- Convert to PDF (export).
- Upload a file (Slack) to share.
**Prompts (drop-in):**

- CEO (system): orchestrate the 3 tools; request concise JSON + Markdown; merge and resolve; output sections + JSON.
- Marketing / Operations / Finance (system): each returns a small JSON for its scope (campaigns/calendar; staffing/steps/risks; discounts/budget/targets).

**Test:** Run once; verify the PDF and Slack message.

## Requirements

- n8n (current version with the AI Tool node).
- LLM credentials (e.g., OpenAI).
- Google credentials for Docs/Drive (to create & export).
- Optional Slack bot token for file uploads.

## How to customize the workflow

- **Swap roles**: Replace departments (e.g., Product, Legal, Support) or add more tool agents.
- **Change outputs**: Export to DOCX/HTML/Notion; add a cover page; attach brand styles.
- **Approval step**: Insert Slack "Send & Wait" before PDF generation for review/edits.
- **Data grounding**: Add RAG (Sheets/DB/Docs) so agents cite inventory, pricing, or past campaign KPIs.
- **Automation JSON**: Extend the schema to match your CRM/PM tool and push next_actions into Jira/Asana.
- **Scheduling**: Replace the manual trigger with a cron (weekly/monthly planning).
- **Localization**: Add a translation agent or set the language via an input field.
- **Guardrails**: Add length limits, cost caps (max tokens), and validation on agent JSON.
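The "validation on agent JSON" guardrail mentioned above could look like the sketch below: before the CEO merges results, check that each specialist's reply parses and contains the keys its scope promises. The key names mirror the prompt scopes described earlier but are illustrative assumptions, not the template's actual schema:

```javascript
// Expected top-level keys per specialist agent (assumed schema).
const REQUIRED_KEYS = {
  marketing: ['campaigns', 'channels', 'calendar'],
  operations: ['staffing', 'steps', 'risks'],
  finance: ['discounts', 'budget', 'targets'],
};

// Parse a specialist's raw reply and verify its required keys exist.
function validateAgentOutput(role, raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { ok: false, error: 'invalid JSON' };
  }
  const missing = REQUIRED_KEYS[role].filter((k) => !(k in parsed));
  return missing.length
    ? { ok: false, error: `missing keys: ${missing.join(', ')}` }
    : { ok: true, data: parsed };
}
```

A failed check could route back to the agent for one retry before the CEO composes the document.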
by Dzaky Jaya
This n8n workflow demonstrates how to configure an AI agent for financial research, especially for IDX (Indonesia Stock Exchange) data retrieved through the Sectors App API.

## Use cases

- Research the stock market in Indonesia.
- Analyze the performance of companies belonging to certain subsectors, or a specific company.
- Compare financial metrics between BBCA and BBRI.
- Provide technical analysis for a certain ticker's stock movement.
- And many more, all from a conversational agent chat UI.

## Main components

- **Input-n8nChatNative**: handles and processes input from the native n8n chat UI
- **Input-TelegramBot**: handles and processes input from a Telegram bot
- **Input-WebUI(Webhook)**: handles and processes input from a hosted web UI through a webhook
- **Main Agent**: processes raw user queries and delegates tasks to specialized agents when needed
- **Spec Agent - Sectors App**: makes requests to the Sectors App API to get real-time financial data for companies listed on the IDX from the available endpoints
- **Spec Agent - Web Search**: performs web searches via Google Grounding (Gemini API) and Tavily Search
- **Vector Document Processing**: processes documents uploaded by users into embeddings and a vector store

## How it works

1. User queries may be received from multiple platforms (three are used here: Telegram, a hosted web UI, and the native n8n chat UI).
2. If the user also uploads a document, the workflow processes the document and stores it in the vector store.
3. The request is sent to the Main Agent, which processes the query.
4. The Main Agent decides which tasks to delegate to a specialized agent, if needed.
5. The result is then sent back to the user on the originating platform.

## How to use

You need these API keys:

- Gemini API: get it free from https://aistudio.google.com/
- Tavily API: get it free from https://www.tavily.com/
- Sectors App API: get it from https://sectors.app/api/

You can optionally change the model or add a fallback model to handle token limits, as the workflow may consume quite a lot of tokens.
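In the template the Main Agent's delegation decision is made by the LLM itself, but the routing idea can be illustrated with a hand-written rule. This toy sketch (keywords and agent names chosen for illustration) sends market-data questions to the Sectors App agent and fresh-news questions to web search:

```javascript
// Toy delegation rule: in the real workflow the Main Agent LLM decides this.
function delegate(query) {
  const q = query.toLowerCase();
  // Market-data vocabulary goes to the Sectors App specialist.
  if (/\b(idx|ticker|subsector|financial metric|bbca|bbri)\b/.test(q)) {
    return 'Spec Agent - Sectors App';
  }
  // Anything needing fresh information goes to web search.
  if (/\b(news|latest|today)\b/.test(q)) return 'Spec Agent - Web Search';
  // Otherwise the Main Agent answers directly.
  return 'Main Agent';
}
```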
by Cheng Siong Chin
## How It Works

A scheduled trigger initiates automated retrieval of TOTO/4D data, including both current and historical records. The datasets are merged and validated to ensure structural consistency before branching into parallel analytical pipelines. One track performs pattern mining and anomaly detection, while the other generates statistical and time-series forecasts. Results are then routed to an AI agent that integrates multi-model insights, evaluates prediction confidence, and synthesizes the final output. The system formats the results and delivers them through the selected export channel.

## Setup Instructions

1. **Scheduler config**: Adjust the trigger frequency (daily or weekly).
2. **Data sources**: Configure API endpoints or database connectors for TOTO/4D retrieval.
3. **Data mapping**: Align and map column structures for both 1D and 4D datasets in the merge nodes.
4. **AI integration**: Insert the OpenAI API key and connect the required model nodes.
5. **Export paths**: Select and configure output channels (email, Google Sheets, webhook, or API).

## Prerequisites

- TOTO/4D historical data source with API access
- OpenAI API key (GPT-4 recommended)
- n8n environment with HTTP/database connectivity
- Basic time-series analysis knowledge

## Use Cases

- **Traders**: Pattern recognition for draw prediction with confidence scoring
- **Analysts**: Multivariate forecasting across cycles with validation

## Customization

- **Data**: Swap TOTO/4D with stock prices, crypto, sensors, or any time series
- **Models**: Replace OpenAI with Claude, local LLMs, or Hugging Face models

## Benefits

- **Automation**: Runs 24/7 without manual intervention
- **Intelligence**: The ensemble approach reduces overfitting and single-model bias
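The pattern-mining track can be pictured with a minimal statistic: how often each number has appeared across historical draws. This is a toy sketch of one input such a pipeline might compute before handing statistics to the AI agent, not the template's actual node code:

```javascript
// Count occurrences of each number across an array of draws
// (each draw is an array of numbers) and sort hottest-first.
function numberFrequencies(draws) {
  const counts = new Map();
  for (const draw of draws) {
    for (const n of draw) counts.set(n, (counts.get(n) || 0) + 1);
  }
  // Return [number, count] pairs in descending frequency order.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}
```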
by Fabrice
This n8n template shows you how to automate document summarization while keeping full digital sovereignty. By combining Nextcloud for file storage with the IONOS AI Model Hub, your sensitive documents stay on European infrastructure, fully outside US CLOUD Act jurisdiction.

## Use cases

- **Daily briefings**: Automatically summarize lengthy reports as soon as your team uploads them.
- **Meeting notes**: Turn uploaded transcripts into clear, actionable bullet points, hands-free.
- **Research management**: Get instant summaries of academic papers or PDFs the moment they land in your research folder.

## How it works

1. A Schedule Trigger kicks off the workflow at regular intervals.
2. It calls the Nextcloud List a Folder node to retrieve all files in your chosen directory.
3. A Code node then acts as a filter, checking each file's last-modified timestamp and letting through only files changed within the past 24 hours.
4. Filtered files are pulled into the workflow via the Nextcloud Download a File node.
5. From there, the Summarization Chain takes over: a Default Data Loader and Token Splitter prepare the content, and the IONOS AI Model Hub Chat Model generates a concise, structured summary.
6. The final output is saved back to your Nextcloud instance as a note for your team.

## Good to know

- **Nextcloud setup**: Make sure your Nextcloud credentials include both read and write permissions: one to download source files, one to upload the generated summary.
- **Model selection**: The IONOS AI Model Hub currently offers several LLMs, including Llama 3.1 8B, Mistral Nemo 12B, Mistral Small 24B, Llama 3.3 70B, GPT-OSS 120B, and Llama 3.1 405B. For document summarization, Llama 3.3 70B strikes the best balance between output quality and speed. If you're processing high volumes or need faster turnaround, Mistral Nemo 12B is a leaner alternative.

## How to set it up

1. Set your folder path in the List a Folder node to the directory you want to monitor.
2. In the New Files Only code node, adjust the 24 value to change how far back the workflow looks.
3. Open the Summarization Chain and define your output format, for example: "Summarize this document in 3 bullet points."

## Requirements

- Nextcloud account for file hosting and retrieval
- IONOS Cloud account to access the AI Model Hub
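A plausible shape for the New Files Only Code node is sketched below: keep only files whose last-modified timestamp falls within the look-back window, with the 24 defined as a constant you can adjust as described above. The `lastModified` field name is an assumption about the List a Folder output, so check your actual item structure:

```javascript
// Look-back window in hours; change this to look further back.
const LOOKBACK_HOURS = 24;

// Keep only files modified at or after the cutoff.
// `now` is injectable to make the function easy to test.
function filterRecentFiles(files, now = Date.now()) {
  const cutoff = now - LOOKBACK_HOURS * 60 * 60 * 1000;
  return files.filter((f) => new Date(f.lastModified).getTime() >= cutoff);
}
```

In an n8n Code node this would typically map over `$input.all()` and return the surviving items.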
by Vinay Gangidi
# Cash Reconciliation with AI

This template automates daily cash reconciliation by comparing your open invoices against bank statement transactions. Instead of manually scanning statements line by line, the workflow uses AI to:

- Match transactions to invoices and assign confidence scores
- Flag unapplied or review-needed payments
- Produce a reconciliation table with clear metrics (match %, unmatched count, etc.)

The end result: faster cash application, fewer errors, and better visibility into your cash flow.

## Good to know

- Each AI transaction-match call will consume credits from your OpenAI account. Check OpenAI pricing for costs.
- OCR is used to extract data from PDF bank statements, so you'll need a Mistral OCR API key.
- This workflow assumes invoices are stored in an Excel or CSV file. You may need to tweak column names to match your file headers.

## How it works

1. **Import files**: The workflow pulls your invoice file (Excel/CSV) and daily bank statement (from OneDrive, Google Drive, or local storage).
2. **Extract and normalize data**: OCR is applied to bank statements if needed. Both data sources are cleaned and aligned into comparable formats.
3. **AI matching**: The AI agent compares statement transactions against invoice records, assigns a confidence score, and flags items that require manual review.
4. **Reconciliation output**: A ready-made table shows matched invoices (with amounts and confidence), unmatched items, and summary stats.

## How to use

- Start with the manual trigger node to test the flow. Once validated, replace it with a schedule trigger to run daily.
- Adjust thresholds (like date tolerances or amount variances) in the code nodes to fit your business rules.
- Review the reconciliation table each day; most of the work is automated, so you just handle exceptions.
## Requirements

- OpenAI API key
- Mistral OCR API key (for PDF bank statements)
- Microsoft OneDrive API key and Microsoft Excel API key
- Access to your invoice file (Excel/CSV) and daily bank statement source

## Setup steps

1. **Connect accounts**: Enter your API keys (OpenAI, Mistral OCR, OneDrive, Excel).
2. **Configure input nodes**: Point the Excel/CSV node to your invoice file. Connect the Get Bank Statement node to your statement storage.
3. **Configure the AI agent**: Add your OpenAI API credentials to the AI node.
4. **Customize if needed**: Update column mappings if your file uses different headers. Adjust matching thresholds and tolerance logic.
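The thresholds mentioned above (date tolerances, amount variances) can be illustrated with a simplified deterministic pre-match; the template's AI agent handles the fuzzy cases this rule misses. Field names (`amount`, `date`, `id`) are assumptions about your invoice and statement columns:

```javascript
// Pair a bank transaction with the first invoice whose amount agrees
// within `amountTol` and whose date is within `dateTolDays` days.
function matchTransaction(txn, invoices, amountTol = 0.01, dateTolDays = 7) {
  const DAY = 24 * 60 * 60 * 1000;
  for (const inv of invoices) {
    const amountOk = Math.abs(txn.amount - inv.amount) <= amountTol;
    const daysApart = Math.abs(new Date(txn.date) - new Date(inv.date)) / DAY;
    if (amountOk && daysApart <= dateTolDays) {
      // Same-day or next-day payments get a higher confidence label.
      return { invoiceId: inv.id, confidence: daysApart <= 1 ? 'high' : 'medium' };
    }
  }
  // No deterministic match: flag for AI matching or manual review.
  return { invoiceId: null, confidence: 'review' };
}
```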
by Gabriela Macovei
# WhatsApp Receipt OCR & Data Extraction Suite

**Categories:** Accounting Automation • OCR Processing • AI Data Extraction • Business Tools

This workflow transforms WhatsApp into a fully automated receipt-processing system using advanced OCR, multi-model AI parsing, and structured data storage. By combining LlamaParse, Claude (OpenRouter), Gemini, Google Sheets, and Twilio, it eliminates manual data entry and delivers instant, reliable receipt digitization for any business.

## What This Workflow Does

When a user sends a receipt photo or PDF via WhatsApp, the automation:

1. Receives the file through Twilio WhatsApp
2. Uploads and parses it with LlamaParse (high-res OCR + invoice preset)
3. Extracts structured data using Claude + Gemini + a strict JSON parser
4. Cleans and normalizes the data (dates, ABN, vendor, tax logic)
5. Uploads the receipt to Google Drive
6. Logs the extracted fields into a Google Sheet
7. Replies to the user on WhatsApp with the extracted details
8. Asks for confirmation via quick-reply buttons
9. Updates the Google Sheet based on user validation

The result is a fast, scalable, hands-off system for converting raw receipt photos into clean, structured accounting data.

## Key Benefits

- **No friction for users:** receipts are submitted simply by sending a WhatsApp message.
- **High-accuracy OCR:** LlamaParse extracts text, tables, totals, vendors, tax, and ABNs with impressive reliability.
- **Enterprise-grade data validation:** complex logic ensures the correct interpretation of GST, included taxes, or unidentified tax amounts.
- **Multi-model extraction:** Claude and Gemini both analyze the OCR output for more reliable results, with one primary LLM and a secondary one.
- **Hands-off accounting:** every receipt becomes a standardized row in Google Sheets.
- **Two-way WhatsApp communication:** users can confirm or reject extracted data instantly.
- **Scalable architecture:** perfect for businesses handling dozens or thousands of receipts monthly.

## How It Works (Technical Overview)

### 1. Twilio → Webhook Trigger

The workflow starts when a WhatsApp message containing a media file hits your Twilio webhook.

### 2. Initial Google Sheets Logging

The MessageSid is appended to your tracking sheet to ensure every receipt is traceable.

### 3. LlamaParse OCR

The file is sent to LlamaParse with the invoice preset, high-resolution OCR, and table extraction enabled. The workflow checks job completion before moving further.

### 4. LLM Data Extraction

The OCR markdown is analyzed using:

- Claude Sonnet 4.5 (via OpenRouter)
- Gemini 2.5 Pro
- A strict structured JSON output parser
- Custom JS cleanup logic

The system extracts:

- Vendor
- Cost
- Tax (with multi-rule Australian GST logic)
- Currency
- Date (parsed + normalized)
- ABN (validated and digit-normalized)

### 5. Google Drive Integration

The uploaded receipt is stored, shared, and linked back to the record in Sheets.

### 6. Google Sheets Update

Fields are appended/updated following a clean schema: Vendor, Cost, Tax, Date, Currency, ABN, public Drive link, and Status (Confirmed / Not confirmed).

### 7. User Response Flow

The user receives a summary of the extracted data via WhatsApp. Buttons allow them to approve or reject its accuracy, and the Google Sheet updates accordingly.

## Target Audience

This workflow is ideal for:

- Accounting & bookkeeping firms
- Outsourced finance departments
- Small businesses tracking expenses
- Field workers submitting receipts
- Automation agencies offering DFY systems
- CFOs wanting real-time expense visibility

## Use Cases

- Expense reconciliation
- Automated bookkeeping
- Receipt digitization & compliance
- Real-time employee expense submission
- Multi-client automation at accounting agencies

## Required Integrations

- **Twilio WhatsApp** (Business API number + webhook)
- **LlamaParse API**
- **OpenRouter (Claude Sonnet)**
- **Google Gemini API**
- **Google Drive**
- **Google Sheets**

## Setup Instructions (High-Level)

1. Import the n8n workflow.
2. Connect your Twilio WhatsApp account.
3. Add API credentials for:
   - LlamaParse
   - OpenRouter
   - Google Gemini
   - Google Drive
   - Google Sheets
4. Create your target Google Sheet.
5. Configure your WhatsApp webhook URL in Twilio.
6. Test with a sample receipt.

## Why This System Works

- Users send receipts using a tool they already use daily (WhatsApp).
- LlamaParse provides state-of-the-art OCR for low-quality receipts.
- Using multiple LLMs drastically increases accuracy for vendor, ABN, and tax extraction.
- Advanced normalization logic ensures data is clean and accounting-ready.
- Google Sheets enables reliable storage, reporting, and future integrations.
- End-to-end automation replaces hours of manual work with instant processing.

## Watch My Complete Build Process

Want to see exactly how I built this entire system from scratch? I walk through the complete development process on my YouTube channel.
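The "ABN (validated and digit-normalized)" step can be checked with the published ABN algorithm: strip non-digits, subtract 1 from the first digit, apply the standard weights, and require the weighted sum to be divisible by 89. This is a sketch of that rule rather than the template's exact node code:

```javascript
// Official ABN checksum weights for the 11 digits.
const ABN_WEIGHTS = [10, 1, 3, 5, 7, 9, 11, 13, 15, 17, 19];

function isValidAbn(raw) {
  const digits = raw.replace(/\D/g, ''); // normalize: drop spaces, dashes, etc.
  if (digits.length !== 11) return false;
  const sum = ABN_WEIGHTS.reduce((acc, w, i) => {
    // Subtract 1 from the first digit before weighting, per the algorithm.
    const d = Number(digits[i]) - (i === 0 ? 1 : 0);
    return acc + d * w;
  }, 0);
  return sum % 89 === 0;
}
```

A failed check is a good trigger for flagging the receipt as "Not confirmed" rather than silently storing a bad ABN.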
by Muhammad Asadullah
Daily Blog Automation Workflow Fully automated blog creation system using n8n + AI Agents + Image Generation Overview This workflow automates the entire blog creation pipeline—from topic research to final publication. Three specialized AI agents collaborate to produce publication-ready blog posts with custom images, all saved directly to your Supabase database. How It Works 1. Research Agent (Topic Discovery) Triggers**: Runs on schedule (default: daily at 4 AM) Process**: Fetches existing blog titles from Supabase to avoid duplicates Uses Google Search + RSS feeds to identify trending topics in your niche Scrapes competitor content to find content gaps Generates detailed topic briefs with SEO keywords, search intent, and differentiation angles Output**: Comprehensive research document with SERP analysis and content strategy 2. Writer Agent (Content Creation) Triggers**: Receives research from Agent 1 Process**: Writes full blog article based on research brief Follows strict SEO and readability guidelines (no AI fluff, natural tone, actionable content) Structures content with proper HTML markup Includes key sections: hook, takeaways, frameworks, FAQs, CTAs Places image placeholders with mock URLs (https://db.com/image_1, etc.) Output**: Complete JSON object with title, slug, excerpt, tags, category, and full HTML content 3. Image Prompt Writer (Visual Generation) Triggers**: Receives blog content from Agent 2 Process**: Analyzes blog content to determine number and type of images needed Generates detailed 150-word prompts for each image (feature image + content images) Creates prompts optimized for Nano-Banana image model Names each image descriptively for SEO Output**: Structured prompts for 3-6 images per blog post 4. 
Image Generation Pipeline Process**: Loops through each image prompt Generates images via Nano-Banana API (Wavespeed.ai) Downloads and converts images to PNG Uploads to Supabase storage bucket Generates permanent signed URLs Replaces mock URLs in HTML with real image URLs Output**: Blog HTML with all images embedded 5. Publication Final blog post saved to Supabase blogs table as draft Ready for immediate publishing or review Key Features ✅ Duplicate Prevention: Checks existing blogs before researching new topics ✅ SEO Optimized: Natural language, proper heading structure, keyword integration ✅ Human-Like Writing: No robotic phrases, varied sentence structure, actionable advice ✅ Custom Images: Generated specifically for each blog's content ✅ Fully Structured: JSON output with all metadata (tags, category, excerpt, etc.) ✅ Error Handling: Automatic retries with wait periods between agent calls ✅ Tool Integration: Google Search, URL scraping, RSS feeds for research Setup Requirements 1. API Keys Needed Google Gemini API**: For Gemini 2.5 Pro/Flash models (content generation/writing) Groq API (optional)**: For Kimi-K2-Instruct model (research/writing) Serper.dev API**: For Google Search (2,500 free searches/month) Wavespeed.ai API**: For Nano-Banana image generation Supabase Account**: For database and image storage 2. Supabase Setup Create blogs table with fields: title, slug, excerpt, category, tags, featured_image, status, featured, content Create storage bucket for blog images Configure bucket as public or use signed URLs 3. 
### 3. Workflow Configuration

Update these placeholders:

- **RSS Feed URLs**: Replace [your website's rss.xml] with your site's RSS feed
- **Storage URLs**: Update Supabase storage paths in the "Upload object" and "Generate presigned URL" nodes
- **API Keys**: Add your credentials to all HTTP Request nodes
- **Niche/Brand**: Customize the Research Agent system prompt with your industry keywords
- **Writing Style**: Adjust the Writer Agent prompt for your brand voice

## Customization Options

### Change Image Provider

Replace the "nano banana" node with:
- Gemini Imagen 3/4
- DALL-E 3
- Midjourney API
- Any Wavespeed.ai model

### Adjust Schedule

Modify the "Schedule Trigger" to run:
- Multiple times daily
- On specific days of the week
- On-demand via webhook

### Alternative Research Tools

Replace Serper.dev with:
- Perplexity API (included as an alternative node)
- Custom web scraping
- Different search providers

## Output Format

```json
{
  "title": "Your SEO-Optimized Title",
  "slug": "your-seo-optimized-title",
  "excerpt": "Compelling 2-3 sentence summary with key benefits.",
  "category": "Your Category",
  "tags": ["tag1", "tag2", "tag3", "tag4"],
  "author_name": "Your Team Name",
  "featured": false,
  "status": "draft",
  "content": "...complete HTML with embedded images..."
}
```

## Performance Notes

- **Average runtime**: 15-25 minutes per blog post
- **Cost per post**: ~$0.10-0.30 (depending on API usage)
- **Image generation**: 10-15 seconds per image with Nano-Banana
- **Retry logic**: Automatically handles API timeouts with 5-15 minute wait periods

## Best Practices

- **Review Before Publishing**: The workflow saves posts with "draft" status for human review
- **Monitor API Limits**: Track Serper.dev searches and image generation quotas
- **Test Custom Prompts**: Adjust the Research/Writer prompts to match your brand
- **Image Quality**: Review generated images; regenerate if needed
- **SEO Validation**: Check slugs and meta descriptions before going live

## Workflow Architecture

**3 Main Phases**:
1. Research → Writer → Image Prompts (sequential AI agent chain)
2. Image Generation → Upload → URL Replacement (loop-based processing)
3. Final Assembly → Database Insert (single save operation)

**Error Handling**:
- Wait nodes between agents prevent rate limiting
- Retry logic on agent failures (max 2 retries)
- Conditional checks ensure content quality before proceeding

**Result**: Hands-free blog publishing that maintains quality while saving 3-5 hours per post.
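The retry behavior (max 2 retries, with wait periods between attempts) can be sketched as below. The function names and the fixed 5-minute wait are illustrative assumptions; in the actual workflow this logic is built from n8n Wait nodes and the agents' error branches rather than code:

```python
import time

def call_agent_with_retries(agent_call, max_retries=2, wait_seconds=300):
    """Run an agent call, retrying on failure after a wait period.
    Mirrors the workflow's Wait-node-based retry logic (weights assumed)."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return agent_call()
        except Exception as err:  # in n8n, this is the node's error output
            last_error = err
            if attempt < max_retries:
                time.sleep(wait_seconds)  # the 5-15 minute wait period
    raise last_error
```

With `max_retries=2`, a flaky agent call gets up to three attempts before the run fails, matching the "max 2 retries" noted under Error Handling.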
by Cheng Siong Chin
## How It Works

This workflow automates veterinary clinic operations and client communications for animal hospitals and veterinary practices managing appointments, inventory, and patient care. It solves the dual challenge of maintaining medical supply levels while delivering personalized pet care updates and appointment coordination.

The system processes scheduled inventory data through AI-powered quality validation and restocking recommendations, then branches into two intelligent pathways: supplier coordination via email for replenishment, and client engagement through personalized appointment reminders, follow-up care instructions, and satisfaction surveys distributed via email and messaging platforms. This eliminates manual inventory tracking, reduces appointment no-shows, and ensures consistent post-visit care communication.

## Setup Steps

1. Configure a webhook or schedule trigger for veterinary management system inventory data sync
2. Add AI model API keys for inventory quality validation
3. Connect the supplier email system with template configurations for automated purchase orders
4. Set up client communication channels with appointment and care instruction templates
5. Integrate the customer database for pet records and appointment history

## Prerequisites

Veterinary practice management software with API/webhook capabilities; AI service API access

## Use Cases

Multi-location veterinary hospitals coordinating inventory across sites

## Customization

Modify AI prompts for species-specific care instructions

## Benefits

Reduces supply management time by 75% and prevents critical medication stockouts
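The inventory branching described here (supplier-coordination pathway vs. no action) amounts to a threshold check per item. A minimal sketch follows; the field names `on_hand` and `reorder_point` are assumptions about the inventory payload, not fields defined by the template, and the real decision is made by the AI validation step:

```python
def route_inventory_items(items):
    """Split inventory records into the supplier-email branch
    (needs restocking) and the no-action branch."""
    to_reorder = [i for i in items if i["on_hand"] <= i["reorder_point"]]
    in_stock = [i for i in items if i["on_hand"] > i["reorder_point"]]
    return to_reorder, in_stock
```

Items in the first list would feed the automated purchase-order emails; the rest pass through untouched.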
by Cheng Siong Chin
## How It Works

This workflow automates hospital operational event management by intelligently processing incoming events and orchestrating appropriate responses across multiple hospital systems. Designed for hospital operations managers, healthcare IT teams, and clinical administrators, it solves the complex challenge of coordinating rapid responses to diverse hospital events, including patient admissions, equipment alerts, staffing emergencies, and clinical escalations.

The system receives event triggers via webhook, uses AI-powered orchestration to analyze event context and determine required actions, then intelligently routes tasks to the appropriate systems, including appointment scheduling, task management, and insurance verification. It calculates priority scores, assigns tasks, verifies insurance coverage, and merges results while masking sensitive PHI data for compliance. The workflow leverages Anthropic's Claude and multiple AI tools to ensure context-aware decision-making aligned with hospital protocols.

## Setup Steps

1. Configure the webhook URL endpoint for hospital event system integration
2. Set up Anthropic API credentials for Claude model access in the orchestration agent
3. Configure the Hospital Orchestration Agent Tool with your facility's event protocols
4. Connect the Schedule Appointment API with hospital scheduling system credentials
5. Set up the Task Management API integration for the staff assignment system
6. Configure the Insurance Verification API with payer network access credentials

## Prerequisites

Active Anthropic API account; hospital event management system with webhook capability

## Use Cases

Patient admission coordination, equipment failure response, code blue orchestration

## Customization

Modify orchestration agent prompts for facility-specific protocols

## Benefits

Reduces event response time by 75% and ensures consistent protocol adherence
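The priority-scoring step mentioned above could look like the sketch below. The event types, weights, and formula are purely illustrative assumptions; in the template the actual scoring and routing are decided by the Claude-driven orchestration agent against your facility's protocols:

```python
# Illustrative severity weights -- not defined by the template.
SEVERITY = {
    "code_blue": 10,
    "equipment_alert": 6,
    "staffing_emergency": 5,
    "patient_admission": 3,
}

def priority_score(event_type: str, affects_patient_care: bool) -> int:
    """Assumed scoring rule: base severity for the event type,
    doubled when direct patient care is affected."""
    base = SEVERITY.get(event_type, 1)  # unknown events get lowest weight
    return base * 2 if affects_patient_care else base
```

A score like this could then drive task assignment order in the task-management branch.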