by Surya Vardhan Yalavarthi
## What this workflow does

This workflow automates the full machine learning lifecycle end-to-end using Claude AI as the intelligent decision-maker at every stage. Send one HTTP request with a dataset URL and a business goal — and the pipeline handles everything from raw CSV to a human-approved, documented model ready for GitHub.

The pipeline runs in 5 sequential phases:

**Phase 1 — Strategy.** Claude Sonnet 4 receives the dataset URL, target variable, and business goal. It outputs a structured JSON plan covering feature ideas, algorithm choices, and the evaluation metric. A fallback parser ensures the pipeline continues even if the LLM output is slightly malformed.

**Phase 2 — Data Engineering.** The workflow fetches the CSV via HTTP Request and runs it through a custom quoted-field CSV parser (handles commas inside quoted name fields, common in datasets like Titanic). It drops rows with missing targets, imputes missing Age values, and encodes categorical columns (Sex, Embarked) into numeric form.

**Phase 3 — Feature Engineering.** Claude Haiku reviews the cleaned dataset and confirms the 3 best features to engineer. A Code node then creates FamilySize (SibSp + Parch + 1), IsAlone (binary flag), and TitleEncoded (extracted and mapped from passenger name). A row-count validation gate ensures no data is silently lost.

**Phase 4 — Training & Evaluation.** Three algorithms are trained from scratch in pure JavaScript — no external ML libraries required:

- **Logistic Regression** via gradient descent (200 epochs)
- **Random Forest** via 10 bagged decision stumps
- **XGBoost** via gradient boosting with residual-based stump selection

Precision, recall, F1, and accuracy are computed for each. Claude Sonnet then acts as an LLM judge: it reads all three result sets alongside the original business goal and selects the winner with a one-sentence justification. A deterministic fallback (highest F1) runs if the LLM response fails to parse.
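The Phase 4 metrics and the deterministic highest-F1 fallback are simple to express in pure JavaScript. A minimal sketch (function names are illustrative, not the workflow's actual node code):

```javascript
// Compute precision, recall, F1, and accuracy from binary predictions.
function evaluate(yTrue, yPred) {
  let tp = 0, fp = 0, fn = 0, tn = 0;
  for (let i = 0; i < yTrue.length; i++) {
    if (yPred[i] === 1) yTrue[i] === 1 ? tp++ : fp++;
    else yTrue[i] === 1 ? fn++ : tn++;
  }
  const precision = tp / (tp + fp) || 0; // "|| 0" guards the 0/0 case
  const recall = tp / (tp + fn) || 0;
  const f1 = (2 * precision * recall) / (precision + recall) || 0;
  const accuracy = (tp + tn) / yTrue.length;
  return { precision, recall, f1, accuracy };
}

// Deterministic fallback: pick the model with the highest F1
// when the LLM judge's response fails to parse.
function pickWinnerByF1(results) {
  return results.reduce((best, r) => (r.f1 > best.f1 ? r : best));
}

const results = [
  { model: 'Logistic Regression', f1: 0.712 },
  { model: 'Random Forest', f1: 0.739 },
  { model: 'XGBoost', f1: 0.761 },
];
console.log(pickWinnerByF1(results).model); // → XGBoost
```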
**Phase 5 — HITL Deployment.** Claude Sonnet writes a structured MODEL_CARD.md covering model overview, performance metrics, training data summary, feature engineering decisions, intended use, and limitations. The full results are then posted to a Slack channel as a formatted approval request. A human can review the results and reply to approve or reject deployment. An optional Supabase audit log records each phase transition with timestamp, phase name, status, and run ID.

## Tested results

Tested on the Titanic dataset (891 rows):

| Model | F1 Score | Accuracy |
|---|---|---|
| Logistic Regression | 0.712 | 0.787 |
| Random Forest | 0.739 | 0.804 |
| XGBoost | 0.761 | 0.821 |

Claude correctly identified XGBoost as the winner and generated a complete model card in under 10 seconds.

## What you need

| Requirement | Details |
|---|---|
| Anthropic API key | Used in P1, P4 (Claude Sonnet 4), and P3 (Claude Haiku). Get at console.anthropic.com |
| Slack Bot Token | OAuth bot token with chat:write scope. Bot must be invited to the target channel via /invite @bot-name |
| Supabase project (optional) | For audit logging. Replace YOUR_PROJECT.supabase.co and YOUR_SUPABASE_SERVICE_ROLE_KEY in the 5 log nodes, or delete them |
| Public CSV URL | The dataset must be reachable by your n8n instance via HTTP GET |

## Setup steps

1. Import the workflow JSON into your n8n instance
2. Add your Anthropic API credential and assign it to the 3 lmChatAnthropic nodes (P1, P3, P4)
3. Add your Slack Bot Token credential and assign it to the P5 Slack node. Replace YOUR_SLACK_CHANNEL_ID with your real channel ID (e.g. C012AB3CD)
4. (Optional) Set up the Supabase audit log table using the SQL in the setup sticky note, then replace the two placeholder values in the 5 log HTTP Request nodes
5. Activate the workflow and send a test request:

   POST https://your-n8n-instance.com/webhook/mlops-v2
   Content-Type: application/json

   { "dataset_url": "https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv", "target_variable": "Survived", "business_goal": "Predict passenger survival to optimise lifeboat boarding policy" }

## Extending the workflow

The Phase 5 sticky note includes a tip for extending the HITL loop: add a Webhook node to receive the Slack approval callback and an If node to branch into a GitHub API call that commits the model card to a new repository. The model_card_b64 field (Base64-encoded model card content) is already assembled in the payload, ready to be passed directly to the GitHub Contents API.

## Node count & complexity

- **28 nodes** total (22 active, 6 sticky notes)
- **3 LLM calls** (Claude Sonnet ×2, Claude Haiku ×1)
- **5 JavaScript Code nodes** (all pure JS, no external libraries)
- **5 Supabase log nodes** (optional, deletable)
- **1 Slack node**
- **Fan-out connections** used to run log nodes as parallel dead-ends without blocking the main data path

Tags: AI, Machine Learning, MLOps, Claude AI, Slack, Automation, Data Science, HITL, LLM
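The GitHub hand-off mentioned under "Extending the workflow" amounts to one PUT against the Contents API, which expects the file content Base64-encoded — exactly what model_card_b64 already holds. A sketch of building that request body (OWNER/REPO and the commit message are placeholders, not part of the workflow):

```javascript
// Build the request body for the GitHub Contents API
// (PUT /repos/{owner}/{repo}/contents/{path}).
function buildContentsRequest(modelCardMarkdown, runId) {
  // Same encoding the workflow already performs for model_card_b64.
  const model_card_b64 = Buffer.from(modelCardMarkdown, 'utf8').toString('base64');
  return {
    url: 'https://api.github.com/repos/OWNER/REPO/contents/MODEL_CARD.md',
    body: {
      message: `Add model card for run ${runId}`, // commit message
      content: model_card_b64,                    // Base64 file content
    },
  };
}

const req = buildContentsRequest('# Model Card\n\nXGBoost, F1 0.761', 'run-42');
console.log(req.body.content); // Base64 string, decodable back to the markdown
```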
by Artem Boiko
A full-featured Telegram bot that accepts text descriptions, photos, or PDF floor plans and returns detailed cost estimates with work breakdown. Powered by GPT-4 Vision / Gemini 2.0, vector search, and the open-source DDC CWICR database (55,000+ construction rates).

## Who's it for

- **Contractors & estimators** who need estimates from any input format
- **Construction managers** evaluating scope from site photos or drawings
- **Architects** getting quick cost feedback on floor plans
- **Real estate professionals** assessing renovation costs
- **Project managers** doing rapid feasibility checks via mobile

## What it does

1. Receives text / photo / PDF via Telegram
2. Analyzes input with AI (Gemini 2.0 Flash or GPT-4 Vision)
3. Extracts work items with quantities and units
4. Searches the DDC CWICR vector database for matching rates
5. Generates a professional HTML report with full cost breakdown
6. Exports results as Excel or PDF

Supports 9 languages: 🇩🇪 DE · 🇬🇧 EN · 🇷🇺 RU · 🇪🇸 ES · 🇫🇷 FR · 🇮🇹 IT · 🇵🇱 PL · 🇧🇷 PT · 🇺🇦 UK

## How it works

Telegram input (📝 text description · 📷 construction photo · 📄 PDF floor plan)
↓
Main router: parse message → detect content type → route to handler (17 actions)
↓
One of three analysis branches:
- Text LLM — parse works from text (GPT-4)
- Vision API — analyze photo (GPT-4 Vision / Gemini)
- Vision PDF — read floor plan (Gemini 2.0)
↓
Calculation loop, for each work item: 1️⃣ Transform query → 2️⃣ Optimize search → 3️⃣ Get embedding → 4️⃣ Qdrant search → 5️⃣ Score results → 6️⃣ AI rerank → 7️⃣ Calculate
↓
Output: 📊 Telegram message · 🌐 HTML report · 📑 Excel · 📄 PDF

## Input types

| Type | Description | AI used |
|------|-------------|---------|
| 📝 Text | Work lists, specifications, notes | OpenAI GPT-4 |
| 📷 Photo | Construction site photos (up to 4) | GPT-4 Vision / Gemini |
| 📄 PDF | Floor plans, architectural drawings | Gemini 2.0 Flash |

## Route actions (17 total)

| # | Action | Description |
|---|--------|-------------|
| 0 | show_lang | Language selection menu |
| 1 | ask_photo | Request photo upload |
| 2 | lang_selected | Save language preference |
| 3 | show_analyze | Photo analysis options |
| 4 | analyze | Run AI vision analysis |
| 5 | show_edit_menu | Edit work quantities |
| 6 | works_updated | After quantity change |
| 7 | ask_new_work | Add manual work item |
| 8 | start_calc | Start cost calculation |
| 9 | show_help | Display help message |
| 10 | view_details | Show resource details |
| 11 | export_excel | Generate CSV export |
| 12 | export_pdf | Generate PDF export |
| 13 | process_pdf | Analyze PDF floor plan |
| 14 | analyze_text | Parse text description |
| 15 | refine | Re-analyze with context |
| 16 | fallback | Handle unknown input |

## Prerequisites

| Component | Requirement |
|-----------|-------------|
| n8n | v1.30+ with Telegram Trigger |
| Telegram Bot | Token from @BotFather |
| OpenAI API | For embeddings + text parsing |
| Gemini API | For Vision (photos/PDF) — or use GPT-4 Vision |
| Qdrant | Vector DB with DDC CWICR collections |
| DDC CWICR Data | github.com/datadrivenconstruction/DDC-CWICR |

## Setup

1. Configure the 🔑 TOKEN node:

   { "bot_token": "YOUR_TELEGRAM_BOT_TOKEN", "AI_PROVIDER": "gemini", "GEMINI_API_KEY": "YOUR_GEMINI_KEY", "OPENAI_API_KEY": "YOUR_OPENAI_KEY", "QDRANT_URL": "http://localhost:6333", "QDRANT_API_KEY": "YOUR_QDRANT_KEY" }

2. Select the vision provider: AI_PROVIDER: "gemini" → Gemini 2.0 Flash (recommended for photos + PDF); AI_PROVIDER: "openai" → GPT-4 Vision (photos only)
3. Add n8n credentials: Settings → Credentials → Add → Telegram API, enter the bot token, save, then select the credential in the Telegram Trigger node
4. Load Qdrant collections with DDC CWICR embeddings for your target languages (example for Russian): RU_STPETERSBURG_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR
5. Activate & test: activate the workflow, send /start to your bot, select a language, then send a photo/text/PDF

## Features

| Feature | Description |
|---------|-------------|
| 📷 Photo Analysis | GPT-4 Vision or Gemini 2.0 for site photos |
| 📄 PDF Processing | Floor plan analysis with room extraction |
| 📝 Text Parsing | Natural language work lists |
| 🔍 Vector Search | Semantic matching via Qdrant + OpenAI embeddings |
| 🤖 AI Reranking | LLM-based result scoring for accuracy |
| ✏️ Inline Editing | Modify quantities via Telegram buttons |
| 📊 HTML Report | Professional expandable report with KPIs |
| 📑 Excel Export | CSV with full work breakdown |
| 📄 PDF Export | HTML-based PDF document |
| 🌍 9 Languages | Full UI + database localization |
| 💾 Session State | Multi-turn conversation support |
| 🔧 Refine Mode | Re-analyze with additional context |

## Example workflow

(example session with the bot set to Russian; replies translated)

- User: /start
- Bot: language selection menu (9 options)
- User: selects 🇷🇺 Russian
- Bot: "Send a photo, PDF, or text description of the works"
- User: sends a bathroom photo
- Bot: "📷 Analyzing photo... ⏳", then shows the detected works:

🏠 Bathroom — 4.5 m². 12 works found:
- Wall tile removal — 18 m²
- Floor tile removal — 4.5 m²
- Floor waterproofing — 4.5 m²
- Wall waterproofing — 8 m²
- Floor screed — 4.5 m²
- Wall tiling — 18 m²
- Floor tiling — 4.5 m²
- Toilet installation — 1 pc
- Sink installation — 1 pc
- Mixer tap installation — 2 pcs
...
[✏️ Edit] [📊 Calculate]

- User: taps 📊 Calculate
- Bot: shows progress per item, then the final result:

✅ Estimate ready — 12 items
💰 Total: ₽ 89,450
Labor: ₽ 35,200 (39%) · Materials: ₽ 48,750 (55%) · Machinery: ₽ 5,500 (6%)
[📋 Details] [↓ Excel] [↓ PDF] [↻ Start over]

## HTML report features

- **KPI cards:** total cost, item count, labor days, cost breakdown %
- **Expandable rows:** click a work item to show its resources
- **Resource tags:** color-coded (Labor/Material/Machine)
- **Scope of work:** expandable detailed descriptions
- **Quality indicators:** match quality dots (high/medium/low)
- **Responsive design:** works on mobile and desktop
- **Export buttons:** expand/collapse all

## Notes & tips

- **Photo tips:** capture the full room and include reference objects (doors, tiles)
- **PDF support:** works best with clear floor plans and room schedules
- **Text input:** supports lists, tables, free-form descriptions
- **Rate accuracy:** depends on DDC CWICR coverage for your region
- **Session timeout:** user sessions persist across messages
- **Extend:** chain with CRM, project management, or notification tools

Categories: AI · Communication · Data Extraction · Document Ops

Tags: telegram-bot, construction, cost-estimation, gpt-4-vision, gemini, pdf-analysis, qdrant, vector-search, multilingual, html-report

## Author

DataDrivenConstruction.io · https://DataDrivenConstruction.io · info@datadrivenconstruction.io

## Consulting & training

We help construction, engineering, and technology firms implement:

- AI-powered estimation systems (text, photo, PDF)
- Multi-channel bot integrations (Telegram, WhatsApp, Web)
- Vector database solutions for construction data
- Multilingual cost database deployment

Contact us to test with your data or adapt to your project requirements.

## Resources

- **DDC CWICR Database:** GitHub
- **Qdrant Documentation:** qdrant.tech/documentation
- **Gemini API:** aistudio.google.com
- **n8n Telegram Trigger:** docs.n8n.io

⭐ Star us on GitHub! github.com/datadrivenconstruction/DDC-CWICR
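One pass of the calculation loop described above can be sketched roughly as follows. The Qdrant request shape matches its points/search endpoint, but the payload field names and rates are illustrative assumptions, not the workflow's actual schema:

```javascript
// Build a Qdrant search request for one work item's embedding.
// Sent as POST {QDRANT_URL}/collections/{collection}/points/search.
function buildQdrantSearch(embedding, limit = 5) {
  return {
    vector: embedding,
    limit,
    with_payload: true, // we need the rate fields, not just scores
  };
}

// Price the best match: unit rate × quantity, split by component.
// laborRate/materialRate/machineRate are assumed payload fields.
function priceItem(match, quantity) {
  const { laborRate, materialRate, machineRate } = match.payload;
  const unitRate = laborRate + materialRate + machineRate;
  return {
    total: unitRate * quantity,
    labor: laborRate * quantity,
    materials: materialRate * quantity,
    machinery: machineRate * quantity,
  };
}

const match = { score: 0.91, payload: { laborRate: 400, materialRate: 550, machineRate: 50 } };
console.log(priceItem(match, 4.5).total); // 4500 for 4.5 m² at a 1000/m² unit rate
```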
by Bhavy Shekhaliya
## Overview

This n8n template demonstrates how to use AI to automatically analyze WordPress blog content and generate relevant, SEO-optimized tags for WordPress posts.

## Use cases

Automate content tagging for WordPress blogs, maintain consistent taxonomy across large content libraries, save hours of manual tagging work, or improve SEO by ensuring every post has relevant, searchable tags!

## Good to know

- The workflow creates new tags automatically if they don't exist in WordPress.
- Tag generation is intelligent: it avoids duplicates by mapping to existing tag IDs.

## How it works

1. We fetch a WordPress blog post using the WordPress node with sticky data enabled for testing.
2. The post content is sent to GPT-4.1-mini, which analyzes it and generates 5-10 relevant tags using a structured output parser.
3. All existing WordPress tags are fetched via HTTP Request to check for matches.
4. A smart loop processes each AI-generated tag: if the tag already exists, it maps to the existing tag ID; if it's new, it creates the tag via the WordPress API.
5. All tag IDs are aggregated and the WordPress post is updated with the complete tag list.

## How to use

The manual trigger node is used as an example, but feel free to replace it with other triggers such as a webhook, a schedule, or a WordPress webhook for new posts. Modify the "Fetch One WordPress Blog" node to fetch multiple posts or integrate with your publishing workflow.

## Requirements

- WordPress site with REST API enabled
- OpenAI API

## Customising this workflow

- Adjust the AI prompt to generate tags specific to your industry or SEO strategy
- Change the tag count (currently 5-10) based on your needs
- Add filtering logic to only tag posts in specific categories
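The existing-vs-new mapping in the tag loop can be sketched as a pure function (in WordPress, a missing tag would then be created via `POST /wp-json/wp/v2/tags`; the sample data here is illustrative):

```javascript
// Map AI-generated tag names to existing WordPress tag IDs,
// case-insensitively, and collect the names that still need
// to be created via POST /wp-json/wp/v2/tags.
function mapTags(aiTags, existingTags) {
  const byName = new Map(existingTags.map(t => [t.name.toLowerCase(), t.id]));
  const ids = [];
  const toCreate = [];
  for (const name of aiTags) {
    const id = byName.get(name.toLowerCase());
    if (id !== undefined) ids.push(id);
    else toCreate.push(name);
  }
  return { ids, toCreate };
}

const existing = [{ id: 7, name: 'SEO' }, { id: 12, name: 'Automation' }];
const result = mapTags(['seo', 'n8n', 'Automation'], existing);
console.log(result); // { ids: [ 7, 12 ], toCreate: [ 'n8n' ] }
```

Matching case-insensitively is what prevents the duplicate tags mentioned under "Good to know".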
by Paul Roussel
Automated workflow that generates custom AI image backgrounds from text prompts using Gemini's Nano Banana (native image generation), removes video backgrounds, and composites videos on AI-generated scenes. Create any background you can imagine without needing stock images.

## How it works

- **Describe background:** Provide a video URL and a text prompt describing the desired background scene (e.g., "modern office with city skyline at golden hour")
- **AI generates image:** Gemini creates a background image from your prompt in ~10-20 seconds
- **Upload to Drive:** The generated background is saved to Google Drive and made publicly accessible
- **Remove & composite:** The video background is removed and composited on the AI-generated scene with a centered template
- **Save final video:** The completed video is uploaded to Google Drive with a shareable link

## Set up steps

⏱️ Total setup time: ~5 minutes

- **Get a Gemini API key (~1 min):** Visit https://aistudio.google.com/apikey, create a new API key, and add it to n8n Settings → Variables as GEMINI_KEY
- **Get a VideoBGRemover API key (~2 min):** Visit https://videobgremover.com/n8n, sign up, and add the key to n8n as VIDEOBGREMOVER_KEY
- **Connect Google Drive (~2 min):** Click the "Save Background Image to Drive" node, click "Connect", and authorize n8n

## Use cases

- Marketing videos with custom branded environments tailored to your message
- Product demos with unique AI-generated backgrounds that match your product aesthetic
- Social media content with creative scenes you can't find in stock libraries
- AI avatars placed in AI-generated worlds
- Presentations with custom backgrounds generated for specific topics
- A/B testing different background variations for the same video

## Pricing

- Gemini: ~$0.03 per generated image
- VideoBGRemover: $0.50-$2.00 per minute of video
- Total: ~$0.53-$2.03 per video

Triggers: Webhook (for automation) or Manual (for testing)

Processing time: typically 5-7 minutes total

## Prompt tips

Be descriptive and specific. Instead of "office," try: "A modern minimalist office with floor-to-ceiling windows overlooking a city skyline at golden hour. Warm sunlight, polished concrete floors, sleek wooden desks, green plants."
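The quoted per-video total assumes a one-minute clip; for longer videos the per-minute charge dominates. A quick estimator using the rates from the pricing list (the function itself is not part of the workflow):

```javascript
// Estimate the cost of one run: a flat Gemini image fee plus a
// per-minute background-removal charge.
function estimateCost(videoMinutes, perMinuteRate = 2.0, geminiImageFee = 0.03) {
  return geminiImageFee + perMinuteRate * videoMinutes;
}

console.log(estimateCost(1, 0.5)); // 0.53 — the low end of the quoted range
console.log(estimateCost(1, 2.0)); // 2.03 — the high end
console.log(estimateCost(3, 2.0)); // 6.03 for a 3-minute video at the high rate
```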
by Aadarsh Jain
# Document Analyzer and Q&A Workflow

AI-powered document and web page analysis using n8n and a GPT model. Ask questions about any local file or web URL and get intelligent, formatted answers.

## Who's it for

Perfect for researchers, developers, content analysts, students, and anyone who needs quick insights from documents or web pages without uploading files to external services.

## What it does

- **Analyzes local files:** PDF, Markdown, Text, JSON, YAML, Word docs
- **Fetches web content:** documentation sites, blogs, articles
- **Answers questions:** using a GPT model with structured, well-formatted responses

Input format: path_or_url | your_question

Examples:

- /Users/docs/readme.md | What are the installation steps?
- https://n8n.io | What is n8n?

## Setup

1. Import the workflow into n8n
2. Add your OpenAI API key to credentials
3. Link the credential to the "OpenAI Document Analyzer" node
4. Activate the workflow
5. Start chatting!

## Customize

- Change the AI model → edit the "OpenAI Document Analyzer" node (switch to gpt-4o-mini for cost savings)
- Adjust content length → modify maxLength in the "Process Document Content" node (default: 15000 chars)
- Add file types → update the supportedTypes array in the "Parse Document & Question" node
- Increase timeout → change the timeout value in the "Fetch Web Content" node (default: 30s)
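Parsing the `path_or_url | your_question` input format is a split with a guard; a sketch of what the "Parse Document & Question" node might do (the actual node code may differ):

```javascript
// Split "path_or_url | question" into its two parts and decide
// whether the first part is a web URL or a local file path.
function parseInput(raw) {
  const sep = raw.indexOf('|');
  if (sep === -1) throw new Error('Expected format: path_or_url | your_question');
  const target = raw.slice(0, sep).trim();
  const question = raw.slice(sep + 1).trim();
  const isUrl = /^https?:\/\//i.test(target); // URL → fetch, otherwise read file
  return { target, question, isUrl };
}

console.log(parseInput('https://n8n.io | What is n8n?'));
// { target: 'https://n8n.io', question: 'What is n8n?', isUrl: true }
```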
by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Lookio assistant—which you've connected to your own trusted knowledge base of uploaded documents—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

## Who is this for?

- **Content marketers & SEO specialists:** Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts:** Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies:** Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

## What problem does this solve?

- **Reduces AI "hallucinations":** By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage:** The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation:** The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation:** It effectively mimics the workflow of a human expert (outline, research, consolidate, write) in an automated, scalable, and incredibly fast way.

## How it works

This workflow follows a multi-step process to ensure the highest quality output:

1. **Decomposition:** You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
2. **Fact-based research (RAG):** The workflow loops through each of these sub-questions and queries your Lookio assistant. This assistant, which you have pre-configured by uploading your own documents, finds the relevant information and source links for each point.
3. **Consolidation:** All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief.
4. **Final article generation:** This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-4o). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

## Building your own RAG pipeline vs. using Lookio or alternative tools

Building a RAG system natively within n8n offers deep customization, but it requires managing a toolchain for data processing, text chunking, and retrieval optimization. An alternative is to use a managed service like Lookio, which provides RAG functionality through an API. This approach abstracts the backend infrastructure for document ingestion and querying, trading the granular control of a native build for a reduction in development and maintenance tasks.

## Implementing the template

1. **Set up your Lookio assistant (prerequisite):** Lookio is a platform for building intelligent assistants that leverage your organization's documents as a dedicated knowledge base. First, sign up at Lookio; you'll get 50 free credits to get started. Upload the documents you want to use as your knowledge base. Create a new assistant, then generate an API key. Copy your Assistant ID and your API key for the next step.
2. **Configure the workflow:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes. In the Query Lookio Assistant (HTTP Request) node, paste your Assistant ID in the body and add your Lookio API key for authentication (we recommend using a Bearer Token credential).
3. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

## Taking it further

- **Automate publishing:** Connect the final **Article result** node to a **Webflow** or **WordPress** node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the **Form Trigger** with an **Airtable** or **Google Sheet** trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final **New content - Generate the AI output** node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
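The consolidation step is essentially string assembly. A sketch, assuming each loop iteration yields a question, an answer, and source URLs (the input shape is an assumption for illustration, not Lookio's actual response format):

```javascript
// Compile the sub-question results into a single research brief
// that the final writer prompt can consume.
function buildResearchBrief(title, results) {
  const sections = results.map(({ question, answer, sources }, i) => {
    const links = sources.length ? `Sources: ${sources.join(', ')}` : 'Sources: none found';
    return `### ${i + 1}. ${question}\n${answer}\n${links}`;
  });
  return `# Research brief: ${title}\n\n${sections.join('\n\n')}`;
}

const brief = buildResearchBrief('Onboarding guide', [
  { question: 'What is the product?', answer: 'An internal tool.', sources: ['https://example.com/docs'] },
]);
console.log(brief.startsWith('# Research brief: Onboarding guide')); // true
```

Keeping the source URLs inline in the brief is what lets the final writer integrate them as hyperlinks.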
by Amine ARAGRAG
This n8n template automates the collection and enrichment of Product Hunt posts using AI and Google Sheets. It fetches new tools daily, translates content, categorizes them intelligently, and saves everything into a structured spreadsheet—ideal for building directories, research dashboards, newsletters, or competitive intelligence assets.

## Good to know

- Sticky notes inside the workflow explain each functional block and required configurations.
- Uses cursor-based pagination to safely fetch Product Hunt data.
- The AI agent handles translation, documentation generation, tech extraction, and function-area classification.
- Category translations are synced with a Google Sheets dictionary to avoid duplicates.
- All enriched entries are stored in a clean "Tools" sheet for easy filtering or reporting.

## How it works

1. A schedule trigger starts the workflow daily.
2. Product Hunt posts are retrieved via GraphQL and processed in batches.
3. A code node restructures each product into a consistent schema.
4. The workflow checks if a product already exists in Google Sheets.
5. For new items, the AI agent generates metadata, translations, and documentation.
6. Categories are matched or added to a Google Sheets dictionary.
7. The final enriched product entry is appended or updated in the spreadsheet.
8. Pagination continues until no next page remains.

## How to use

- Connect Product Hunt OAuth2, Google Sheets, and OpenAI credentials.
- Adjust the schedule trigger to your preferred frequency.
- Optionally expand enrichment fields (tags, scoring, custom classifications).
- Replace the trigger with a webhook or manual trigger if needed.

## Requirements

- Product Hunt OAuth2 credentials
- Google Sheets account
- OpenAI (or compatible) API access

## Customising this workflow

- Add Slack or Discord notifications for new tools.
- Push enriched data to Airtable, Notion, or a database.
- Extend AI enrichment with summaries or SEO fields.
- Use the Google Sheet as a backend for dashboards or frontend applications.
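The cursor-based pagination loop can be sketched as follows. The `endCursor`/`hasNextPage` shape follows the GraphQL connections convention that Product Hunt's API uses; `fetchPage` stands in for the workflow's HTTP Request node:

```javascript
// Keep requesting pages until the API reports no next page,
// passing the previous page's endCursor as the "after" argument.
async function fetchAllPosts(fetchPage) {
  const posts = [];
  let cursor = null;
  let hasNextPage = true;
  while (hasNextPage) {
    const page = await fetchPage(cursor); // expected: { nodes, pageInfo }
    posts.push(...page.nodes);
    cursor = page.pageInfo.endCursor;
    hasNextPage = page.pageInfo.hasNextPage;
  }
  return posts;
}

// Demo with a stub fetcher returning two pages.
const demoPages = [
  { nodes: ['post-1', 'post-2'], pageInfo: { endCursor: 'c1', hasNextPage: true } },
  { nodes: ['post-3'], pageInfo: { endCursor: null, hasNextPage: false } },
];
let pageIndex = 0;
fetchAllPosts(async () => demoPages[pageIndex++]).then(all => console.log(all.length)); // logs 3
```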
by Rahul Joshi
## Description

Keep your CRM pipeline clean and actionable by automatically archiving inactive deals, logging results to Google Sheets, and sending Slack summary reports. This workflow ensures your sales team focuses on active opportunities while maintaining full audit visibility. 🚀📈

## What this template does

- Triggers daily at 9 AM to check all GoHighLevel CRM opportunities. ⏰
- Filters deals that have been inactive for 10+ days using the last activity or update date. 🔍
- Automatically archives inactive deals to keep pipelines clutter-free. 📦
- Formats and logs deal details into Google Sheets for record-keeping. 📊
- Sends a Slack summary report with total archived count, value, and deal names. 💬

## Key benefits

- ✅ Keeps pipelines organized by removing stale opportunities.
- ✅ Saves time through fully automated archiving and reporting.
- ✅ Maintains a transparent audit trail in Google Sheets.
- ✅ Improves sales visibility with automated Slack summaries.
- ✅ Easily adjustable inactivity threshold and scheduling.

## Features

- Daily scheduled trigger (9 AM) with an adjustable cron expression.
- GoHighLevel CRM integration for fetching and updating opportunities.
- Conditional logic to detect inactivity periods.
- Google Sheets logging with automatic updates.
- Slack integration for real-time reporting and team visibility.

## Requirements

- GoHighLevel API credentials (OAuth2) with opportunity access.
- Google Sheets OAuth2 credentials with edit permissions.
- Slack Bot token with chat:write permission.
- A connected n8n instance (cloud or self-hosted).

## Target audience

- Sales and operations teams managing CRM hygiene.
- Business owners wanting automated inactive-deal cleanup.
- Agencies monitoring client pipelines across teams.
- CRM administrators ensuring data accuracy and accountability.

## Step-by-step setup instructions

1. Connect your GoHighLevel OAuth2 credentials in n8n. 🔑
2. Link your Google Sheets document and replace the Sheet ID. 📋
3. Configure Slack credentials and specify your target channel. 💬
4. Adjust the inactivity threshold (default: 10 days) as needed. ⚙️
5. Update the cron schedule (default: 9 AM daily). ⏰
6. Test the workflow manually to verify end-to-end automation. ✅
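The inactivity check described above reduces to simple date arithmetic. A sketch (field names are illustrative, not GoHighLevel's exact opportunity schema):

```javascript
// Decide whether a deal counts as inactive: 10+ days since its
// last activity, falling back to its last update date.
function isInactive(deal, now = new Date(), thresholdDays = 10) {
  const last = new Date(deal.lastActivityDate || deal.updatedAt);
  const days = (now - last) / (1000 * 60 * 60 * 24); // ms → days
  return days >= thresholdDays;
}

const now = new Date('2025-01-20T09:00:00Z');
console.log(isInactive({ updatedAt: '2025-01-05T00:00:00Z' }, now));       // true  (15+ days)
console.log(isInactive({ lastActivityDate: '2025-01-15T00:00:00Z' }, now)); // false (5 days)
```

Changing `thresholdDays` is the single knob behind the "adjustable inactivity threshold" benefit.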
by Hugo
# 🤖 n8n AI Workflow Dashboard Template

## Overview

This template is designed to collect execution data from your AI workflows and generate an interactive dashboard for easy monitoring. It's compatible with any AI Agent or RAG workflow in n8n.

## Main objectives

💾 **Collect execution data**

- Track messages, tokens used (prompt/completion), session IDs, model names, and compute costs
- Designed to plug into any AI agent or RAG workflow in n8n

📊 **Generate an interactive dashboard**

- Visualize KPIs like total messages, unique sessions, tokens used, and costs
- Display daily charts, including stacked bars for prompt vs completion tokens
- Monitor AI activity, analyze usage, and track costs at a glance

## ✨ Key features

💬 **Conversation data collection**

Messages sent to the AI agent are recorded with: sessionId, chatInput, output, promptTokens, completionTokens, totalTokens, globalCost, and modelName. This allows detailed tracking of AI interactions across sessions.

💰 **Model pricing management**

- A sub-workflow with a Set node provides token prices for LLMs
- Data is stored in the Model price table for cost calculations

🗄️ **Data storage via n8n Data Tables**

Two tables need to be created.

Model price:

    { "id": 20, "createdAt": "2025-10-11T12:16:47.338Z", "updatedAt": "2025-10-11T12:16:47.338Z", "name": "claude-4.5-sonnet", "promptTokensPrice": 0.000003, "completionTokensPrice": 0.000015 }

Messages:

    [ { "id": 20, "createdAt": "2025-10-11T15:28:00.358Z", "updatedAt": "2025-10-11T15:31:28.112Z", "sessionId": "c297cdd4-7026-43f8-b409-11eb943a2518", "action": "sendMessage", "output": "Hey! \nHow's it going?", "chatInput": "yo", "completionTokens": 6, "promptTokens": 139, "totalTokens": 139, "globalCost": null, "modelName": "gpt-4.1-mini", "executionId": 245 } ]

These tables store conversation data and pricing info to feed the dashboard and calculations.

📈 **Interactive dashboard**

- **KPIs generated:** total messages, unique sessions, total/average tokens, total/average cost 💸
- **Charts included:** daily messages, tokens used per day (prompt vs completion, stacked bar)
- Provides a visual summary of AI workflow performance

## ⚙️ Installation & setup

Follow these steps to set up and run the workflow in n8n:

1. **Import the workflow.** Download or copy the JSON workflow and import it into n8n.
2. **Create the data tables.** The Model price table stores token prices per model; the Messages table stores messages generated by the AI agent.
3. **Configure the webhook.** The workflow is triggered via a webhook; use the webhook URL to send conversation data.
4. **Set up the pricing sub-workflow.** It automatically generates price data for the models used; connect it to your main workflow to enrich cost calculations.
5. **Dashboard visualization.** The workflow returns HTML code rendering the dashboard; view it in a browser or embed it in your interface. 🌐

Once configured, your workflow tracks AI usage and costs in real time, providing a live dashboard for quick insights.

## 🔧 Adaptability

- The template is modular and can be adapted to any AI agent or RAG workflow
- KPIs, charts, colors, and metrics can be customized in the HTML rendering
- Ideal for monitoring, cost tracking, and reporting AI workflow performance
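The globalCost field can be derived by joining the two tables on modelName. A sketch of the per-message calculation, using the sample prices shown above (the join itself is an assumption about how the workflow wires the tables together):

```javascript
// Compute a message's cost from the Model price entry matching
// its modelName: tokens × per-token price for each side.
function computeGlobalCost(message, prices) {
  const price = prices.find(p => p.name === message.modelName);
  if (!price) return null; // unknown model: leave globalCost empty
  return (
    message.promptTokens * price.promptTokensPrice +
    message.completionTokens * price.completionTokensPrice
  );
}

const prices = [{ name: 'claude-4.5-sonnet', promptTokensPrice: 0.000003, completionTokensPrice: 0.000015 }];
const msg = { modelName: 'claude-4.5-sonnet', promptTokens: 139, completionTokens: 6 };
console.log(computeGlobalCost(msg, prices)); // 139×0.000003 + 6×0.000015 ≈ 0.000507
```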
by Nguyen Thieu Toan
# 🤖 Facebook Messenger Smart Chatbot – Batch, Format & Notify with n8n Data Table

## 🌟 What is this workflow?

This is a smart chatbot solution built with n8n, designed to integrate seamlessly with Facebook Messenger. It batches incoming messages, formats them for clarity, tracks conversation history, and sends natural replies using AI. Perfect for businesses, customer support, or personal AI agents.

## ⚙️ Key features

- 🔄 **Smart batching:** Groups consecutive user messages to process them in one go, avoiding fragmented replies.
- 🧠 **Context formatting:** Automatically formats messages to fit Messenger's structure and length limits.
- 📋 **Conversation history tracking:** Stores and retrieves chat logs between user and bot using an n8n Data Table.
- 👀 **Seen & typing effects:** Adds human-like responsiveness with Messenger's sender actions.
- 🧩 **AI Agent integration:** Easily connects to GPT, Gemini, or any LLM for natural replies, scheduling, or business logic.

## 🚀 How it works

1. Connects to your Facebook Page via webhook to receive and send messages.
2. Stores incoming messages in a Data Table called Batch_messages, including fields like user_text, bot_rep, processed, etc.
3. Collects unprocessed messages, sorts them by id, and creates a merged_message and full history.
4. Sends the history to an AI Agent for contextual response generation.
5. Sends the AI reply back to Messenger with seen/typing effects.
6. Updates the message status to processed = true to prevent duplicate handling.

## 🛠️ Setup guide

1. Create a Facebook App and Messenger webhook, and link it to your Page.
2. Set up the Batch_messages Data Table in n8n with the required columns.
3. Import the workflow or build the nodes manually using the tutorial.
4. Configure your API tokens, webhook URLs, and AI Agent endpoint.
5. Deploy the workflow on a public n8n server.

📘 Full tutorial available at: 👉 Smart Chatbot Workflow Guide by Nguyen Thieu Toan

## 💡 Pro tips

- Customize the AI prompt and persona to match your business tone.
- Add scheduling, lead capture, or CRM integration using n8n's flexible nodes.
- Monitor your Data Table regularly to ensure clean message flow and batching.

## 👤 About the creator

Nguyen Thieu Toan (Nguyễn Thiệu Toàn / Jay Nguyen) is an expert in AI automation, business optimization, and chatbot development. With a background in marketing and deep knowledge of n8n workflows, Jay helps businesses harness AI to save time, boost performance, and deliver smarter customer experiences.

Website: https://nguyenthieutoan.com
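The batching step (collect unprocessed rows, sort by id, merge into one message) can be sketched as a pure function over the Batch_messages fields named above:

```javascript
// Merge a user's unprocessed messages (sorted by id) into one
// merged_message string for the AI Agent, and return the row ids
// that should later be marked processed = true.
function batchMessages(rows) {
  const pending = rows
    .filter(r => !r.processed)
    .sort((a, b) => a.id - b.id);
  return {
    merged_message: pending.map(r => r.user_text).join('\n'),
    ids: pending.map(r => r.id),
  };
}

const rows = [
  { id: 2, user_text: 'is it open today?', processed: false },
  { id: 1, user_text: 'hi', processed: false },
  { id: 0, user_text: 'hello', processed: true }, // already handled
];
console.log(batchMessages(rows).merged_message); // "hi\nis it open today?"
```

Marking the returned ids as processed only after the reply is sent is what prevents duplicate handling.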
by Feedspace
## Who is this for?

This template is for teams who collect customer testimonials on Feedspace (via forms) and want to automatically convert them into professional case studies using AI and publish them to WordPress.

## What this workflow does

This workflow listens for incoming testimonial data via a webhook, extracts the relevant fields, uses an AI agent to generate a complete case study (including title, sections, and structure), and publishes the final content directly to WordPress. The AI is instructed to vary tone, angle, and structure across case studies to avoid repetitive content and improve SEO value.

## Requirements

- Feedspace account with webhook integration enabled
- Access to a WordPress site with REST API enabled
- An AI API key (Google Gemini or compatible model)

## Setup steps

1. Connect to Feedspace: activate the workflow and copy the Production webhook URL, go to Feedspace → Automations → Webhooks, paste the webhook URL and activate it. See https://www.feedspace.io/help/automation/ for more information.
2. Add your AI API credentials to the AI model node.
3. Connect your WordPress account in the WordPress node.
4. Send testimonial data to the webhook in this format: reviewer name, rating, text feedback, event or feedback type.
5. Activate the workflow.

## How it works

1. Receives testimonial data through the Feedspace webhook
2. Extracts reviewer name, rating, feedback, and event type
3. Filters for text-based testimonials
4. Uses an AI agent to choose a unique case study angle and tone, generate structured HTML content, and create an SEO-optimized title
5. Parses and validates the AI output
6. Publishes the generated case study to WordPress as a post
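The text-testimonial filter in step 3 is a one-line predicate. A sketch, where the payload field names are assumptions about the webhook data rather than Feedspace's documented schema:

```javascript
// Only testimonials with non-empty text feedback proceed to
// case-study generation; video/audio-only entries are skipped.
function isTextTestimonial(entry) {
  return typeof entry.text_feedback === 'string' && entry.text_feedback.trim().length > 0;
}

const incoming = [
  { reviewer_name: 'Ana', rating: 5, text_feedback: 'Great tool!', feedback_type: 'review' },
  { reviewer_name: 'Bo', rating: 4, text_feedback: '', feedback_type: 'video' },
];
console.log(incoming.filter(isTextTestimonial).length); // 1
```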
by Laiba
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## How it works

- **User uploads PDF:** The workflow accepts a PDF via webhook.
- **Extract text:** n8n extracts the text content from the PDF.
- **Summarize with AI:** The extracted text is passed to an AI model (Groq, running an OpenAI model) for summarization.
- **Generate audio:** The summary text is sent to a TTS (text-to-speech) API (Qwen-TTS-Demo); you can use other free alternatives.
- **Serve result:** The workflow outputs both the summary and an audio file URL (WAV link), which you can attach to your audio player. This allows users to read or listen to the summary instantly.

## How to use / requirements

- **Import workflow:** Copy/paste the workflow JSON into your n8n instance.
- **Set up the input trigger:** If you want users to upload directly, you can use a webhook or any other trigger.
- **Configure the AI node:** Add your own API key for Groq / OpenAI.
- **Configure the TTS node:** Add credentials for your chosen TTS service.
- **Run the workflow:** Upload a PDF and get back the summary and audio file URL.

n8n smart PDF summarizer & voice generator

Please reach out to me at Laiba Zubair if you need further assistance with your n8n workflows and automations!