by Ladies Build With AI
Who is it for

This workflow is designed for anyone who wants to simplify email automation without leaving Google Sheets; it can even send emails automatically without you opening the sheet at all. It's especially useful for:

- Marketers sending bulk or personalized campaigns
- Recruiters managing outreach from candidate lists
- Small business owners who want automated follow-ups
- Anyone who wants to trigger emails directly from sheet updates, e.g. event updates

How it works

The workflow connects Google Sheets with Gmail to let you send emails in either of two ways:

- **Bulk emails (mail merge):** Use data from your sheet to send an email to multiple email addresses, one by one.
- **Triggered emails:** Automatically send an email whenever specific values or conditions in your sheet are met.

No need to manually copy, paste, or switch to Gmail; the process is fully automated.

How to set it up

1. Copy this template spreadsheet for your own use: https://docs.google.com/spreadsheets/d/1fWg_GOU0m_2cQpah7foDiz1WqTRKjCbJJCLBGCvJlXc/edit?usp=sharing
2. Connect your Google Sheets and Gmail accounts to this workflow in n8n.
3. Select the spreadsheet and sheet you want to use.
4. Customize the email nodes with your subject line, body text, and variables such as names or links from your sheet (see the example at the end of this section).
5. Test the workflow, then activate it to start sending emails automatically.

For a step-by-step walkthrough, check out this video guide on YouTube: https://www.youtube.com/watch?v=XJQ0W3yWR-0

Requirements

- A Google Sheets account with your data organized in rows and columns
- A Gmail account for sending emails
- An active n8n account to run the workflow
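For example, if your sheet has Name and Email columns (hypothetical headers; substitute your own), the Gmail node's fields can pull values from the current row with n8n expressions, sending one personalized email per row:

```text
To:      {{ $json.Email }}
Subject: Quick update for {{ $json.Name }}
Message: Hi {{ $json.Name }}, here are this week's event updates...
```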
by Le Nguyen
Description (How it works)

This workflow keeps your Zalo Official Account access token valid and easy to reuse across other flows, with no external server required.

High-level steps:

- **Scheduled refresh** runs on an interval to renew the access token before it expires.
- **Static Data cache (global)** stores the access/refresh tokens plus their expiries for reuse by any downstream node (see the sketch at the end of this section).
- **OAuth exchange** calls Zalo OAuth v4 with your app_id and secret_key to get a fresh access token.
- **Immediate output** returns the current access token to the next nodes after each refresh.
- **Operational webhooks** include:
  - A reset webhook to clear the cache when rotating credentials or testing.
  - A token peek webhook to read the currently cached token for other services.

Setup steps (estimated time ~8–15 minutes)

1. Collect Zalo credentials (2–3 min): obtain your app_id, secret_key, and a valid refresh_token.
2. Import & activate the workflow (1–2 min): import the JSON into n8n and activate it.
3. Wire inputs (2–3 min): point the "Set Refresh Token and App ID" node to your env vars (or paste values for a quick test).
4. Adjust the schedule & secure the webhooks (2–3 min): tune the run interval to your token TTL; protect the reset/peek endpoints (e.g., with a secret param or an IP allowlist).
5. Test (1–2 min): execute once to populate Static Data; optionally try the token peek and reset webhooks to confirm the behavior.
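The cache-then-refresh pattern looks roughly like this in an n8n Code node. This is a minimal sketch: the endpoint, header, and field names follow Zalo's OAuth v4 documentation at the time of writing and should be verified against the current reference, and note that Static Data only persists for active (production) executions, not manual test runs.

```javascript
// Global static data survives between executions of an active workflow.
const cache = $getWorkflowStaticData('global');
const input = $input.first().json; // app_id, secret_key, refresh_token from the Set node

// Reuse the cached token while it is still valid (60 s safety margin).
if (cache.accessToken && Date.now() < (cache.expiresAt ?? 0) - 60000) {
  return [{ json: { access_token: cache.accessToken, cached: true } }];
}

// Otherwise exchange the refresh token for a fresh access token.
const res = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://oauth.zaloapp.com/v4/oa/access_token',
  headers: {
    secret_key: input.secret_key,
    'Content-Type': 'application/x-www-form-urlencoded',
  },
  body: new URLSearchParams({
    app_id: input.app_id,
    refresh_token: cache.refreshToken ?? input.refresh_token,
    grant_type: 'refresh_token',
  }).toString(),
});

// Persist both tokens; Zalo rotates the refresh token on every exchange.
cache.accessToken = res.access_token;
cache.refreshToken = res.refresh_token;
cache.expiresAt = Date.now() + Number(res.expires_in) * 1000;

return [{ json: { access_token: res.access_token, cached: false } }];
```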
by Ali Khosravani
This workflow automatically generates natural product comments using AI and posts them to your WooCommerce store. It helps boost engagement and makes product pages look more active and authentic.

How It Works

1. Fetches all products from your WooCommerce store.
2. Builds an AI prompt based on each product's name and description.
3. Uses OpenAI to generate a short, human-like comment (neutral, positive, negative, or questioning).
4. Assigns a random reviewer name and email.
5. Posts the comment back to WooCommerce as a product review (see the request sketch at the end of this section).

Requirements

- n8n version 1.49.0 or later (recommended)
- An active OpenAI API key
- WooCommerce installed with its REST API enabled
- WordPress API credentials (Consumer Key & Consumer Secret)

Setup Instructions

1. Import this workflow into n8n.
2. Add your credentials in n8n > Credentials: OpenAI API (API key) and WooCommerce API (consumer key & secret).
3. Replace the sample URL https://example.com with your own WordPress/WooCommerce site URL.
4. Execute manually, or schedule the workflow to run periodically.

Categories: AI & Machine Learning, WooCommerce, WordPress, Marketing, Engagement

Tags: ai, openai, woocommerce, comments, automation, reviews, n8n

> Note: AI-generated comments should be reviewed periodically to ensure they align with your store's policies and brand voice.
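For context, the final step corresponds to WooCommerce's product reviews REST endpoint (POST /wp-json/wc/v3/products/reviews, authenticated with the consumer key and secret). A sketch of the request body, with illustrative values:

```json
{
  "product_id": 123,
  "review": "Really solid build quality, and it shipped faster than expected.",
  "reviewer": "Jamie Carter",
  "reviewer_email": "jamie.carter@example.com",
  "rating": 4
}
```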
by Luca Olovrap
How it works

This workflow provides a complete, automated backup solution for your n8n instance, running on a daily schedule to ensure your automations are always safe.

- **Automatic cleanup:** It first connects to your Google Drive to find and delete old backup folders, keeping your storage clean and organized based on a retention number you set.
- **Daily folder creation:** It then creates a new, neatly dated folder to store the current day's backup.
- **Fetches & saves workflows:** Finally, it uses the n8n API to get a list of all your workflows, converts each one to a .json file, and uploads them to the newly created folder in Google Drive (a sketch of this conversion step appears at the end of this section).

Set up steps

Setup time: ~3 minutes. This template is designed to be as plug-and-play as possible; all configuration is grouped in a single node for quick setup.

1. **Connect your accounts:** Authenticate the Google Drive and n8n API nodes with your credentials.
2. **Configure main settings:** Open the Set node named **"CONFIG - Set your variables here"** and:
   - Paste the ID of the main Google Drive folder where backups will be stored.
   - Adjust the number of recent backups you want to keep.
3. **Activate the workflow:** Turn on the workflow. Your automated backup system is now active.

For more detailed instructions, including how to find your Google Drive folder ID, please refer to the sticky notes inside the workflow.
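To illustrate the convert-to-file step, a Code node along these lines turns each workflow object returned by the n8n API into an uploadable .json binary (a sketch, not necessarily the template's exact node):

```javascript
// One input item per workflow from the n8n API; emit a named .json file for each.
const results = [];
for (const item of $input.all()) {
  const wf = item.json;
  const fileName = `${wf.name}.json`;
  results.push({
    json: { fileName },
    binary: {
      data: await this.helpers.prepareBinaryData(
        Buffer.from(JSON.stringify(wf, null, 2), 'utf8'),
        fileName,
        'application/json',
      ),
    },
  });
}
return results;
```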
by Elodie Tasia
Automatically create branded social media graphics, certificates, thumbnails, or marketing visuals using Bannerbear's template-based image generation API. Bannerbear's API is asynchronous by default: this workflow shows you how to use both asynchronous (webhook-based) and synchronous modes depending on your needs.

What it does

This workflow connects to Bannerbear's API to generate custom images based on your pre-designed templates. You can modify text, colors, and other elements programmatically. By default, Bannerbear works asynchronously: you submit a request, receive an immediate 202 Accepted response, and get the final image via webhook or polling. This workflow demonstrates both the standard asynchronous approach and an alternative synchronous method where you wait for the image to be generated before proceeding.

How it works

1. Set parameters: configure your Bannerbear API key, template ID, and content (title, subtitle)
2. Choose mode: select synchronous (wait for the immediate response) or asynchronous (standard webhook delivery)
3. Generate image: the workflow calls Bannerbear's API with your modifications (see the example request at the end of this section)
4. Receive result: get the image URL, dimensions, and metadata in PNG or JPG format

Async mode (recommended): the workflow receives a pending status immediately, then a webhook delivers the completed image when ready.
Sync mode: the workflow waits for the image generation to complete before proceeding.

Setup requirements

- A Bannerbear account (free tier available)
- A Bannerbear template created in your dashboard
- Your API key and template ID from Bannerbear
- For async mode: the ability to receive webhooks (a production n8n instance)

How to set up

1. Get Bannerbear credentials:
   - Sign up at bannerbear.com
   - Create a project and design a template
   - Copy your API key from Settings > API Key
   - Copy your template ID from the API Console
2. Configure the workflow:
   - Open the "SetParameters" node
   - Replace the API key and template ID with yours
   - Customize the title and subtitle text
   - Set call_mode to "sync" or "async"
3. For async mode (recommended):
   - Activate the "Webhook_OnImageCreated" node
   - Copy the production webhook URL
   - Add it to Bannerbear via Settings > Webhooks > Create a Webhook
   - Set the event type to "image_created"

Customize the workflow

- Modify the template parameters to match your Bannerbear template fields
- Add additional modification objects for more dynamic elements (colors, backgrounds, images)
- Connect to databases, CRMs, or other tools to pull content automatically
- Chain multiple image generations for batch processing
- Store generated images in Google Drive, S3, or your preferred storage
- Use async mode for high-volume generation without blocking your workflow
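For reference, the generate-image call sends a body along these lines, where each modification's name must match a layer in your Bannerbear template (the layer names below are illustrative):

```json
{
  "template": "your-template-id",
  "modifications": [
    { "name": "title", "text": "Spring Sale" },
    { "name": "subtitle", "text": "Up to 40% off this week" }
  ]
}
```

At the time of writing, Bannerbear documents asynchronous requests against https://api.bannerbear.com/v2/images and synchronous ones against the sync.api.bannerbear.com host; double-check the current API docs before relying on either.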
by Nalin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Qualify and enrich inbound leads with contextual insights and Slack alerts

Who is this for?

Sales teams, account executives, and RevOps professionals who need more than just basic lead scoring. Built for teams that want deep contextual insights about qualified prospects to enable truly relevant conversations from the first touchpoint.

What problem does this solve?

Most qualification stops at "good fit" or "bad fit", but that leaves sales teams flying blind when it comes to actually engaging the prospect. You know they're qualified, but what are their specific pain points? What value propositions resonate? Which reference customers should you mention? This workflow uses Octave's context engine to not only qualify leads but enrich them with actionable insights that turn cold outreach into warm, contextualized conversations.

What this workflow does

Inbound Lead Processing:
- Receives lead information via webhook (firstName, companyName, companyDomain, profileURL, jobTitle)
- Processes leads from website forms, demo requests, content downloads, or trial signups
- Validates and structures lead data for intelligent qualification and enrichment

Contextualized Lead Qualification:
- Leverages Octave's context engine to score leads against your specific ICP
- Analyzes company fit, role relevance, and timing indicators
- Generates qualification scores (1-10) with a detailed rationale
- Filters out low-scoring leads (configurable threshold; default >5)

Deep Lead Enrichment:
- Uses Octave's enrichment engine to generate contextual insights about qualified leads
- Identifies primary responsibilities, pain points, and relevant value propositions
- Suggests appropriate reference customers and use cases to mention
- Provides sales teams with conversation starters grounded in your business context

Enhanced Sales Alerts:
- Sends enriched Slack alerts with the qualification score plus actionable insights
- Includes suggested talking points, pain points, and reference customers
- Enables sales teams to have contextualized conversations from first contact

Setup

Required Credentials:
- Octave API key and workspace access
- Slack OAuth credentials with channel access
- Access to your lead source system (website forms, CRM, etc.)
Step-by-Step Configuration:

1. Set up the Octave Qualification Agent:
   - Add your Octave API credentials in n8n
   - Replace your-octave-qualification-agent-id with your actual qualification agent ID
   - Configure your qualification agent with your ICP criteria and business context
2. Set up the Octave Enrichment Agent:
   - Replace your-octave-enrichment-agent-id with your actual enrichment agent ID
   - Configure enrichment outputs based on the insights most valuable to your sales process
   - Test enrichment quality with sample leads from your target market
3. Configure the Slack Integration:
   - Add your Slack OAuth credentials to n8n
   - Replace your-slack-channel-id with the channel for enriched lead alerts
   - Customize the Slack message template with the enrichment fields most useful for your sales team
4. Set up the Lead Source:
   - Replace your-webhook-path-here with a unique, secure path
   - Configure your website forms, CRM, or lead source to send data to the webhook
   - Ensure consistent data formatting across lead sources
5. Customize the Qualification Filter:
   - Adjust the Filter node threshold (default: score > 5)
   - Modify it based on your lead volume and qualification standards
   - Test with sample leads to calibrate scoring

Required Webhook Payload Format:

```json
{
  "body": {
    "firstName": "Sarah",
    "lastName": "Johnson",
    "companyName": "ScaleUp Technologies",
    "companyDomain": "scaleuptech.com",
    "profileURL": "https://linkedin.com/in/sarahjohnson",
    "jobTitle": "VP of Engineering"
  }
}
```

How to customize

Qualification Criteria: customize scoring in your Octave qualification agent:
- **Product Level:** Define "good fit" and "bad fit" questions that determine if someone needs your core offering
- **Persona Level:** Set criteria for specific buyer personas and their unique qualification factors
- **Segment Level:** Configure qualification logic for different market segments or use cases
- **Multi-Level Qualification:** Qualify against Product + Persona, Product + Segment, or all three levels combined

Enrichment Insights: configure your Octave enrichment agent to surface the most valuable insights:
- **Primary Responsibilities:** What this person actually does day-to-day
- **Pain Points:** Specific challenges they face that your solution addresses
- **Value Propositions:** Which benefits resonate most with their role and situation
- **Reference Customers:** Similar companies/roles that have succeeded with your solution
- **Conversation Starters:** Contextual talking points for outreach

Slack Alert Format: customize the enrichment data included in alerts:
- Add or remove enrichment fields based on sales team preferences
- Modify message formatting for better readability
- Include additional webhook data if needed

Scoring Threshold: adjust the Filter node to match your qualification standards.

Integration Channels: replace Slack with email, CRM updates, or other notification systems.

Use Cases

- High-value enterprise lead qualification and research automation
- Demo request enrichment for contextual sales conversations
- Event lead processing with immediate actionable insights
- Website visitor qualification and conversation preparation
- Trial signup enrichment for targeted sales outreach
- Content download lead scoring with context-aware follow-up preparation
by Rodrigo
How it works

This workflow automatically responds to incoming emails identified as potential leads, using AI-generated text. It connects to your email inbox via IMAP, classifies incoming messages with an AI model, filters out non-leads, and sends a personalized reply to the relevant messages.

Steps

1. **Email Trigger (IMAP):** Watches your inbox for new emails in real time.
2. **Is Lead? (Message Model):** Uses AI to determine whether the sender is a lead.
3. **Filter:** Passes only lead emails to the next step.
4. **Write Customized Reply (Message Model):** Generates a personalized response using AI.
5. **Get Message:** Retrieves the original email details to ensure correct threading.
6. **Reply to Message:** Sends the AI-generated reply to the sender.

Setup Instructions

1. Connect your IMAP email credentials to the first node and set the folder to watch (e.g., INBOX).
2. In the "Filter leads" node, adjust the AI prompt to match your lead qualification criteria (a sample prompt is sketched at the end of this section).
3. In the "Reply with customized message" node, edit the AI prompt to reflect your product, service, or business tone.
4. Connect your Gmail (or other email provider) credentials in the Get Message and Reply to Message nodes.
5. Test with a few sample emails before activating.

Requirements

- An IMAP-enabled email account (for receiving messages)
- Gmail API access (or adapt the workflow to your email provider)
- OpenAI or other AI model credentials for message analysis and reply generation

This template is ready to use, with all steps documented inside sticky notes for easy customization.
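As a starting point, the classification prompt could look something like this (a hypothetical sketch; the expression references assume the fields produced by n8n's IMAP trigger and should be adapted to your data):

```text
You are an email triage assistant. Reply with exactly "lead" or "not_lead".

Classify as "lead" only if the sender is asking about our products, pricing,
demos, or partnership opportunities. Newsletters, invoices, and automated
notifications are "not_lead".

Subject: {{ $json.subject }}
Body: {{ $json.textPlain }}
```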
by Yang
📄 What this workflow does

This workflow automatically scrapes product information from any website URL entered into a Google Sheet and stores the extracted product details in another sheet. It uses Dumpling AI to extract product data such as name, price, description, and reviews.

👤 Who is this for

This is ideal for:

- Lead generation specialists capturing product info from prospect websites
- eCommerce researchers collecting data on competitor product listings
- Sales teams building enriched product databases from lead URLs
- Anyone who needs to automate product scraping from multiple websites

✅ Requirements

- A Google Sheet with a column labeled Website where URLs will be added
- A second sheet (e.g., product details) where extracted data will be saved
- **Dumpling AI** API access to perform the extraction
- Connected Google Sheets credentials in n8n

⚙️ How to set up

1. Replace the Google Sheet and tab IDs in the workflow with your own.
2. Make sure your source sheet includes a Website column.
3. Connect your Dumpling AI and Google Sheets credentials.
4. Make sure the output sheet has the following headers: productName, price, productDescription. (The workflow also supports review, but it's optional.)
5. Activate the workflow to start processing new rows.

🔁 How it works (Workflow Steps)

1. **Watch New Website URL in Google Sheets:** Triggers when a new row is added with a website URL.
2. **Extract Product Info with Dumpling AI:** Sends the URL to Dumpling AI's extract endpoint using a defined schema for product details (see the sketch at the end of this section).
3. **Split Extracted Products:** Separates multiple products into individual items if the page contains more than one.
4. **Append Product Info to Google Sheets:** Adds the structured results to the specified product details sheet.

🛠️ Customization Ideas

- Add a column to store the original source URL alongside each product
- Use OpenAI to generate short SEO summaries for each product
- Add filters to ignore pages without valid product details
- Send Slack or email notifications when new products are added to the sheet
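The extraction request might be shaped roughly like this, with the schema fields mirroring the output sheet headers above (a sketch; consult Dumpling AI's extract documentation for the exact request format):

```json
{
  "url": "https://example.com/product-page",
  "schema": {
    "products": [
      {
        "productName": "string",
        "price": "string",
        "productDescription": "string",
        "review": "string"
      }
    ]
  }
}
```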
by Ziad Adel
What this workflow does

This workflow sends a daily Slack report with the current number of subscribers in your Mailchimp list. It's a simple way to keep your marketing or growth team informed without logging into Mailchimp.

How it works

1. A **Cron Trigger** starts the workflow once per day (default: 09:00).
2. The **Mailchimp** node retrieves the total number of subscribers for a specific list.
3. The **Slack** node posts a formatted message with the subscriber count to your chosen Slack channel.

Pre-conditions / Requirements

- A Mailchimp account with API access enabled.
- At least one Mailchimp audience list created (you'll need the List ID).
- A Slack workspace with permission to post to your chosen channel.
- n8n connected to both Mailchimp and Slack via credentials.

Setup

1. Cron Trigger
   - The default is 09:00 AM daily. Adjust the time or frequency as needed.
2. Mailchimp: Get Subscribers
   - Connect your Mailchimp account in n8n credentials.
   - Replace {{MAILCHIMP_LIST_ID}} with the List ID of the audience you want to monitor.
   - To find the List ID: log into Mailchimp → Audience → All contacts → Settings → Audience name and defaults.
3. Slack: Send Summary
   - Connect your Slack account in n8n credentials.
   - Replace {{SLACK_CHANNEL}} with the name of the channel where the summary should appear (e.g., #marketing).
   - The message template can be customized, e.g., to include emojis or additional Mailchimp stats (see the example at the end of this section).

Customization Options

- **Multiple lists:** Duplicate the Mailchimp node for different audience lists and send combined stats.
- **Formatting:** Add more details, like new subscribers in the last 24h, by comparing with previous runs (using Google Sheets or a database).
- **Notifications:** Instead of Slack, send the update to email or Microsoft Teams by swapping the output node.

Benefits

- **Automation:** Removes the need for manual Mailchimp checks.
- **Visibility:** Keeps the whole team updated on subscriber growth in real time.
- **Motivation:** Celebrate growth milestones directly in team channels.

Use Cases

- Daily subscriber growth tracking for newsletters.
- Sharing metrics with leadership without giving out Mailchimp access.
- Monitoring the effectiveness of campaigns in near real time.
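For example, the Slack node's message field could use n8n expressions like this (a sketch that assumes the Mailchimp node returns the audience object with stats.member_count, as Mailchimp's Lists API does):

```text
📊 Daily Mailchimp update ({{ $now.toFormat('yyyy-MM-dd') }}):
our list now has *{{ $json.stats.member_count }}* subscribers 🎉
```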
by Vigh Sandor
Workflow Overview

This n8n workflow provides automated monitoring of YouTube channels and sends real-time notifications to RocketChat when new videos are published. It supports all YouTube URL formats, uses dual-source video fetching for reliability, and intelligently filters videos to prevent duplicate notifications.

Key Features

- **Multi-Format URL Support**: Handles @handle, /user/, and /channel/ URL formats
- **Dual Fetching Strategy**: Uses both RSS feeds and HTML scraping for maximum reliability
- **Smart Filtering**: Only notifies about videos published in the last hour
- **Shorts Exclusion**: Automatically excludes YouTube Shorts from notifications
- **Rate Limiting**: 30-second delay between notifications to prevent spam
- **Batch Processing**: Processes multiple channels sequentially
- **Error Handling**: Continues execution even if one channel fails
- **Customizable Schedule**: Default hourly checks, adjustable as needed

Use Cases

Monitor competitor channels, track favorite creators, aggregate content from multiple channels, build content curation workflows, stay updated on educational channels, monitor brand mentions, or track news channels for breaking updates.

Setup Instructions

Prerequisites

- n8n instance (self-hosted or cloud), version 1.0+
- RocketChat server with admin or bot access
- RocketChat API credentials
- Internet connectivity for YouTube access

Step 1: Obtain RocketChat Credentials

Create the bot user:
1. Log in to RocketChat as an administrator
2. Navigate to Administration → Users → New
3. Fill in the details: Name (YouTube Monitor Bot), Username (youtube-bot), Email, Password, Roles (bot)
4. Click Save

Get API credentials:
1. Log in as the bot user
2. Navigate to My Account → Personal Access Tokens
3. Click Generate New Token
4. Enter the token name: n8n YouTube Monitor
5. Copy the generated token immediately
6. Note the User ID from the account settings

Step 2: Configure RocketChat in n8n

1. Open the n8n web interface
2. Navigate to the Credentials section
3. Click Add Credential → RocketChat API
4. Fill in:
   - Domain: your RocketChat URL (e.g., https://rocket.yourdomain.com)
   - User: bot username (e.g., youtube-bot)
   - Password: bot password or personal access token
5. Click Save and test the connection

Step 3: Prepare RocketChat Channel

1. Create a new channel in RocketChat: youtube-notifications
2. Add the bot user to the channel: click the channel menu → Members → Add Users, search for the bot username, and click Add

Step 4: Collect YouTube Channel URLs

- Handle format: https://www.youtube.com/@ChannelHandle
- User format: https://www.youtube.com/user/Username
- Channel ID format: https://www.youtube.com/channel/UCxxxxxxxxxx

All formats are supported. Find a channel ID in the page source or use a browser extension.

Step 5: Import Workflow

1. Copy the workflow JSON
2. In n8n: Workflows → Import from File/URL
3. Paste the JSON or upload the file
4. Click Import

Step 6: Configure Channel List

1. Locate the Channel List node
2. Enter YouTube URLs in the channel_urls field, one per line:
   https://www.youtube.com/@NoCopyrightSounds/videos
   https://www.youtube.com/@chillnation/videos
3. Include the /videos suffix, or the workflow adds it automatically

Step 7: Configure RocketChat Notification

1. Locate the RocketChat Notification node
2. Replace YOUR-CHANNEL-NAME with your channel name
3. Select the RocketChat credential
4. Customize the message template if needed

Step 8: Configure Schedule (Optional)

Default: every 1 hour. To change it, open the Hourly Check node and modify the interval (Minutes, Hours, Days).

Recommended intervals:
- Every hour (default): good balance
- Every 30 minutes: more frequent
- Every 2 hours: less frequent
- Avoid intervals of less than 15 minutes

Important: YouTube RSS updates every 15 minutes, and hourly checks match the 1-hour filter window.

Step 9: Test the Workflow

1. Click the Execute Workflow button
2. Monitor the execution (green = success, red = errors)
3. Check the node outputs:
   - Channel List: shows URLs
   - Filter New Videos: shows found videos (may be empty)
   - RocketChat Notification: shows sent messages
4. Verify the notifications in RocketChat

No notifications is normal if no videos were posted in the last hour.

Step 10: Activate Workflow

1. Toggle the Active switch in the top-right
2. The workflow now runs on schedule automatically
3. Monitor the RocketChat channel for notifications

How to Use

Understanding Workflow Execution

Default schedule (hourly):
- Executes every hour
- Checks all channels
- Processes videos from the last 60 minutes
- Prevents duplicate notifications

Execution duration: 1-5 minutes for 10 channels. Rate limiting adds 30 seconds per video.

Adding New Channels

1. Open the Channel List node
2. Add the new URL on a new line
3. Save (Ctrl+S)
4. The change takes effect on the next run

Removing Channels

1. Open the Channel List node
2. Delete the line, or comment it out with # at the start
3. Save the changes

Changing Check Frequency

1. Open the Hourly Check node
2. Modify the interval
3. If changing from hourly, update the Filter New Videos node:
   - Find: cutoffDate.setHours(cutoffDate.getHours() - 1);
   - Change -1 to match the interval (-2 for 2 hours, -6 for 6 hours)

Important: the time window should match or exceed the check interval.

Understanding Video Sources

RSS feed (primary):
- Official YouTube RSS
- Fast and reliable
- 5-15 minute delay for new videos
- Structured data

HTML scraping (fallback):
- Immediate results
- Works when RSS is unavailable
- More fragile

Benefits of the dual approach:
- Reliability: if one source fails, the other works
- Speed: scraping catches videos immediately
- Completeness: RSS ensures nothing is missed

Videos are deduplicated automatically.

Excluding YouTube Shorts

Shorts are filtered by checking the URL for the /shorts/ path (the combined filter logic is sketched at the end of this section). To include Shorts:
1. Open the Filter New Videos node
2. Find: if (videoUrl && !videoUrl.includes('/shorts/'))
3. Remove the !videoUrl.includes('/shorts/') check

Rate Limiting

A 30-second wait between notifications:
- Prevents flooding RocketChat
- Allows users to read each notification
- Avoids rate limits

Impact: 5 videos = 2.5 minutes, 10 videos = 5 minutes.

To adjust: open the Wait 30 sec node and change the amount field (15-60 seconds recommended).

Handling Multiple Channels

Channels are processed sequentially:
- Prevents overwhelming the workflow
- Ensures reliable execution
- One failed channel doesn't stop the others

Recommended: 20-50 channels per workflow.

FAQ

Q: How many channels can I monitor?
A: 20-50 per workflow is recommended. Split into multiple workflows for more.

Q: Why use both RSS and scraping?
A: RSS is reliable but delayed. Scraping is immediate but fragile. Using both ensures no videos are missed.

Q: Can I exclude specific video types?
A: Yes, add filtering logic in the Filter New Videos node. Shorts are already excluded.

Q: Will this get my IP blocked?
A: Unlikely with hourly checks. Don't check more often than every 15 minutes.

Q: How do I prevent duplicate notifications?
A: Ensure the time window matches the schedule interval. This is already implemented.

Q: What if a channel changes its handle?
A: Update the URL in the Channel List node. YouTube maintains redirects.

Q: Can I monitor playlists?
A: Not directly. This would need modifications for playlist RSS feeds.

Technical Reference

YouTube URL formats:
- Handle: https://www.youtube.com/@handlename
- User: https://www.youtube.com/user/username
- Channel ID: https://www.youtube.com/channel/UCxxxxxx

RSS feed format:
https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxx
The feed contains up to 15 recent videos with title, link, publish date, and thumbnail.
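Putting those two checks together, the heart of the Filter New Videos node looks roughly like this (a sketch reconstructed from the snippets quoted above; the field names on each item may differ in the actual node):

```javascript
// Keep only videos published inside the filter window, excluding Shorts.
const cutoffDate = new Date();
cutoffDate.setHours(cutoffDate.getHours() - 1); // match your check interval

const fresh = [];
for (const item of $input.all()) {
  const videoUrl = item.json.link;
  const publishedAt = new Date(item.json.published);

  // Skip Shorts (URL contains /shorts/) and anything older than the cutoff.
  if (videoUrl && !videoUrl.includes('/shorts/') && publishedAt >= cutoffDate) {
    fresh.push(item);
  }
}

return fresh;
```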
APIs used: YouTube RSS (public), RocketChat API (requires auth)

License: Open for modification and commercial use
by Shahrear
Process Physician Orders into Google Sheets with VLM Run AI Extraction

What this workflow does

1. Monitors Google Drive for new physician order files in a target folder
2. Downloads each file automatically inside n8n for processing
3. Sends the file to VLM Run for AI transcription and structured data extraction
4. Parses healthcare-specific details from the healthcare.physician-order domain as JSON (see the example at the end of this section)
5. Appends the normalized attributes to a Google Sheet as a new row

Setup

Prerequisites: Google account, VLM Run API credentials, Google Sheets access, n8n.

Install the verified VLM Run node from the n8n node list, then click Install. Once installed, you can integrate it directly in your workflow.

Quick Setup:

1. Create the Drive folder you want to watch and copy its Folder ID
2. Create a Google Sheet with headers such as: timestamp, file_name, file_id, mime_type, size_bytes, uploader_email, patient_name, patient_dob, patient_gender, patient_address, patient_phone_no, physician_name, physician_phone_no, physician_email, referring_clinic, diagnosis, exam_date, form_signed_in, and other physician order fields as needed
3. Configure Google Drive OAuth2 for the trigger and download nodes
4. Add your VLM Run API credentials from the VLM Run Dashboard to the VLM Run node
5. Configure Google Sheets OAuth2 and set the Spreadsheet ID + target tab
6. Upload a sample physician order file to the Drive folder to test, then activate

Perfect for

- Converting physician order documents into structured, machine-readable text
- Automating extraction of patient, physician, and clinical details with VLM Run
- Creating a centralized archive of orders for compliance, auditing, or reporting
- Reducing manual data entry and ensuring consistent formatting

Key Benefits

- **End-to-end automation** from Drive upload to a structured Google Sheets row
- **AI-powered accuracy** using VLM Run's healthcare-specific extraction models
- **Standardized attribute mapping** for patient and physician records
- **Instantly searchable archive** directly in Google Sheets
- **Hands-free processing** once the workflow is activated

How to customize

Extend the workflow by adding:

- Attribute-specific parsing (e.g., ICD/CPT diagnosis codes, insurance details)
- Automatic classification of orders by specialty, urgency, or exam type
- Slack, Teams, or email notifications when new physician orders are logged
- Keyword tagging for fast filtering and downstream workflows
- Error-handling rules that move failed conversions into a review folder or error sheet
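For orientation, the structured output that gets mapped into the sheet columns would look roughly like this (the values are illustrative, and the exact fields are defined by VLM Run's healthcare.physician-order domain schema):

```json
{
  "patient_name": "John Smith",
  "patient_dob": "1980-04-12",
  "patient_gender": "male",
  "patient_phone_no": "+1-555-0100",
  "physician_name": "Dr. Jane Doe",
  "referring_clinic": "Example Imaging Center",
  "diagnosis": "Lower back pain",
  "exam_date": "2024-06-01",
  "form_signed_in": true
}
```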
by Amir Safavi-Naini
LLM Cost Monitor & Usage Tracker for n8n

🎯 What This Workflow Does

This workflow provides comprehensive monitoring and cost tracking for all LLM/AI agent usage across your n8n workflows. It extracts detailed token usage data from any workflow execution and calculates precise costs based on current model pricing.

The Problem It Solves

When running LLM nodes in n8n workflows, the token usage and intermediate data are not directly accessible within the same workflow. This monitoring workflow bridges that gap by:

- Retrieving execution data using the execution ID
- Extracting all LLM usage from any nested structure
- Calculating costs with customizable pricing
- Providing detailed analytics per node and model

WARNING: it works only after the full execution of the monitored workflow (i.e., you can't get this data before all tasks in that workflow have completed).

⚙️ Setup Instructions

Prerequisites

- Experience required: basic familiarity with n8n LLM nodes and AI agents
- Agent configuration: in your monitored workflows, go to the agent settings and enable "Return Intermediate Steps"
- To fetch execution data, you need to set up the n8n API in your instance (also available on the free version)

Installation Steps

1. Import this monitoring workflow into your n8n instance
2. Go to Settings >> select n8n API from the left bar >> define an API key. You can now add this as the credential for your "Get an Execution" node
3. Configure your model name mappings in the "Standardize Names" node
4. Update the model pricing in the "Model Prices" node (prices per 1M tokens)
5. To monitor a workflow:
   - Add an "Execute Workflow" node at the end of your target workflow
   - Select this monitoring workflow
   - Important: turn OFF "Wait For Sub-Workflow Completion"
   - Pass the execution ID as input

🔧 Customization

When You See Errors

If the workflow enters the error path, it means an undefined model was detected.
Simply:

1. Add the model name to the standardize_names_dic
2. Add its pricing to the model_price_dic
3. Re-run the workflow

Configurable Elements

- **Model Name Mapping**: Standardize different model name variations (e.g., "gpt-4-0613" → "gpt-4")
- **Pricing Dictionary**: Set costs per million tokens for input/output
- **Extraction Depth**: Captures tokens from any nesting level automatically

Both dictionaries are sketched at the end of this description.

📊 Output Data

Per LLM Call

- **Cost Breakdown**: Prompt, completion, and total costs in USD
- **Token Metrics**: Prompt tokens, completion tokens, total tokens
- **Performance**: Execution time, start time, finish reason
- **Content Preview**: First 100 chars of input/output for debugging
- **Model Parameters**: Temperature, max tokens, timeout, retry count
- **Execution Context**: Workflow name, node name, execution status
- **Flow Tracking**: Previous nodes chain

Summary Statistics

- Total executions and costs
- Breakdown by model type
- Breakdown by node
- Average cost per call
- Total execution time

✨ Key Benefits

- **No External Dependencies**: Everything runs within n8n
- **Universal Compatibility**: Works with any workflow structure
- **Automatic Detection**: Finds LLM usage regardless of nesting
- **Real-time Monitoring**: Track costs as workflows execute
- **Debugging Support**: Preview actual prompts and responses
- **Scalable**: Handles multiple models and complex workflows

📝 Example Use Cases

- **Cost Optimization**: Identify expensive nodes and optimize prompts
- **Usage Analytics**: Track token consumption across teams/projects
- **Budget Monitoring**: Set alerts based on cost thresholds
- **Performance Analysis**: Find slow-running LLM calls
- **Debugging**: Review actual inputs/outputs without logs
- **Compliance**: Audit AI usage across your organization

🚀 Quick Start

1. Import the workflow
2. Update the model prices (if needed)
3. Add monitoring to any workflow with the Execute Workflow node
4. View detailed cost breakdowns instantly

Note: Prices are configured per million tokens. The defaults include GPT-4, GPT-3.5, Claude, and other popular models. Add custom models as needed.
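To make the two dictionaries and the per-call cost math concrete, here is a sketch (the variable names follow the ones mentioned above, but the template's Code nodes may differ, and the prices are placeholders to replace with current rates):

```javascript
// Map raw model identifiers to canonical names.
const standardize_names_dic = {
  'gpt-4-0613': 'gpt-4',
  'gpt-4o-2024-08-06': 'gpt-4o',
  'claude-3-5-sonnet-20241022': 'claude-3.5-sonnet',
};

// USD per 1M tokens, split into input (prompt) and output (completion).
const model_price_dic = {
  'gpt-4':             { input: 30.0, output: 60.0 },
  'gpt-4o':            { input: 2.5,  output: 10.0 },
  'claude-3.5-sonnet': { input: 3.0,  output: 15.0 },
};

function callCost(rawModel, promptTokens, completionTokens) {
  const model = standardize_names_dic[rawModel] ?? rawModel;
  const price = model_price_dic[model];
  if (!price) throw new Error(`Undefined model: ${model}`); // triggers the error path
  return (promptTokens * price.input + completionTokens * price.output) / 1000000;
}

// e.g. 1,200 prompt + 350 completion tokens on gpt-4o costs about $0.0065
console.log(callCost('gpt-4o-2024-08-06', 1200, 350));
```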