by PDF Vector
Overview
Organizations dealing with high-volume document processing face challenges in efficiently handling diverse document types while maintaining quality and tracking performance metrics. This enterprise-grade workflow provides a scalable solution for batch processing documents, including PDFs, scanned documents, and images (JPG, PNG), with comprehensive analytics, error handling, and quality assurance.

What You Can Do
- Process thousands of documents in parallel batches efficiently
- Monitor performance metrics and success rates in real time
- Handle diverse document formats with automatic format detection
- Generate comprehensive analytics dashboards and reports
- Implement automated quality assurance and error handling

Who It's For
Large organizations, document processing centers, digital transformation teams, enterprise IT departments, and businesses that need to process thousands of documents reliably with detailed performance tracking and analytics.

The Problem It Solves
High-volume document processing without proper monitoring leads to bottlenecks, quality issues, and inefficient resource usage. Organizations struggle to track processing success rates, identify problematic document types, and optimize their workflows. This template provides enterprise-grade batch processing with comprehensive analytics and automated quality assurance.

Setup Instructions:
1. Configure Google Drive credentials for document folder access
2. Install the PDF Vector community node from the n8n marketplace
3. Configure PDF Vector API credentials with appropriate rate limits
4. Set up batch processing parameters (batch size, retry logic)
5. Configure quality thresholds and validation rules
6. Set up analytics dashboard and reporting preferences
7. Configure error handling and notification systems

Key Features:
- Parallel batch processing for maximum throughput
- Support for mixed document formats (PDFs, Word docs, images)
- OCR processing for handwritten and scanned documents
- Comprehensive analytics dashboard with success rates and performance metrics
- Automatic document prioritization based on size and complexity
- Intelligent error handling with automatic retry logic
- Quality assurance checks and validation
- Real-time processing monitoring and alerts

Customization Options:
- Configure custom document categories and processing rules
- Set up specific extraction templates for different document types
- Implement automated workflows for documents that fail quality checks
- Configure credit usage optimization to minimize costs
- Set up custom analytics and reporting dashboards
- Add integration with existing document management systems
- Configure automated notifications for processing completion or errors

Implementation Details:
The workflow uses intelligent batching to process documents efficiently while monitoring performance metrics in real time. It automatically handles different document formats, applies OCR when needed, and provides detailed analytics to help organizations optimize their document processing operations. The system includes sophisticated error recovery and quality assurance mechanisms.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
by vinci-king-01
Creative Asset Manager with ScrapeGraphAI Analysis and Brand Compliance

Target Audience
- Creative directors and design managers
- Marketing teams managing brand assets
- Digital asset management (DAM) administrators
- Brand managers ensuring compliance
- Content creators and designers
- Marketing operations teams
- Creative agencies managing client assets
- Brand compliance officers

Problem Statement
Managing creative assets manually is inefficient and error-prone, often leading to inconsistent branding, poor organization, and compliance issues. This template solves the challenge of automatically analyzing, organizing, and ensuring brand compliance for creative assets using AI-powered analysis and automated workflows.

How it Works
This workflow automatically processes uploaded creative assets using ScrapeGraphAI for intelligent analysis, generates comprehensive tags, checks brand compliance, organizes files systematically, and maintains a centralized dashboard for creative teams.

Key Components
- Asset Upload Trigger - Webhook endpoint that activates when new creative assets are uploaded
- ScrapeGraphAI Asset Analyzer - Uses AI to extract detailed information from visual assets
- Tag Generator - Creates comprehensive, searchable tags based on asset analysis
- Brand Compliance Checker - Evaluates assets against brand guidelines and standards
- Asset Organizer - Creates organized folder structures and standardized naming
- Creative Team Dashboard - Updates Google Sheets with organized asset information

Google Sheets Column Specifications
The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| asset_id | String | Unique asset identifier | "asset_1703123456789_abc123def" |
| name | String | Standardized filename | "image-social-media-2024-01-15T10-30-00.jpg" |
| path | String | Storage location path | "/creative-assets/2024/01/image/social-media" |
| asset_type | String | Type of creative asset | "image" |
| dimensions | String | Asset dimensions | "1920x1080" |
| file_format | String | File format | "jpg" |
| primary_colors | Array | Extracted color palette | ["#FF6B35", "#004E89"] |
| content_description | String | AI-generated content description | "Modern office workspace with laptop" |
| text_content | String | Any text visible in asset | "Welcome to our workspace" |
| style_elements | Array | Detected style characteristics | ["modern", "minimalist"] |
| generated_tags | Array | Comprehensive tag list | ["high-resolution", "brand-logo", "social-media"] |
| usage_context | String | Suggested usage context | "social-media" |
| brand_elements | Array | Detected brand elements | ["logo", "typography"] |
| compliance_score | Number | Brand compliance score (0-100) | 85 |
| compliance_status | String | Approval status | "approved-with-warnings" |
| compliance_issues | Array | List of compliance problems | ["Non-brand colors detected"] |
| upload_date | DateTime | Asset upload timestamp | "2024-01-15T10:30:00Z" |
| searchable_keywords | String | Search-optimized keywords | "image social-media modern brand-logo" |

Setup Instructions
Estimated setup time: 25-30 minutes

Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access
- File upload system or DAM integration
- Brand guidelines document (for compliance configuration)

Step-by-Step Configuration
1. Install Community Nodes
   Install the required community node:
   npm install n8n-nodes-scrapegraphai
2. Configure ScrapeGraphAI Credentials
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials
   - Enter your API key from ScrapeGraphAI dashboard
   - Test the connection to ensure it's working
3. Set up Google Sheets Connection
   - Add Google Sheets OAuth2 credentials
   - Grant necessary permissions for spreadsheet access
   - Create a new spreadsheet for creative asset management
   - Configure the sheet name (default: "Creative Assets Dashboard")
4. Configure Webhook Trigger
   - Set up the webhook endpoint for asset uploads
   - Configure the webhook URL in your file upload system
   - Ensure the asset_url parameter is passed in the webhook payload
   - Test webhook connectivity
5. Customize Brand Guidelines
   - Update the Brand Compliance Checker node with your brand colors (a hedged scoring sketch follows this description)
   - Configure approved file formats and size limits
   - Set required brand elements and fonts
   - Define resolution standards and quality requirements
6. Configure Asset Organization
   - Customize folder structure preferences
   - Set up naming conventions for different asset types
   - Configure metadata extraction preferences
   - Set up search optimization parameters
7. Test and Validate
   - Upload a test asset to trigger the workflow
   - Verify all analysis steps complete successfully
   - Check Google Sheets for proper data formatting
   - Validate brand compliance scoring

Workflow Customization Options

Modify Analysis Parameters
- Adjust ScrapeGraphAI prompts for specific asset types
- Customize tag generation algorithms
- Modify color analysis sensitivity
- Add industry-specific analysis criteria

Extend Brand Compliance
- Add more sophisticated brand guideline checks
- Implement automated correction suggestions
- Include legal compliance verification
- Add accessibility compliance checks

Customize Organization Structure
- Modify folder hierarchy based on team preferences
- Implement custom naming conventions
- Add version control and asset history
- Configure backup and archiving rules

Output Customization
- Add integration with DAM systems
- Implement asset approval workflows
- Create automated reporting and analytics
- Add team collaboration features

Use Cases
- **Brand Asset Management**: Automatically organize and tag brand assets
- **Compliance Monitoring**: Ensure all assets meet brand guidelines
- **Creative Team Collaboration**: Centralized asset management and sharing
- **Marketing Campaign Management**: Organize assets by campaign and context
- **Asset Discovery**: AI-powered search and recommendation system
- **Quality Control**: Automated quality and compliance checks

Important Notes
- Respect ScrapeGraphAI API rate limits and terms of service
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update brand guidelines in the compliance checker
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider data privacy and copyright compliance for creative assets
- Ensure proper backup and version control for important assets

Troubleshooting
Common Issues:
- ScrapeGraphAI connection errors: Verify API key and account status
- Webhook trigger failures: Check webhook URL and payload format
- Google Sheets permission errors: Check OAuth2 scope and permissions
- Asset analysis errors: Review the ScrapeGraphAI prompt configuration
- Brand compliance false positives: Adjust guideline parameters
- File organization issues: Check folder permissions and naming conventions

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Digital asset management best practices
- Brand compliance and governance guidelines
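To make the Brand Compliance Checker step more concrete, here is a minimal Code-node sketch of how an asset could be scored against a few simple rules. The field names mirror the dashboard columns above (primary_colors, file_format, dimensions), while the brand palette, approved formats, width threshold, and point deductions are placeholder assumptions to replace with your actual guidelines.

```javascript
// Hypothetical Brand Compliance Checker logic (Code node, "Run Once for All Items").
// Field names mirror the dashboard columns; rules and thresholds are placeholders.
const BRAND_COLORS = ['#FF6B35', '#004E89'];        // replace with your brand palette
const APPROVED_FORMATS = ['jpg', 'png', 'svg'];     // replace with your approved formats
const MIN_WIDTH = 1080;                             // minimum acceptable width in pixels

return $input.all().map(item => {
  const asset = item.json;
  const issues = [];
  let score = 100;

  // Every extracted color should match the brand palette.
  const offBrand = (asset.primary_colors || []).filter(c => !BRAND_COLORS.includes(c.toUpperCase()));
  if (offBrand.length) {
    issues.push(`Non-brand colors detected: ${offBrand.join(', ')}`);
    score -= 25;
  }

  // File format check.
  if (!APPROVED_FORMATS.includes((asset.file_format || '').toLowerCase())) {
    issues.push(`Unapproved file format: ${asset.file_format}`);
    score -= 25;
  }

  // Resolution check, assuming dimensions formatted like "1920x1080".
  const width = parseInt((asset.dimensions || '0x0').split('x')[0], 10);
  if (width < MIN_WIDTH) {
    issues.push(`Width below ${MIN_WIDTH}px`);
    score -= 20;
  }

  const compliance_score = Math.max(score, 0);
  const compliance_status =
    compliance_score >= 90 ? 'approved' :
    compliance_score >= 70 ? 'approved-with-warnings' : 'rejected';

  return { json: { ...asset, compliance_score, compliance_status, compliance_issues: issues } };
});
```

In the actual template the rules live in the Brand Compliance Checker node; this sketch only illustrates the shape of the logic.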
by vinci-king-01
Influencer Content Monitor with ScrapeGraphAI Analysis and ROI Tracking

Target Audience
- Marketing managers and brand managers
- Influencer marketing agencies
- Social media managers
- Digital marketing teams
- Brand partnerships coordinators
- Marketing analysts and strategists
- Campaign managers
- ROI and performance analysts

Problem Statement
Manual monitoring of influencer campaigns is time-consuming and often misses critical performance insights, brand mentions, and ROI calculations. This template solves the challenge of automatically tracking influencer content, analyzing engagement metrics, detecting brand mentions, and calculating campaign ROI using AI-powered analysis and automated workflows.

How it Works
This workflow automatically monitors influencer profiles and content using ScrapeGraphAI for intelligent analysis, tracks brand mentions and sponsored content, calculates performance metrics, and provides comprehensive ROI analysis for marketing campaigns.

Key Components
- Daily Schedule Trigger - Runs automatically every day at 9:00 AM to monitor influencer campaigns
- ScrapeGraphAI - Influencer Profiles - Uses AI to extract profile data and recent posts from Instagram
- Content Analyzer - Analyzes post content for engagement rates and quality scoring
- Brand Mention Detector - Identifies brand mentions and sponsored content indicators
- Campaign Performance Tracker - Tracks campaign metrics and KPIs
- Marketing ROI Calculator - Calculates return on investment for campaigns (a hedged sketch follows this description)

Data Analysis Specifications
The template analyzes and tracks the following metrics:

| Metric Category | Data Points | Description | Example |
|----------------|-------------|-------------|---------|
| Profile Data | Username, Followers, Following, Posts Count, Bio, Verification Status | Basic influencer profile information | "@influencer", "100K followers", "Verified" |
| Post Analysis | Post URL, Caption, Likes, Comments, Date, Hashtags, Mentions | Individual post performance data | "5,000 likes", "150 comments" |
| Engagement Metrics | Engagement Rate, Content Quality Score, Performance Tier | Calculated performance indicators | "3.2% engagement rate", "High performance" |
| Brand Detection | Brand Mentions, Sponsored Content, Mention Count | Brand collaboration tracking | "Nike mentioned", "Sponsored post detected" |
| Campaign Performance | Total Reach, Total Engagement, Average Engagement, Performance Score | Overall campaign effectiveness | "50K total reach", "85.5 performance score" |
| ROI Analysis | Total Investment, Estimated Value, ROI Percentage, Cost per Engagement | Financial performance metrics | "$2,500 investment", "125% ROI" |

Setup Instructions
Estimated setup time: 20-25 minutes

Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Instagram accounts to monitor (influencer usernames)
- Campaign budget and cost data for ROI calculations

Step-by-Step Configuration
1. Install Community Nodes
   Install the required community node:
   npm install n8n-nodes-scrapegraphai
2. Configure ScrapeGraphAI Credentials
   - Navigate to Credentials in your n8n instance
   - Add new ScrapeGraphAI API credentials
   - Enter your API key from ScrapeGraphAI dashboard
   - Test the connection to ensure it's working
3. Set up Schedule Trigger
   - Configure the daily schedule (default: 9:00 AM UTC)
   - Adjust timezone to match your business hours
   - Set appropriate frequency for your monitoring needs
4. Configure Influencer Monitoring
   - Update the websiteUrl parameter with target influencer usernames
   - Customize the user prompt to extract specific profile data
   - Set up monitoring for multiple influencers if needed
   - Configure brand keywords for mention detection
5. Customize Brand Detection
   - Update brand keywords in the Brand Mention Detector node
   - Add sponsored content indicators (#ad, #sponsored, etc.)
   - Configure brand mention sensitivity levels
   - Set up competitor brand monitoring
6. Configure ROI Calculations
   - Update cost estimates in the Marketing ROI Calculator
   - Set value per engagement and reach metrics
   - Configure campaign management costs
   - Adjust ROI calculation parameters
7. Test and Validate
   - Run the workflow manually with test data
   - Verify all analysis steps complete successfully
   - Check data accuracy and calculation precision
   - Validate ROI calculations with actual campaign data

Workflow Customization Options

Modify Monitoring Parameters
- Adjust monitoring frequency (hourly, daily, weekly)
- Add more social media platforms (TikTok, YouTube, etc.)
- Customize engagement rate calculations
- Modify content quality scoring algorithms

Extend Brand Detection
- Add more sophisticated brand mention detection
- Implement sentiment analysis for brand mentions
- Include competitor brand monitoring
- Add automated alert systems for brand mentions

Customize Performance Tracking
- Modify performance tier thresholds
- Add more detailed engagement metrics
- Implement trend analysis and forecasting
- Include audience demographic analysis

Output Customization
- Add integration with marketing dashboards
- Implement automated reporting systems
- Create alert systems for performance drops
- Add campaign comparison features

Use Cases
- **Influencer Campaign Monitoring**: Track performance of influencer partnerships
- **Brand Mention Detection**: Monitor brand mentions across influencer content
- **ROI Analysis**: Calculate return on investment for marketing campaigns
- **Competitive Intelligence**: Monitor competitor brand mentions
- **Performance Optimization**: Identify top-performing content and influencers
- **Campaign Reporting**: Generate automated reports for stakeholders

Important Notes
- Respect Instagram's terms of service and rate limits
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update brand keywords and detection parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider data privacy and compliance requirements
- Ensure accurate cost data for ROI calculations

Troubleshooting
Common Issues:
- ScrapeGraphAI connection errors: Verify API key and account status
- Instagram access issues: Check account accessibility and rate limits
- Brand detection false positives: Adjust keyword sensitivity
- ROI calculation errors: Verify cost and value parameters
- Schedule trigger failures: Check timezone and cron expression
- Data parsing errors: Review the Code node's JavaScript logic

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Instagram API documentation and best practices
- Influencer marketing analytics best practices
- ROI calculation methodologies and standards
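As a rough illustration of the Content Analyzer and Marketing ROI Calculator steps, here is a minimal Code-node sketch that computes an engagement rate and a simple ROI estimate for one profile. The input field names (followers, posts, likes, comments) and the cost/value constants are assumptions; align them with your ScrapeGraphAI prompt output and your real campaign figures.

```javascript
// Hypothetical Code node: compute engagement rate and a simple ROI estimate
// for one monitored profile. Field names and cost figures are illustrative.
const profile = $input.first().json;          // e.g. { username, followers: 100000, posts: [...] }
const posts = profile.posts || [];

const totalEngagement = posts.reduce((sum, p) => sum + (p.likes || 0) + (p.comments || 0), 0);
const avgEngagement = posts.length ? totalEngagement / posts.length : 0;
// Engagement rate: average interactions per post relative to follower count.
const engagementRate = profile.followers ? (avgEngagement / profile.followers) * 100 : 0;

// ROI: compare estimated media value against what the campaign cost.
const VALUE_PER_ENGAGEMENT = 0.05;            // assumed USD value of one like/comment
const CAMPAIGN_COST = 2500;                   // assumed total investment in USD
const estimatedValue = totalEngagement * VALUE_PER_ENGAGEMENT;
const roiPercent = CAMPAIGN_COST ? ((estimatedValue - CAMPAIGN_COST) / CAMPAIGN_COST) * 100 : 0;

return [{
  json: {
    username: profile.username,
    total_engagement: totalEngagement,
    engagement_rate: Number(engagementRate.toFixed(2)),
    estimated_value: Number(estimatedValue.toFixed(2)),
    roi_percentage: Number(roiPercent.toFixed(1)),
    cost_per_engagement: totalEngagement ? Number((CAMPAIGN_COST / totalEngagement).toFixed(3)) : null,
  },
}];
```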
by Dariusz Koryto
Get automated weather updates delivered directly to your Telegram chat at scheduled intervals. This workflow fetches current weather data from OpenWeatherMap and sends formatted weather reports via a Telegram bot.

Use Cases
- Daily morning weather briefings
- Regular weather monitoring for outdoor activities
- Automated weather alerts for specific locations
- Personal weather assistant for travel planning

Prerequisites
Before setting up this workflow, ensure you have:
- An OpenWeatherMap API account (free tier available)
- A Telegram bot token
- Your Telegram chat ID
- n8n instance (cloud or self-hosted)

Setup Instructions

Step 1: Create OpenWeatherMap Account
1. Go to OpenWeatherMap and sign up for a free account
2. Navigate to the API keys section in your account
3. Copy your API key (you'll need this for the workflow configuration)

Step 2: Create Telegram Bot
1. Open Telegram and search for @BotFather
2. Start a chat and use the /newbot command
3. Follow the prompts to create your bot and get the bot token
4. Save the bot token securely

Step 3: Get Your Telegram Chat ID
1. Start a conversation with your newly created bot
2. Send any message to the bot
3. Visit https://api.telegram.org/bot<YourBOTToken>/getUpdates in your browser
4. Look for your chat ID in the response (it will be a number like 123456789)

Step 4: Configure the Workflow
Import this workflow into your n8n instance. Configure each node with your credentials:

Schedule Trigger Node
- Set your preferred schedule (default: daily at 8:00 AM)
- Use cron expression format (e.g., 0 8 * * * for 8 AM daily)

Get Weather Node
- Add your OpenWeatherMap credentials
- Update the cityName parameter to your desired location
- Format: "CityName,CountryCode" (e.g., "London,UK")

Send a text message Node
- Add your Telegram bot credentials (bot token)
- Replace XXXXXXX in the chatId field with your actual chat ID

Customization Options

Location Settings
In the "Get Weather" node, modify the cityName parameter to change the location. You can specify:
- City name only: "Paris"
- City with country: "Paris,FR"
- City with state and country: "Miami,FL,US"

Schedule Frequency
In the "Schedule Trigger" node, adjust the cron expression:
- Every 6 hours: 0 */6 * * *
- Twice daily (8 AM & 6 PM): 0 8,18 * * *
- Weekly on Mondays at 9 AM: 0 9 * * 1

Message Format
In the "Format Weather" node, you can customize the message template by modifying the message variable in the function code (a hedged sketch of this function appears at the end of this description).
Current format includes:
- Current temperature with "feels like" temperature
- Min/max temperatures for the day
- Weather description and precipitation
- Wind speed and direction
- Cloud coverage percentage
- Sunrise and sunset times

Language Support
In the "Get Weather" node, change the language parameter to get weather descriptions in different languages:
- English: "en"
- Spanish: "es"
- French: "fr"
- German: "de"
- Polish: "pl"

Troubleshooting

Common Issues

Weather data not updating:
- Verify your OpenWeatherMap API key is valid and active
- Check if you've exceeded your API rate limits
- Ensure the city name format is correct

Messages not being sent:
- Confirm your Telegram bot token is correct
- Verify the chat ID is accurate (should be a number, not a username)
- Make sure you've started a conversation with your bot

Workflow not triggering:
- Check if the workflow is activated (toggle switch should be ON)
- Verify the cron expression syntax is correct
- Ensure your n8n instance is running continuously

Testing the Workflow
- Use the "Test workflow" button to run manually
- Check each node's output for errors
- Verify the final message format in Telegram

Node Descriptions

Schedule Trigger
Automatically starts the workflow based on a cron schedule. Runs at specified intervals to fetch fresh weather data.

Get Weather
Connects to the OpenWeatherMap API to retrieve current weather conditions for the specified location.

Format Weather
Processes the raw weather data and creates a user-friendly message with emojis and organized information.

Send a text message
Delivers the formatted weather report to your Telegram chat using the configured bot.

Additional Features
You can extend this workflow by:
- Adding weather alerts for specific conditions (temperature thresholds, rain warnings)
- Including weather forecasts for multiple days
- Sending reports to multiple chat recipients
- Adding location-based emoji selection
- Integrating with other notification channels (email, Slack, Discord)

Security Notes
- Keep your API keys and bot tokens secure
- Don't share your chat ID publicly
- Consider using n8n's credential system for storing sensitive information
- Regularly rotate your API keys for better security

Special thanks to Arkadiusz, the only person who supports me in the n8n mission to make automation great again.
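For reference, here is a minimal sketch of what the "Format Weather" function could look like, assuming metric units and the standard OpenWeatherMap current-weather response fields (main, weather, wind, clouds, sys). The actual node in the template may differ, so treat this as a starting point for customizing the message variable.

```javascript
// Hypothetical "Format Weather" Code node: turn the OpenWeatherMap response
// into a readable Telegram message. Adjust field access to match your node's output.
const w = $input.first().json;

// Sunrise/sunset come back as Unix timestamps in seconds.
const toTime = ts => new Date(ts * 1000).toLocaleTimeString('en-GB', { hour: '2-digit', minute: '2-digit' });

const message = [
  `Weather for ${w.name}`,
  `Now: ${Math.round(w.main.temp)}°C (feels like ${Math.round(w.main.feels_like)}°C)`,
  `Min/Max: ${Math.round(w.main.temp_min)}°C / ${Math.round(w.main.temp_max)}°C`,
  `Conditions: ${w.weather[0].description}${w.rain ? `, rain ${w.rain['1h']} mm/h` : ''}`,
  `Wind: ${w.wind.speed} m/s at ${w.wind.deg}°`,
  `Clouds: ${w.clouds.all}%`,
  `Sunrise: ${toTime(w.sys.sunrise)}  Sunset: ${toTime(w.sys.sunset)}`,
].join('\n');

return [{ json: { message } }];
```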
by Milan Vasarhelyi - SmoothWork
Video Introduction
Want to automate your inbox or need a custom workflow? Book a Call | DM me on LinkedIn

Overview
This workflow automates sending personalized SMS messages directly from a Google Sheet using Twilio. Simply update a row's status to "To send" and the workflow automatically sends the text message, then updates the status to "Success" or "Error" based on delivery results. Perfect for event reminders, bulk notifications, appointment confirmations, or any scenario where you need to send customized messages to multiple recipients.

Key Features
- **Simple trigger mechanism**: Change the status column to "To send" to queue messages
- **Personalization support**: Use [First Name] and [Last Name] placeholders in message templates (see the sketch at the end of this description)
- **Automatic status tracking**: The workflow updates your spreadsheet with delivery results
- **Error handling**: Failed deliveries are clearly marked, making it easy to identify issues like invalid phone numbers
- **Runs every minute**: The workflow polls your sheet continuously when active

Setup Instructions

Step 1: Copy the Template Spreadsheet
Make a copy of the Google Sheets template by going to File → Make a copy. You must use your own copy so the workflow has permission to update status values.

Step 2: Connect Your Accounts
- Google Sheets: Add your Google account credentials to the 'Monitor Google Sheet for SMS Queue' trigger node
- Twilio: Sign up for a free Twilio account (trial works for testing). From your Twilio dashboard, get your Account SID, Auth Token, and Twilio phone number, then add these credentials to the 'Send SMS via Twilio' node

Step 3: Configure the Workflow
In the Config node, update:
- sheet_url: Paste the URL of your copied Google Sheet
- from_number: Enter your Twilio phone number (include country code, e.g., +1234567890)

Step 4: Activate and Test
Activate the workflow using the toggle in the top right corner. Add a row to your sheet with the required information (ID, First Name, Phone Number, Message Template) and set the Status to "To send". Within one minute, the workflow will process the message and update the status accordingly.
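If you want to see how the placeholder substitution could work, here is a minimal Code-node sketch that fills [First Name] and [Last Name] in each queued row. The column names (Message Template, First Name, Last Name, Phone Number) follow the template spreadsheet described above but should be checked against your own copy.

```javascript
// Hypothetical Code node: build the final SMS text for each row read from the sheet.
// Column names are assumed to match the template spreadsheet.
return $input.all().map(item => {
  const row = item.json;
  const text = (row['Message Template'] || '')
    .replaceAll('[First Name]', row['First Name'] || '')
    .replaceAll('[Last Name]', row['Last Name'] || '');
  return { json: { ...row, text, to: row['Phone Number'] } };
});
```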
by Sparsh From Automation Jinn
Automated SEO Data Engine using DataForSEO & Airtable

This workflow automatically pulls SERP rankings, competitor keywords, and related keyword ideas from DataForSEO and stores structured results in Airtable, making SEO tracking and keyword research streamlined and automated.

What this automation does

| Step | Component | Purpose |
|------|-----------|---------|
| 1 | Trigger (Manual: "Execute workflow") | Starts the workflow on demand; optionally replaceable with a schedule or webhook. |
| 2 | Read seed keywords from Airtable (SERP Keywords table) | Fetches the list of keywords for which to track SERP. |
| 3 | Post SERP task to DataForSEO API | Requests Google organic SERP results (depth up to 10) for each keyword. |
| 4 | Wait + Poll for results (after ~1 min) | Gives DataForSEO time to process, then retrieves the completed task results. |
| 5 | Parse & store SERP results into Airtable (SERP rankings table) | Records rank, URL, domain, title, description, breadcrumb, etc. for each result. |
| 6 | Read competitor list from Airtable (Competitor Research table) | Fetches competitors (domains/sites) marked for keyword research. |
| 7 | Post competitor-site keywords task to DataForSEO | Fetches keywords used by competitor sites. |
| 8 | Wait + Poll + Store competitor keywords into Airtable (Competitor Keywords Research) | Captures keyword, competition level, search volume, CPC, monthly volume trends. |
| 9 | Aggregate seed keywords and request related keywords via DataForSEO | Retrieves related/similar keyword ideas for the seed list (keyword expansion). |
| 10 | Store related keywords into Airtable (Similar Keywords table) | Saves keyword data for long-tail/expansion analysis. |

Key Integrations & Tools
- **n8n** - Workflow automation and orchestration
- **Airtable** - Storage for seed keywords, competitor list, and all result tables (SERP Rankings, Competitor Keywords, Similar Keywords)
- **DataForSEO API** - For SERP data, competitor-site keywords, and related keyword suggestions
- Core n8n nodes: Trigger, HTTP Request, Wait, Split Out, Aggregate, Airtable (search & create)

Data Output / Stored Fields

SERP Rankings
- type, rank_group, rank_absolute, page, domain, title, description, url, breadcrumb
- Linked to original seed keyword via SERP Keywords reference

Competitor Keywords & Similar Keywords
- Keyword
- Competition, Competition_Index
- Search_Volume, CPC, Low_Top_Of_Page_Bid, High_Top_Of_Page_Bid (if available)
- Monthly search-volume fields: Jan_2025, Feb_2025, ..., Dec_2025 (mapped from the API's monthly_searches)
- For competitor keywords: linked to competitor (company/domain)
- For similar keywords: linked to seed keyword

Important Notes
- **Month-volume mapping:** Ensure the mapping from the API's monthly_searches to the month fields is correct; wrong indices will mislabel month data (see the sketch below).
- **Fixed wait time:** The current 1-minute wait may not always suffice; for large workloads or slow API responses, increase the wait or implement polling/backoff logic.
- **No deduplication:** Running repeatedly may produce duplicate Airtable records. Consider adding search-or-update logic to avoid duplicates.
- **Rate limits / quotas:** Airtable and DataForSEO have limits; batch carefully, throttle requests, or add spacing to avoid hitting them.
- **Credentials security:** Store Airtable and DataForSEO API credentials securely in n8n's credentials manager; avoid embedding tokens directly in workflow JSON.
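To avoid the month-mapping pitfall mentioned above, one option is to map by the year and month fields of each monthly_searches entry instead of by array position. A minimal Code-node sketch, assuming each entry looks like { year, month, search_volume }, might look like this:

```javascript
// Hypothetical Code node: map DataForSEO's monthly_searches array onto the
// Airtable month columns by year/month rather than by array index, so the
// ordering of the API response cannot mislabel the data.
const MONTHS = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];

return $input.all().map(item => {
  const kw = item.json;
  const fields = {
    Keyword: kw.keyword,
    Search_Volume: kw.search_volume,
    CPC: kw.cpc,
  };
  for (const m of kw.monthly_searches || []) {
    fields[`${MONTHS[m.month - 1]}_${m.year}`] = m.search_volume;   // e.g. "Jan_2025"
  }
  return { json: fields };
});
```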
Why this Workflow is Useful
- Fully automates SERP tracking and competitor keyword research; no manual work needed after setup
- Maintains structured, historical data in Airtable, ideal for tracking rank changes, discovering competitor moves, and keyword expansion over time
- Great for SEO teams, agencies, content owners, or anyone needing systematic keyword intelligence and monitoring

Recommended Next Steps
- Replace manual trigger with a Schedule Trigger (daily/weekly) for automated runs
- Add deduplication (upsert) logic to prevent duplicate records and keep Airtable clean
- Improve robustness: add retry logic for API failures, rate-limit handling, and error notifications (Slack/email)
- Add logging of API response data (task IDs, raw responses) for debugging and audit trails
- (Optional) Build a reporting dashboard (Airtable Interface / BI tool) to visualise rank trends, keyword growth, and competitor comparisons

Usage / Setup Checklist
1. Configure Airtable base / tables: SERP Keywords, Competitor Research, SERP rankings, Competitor Keywords Research, Similar Keywords.
2. Add credentials in n8n: Airtable API token; DataForSEO API credentials (HTTP Basic / Header auth).
3. Import this workflow JSON into your n8n instance. Update any base/table/field IDs if different.
4. (Optional) Replace Manual Trigger with Schedule Trigger, enable workflow.
5. Run once with a small seed list to verify outputs, schema, and month-volume mapping.
6. Enable periodic runs and monitor for rate limits or API errors.
by Pauline
This workflow sends you a Slack alert when one of your n8n workflows runs into an error.

- **Error Trigger**: launches this workflow whenever one of your active workflows fails
- **Slack node**: sends you a customized message so you are alerted and can check the error (see the example message below)

Note: you don't have to activate this workflow for it to be effective; it only needs to be set as the error workflow in the settings of the workflows you want to monitor.
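As an illustration, the Slack node's message field could be built from the data the Error Trigger provides. The expressions below use field names the Error Trigger typically exposes (workflow.name, execution.lastNodeExecuted, execution.error.message, execution.url); verify them against a real error payload in your instance before relying on them.

```
Workflow "{{ $json.workflow.name }}" failed
Last node: {{ $json.execution.lastNodeExecuted }}
Error: {{ $json.execution.error.message }}
Execution: {{ $json.execution.url }}
```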
by Samir Saci
**Tags**: AI Agent, MCP Server, n8n API, Monitoring, Debugging, Workflow Analytics, Automation

Context
Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. This workflow is part of my latest project: an AI assistant that automatically analyses n8n workflow executions, detects failures, and identifies root causes through natural conversation with Claude Desktop.

> Turn your automation logs into intelligent conversations with an AI that understands your workflows.

The idea is to use Claude Desktop to help monitor and debug your workflows deployed in production. The workflow shared here is part of the setup.

For business inquiries, you can find me on LinkedIn.

Who is this template for?
This template is designed for automation engineers, data professionals, and AI enthusiasts who manage multiple workflows in n8n and want a smarter way to track errors or performance without manually browsing execution logs. If you've ever discovered a failed workflow hours after it happened, this is for you.

What does this workflow do?
This workflow acts as the bridge between your n8n instance and the Claude MCP Server. It exposes three main routes that can be triggered via a webhook:
- get_active_workflows: fetches all currently active workflows
- get_workflow_executions: retrieves the latest executions and calculates health KPIs
- get_execution_details: extracts detailed information about failed executions for debugging

Each request is automatically routed and processed, providing Claude with structured execution data for real-time analysis.

How does it fit in the overall setup?
Here's the complete architecture:

Claude Desktop → MCP Server → n8n Monitor Webhook → n8n API

The MCP Server (Python-based) communicates with your n8n instance through this workflow. The Claude Desktop app can then query workflow health, execution logs, and error patterns using natural language. The n8n workflow aggregates, cleans, and returns the relevant metrics (failures, success rates, timing, alerts).

The full concept and architecture are explained in my article published on my blog: Deploy your AI Assistant to Monitor and Debug n8n Workflows using Claude and MCP

Tutorial
The full setup tutorial (with source code and demo) is available on YouTube.

How does it work?
- The Webhook Trigger receives the MCP server requests
- A Switch node routes actions based on the "action" parameter
- HTTP Request nodes fetch execution and workflow data via the n8n API
- A Code node calculates KPIs (success/failure rates, timing, alerts); a hedged sketch follows below
- The processed results are returned as JSON for Claude to interpret

Example use cases
Once connected, you can ask Claude questions like:
- "Show me all workflows that failed in the last 25 executions."
- "Why is my Bangkok Meetup Scraper workflow failing?"
- "Give me a health report of my n8n instance."

Claude will reply with structured insights, including failure patterns, node diagnostics, and health status indicators (green/yellow/red).

What do I need to get started?
You'll need:
- A self-hosted n8n instance
- The **Claude Desktop** app installed
- The MCP server source code (shared in the tutorial description)
- The webhook URL from this workflow, configured in your .env file

Follow the tutorial for more details, and don't hesitate to leave your questions in the comment section.
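To give an idea of the KPI calculation, here is a minimal Code-node sketch that turns a list of executions returned by the n8n API into a small health summary. It assumes each execution record carries status, startedAt, and stoppedAt fields and that the API response wraps them in a data array; adjust it to your n8n version and to the KPIs you actually want Claude to see.

```javascript
// Hypothetical Code node: summarize /executions results into health KPIs.
const executions = $input.first().json.data || [];

const failed = executions.filter(e => e.status === 'error');
const succeeded = executions.filter(e => e.status === 'success');
const durations = executions
  .filter(e => e.startedAt && e.stoppedAt)
  .map(e => (new Date(e.stoppedAt) - new Date(e.startedAt)) / 1000);

const total = executions.length;
const failureRate = total ? (failed.length / total) * 100 : 0;

return [{
  json: {
    total_executions: total,
    success_count: succeeded.length,
    failure_count: failed.length,
    failure_rate_pct: Number(failureRate.toFixed(1)),
    avg_duration_sec: durations.length
      ? Number((durations.reduce((a, b) => a + b, 0) / durations.length).toFixed(2))
      : null,
    health: failureRate === 0 ? 'green' : failureRate < 20 ? 'yellow' : 'red',
  },
}];
```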
Next Steps
Use the sticky notes inside the workflow to:
- Replace <YOUR_N8N_INSTANCE> with your own URL
- Test the webhook routes individually using the "Execute Workflow" button
- Connect the MCP server and Claude Desktop to start monitoring

This template was built using n8n v.116.2
Submitted: November 2025
by Elodie Tasia
Create centralized, structured logs directly from your n8n workflows, using Supabase as your scalable log database. Whether you're debugging a workflow, monitoring execution status, or tracking error events, this template makes it easy to log messages in a consistent, structured format inspired by Log4j2 levels (DEBUG, INFO, WARN, ERROR, FATAL). You'll get a reusable sub-workflow that lets you log any message with optional metadata, tied to a workflow execution and a specific node.

What this template does
Provides a sub-workflow that inserts log entries into Supabase. Each log entry supports the following fields:
- workflow_name: your n8n workflow identifier
- node_name: last executed node
- execution_id: n8n execution ID for correlation
- log_level: one of DEBUG, INFO, WARN, ERROR, FATAL
- message: textual message for the log
- metadata: optional JSON metadata (flexible format)

Comes with examples for different log levels: easily call the sub-workflow from any step with an Execute Workflow node and pass dynamic parameters.

Use Cases
- Debug complex workflows without relying on internal n8n logs.
- Catch and trace errors with contextual metadata.
- Integrate logs into external dashboards or monitoring tools via Supabase SQL or APIs.
- Analyze logs by level, time, or workflow.

Requirements
To use this template, you'll need:
- A Supabase project with:
  - A log_level_type enum
  - A logs table matching the expected structure
- A service role key or Supabase credentials available in n8n.

The table schema and SQL scripts are given in the template file.

How to Use This Template
1. Clone the sub-workflow into your n8n instance.
2. Set up Supabase credentials (in the Supabase node).
3. Call the sub-workflow using the Execute Workflow node.
4. Provide input values like:

   {
     "workflow_name": "sync_crm_to_airtable",
     "execution_id": {{$execution.id}},
     "node_name": "Airtable Insert",
     "log_level": "INFO",
     "message": "New contact pushed to Airtable successfully",
     "metadata": { "recordId": "rec123", "fields": ["email", "firstName"] }
   }

5. Repeat anywhere you need to log custom events.
by Amir Safavi-Naini
LLM Cost Monitor & Usage Tracker for n8n

What This Workflow Does
This workflow provides comprehensive monitoring and cost tracking for all LLM/AI agent usage across your n8n workflows. It extracts detailed token usage data from any workflow execution and calculates precise costs based on current model pricing.

The Problem It Solves
When running LLM nodes in n8n workflows, the token usage and intermediate data are not directly accessible within the same workflow. This monitoring workflow bridges that gap by:
- Retrieving execution data using the execution ID
- Extracting all LLM usage from any nested structure
- Calculating costs with customizable pricing
- Providing detailed analytics per node and model

WARNING: it only works after the full execution of the workflow (i.e., you can't get this data before all tasks in the workflow have completed).

Setup Instructions

Prerequisites
- Experience required: basic familiarity with n8n LLM nodes and AI agents
- Agent configuration: in your monitored workflows, go to the agent settings and enable "Return Intermediate Steps"
- To retrieve execution data, you need to set up the n8n API in your instance (also available on the free version)

Installation Steps
1. Import this monitoring workflow into your n8n instance
2. Go to Settings, select n8n API in the left bar, and define an API key. You can then add this as the credential for your "Get an Execution" node
3. Configure your model name mappings in the "Standardize Names" node
4. Update model pricing in the "Model Prices" node (prices per 1M tokens)

To monitor a workflow:
1. Add an "Execute Workflow" node at the end of your target workflow
2. Select this monitoring workflow
3. Important: turn OFF "Wait For Sub-Workflow Completion"
4. Pass the execution ID as input

Customization

When You See Errors
If the workflow enters the error path, it means an undefined model was detected.
Simply:
1. Add the model name to the standardize_names_dic
2. Add its pricing to the model_price_dic
3. Re-run the workflow

Configurable Elements
- **Model Name Mapping**: Standardize different model name variations (e.g., "gpt-4-0613" → "gpt-4")
- **Pricing Dictionary**: Set costs per million tokens for input/output (a hedged cost-calculation sketch follows at the end of this description)
- **Extraction Depth**: Captures tokens from any nesting level automatically

Output Data

Per LLM Call
- **Cost Breakdown**: Prompt, completion, and total costs in USD
- **Token Metrics**: Prompt tokens, completion tokens, total tokens
- **Performance**: Execution time, start time, finish reason
- **Content Preview**: First 100 chars of input/output for debugging
- **Model Parameters**: Temperature, max tokens, timeout, retry count
- **Execution Context**: Workflow name, node name, execution status
- **Flow Tracking**: Previous nodes chain

Summary Statistics
- Total executions and costs
- Breakdown by model type
- Breakdown by node
- Average cost per call
- Total execution time

Key Benefits
- **No External Dependencies**: Everything runs within n8n
- **Universal Compatibility**: Works with any workflow structure
- **Automatic Detection**: Finds LLM usage regardless of nesting
- **Real-time Monitoring**: Track costs as workflows execute
- **Debugging Support**: Preview actual prompts and responses
- **Scalable**: Handles multiple models and complex workflows

Example Use Cases
- **Cost Optimization**: Identify expensive nodes and optimize prompts
- **Usage Analytics**: Track token consumption across teams/projects
- **Budget Monitoring**: Set alerts based on cost thresholds
- **Performance Analysis**: Find slow-running LLM calls
- **Debugging**: Review actual inputs/outputs without logs
- **Compliance**: Audit AI usage across your organization

Quick Start
1. Import workflow
2. Update model prices (if needed)
3. Add monitoring to any workflow with the Execute Workflow node
4. View detailed cost breakdowns instantly

Note: Prices are configured per million tokens. Default includes GPT-4, GPT-3.5, Claude, and other popular models. Add custom models as needed.
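For illustration, here is a minimal sketch of the cost calculation the workflow performs, using dictionaries shaped like the standardize_names_dic and model_price_dic mentioned above. The model names, prices, and input field names below are placeholders, not the template's actual pricing data.

```javascript
// Hypothetical Code node: compute the cost of one LLM call from its token usage.
const standardize_names_dic = { 'gpt-4-0613': 'gpt-4' };   // raw name -> canonical name
const model_price_dic = {                                   // USD per 1M tokens (placeholders)
  'gpt-4': { input: 30, output: 60 },
};

const call = $input.first().json;   // e.g. { model: 'gpt-4-0613', promptTokens: 1200, completionTokens: 350 }
const model = standardize_names_dic[call.model] || call.model;
const price = model_price_dic[model];
if (!price) {
  // Mirrors the workflow's error path: undefined model -> add it to both dictionaries.
  throw new Error(`No pricing defined for model "${model}"`);
}

const promptCost = (call.promptTokens / 1_000_000) * price.input;
const completionCost = (call.completionTokens / 1_000_000) * price.output;

return [{
  json: {
    model,
    prompt_tokens: call.promptTokens,
    completion_tokens: call.completionTokens,
    total_tokens: call.promptTokens + call.completionTokens,
    prompt_cost_usd: Number(promptCost.toFixed(6)),
    completion_cost_usd: Number(completionCost.toFixed(6)),
    total_cost_usd: Number((promptCost + completionCost).toFixed(6)),
  },
}];
```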
by Samir Saci
**Tags**: Supply Chain, Inventory Management, ABC Analysis, Pareto Principle, Demand Variability, Automation, Google Sheets

Context
Hi! I'm Samir, a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies optimise inventory and logistics operations by combining data analytics and workflow automation. This workflow is part of our inventory optimisation toolkit, allowing businesses to perform ABC classification and Pareto analysis directly from their transactional sales data.

> Automate inventory segmentation with n8n!

For business inquiries, feel free to connect with me on LinkedIn.

Who is this template for?
This workflow is designed for supply chain analysts, demand planners, or inventory managers who want to:
- Identify their top-performing items (Pareto 80/20 principle)
- Classify products into ABC categories based on sales contribution
- Evaluate demand variability (XYZ classification support)

Imagine you have a Google Sheet where daily sales transactions are stored: the workflow aggregates sales by item, calculates cumulative contribution, and assigns A, B, or C classes. It also computes the mean, standard deviation, and coefficient of variation (CV) to highlight demand volatility.

How does it work?
This workflow automates the process of ABC & Pareto analysis from raw sales data:
- Google Sheets input provides daily transactional sales
- Aggregation & code nodes compute sales, turnover, and cumulative shares
- ABC class mapping assigns items into A/B/C buckets (a hedged sketch follows at the end of this description)
- Demand variability metrics (XYZ) are calculated
- Results are appended into dedicated Google Sheets tabs for reporting

Watch My Tutorial

Steps:
1. Load daily sales records from Google Sheets
2. Filter out items with zero sales
3. Aggregate sales by store, item, and day
4. Perform Pareto analysis to calculate cumulative turnover share
5. Compute demand variability (mean, stdev, CV)
6. Assign ABC classes based on cumulative share thresholds
7. Append results into ABC XYZ and Pareto output sheets

What do I need to get started?
You'll need:
- A Google Sheet with sales transactions (date, item, quantity, turnover), available here: Test Sheet
- A Google Sheets account connected in n8n
- Basic knowledge of inventory analysis (ABC/XYZ)

Next Steps
Use the sticky notes in the n8n canvas to:
- Add your Google Sheets credentials
- Replace the Sheet ID with your own sales dataset
- Run the workflow and check the output tabs: ABC XYZ, Pareto, and Store Sales

This template was built using n8n v1.107.3
Submitted: September 15, 2025
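As a rough sketch of the Pareto/ABC logic, the Code node could look like the following, assuming items have already been aggregated per item with a turnover value and a dailySales array. The 80%/95% cumulative-share thresholds are the conventional defaults; adjust them to your own classification rules.

```javascript
// Hypothetical Code node: Pareto / ABC classification plus demand variability (CV).
const records = $input.all().map(i => i.json);   // e.g. [{ item: 'SKU-1', turnover: 1200, dailySales: [...] }]
const totalTurnover = records.reduce((s, it) => s + it.turnover, 0);

// Sort descending by turnover, then accumulate share to assign A/B/C.
const sorted = [...records].sort((a, b) => b.turnover - a.turnover);
let cumulative = 0;

return sorted.map(it => {
  cumulative += it.turnover;
  const cumulativeShare = totalTurnover ? cumulative / totalTurnover : 0;
  const abcClass = cumulativeShare <= 0.8 ? 'A' : cumulativeShare <= 0.95 ? 'B' : 'C';

  // Demand variability (XYZ support): coefficient of variation of daily sales.
  const sales = it.dailySales || [];
  const mean = sales.length ? sales.reduce((a, b) => a + b, 0) / sales.length : 0;
  const variance = sales.length ? sales.reduce((a, b) => a + (b - mean) ** 2, 0) / sales.length : 0;
  const cv = mean ? Math.sqrt(variance) / mean : null;

  return {
    json: {
      item: it.item,
      turnover: it.turnover,
      cumulative_share: Number(cumulativeShare.toFixed(4)),
      abc_class: abcClass,
      demand_mean: Number(mean.toFixed(2)),
      demand_cv: cv === null ? null : Number(cv.toFixed(2)),
    },
  };
});
```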
by Luis Hernandez
GLPI Pending Tickets Notification to Microsoft Teams

Overview
Automate daily notifications for pending GLPI tickets directly to Microsoft Teams. Never miss critical support cases with this workflow that monitors assigned tickets and sends personal alerts.

How It Works
1. Connect to GLPI - Authenticates and searches for your assigned tickets (a hedged sketch of the API calls follows this description)
2. Filter Results - Finds tickets in "In Progress" status within your entity
3. Send Notifications - Delivers formatted alerts to your Teams chat
4. Clean Up - Properly closes the GLPI session for security

What Gets Monitored
- Tickets assigned to a specific technician (configurable)
- Status: "In Progress/Assigned"
- Entity: your organization (customizable)
- Date range: tickets after a specified date

Key Benefits
- Never Miss Deadlines - Daily automated reminders
- Personal Focus - Only your assigned tickets
- Time Savings - Eliminates manual checking (15-30 min daily)
- Rich Details - Shows ticket title, ID, and due date

Setup Steps
Time Required: ~30 minutes
1. Import Template - Add the workflow to your n8n instance
2. Configure GLPI - Set server URL, credentials, and app token
3. Set Technician ID - Update to your GLPI user ID
4. Connect Teams - Link your Microsoft Teams account
5. Customize Filters - Adjust entity name and date range
6. Test & Schedule - Verify notifications and set the daily trigger

Easy Customization
- Change the technician ID for different users
- Adjust the notification schedule (default: 8 AM daily)
- Modify entity filters for your organization
- Add multiple technicians by duplicating the workflow

Prerequisites
- GLPI instance with API enabled
- GLPI user account with ticket read permissions
- Microsoft Teams account (basic license)
- n8n with Microsoft Teams integration

Perfect for support technicians who want automated reminders about their pending GLPI tickets without manual daily checks.
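For orientation, here is a rough sketch of the three GLPI REST calls the workflow performs (in the template these are HTTP Request nodes, not code). The server URL, tokens, and especially the search field IDs are placeholders; GLPI search field numbers vary by installation, so check them against your own instance's search options.

```javascript
// Rough sketch of the GLPI session lifecycle: initSession -> search/Ticket -> killSession.
const GLPI = 'https://glpi.example.com/apirest.php';   // assumed server URL
const APP_TOKEN = 'your-app-token';
const USER_TOKEN = 'your-user-token';

async function run() {
  // 1. Open a session and get a session token.
  const init = await fetch(`${GLPI}/initSession`, {
    headers: { 'App-Token': APP_TOKEN, 'Authorization': `user_token ${USER_TOKEN}` },
  }).then(r => r.json());
  const session = init.session_token;

  // 2. Search tickets assigned to a technician with a given status.
  //    Field IDs below are placeholders to replace with your instance's values.
  const params = new URLSearchParams({
    'criteria[0][field]': 'STATUS_FIELD_ID',
    'criteria[0][searchtype]': 'equals',
    'criteria[0][value]': '2',                      // e.g. "Processing (assigned)"
    'criteria[1][link]': 'AND',
    'criteria[1][field]': 'TECHNICIAN_FIELD_ID',
    'criteria[1][searchtype]': 'equals',
    'criteria[1][value]': 'YOUR_GLPI_USER_ID',
  });
  const tickets = await fetch(`${GLPI}/search/Ticket?${params}`, {
    headers: { 'App-Token': APP_TOKEN, 'Session-Token': session },
  }).then(r => r.json());
  console.log(tickets.totalcount, 'pending tickets');

  // 3. Close the session when done.
  await fetch(`${GLPI}/killSession`, {
    headers: { 'App-Token': APP_TOKEN, 'Session-Token': session },
  });
}

run().catch(console.error);
```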