by Rahul Joshi
Automatically detect, classify, and document GitHub API errors using AI. This workflow connects GitHub, OpenAI (GPT-4o), Airtable, Notion, and Slack to build a real-time, searchable API error knowledge base, helping engineering and support teams respond faster, stay aligned, and maintain clean documentation.

What This Template Does
1. Triggers on new or updated GitHub issues (API-related).
2. Extracts key fields (title, body, repo, and link).
3. Classifies issues using OpenAI GPT-4o, identifying error type, category, root cause, and severity.
4. Validates and parses AI output into structured JSON format.
5. Creates or updates organized FAQ-style entries in Airtable for quick lookup.
6. Logs detailed entries into Notion, maintaining an ongoing issue knowledge base.
7. Notifies the right Slack team channel (DevOps, Backend, API, Support) with concise summaries.
8. Tracks and prevents duplicates, keeping your error catalog clean and auditable.
Key Benefits
- Converts unstructured GitHub issues into AI-analyzed documentation
- Centralizes API error intelligence across teams
- Reduces time-to-resolution for recurring issues
- Maintains synchronized records in Airtable and Notion
- Keeps DevOps and Support instantly informed through Slack alerts
- Fully automated, scalable, and low-cost using GPT-4o

Features
- Real-time GitHub trigger for API or backend issues
- GPT-4o-based AI classification (error type, cause, severity, confidence)
- Smart duplicate prevention logic
- Bi-directional sync to Airtable and Notion
- Slack alerts with contextual AI insights
- Modular design that is easy to extend with Jira, Teams, or email integrations

Requirements
- GitHub OAuth2 credentials
- OpenAI API key (GPT-4o recommended)
- Airtable Base and Table IDs (with fields like Error Code, Category, Severity, Root Cause)
- Notion integration with database access
- Slack Bot token with chat:write scope

Target Audience
- Engineering and DevOps teams managing APIs
- Customer support and SRE teams maintaining FAQs
- Product managers tracking recurring API issues
- SaaS organizations automating documentation and error visibility

Step-by-Step Setup Instructions
1. Connect your GitHub account and enable the "issues" webhook event.
2. Add OpenAI credentials (GPT-4o model for classification).
3. Create an Airtable base with fields: Error Code, Category, Root Cause, Severity, Confidence.
4. Configure your Notion database with matching schema and access.
5. Set up Slack credentials and choose your alert channels.
6. Test with a sample GitHub issue to validate AI classification.
7. Enable the workflow and enjoy continuous AI-powered issue documentation.
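The validate-and-parse step is where GPT-4o responses most often break a workflow: the model may wrap JSON in markdown fences or omit fields. A minimal sketch of what that Code-node logic could look like; the field names (errorType, category, rootCause, severity) are illustrative, not this template's exact schema:

```javascript
// Sketch of the "validate & parse AI output" step. Strips markdown fences
// the model sometimes adds, parses the JSON, and falls back to safe defaults
// on failure so downstream nodes always receive the expected fields.
function parseClassification(raw) {
  const cleaned = raw
    .replace(/^`{3}(?:json)?\s*/i, '') // leading ```json fence, if present
    .replace(/`{3}\s*$/, '')           // trailing ``` fence, if present
    .trim();
  let data;
  try {
    data = JSON.parse(cleaned);
  } catch (e) {
    // Parsing failed: return a default record flagged for manual review.
    return { errorType: 'unknown', category: 'unclassified', rootCause: '', severity: 'low', parseError: true };
  }
  const severities = ['low', 'medium', 'high', 'critical'];
  return {
    errorType: String(data.errorType || 'unknown'),
    category: String(data.category || 'unclassified'),
    rootCause: String(data.rootCause || ''),
    severity: severities.includes(data.severity) ? data.severity : 'low',
    parseError: false,
  };
}
```

The fallback record keeps the Airtable and Notion writes from failing on malformed model output while still marking the entry as unparsed.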
by Avkash Kakdiya
How it works
This workflow runs daily to review all active deals and evaluate their likelihood of closing successfully. It enriches deal data with recent engagement activity and applies AI-based behavioral scoring to predict conversion probability. High-risk or stalled deals are flagged automatically. Actionable alerts are sent to the sales team, and all analysis is logged for forecasting and tracking.

Step-by-step

**Trigger and fetch deals**
- Schedule Trigger: Runs the workflow automatically at a fixed time each day.
- Get Active Deals from HubSpot: Retrieves all open, non-closed deals with key properties.
- Formatting Data: Normalizes deal fields such as value, stage, age, contacts, and activity dates.

**Enrich deals with engagement data**
- If: Filters only active deals for further processing.
- Loop Over Items: Processes each deal individually.
- HTTP Request: Fetches engagement associations for the current deal.
- Get an engagement: Retrieves detailed engagement records from HubSpot.
- Extracts Data: Structures engagement content, timestamps, and metadata for analysis.

**Analyze risk, alert, and store results**
- OpenAI Chat Model: Provides the language model used for analysis.
- AI Agent: Evaluates behavioral signals, predicts conversion probability, and recommends actions.
- Format Data: Parses AI output into structured, machine-readable fields.
- Filter Alerts Needed: Identifies deals that need immediate attention.
- Send Slack Alert: Sends detailed alerts for high-risk or stalled deals.
- Append or update row in sheet: Logs analysis results into Google Sheets for reporting.

Why use this?
- Automatically identify high-risk deals before they stall or fail
- Give sales teams clear, data-driven next actions instead of raw CRM data
- Improve forecasting accuracy with AI-powered probability scoring
- Maintain a historical deal health log for audits and performance reviews
- Reduce manual pipeline reviews while increasing response speed
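The Formatting Data step can be pictured as a small Code-node function. The HubSpot property names used here (amount, dealstage, createdate, notes_last_updated) are common portal defaults and are assumptions, not necessarily this template's exact mapping:

```javascript
// Sketch of the "Formatting Data" step: flatten a HubSpot deal into the
// normalized fields the AI scoring step expects (value, stage, age, recency).
function formatDeal(deal, now = new Date()) {
  const props = deal.properties || {};
  const created = props.createdate ? new Date(props.createdate) : null;
  const lastActivity = props.notes_last_updated ? new Date(props.notes_last_updated) : null;
  const dayMs = 24 * 60 * 60 * 1000;
  return {
    id: deal.id,
    value: Number(props.amount) || 0,             // deal value as a number
    stage: props.dealstage || 'unknown',          // pipeline stage
    ageDays: created ? Math.floor((now - created) / dayMs) : null,
    daysSinceActivity: lastActivity ? Math.floor((now - lastActivity) / dayMs) : null,
  };
}
```

Pre-computing age and recency as plain numbers keeps the AI Agent prompt simple and makes the stalled-deal filter a direct numeric comparison.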
by vinci-king-01
Meeting Notes Distributor: Mailchimp and MongoDB

This workflow automatically converts raw meeting recordings or written notes into concise summaries, stores them in MongoDB for future reference, and distributes the summaries to all meeting participants through Mailchimp. It is ideal for teams that want to keep everyone aligned without manual copy-and-paste or email chains.

Pre-conditions/Requirements

Prerequisites
- n8n instance (self-hosted or cloud)
- Audio transcription service or written notes available via HTTP endpoint
- MongoDB database (cloud or self-hosted)
- Mailchimp account with an existing Audience list

Required Credentials
- **MongoDB**: Connection string with insert permission
- **Mailchimp API Key**: To send campaigns
- **(Optional) HTTP Service Auth**: If your transcription/notes endpoint is secured

Specific Setup Requirements

| Component | Example Value | Notes |
|------------------|---------------------------------------|------------------------------------------------------|
| MongoDB Database | meeting_notes | Database in which summaries will be stored |
| Collection Name | summaries | Collection automatically created if it doesn't exist |
| Mailchimp List | Meeting Participants | Audience list containing participant email addresses |
| Notes Endpoint | https://example.com/api/meetings/{id} | Returns raw transcript or note text (JSON) |

How it works

Key Steps:
- **Schedule Trigger**: Fires daily (or on-demand) to check for new meeting notes.
- **HTTP Request**: Downloads raw notes or transcript from your endpoint.
- **Code Node**: Uses an AI or custom function to generate a concise summary.
- **If Node**: Skips processing if the summary already exists in MongoDB.
- **MongoDB**: Inserts the new summary document.
- **Split in Batches**: Splits participants into Mailchimp-friendly batch sizes.
- **Mailchimp**: Sends personalized summary emails to each participant.
- **Wait**: Ensures rate limits are respected between Mailchimp calls.
- **Merge**: Consolidates success/failure results for logging or alerting.

Set up steps

Setup Time: 15-25 minutes

1. Clone the workflow: Import or copy the JSON into your n8n instance.
2. Configure Schedule Trigger: Set the cron expression (e.g., every weekday at 18:00).
3. Set HTTP Request URL: Replace the placeholder with your transcription/notes endpoint. Add auth headers if needed.
4. Add MongoDB Credentials: Enter your connection string in the MongoDB node.
5. Customize Summary Logic: Open the Code node to tweak summarization length, language, or model.
6. Mailchimp Credentials: Supply your API key and select the correct Audience list.
7. Map Email Fields: Ensure participant emails are supplied from transcription metadata or an external source.
8. Test Run: Execute once manually to verify the MongoDB insert and email delivery.
9. Activate Workflow: Enable the workflow so it runs on its defined schedule.

Node Descriptions

Core Workflow Nodes:
- **Schedule Trigger**: Initiates the workflow at predefined intervals.
- **HTTP Request**: Retrieves the latest meeting data (transcript or notes).
- **Code**: Generates a summarized version of the meeting content.
- **If**: Checks MongoDB for duplicates to avoid re-sending.
- **MongoDB**: Stores finalized summaries for archival and audit.
- **SplitInBatches**: Breaks the participant list into manageable chunks.
- **Mailchimp**: Sends summary emails via campaigns or transactional messages.
- **Wait**: Pauses between batches to honor Mailchimp rate limits.
- **Merge**: Aggregates success/failure responses for logging.
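The Code node's summarization can be as simple as an extractive first-N-sentences helper. A dependency-free sketch (swap in an AI call for production use):

```javascript
// Naive extractive summarizer: keep the first maxSentences sentences.
// Sentence splitting here is intentionally simple (period/!/? boundaries)
// and will mishandle abbreviations like "Dr." in real transcripts.
function summarize(text, maxSentences) {
  const sentences = text.match(/[^.!?]+[.!?]+(\s|$)/g) || [text];
  return sentences
    .slice(0, maxSentences)
    .map((s) => s.trim())
    .join(' ');
}
```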
Data Flow:
- Schedule Trigger → HTTP Request → Code → If
- If the summary is new: MongoDB → SplitInBatches → Mailchimp → Wait
- Merge collates all results

Customization Examples

1. Change Summary Length

```javascript
// Inside the Code Node
const rawText = items[0].json.text;
const maxSentences = 5; // adjust to 3, 7, etc.
items[0].json.summary = summarize(rawText, maxSentences);
return items;
```

2. Personalize Mailchimp Subject

```javascript
// In the Set node before Mailchimp
items[0].json.subject = `Recap: ${items[0].json.meetingTitle} - ${new Date().toLocaleDateString()}`;
return items;
```

Data Output Format

The workflow outputs structured JSON data:

```json
{
  "meetingId": "abc123",
  "meetingTitle": "Quarterly Planning",
  "summary": "Key decisions on roadmap, budget approvals...",
  "participants": ["alice@example.com", "bob@example.com"],
  "mongoInsertId": "65d9278fa01e3f94b1234567",
  "mailchimpBatchIds": ["2024-01-01T12:00:00Z#1", "2024-01-01T12:01:00Z#2"]
}
```

Troubleshooting

Common Issues
- Mailchimp rate-limit errors: Increase the Wait node delay or reduce the batch size.
- Duplicate summaries: Ensure the If node correctly queries MongoDB using the meeting ID as a unique key.

Performance Tips
- Keep batch sizes under 500 to stay well within Mailchimp limits.
- Offload AI summarization to external services if Code node execution time is high.

Pro Tips:
- Store full transcripts in MongoDB GridFS for future reference.
- Use environment variables in n8n for all API keys to simplify workflow export/import.
- Add a notifier (e.g., Slack node) after Merge to alert admins on failures.

This is a community template provided "as-is" without warranty. Always validate the workflow in a test environment before using it in production.
by Khairul Muhtadin
Decodo Amazon Product Recommender delivers instant, AI-powered shopping recommendations directly through Telegram. Send any product name and receive Amazon product analysis featuring price comparisons, ratings, sales data, and categorized recommendations (budget, premium, best value) in under 40 seconds, eliminating hours of manual research.

Why Use This Workflow?
- Time Savings: Reduce product research from 45+ minutes to under 30 seconds
- Decision Quality: Compare 20+ products automatically with AI-curated recommendations
- Zero Manual Work: Complete automation from message input to formatted recommendations

Ideal For
- **E-commerce Entrepreneurs:** Quickly research competitor products, pricing strategies, and market trends for inventory decisions
- **Smart Shoppers & Deal Hunters:** Get instant product comparisons with sales volume data and discount tracking before purchasing
- **Product Managers & Researchers:** Analyze Amazon marketplace positioning, customer sentiment, and pricing ranges for competitive intelligence

How It Works
1. Trigger: User sends a product name via Telegram (e.g., "iPhone 15 Pro Max case")
2. AI Validation: Gemini 2.5 Flash extracts core product keywords and validates input authenticity
3. Data Collection: Decodo API scrapes Amazon search results, extracting prices, ratings, reviews, sales volume, and product URLs
4. Processing: A JavaScript node cleans data, removes duplicates, calculates value scores, and categorizes products (top picks, budget, premium, best value, most popular)
5. Intelligence Layer: AI generates personalized recommendations with Telegram-optimized markdown formatting, shortened product names, and clean Amazon URLs
6. Output & Delivery: Formatted recommendations are sent to the user with categorized options and direct purchase links
7. Error Handling: Admin notifications go to a separate Telegram channel for workflow monitoring

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Decodo Account | Essential | Amazon product data scraping |
| Telegram Bot Token | Essential | Chat interface for user interactions |
| Google Gemini API | Essential | AI-powered product validation and recommendations |
| Telegram Account | Optional | Admin error notifications |

Installation Steps
1. Import the JSON file into your n8n instance.
2. Configure credentials:
   - Decodo API: Sign up at decodo.com, then go to Dashboard > Scraping APIs > Web Advanced and copy the BASIC AUTH TOKEN.
   - Telegram Bot: Message @BotFather on Telegram, send /newbot, and copy the HTTP API token (format: 123456789:ABCdefGHI...).
   - Google Gemini: Obtain an API key from Google AI Studio for the Gemini 2.5 Flash model.
3. Update environment-specific values:
   - Replace YOUR-CHAT-ID in the "Notify Admin" node with your Telegram chat ID for error notifications.
   - Verify Telegram webhook IDs are properly configured.
4. Customize settings:
   - Adjust the AI prompt in the "Generate Recommendations" node for different output formats.
   - Set character limits (default: 2500) for Telegram message length.
5. Test execution:
   - Send a test message to your Telegram bot: "iPhone 15 Pro"
   - Verify processing status messages appear.
   - Confirm recommendations arrive with properly formatted links.

Customization Options

Basic Adjustments:
- **Character Limit**: Modify 2500 in the AI prompt to adjust response length (Telegram max: 4096)

Advanced Enhancements:
- **Multi-language Support**: Add language detection and translation nodes for international users
- **Price Tracking**: Integrate Google Sheets to log historical prices and trigger alerts on drops
- **Image Support**: Enable Telegram photo messages with product images from scraping results

Troubleshooting

Common Issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| "No product detected" for valid inputs | AI validation too strict or ambiguous query | Add specific product details (model number, brand) in the user input |
| Empty recommendations returned | Decodo API rate limit or Amazon blocking | Wait 60 seconds between requests; verify Decodo account status |
| Telegram message formatting broken | Special characters in product names | Ensure Telegram markdown mode is set to "Markdown" (legacy), not "MarkdownV2" |

Use Case Examples

Scenario 1: E-commerce Store Owner
- Challenge: Needs to quickly assess competitor pricing and product positioning for new inventory decisions without spending hours browsing Amazon
- Solution: Sends "wireless earbuds" to the bot and receives a categorized analysis of 20+ products with price ranges ($15-$250), top sellers, and discount opportunities
- Result: Identifies a $35-$50 price gap in the market, sources a comparable product, and achieves a 40% profit margin

Scenario 2: Smart Shopping Enthusiast
- Challenge: Wants to buy a laptop backpack but is overwhelmed by 200+ Amazon options with varying prices and unclear value propositions
- Solution: Messages "laptop backpack" to the bot and gets AI recommendations sorted by budget ($30), premium ($50+), best value (highest discount + good ratings), and most popular (by sales volume)
- Result: Purchases the "Best Value" recommendation with a 35% discount, saving $18 and 45 minutes of research time

Created by: Khaisa Studio
Category: AI | Productivity | E-commerce
Tags: amazon, telegram, ai, product-research, shopping, automation, gemini
Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
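The Processing step (clean, de-duplicate, score, categorize) might look roughly like the sketch below. The value-score formula and the field names title/price/rating/reviews are illustrative assumptions, not this template's exact logic:

```javascript
// Sketch of the JavaScript processing node: de-duplicate scraped products,
// compute a simple value score, and pick category winners.
function categorize(products) {
  // De-duplicate by normalized title (scraped result pages often repeat items).
  const seen = new Set();
  const unique = products.filter((p) => {
    const key = (p.title || '').toLowerCase();
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
  // Assumed score: higher rating and review count help, higher price hurts.
  for (const p of unique) {
    p.valueScore = (p.rating * Math.log10(1 + p.reviews)) / Math.max(p.price, 1);
  }
  const byScore = [...unique].sort((a, b) => b.valueScore - a.valueScore);
  const byPrice = [...unique].sort((a, b) => a.price - b.price);
  return {
    bestValue: byScore[0],                 // best score-for-price trade-off
    budget: byPrice[0],                    // cheapest option
    premium: byPrice[byPrice.length - 1],  // most expensive option
  };
}
```

Pre-categorizing in code keeps the Gemini prompt short: the model only has to phrase recommendations, not rank 20+ products itself.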
by TakatoYamada
Analyze error logs with AI and auto-create GitHub issues with fix suggestions

Who is this for
DevOps engineers, SREs, and development teams who want to automate error monitoring and reduce mean time to resolution (MTTR). Ideal for teams using GitHub for issue tracking and Slack for incident response.

What this workflow does
This workflow automates the entire error management lifecycle, from log ingestion to GitHub issue creation and Slack notification, using GPT-4o-powered root cause analysis. When an application error log is received, it parses the payload, checks for duplicates against existing GitHub issues, generates a structured root cause analysis with fix suggestions, creates a formatted GitHub issue, and routes Slack notifications by severity. A 30-minute Wait node prevents notification flooding.

How to set up
1. Add your GitHub Personal Access Token (repo scope) credential.
2. Add your OpenAI API credential to the AI analysis node.
3. Add your Slack OAuth2 credential (chat:write scope) to all Slack nodes.
4. Configure n8n Variables: GITHUB_OWNER and GITHUB_REPO.
5. Update the Slack channel names (#incident / #dev-alerts) to match your workspace.
6. Activate the workflow and copy the webhook URL for your application logger.

Requirements
- GitHub repository with a Personal Access Token (repo scope)
- OpenAI API account with GPT-4o access
- Slack workspace with an OAuth2 app installed
- Two Slack channels: one for critical incidents, one for general dev alerts

How to customize
Adjust the duplicate detection score threshold (default 60) in the Code node. Modify the GPT-4o prompt to focus on specific error categories. The Wait node duration (30 minutes) can be tuned to match your alerting policy.
Key features
- Scoring-based duplicate detection (no extra API calls required)
- GPT-4o structured JSON output with a graceful fallback parser
- Severity-based Slack routing (#incident vs. #dev-alerts)
- Dynamic GitHub labels: bug, auto-generated, environment, and critical
- n8n Variables used for the GitHub owner and repo; no hardcoded values

Node List

| # | Node Name | Type | Purpose |
|---|-----------|------|---------|
| 1 | Webhook for Error Logs | Webhook | Accepts error log payload via HTTP POST |
| 2 | Parse and Enrich Log | Code | Normalizes level, extracts error type, builds duplicate search keyword |
| 3 | Search GitHub Issues | GitHub | Fetches open issues labeled bug,auto-generated from the target repo |
| 4 | Score GitHub Issue Duplicates | Code | Scores each issue for similarity; flags duplicates at threshold ≥ 60 |
| 5 | Check for Duplicates | If | Routes to skip path (true) or analysis path (false) |
| 6 | Notify Duplicate to Slack | Slack | Posts link to existing issue when duplicate detected |
| 7 | Respond with Duplicate Status | Respond to Webhook | Returns 200 OK JSON acknowledgment for duplicate path |
| 8 | OpenAI Error Analysis | OpenAI | GPT-4o analyzes root cause and returns structured JSON fix suggestions |
| 9 | Build GitHub Issue Body | Code | Parses AI JSON and builds Markdown issue body with tables and code blocks |
| 10 | Create GitHub Issue | GitHub | Creates GitHub issue with title, body, and dynamic labels |
| 11 | If Critical Error | If | Checks isCritical flag to route Slack notification channel |
| 12 | Post Critical Alert to Slack | Slack | Posts @here alert to #incident with full error details |
| 13 | Post Error Summary to Slack | Slack | Posts summary to #dev-alerts with GitHub issue link |
| 14 | Wait 30 Minutes | Wait | Enforces 30-minute cooldown to prevent notification flooding |

Total: 14 nodes (+ 6 Sticky Notes)

Sticky Note Compliance

| # | Sticky Note Title | Color | Role |
|---|-------------------|-------|------|
| 1 | Main Sticky Note (Overview) | Yellow | Workflow overview, How it works, Setup steps, Customization |
| 2 | Receive and parse log | White | Covers webhook reception and log parsing |
| 3 | Check for duplicate GitHub issues | White | Covers GitHub search and duplicate scoring |
| 4 | Handle duplicates and notify | White | Covers duplicate branch (notification + webhook response) |
| 5 | Analyze error and suggest fixes | White | Covers AI analysis and issue creation |
| 6 | Send alerts and summarize | White | Covers severity check, Slack notifications, and wait |

All sticky notes use H2 headings (## ) and follow n8n public guidelines.

Webhook payload schema

```json
{
  "service": "payment-api",
  "level": "CRITICAL",
  "message": "NullPointerException at PaymentProcessor.java:142",
  "stack_trace": "java.lang.NullPointerException...",
  "environment": "production",
  "timestamp": "2025-01-15T09:23:45Z",
  "trace_id": "abc-123-xyz",
  "endpoint": "/api/v2/payments",
  "http_method": "POST",
  "status_code": 500,
  "user_id": "user_98765"
}
```

Required fields: service, level, message
Optional fields: all others; missing values are handled gracefully with fallbacks.

How duplicate detection works

| Match condition | Points |
|-----------------|--------|
| Service name found in issue title | +40 |
| Error type found in issue title | +40 |
| Keyword overlap (words > 4 chars) | +5 per word (max +20) |

Threshold: a score ≥ 60 means a duplicate was detected, so issue creation is skipped and Slack is notified instead.

Tags: ai, gpt-4, openai, github, slack, error-monitoring, devops, automation
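The scoring table above translates directly into a small Code-node function. This sketch mirrors the documented weights (+40 service, +40 error type, +5 per overlapping keyword capped at +20, threshold 60), though the template's exact implementation may differ in detail:

```javascript
// Score one existing issue title against an incoming log, per the rules above.
function scoreDuplicate(log, issueTitle) {
  const title = issueTitle.toLowerCase();
  let score = 0;
  if (log.service && title.includes(log.service.toLowerCase())) score += 40;
  if (log.errorType && title.includes(log.errorType.toLowerCase())) score += 40;
  // Keyword overlap: words longer than 4 chars, +5 each, capped at +20.
  const words = (log.message || '').toLowerCase().split(/\W+/).filter((w) => w.length > 4);
  const overlap = new Set(words.filter((w) => title.includes(w)));
  score += Math.min(overlap.size * 5, 20);
  return { score, isDuplicate: score >= 60 };
}
```

Because the candidate issues are fetched once by the Search GitHub Issues node, this scoring runs entirely in memory, which is why duplicate detection needs no extra API calls.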
by Rohit Dabra
Jira MCP Server Integration with n8n

Overview
Transform your Jira project management with the power of AI and automation. This n8n workflow template demonstrates how to create a seamless integration between chat interfaces, AI processing, and Jira Software using MCP (Model Context Protocol) server architecture.

What This Workflow Does
- **Chat-Driven Automation**: Trigger Jira operations through simple chat messages
- **AI-Powered Issue Creation**: Automatically generate detailed Jira issues with descriptions and acceptance criteria
- **Complete Jira Management**: Get issue status, changelogs, comments, and perform full CRUD operations
- **Memory Integration**: Maintain context across conversations for smarter automations
- **Zero Manual Entry**: Eliminate repetitive data entry and human errors

Key Features
- Natural Language Processing: Use Google Gemini to understand and process chat requests
- MCP Server Integration: Secure, efficient communication with Jira APIs
- Comprehensive Jira Operations: Create, read, update, and delete issues and comments
- Smart Memory: Context-aware conversations for better automation
- Multi-Action Workflow: Handle multiple Jira operations from a single trigger

Demo Video
Watch the complete demo: Automate Jira Issue Creation with n8n & AI | MCP Server Integration

Prerequisites
Before setting up this workflow, ensure you have:
- **n8n instance** (cloud or self-hosted)
- **Jira Software** account with appropriate permissions
- **Google Gemini API** credentials
- **MCP Server** configured and accessible
- Basic understanding of n8n workflows

Setup Guide

Step 1: Import the Workflow
1. Copy the workflow JSON from this template.
2. In your n8n instance, click Import > From Text.
3. Paste the JSON and click Import.

Step 2: Configure Google Gemini
1. Open the Google Gemini Chat Model node.
2. Add your Google Gemini API credentials.
3. Configure the model parameters:
   - Model: gemini-pro (recommended)
   - Temperature: 0.7 for balanced creativity
   - Max tokens: as per your requirements

Step 3: Set Up the MCP Server Connection
1. Configure the MCP Client node:
   - Server URL: your MCP server endpoint
   - Authentication: add the required credentials
   - Timeout: set appropriate timeout values
2. Ensure your MCP server supports Jira operations:
   - Issue creation and retrieval
   - Comment management
   - Status updates
   - Changelog access

Step 4: Configure Jira Integration
1. Set up Jira credentials in n8n:
   - Go to Credentials > Add Credential
   - Select Jira Software API
   - Add your Jira instance URL, email, and API token
2. Configure each Jira node:
   - Get Issue Status: set the project key and filters
   - Create Issue: define the issue type and required fields
   - Manage Comments: set permissions and content rules

Step 5: Memory Configuration
Configure the Simple Memory node:
- Set the memory key for conversation context
- Define the memory retention duration
- Configure the memory scope (user/session level)

Step 6: Chat Trigger Setup
Configure the When Chat Message Received trigger:
- Set up the webhook URL or chat platform integration
- Define message filters if needed
- Test the trigger with sample messages

Usage Examples

Creating a Jira Issue
Chat Input: Can you create an issue in Jira for Login Page with detailed description and acceptance criteria?
Expected Output:
- New Jira issue created with a structured description
- Automatically generated acceptance criteria
- Proper labeling and categorization

Getting Issue Status
Chat Input: What's the status of issue PROJ-123?
Expected Output:
- Current issue status
- Last updated information
- Assigned user details

Managing Comments
Chat Input: Add a comment to issue PROJ-123: "Ready for testing in staging environment"
Expected Output:
- Comment added to the specified issue
- Notification sent to relevant team members

Customization Options

Extending Jira Operations
- Add more Jira operations (transitions, watchers, attachments)
- Implement custom field handling
- Create multi-project workflows

AI Enhancement
- Fine-tune Gemini prompts for better issue descriptions
- Add custom validation rules
- Implement approval workflows

Integration Expansion
- Connect to Slack, Discord, or Teams
- Add email notifications
- Integrate with time-tracking tools

Troubleshooting

Common Issues

MCP Server Connection Failed
- Verify the server URL and credentials
- Check network connectivity
- Ensure the MCP server is running and accessible

Jira API Errors
- Validate Jira credentials and permissions
- Check project access rights
- Verify issue type and field configurations

AI Response Issues
- Review Gemini API quotas and limits
- Adjust prompt engineering for better results
- Check model parameters and settings

Performance Tips
- Optimize memory usage for long conversations
- Implement rate limiting for API calls
- Use error handling and retry mechanisms
- Monitor workflow execution times

Best Practices
- Security: Store all credentials securely using n8n's credential system
- Testing: Test each node individually before running the complete workflow
- Monitoring: Set up alerts for workflow failures and API limits
- Documentation: Keep track of custom configurations and modifications
- Backup: Regularly back up workflow configurations and credentials

Happy Automating!

This workflow template is designed to boost productivity and eliminate manual Jira management tasks. Customize it according to your team's specific needs and processes.
by WeblineIndia
Real-Time WooCommerce Return Surge Detection with Slack Alerts & Airtable Logging

This n8n workflow monitors WooCommerce refund activity to detect unusual spikes in product returns at the SKU level. It compares return volumes across rolling 24-hour windows, alerts teams in Slack when defined thresholds are exceeded, and logs all detected events into Airtable for tracking and analysis.

Quick Start: Get This Running Fast
1. Import the workflow into n8n.
2. Connect your WooCommerce API credentials.
3. Configure Slack and Airtable credentials.
4. Set your preferred schedule interval.
5. Activate the workflow and start monitoring returns automatically.

What It Does
This workflow is designed to automatically detect abnormal return behavior in a WooCommerce store. On every scheduled run, it fetches recent orders and refunds directly from the WooCommerce REST API. Refund records are mapped back to their original orders to accurately identify affected SKUs.

Using a rolling time-window comparison, the workflow calculates current versus previous return counts per SKU. It identifies significant increases, either large percentage spikes or unusually high absolute return volumes. This ensures early detection of potential product quality, packaging, or fulfillment issues.

When a return surge is detected, the workflow sends a structured alert to a Slack channel and stores the alert data in Airtable. This creates a searchable, historical log that supports investigations, trend analysis, and operational decision-making.

Who It's For
This workflow is ideal for:
- eCommerce operations teams
- Quality assurance and product managers
- Customer support leads
- Supply chain and fulfillment teams
- Store owners running WooCommerce at scale

Requirements to Use This Workflow
To use this workflow, you will need:
- An active WooCommerce store with REST API access
- WooCommerce API credentials (Consumer Key & Secret)
- An active Slack workspace with permission to post messages
- An Airtable base and table for logging alerts
- An n8n instance (self-hosted or cloud)

How It Works & How To Set Up

Workflow Execution Flow
1. Schedule Trigger runs the workflow at a fixed interval.
2. Time Window node defines the current and previous 24-hour comparison windows.
3. HTTP Orders fetches recent WooCommerce orders.
4. HTTP Refunds fetches refund records.
5. Orders_Fetch (Code) maps refunds to parent orders and extracts SKU-level data.
6. Refund_details (Code) aggregates returns, compares windows, and calculates increases.
7. IF Node checks the surge conditions: ≥100% increase OR ≥25 current returns.
8. Set Fields enriches data with status, run date, and cooldown key.
9. Slack Node sends a formatted alert message.
10. Code Node normalizes the Slack output into structured fields.
11. Airtable Node stores alert records for future reference.

Setup Instructions
1. Replace {your_woocommerce_domain} with your actual store domain.
2. Verify WooCommerce API permissions allow order and refund access.
3. Select the correct Slack channel in the Slack node.
4. Ensure Airtable column names match the workflow mappings.

How To Customize Nodes
You can easily adapt this workflow by:
- Changing the schedule frequency in the Schedule Trigger
- Adjusting WINDOW_HOURS in the Code nodes
- Modifying the alert thresholds in the IF node
- Customizing the Slack message format
- Adding or removing Airtable fields for reporting needs

Add-ons (Optional Enhancements)
This workflow can be extended with:
- Email or Microsoft Teams notifications
- Jira or Linear ticket creation
- Product auto-pause for extreme return spikes
- Dashboard reporting using BI tools
- Cooldown logic to prevent repeated alerts per SKU

Use Case Examples
Common use cases include:
- Detecting defective product batches early
- Identifying packaging or shipping damage trends
- Monitoring supplier quality issues
- Supporting refund root-cause analysis
- Improving customer satisfaction metrics

There can be many more operational and analytical use cases based on your business needs.
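The window comparison and surge thresholds described above can be sketched as a single function. Per-SKU counts are assumed to have already been aggregated by the Refund_details Code node; the thresholds mirror the IF-node conditions (≥100% increase OR ≥25 current returns):

```javascript
// Compare current vs. previous 24-hour return counts per SKU and flag surges.
// counts: { sku: { current: n, previous: n } }
function detectSurges(counts) {
  return Object.entries(counts)
    .map(([sku, c]) => {
      const pctIncrease = c.previous > 0
        ? ((c.current - c.previous) / c.previous) * 100
        : (c.current > 0 ? 100 : 0); // SKU with no prior returns: treat any return as a 100% jump
      return { sku, ...c, pctIncrease, surge: pctIncrease >= 100 || c.current >= 25 };
    })
    .filter((r) => r.surge);
}
```

The absolute threshold (≥25) catches high-volume SKUs whose percentage change stays small, while the percentage threshold (≥100%) catches low-volume SKUs that suddenly double.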
Troubleshooting Guide

| Issue | Possible Cause | Solution |
|------|---------------|----------|
| No Slack alerts | Threshold not met | Lower the IF condition limits |
| Empty SKU values | Missing SKU in WooCommerce | Use product name or ID as a fallback |
| No data in Airtable | Column mismatch | Verify field names and types |
| API errors | Invalid credentials | Re-authorize the WooCommerce API |
| Duplicate alerts | Frequent schedule | Add cooldown or deduplication logic |

Need Help?
Need assistance setting this up or customizing it for your business? WeblineIndia can help you implement, extend, or build similar automation workflows tailored to your operational needs. Whether you want advanced alerting, deeper analytics, or cross-system integrations, our team is ready to help you get the most out of n8n automation.
by PDF Vector
Overview

Transform your accounts payable department with this enterprise-grade invoice processing solution. This workflow automates the entire invoice lifecycle, from document ingestion through payment processing. It handles invoices from multiple sources (Google Drive, email attachments, API submissions), extracts data using AI, validates against purchase orders, routes for appropriate approvals based on amount thresholds, and integrates seamlessly with your ERP system. The solution includes vendor master data management, duplicate invoice detection, real-time spend analytics, and complete audit trails for compliance.

What You Can Do

This comprehensive workflow creates an intelligent invoice processing pipeline that monitors multiple input channels (Google Drive, email, webhooks) for new invoices and automatically extracts data from PDFs, images, and scanned documents using AI. It validates vendor information against your master database, matches invoices to purchase orders, and detects discrepancies. The workflow implements multi-level approval routing based on invoice amount and department, prevents duplicate payments through intelligent matching algorithms, and integrates with QuickBooks, SAP, or other ERP systems. Additionally, it generates real-time dashboards showing processing metrics and cash flow insights while sending automated reminders for pending approvals.

Who It's For

Perfect for medium to large businesses, accounting departments, and financial service providers processing more than 100 invoices monthly across multiple vendors. Ideal for organizations that need to enforce approval hierarchies and spending limits, require integration with existing ERP/accounting systems, want to reduce processing time from days to minutes, need audit trails and compliance reporting, and seek to eliminate manual data entry errors and duplicate payments.
The Problem It Solves

Manual invoice processing creates significant operational challenges, including data entry errors (3-5% error rate), processing delays (8-10 days per invoice), duplicate payments (0.1-0.5% of invoices), approval bottlenecks causing late fees, lack of visibility into pending invoices and cash commitments, and compliance issues from missing audit trails. This workflow reduces processing time by 80%, eliminates data entry errors, prevents duplicate payments, and provides complete visibility into your payables process.

Setup Instructions

- Google Drive Setup: Create dedicated folders for invoice intake and configure access permissions
- PDF Vector Configuration: Set up API credentials with appropriate rate limits for your volume
- Database Setup: Deploy the provided schema for vendor master and invoice tracking tables
- Email Integration: Configure IMAP credentials for invoice email monitoring (optional)
- ERP Connection: Set up API access to your accounting system (QuickBooks, SAP, etc.)
- Approval Rules: Define approval thresholds and routing rules in the configuration node
- Notification Setup: Configure Slack/email for approval notifications and alerts

Key Features

- **Multi-Channel Invoice Ingestion**: Automatically collect invoices from Google Drive, email attachments, and API uploads
- **Advanced OCR and AI Extraction**: Process any invoice format, including handwritten notes and poor-quality scans
- **Vendor Master Integration**: Validate and enrich vendor data, maintaining a clean vendor database
- **3-Way Matching**: Automatically match invoices to purchase orders and goods receipts
- **Dynamic Approval Routing**: Route based on amount, department, vendor, or custom rules
- **Duplicate Detection**: Prevent duplicate payments using fuzzy matching algorithms
- **Real-Time Analytics**: Track KPIs like processing time, approval delays, and early payment discounts
- **Exception Handling**: Intelligent routing of problematic invoices for manual review
- **Audit Trail**: Complete tracking of all actions, approvals, and system modifications
- **Payment Scheduling**: Optimize payment timing to capture discounts and manage cash flow

Customization Options

This workflow can be customized to add industry-specific extraction fields, implement GL coding rules based on vendor or amount, create department-specific approval workflows, add currency conversion for international invoices, integrate with additional systems (banks, expense management), configure custom dashboards and reporting, set up vendor portals for invoice status inquiries, and implement machine learning for automatic GL coding suggestions.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
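The fuzzy duplicate-detection idea can be sketched as a small comparison routine (a minimal illustration of the technique, not the template's actual implementation; field names such as `vendorName`, `invoiceNumber`, and `amount` are assumptions):

```javascript
// Minimal duplicate-invoice check: compares a candidate invoice against
// previously seen invoices using a normalized vendor/invoice-number key
// plus an amount tolerance. Field names are illustrative assumptions.
function normalizeKey(vendorName, invoiceNumber) {
  return `${vendorName}|${invoiceNumber}`
    .toLowerCase()
    .replace(/[^a-z0-9|]/g, ''); // strip spaces, dashes, punctuation
}

function isDuplicate(candidate, seenInvoices, amountTolerance = 0.01) {
  const key = normalizeKey(candidate.vendorName, candidate.invoiceNumber);
  return seenInvoices.some((inv) => {
    if (normalizeKey(inv.vendorName, inv.invoiceNumber) !== key) return false;
    // Treat near-identical amounts as the same invoice (OCR rounding noise).
    return Math.abs(inv.amount - candidate.amount) <= amountTolerance * candidate.amount;
  });
}
```

Normalizing before comparison is what makes the match "fuzzy": "ACME Corp / inv 001" and "Acme Corp. / INV-001" collapse to the same key even though the raw strings differ.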
by vinci-king-01
Error Alert Aggregator: Email and Jira

This workflow aggregates error logs arriving from multiple sources, deduplicates identical events within a configurable time window, and sends a single consolidated notification via Email and Jira. It prevents alert fatigue by batching similar errors and guarantees that responsible teams are informed through both channels.

Pre-conditions/Requirements

Prerequisites

- n8n instance (self-hosted >= v1.0 or n8n.cloud account)
- Basic understanding of your log source's payload structure
- SMTP server or n8n Email credentials configured
- Jira Cloud or Jira Server account with API access

Required Credentials

- **Email (SMTP/IMAP or n8n Email node credential)**: to dispatch alert emails
- **Jira**: to create issues automatically in the chosen project
- **HTTP Request Auth (optional)**: if your log endpoint requires authentication

Specific Setup Requirements

| Setting | Recommended Value | Notes |
|---------|-------------------|-------|
| Batch window (Wait node) | 10 minutes | Time allowed to collect & deduplicate errors |
| Deduplication key (Code) | error_id or message field | Choose a unique attribute representing the same incident |
| Email recipients | Security & DevOps distribution list | Use semicolons for multiple addresses |
| Jira project key | SEC | Project where alert tickets should be filed |

How it works

The workflow polls your log source on a schedule, filters genuine errors, deduplicates them, batches them over the configured window, and sends one consolidated summary to Email and Jira.

Key Steps:

- **Schedule Trigger**: Runs every X minutes to poll/collect new log items.
- **HTTP Request**: Pulls error events from your monitoring or log system.
- **IF Node**: Quickly filters out non-error or resolved events.
- **Code Node (Deduplicator)**: Hashes & stores unique error signatures, skipping already-seen items.
- **Wait Node**: Holds processing for the batching period (e.g., 10 min).
- **Merge Node**: Combines all unique errors gathered during the window.
- **Set Node**: Formats the consolidated message for Email & Jira.
- **Email Send**: Dispatches the summary email.
- **Jira Node**: Creates (or updates) an issue with the same summary.
- **Sticky Notes**: Provide inline documentation right inside the workflow for easier maintenance.

Set up steps

Setup Time: 15-20 minutes

1. Import template: Download the JSON template and drag & drop it into your n8n editor.
2. Configure Schedule Trigger: Set the polling interval (e.g., every 5 minutes).
3. HTTP Request Node: Enter the URL of your log endpoint. Add authentication if required.
4. Adjust IF filter: Modify the condition to match your log's error severity field (e.g., status === "error").
5. Customize Code Node: Replace error_id with the field that uniquely identifies an error. Optionally tweak the deduplication TTL.
6. Wait Node: Set the batch time (e.g., 600 seconds).
7. Set Node: Edit the email subject/body and the Jira issue summary/description placeholders.
8. Credentials: Add or select your Email credential in Email Send, and your Jira credential in the Jira node.
9. Test run the workflow to verify that duplicate events are collapsed and that the email and Jira ticket show combined information.
10. Activate the workflow to start production monitoring.

Node Descriptions

Core Workflow Nodes:

- **Schedule Trigger**: Initiates the workflow on a fixed interval.
- **HTTP Request**: Retrieves fresh error logs from an external API.
- **IF**: Only lets true error events proceed.
- **Code (Deduplicator)**: Uses JavaScript to remove already-known errors via n8n static data.
- **Wait**: Creates a batching window for aggregation.
- **Merge (Queue mode)**: Joins events accumulated during the wait.
- **Set**: Crafts a human-readable report for Email & Jira.
- **Email Send**: Dispatches the consolidated message to stakeholders.
- **Jira**: Opens/updates an issue containing the same error digest.
- **Sticky Note**: Provides inline explanations for future maintainers.

Data Flow:

Schedule Trigger → HTTP Request → IF → Code
Code → Wait → Merge → Set
Set → Email Send & Jira

Customization Examples

Change Deduplication Strategy

```javascript
// Code Node snippet
// Use error 'stacktrace' + 'service' for uniqueness
const signature = `${item.json.stacktrace}_${item.json.service}`;
const staticData = $getWorkflowStaticData('global');
if (staticData.signatureCache?.includes(signature)) {
  // duplicate, skip
  return [];
}
staticData.signatureCache = [...(staticData.signatureCache || []), signature];
return item;
```

Update an Existing Jira Issue Instead of Creating a New One

In the Jira node, search for an open ticket with the same summary; if one is found, add a comment instead of creating a new issue:

```json
{
  "operation": "comment",
  "issueKey": "={{$node['Set'].json['jiraIssueKey']}}",
  "comment": "New occurrences: {{$json.errorCount}}"
}
```

Data Output Format

The workflow outputs structured JSON data:

```json
{
  "errors": [
    {
      "id": "ERR123",
      "message": "Database timeout",
      "count": 5,
      "firstSeen": "2024-03-14T10:12:00Z",
      "lastSeen": "2024-03-14T10:22:00Z"
    }
  ],
  "emailStatus": "success",
  "jiraStatus": "issue_created"
}
```

Troubleshooting

Common Issues:

- No data returned from HTTP Request: Verify the endpoint URL, authentication headers, and that your monitoring tool actually has recent error events.
- Duplicate alerts still coming through: Increase the Wait node's batching window or refine the deduplication key in the Code node.

Performance Tips:

- Cache HTTP responses if the log API supports it to reduce bandwidth.
- Use selective fields in the HTTP Request's query parameters to limit payload size.

Pro Tips:

- Store a rolling hash list in external Redis or a database for large-scale deduplication.
- Add a second IF branch to auto-resolve Jira tickets when an error disappears for X hours.
- Use Slack or Microsoft Teams nodes in parallel to broaden alert coverage.
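The deduplication-with-TTL idea could look roughly like this in plain JavaScript (a sketch only; the `cache` object stands in for n8n workflow static data or an external Redis store, and the signature fields and default TTL are assumptions):

```javascript
// Signature cache with a TTL. Errors whose signature was seen within the
// TTL window are dropped; older entries are treated as new incidents.
// `cache` maps signature -> last-seen timestamp (ms).
function dedupe(events, cache, ttlMs = 10 * 60 * 1000, now = Date.now()) {
  const fresh = [];
  for (const event of events) {
    const signature = `${event.service}:${event.message}`;
    const lastSeen = cache[signature];
    if (lastSeen !== undefined && now - lastSeen < ttlMs) continue; // duplicate
    cache[signature] = now; // record (or refresh) the signature
    fresh.push(event);
  }
  return fresh;
}
```

Persisting the cache outside the single execution (static data, Redis, or a database) is what makes deduplication survive across polling runs rather than only within one batch.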
This is a community-contributed n8n workflow template provided "as-is." Thoroughly test in a non-production environment before deploying to production.
by Khairul Muhtadin
Decodo Amazon Product Recommender delivers instant, AI-powered shopping recommendations directly through Telegram. Send any product name and receive Amazon product analysis featuring price comparisons, ratings, sales data, and categorized recommendations (budget, premium, best value) in under 40 seconds, eliminating hours of manual research.

Why Use This Workflow?

- Time Savings: Reduce product research from 45+ minutes to under 40 seconds
- Decision Quality: Compare 20+ products automatically with AI-curated recommendations
- Zero Manual Work: Complete automation from message input to formatted recommendations

Ideal For

- **E-commerce Entrepreneurs:** Quickly research competitor products, pricing strategies, and market trends for inventory decisions
- **Smart Shoppers & Deal Hunters:** Get instant product comparisons with sales volume data and discount tracking before purchasing
- **Product Managers & Researchers:** Analyze Amazon marketplace positioning, customer sentiment, and pricing ranges for competitive intelligence

How It Works

1. Trigger: User sends a product name via Telegram (e.g., "iPhone 15 Pro Max case")
2. AI Validation: Gemini 2.5 Flash extracts core product keywords and validates input authenticity
3. Data Collection: Decodo API scrapes Amazon search results, extracting prices, ratings, reviews, sales volume, and product URLs
4. Processing: JavaScript node cleans data, removes duplicates, calculates value scores, and categorizes products (top picks, budget, premium, best value, most popular)
5. Intelligence Layer: AI generates personalized recommendations with Telegram-optimized markdown formatting, shortened product names, and clean Amazon URLs
6. Output & Delivery: Formatted recommendations are sent to the user with categorized options and direct purchase links
7. Error Handling: Admin notifications via a separate Telegram channel for workflow monitoring

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Decodo Account | Essential | Amazon product data scraping |
| Telegram Bot Token | Essential | Chat interface for user interactions |
| Google Gemini API | Essential | AI-powered product validation and recommendations |
| Telegram Account | Optional | Admin error notifications |

Installation Steps

1. Import the JSON file to your n8n instance
2. Configure credentials:
   - Decodo API: Sign up at decodo.com, then go to Dashboard > Scraping APIs > Web Advanced and copy the BASIC AUTH TOKEN
   - Telegram Bot: Message @BotFather on Telegram, send /newbot, and copy the HTTP API token (format: 123456789:ABCdefGHI...)
   - Google Gemini: Obtain an API key from Google AI Studio for the Gemini 2.5 Flash model
3. Update environment-specific values:
   - Replace YOUR-CHAT-ID in the "Notify Admin" node with your Telegram chat ID for error notifications
   - Verify Telegram webhook IDs are properly configured
4. Customize settings:
   - Adjust the AI prompt in the "Generate Recommendations" node for different output formats
   - Set character limits (default: 2500) for Telegram message length
5. Test execution:
   - Send a test message to your Telegram bot: "iPhone 15 Pro"
   - Verify processing status messages appear
   - Confirm recommendations arrive with properly formatted links

Customization Options

Basic Adjustments:

- **Character Limit**: Modify 2500 in the AI prompt to adjust response length (Telegram max: 4096)

Advanced Enhancements:

- **Multi-language Support**: Add language detection and translation nodes for international users
- **Price Tracking**: Integrate Google Sheets to log historical prices and trigger alerts on drops
- **Image Support**: Enable Telegram photo messages with product images from scraping results

Troubleshooting

Common Issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| "No product detected" for valid inputs | AI validation too strict or ambiguous query | Add specific product details (model number, brand) in user input |
| Empty recommendations returned | Decodo API rate limit or Amazon blocking | Wait 60 seconds between requests; verify Decodo account status |
| Telegram message formatting broken | Special characters in product names | Ensure Telegram markdown mode is set to "Markdown" (legacy), not "MarkdownV2" |

Use Case Examples

Scenario 1: E-commerce Store Owner

- Challenge: Needs to quickly assess competitor pricing and product positioning for new inventory decisions without spending hours browsing Amazon
- Solution: Sends "wireless earbuds" to the bot, receives a categorized analysis of 20+ products with price ranges ($15-$250), top sellers, and discount opportunities
- Result: Identifies a $35-$50 price gap in the market, sources a comparable product, achieves a 40% profit margin

Scenario 2: Smart Shopping Enthusiast

- Challenge: Wants to buy a laptop backpack but is overwhelmed by 200+ Amazon options with varying prices and unclear value propositions
- Solution: Messages "laptop backpack" to the bot, gets AI recommendations sorted by budget ($30), premium ($50+), best value (highest discount plus good ratings), and most popular (by sales volume)
- Result: Purchases the "Best Value" recommendation with a 35% discount, saving $18 and 45 minutes of research time

Created by: Khaisa Studio
Category: AI | Productivity | E-commerce
Tags: amazon, telegram, ai, product-research, shopping, automation, gemini
Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
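The value-scoring and categorization performed in the Processing step could be sketched as follows (a rough illustration only; the scoring weights and field names are assumptions, not the workflow's actual logic):

```javascript
// Rough value score: reward high ratings and deep discounts, with a small
// boost for review volume. Weights and field names are illustrative.
function valueScore(p) {
  const discount = p.listPrice > 0 ? (p.listPrice - p.price) / p.listPrice : 0;
  return p.rating * 20 + discount * 50 + Math.log10(1 + p.reviewCount) * 10;
}

// Bucket scraped products into the categories the bot reports.
function categorize(products) {
  const byScore = [...products].sort((a, b) => valueScore(b) - valueScore(a));
  const byPrice = [...products].sort((a, b) => a.price - b.price);
  const byRating = [...products].sort((a, b) => b.rating - a.rating);
  return {
    topPick: byRating[0],                    // highest rating
    budget: byPrice[0],                      // cheapest option
    premium: byPrice[byPrice.length - 1],    // most expensive option
    bestValue: byScore[0],                   // best rating/discount combination
  };
}
```

Keeping the score a single number makes the "best value" pick explainable: a mid-priced item with a strong rating and a large discount can outrank both the cheapest and the highest-rated products.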
by Yaron Been
Description

This workflow automatically scans companies for signs of financial distress across filings, insolvency registers, and financial news. It helps procurement, credit, and risk teams detect early warning signals before a supplier or partner defaults.

Overview

This workflow uses Bright Data to scrape financial filings, insolvency registers, and news sources for distress signals like bankruptcy, restructuring, or payment defaults. AI classifies the type and severity of distress, applies probability weighting and confidence guardrails, then generates structured business decisions, including:

- Supplier Monitoring risk status
- Onboarding Approval recommendations
- Portfolio Exposure classifications

All outputs are logged into Google Sheets for tracking and auditability.

Tools Used

- **n8n**: Automation platform orchestrating the workflow
- **Bright Data**: Scrapes filings, insolvency registers, and financial news without getting blocked
- **OpenRouter**: AI-powered distress classification, risk scoring, and business decision generation
- **Google Sheets**: Logs supplier risk status, onboarding decisions, portfolio exposure, and errors

How to Install

1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data API credentials to all Bright Data nodes.
3. Configure OpenRouter: Add your OpenRouter API key for AI distress classification and decision generation.
4. Set Up Google Sheets: Create a spreadsheet following the "Google Sheets Setup" sticky note inside the workflow, then connect each Google Sheets node to your document.
5. Customize: Edit the configuration node to define the target company, country, risk indicators, and monitoring scope.

Use Cases

- Procurement Teams: Monitor supplier financial health and get alerts before disruptions hit your supply chain.
- Credit Risk Analysts: Screen new vendors or partners for bankruptcy signals and insolvency red flags.
- Onboarding Workflows: Automate go/no-go decisions for new supplier or partner approvals.
- Portfolio Managers: Track financial exposure across your vendor or investment portfolio.
- Finance Teams: Detect early signs of distress in key business relationships before they become critical.

Connect with Me

Website: https://www.nofluff.online
YouTube: https://www.youtube.com/@YaronBeen/videos
LinkedIn: https://www.linkedin.com/in/yaronbeen/
Get Bright Data: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

Tags

#n8n #automation #brightdata #webscraping #creditrisk #financialdistress #riskmanagement #suppliermonitoring #supplychainrisk #insolvency #bankruptcy #duediligence #vendorscreening #portfoliorisk #financialanalysis #n8nworkflow #workflow #nocode #businessintelligence #riskassessment #creditanalysis #procurementautomation #supplierrisk #financialmonitoring #earlywarning
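The probability weighting and confidence guardrails described in the Overview could be sketched like this (signal types, weights, and thresholds here are illustrative assumptions, not the template's actual configuration):

```javascript
// Combine AI-classified distress signals into one weighted risk score,
// gated by a minimum-confidence guardrail. Weights are illustrative.
const SIGNAL_WEIGHTS = {
  bankruptcy: 1.0,
  insolvency_filing: 0.9,
  payment_default: 0.7,
  restructuring: 0.6,
  negative_news: 0.3,
};

function riskDecision(signals, minConfidence = 0.6) {
  // Guardrail: ignore low-confidence classifications entirely.
  const usable = signals.filter((s) => s.confidence >= minConfidence);
  if (usable.length === 0) return { status: 'insufficient_evidence', score: 0 };
  // Probability weighting: severity weight scaled by model confidence.
  const score = usable.reduce(
    (sum, s) => sum + (SIGNAL_WEIGHTS[s.type] ?? 0) * s.confidence, 0);
  const status = score >= 0.8 ? 'high_risk' : score >= 0.5 ? 'watchlist' : 'low_risk';
  return { status, score: Number(score.toFixed(2)) };
}
```

The guardrail matters as much as the weighting: a single low-confidence "bankruptcy" classification should produce "insufficient evidence" for manual review, not an automated no-go decision.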
by Trung Tran
Automated AWS IAM Compliance Workflow for MFA Enforcement and Access Key Deactivation

> This workflow leverages AWS IAM APIs and n8n automation to ensure strict security compliance by continuously monitoring IAM users for MFA (Multi-Factor Authentication) enforcement.

Who's it for

This workflow is designed for DevOps, Security, or Cloud Engineers responsible for maintaining IAM security compliance in AWS accounts. It's ideal for teams who want to enforce MFA usage and automatically disable access for non-compliant IAM users.

How it works / What it does

This automated workflow performs a daily check to detect IAM users without an MFA device and deactivate their access keys.

Step-by-step:

1. Daily scheduler: Triggers the workflow once a day.
2. Get many users: Retrieves a list of all IAM users in the account.
3. Get IAM User MFA Devices: Calls the AWS API to get MFA device info for each user.
4. Filter out IAM users with MFA: Keeps only users without any MFA device.
5. Send warning message(s): Sends Slack alerts for users who do not have MFA enabled.
6. Get User Access Key(s): Fetches access keys for each non-MFA user.
7. Parse the list of user access key(s): Extracts and flattens key information like AccessKeyId, Status, and UserName.
8. Filter out inactive keys: Keeps only active access keys for further action.
9. Deactivate Access Key(s): Calls the AWS API to deactivate each active key for non-MFA users.

How to set up

1. Configure AWS credentials in your environment (IAM role or AWS access key with the required permissions).
2. Connect Slack via the Slack node for alerting (set channel and credentials).
3. Set the scheduler to your preferred frequency (e.g., daily at 9 AM).
4. Adjust any Slack message template or filtering conditions as needed.

Requirements

IAM user or role credentials with the following AWS IAM permissions:

- iam:ListUsers
- iam:ListMFADevices
- iam:ListAccessKeys
- iam:UpdateAccessKey

Slack credentials (Bot token with chat:write permission).
n8n environment with:

- Slack integration
- AWS credentials (set via environment or credentials manager)

How to customize the workflow

- **Alert threshold**: Instead of immediate deactivation, you can delay action (e.g., alert first, wait 24h, then disable).
- **Change notification channel**: Modify the Slack node to send alerts to a different channel or add email integration.
- **Whitelist exceptions**: Add a Set or IF node to exclude specific usernames (e.g., service accounts).
- **Add audit logging**: Use Google Sheets, Airtable, or a database to log which users were flagged or had access disabled.
- **Extend access checks**: Include a console password check (GetLoginProfile) if needed.
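The whitelist-exception customization could be implemented in a Code node along these lines (a sketch; the whitelist entries are hypothetical service accounts, and the input shape mirrors the flattened access-key records described in the step-by-step):

```javascript
// Filter out whitelisted service accounts before any keys are deactivated.
// Input records mimic the flattened access-key items produced earlier in
// the workflow: { UserName, AccessKeyId, Status }.
const WHITELIST = ['terraform-ci', 'backup-svc']; // hypothetical service accounts

function keysToDeactivate(records, whitelist = WHITELIST) {
  return records.filter(
    (rec) => rec.Status === 'Active' && !whitelist.includes(rec.UserName));
}
```

Combining the inactive-key filter and the whitelist in one pass keeps the downstream Deactivate Access Key(s) node simple: every item that reaches it is safe to act on.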