by Anna Bui
Automatically analyze n8n workflow errors with AI, create support tickets, and send detailed Slack notifications. Perfect for development teams and businesses that need intelligent error handling with automated support workflows. Never miss critical workflow failures again!

**How it works**
- Error Trigger captures any workflow failure in your n8n instance
- AI Debugger analyzes the error using structured reasoning to identify root causes
- Clean Data transforms the AI analysis into organized, actionable information
- Create Support Ticket automatically generates a detailed ticket in FreshDesk
- Merge combines ticket data with the AI analysis for comprehensive reporting
- Generate Slack Alert creates rich, formatted notifications with all context
- Send to Team delivers instant alerts to your designated Slack channel

**How to use**
- Replace the FreshDesk credentials with your helpdesk system's API
- Configure the Slack channel for your team notifications
- Customize the AI analysis prompts for your specific error types
- Set it up as the global error handler for all your critical workflows

**Requirements**
- FreshDesk account (or compatible ticketing system)
- Slack workspace with bot permissions
- OpenAI API access for AI analysis
- n8n Cloud or self-hosted with AI nodes enabled

**Good to know**
- OpenAI API calls cost approximately $0.01-0.03 per error analysis
- Works with any ticketing system that supports a REST API
- Can be triggered by webhooks from external monitoring tools
- Slack messages use rich formatting for mobile-friendly alerts

**Need Help?** Join the Discord or ask in the Forum! Happy Monitoring!
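The Clean Data step above is an n8n Code node that flattens the AI analysis into ticket fields. A minimal sketch of that transformation, assuming hypothetical field names (`rootCause`, `severity`, `suggestedFix`) for the AI output — the real schema depends on your prompt:

```javascript
// Sketch of a "Clean Data" Code node: flatten the AI debugger's JSON
// analysis into fields a ticketing system can use.
// Input field names (rootCause, severity, suggestedFix) are assumptions.
function cleanAnalysis(aiOutput, error) {
  return {
    subject: `[${aiOutput.severity || 'unknown'}] ${error.workflowName}: ${error.nodeName} failed`,
    description: [
      `Root cause: ${aiOutput.rootCause}`,
      `Suggested fix: ${aiOutput.suggestedFix}`,
      `Error message: ${error.message}`,
    ].join('\n'),
    priority: aiOutput.severity === 'critical' ? 1 : 2, // map severity to ticket priority
  };
}

const ticket = cleanAnalysis(
  { severity: 'critical', rootCause: 'Expired API token', suggestedFix: 'Rotate the credential' },
  { workflowName: 'Sync Orders', nodeName: 'HTTP Request', message: '401 Unauthorized' }
);
console.log(ticket.subject); // "[critical] Sync Orders: HTTP Request failed"
```

In the actual workflow this object would be returned as `[{ json: ticket }]` and fed to the FreshDesk node.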
by Shayan Ali Bakhsh
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Try It Out! Automatically generate a LinkedIn carousel and upload it to LinkedIn.

Use case: LinkedIn content creation, specifically carousels, but it could be adjusted for many other formats as well.

**How it works**
- Runs automatically every day at 6:00 AM
- Gets the latest news from TechRadar
- Parses it into readable JSON
- AI decides which news item resonates with your profile
- The title and description of that item are then used to generate the final LinkedIn carousel content. This step can also be triggered by a Form trigger
- After carousel generation, the content is handed to Post Nitro to create images; Post Nitro returns a PDF file
- The PDF file is uploaded to LinkedIn to obtain a file ID, which is used in the next step
- Finally, the post description is created and the post is published to LinkedIn

**How to use**
- It runs every day at 6:00 AM automatically. Just make it live
- Submit the form with a correct title and description (no input validation is included, so make sure they are correct 😅)

**Requirements**
- Install the Post Nitro community node: @postnitro/n8n-nodes-postnitro-ai
- The following API keys are needed:
  - Google Gemini (for Gemini 2.5 Flash usage): Google Gemini Key docs
  - Post Nitro credentials (API key + Template ID + Brand ID): Post Nitro docs
  - LinkedIn API key: LinkedIn API docs

**Need Help?** Message me on LinkedIn. Happy Automation!
by Sridevi Edupuganti
Try It Out! Use n8n to extract medical test data from diagnostic reports uploaded to Google Drive, automatically detect abnormal values, and generate personalized health advice.

**How it works**
- Upload a medical report (PDF or image) to a monitored Google Drive folder
- Mistral AI extracts text using OCR while preserving document structure
- GPT-4 parses the extracted text into structured JSON (patient info, test names, results, units, reference ranges)
- All test results are saved to the "All Values" sheet in Google Sheets
- JavaScript code compares each result against its reference range to detect abnormalities
- For out-of-range values, GPT-4 generates personalized dietary, lifestyle, and exercise advice based on patient age and gender
- Abnormal results with recommendations are saved to the "Out of Range Values" sheet

**How to use**
- Set up Google Drive folder monitoring and a Google Sheet with two tabs: "All Values" and "Out of Range Values"
- Configure API credentials for Google Drive, Mistral AI, and OpenAI (GPT-4)
- Upload medical reports to your monitored folder
- Review extracted data and personalized health advice in Google Sheets

**Requirements**
- Google Drive and Sheets with OAuth2 authentication
- Mistral AI API key for OCR
- OpenAI API key (GPT-4 access required) for intelligent extraction and advice generation

**Need Help?**
- See the detailed Read Me file at https://drive.google.com/file/d/1Wv7dfcBLsHZlPcy1QWPYk6XSyrS3H534/view?usp=sharing
- Join the n8n community forum for support
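The reference-range comparison described above can be sketched as a small function. Reference ranges in real lab reports come in several textual formats; this illustrative version (not the template's actual code) handles the common "low-high" and "< x" / "> x" shapes, and the input field names are assumptions about the GPT-4 output:

```javascript
// Sketch of an out-of-range check against a textual reference range.
function isOutOfRange(result, referenceRange) {
  const value = parseFloat(result);
  if (Number.isNaN(value)) return false; // non-numeric results are skipped

  let m;
  if ((m = referenceRange.match(/^([\d.]+)\s*-\s*([\d.]+)$/))) {
    return value < parseFloat(m[1]) || value > parseFloat(m[2]); // "3.5-5.5"
  }
  if ((m = referenceRange.match(/^<\s*([\d.]+)$/))) return value >= parseFloat(m[1]); // "< 200"
  if ((m = referenceRange.match(/^>\s*([\d.]+)$/))) return value <= parseFloat(m[1]); // "> 40"
  return false; // unrecognized range format: don't flag
}

console.log(isOutOfRange('5.8', '3.5-5.5')); // true (above range)
console.log(isOutOfRange('4.2', '3.5-5.5')); // false
console.log(isOutOfRange('210', '< 200'));   // true
```

Any test flagged `true` would then be routed to the advice-generation step and the "Out of Range Values" sheet.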
by vinci-king-01
**Public Transport Delay Tracker with Microsoft Teams and Todoist**

⚠️ COMMUNITY TEMPLATE DISCLAIMER: This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow continuously monitors public-transportation websites and apps for real-time schedule changes and delays, then posts an alert to a Microsoft Teams channel and creates a follow-up task in Todoist. It is ideal for commuters or travel coordinators who need instant, actionable updates about transit disruptions.

**Prerequisites**
- An n8n instance (self-hosted or n8n cloud)
- ScrapeGraphAI community node installed
- Microsoft Teams account with permission to create an Incoming Webhook
- Todoist account with at least one project
- Access to target transit authority websites or APIs

**Required Credentials**
- **ScrapeGraphAI API Key** – Enables scraping of transit data
- **Microsoft Teams Webhook URL** – Sends messages to a specific channel
- **Todoist API Token** – Creates follow-up tasks
- (Optional) Transit API key if you are using a protected data source

**Specific Setup Requirements**

| Resource | What you need |
|-------------------|----------------------------------------------------------------|
| Teams Channel | Create a channel → Add "Incoming Webhook" → copy the URL |
| Todoist Project | Create a "Transit Alerts" project and note its Project ID |
| Transit URLs/APIs | Confirm the URLs/pages contain the schedule & delay elements |

**How it works**

Key Steps:
- **Webhook (Trigger)**: Starts the workflow on a schedule or via HTTP call.
- **Set Node**: Defines target transit URLs and parsing rules.
- **ScrapeGraphAI Node**: Scrapes live schedule and delay data.
- **Code Node**: Normalizes scraped data, converts times, and flags delays.
- **IF Node**: Determines if a delay exceeds the user-defined threshold.
- **Microsoft Teams Node**: Sends a formatted alert message to the selected Teams channel.
- **Todoist Node**: Creates a "Check alternate route" task with a due date equal to the delayed departure time.
- **Sticky Note Node**: Holds a blueprint-level explanation for future editors.

**Set up steps**

Setup Time: 15–20 minutes
1. Install community node: In n8n, go to "Manage Nodes" → "Install" → search for "ScrapeGraphAI" → install and restart n8n.
2. Create Teams webhook: In Microsoft Teams, open the target channel → "Connectors" → "Incoming Webhook" → give it a name/icon → copy the URL.
3. Create Todoist API token: Todoist → Settings → Integrations → copy your personal API token.
4. Add credentials in n8n: Settings → Credentials → create new for ScrapeGraphAI, Microsoft Teams, and Todoist.
5. Import workflow template: File → Import Workflow JSON → select this template.
6. Configure Set node: Replace the example transit URLs with those of your local transit authority.
7. Adjust delay threshold: In the Code node, edit `const MAX_DELAY_MINUTES = 5;` as needed.
8. Activate workflow: Toggle "Active". Monitor executions to ensure messages and tasks are created.

**Node Descriptions**

Core Workflow Nodes:
- **Webhook** – Triggers the workflow on a schedule or external HTTP request.
- **Set** – Supplies the list of URLs and scraping selectors.
- **ScrapeGraphAI** – Scrapes timetable, status, and delay indicators.
- **Code** – Parses results, converts to minutes, and builds payloads.
- **IF** – Compares delay duration to the threshold.
- **Microsoft Teams** – Posts a formatted adaptive-card-style message.
- **Todoist** – Adds a task with priority and due date.
- **Sticky Note** – Internal documentation inside the workflow canvas.

Data Flow:
Webhook → Set → ScrapeGraphAI → Code → IF
a. IF (true branch) → Microsoft Teams → Todoist
b. IF (false branch) → (workflow ends)

**Customization Examples**

Change alert message formatting:

```javascript
// In the Code node
const message = `⚠️ Delay Alert:
Route: ${item.route}
Expected: ${item.scheduled}
New Time: ${item.newTime}
Delay: ${item.delay} min
Link: ${item.url}`;
return [{ json: { message } }];
```

Post to multiple Teams channels:

```javascript
// Duplicate the Microsoft Teams node and reference a different credential
items.forEach(item => {
  item.json.webhookUrl = $node["Set"].json["secondaryChannelWebhook"];
});
return items;
```

**Data Output Format**

The workflow outputs structured JSON data:

```json
{
  "route": "Blue Line",
  "scheduled": "2024-12-01T14:25:00Z",
  "newTime": "2024-12-01T14:45:00Z",
  "delay": 20,
  "status": "Delayed",
  "url": "https://transit.example.com/blue-line/status"
}
```

**Troubleshooting**

Common Issues:
- Scraping returns empty data – Verify CSS selectors/XPath in the Set node and ensure the target site hasn't changed its markup.
- Teams message not sent – Check that the stored webhook URL is correct and the connector is still active.
- Todoist task duplicated – Add a unique key (e.g., route + timestamp) to avoid inserting duplicates.

Performance Tips:
- Limit the number of URLs per execution when monitoring many routes.
- Cache previous scrape results to avoid hitting site rate limits.

Pro Tips:
- Use n8n's built-in Cron instead of Webhook if you only need periodic polling.
- Add a SplitInBatches node after scraping to process large route lists incrementally.
- Enable execution logging to an external database for detailed audit trails.
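The duplicate-suppression fix mentioned in Troubleshooting (a unique key of route + timestamp) can be sketched like this. In n8n the "seen" set would live in workflow static data; here it is a plain `Set` for illustration, and the alert shape follows the data output format above:

```javascript
// Build a unique key per alert (route + scheduled departure) and skip
// keys already seen, so a re-scraped delay doesn't create a second task.
const seen = new Set();

function isNewAlert(alert) {
  const key = `${alert.route}|${alert.scheduled}`;
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}

const alerts = [
  { route: 'Blue Line', scheduled: '2024-12-01T14:25:00Z', delay: 20 },
  { route: 'Blue Line', scheduled: '2024-12-01T14:25:00Z', delay: 22 }, // same departure, re-scraped
  { route: 'Red Line',  scheduled: '2024-12-01T14:30:00Z', delay: 7 },
];

const fresh = alerts.filter(isNewAlert);
console.log(fresh.length); // 2 (the re-scraped Blue Line alert is dropped)
```

Only the `fresh` items would then flow on to the Todoist node.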
by Daniel Turgeman
**How it works**
- A webhook receives a form submission with an email address
- The email is validated, then Lusha enriches the contact
- If a phone number or email is missing, a fallback provider fills the gaps via an HTTP request
- Data from both sources is merged, upserted into HubSpot, and an SDR alert is sent to Slack
- The webhook returns the enriched lead as a JSON response

**Set up steps**
- Install the Lusha community node
- Add your Lusha API, HubSpot, and Slack credentials
- Configure the fallback HTTP node with your secondary provider's API endpoint and key
- Point your form's action URL to the webhook endpoint
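The merge step above can be sketched as "prefer Lusha, fall back only for empty fields". This is an illustrative sketch, not the template's actual code, and the field names (`email`, `phone`, `company`, `title`) are assumptions about the enrichment payloads:

```javascript
// Merge two enrichment results: keep Lusha's value where present,
// otherwise take the fallback provider's value.
function mergeEnrichment(lusha, fallback) {
  const merged = { ...lusha };
  for (const field of ['email', 'phone', 'company', 'title']) {
    if (!merged[field]) merged[field] = fallback[field] ?? null;
  }
  return merged;
}

const lead = mergeEnrichment(
  { email: 'jane@example.com', phone: null, company: 'Acme', title: null },
  { email: 'jane@acme.example', phone: '+1-555-0100', title: 'VP Sales' }
);
console.log(lead.phone); // "+1-555-0100" (filled from fallback)
console.log(lead.email); // "jane@example.com" (Lusha value kept)
```

The merged object is what would be upserted into HubSpot and echoed back in the webhook response.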
by Cheng Siong Chin
**How It Works**

This workflow automates legislative compliance analysis by coordinating multiple specialized OpenAI agents to interpret regulatory documents, evaluate organizational impact, and manage stakeholder communication with complete audit traceability. It is built for compliance officers, legal teams, and governance leaders who must process new or amended legislation quickly without the burden of manual document review. The template addresses the core challenge of staying compliant amid rapidly evolving regulations.

When a legislative document is submitted, the workflow retrieves and extracts its full text, then passes it to a Policy Interpretation Agent powered by OpenAI for structured analysis. A Governance Orchestration Agent then activates three parallel specialist agents (Impact Assessment, Compliance Mapping, and Stakeholder Notification) to generate standardized outputs. Decisions are routed based on review status: auto-approved items are logged directly into Google Sheets, while flagged items trigger legal review through Slack alerts, compliance tracker updates, and stakeholder notifications, ensuring every regulatory change is evaluated, documented, and acted upon promptly.
**Setup Steps**
1. Add your OpenAI API key to all OpenAI Model nodes
2. Connect Google Sheets OAuth2 credentials; set spreadsheet IDs for the Auto-Approved Log
3. Configure a Slack OAuth2 token; set the target channel in the Notify Legal Team node
4. Set up Gmail/SMTP credentials in the Notify Stakeholders node; update recipient addresses
5. Configure the legislative document source URL or webhook endpoint in the Fetch Legislative Document node
6. Adjust routing thresholds in the Route by Review Status node to match your approval criteria

**Prerequisites**
OpenAI API key, Google Sheets with OAuth2, Slack workspace with bot token

**Use Cases**
Regulatory change management, GDPR/financial compliance monitoring, policy impact assessment

**Customization**
Swap OpenAI for NVIDIA NIM models, add additional specialist agents

**Benefits**
Cuts manual compliance review time by 70%, ensures no legislation goes unassessed
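The Route by Review Status decision can be pictured as a simple threshold check. This is a hypothetical sketch — the score field name and threshold value are assumptions, and the template lets you adjust the real thresholds in the node itself:

```javascript
// Hypothetical routing rule: items whose review score clears the threshold
// are auto-approved and logged; everything else goes to legal review.
const AUTO_APPROVE_THRESHOLD = 0.8; // placeholder value

function route(item) {
  return item.reviewScore >= AUTO_APPROVE_THRESHOLD ? 'auto-approved' : 'legal-review';
}

console.log(route({ id: 'REG-101', reviewScore: 0.92 })); // "auto-approved"
console.log(route({ id: 'REG-102', reviewScore: 0.55 })); // "legal-review"
```

In the workflow, the "auto-approved" branch writes to Google Sheets while the "legal-review" branch fans out to Slack, the compliance tracker, and stakeholder notifications.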
by Madame AI
**Repurpose white papers from URLs to LinkedIn PDFs and Blog Posts With BrowserAct**

**Introduction**
This workflow automates the labor-intensive process of turning long-form white papers into ready-to-publish social media assets. It scrapes the content from a URL or PDF, uses AI to ghostwrite a LinkedIn carousel script and an SEO-optimized blog post, generates a downloadable PDF for the carousel using APITemplate.io, and archives all assets in Google Sheets.

**Target Audience**
Content marketers, social media managers, and agency copywriters looking to scale content repurposing efforts.

**How it works**
1. Input: The workflow retrieves a list of white paper URLs from a Google Sheet.
2. Looping: It processes each URL individually to ensure stability.
3. Extraction: The BrowserAct node uses the "White Paper to Social Media Converter" template to scrape the full text of the white paper.
4. Content Generation: An AI Agent (OpenRouter/GPT-4o) acts as a ghostwriter. It analyzes the text and generates two distinct outputs: a viral-style LinkedIn post with a 5-slide carousel script, and a full-length, HTML-formatted blog post with proper headers.
5. PDF Creation: The APITemplate.io node takes the carousel script and generates a designed PDF file ready for LinkedIn upload.
6. Storage: The workflow updates the original Google Sheet row with the generated blog HTML, the LinkedIn caption, and the direct link to the PDF.
7. Notification: Once all items are processed, a Slack message notifies the team.

**How to set up**
1. Configure Credentials: Connect your BrowserAct, OpenRouter, Google Sheets, APITemplate.io, and Slack accounts in n8n.
2. Prepare BrowserAct: Ensure the White Paper to Social Media Converter template is active in your BrowserAct library.
3. Prepare APITemplate.io: Create a PDF template in APITemplate.io that accepts dynamic fields for slide titles and body text. Copy the Template ID into the Create a carousel PDF node.
4. Prepare Google Sheet: Create a sheet with the headers listed below and add your target URLs.
**Google Sheet Headers**
To use this workflow, create a Google Sheet with the following headers:
- row_number (must be populated, e.g., 1, 2, 3...)
- Target Page Url
- Blog Post
- Linkdin Post
- PDF Link

**Requirements**
- **BrowserAct Account**: Required for scraping. Template: **White Paper to Social Media Converter**.
- **OpenRouter Account**: Required for GPT-4o processing.
- **APITemplate.io Account**: Required for generating the visual PDF carousel.
- **Google Sheets**: Used for input and output.
- **Slack Account**: Used for completion notifications.

**How to customize the workflow**
- Direct Publishing: Add a WordPress node to publish the Blog Post HTML directly to your CMS instead of saving it to the sheet.
- Design Variations: Create multiple templates in APITemplate.io (e.g., "Dark Mode", "Minimalist") and use a Random node to vary the visual style of your carousels.
- Tone Adjustment: Modify the System Message in the Convert whitepaper to carousel node to change the writing style (e.g., make it more academic or more casual).

**Need Help?**
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

**Workflow Guidance and Showcase Video**
Automated LinkedIn Carousels: Turn White Papers into Content with n8n
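The "Design Variations" customization amounts to picking a template ID at random per run. A minimal sketch, where the template IDs are placeholders for whatever you create in APITemplate.io:

```javascript
// Pick one of several APITemplate.io template IDs at random per run,
// so carousel styles vary. The IDs below are placeholders.
const TEMPLATE_IDS = ['tmpl_dark_mode', 'tmpl_minimalist', 'tmpl_bold'];

function pickTemplate(ids) {
  return ids[Math.floor(Math.random() * ids.length)];
}

const chosen = pickTemplate(TEMPLATE_IDS);
console.log(TEMPLATE_IDS.includes(chosen)); // true
```

The chosen ID would then be passed into the Create a carousel PDF node instead of a hard-coded Template ID.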
by Daniel Shashko
This workflow automates the creation of user-generated-content-style product videos by combining Gemini's image generation with OpenAI's SORA 2 video generation. It accepts webhook requests with product descriptions, generates images and videos, stores them in Google Drive, and logs all outputs to Google Sheets for easy tracking.

**Main Use Cases**
- Automate product video creation for e-commerce catalogs and social media.
- Generate UGC-style content at scale without manual design work.
- Create engaging video content from simple text prompts for marketing campaigns.
- Build a centralized library of product videos with automated tracking and storage.

**How it works**

The workflow operates as a webhook-triggered process, organized into these stages:

- **Webhook Trigger & Input**: Accepts POST requests to the /create-ugc-video endpoint. The required payload includes: product prompt, video prompt, Gemini API key, and OpenAI API key.
- **Image Generation (Gemini)**: Sends the product prompt to Google's Gemini 2.5 Flash Image model and generates a product image based on the description provided.
- **Data Extraction**: A Code node extracts the base64 image data from Gemini's response and preserves all prompts and API keys for subsequent steps.
- **Video Generation (SORA 2)**: Sends the video prompt to OpenAI's SORA 2 API and initiates video generation with these specifications: 720x1280 resolution, 8 seconds duration. Returns a video generation job ID for polling.
- **Video Status Polling**: Continuously checks the video generation status via the OpenAI API. If the status is "completed", it proceeds to download; if the video is still processing, it waits 1 minute and retries (polling loop).
- **Video Download & Storage**: Downloads the completed video file from OpenAI, uploads the MP4 file to Google Drive (root folder), and generates a shareable Google Drive link.
- **Logging to Google Sheets**: Records all generation details in a tracking spreadsheet: product description, video URL (Google Drive link), generation status, and timestamp.

**Summary Flow**

Webhook Request → Generate Product Image (Gemini) → Extract Image Data → Generate Video (SORA 2) → Poll Status
- If Complete: Download Video → Upload to Google Drive → Log to Google Sheets → Return Response
- If Not Complete: Wait 1 Minute → Poll Status Again

**Benefits**
- Fully automated video creation pipeline from text to finished product.
- Scalable solution for generating multiple product videos on demand.
- Combines cutting-edge AI models (Gemini + SORA 2) for high-quality output.
- Centralized storage in Google Drive with automatic logging in Google Sheets.
- A flexible webhook interface allows integration with any application or service.
- A retry mechanism ensures videos are captured even with longer processing times.

Created by Daniel Shashko
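The poll-status loop above (the Wait + IF pair in n8n) can be sketched as a plain async function. Here `checkStatus` is a stand-in for the OpenAI video-status request, and the `maxAttempts` cap is an assumption added so the sketch cannot loop forever:

```javascript
// Poll a status callback until the job completes, waiting between attempts.
async function pollUntilComplete(checkStatus, { intervalMs = 60_000, maxAttempts = 20 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const { status } = await checkStatus();
    if (status === 'completed') return { done: true, attempts: attempt };
    if (status === 'failed') throw new Error('video generation failed');
    await new Promise(resolve => setTimeout(resolve, intervalMs)); // the "Wait 1 Minute" node
  }
  return { done: false, attempts: maxAttempts };
}

// Simulated job that completes on the third check (interval shortened for the demo).
let calls = 0;
const fakeStatus = async () => ({ status: ++calls >= 3 ? 'completed' : 'in_progress' });
pollUntilComplete(fakeStatus, { intervalMs: 10 }).then(r => console.log(r.attempts)); // 3
```

In the workflow, the "completed" branch continues to the download and upload steps, while the loop branch re-enters the status check after the wait.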
by nXsi
This n8n template builds an automated health monitoring dashboard for your homelab Docker host. It SSHs into your server, collects 30+ system and container metrics, analyzes trends with AI, and delivers a structured multi-embed dashboard to Discord -- plus real-time critical alerts when things go wrong.

Stop SSHing into your server every morning to check if everything's still running. AI reads your metrics and tells you exactly what needs attention, with copy-paste fix commands.

**Good to know**
- Estimated cost is ~$0.003 per daily run using GPT-4o-mini. Compatible with Claude or any OpenAI-compatible LLM -- swap the model sub-node to switch providers. See the setup notes inside the workflow for Claude configuration.
- Uses Google Sheets for 7-day metric history and trend analysis. A one-click setup trigger auto-creates a formatted tracking spreadsheet with frozen headers and conditional formatting.
- The critical alert path runs every 5 minutes with a lightweight check (configurable). The daily digest runs once in the morning. Both schedules are adjustable.

**How it works**
- A daily schedule trigger SSHs into your Docker host and collects system metrics (real CPU % from /proc/stat, memory, all filesystems, swap, network I/O, top processes, zombie processes, failed services) and Docker metrics (container status, CPU, memory, restarts, health checks, disk usage, dangling images) in ~2 seconds
- A 100-point health score is calculated from weighted metrics across CPU, memory, disk, swap, containers, and system health
- 7 days of historical data is loaded from Google Sheets for trend comparison
- AI analyzes current vs. historical metrics and returns structured JSON with severity-tagged findings, CLI fix commands, trend analysis, and a top recommendation
- A 4-embed Discord dashboard is delivered: a status header with inline metrics, actionable findings, a Docker ecosystem overview with trends, and a footer with timing and API cost
- Today's metrics are stored in Google Sheets for future trend tracking
- A separate lightweight path runs every 5 minutes checking critical thresholds (disk > 90%, memory > 95%, inodes > 90%, containers down) and fires immediate alerts

**How to use**
- Click "Test workflow" on the first-time setup trigger to auto-create your Google Sheets tracking dashboard
- Copy the Sheet ID into the configuration node, add your Discord webhook URL, wire up your SSH and OpenAI credentials, and activate
- Full setup guides are linked inside the workflow for SSH keys, API keys, and Discord webhooks

**Requirements**
- SSH access to a Linux Docker host with key-based authentication (ssh key setup)
- OpenAI API key or Anthropic API key (OpenAI setup guide | Claude setup guide)
- Google Sheets OAuth2 credential (n8n docs)
- Discord webhook URL (setup guide)

**Customizing this workflow**
- Adjust alert thresholds in the configuration node (disk warning/critical, memory warning/critical, inode critical, restart threshold)
- Change the daily digest and critical alert schedules in the trigger nodes
- Swap OpenAI for Claude or Ollama by replacing the LLM sub-node
- Replace Discord with Slack, Telegram, or ntfy by modifying the webhook payload format
- Add additional SSH metrics by editing the collection commands
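The 100-point weighted health score described above can be sketched like this. The weights and penalty curves here are illustrative assumptions, not the template's actual formula, but they show the shape of the calculation:

```javascript
// Illustrative 100-point health score: each metric group yields a 0..1
// sub-score (1 = healthy), and the total is the weighted sum.
const WEIGHTS = { cpu: 20, memory: 20, disk: 25, swap: 10, containers: 15, system: 10 };

function healthScore(m) {
  const scores = {
    cpu: 1 - Math.min(m.cpuPercent / 100, 1),
    memory: 1 - Math.min(m.memPercent / 100, 1),
    disk: m.diskPercent > 90 ? 0 : 1 - m.diskPercent / 100, // hard penalty past 90%
    swap: 1 - Math.min(m.swapPercent / 100, 1),
    containers: m.containersRunning / m.containersTotal,
    system: m.failedServices === 0 ? 1 : 0.5,
  };
  let total = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) total += weight * scores[key];
  return Math.round(total);
}

console.log(healthScore({
  cpuPercent: 20, memPercent: 50, diskPercent: 40, swapPercent: 0,
  containersRunning: 10, containersTotal: 10, failedServices: 0,
})); // 76
```

The resulting number is what the Discord status header and the Google Sheets history row would carry.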
by Cheng Siong Chin
**How It Works**

This workflow provides automated Chinese text translation with high-quality audio synthesis for language learning platforms, content creators, and international communication teams. It addresses the challenge of converting Chinese text into accurate multilingual translations with natural-sounding voiceovers.

The system receives Chinese text via webhook, validates input formatting, and processes it through an AI translation agent that generates multiple language versions. Each translation is converted to speech using ElevenLabs' neural voice models, then formatted into professional audio responses. A quality review agent evaluates translation accuracy, cultural appropriateness, and audio clarity against predefined criteria. High-scoring outputs are returned via webhook for immediate use, while low-quality results trigger review processes, ensuring consistent delivery of publication-ready multilingual audio content.

**Setup Steps**
1. Obtain an OpenAI API key and configure it in the "Translation Agent"
2. Set up an ElevenLabs account and generate an API key
3. Configure the webhook URL and update it in the source applications that trigger the workflow
4. Customize target languages and voice settings in the translation and ElevenLabs nodes
5. Adjust quality thresholds in "Check Quality Score"
6. Update the output webhook endpoint in the "Return Audio Files" node

**Prerequisites**
Active accounts: OpenAI API access, ElevenLabs subscription.

**Use Cases**
Chinese language learning apps, international marketing content localization

**Customization**
Add additional target languages, modify voice characteristics and speaking rates

**Benefits**
Automates 95% of the translation workflow, delivers publication-ready audio in minutes
by Edson Encinas
🧩 **Template Description**

IP Enrichment & Country Attribution is a lightweight cybersecurity automation that enriches IP addresses with geographic and network intelligence. It validates incoming IPs, filters out private or invalid addresses, and enriches public IPs using an open-source IP enrichment service.

🔄 **How It Works**
1. Receives an IP address via webhook (API or Slack).
2. Validates the IP format and rejects invalid input.
3. Checks for private or internal IP ranges; private IPs are ignored with a clear response.
4. Enriches public IPs using an open-source IP intelligence service.
5. Normalizes country, ISP, and ASN data and applies a severity label.
6. Sends Slack notifications for enriched public IPs.
7. Returns a structured JSON response.

⚙️ **Setup Steps**

Import & Activate Workflow
- Import the JSON template into n8n
- Activate the workflow

Set Up Webhook
- Copy the webhook URL
- Send a POST request with the IP in the body, e.g.:

```json
{ "text": "8.8.8.8" }
```

Using curl:

```bash
curl -X POST https://YOUR_N8N_WEBHOOK_URL \
  -H "Content-Type: application/json" \
  -d '{"text":"8.8.8.8"}'
```

Configure Slack (Slack Alert)
- Create or select Slack credentials in n8n
- Make sure the bot is in your target channel
- Update the Slack node with the correct channel

Slack Slash Command Setup (Optional)
- Enable Slash Commands and create a new command (for example /ip-enrich)
- Set the Request URL to your n8n webhook endpoint
- Choose POST as the request method
- Install the app to your workspace
- Usage example: /ip-enrich 8.8.8.8

🎛️ **Customization Options**
- Enrichment source: Replace or extend the IP intelligence API with additional providers (for example reputation or abuse scoring).
- Slack formatting: Customize the Slack message text, emojis, or use threads for better alert grouping.
- Input sources: Reuse the webhook for other integrations such as SIEM alerts or security tools.
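The validate-then-filter steps can be sketched as one classifier: check the IPv4 format, then test against the RFC 1918 private ranges (plus loopback and link-local). This is an IPv4-only illustration, not the template's actual code:

```javascript
// Classify an IPv4 address as "invalid", "private", or "public".
function classifyIp(ip) {
  const m = ip.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return 'invalid';
  const octets = m.slice(1).map(Number);
  if (octets.some(o => o > 255)) return 'invalid';

  const [a, b] = octets;
  const isPrivate =
    a === 10 ||                          // 10.0.0.0/8
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    a === 127 ||                         // loopback
    (a === 169 && b === 254);            // link-local
  return isPrivate ? 'private' : 'public';
}

console.log(classifyIp('8.8.8.8'));     // "public"
console.log(classifyIp('192.168.1.5')); // "private"
console.log(classifyIp('999.1.1.1'));   // "invalid"
```

Only addresses classified "public" would continue on to the enrichment service; "private" and "invalid" results short-circuit with the clear response described above.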
by Pixcels Themes
**AI Assignment Grader with Automated Reporting**

**Who's it for**
This workflow is designed for educators, professors, academic institutions, coaching centers, and edtech platforms that want to automate the grading of written assignments or test papers. It's ideal for scenarios where consistent evaluation, detailed feedback, and structured result storage are required without manual effort.

**What it does / How it works**
This workflow automates the end-to-end grading process for student assignments submitted as PDFs.
1. A student's test paper is uploaded via a webhook endpoint.
2. The workflow extracts text from the uploaded PDF file.
3. Student metadata (name, assignment title) is prepared and combined with the extracted answers.
4. A predefined answer script (model answers with a marking scheme) is loaded into the workflow.
5. An AI grading agent powered by Gemini compares the student's responses against the answer script. The AI evaluates each question, assigns marks based on correctness and completeness, generates per-question feedback, and calculates total marks, percentage, and grade.
6. The structured grading output is converted into an HTML grading report and a CSV file for records.
7. The final CSV grading report is automatically uploaded to Google Drive for storage and sharing.

All grading logic runs automatically once the test paper is submitted.

**Requirements**
- Google Gemini (PaLM) API credentials
- Google Drive OAuth2 credentials
- A webhook endpoint configured in n8n
- PDF test papers submitted in a supported format
- A predefined answer script with marks per question

**How to set up**
1. Connect your Google Gemini credentials in n8n.
2. Connect your Google Drive account and select the destination folder.
3. Enable and copy the webhook URL for test paper uploads.
4. Customize the Load Answer Script node with your assignment's correct answers and marking scheme.
5. (Optional) Adjust grading instructions or output format in the AI Agent prompt.
6. Test the workflow by uploading a sample PDF assignment.
**How to customize the workflow**
- Update the AI grading rubric to be stricter or more lenient.
- Modify the feedback style (short comments vs. detailed explanations).
- Change grading scales, total marks, or grade boundaries.
- Store results in additional systems (LMS, database, email notifications).
- Add plagiarism checks or similarity scoring before grading.
- Generate PDF reports instead of CSV/HTML if required.

This workflow enables fast, consistent, and scalable assignment grading while giving students clear, structured feedback and educators reliable records.
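The totals step (marks → percentage → grade) can be sketched as follows. The grade boundaries here are illustrative defaults; as noted above, the workflow lets you change grading scales and boundaries:

```javascript
// Sum per-question marks, compute the percentage, and map it to a letter
// grade via configurable boundaries (checked highest first).
const GRADE_BOUNDARIES = [
  [90, 'A'], [80, 'B'], [70, 'C'], [60, 'D'], [0, 'F'],
];

function summarize(questions) {
  const obtained = questions.reduce((sum, q) => sum + q.marksAwarded, 0);
  const total = questions.reduce((sum, q) => sum + q.maxMarks, 0);
  const percentage = Math.round((obtained / total) * 100);
  const grade = GRADE_BOUNDARIES.find(([min]) => percentage >= min)[1];
  return { obtained, total, percentage, grade };
}

console.log(summarize([
  { marksAwarded: 8, maxMarks: 10 },
  { marksAwarded: 7, maxMarks: 10 },
  { marksAwarded: 10, maxMarks: 10 },
])); // { obtained: 25, total: 30, percentage: 83, grade: 'B' }
```

This summary object is what the HTML report and CSV rows would be rendered from.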