by Mychel Garzon
## Reduce MTTR with context-aware AI severity analysis and automated SLA enforcement

Know that feeling when a "low priority" ticket turns into a production fire? Or when your on-call rotation starts showing signs of serious burnout from alert overload? This workflow handles that problem. Two AI agents do the triage work: checking severity, validating against runbooks, and triggering the right response.

### What This Workflow Does

An incident comes in through a webhook, and a two-agent analysis kicks off:

**Agent 1 (Incident Analyzer)** checks the report against your Google Sheets runbook database. It looks for matching known issues, evaluates risk signals, and assigns a confidence-scored severity (P1/P2/P3). Finally, something that stops you from trusting "CRITICAL URGENT!!!" subject lines.

**Agent 2 (Response Planner)** builds the action plan: what to do first, who needs to know, investigation steps, post-incident tasks. Like having your most experienced engineer review every single ticket.

Then routing happens:

- **P1 incidents** → PagerDuty goes off + war room gets created + 15-min SLA timer starts
- **P2 incidents** → Gmail alert + you've got 1 hour to acknowledge
- **P3 incidents** → Standard email notification

Nobody responds in time? The workflow auto-escalates to management. Everything logs to Google Sheets for the inevitable post-mortem.
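The routing table above can be sketched as a small function in an n8n Code node. This is a minimal sketch, not the workflow's actual node wiring; the channel names and SLA minutes are the ones listed above.

```javascript
// Map a validated severity to its notification route and SLA window.
// A simplified sketch of the routing logic described above.
function routeIncident(severity) {
  const routes = {
    P1: { channels: ['pagerduty', 'slack-war-room'], slaMinutes: 15 },
    P2: { channels: ['gmail'], slaMinutes: 60 },
    P3: { channels: ['gmail'], slaMinutes: null }, // P3 has no SLA timer
  };
  const route = routes[severity];
  if (!route) throw new Error(`Unknown severity: ${severity}`);
  return route;
}
```

Because the decision table lives in one place, changing an SLA window later means editing a single object rather than rewiring branches.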
### What Makes This Different

| Feature | This Workflow | Typical AI Triage |
|---------|--------------|-------------------|
| Architecture | Two specialized agents (analyze + coordinate) | Single generic prompt |
| Reliability | Multi-LLM fallback (Gemini → Groq) | Single model, fails if down |
| SLA Enforcement | Auto-waits, checks, escalates autonomously | Sends alert, then done |
| Learning | Feedback webhook improves accuracy over time | Static prompts forever |
| Knowledge Source | Your runbooks (Google Sheets) | Generic templates |
| War Room Creation | Automatic for P1 incidents | Manual |
| Audit Trail | Every decision logged to Sheets | Often missing |

### How It Actually Works: Real Example

**Scenario:** Your monitoring system detects database errors. The webhook receives this messy alert:

```json
{
  "title": "DB Connection Pool Exhausted",
  "description": "user-service reporting 503 errors",
  "severity": "P3",
  "service": "user-service"
}
```

**Agent 1 (Incident Analyzer) reasoning:**

1. Checks the Google Sheets runbook → finds entry: "Connection pool exhaustion typically P2 if customer-facing"
2. Scans the description for risk signals → detects "503 errors" = customer impact
3. Cross-references the service name → confirms user-service is customer-facing
4. Decision: Override P3 → P2 (confidence score: 0.87)
5. Reasoning logged: "Customer-facing service returning errors, matches known high-impact pattern from runbook"

**Agent 2 (Response Coordinator) builds the plan:**

- **Immediate actions:** "Check active DB connections via monitoring dashboard, restart service if pool usage >90%, verify connection pool configuration"
- **Escalation tier:** "team" (not manager-level yet)
- **SLA target:** 60 minutes
- **War room needed:** No (P2 doesn't require it)
- **Recommended assignee:** "Database team" (pulled from runbook escalation contact)
- **Notification channels:** #incidents (not #incidents-critical)

**What happens next (autonomously):**

1. Slack alert posted to #incidents with full context
2. 60-minute SLA timer starts automatically
3. Workflow waits, then checks the Google Sheets "Acknowledged By" column
4. If still empty after 60 min → escalates to #engineering-leads with an "SLA BREACH" tag
5. Everything logged to both the Incidents and AI_Audit_Log sheets

**Human feedback loop (optional but powerful):** The on-call engineer reviews the decision and submits:

```
POST /incident-feedback
{
  "incidentId": "INC-20260324-143022-a7f3",
  "feedback": "Correct severity upgrade - good catch",
  "correctSeverity": "P2"
}
```

This correction gets logged to AI_Audit_Log. Over time, Agent 1 learns which patterns justify severity overrides.

### Key Benefits

- **Stop manual triage:** What took your on-call engineer 5-10 minutes now takes 3 seconds. Agent 1 checks the runbook, Agent 2 builds the response plan.
- **Severity validation = fewer false alarms:** The workflow cross-checks reported severity against runbook patterns and risk signals. That "P1 URGENT" email from marketing? It gets downgraded to P3 automatically.
- **SLAs enforce themselves:** P1 gets 15 minutes. P2 gets 60. Timers run autonomously. If nobody acknowledges, management gets paged. No more "I forgot to check Slack."
- **Uses YOUR runbooks, not generic templates:** Agent 1 pulls context from your Google Sheets runbook database: known issues, escalation contacts, SLA targets. It knows your systems.
- **Multi-LLM fallback:** Primary: Gemini 2.0. Fallback: Groq. Each agent retries 3x at 5-second intervals, so a single provider outage doesn't stop triage.
- **Self-improving feedback loop:** Engineers can submit corrections via the /incident-feedback webhook. The workflow logs every decision plus human feedback to AI_Audit_Log, so you can track accuracy over time and identify patterns where the AI needs tuning.
- **Complete audit trail:** Every incident, every AI decision, every escalation, all in Google Sheets. Perfect for post-mortems and compliance.
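The SLA enforcement step can be sketched as a Code-node check. A minimal sketch: the "Acknowledged By" column and the "SLA BREACH" tag come from the description above, while the `Created At` column name is a hypothetical placeholder for however the workflow records the incident's start time.

```javascript
// Decide whether an incident has breached its SLA.
// `row` is the incident's Google Sheets row after the wait period;
// 'Created At' is a hypothetical column name used for illustration.
function checkSla(row, slaMinutes, now = Date.now()) {
  const acknowledged = Boolean(row['Acknowledged By'] && row['Acknowledged By'].trim());
  const elapsedMin = (now - new Date(row['Created At']).getTime()) / 60000;
  if (acknowledged) return { breached: false, action: 'none' };
  if (elapsedMin < slaMinutes) return { breached: false, action: 'keep-waiting' };
  return { breached: true, action: 'escalate', tag: 'SLA BREACH' };
}
```

In the workflow itself the waiting is handled by a Wait node, so a check like this only runs once the timer has already expired.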
### Required APIs & Credentials

- **Google Gemini API** (main LLM, free tier is fine)
- **Groq API** (backup LLM, also has a free tier)
- **Google Sheets** (stores runbooks and the audit trail)
- **Gmail** (handles P2/P3 notifications)
- **Slack OAuth2 API** (creates war rooms)
- **PagerDuty** (P1 alerts; optional, you can just use Slack/Gmail)

### Setup Complexity

This is not a 5-minute setup. You'll need:

**Google Sheets structure:**
- 3 tabs: Runbooks, Incidents, AI_Audit_Log
- Pre-populated runbook data (services, known issues, escalation contacts)

**Slack configuration:**
- 4 channels: #incidents-critical, #incidents, #management-escalation, #engineering-leads
- Slack OAuth2 with bot permissions

**Estimated setup time:** 30-45 minutes

**Quick start option:** Begin with just Slack + Google Sheets. Add PagerDuty later.

### Who This Is For

- DevOps engineers done being the human incident router
- SRE teams drowning in alert fatigue
- IT ops managers who need real accountability
- Security analysts triaging at high volume
- Platform engineers trying to automate the boring stuff
by explorium
## Outbound Agent - AI-Powered Lead Generation with Natural Language Prospecting

This n8n workflow transforms natural language queries into targeted B2B prospecting campaigns by combining Explorium's data intelligence with AI-powered research and personalized email generation. Simply describe your ideal customer profile in plain English, and the workflow automatically finds prospects, enriches their data, researches them, and creates personalized email drafts.

DEMO Template Demo

### Credentials Required

To use this workflow, set up the following credentials in your n8n environment:

**Anthropic API**
- Type: API Key
- Used for: AI Agent query interpretation, email research, and email writing
- Get your API key at the Anthropic Console

**Explorium API**
- Type: Generic Header Auth
- Header: Authorization
- Value: Bearer YOUR_API_KEY
- Used for: Prospect matching, contact enrichment, professional profiles, and MCP research
- Get your API key at the Explorium Dashboard

**Explorium MCP**
- Type: HTTP Header Auth
- Used for: Real-time company and prospect intelligence research
- Connect to: https://mcp.explorium.ai/mcp

**Gmail**
- Type: OAuth2
- Used for: Creating email drafts
- Alternative options: Outlook, Mailchimp, SendGrid, Lemlist

Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow.

### Workflow Overview

**Node 1: When chat message received**

This node creates an interactive chat interface where users can describe their prospecting criteria in natural language.
- Type: Chat Trigger
- Purpose: Accept natural language queries like "Get 5 marketing leaders at fintech startups who joined in the past year and have valid contact information"
- Example prompts:
  - "Find SaaS executives in New York with 50-200 employees"
  - "Get marketing directors at healthcare companies"
  - "Show me VPs at fintech startups with recent funding"

**Node 2: Chat or Refinement**

This code node manages the conversation flow, handling both initial user queries and validation error feedback.

- Function: Routes either the original chat input or validation error messages to the AI Agent
- Dynamic input: Combines the chatInput and errorInput fields
- Purpose: Creates a feedback loop for validation error correction

**Node 3: AI Agent**

The core intelligence node that interprets natural language and generates structured API calls.

Functionality:
- Interprets user intent from natural language queries
- Maps concepts to Explorium API filters (job levels, departments, company size, revenue, location, etc.)
- Generates valid JSON requests with precise filter criteria
- Handles off-topic queries with helpful guidance
- Connected to the MCP Client for real-time filter specifications

AI components:
- Anthropic Chat Model: Claude Sonnet 4 for query interpretation
- Simple Memory: Maintains conversation context (100-message window)
- Output Parser: Structured JSON output with schema validation
- MCP Client: Connected to https://mcp.explorium.ai/mcp for Explorium specifications

System instructions:
- Expert in converting natural language to Explorium API filters
- Can revise previous responses based on validation errors
- Strict adherence to allowed filter values and formats
- Default settings: mode: "full", size: 10000, page_size: 100, has_email: true

**Node 4: API Call Validation**

This code node validates the AI-generated API request against Explorium's filter specifications.
Validation checks:
- Filter key validity (only allowed filters from the approved list)
- Value format correctness (enums, ranges, country codes)
- No duplicate values in arrays
- Proper range structure for experience fields (total_experience_months, current_role_months)
- Required field presence

Allowed filters:
- country_code, region_country_code, company_country_code, company_region_country_code
- company_size, company_revenue, company_age, number_of_locations
- google_category, naics_category, linkedin_category, company_name
- city_region_country, website_keywords
- has_email, has_phone_number
- job_level, job_department, job_title
- business_id, total_experience_months, current_role_months

Output:
- isValid: Boolean validation status
- validationErrors: Array of specific error messages

**Node 5: Is API Call Valid?**

Conditional routing node that determines the next step based on validation results.

- If valid: Proceed to Explorium API: Fetch Prospects
- If invalid: Route to the Validation Prompter for correction

**Node 6: Validation Prompter**

Generates detailed error feedback for the AI Agent when validation fails. This creates a self-correcting loop where the AI learns from validation errors and regenerates compliant requests by routing back to Node 2 (Chat or Refinement).

**Node 7: Explorium API: Fetch Prospects**

Makes the validated API call to Explorium's prospect database.

- Method: POST
- Endpoint: /v1/prospects/fetch
- Authentication: Header Auth (Bearer token)
- Input: JSON with filters, mode, size, page_size, page
- Returns: Array of matched prospects with prospect IDs based on the filter criteria

**Node 8: Pull Prospect IDs**

Extracts prospect IDs from the fetch response for bulk enrichment.

- Input: Full fetch response with prospect data
- Output: Array of prospect_id values formatted for the enrichment API

**Node 9: Explorium API: Contact Enrichment**

Single enrichment node that enhances prospect data with both contact and profile information.
- Method: POST
- Endpoint: /v1/prospects/enrich
- Enrichment types: contacts, profiles
- Authentication: Header Auth (Bearer token)
- Input: Array of prospect IDs from Node 8
- Returns:
  - Contacts: Professional emails (current, verified), phone numbers (mobile, work), email validation status, all available email addresses
  - Profiles: Full professional history, current role details, company information, skills and expertise, education background, experience timeline, job titles and seniority levels

**Node 10: Clean Output Data**

Transforms and structures the enriched data for downstream processing.

**Node 11: Loop Over Items**

Iterates through each prospect to generate individualized research and emails.

- Batch size: 1 (processes prospects one at a time)
- Purpose: Enable personalized research and email generation for each prospect
- Loop control: Processes until all prospects are complete

**Node 12: Research Email**

AI-powered research agent that investigates each prospect using Explorium MCP.

Input data:
- Prospect name, job title, company name, company website
- LinkedIn URL, job department, skills

Research focus:
- Company automation tool usage (n8n, Zapier, Make, HubSpot, Salesforce)
- Data enrichment practices
- Tech stack and infrastructure (Snowflake, Segment, etc.)
- Recent company activity and initiatives
- Pain points related to B2B data (outdated CRM data, manual enrichment, static workflows)
- Public content (speaking engagements, blog posts, thought leadership)

AI components:
- Anthropic Chat Model1: Claude Sonnet 4 for research
- Simple Memory1: Maintains research context
- Explorium MCP1: Connected to https://mcp.explorium.ai/mcp for real-time intelligence

Output: Structured JSON with research findings including automation tools, pain points, and personalization notes

**Node 13: Email Writer**

Generates personalized cold email drafts based on research findings.
Input data:
- Contact info from Loop Over Items
- Current experience and skills
- Research findings from the Research Email agent
- Company data (name, website)

AI components:
- Anthropic Chat Model3: Claude Sonnet 4 for email writing
- Structured Output Parser: Enforces a JSON schema with email, subject, and message fields

Output schema:
- email: Selected prospect email address (professional preferred)
- subject: Compelling, personalized subject line
- message: HTML-formatted email body

**Node 14: Create a draft (Gmail)**

Creates email drafts in Gmail for review before sending.

- Resource: Draft
- Subject: From the Email Writer output
- Message: HTML-formatted email body
- Send to: Selected prospect email address
- Authentication: Gmail OAuth2
- After creation: Loops back to Node 11 (Loop Over Items) to process the next prospect

Alternative output options:
- Outlook: Create drafts in Microsoft Outlook
- Mailchimp: Add to an email campaign
- SendGrid: Queue for sending
- Lemlist: Add to a cold email sequence

### Workflow Flow Summary

1. Input: User describes target prospects in natural language via the chat interface
2. Interpret: AI Agent converts the query to structured Explorium API filters using MCP
3. Validate: API call validation ensures filter compliance
4. Refine: If invalid, the error feedback loop helps the AI correct the request
5. Fetch: Retrieve matching prospect IDs from the Explorium database
6. Enrich: Parallel bulk enrichment of contact details and professional profiles
7. Clean: Transform and structure the enriched data
8. Loop: Process each prospect individually
9. Research: AI agent uses Explorium MCP to gather company and prospect intelligence
10. Write: Generate a personalized email based on the research
11. Draft: Create reviewable email drafts in your preferred platform

This workflow eliminates manual prospecting work by combining natural language processing, intelligent data enrichment, automated research, and personalized email generation, taking you from "I need marketing leaders at fintech companies" to personalized, research-backed email drafts in minutes.

### Customization Options

**Flexible triggers.** The chat interface can be replaced with:
- Scheduled runs for recurring prospecting
- Webhook triggers from CRM updates
- Manual execution for ad-hoc campaigns

**Scalable enrichment.** Adjust enrichment depth by:
- Adding more Explorium API endpoints (technographics, funding, news)
- Configuring prospect batch sizes
- Customizing the data cleaning logic

**Output destinations.** Route emails to your preferred platform:
- Email platforms: Gmail, Outlook, SendGrid, Mailchimp
- Sales tools: Lemlist, Outreach, SalesLoft
- CRM integration: Salesforce, HubSpot (create leads with research)
- Collaboration: Slack notifications, Google Docs reports

**AI model flexibility.** Swap AI providers based on your needs:
- Default: Anthropic Claude (Sonnet 4)
- Alternatives: OpenAI GPT-4, Google Gemini

### Setup Notes

- Domain filtering: The workflow prioritizes professional emails; customize the email selection logic in the Clean Output Data node
- MCP configuration: Explorium MCP requires Header Auth setup; ensure credentials are properly configured
- Rate limits: Adjust the Loop Over Items batch size if you hit API rate limits
- Memory context: Simple Memory maintains conversation history; increase the window length for longer sessions
- Validation: The AI self-corrects through validation loops; monitor early runs to ensure filter accuracy

This workflow represents a complete AI-powered sales development representative (SDR) that handles prospecting, research, and personalized outreach with minimal human intervention.
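Node 4's validation can be sketched in an n8n Code node along these lines. This is a simplified sketch: it covers only the allowed-key and duplicate-value checks listed above, while the real node also validates enums, ranges, and country codes.

```javascript
// Validate an AI-generated Explorium filter object against the allowed
// filter keys and reject duplicate values in array-valued filters.
const ALLOWED_FILTERS = new Set([
  'country_code', 'region_country_code', 'company_country_code',
  'company_region_country_code', 'company_size', 'company_revenue',
  'company_age', 'number_of_locations', 'google_category', 'naics_category',
  'linkedin_category', 'company_name', 'city_region_country',
  'website_keywords', 'has_email', 'has_phone_number', 'job_level',
  'job_department', 'job_title', 'business_id',
  'total_experience_months', 'current_role_months',
]);

function validateFilters(filters) {
  const validationErrors = [];
  for (const [key, value] of Object.entries(filters)) {
    if (!ALLOWED_FILTERS.has(key)) {
      validationErrors.push(`Unknown filter key: ${key}`);
    }
    if (Array.isArray(value) && new Set(value).size !== value.length) {
      validationErrors.push(`Duplicate values in filter: ${key}`);
    }
  }
  return { isValid: validationErrors.length === 0, validationErrors };
}
```

The `validationErrors` array is exactly what the Validation Prompter needs: each message names the offending filter, so the AI Agent can regenerate a compliant request.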
by Avkash Kakdiya
## How it works

This workflow triggers when a HubSpot deal stage changes to Closed Won and automatically generates an invoice. It collects deal and contact data, builds a styled invoice, converts it into a PDF, and sends it to the client. The system logs all invoices and alerts the team, then monitors payment status with automated reminders. If payment is delayed, it escalates the issue and handles errors separately.

## Step-by-step

**Trigger and data collection**
- HubSpot - Deal Trigger – Starts the workflow on a deal stage change.
- IF - Is Deal Closed Won? – Filters only Closed Won deals.
- HTTP - Get Deal Details – Fetches deal information.
- HTTP - Get Deal Associations – Retrieves linked contacts.
- Code - Extract Contact ID – Extracts and formats the data.
- HTTP - Get Contact Details – Gets customer details.

**Invoice generation**
- Code - Build Invoice + HTML – Creates the invoice data and HTML layout.

**Send and store invoice**
- HTTP - Generate PDF – Converts the HTML into a PDF.
- Google Sheets - Log Invoice – Stores invoice records.
- Notion - Create Invoice Record – Tracks the invoice internally.
- Gmail - Send Invoice Email – Sends the invoice to the client.
- Slack - Invoice Sent Alert – Notifies the team.

**Payment tracking and follow-up**
- Wait - 7 Day Payment Window – Waits before checking payment.
- HTTP - Recheck Deal Stage – Checks payment status.
- IF - Payment Received? – Branches based on payment.
- Gmail - Follow-up Email #1 – Sends a reminder if unpaid.
- Wait - 5 More Days – Adds an extra delay.
- HTTP - Final Payment Check – Verifies the final status.
- Slack - Payment Confirmed – Confirms successful payment.

**Escalation handling**
- IF - Still Unpaid? (Escalate) – Detects overdue invoices.
- Slack - Escalation Alert – Alerts the team for action.
- Notion - Flag as Overdue – Updates the record status.
- Slack - Late Payment Confirmed – Handles delayed payments.

**Error handling (separate flow)**
- Error Trigger – Captures workflow failures.
- Slack - Workflow Error Alert – Sends the error notification.

## Why use this?
- Fully automates invoicing from deal closure to payment tracking
- Reduces manual billing work and human errors
- Improves payment collection with automated reminders
- Provides clear visibility with logs and team alerts
- Ensures reliability with built-in error monitoring
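The "Code - Build Invoice + HTML" step can be sketched like this. A minimal sketch only: the input field names (`dealName`, `amount`, `contactName`, `invoiceNumber`) are hypothetical illustrations, and the real node produces a fully styled layout rather than bare markup.

```javascript
// Build a minimal invoice HTML document from deal and contact data.
// Field names in the destructured parameter are illustrative, not the
// workflow's actual HubSpot property names.
function buildInvoiceHtml({ dealName, amount, contactName, invoiceNumber }) {
  const issued = new Date().toISOString().slice(0, 10); // YYYY-MM-DD
  return [
    '<html><body>',
    `<h1>Invoice ${invoiceNumber}</h1>`,
    `<p>Billed to: ${contactName}</p>`,
    `<p>Deal: ${dealName}</p>`,
    `<p>Amount due: $${amount.toFixed(2)}</p>`,
    `<p>Issued: ${issued}</p>`,
    '</body></html>',
  ].join('\n');
}
```

The resulting string is what the downstream "HTTP - Generate PDF" step would convert into the client-facing PDF.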
by isaWOW
## Description

Automatically re-engage old or inactive clients by sending AI-personalized follow-up emails using Claude 3.7 Sonnet, Gmail, and Google Sheets, with smart reply detection to avoid messaging clients who are already in active conversation.

## What this workflow does

This workflow runs every day on a schedule and processes your entire old-client database automatically. For each client, it checks whether they've replied to any of your emails in the last 30 days. If they have, it pauses automation and flags that client for manual reply. If they haven't, it fetches their last 10 email conversations, pulls scenario-based AI prompts and follow-up message direction templates from your Google Sheet, feeds everything into Claude 3.7 Sonnet to draft a highly personalized re-engagement email, sends it via Gmail, and updates your tracking sheet, all without any manual intervention.

Perfect for agencies and freelancers who have past clients sitting idle in their database and want a fully automated, intelligent outreach system that feels human, not spammy.

## Key Features

- **Smart 30-day reply detection:** Before sending any email, the workflow checks the client's most recent email timestamp. If they replied within the last 30 days, automation is skipped and the sheet is flagged as "Reply Manually" so your team knows to handle it personally.
- **Scenario-based AI prompting:** Each client in your sheet is tagged with a Scenario. The workflow pulls the exact AI prompt and follow-up message direction matching that scenario from your Google Sheet, so Claude always writes from the right context and angle.
- **Full conversation context for Claude:** Instead of drafting blindly, Claude receives the last 10 email conversations with the client, their original goal when hiring your agency, the detailed reason their contract ended, their industry, industry vertical, and the specific services they used, resulting in emails that feel genuinely personalized.
- **Structured email output:** Claude outputs properly structured JSON with a subject line, greeting, body content, and closing, ensuring the email is always cleanly formatted before sending.
- **Live Google Sheets tracking:** After every email sent, the workflow increments the email count in your sheet and updates the workflow status, giving you a live dashboard of where each client stands in the re-engagement sequence.
- **Rate limiting built in:** A 1-minute wait between each client prevents Gmail API rate-limit errors and ensures smooth processing even for large client lists.
- **Loop-based batch processing:** Every client in your database is processed one by one in a controlled loop: no skipped records, no duplicates.

## How it works

**Step 1 — Daily trigger fires:** The workflow runs automatically every day using a Schedule Trigger. No manual action needed.

**Step 2 — Loads client database:** Reads all rows from the "Database" sheet in your Google Sheet where the "Manually Stop Workflow" column is not flagged, so already-stopped clients are excluded automatically.

**Step 3 — Loops through each client:** Passes each client record one by one into the processing loop using the Split In Batches node.

**Step 4 — Checks latest email from client:** Fetches the single most recent email from the client's address using Gmail's filter by sender.

**Step 5 — 30-day window check:** A JavaScript code node calculates how many days ago that email was sent, checks whether it falls within the last 30 days, and formats the date cleanly (e.g., 21-Mar-2026).

**Step 6 — Routes based on reply status:** A Switch node branches the flow:
- If replied within 30 days → updates the sheet with "Reply Manually" status and the latest email content, then loops back to the next client.
- If no recent reply → continues to the AI email generation path.
**Step 7 — Parallel data fetching:** Three nodes run in parallel, fetching the scenario-specific follow-up message template, the situation-based AI prompt, and the last 10 email conversations with the client from Gmail.

**Step 8 — Bundles email history:** All 10 fetched emails are aggregated into a single text bundle to be passed to Claude as conversation context.

**Step 9 — Merges all inputs:** A Merge node combines the follow-up template, situation prompt, and email conversation bundle into one unified data object.

**Step 10 — AI drafts the email:** Claude 3.7 Sonnet receives the full context (prompt, follow-up direction, conversation history, the client's goal, the reason the contract ended, industry details, and services used) and drafts a re-engagement email tailored specifically to that client.

**Step 11 — Structured output parsing:** The output is parsed into a clean JSON structure with subject, greeting, content, and closing fields using a Structured Output Parser.

**Step 12 — Sends email via Gmail:** The formatted email is sent directly from your Gmail account to the client.

**Step 13 — Updates sheet and loops:** The "Number of Emails Sent" counter is incremented in your sheet, the workflow waits 1 minute for rate limiting, then loops back to process the next client.
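Step 5's date check can be sketched as an n8n Code-node snippet. A minimal sketch: the real node reads the timestamp from the Gmail node's output rather than taking it as a parameter.

```javascript
// Given the timestamp of the client's most recent email, decide whether
// it falls inside the reply-detection window and format the date as
// DD-Mon-YYYY (e.g., 21-Mar-2026). Dates are handled in UTC.
function checkReplyWindow(lastEmailDate, windowDays = 30, now = new Date()) {
  const last = new Date(lastEmailDate);
  const daysAgo = Math.floor((now - last) / 86400000); // ms per day
  const months = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'];
  const formatted = `${String(last.getUTCDate()).padStart(2, '0')}-` +
    `${months[last.getUTCMonth()]}-${last.getUTCFullYear()}`;
  return { daysAgo, repliedRecently: daysAgo <= windowDays, formatted };
}
```

Changing the `windowDays` default is the single-line edit the "Adjust the reply detection window" customization below refers to.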
## Setup Requirements

**Tools you'll need:**
- Active n8n instance (self-hosted or n8n Cloud)
- Google Sheets with OAuth access for client database management
- Gmail account with OAuth credentials
- Anthropic API key (for Claude 3.7 Sonnet)

**Estimated setup time:** 20–30 minutes

## Configuration Steps

1. **Add credentials in n8n:**
   - Google Sheets OAuth API
   - Gmail OAuth2 API
   - Anthropic API (for Claude 3.7 Sonnet)
2. **Set up your Google Sheet with these tabs:**

   Tab 1 — Database (main client list):
   - Client Name
   - Email Address
   - Scenario
   - Number of Emails Sent
   - Followup Workflow (Running / Reply Manually)
   - Latest Email from Client
   - Date of Latest Email from Client
   - The goal which client wanted to achieve by hiring an agency
   - Detailed Reason Why Contract Got Ended
   - Client is from Industry of
   - Client's Industry Vertical
   - Specific services they used
   - Manually Stop Workflow (STOP)

   Tab 2 — Follow-up Messages (message templates):
   - Scenario
   - Number of Emails Sent
   - Followup Message Direction

   Tab 3 — Situation (AI prompts per scenario):
   - Scenario
   - Prompt
3. **Update the Google Sheet ID:** Replace all instances of YOUR_GOOGLE_SHEET_ID in the workflow nodes with your actual Google Sheet ID.
4. **Update the send email address:** In the "Send Re-engagement Email" node, replace YOUR_EMAIL@yourdomain.com with the Gmail address you want to send from.
5. **Fill your client database:** Add all old/inactive clients with their details, scenario tags, and goals to the Database tab.
6. **Create your scenarios and templates:** Fill the Follow-up Messages and Situation tabs with the re-engagement angles and AI prompt instructions relevant to your business.
7. **Activate the workflow:** Turn it on and let it run daily, automatically.

## Use Cases

- **Marketing & digital agencies:** Re-engage a full database of past clients who stopped using your services: automatically, every single day, with zero manual effort.
- **Freelancers:** Keep past clients warm by sending intelligent, personalized check-in emails based on what they originally hired you for and why they left.
- **SaaS companies:** Run structured win-back campaigns for churned users by mapping scenarios to different churn reasons and tailoring AI messages accordingly.
- **Consultants:** Maintain long-term relationships with former clients by sending contextually relevant follow-ups that reference their original goals and show how you've improved.
- **Sales teams:** Use the 30-day reply detection to automatically filter out recently responsive leads and focus AI outreach only on truly cold contacts in your pipeline.

## Customization Options

- **Change the AI model:** Swap Claude 3.7 Sonnet for any other Anthropic model, or replace it with OpenAI GPT-4 in the LLM node; the agent and parser work with any LangChain-compatible model.
- **Adjust the reply detection window:** Change the 30 in the Date Checker code node to any number of days that fits your follow-up cadence (e.g., 14 days for more aggressive outreach).
- **Add more scenario types:** Simply add new rows to your Follow-up Messages and Situation sheets; the workflow dynamically fetches matching templates, so no node changes are needed.
- **Modify the email structure:** Edit the Structured Output Parser schema to add or remove fields such as a PS section, CTA button text, or a custom signature block.
- **Add notifications:** Connect a Slack, Discord, or webhook node after the Send Email node to notify your team every time a re-engagement email goes out.
- **Expand tracking:** Add more columns to your Google Sheet update nodes (e.g., last sent date, email subject used) to build a richer outreach history.

## Troubleshooting

- **Gmail not fetching emails:** Confirm your Gmail OAuth credentials are correctly connected and the sender filter uses the exact email address format from your sheet. Make sure Gmail API access is enabled in your Google Cloud Console.
- **Claude not generating emails:** Verify your Anthropic API key is active and has sufficient credits. Check that the Merge node receives all 3 inputs before passing data to the AI agent.
- **Sheet not updating:** Ensure the Google Sheets OAuth token has edit permissions on your spreadsheet. Confirm the "Email Address" column is set as the matching key in all update nodes.
- **Emails sending to the wrong address:** Double-check that the sendTo field in the Send Email node points to YOUR_EMAIL@yourdomain.com or the correct dynamic field reference.
- **Loop not processing all clients:** If some clients are being skipped, check the filter in the Old Client Database node; make sure the "Manually Stop Workflow (STOP)" column filter only excludes rows where the value is explicitly set.
- **Rate-limit errors on Gmail:** Increase the wait time in the Rate Limit Wait node from 1 minute to 2–3 minutes if you have a large client list or are hitting Gmail's sending limits.

## Resources

- n8n Documentation
- Anthropic Claude API
- Gmail API Reference
- Google Sheets API
- n8n LangChain Agent Node

## Important Notes

This workflow is designed specifically for re-engagement outreach to past or inactive clients. It does not handle inbound replies: once a client responds, the workflow flags them for manual handling and stops automation for that contact. Make sure your Google Sheet is properly structured with all required columns before activating, as missing fields will make the AI prompt incomplete and degrade email quality. Always test with a small batch of 2–3 clients before activating at full scale.

## Support

Need help setting this up, or want a custom version built for your specific use case?

📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
by Feras Dabour
## Who's it for

This template is for founders, finance teams, and solo operators who receive lots of invoices by email and want them captured automatically in a single, searchable source of truth. If you're tired of hunting through your inbox for invoice PDFs or "that one receipt from three months ago," this is for you.

## What it does / How it works

The workflow polls your Gmail inbox on a schedule and fetches new messages, including their attachments. A JavaScript Code node restructures all attachments, and a PDF extraction node reads any attached PDFs. An AI "Invoice Recognition Agent" then analyzes the email body and attachments to decide whether the email actually contains an invoice. If not, the workflow stops. If it is an invoice, a second AI "Invoice Data Extractor" pulls structured fields such as date_email, date_invoice, invoice_nr, description, provider, net_amount, vat, gross_amount, label (saas/hardware/other), and currency.

Depending on whether the invoice is in an attachment or directly in the email text, the workflow either:
- uploads the invoice file to Google Drive, or
- records a direct link to the email,

then appends/updates a row in Google Sheets with all invoice parameters plus a Drive link, and finally marks the Gmail message as read.

## How to set up

1. Add and authenticate:
   - Gmail credentials
   - Google Sheets credentials
   - Google Drive credentials
   - OpenAI (or compatible) credentials for the AI nodes
2. Create or select a Google Sheet with the expected columns (date_email, date_invoice, invoice_nr, description, provider, net_amount, vat, gross_amount, label, currency, link).
3. Create or select a Google Drive folder where invoices/docs should be stored.
4. Adjust the Gmail Trigger filters (labels, search query, polling interval) to match the mailbox you want to process.
5. Update node credentials and resource IDs (Sheet, Drive folder) via the node UIs, not hardcoded in HTTP nodes.
## Requirements

- n8n instance (cloud or self-hosted)
- Gmail account with OAuth2 set up
- Google Drive and Google Sheets access
- OpenAI (or compatible) API key configured in n8n
- Sufficient permissions to read emails, read/write Drive files, and edit the target Sheet

## How to customize the workflow

- **Change invoice categories:** Extend the label enum (e.g., add "services", "subscriptions") in the extraction schema and adjust any downstream logic.
- **Refine invoice detection:** Tweak the AI prompts to be more or less strict about what counts as an invoice or receipt.
- **Add notifications:** After updating the Sheet, send a Slack/Teams message or email summary for high-value invoices.
- **Filter by sender or subject:** Narrow the Gmail Trigger to specific vendors, labels, or keywords.
- **Extend the data model:** Add fields (e.g., cost center, project code) to the extractor prompt and Sheet mapping to fit your bookkeeping setup.
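Before a row is appended to the Sheet, the extractor's output can be sanity-checked in a small Code node. A minimal sketch, assuming the field names listed above; the 0.01 rounding tolerance and the set of required fields are assumptions, not part of the template.

```javascript
// Check that an extracted invoice record is internally consistent:
// required fields present, label within the allowed enum, and
// net_amount + vat matching gross_amount within a small tolerance.
function validateInvoice(rec) {
  const errors = [];
  for (const field of ['date_invoice', 'invoice_nr', 'provider', 'gross_amount', 'currency']) {
    if (rec[field] === undefined || rec[field] === '') errors.push(`missing ${field}`);
  }
  if (!['saas', 'hardware', 'other'].includes(rec.label)) {
    errors.push(`bad label: ${rec.label}`);
  }
  if (typeof rec.net_amount === 'number' && typeof rec.vat === 'number' &&
      Math.abs(rec.net_amount + rec.vat - rec.gross_amount) > 0.01) {
    errors.push('net_amount + vat does not equal gross_amount');
  }
  return errors; // empty array means the record looks consistent
}
```

Records that fail a check like this could be routed to a review branch instead of silently landing in the Sheet.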
by Vigh Sandor
Network Vulnerability Scanner (using Nmap as the engine) with Automated CVE Report Workflow Overview This n8n workflow provides comprehensive network vulnerability scanning with automated CVE enrichment and professional report generation. It performs Nmap scans, queries the National Vulnerability Database (NVD) for CVE information, generates detailed HTML/PDF reports, and distributes them via Telegram and email. Key Features Automated Network Scanning**: Full Nmap service and version detection scan CVE Enrichment**: Automatic vulnerability lookup using NVD API CVSS Scoring**: Vulnerability severity assessment with CVSS v3.1/v3.0 scores Professional Reporting**: HTML reports with detailed findings and recommendations PDF Generation**: Password-protected PDF reports using Prince XML Multi-Channel Distribution**: Telegram and email delivery Multiple Triggers**: Webhook API, web form, manual execution, scheduled scans Rate Limiting**: Respects NVD API rate limits Comprehensive Data**: Service detection, CPE matching, CVE details with references Use Cases Regular security audits of network infrastructure Compliance scanning for vulnerability management Penetration testing reconnaissance phase Asset inventory with vulnerability context Continuous security monitoring Vulnerability assessment reporting for management DevSecOps integration for infrastructure testing Setup Instructions Prerequisites Before setting up this workflow, ensure you have: System Requirements n8n instance (self-hosted) with command execution capability Alpine Linux base image (or compatible Linux distribution) Minimum 2 GB RAM (4 GB recommended for large scans) 2 GB free disk space for dependencies Network access to scan targets Internet connectivity for NVD API Required Knowledge Basic networking concepts (IP addresses, ports, protocols) Understanding of CVE/CVSS vulnerability scoring Nmap scanning basics External Services Telegram Bot (optional, for Telegram notifications) Email server / SMTP 
credentials (optional, for email reports) NVD API access (public, no API key required but rate-limited) Step 1: Understanding the Workflow Components Core Dependencies Nmap: Network scanner Purpose: Port scanning, service detection, version identification Usage: Performs TCP SYN scan with service/version detection nmap-helper: JSON conversion tool Repository: https://github.com/net-shaper/nmap-helper Purpose: Converts Nmap XML output to JSON format Prince XML: HTML to PDF converter Website: https://www.princexml.com Version: 16.1 (Alpine 3.20) Purpose: Generates professional PDF reports from HTML Features: Password protection, print-optimized formatting NVD API: Vulnerability database Endpoint: https://services.nvd.nist.gov/rest/json/cves/2.0 Purpose: CVE information, CVSS scores, vulnerability descriptions Rate Limit: Public API allows limited requests per minute Documentation: https://nvd.nist.gov/developers Step 2: Telegram Bot Configuration (Optional) If you want to receive reports via Telegram: Create Telegram Bot Open Telegram and search for @BotFather Start a chat and send /newbot Follow prompts: Bot name: Network Scanner Bot (or your choice) Username: network_scanner_bot (must end with 'bot') BotFather will provide: Bot token: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz (save this) Bot URL: https://t.me/your_bot_username Get Your Chat ID Start a chat with your new bot Send any message to the bot Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Find your chat ID in the response Save this chat ID (e.g., 123456789) Alternative: Group Chat ID For sending to a group: Add bot to your group Send a message in the group Check getUpdates URL Group chat IDs are negative: -1001234567890 Add Credentials to n8n Navigate to Credentials in n8n Click Add Credential Select Telegram API Fill in: Access Token: Your bot token from BotFather Click Save Test connection if available Step 3: Email Configuration (Optional) If you want to receive reports via email: Add SMTP 
Credentials to n8n Navigate to Credentials in n8n Click Add Credential Select SMTP Fill in: Host: SMTP server address (e.g., smtp.gmail.com) Port: SMTP port (587 for TLS, 465 for SSL, 25 for unencrypted) User: Your email username Password: Your email password or app password Secure: Enable for TLS/SSL Click Save Gmail Users: Enable 2-factor authentication Generate app-specific password: https://myaccount.google.com/apppasswords Use app password in n8n credential Step 4: Import and Configure Workflow Configure Basic Parameters Locate "1. Set Parameters" Node: Click the node to open settings Default configuration: network: Input from webhook/form/manual trigger timestamp: Auto-generated (format: yyyyMMdd_HHmmss) report_password: Almafa123456 (change this!) Change Report Password: Edit report_password assignment Set strong password: 12+ characters, mixed case, numbers, symbols This password will protect the PDF report Save changes Step 5: Configure Notification Endpoints Telegram Configuration Locate "14/a. Send Report in Telegram" Node: Open node settings Update fields: Chat ID: Replace -123456789012 with your actual chat ID Credentials: Select your Telegram credential Save changes Message customization: Current: Sends PDF as document attachment Automatic filename: vulnerability_report_<timestamp>.pdf No caption by default (add if needed) Email Configuration Locate "14/b. 
Send Report in Email with SMTP" Node: Open node settings Update fields: From Email: report.creator@example.com → Your sender email To Email: report.receiver@example.com → Your recipient email Subject: Customize if needed (default includes network target) Text: Email body message Credentials: Select your SMTP credential Save changes Multiple Recipients: Change toEmail field to comma-separated list: admin@example.com, security@example.com, manager@example.com Add CC/BCC: In node options, add: cc: Carbon copy recipients bcc: Blind carbon copy recipients Step 6: Configure Triggers The workflow supports 4 trigger methods: Trigger 1: Webhook API (Production) Locate "Webhook" Node: Path: /vuln-scan Method: POST Response: Immediate acknowledgment "Process started!" Async: Scan runs in background Trigger 2: Web Form (User-Friendly) Locate "On form submission" Node: Path: /webhook-test/form/target Method: GET (form display), POST (form submit) Form Title: "Add scan parameters" Field: network (required) Form URL: https://your-n8n-domain.com/webhook-test/form/target Users can: Open form URL in browser Enter target network/IP Click submit Receive confirmation Trigger 3: Manual Execution (Testing) Locate "Manual Trigger" Node: Click to activate Opens workflow with "Pre-Set-Target" node Default target: scanme.nmap.org (Nmap's official test server) To change default target: Open "Pre-Set-Target" node Edit network value Enter your test target Save changes Trigger 4: Scheduled Scans (Automated) Locate "Schedule Trigger" Node: Default: Daily at 1:00 AM Uses "Pre-Set-Target" for network To change schedule: Open node settings Modify trigger time: Hour: 1 (1 AM) Minute: 0 Day of week: All days (or select specific days) Save changes Schedule Examples: Every day at 3 AM: Hour: 3, Minute: 0 Weekly on Monday at 2 AM: Hour: 2, Day: Monday Twice daily (8 AM, 8 PM): Create two Schedule Trigger nodes Step 7: Test the Workflow Recommended Test Target Use Nmap's official test server for initial 
testing: Target**: scanme.nmap.org Purpose**: Official Nmap testing server Safe**: Designed for scanning practice Permissions**: Public permission to scan Important: Never scan targets without permission. Unauthorized scanning is illegal. Manual Test Execution Open workflow in n8n editor Click Manual Trigger node to select it Click Execute Workflow button Workflow will start with scanme.nmap.org as target Monitor Execution Watch nodes turn green as they complete: Need to Add Helper?: Checks if nmap-helper installed Add NMAP-HELPER: Installs helper (if needed, ~2-3 minutes) Optional Params Setter: Sets scan parameters 2. Execute Nmap Scan: Runs scan (5-30 minutes depending on target) 3. Parse NMAP JSON to Services: Extracts services (~1 second) 5. CVE Enrichment Loop: Queries NVD API (1 second per service) 8-10. Report Generation: Creates HTML/PDF reports (~5-10 seconds) 12. Convert to PDF: Generates password-protected PDF (~10 seconds) 14a/14b. Distribution: Sends reports Check Outputs Click nodes to view outputs: Parse NMAP JSON**: View discovered services CVE Enrichment**: See vulnerabilities found Prepare Report Structure**: Check statistics Read Report PDF**: Download report to verify Verify Distribution Telegram: Open Telegram chat with your bot Check for PDF document Download and open with password Email: Check inbox for report email Verify subject line includes target network Download PDF attachment Open with password How to Use Understanding the Scan Process Initiating Scans Method 1: Webhook API Use curl or any HTTP client and add "network" parameter in a POST request. Response: Process started! Scan runs asynchronously. You'll receive results via configured channels (Telegram/Email). 
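The webhook trigger described above can be called from any HTTP client, not just curl. A minimal sketch, assuming n8n's default `/webhook/` production prefix in front of the `/vuln-scan` path (verify the exact production URL in your Webhook node settings):

```javascript
// Build the POST request that starts a scan via the /vuln-scan webhook.
// The "network" body parameter matches the workflow above; the base URL
// and /webhook/ prefix are placeholders for your n8n instance.
function buildScanRequest(baseUrl, network) {
  return {
    url: `${baseUrl}/webhook/vuln-scan`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ network }),
    },
  };
}

// Usage (fire-and-forget; the workflow replies "Process started!" immediately
// and delivers results later via Telegram/email):
// const { url, options } = buildScanRequest('https://your-n8n.example.com', '192.168.1.0/24');
// const res = await fetch(url, options);
```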
Method 2: Web Form Open form URL in browser: https://your-n8n.com/webhook-test/form/target Fill in form: network: Enter target (IP, range, domain) Click Submit Receive confirmation Wait for report delivery Advantages: No command line needed User-friendly interface Input validation Good for non-technical users Method 3: Manual Execution For testing or one-off scans: Open workflow in n8n Edit "Pre-Set-Target" node: Change network value to your target Click Manual Trigger node Click Execute Workflow Monitor progress in real-time Advantages: See execution in real-time Debug issues immediately Test configuration changes View intermediate outputs Method 4: Scheduled Scans For regular, automated security audits: Configure "Schedule Trigger" node with desired time Configure "Pre-Set-Target" node with default target Activate workflow Scans run automatically on schedule Advantages: Automated security monitoring Regular compliance scans No manual intervention needed Consistent scheduling Scan Targets Explained Supported Target Formats Single IP Address: 192.168.1.100 10.0.0.50 CIDR Notation (Subnet): 192.168.1.0/24 # Scans 192.168.1.0-255 (254 hosts) 10.0.0.0/16 # Scans 10.0.0.0-255.255 (65534 hosts) 172.16.0.0/12 # Scans entire 172.16-31.x.x range IP Range: 192.168.1.1-50 # Scans 192.168.1.1 to 192.168.1.50 10.0.0.1-10.0.0.100 # Scans across range Multiple Targets: 192.168.1.1,192.168.1.2,192.168.1.3 Hostname/Domain: scanme.nmap.org example.com server.local Choosing Appropriate Targets Development/Testing: Use scanme.nmap.org (official test target) Use your own isolated lab network Never scan public internet without permission Internal Networks: Use CIDR notation for entire subnets Scan DMZ networks separately from internal Consider network segmentation in scan design Understanding Report Contents Report Structure The generated report includes: 1. 
Executive Summary: Total hosts discovered Total services identified Total vulnerabilities found Severity breakdown (Critical, High, Medium, Low, Info) Scan date and time Target network 2. Overall Statistics: Visual dashboard with key metrics Severity distribution chart Quick risk assessment 3. Detailed Findings by Host: For each discovered host: IP address Hostname (if resolved) List of open ports and services Service details: Port number and protocol Service name (e.g., http, ssh, mysql) Product (e.g., Apache, OpenSSH, MySQL) Version (e.g., 2.4.41, 8.2p1, 5.7.33) CPE identifier 4. Vulnerability Details: For each vulnerable service: CVE ID**: Unique vulnerability identifier (e.g., CVE-2021-44228) Severity**: CRITICAL / HIGH / MEDIUM / LOW / INFO CVSS Score**: Numerical score (0.0-10.0) Published Date**: When vulnerability was disclosed Description**: Detailed vulnerability explanation References**: Links to advisories, patches, exploits 5. Recommendations: Immediate actions (patch critical/high severity) Long-term improvements (security processes) Best practices Vulnerability Severity Levels CRITICAL (CVSS 9.0-10.0): Color: Red Characteristics: Remote code execution, full system compromise Action: Immediate patching required Examples: Log4Shell, EternalBlue, Heartbleed HIGH (CVSS 7.0-8.9): Color: Orange Characteristics: Significant security impact, data exposure Action: Patch within days Examples: SQL injection, privilege escalation, authentication bypass MEDIUM (CVSS 4.0-6.9): Color: Yellow Characteristics: Moderate security impact Action: Patch within weeks Examples: Information disclosure, denial of service, XSS LOW (CVSS 0.1-3.9): Color: Green Characteristics: Minor security impact Action: Patch during regular maintenance Examples: Path disclosure, weak ciphers, verbose error messages INFO (CVSS 0.0): Color: Blue Characteristics: No vulnerability found or informational Action: No action required, awareness only Examples: Service version detected, no known CVEs 
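Two of the figures quoted above are easy to verify programmatically: the usable-host counts for CIDR prefixes, and the CVSS-to-severity mapping used to bucket findings in the report. A sketch (these helpers are illustrative, not the workflow's own code; the severity boundaries follow the CVSS v3.1 qualitative rating scale):

```javascript
// Usable hosts for an IPv4 CIDR prefix: a /24 -> 254, a /16 -> 65534,
// matching the figures above. Network and broadcast addresses are
// excluded for prefixes shorter than /31.
function usableHosts(prefixLength) {
  if (prefixLength < 0 || prefixLength > 32) throw new RangeError('IPv4 prefix must be 0-32');
  const total = 2 ** (32 - prefixLength);
  return prefixLength >= 31 ? total : total - 2; // /31 and /32 have no net/broadcast split
}

// Map a CVSS v3 base score onto the report's severity buckets.
function severityFromCvss(score) {
  if (score < 0 || score > 10) throw new RangeError('CVSS score must be 0.0-10.0');
  if (score >= 9.0) return 'CRITICAL';
  if (score >= 7.0) return 'HIGH';
  if (score >= 4.0) return 'MEDIUM';
  if (score > 0.0) return 'LOW';
  return 'INFO';
}
```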
Understanding CPE CPE (Common Platform Enumeration): Standard naming scheme for IT products Used for CVE lookup in NVD database Workflow CPE Handling: Nmap detects service and version Nmap provides CPE (if in database) Workflow uses CPE to query NVD API NVD returns CVEs associated with that CPE Special case: nginx vendor fixed from igor_sysoev to nginx Working with Reports Accessing HTML Report Location: /tmp/vulnerability_report_<timestamp>.html Viewing: Open in web browser directly from n8n Click "11. Read Report for Output" node Download HTML file Open locally in any browser Advantages: Interactive (clickable links) Searchable text Easy to edit/customize Smaller file size Accessing PDF Report Location: /tmp/vulnerability_report_<timestamp>.pdf Password: Default: Almafa123456 (configured in "1. Set Parameters") Change in workflow before production use Required to open PDF Opening PDF: Receive PDF via Telegram or Email Open with PDF reader (Adobe, Foxit, Browser) Enter password when prompted View, print, or share Advantages: Professional appearance Print-optimized formatting Password protection Portable (works anywhere) Preserves formatting Report Customization Change Report Title: Open "8. Prepare Report Structure" node Find metadata object Edit title and subtitle fields Customize Styling: Open "9. Generate HTML Report" node Modify CSS in <style> section Change colors, fonts, layout Add Company Logo: Edit HTML generation code Add `` tag in header section Include base64-encoded logo or URL Modify Recommendations: Open "9. 
Generate HTML Report" node Find Recommendations section Edit recommendation text Scanning Ethics and Legality Authorization is Mandatory: Never scan networks without explicit written permission Unauthorized scanning is illegal in most jurisdictions Can result in criminal charges and civil liability Scope Definition: Document approved scan scope Exclude out-of-scope systems Maintain scan authorization documents Notification: Inform network administrators before scans Provide scan window and source IPs Have emergency contact procedures Safe Targets for Testing: scanme.nmap.org: Official Nmap test server Your own isolated lab network Cloud instances you own Explicitly authorized environments Compliance Considerations PCI DSS: Quarterly internal vulnerability scans required Scan all system components Re-scan after significant changes Document scan results HIPAA: Regular vulnerability assessments required Risk analysis and management Document remediation efforts ISO 27001: Vulnerability management process Regular technical vulnerability scans Document procedures NIST Cybersecurity Framework: Identify vulnerabilities (DE.CM-8) Maintain inventory Implement vulnerability management License and Credits Workflow: Created for n8n workflow automation Free for personal and commercial use Modify and distribute as needed No warranty provided Dependencies: Nmap**: GPL v2 - https://nmap.org nmap-helper**: Open source - https://github.com/net-shaper/nmap-helper Prince XML**: Commercial license required for production use - https://www.princexml.com NVD API**: Public API by NIST - https://nvd.nist.gov Third-Party Services: Telegram Bot API: https://core.telegram.org/bots/api SMTP: Standard email protocol Support For Nmap issues: Documentation: https://nmap.org/book/ Community: https://seclists.org/nmap-dev/ For NVD API issues: Status page: https://nvd.nist.gov Contact: https://nvd.nist.gov/general/contact For Prince XML issues: Documentation: https://www.princexml.com/doc/ Support: 
https://www.princexml.com/doc/help/ Workflow Metadata External Dependencies**: Nmap, nmap-helper, Prince XML, NVD API License**: Open for modification and commercial use Security Disclaimer This workflow is provided for legitimate security testing and vulnerability assessment purposes only. Users are solely responsible for ensuring they have proper authorization before scanning any network or system. Unauthorized network scanning is illegal and unethical. The authors assume no liability for misuse of this workflow or any damages resulting from its use. Always obtain written permission before conducting security assessments.
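The CPE handling described in the Understanding CPE section — including the igor_sysoev → nginx vendor fix — can be sketched as a small helper that normalizes the CPE and builds the NVD 2.0 lookup URL. The `cpeName` query parameter is part of the public NVD API; the helper itself is an illustrative assumption:

```javascript
// Normalize a CPE string the way the workflow describes: nginx's legacy
// "igor_sysoev" vendor is rewritten to "nginx" so NVD lookups match.
function normalizeCpe(cpe) {
  return cpe.replace(':igor_sysoev:', ':nginx:');
}

// Build the NVD 2.0 CVE query URL for a given CPE (endpoint listed above).
function nvdQueryUrl(cpe) {
  const base = 'https://services.nvd.nist.gov/rest/json/cves/2.0';
  return `${base}?cpeName=${encodeURIComponent(normalizeCpe(cpe))}`;
}
```

Remember the public NVD API is rate-limited, so the workflow spaces out one request per detected service.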
by Jitesh Dugar
Transform accounts payable from a manual bottleneck into an intelligent, automated system that reads invoices, detects fraud, and processes payments automatically—saving 20+ hours per week while preventing costly fraudulent payments. 🎯 What This Workflow Does Automates the complete invoice-to-payment cycle with advanced AI: 📧 Check Invoices from Jotform - Monitor Jotform for Invoice Submission 🤖 AI-Powered OCR - Extracts ALL data from PDFs and images (vendor, amounts, line items, dates, tax) 🚨 Fraud Detection Engine - Analyzes 15+ fraud patterns: duplicates, anomalies, suspicious vendors, document quality 🚦 Intelligent Routing - Auto-routes based on AI risk assessment: Critical Fraud (Risk 80-100): Block → Slack alert → CFO investigation Manager Review (>$5K or Medium Risk): Approval workflow with full analysis Auto-Approve (<$5K + Low Risk): Instant → QuickBooks → Vendor notification 📊 Complete Audit Trail - Every decision logged to Google Sheets with AI reasoning ✨ Key Features Advanced AI Capabilities Vision-Based OCR**: Reads any invoice format—PDF, scanned images, smartphone photos 99% Extraction Accuracy**: Vendor details, line items, amounts, dates, tax calculations, payment terms Multi-Dimensional Fraud Detection**: Duplicate invoice identification (same number, similar amounts) Amount anomalies (round numbers, threshold gaming, unusually high) Vendor verification (new vendors, mismatched domains, missing tax IDs) Document quality scoring (OCR confidence, missing fields, calculation errors) Timing anomalies (future dates, expired invoices, weekend submissions) Pattern-based detection (frequent small amounts, vague descriptions, no PO references) Intelligent Processing Risk-Based Scoring**: 0-100 risk score with detailed reasoning Vendor Trust Ratings**: Build vendor reputation over time Category Classification**: Auto-categorizes (software, consulting, office supplies, utilities, etc.) 
Amount Thresholds**: Configurable auto-approve limits Human-in-the-Loop**: Critical decisions escalated appropriately Fast-Track Low Risk**: Process safe invoices in under 60 seconds Security & Compliance Fraud Prevention**: Catch fraudulent invoices before payment Duplicate Detection**: Prevent double payments automatically Complete Audit Trail**: Every decision logged with timestamp and reasoning Role-Based Approvals**: Route to correct approver based on amount and risk Document Verification**: Quality checks on every invoice 💼 Perfect For Finance Teams**: Processing 50-500 invoices per week CFOs**: Need fraud prevention and spending visibility Controllers**: Want automated AP with audit compliance Growing Companies**: Scaling without adding AP headcount Multi-Location Businesses**: Centralized invoice processing across offices Fraud-Conscious Organizations**: Healthcare, legal, financial services, government contractors 💰 ROI & Business Impact Time Savings 90% reduction** in manual data entry time 20-25 hours saved per week** on invoice processing Same-day turnaround** on all legitimate invoices Zero data entry errors** with AI extraction No more lost invoices** - complete tracking Fraud Prevention 100% duplicate detection** before payment Catch suspicious patterns** automatically Prevent invoice splitting** (gaming approval thresholds) Identify fake vendors** before payment Average savings: $50K-$200K annually** in prevented fraud losses Process Improvements 24-hour vendor response times** (vs 7-10 days manual) 95%+ payment accuracy** with AI validation Better cash flow management** via due date tracking Vendor satisfaction** from transparent, fast processing Audit-ready** with complete decision trail 🔧 Required Integrations Core Services Jotform** - Invoice Submissions Create your form for free on Jotform using this link OpenAI API** - GPT-4o-mini for OCR & fraud detection (~$0.03/invoice) Google Sheets** - Invoice database and analytics (free) Accounting 
System** - QuickBooks, Xero, NetSuite, or Sage (via API) Optional Add-Ons Slack** - Real-time fraud alerts and approval requests Bill.com** - Payment processing automation Linear/Asana** - Task creation for manual reviews Expensify/Ramp** - Expense management integration 🚀 Quick Setup Guide Step 1: Import Template Copy JSON from artifact In n8n: Workflows → Import from File → Paste JSON Template imports with all nodes and sticky notes Step 2: Configure Email Monitoring Connect Gmail or Outlook account Update filter: invoices@yourcompany.com (or your AP email) Test: Send yourself a sample invoice Step 3: Add OpenAI API Get API key: https://platform.openai.com/api-keys Add to both AI nodes (OCR + Fraud Detection) Cost: ~$0.03 per invoice processed Step 4: Connect Accounting System Get API credentials from QuickBooks/Xero/NetSuite Configure HTTP Request node with your endpoint Map invoice fields to your GL codes Step 5: Setup Approval Workflows Update email addresses (finance-manager@yourcompany.com) Configure Slack webhook (optional) Set approval thresholds ($5K default, customize as needed) Step 6: Create Google Sheet Database Create spreadsheet with columns:
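The risk-based routing quoted earlier in this template (Critical Fraud at risk 80-100, Manager Review above $5K or medium risk, Auto-Approve otherwise) can be sketched as a single decision function. The medium-risk cutoff of 40 is an assumption for illustration; the template exposes its own configurable thresholds:

```javascript
// Route an invoice based on the AI risk score (0-100) and amount.
// The 80 and $5,000 thresholds come from the template's description;
// the medium-risk cutoff of 40 is an assumed value.
function routeInvoice({ riskScore, amount }) {
  if (riskScore >= 80) return 'block_and_alert';                 // critical fraud -> Slack + CFO
  if (amount > 5000 || riskScore >= 40) return 'manager_review'; // approval workflow with analysis
  return 'auto_approve';                                         // -> QuickBooks + vendor notification
}
```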
by Cheng Siong Chin
How It Works This workflow automates student academic advising by deploying a multi-agent AI system that triages student queries, routes them intelligently, and escalates when human intervention is needed. Designed for academic institutions, it eliminates manual triage bottlenecks and ensures timely, context-aware responses. A student event triggers the webhook, which feeds into a Status Agent to classify the student's situation. A routing node directs the request to an Academic Orchestration Agent, which delegates to specialized sub-agents—Advising, Notification, or Escalation—based on query type. Results are routed by action type, checked for escalation, then dispatched via student email, faculty email, or Slack advisor alert before logging completion. Setup Steps Import workflow and configure Student Event Webhook URL. Add OpenAI API credentials to all OpenAI Model nodes. Configure Gmail credentials for student and faculty email nodes. Add Slack credentials and set target advisor channel for Slack alert. Set escalation thresholds in the "Check if Escalation Required" node. Test with sample student event payload via webhook. Prerequisites OpenAI API key Gmail account with OAuth2 Slack workspace with bot token Use Cases Automated academic query triage for universities Customization Add new sub-agents for career or financial advising Benefits Reduces advisor workload through intelligent auto-triage Ensures urgent cases are escalated instantly
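The dispatch step after the "Check if Escalation Required" node can be sketched as a routing function over the orchestrator's result. The three channels come from the workflow; the field names and escalation condition are illustrative assumptions:

```javascript
// Pick the delivery channel for an orchestrator result: Slack advisor
// alert for escalations, faculty email for faculty-facing actions,
// student email otherwise. Field names are assumed for illustration.
function dispatchChannel(result) {
  if (result.escalate || result.urgency === 'high') return 'slack_advisor_alert';
  if (result.action === 'notify_faculty') return 'faculty_email';
  return 'student_email';
}
```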
by Cheng Siong Chin
How It Works This workflow automates athlete performance monitoring through two parallel pipelines: real-time session analysis triggered by training form submissions, and scheduled weekly performance summaries. Designed for sports coaches, athletic trainers, and performance analysts, it eliminates manual data aggregation and ensures threshold breaches and weekly trends are communicated instantly. A training session form submission stores the record to Google Sheets, fetches historical data, and combines both inputs for a Performance Analysis Agent. OpenAI analyses the combined data, updates the sheet with insights, then checks performance thresholds—triggering Slack alerts or email notifications on breach. In parallel, a weekly schedule fetches all athlete data, groups by athlete, and passes to a Weekly Summary Agent that distributes summaries via both Slack and email. Setup Steps Configure Training Session Form fields to match athlete and session data schema. Connect Google Sheets credentials to Store, Fetch, and Update Record nodes. Add OpenAI API credentials to Performance Analysis and Weekly Summary Agent nodes. Configure Slack credentials and set coaching team alert and summary channels. Add Gmail/SMTP credentials to Send Email Alert and Weekly Summary Email nodes. Define performance threshold values in the Check Performance Threshold node. Prerequisites Google Sheets with service account credentials Slack workspace with bot token Gmail or SMTP credentials Use Cases Real-time performance threshold alerts for elite athlete training programmes Customization Replace OpenAI with Anthropic Claude for analysis and summary agents Benefits Automates session analysis and insight storage immediately after each training entry
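The "Check Performance Threshold" step above can be sketched as a comparison of session metrics against configured limits, collecting the breaches that should fan out to Slack and email. Metric names and limit values are illustrative assumptions:

```javascript
// Compare one session's metrics against per-metric upper limits and
// return the breaches. Metric names and thresholds are assumed examples.
function findBreaches(session, thresholds) {
  return Object.entries(thresholds)
    .filter(([metric, limit]) => session[metric] !== undefined && session[metric] > limit)
    .map(([metric, limit]) => ({ metric, value: session[metric], limit }));
}

// Usage: an empty result means no alert; otherwise each breach can drive
// a Slack message or email in the workflow's alert branch.
```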
by Davide
This workflow automates the collection, analysis, and reporting of Trustpilot reviews for a specific company using ScrapeGraphAI, transforming unstructured customer feedback into structured insights and actionable intelligence. Key Advantages 1. ✅ End-to-End Automation The entire process—from scraping reviews to delivering a polished management report—is fully automated, eliminating manual data collection and analysis. 2. ✅ Structured Insights from Unstructured Data The workflow transforms raw, unstructured review text into structured fields and standardized sentiment categories, making analysis reliable and repeatable. 3. ✅ Company-Level Reputation Intelligence Instead of focusing on individual products, the analysis evaluates the overall brand, service quality, customer experience, and operational performance, which is critical for leadership and strategic teams. 4. ✅ Action-Oriented Outputs The AI-generated report goes beyond summaries by: Identifying reputational risks Highlighting improvement opportunities Proposing concrete actions with priorities, effort estimates, and KPIs 5. ✅ Visual & Executive-Friendly Reporting Automatic sentiment charts and structured executive summaries make insights immediately understandable for non-technical stakeholders. 6. ✅ Scalable and Configurable Easily adaptable to different companies or review volumes Page limits and batching protect against rate limits and excessive API usage 7. ✅ Cross-Team Value The output is tailored for multiple internal teams: Management Marketing Customer Support Operations Product & UX Ideal Use Cases Brand reputation monitoring Voice-of-the-customer programs Executive reporting Customer experience optimization Competitive benchmarking (by reusing the workflow across brands) How It Works This workflow automates the complete process of scraping Trustpilot reviews, extracting structured data, analyzing sentiment, and generating comprehensive reports. 
The workflow follows this sequence: Trigger & Configuration: The workflow starts with a manual trigger, allowing users to set the target company URL and the number of review pages to scrape. Review Scraping: An HTTP request node fetches review pages from Trustpilot with pagination support, extracting review links from the HTML content. Review Processing: The workflow processes individual review pages in batches (limited to 5 reviews per execution for efficiency). Each review page is converted to clean markdown using ScrapegraphAI. Data Extraction: An information extractor using OpenAI's GPT-4.1-mini model parses the markdown to extract structured review data including author, rating, date, title, text, review count, and country. Sentiment Analysis: Another OpenAI model performs sentiment classification on each review text, categorizing it as Positive, Neutral, or Negative. Data Aggregation: Processed reviews are collected and compiled into a structured dataset. Analytics & Visualization: A pie chart is generated showing sentiment distribution A comprehensive reputation analysis report is created using an AI agent that evaluates company-level insights, recurring themes, and provides actionable recommendations Reporting & Delivery: The analysis is converted to HTML format and sent via email, providing stakeholders with immediate insights into customer feedback and company reputation. 
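The aggregation behind the sentiment pie chart can be sketched as a small counting function. The three categories (Positive, Neutral, Negative) come from the workflow; the helper itself is an illustrative assumption:

```javascript
// Aggregate per-review sentiment labels into the counts and percentages
// that feed the pie chart. Categories match the workflow's classifier.
function sentimentDistribution(reviews) {
  const counts = { Positive: 0, Neutral: 0, Negative: 0 };
  for (const r of reviews) {
    if (r.sentiment in counts) counts[r.sentiment] += 1;
  }
  const total = reviews.length || 1; // avoid division by zero on empty input
  const percentages = Object.fromEntries(
    Object.entries(counts).map(([k, v]) => [k, Math.round((v / total) * 100)])
  );
  return { counts, percentages };
}
```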
Set Up Steps To configure and run this workflow: Credential Setup: Configure OpenAI API credentials for the chat models and information extraction Set up ScrapeGraphAI credentials for webpage-to-markdown conversion Configure Gmail OAuth2 credentials for email notifications Company Configuration: In the "Set Parameters" node, update company_id to the target Trustpilot company URL Adjust max_page to control how many review pages to scrape Review Processing Limits: The "Limit" node restricts processing to 5 reviews per execution to manage API costs and processing time Adjust this value based on your needs and OpenAI usage limits Email Configuration: Update the "Send a message" node with the recipient email address Customize the email subject and content as needed Analysis Customization: Modify the prompt in the "Company Reputation Analyst" node to tailor the report format Adjust sentiment analysis categories if different classification is needed Execution: Click "Test workflow" to execute the manual trigger Monitor execution in the n8n editor to ensure all API calls succeed Check the configured email inbox for the generated report Note: Be mindful of API rate limits and costs associated with OpenAI and ScrapegraphAI services when processing large numbers of reviews. The workflow includes a 5-second delay between paginated requests to comply with Trustpilot's terms of service. 👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on Linkedin.
by Rully Saputra
Daily Sales Metrics Auto-Insight with Gemini, Google Sheets, Calendar, Telegram, Trello and Gmail Who’s it for This workflow is ideal for sales managers, operations teams, and business owners who need daily automated sales summaries and team notifications. It eliminates the hassle of manually gathering, analyzing, and reporting daily sales data, providing instant insights and proactive notifications to keep your team aligned. How it works / What it does This advanced workflow automates the entire daily sales reporting pipeline with actionable team alerts: Webhook captures new sales entries in real-time. The data is logged into Google Sheets. It retrieves all rows to compile current sales metrics. A custom node concatenates the data into an AI-friendly format. The Google Gemini Chat Model generates concise sales insights. HTML tags are cleaned up with a Remove HTML Tags node. The insights are classified (Good, Bad, Very Bad) using AI. Based on the classification: -- Teams are alerted via Telegram group messages. -- For negative insights, a Trello card backlog is created for follow-up. -- A Google Calendar meeting is scheduled automatically to discuss issues. An email summary is also sent out via Gmail to ensure no update is missed. How to set up Import the workflow into your n8n instance. Configure the Webhook URL in your data source (POS, CRM, etc.). Connect Google Sheets, Google Gemini API, Trello, Telegram, and Google Calendar. Adjust classification logic inside the Classify Insight node if needed. Customize the message templates for email and Telegram. Test the workflow with sample data to validate automation flow. Requirements n8n account with active workflows. Google Sheets API credentials. Google Gemini API access. Telegram Bot Token & Group ID. Trello API Key & Token. Google Calendar API setup. Gmail or SMTP credentials for email notifications. 
**How to customize the workflow**
- Adjust the Concat Sales Data node if you want to include more fields or different data formats.
- Modify the Gemini prompt for personalized insight summaries.
- Change the classification thresholds (Good, Bad, Very Bad) based on your business KPIs.
- Update the notification messages in the Telegram and Email nodes.
- Add or remove post-classification actions, like creating different task cards or sending escalations to other platforms (Slack, Microsoft Teams, etc.).

Automate daily sales insights from Google Sheets using Gemini AI, classify results, and notify your team via email, Telegram, Trello, and Google Calendar instantly.

Email Preview
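One way to picture the Good / Bad / Very Bad thresholds mentioned above is a sketch like the following. The -5% and -15% cut-offs are placeholder assumptions, not values from the workflow — the template actually classifies with AI, and you would tune any hard thresholds to your own KPIs:

```python
def classify_insight(pct_change_vs_target: float) -> str:
    """Map a sales delta (e.g. -0.10 = 10% below target) to a coarse label.
    Thresholds are illustrative placeholders."""
    if pct_change_vs_target >= -0.05:
        return "Good"
    if pct_change_vs_target >= -0.15:
        return "Bad"
    return "Very Bad"
```

Keeping the classification as a small, explicit rule (or an equally explicit prompt) makes it easy to audit why a Trello card or escalation meeting was triggered.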
by Patrick Campbell
Who's this for

Finance teams, AI developers, product managers, and business owners who need to monitor and control OpenAI API costs across different models and projects. If you're using GPT-4, GPT-3.5, or other OpenAI models and want to track spending patterns, identify cost-optimization opportunities, and generate stakeholder reports, this workflow is for you.

What it does

This workflow automatically tracks your OpenAI token usage on a monthly basis, breaks down costs by model and date, stores the data in Google Sheets with automatic cost calculations, and emails PDF reports to stakeholders. It transforms raw API usage data into actionable insights, helping you understand which models are driving costs, identify usage trends over time, and maintain budget accountability. The workflow runs completely hands-free once configured, generating comprehensive monthly reports without manual intervention.

How it works

The workflow executes automatically on the 5th of each month and follows these steps:
1. Creates a new Google Sheet from your template with the naming format "Token_Tracking_[Month]_[Year]"
2. Fetches the previous month's OpenAI usage data via the OpenAI Admin API
3. Transforms raw API responses into a clean daily breakdown showing usage by model
4. Appends the data to Google Sheets with columns for date, model, input tokens, and output tokens
5. Your Google Sheets formulas automatically calculate costs based on OpenAI's pricing for each model
6. Exports the completed report in both PDF and Excel formats
7. Emails the PDF report to designated stakeholders with a summary message
8. Archives the Excel file to Google Drive for long-term recordkeeping and historical analysis

Requirements
- OpenAI account with Admin API access (required to access organization usage endpoints)
- Google Sheets template pre-configured with cost calculation formulas
- Google Drive for report storage and archiving
- Gmail account for sending email notifications
- n8n instance (self-hosted or cloud) with the following credentials configured:
  - OpenAI API credentials
  - Google Sheets OAuth2
  - Google Drive OAuth2
  - Gmail OAuth2

Setup instructions

1. Create your Google Sheets template

Set up a Google Sheet with these columns:
- Date
- Model
- Token Usage In
- Token Usage Out
- Token Cost Input (formula: `=C2 * [price per 1M input tokens] / 1000000`)
- Token Cost Output (formula: `=D2 * [price per 1M output tokens] / 1000000`)
- Total Cost USD (formula: `=E2 + F2`)
- Total Cost AUD (optional; formula: `=G2 * [exchange rate]`)

(The workflow contains a template.) Include pricing formulas based on OpenAI's current pricing, and add summary calculations at the bottom to total costs by model.

2. Configure n8n credentials

In your n8n instance, set up credentials for:
- OpenAI API (you'll need admin access to your organization)
- Google Sheets (OAuth2 connection)
- Google Drive (OAuth2 connection)
- Gmail (OAuth2 connection)

3. Update workflow placeholders

Replace the following placeholders in the workflow:
- `your-api-key-id`: Your OpenAI API key ID (found in your OpenAI dashboard)
- `your-template-file-id`: The ID of your Google Sheets template
- `your-archive-folder-id`: The Google Drive folder ID where reports should be archived
- `your-email@example.com`: The email address that should receive monthly reports

4. Assign credentials to nodes

Open each node that requires credentials and select the appropriate credential from your configured options:
- "Fetch OpenAI Usage Data" → OpenAI API credential
- "Append Data to Google Sheet" → Google Sheets credential
- "Create Monthly Report from Template" → Google Drive credential
- "Export Sheet as Excel" → Google Drive credential
- "Export Sheet as PDF for Email" → Google Drive credential
- "Archive Report to Drive" → Google Drive credential
- "Email Report to Stakeholder" → Gmail credential

5. Test the workflow

Before enabling the schedule, manually execute the workflow to ensure that:
- The template copies successfully
- OpenAI data fetches correctly
- Data appends to the sheet properly
- PDF and Excel exports work
- The email sends successfully
- The file archives to the correct folder

6. Enable the schedule

Once testing is complete, activate the workflow. It will run automatically on the 5th of each month.
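The spreadsheet cost formulas above can be mirrored in Python to sanity-check a row before trusting the sheet. The per-1M-token prices passed in are placeholders — always substitute OpenAI's current published pricing for the model in question:

```python
def row_cost_usd(tokens_in: int, tokens_out: int,
                 price_in_per_1m: float, price_out_per_1m: float) -> float:
    """Mirror of the sheet formulas:
    Token Cost Input  = C2 * price_in  / 1000000
    Token Cost Output = D2 * price_out / 1000000
    Total Cost USD    = E2 + F2
    """
    cost_in = tokens_in * price_in_per_1m / 1_000_000
    cost_out = tokens_out * price_out_per_1m / 1_000_000
    return cost_in + cost_out
```

For example, 2M input tokens at $2.50/1M plus 1M output tokens at $10.00/1M (hypothetical prices) should total $15.00 — if the sheet's Total Cost USD column disagrees with this arithmetic, a formula reference has slipped.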