by Țugui Dragoș
This workflow automatically collects customer reviews from Google, Facebook, Trustpilot, and Yelp, analyzes their sentiment using AI, sends real-time alerts for negative feedback, and generates a weekly summary report. It is ideal for businesses that want to monitor their online reputation across multiple platforms and respond quickly to customer concerns.

## How It Works

- **Daily Schedule**: Triggers the workflow every day at 09:00.
- **Review Collection**: Fetches new reviews from Google, Facebook, Trustpilot, and Yelp using their official APIs.
- **Data Normalization**: Merges and standardizes all reviews into a unified format.
- **AI Sentiment Analysis**: Uses GPT-4 to analyze the sentiment and extract key insights from each review.
- **Negative Review Alerts**: Sends a Slack notification to managers if a negative review is detected.
- **Logging**: Saves all reviews and their analysis to a Google Sheet for record-keeping.
- **Weekly Reporting**: Every Monday, aggregates the past week's reviews, generates an AI-powered summary, and emails it to management.

## Configuration

**API credentials:**

- **Google My Business API**: Create a project in Google Cloud, enable the My Business API, and generate OAuth credentials.
- **Facebook Graph API**: Create a Facebook App, request the necessary permissions, and obtain a Page Access Token.
- **Trustpilot API**: Register for a Trustpilot Business account and generate an API key.
- **Yelp Fusion API**: Sign up for Yelp Fusion, create an app, and get your API key.
- **OpenAI API**: Create an account at OpenAI and generate an API key with GPT-4 access.
- **Slack API**: Create a Slack App, enable Incoming Webhooks, and get the webhook URL.
- **Google Sheets API**: Enable the Google Sheets API in Google Cloud and create OAuth credentials.

**Other setup:**

- **Environment variables**: Add all API keys, tokens, and configuration values in the Workflow Configuration node or as n8n credentials.
- **Google Sheet**: Create a Google Sheet with columns for review data and share it with your service account email.
- **Slack channel**: Set up a Slack channel for alerts and add the webhook URL in the configuration.
- **Email settings**: Configure the recipient email address for weekly reports in the workflow.

## Usage

- Activate the workflow after configuration.
- Reviews will be collected and analyzed daily.
- Negative reviews trigger instant Slack alerts.
- Every Monday, a summary report is sent via email.

**Use case:** Monitor and respond to customer feedback across Google, Facebook, Trustpilot, and Yelp, automate sentiment analysis, and keep management informed with actionable weekly insights.
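The Data Normalization step described above can be sketched as an n8n-style Code node along these lines. The platform-specific field names (`author_name`, `stars`, `reviewText`, etc.) are assumptions for illustration only — each platform's actual API response shape differs, so check the real payloads before relying on them:

```javascript
// Illustrative sketch of review normalization.
// Source field names are assumptions, not the actual API shapes.
function normalizeReview(platform, raw) {
  // pick the first defined value among several candidate keys
  const pick = (...keys) => keys.map((k) => raw[k]).find((v) => v !== undefined);
  return {
    platform, // "google" | "facebook" | "trustpilot" | "yelp"
    author: pick("reviewer", "author_name", "user") ?? "Anonymous",
    rating: Number(pick("rating", "stars", "score") ?? 0),
    text: pick("text", "comment", "reviewText") ?? "",
    date: new Date(pick("time", "created_at", "date") ?? Date.now()).toISOString(),
  };
}

// Example: two platforms collapsed into one unified shape
const unified = [
  normalizeReview("google", { author_name: "Ana", rating: 2, text: "Slow support", time: "2024-05-01" }),
  normalizeReview("yelp", { user: "Ben", stars: 5, comment: "Great service", date: "2024-05-02" }),
];
console.log(unified[0].rating, unified[1].author); // prints: 2 Ben
```

Once every platform's reviews share this shape, the sentiment analysis, alerting, and logging steps can treat them identically.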
by Incrementors
## Description

Submit up to 3 landing page URLs using a simple form and the workflow audits each one automatically. For every page, GPT-4o-mini scrapes the content, scores it across four dimensions, identifies the top conversion issue, and generates five specific page fixes. Every result is logged to Google Sheets, and pages that score below your alert threshold trigger an automatic Gmail alert with the full breakdown.

Built for marketing managers, agency owners, and growth teams who need structured, repeatable conversion audits without spending hours reviewing pages manually.

## What This Workflow Does

- **Scrapes and cleans each page automatically** — Fetches the raw HTML for every URL submitted, strips all scripts and styles, and prepares clean text for AI analysis
- **Scores pages across four CRO dimensions** — GPT-4o-mini rates each page on overall conversion potential, CTA strength, trust, and message clarity on a 1–10 scale
- **Generates a letter grade per page** — Assigns A+ through F based on the overall score so you can rank pages at a glance
- **Produces five page-specific fixes** — Every fix references actual content from the page and is scoped to be doable within 1–2 days — no generic advice
- **Runs each page independently** — All three pages are processed as separate audits so one slow page never blocks the others
- **Logs every audit to Google Sheets** — Appends a 14-column row per page with all scores, top issue, quick win, fix list, and priority label
- **Sends Gmail alerts only for low-scoring pages** — Pages below your score threshold get an immediate email with the full audit breakdown — pages that pass trigger no email

## Setup Requirements

**Tools needed:**

- n8n instance (self-hosted or cloud)
- OpenAI account with GPT-4o-mini API access
- Google Sheets (one sheet with a tab named CRO Audit Dashboard)
- Gmail account (the account you want alerts sent from)

**Credentials required:**

- OpenAI API key
- Google Sheets OAuth2
- Gmail OAuth2

**Estimated setup time:** 10–15 minutes

## Step-by-Step Setup

1. **Import the workflow** — Open n8n → Workflows → Import from JSON → paste the workflow JSON → click Import
2. **Fill in Config Values** — Open node 2. Set — Config Values → replace all four placeholders:

| Field | What to enter |
|---|---|
| PASTE_YOUR_GOOGLE_SHEET_ID_HERE | The ID from your Google Sheet URL (the string between /d/ and /edit) |
| PASTE_YOUR_EMAIL_ADDRESS_HERE | The email address where CRO alerts should be sent |
| PASTE_YOUR_NAME_HERE | Your name for the email sign-off |
| scoreThreshold | Leave as 6 — pages scoring 5 or below get an alert. Change to 7 for stricter alerting or 5 for looser alerting |

3. **Connect OpenAI** — Open node 7. OpenAI — GPT-4o-mini Model → click the credential dropdown → add your OpenAI API key → test the connection
4. **Create your Google Sheet tab** — Open your Google Sheet → add a tab named exactly CRO Audit Dashboard → add these 14 column headers in row 1: Date, Page URL, Page Name, CRO Score, Grade, Top Issue, Quick Win, CTA Strength, Trust Score, Message Clarity, Priority, Fix List, Audited By, Logged At
5. **Connect Google Sheets** — Open node 10. Google Sheets — Log CRO Audit → click the credential dropdown → add Google Sheets OAuth2 → sign in with your Google account → authorize access
6. **Connect Gmail** — Open node 12. Gmail — Send CRO Alert → click the credential dropdown → add Gmail OAuth2 → sign in with the Gmail account you want to send alerts from → authorize access
7. **Activate the workflow** — Toggle the workflow to Active → copy the Form URL from node 1. Form — Multi-Page CRO Audit → open it in a browser to submit your first audit

## How It Works (Step by Step)

**Step 1 — Form: Multi-Page CRO Audit**
You open the form URL in a browser and fill in your name, up to three page URLs with their names, and the business type (e.g. SaaS, Law Firm, E-commerce). Only the Page 1 URL and name are required — Pages 2 and 3 are optional. Submitting the form starts the workflow.
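Setup step 2 asks for your sheet ID — the string between /d/ and /edit in the sheet's URL. A one-liner to pull it out (the URL below is a made-up placeholder):

```javascript
// Extract the Google Sheet ID from a full spreadsheet URL.
function extractSheetId(url) {
  const match = url.match(/\/d\/([a-zA-Z0-9_-]+)/);
  return match ? match[1] : null;
}

// Placeholder URL for illustration
const url = "https://docs.google.com/spreadsheets/d/1AbcDEFghiJKlmn_456/edit#gid=0";
console.log(extractSheetId(url)); // → "1AbcDEFghiJKlmn_456"
```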
**Step 2 — Set: Config Values**
Your Google Sheet ID, alert email, sender name, score threshold, and all form inputs are stored here. Each page URL and name is mapped to its own variable. The business type and auditor name are also captured here for use in the AI prompt and the sheet log.

**Step 3 — Code: Build Pages List**
The three possible page URLs are filtered to remove any blank entries; only URLs that start with http are kept. If no valid URLs are found, the workflow stops with a clear error. Each valid page is then output as its own separate entry so every page runs through the full audit pipeline independently. This means that if you submit 3 pages, the scraping, AI analysis, sheet logging, and Gmail alert steps run 3 times — once per page.

**Step 4 — HTTP: Scrape Landing Page**
For each page, an HTTP request fetches the raw HTML content with a 15-second timeout. The response is returned as plain text for the cleaning step.

**Step 5 — Code: Clean HTML to Text**
The raw HTML is cleaned by removing all script blocks, style blocks, and HTML comments. Closing tags for structural elements like divs, paragraphs, and headings are converted to line breaks to preserve readability. All remaining HTML tags are stripped, common HTML entities are decoded, and extra whitespace is collapsed. The result is capped at 4,000 characters — if the page is longer, a truncation note is added so the AI knows content was cut off. If the page returns fewer than 100 characters, the workflow throws an error rather than sending near-empty content to the AI.

**Step 6 — AI Agent: CRO Audit**
GPT-4o-mini receives the business type, page name, page URL, and cleaned page content. It returns nine structured fields: an overall score from 1–10, a letter grade, a CTA strength score, a trust score, a message clarity score, the single biggest conversion problem on the page, the fastest single fix to improve conversions, five page-specific improvements each doable in 1–2 days, and a priority label (CRITICAL, HIGH, MEDIUM, or LOW).

**Step 7 — OpenAI: GPT-4o-mini Model**
This is the language model powering the audit. It runs at temperature 0.3 for consistent, factual scoring and is capped at 800 tokens to keep each audit concise and costs predictable.

**Step 8 — Parser: Structured CRO Output**
This step enforces the exact nine-field schema GPT-4o-mini must return. It validates that all required fields are present and correctly typed before results move forward, preventing malformed AI output from reaching your sheet.

**Step 9 — Code: Prepare Audit Results**
The AI output is merged with all page metadata. Sub-scores are formatted as "X out of 10" for readability. The fix list is formatted as a numbered list — pipe-separated for the sheet column and line-separated for the email. A needsAlert flag is set to true if the overall score is below your configured threshold. The email subject and full email body are also assembled here, including all scores, the top issue, the quick win, and all five fixes.

**Step 10 — Google Sheets: Log CRO Audit**
One row is appended to your CRO Audit Dashboard tab with all 14 columns populated. The sheet log and the Gmail alert step run at the same time — logging never delays the email.

**Step 11 — IF: Score Below Threshold?**
This is the alert gate. If the page score is below your threshold (YES path), the Gmail alert fires immediately. If the page passed the threshold (NO path), the workflow routes to 13. Set — No Alert Needed and ends silently for that page — no email is sent.

**Step 12 — Gmail: Send CRO Alert**
A plain-text alert email is sent to your configured address.
It includes the page name, URL, overall score, grade, priority level, top issue, fastest fix, all five specific improvements, and the three sub-scores for CTA strength, trust, and message clarity.

**Step 13 — Set: No Alert Needed**
This step handles pages that passed the threshold. It sets a brief log message confirming the page scored above the threshold and that no email was sent.

## Key Features

- ✅ **Batch audit up to 3 pages in one submission** — Submit multiple URLs at once and each page gets its own independent audit without you having to run the workflow three times
- ✅ **Letter grade for instant ranking** — A+ through F grades let you sort pages by quality at a glance without interpreting raw scores
- ✅ **Three sub-scores per page** — CTA strength, trust score, and message clarity give you a breakdown of why a page scored the way it did
- ✅ **Five page-specific fixes every time** — Every fix references actual content from that specific page, not generic CRO advice — each one is scoped to 1–2 days of work
- ✅ **Configurable alert threshold** — Change one number in Config Values to define what counts as a failing score — lower for stricter monitoring, higher for looser
- ✅ **Structured output enforced** — A schema parser validates all nine AI fields before anything reaches your sheet — no broken rows or missing scores
- ✅ **Sheet logs and Gmail run simultaneously** — Both outputs fire at the same time so your dashboard is always current and alerts are never delayed
- ✅ **Silent pass for healthy pages** — Pages above the threshold log to Sheets and stop — no noise, no email clutter for pages that are already performing well

## Customisation Options

- **Lower the threshold for more aggressive alerting** — In node 2. Set — Config Values, change scoreThreshold from 6 to 8 so any page scoring 7 or below triggers an email — useful for agencies running CRO campaigns where even average pages need attention.
- **Add a Slack notification alongside Gmail** — After node 9. Code — Prepare Audit Results, add an IF check for needsAlert === true and connect a Slack node to also post the page name, score, and top issue to a #cro-alerts channel so your team sees issues in real time without checking email.
- **Increase the character limit for longer pages** — In node 5. Code — Clean HTML to Text, change .substring(0, 4000) to 6000 or 8000 if your pages are content-heavy and you want GPT to see more of the page — note this will slightly increase OpenAI token usage per audit.
- **Expand to 5 pages per submission** — In node 3. Code — Build Pages List, add two more page objects following the same pattern as pages 1–3, and add corresponding fields to the form in node 1. Form — Multi-Page CRO Audit and to Config Values in node 2.
- **Send a weekly CRO summary email** — Add a separate Schedule trigger that fires every Monday, reads the CRO Audit Dashboard sheet, counts pages by priority label from the past 7 days, and sends a weekly digest showing how many CRITICAL, HIGH, MEDIUM, and LOW pages were found.

## Troubleshooting

**Form submission not starting the workflow:**

- Confirm the workflow is Active — inactive workflows do not receive form submissions
- Copy the Form URL fresh from node 1. Form — Multi-Page CRO Audit after activating — URLs copied before activation will not work
- Make sure at least one URL starts with http — node 3. Code — Build Pages List filters out any blank or invalid URLs

**Page scraping returning empty or very short content:**

- Some pages block automated HTTP requests and return a 403 error or a redirect — check the execution log of node 4. HTTP — Scrape Landing Page for the response
- Try adding a User-Agent header to node 4 (Name = User-Agent, Value = Mozilla/5.0 (compatible; n8n-bot/1.0)) to bypass basic bot detection
- If the page uses JavaScript rendering (e.g. React or Vue), the HTML may be empty — these pages require a headless browser and cannot be scraped with a simple HTTP request

**OpenAI not generating audit results:**

- Confirm the API key is connected in node 7. OpenAI — GPT-4o-mini Model and your account has available credits
- Check the execution log of node 6. AI Agent — CRO Audit for the raw GPT response
- If the schema parser in node 8. Parser — Structured CRO Output is throwing an error, the AI returned an unexpected format — re-run the audit to see if the issue is consistent

**Google Sheets not logging rows:**

- Confirm the Google Sheets OAuth2 credential in node 10. Google Sheets — Log CRO Audit is connected and not expired — re-authorize if needed
- Check that PASTE_YOUR_GOOGLE_SHEET_ID_HERE in node 2. Set — Config Values has been replaced with your actual sheet ID from the URL
- Confirm the tab is named CRO Audit Dashboard exactly — capitalization must match sheetName in Config Values
- Verify all 14 column headers in row 1 match exactly: Date, Page URL, Page Name, CRO Score, Grade, Top Issue, Quick Win, CTA Strength, Trust Score, Message Clarity, Priority, Fix List, Audited By, Logged At

**Gmail alert not arriving for low-scoring pages:**

- Confirm the Gmail OAuth2 credential in node 12. Gmail — Send CRO Alert is connected and authorized
- Check that PASTE_YOUR_EMAIL_ADDRESS_HERE in node 2. Set — Config Values has been replaced with your actual email address
- Check the execution log of node 11. IF — Score Below Threshold? to confirm the needsAlert value is true for the pages that should trigger an alert — if false, the score may be at or above the threshold
- Check your spam or promotions folder — automated Gmail OAuth2 messages sometimes land there on first send

## Support

Need help setting this up or want a custom version built for your team or agency?

📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/
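For the scraping problems covered in the troubleshooting above, the page fetch (Step 4) with the suggested User-Agent header and 15-second timeout could be configured along these lines — a sketch of the request options, not the HTTP Request node's actual internals:

```javascript
// Sketch of scrape-request settings: GET with a browser-like User-Agent
// (to bypass basic bot detection) and a 15-second timeout, mirroring the
// values suggested in the troubleshooting section.
function buildScrapeRequest(url) {
  return {
    url,
    method: "GET",
    headers: { "User-Agent": "Mozilla/5.0 (compatible; n8n-bot/1.0)" },
    timeout: 15000, // ms — matches the 15-second timeout in Step 4
    responseFormat: "text", // plain text for the cleaning step
  };
}

const req = buildScrapeRequest("https://example.com/landing");
console.log(req.headers["User-Agent"]);
```

Note that even with these settings, JavaScript-rendered pages will still return little or no HTML — that limitation requires a headless browser, as the troubleshooting section says.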
by Nguyen Thieu Toan
## Smart email forwarder: Zoho Mail to Gmail with attachments, analyzed by an AI agent

This n8n template automatically monitors a Zoho Mail inbox, analyzes every incoming email with Google Gemini AI, then forwards it — including all attachments — to a designated recipient via Gmail. The admin receives an instant smart digest on Telegram with the priority level, deadline, and required action extracted by AI. If you manage a shared inbox and waste time triaging emails before forwarding them, this workflow does it for you — automatically.

## How it works

- **Zoho Mail Trigger:** Detects new incoming emails matching a configured list of sender addresses.
- **AI Analysis:** Google Gemini reads the email subject and summary, then classifies the email type, assigns a priority level (High / Medium / Low), and extracts deadlines, key numbers, and required actions — all returned as structured JSON.
- **HTML Builder:** Assembles the forwarded email body with an **AI analysis panel** embedded at the top, shared across both sending paths.
- **Attachment handling:** If the email has attachments, the workflow fetches each file via the **Zoho Mail API**, renames binary keys to prevent collisions, aggregates all files, then attaches them to the outgoing Gmail message.
- **Gmail:** Forwards the formatted email — with or without attachments — to the configured recipient.
- **Telegram:** Sends the admin a structured, emoji-rich notification with all AI-extracted data, so decisions can be made without opening the email.

## How to use

1. **Connect credentials:** Link your Zoho Mail, Gmail, Google Gemini (googlePalmApi), and Telegram Bot credentials in n8n.
2. **Configure in one place:** Open the Set Context node and fill in all 6 fields — recipient email, subject prefix, Telegram chat ID, footer text, AI role description, and email category options.
3. **Set sender filter:** Edit the Zoho Mail Trigger node and update the from field with the sender addresses you want to monitor.
4. **Activate the workflow** — it runs automatically on every new matching email.
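The binary-key renaming mentioned in the attachment handling step can be pictured like this — a simplified sketch where each item carries its files under `binary` (real n8n items carry more metadata than shown here):

```javascript
// Rename binary keys (e.g. "data" -> "data_0", "data_1", ...) so that
// aggregating attachments from several items doesn't overwrite files
// that share the same key.
// The item objects below are simplified stand-ins for n8n items.
function renameBinaryKeys(items) {
  return items.map((item, index) => {
    const renamed = {};
    for (const [key, file] of Object.entries(item.binary ?? {})) {
      renamed[`${key}_${index}`] = file;
    }
    return { ...item, binary: renamed };
  });
}

const items = [
  { binary: { data: { fileName: "invoice.pdf" } } },
  { binary: { data: { fileName: "photo.jpg" } } },
];
const safe = renameBinaryKeys(items);
console.log(Object.keys(safe[0].binary), Object.keys(safe[1].binary)); // keys: data_0 / data_1
```

Without the rename, a downstream aggregation step would keep only one of the two `data` entries; with it, both attachments survive the merge.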
## Requirements

- **n8n version:** Built and tested on **n8n 2.9.4+**. (It is recommended to use the latest n8n version for best compatibility.)
- **Zoho Mail** account with OAuth2 credentials and API access enabled.
- **Gmail** account connected via OAuth2.
- **Google Gemini** API key (via the googlePalmApi credential).
- **Telegram Bot** token and a target chat ID.

## Customizing this workflow

- **Different inbox:** Swap the **Zoho Mail Trigger** for a Gmail Trigger, IMAP, or any other email node — the rest of the workflow remains unchanged.
- **Different AI behavior:** Change ai_context and ai_type_options in **Set Context** to adapt the AI to any domain (legal, finance, HR, support) — no prompt editing needed elsewhere.
- **Different notification channel:** Replace the **Telegram** node with Slack, Discord, or Microsoft Teams.
- **Persistent storage:** Add a **Google Sheets** or **Airtable** node after Telegram to log every forwarded email with its AI analysis for audit purposes.

## About the Author

Created by: Nguyễn Thiệu Toàn (Jay Nguyen)
Email: me@nguyenthieutoan.com
Website: nguyenthieutoan.com
Company: GenStaff (genstaff.net)
Socials (Facebook / X / LinkedIn): @nguyenthieutoan
More templates: n8n.io/creators/nguyenthieutoan
by Jitesh Dugar
## Automated Event Badge Generator

Streamline your event registration process with this fully automated badge generation system. Perfect for conferences, seminars, corporate events, universities, and training programs.

🎯 What This Workflow Does

1. **Receives Registration Data** via webhook (POST request)
2. **Validates & Sanitizes** attendee information (email, name, role)
3. **Generates Unique QR Codes** for each attendee with scannable IDs
4. **Creates Beautiful HTML Badges** with gradient design and branding
5. **Converts to High-Quality PNG Images** (400x680px) via the HTMLCSStoImage API
6. **Logs Everything to Google Sheets** for tracking and analytics
7. **Sends Personalized Emails** with badge attachment and event instructions
8. **Handles Errors Gracefully** with admin notifications

✨ Key Features

- **Professional Badge Design**: Gradient purple background, attendee photos (initials), QR codes
- **Automatic QR Code Generation**: Unique scannable codes for quick check-in
- **Email Delivery**: Personalized HTML emails with download links
- **Google Sheets Tracking**: Complete audit trail of all badge generations
- **Error Handling**: Admin alerts when generation fails
- **Scalable**: Process registrations one by one or in batches

🔧 Required Setup

APIs & credentials:

1. **HTMLCSStoImg API** — Sign up at https://htmlcsstoimg.com and get an API key
2. **Gmail OAuth2** — Connect your Gmail account and grant send permissions
3. **Google Sheets OAuth2** — Create a tracking spreadsheet, add the headers Name, Email, Event, Role, Attendee ID, Badge URL, Timestamp, and connect via OAuth2

Before activation:

- Replace YOUR_GOOGLE_SHEETS_ID with your Google Sheet ID
- Replace admin@example.com with your admin email address
- Add all three credentials
- Test with sample data

📊 Use Cases

- **Conferences & Seminars**: Generate badges for 100+ attendees
- **Universities**: Student ID cards and event passes
- **Corporate Events**: Employee badges with QR check-in
- **Training Programs**: Course participant badges
- **Workshops**: Professional badges with role identification
- **Trade Shows**: Exhibitor and visitor badges

🎨 Customization Options

- **Badge Design**: Modify the HTML/CSS for custom branding, colors, and logos
- **QR Code Size**: Adjust dimensions for different use cases
- **Email Template**: Personalize the welcome message and instructions
- **Role-Based Badges**: Different designs for VIP, Speaker, Staff, and Attendee
- **Multi-Event Support**: Handle multiple events with different templates

📈 What You'll Track

- Total badges generated
- Attendee names, emails, roles
- Badge image URLs for reprints
- Generation timestamps
- Event names and dates

⚡ Processing Time

- **Average**: 5–8 seconds per badge
- **Includes**: Validation, QR generation, HTML rendering, image conversion, logging, email

🔒 Security Features

- Email format validation
- Continue-on-fail error handling
- Admin notifications on failures
- Secure credential storage

💡 Pro Tips

- Use a dedicated Gmail account for automation
- Monitor HTMLCSStoImg API limits
- Create separate sheets for different events
- Archive old data periodically
- Set up webhook authentication for production

🚀 Getting Started

1. Import this workflow
2. Add the three required credentials
3. Update the Sheet ID and admin email
4. Test with sample registration data
5. Activate and integrate with your registration form

Perfect for event organizers, HR teams, universities, and anyone managing events with 10–1000+ attendees!
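The validate-and-sanitize step described above can be sketched as follows — the field names and rules here are illustrative assumptions, not the workflow's exact code:

```javascript
// Basic validation/sanitization for a registration payload
// (illustrative: the real workflow may apply different rules).
function validateRegistration({ name, email, role }) {
  const errors = [];
  const cleanName = (name ?? "").trim();
  const cleanRole = (role ?? "Attendee").trim(); // assumed default role
  const cleanEmail = (email ?? "").trim().toLowerCase();
  if (!cleanName) errors.push("name is required");
  // simple email-shape check: something@something.tld, no whitespace
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(cleanEmail)) errors.push("invalid email format");
  return errors.length
    ? { valid: false, errors }
    : { valid: true, attendee: { name: cleanName, email: cleanEmail, role: cleanRole } };
}

console.log(validateRegistration({ name: " Ada Lovelace ", email: "ADA@example.com", role: "Speaker" }).valid); // true
console.log(validateRegistration({ name: "", email: "not-an-email" }).valid); // false
```

Rejecting malformed payloads up front is what lets the later steps (QR generation, image rendering, email delivery) assume clean input, and it pairs with the admin-notification path for anything that fails.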
by isaWOW
## Description

Automatically re-engage old or inactive clients by sending AI-personalized follow-up emails using Claude 3.7 Sonnet, Gmail, and Google Sheets — with smart reply detection to avoid messaging clients who are already in an active conversation.

## What this workflow does

This workflow runs every day on a schedule and processes your entire old-client database automatically. For each client, it checks whether they've replied to any of your emails in the last 30 days. If they have, it pauses automation and flags that client for a manual reply. If they haven't, it fetches their last 10 email conversations, pulls scenario-based AI prompts and follow-up message direction templates from your Google Sheet, feeds everything into Claude 3.7 Sonnet to draft a highly personalized re-engagement email, sends it via Gmail, and updates your tracking sheet — all without any manual intervention.

Perfect for agencies and freelancers who have past clients sitting idle in their database and want a fully automated, intelligent outreach system that feels human, not spammy.

## Key Features

- **Smart 30-day reply detection:** Before sending any email, the workflow checks the client's most recent email timestamp. If they replied within the last 30 days, automation is skipped and the sheet is flagged as "Reply Manually" so your team knows to handle it personally.
- **Scenario-based AI prompting:** Each client in your sheet is tagged with a Scenario. The workflow pulls the exact AI prompt and follow-up message direction that matches that scenario from your Google Sheet, so Claude always writes from the right context and angle.
- **Full conversation context for Claude:** Instead of drafting blindly, Claude receives the last 10 email conversations with the client, their original goal when hiring your agency, the detailed reason their contract ended, their industry, industry vertical, and the specific services they used — resulting in emails that feel genuinely personalized.
- **Structured email output:** Claude outputs properly structured JSON with subject line, greeting, body content, and closing — ensuring the email is always cleanly formatted before sending.
- **Live Google Sheets tracking:** After every email sent, the workflow increments the email count in your sheet and updates the workflow status, giving you a live dashboard of where each client stands in the re-engagement sequence.
- **Rate limiting built in:** A 1-minute wait between clients prevents Gmail API rate limit errors and ensures smooth processing even for large client lists.
- **Loop-based batch processing:** Every client in your database is processed one by one in a controlled loop — no skipped records, no duplicates.

## How it works

1. **Daily trigger fires:** The workflow runs automatically every day using a Schedule Trigger. No manual action needed.
2. **Loads client database:** Reads all rows from the "Database" sheet in your Google Sheet where the "Manually Stop Workflow" column is not flagged, so already-stopped clients are excluded automatically.
3. **Loops through each client:** Passes each client record one by one into the processing loop using the Split In Batches node.
4. **Checks latest email from client:** Fetches the single most recent email from the client's address using Gmail's filter by sender.
5. **30-day window check:** A JavaScript code node calculates how many days ago that email was sent, checks whether it falls within the last 30 days, and formats the date cleanly (e.g., 21-Mar-2026).
6. **Routes based on reply status:** A Switch node branches the flow:
   - If the client replied within 30 days → updates the sheet with "Reply Manually" status and the latest email content, then loops back to the next client.
   - If there is no recent reply → continues to the AI email generation path.
7. **Parallel data fetching:** Three nodes run in parallel — fetching the scenario-specific follow-up message template, the situation-based AI prompt, and the last 10 email conversations with the client from Gmail.
8. **Bundles email history:** All 10 fetched emails are aggregated into a single text bundle to be passed to Claude as conversation context.
9. **Merges all inputs:** A Merge node combines the follow-up template, situation prompt, and email conversation bundle into one unified data object.
10. **AI drafts the email:** Claude 3.7 Sonnet receives the full context — prompt, follow-up direction, conversation history, the client's goal, the reason the contract ended, industry details, and services used — and drafts a re-engagement email tailored specifically to that client.
11. **Structured output parsing:** The output is parsed into a clean JSON structure with subject, greeting, content, and closing fields using a Structured Output Parser.
12. **Sends email via Gmail:** The formatted email is sent directly from your Gmail account to the client.
13. **Updates sheet and loops:** The "Number of Emails Sent" counter is incremented in your sheet, the workflow waits 1 minute for rate limiting, then loops back to process the next client.
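The 30-day window check in Step 5 boils down to logic like this — a simplified sketch of such a Code node (UTC date handling and the exact output shape are assumptions):

```javascript
// Compute how many days ago the client's latest email arrived, flag whether
// it falls inside the reply window, and format the date as "21-Mar-2026".
function checkReplyWindow(lastEmailDate, windowDays = 30, now = new Date()) {
  const last = new Date(lastEmailDate);
  const daysAgo = Math.floor((now.getTime() - last.getTime()) / 86400000);
  const months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                  "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
  const formatted =
    `${String(last.getUTCDate()).padStart(2, "0")}-${months[last.getUTCMonth()]}-${last.getUTCFullYear()}`;
  return { daysAgo, repliedRecently: daysAgo <= windowDays, formatted };
}

const result = checkReplyWindow("2026-03-21", 30, new Date("2026-04-05"));
console.log(result); // daysAgo: 15, repliedRecently: true, formatted: "21-Mar-2026"
```

The `repliedRecently` flag is what the Switch node in Step 6 branches on: true routes to "Reply Manually", false continues to AI generation.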
## Setup Requirements

**Tools you'll need:**

- Active n8n instance (self-hosted or n8n Cloud)
- Google Sheets with OAuth access for client database management
- Gmail account with OAuth credentials
- Anthropic API key (for Claude 3.7 Sonnet)

**Estimated setup time:** 20–30 minutes

## Configuration Steps

1. **Add credentials in n8n:** Google Sheets OAuth API, Gmail OAuth2 API, and Anthropic API (for Claude 3.7 Sonnet).
2. **Set up your Google Sheet with these tabs:**
   - **Tab 1 — Database** (main client list): Client Name, Email Address, Scenario, Number of Emails Sent, Followup Workflow (Running / Reply Manually), Latest Email from Client, Date of Latest Email from Client, The goal which client wanted to achieve by hiring an agency, Detailed Reason Why Contract Got Ended, Client is from Industry of, Client's Industry Vertical, Specific services they used, Manually Stop Workflow (STOP)
   - **Tab 2 — Follow-up Messages** (message templates): Scenario, Number of Emails Sent, Followup Message Direction
   - **Tab 3 — Situation** (AI prompts per scenario): Scenario, Prompt
3. **Update the Google Sheet ID:** Replace all instances of YOUR_GOOGLE_SHEET_ID in the workflow nodes with your actual Google Sheet ID.
4. **Update the send email address:** In the "Send Re-engagement Email" node, replace YOUR_EMAIL@yourdomain.com with the Gmail address you want to send from.
5. **Fill your client database:** Add all old/inactive clients with their details, scenario tags, and goals to the Database tab.
6. **Create your scenarios and templates:** Fill the Follow-up Messages and Situation tabs with the re-engagement angles and AI prompt instructions relevant to your business.
7. **Activate the workflow:** Turn it on and let it run daily automatically.

## Use Cases

- **Marketing & digital agencies:** Re-engage a full database of past clients who stopped using your services — automatically, every single day, with zero manual effort.
- **Freelancers:** Keep past clients warm by sending intelligent, personalized check-in emails based on what they originally hired you for and why they left.
- **SaaS companies:** Run structured win-back campaigns for churned users by mapping scenarios to different churn reasons and tailoring AI messages accordingly.
- **Consultants:** Maintain long-term relationships with former clients by sending contextually relevant follow-ups that reference their original goals and show how you've improved.
- **Sales teams:** Use the 30-day reply detection to automatically filter out recently responsive leads and focus AI outreach only on truly cold contacts in your pipeline.

## Customization Options

- **Change the AI model:** Swap Claude 3.7 Sonnet for any other Anthropic model or replace it with OpenAI GPT-4 in the LLM node — the agent and parser work with any LangChain-compatible model.
- **Adjust the reply detection window:** Change the 30 in the Date Checker code node to any number of days that fits your follow-up cadence (e.g., 14 days for more aggressive outreach).
- **Add more scenario types:** Simply add new rows to your Follow-up Messages and Situation sheets — the workflow dynamically fetches matching templates, so no node changes are needed.
- **Modify email structure:** Edit the Structured Output Parser schema to add or remove fields like a PS section, CTA button text, or a custom signature block.
- **Add notifications:** Connect a Slack, Discord, or webhook node after the Send Email node to notify your team every time a re-engagement email goes out.
- **Expand tracking:** Add more columns to your Google Sheet update nodes (e.g., last sent date, email subject used) to build a richer outreach history.

## Troubleshooting

- **Gmail not fetching emails:** Confirm your Gmail OAuth credentials are correctly connected and the sender filter is using the exact email address format from your sheet. Make sure Gmail API access is enabled in your Google Cloud Console.
- **Claude not generating emails:** Verify your Anthropic API key is active and has sufficient credits. Check that the Merge node is receiving all 3 inputs before passing data to the AI agent.
- **Sheet not updating:** Ensure the Google Sheets OAuth token has edit permissions on your spreadsheet. Confirm the "Email Address" column is set as the matching key in all update nodes.
- **Emails sending to the wrong address:** Double-check that the sendTo field in the Send Email node points to YOUR_EMAIL@yourdomain.com or the correct dynamic field reference.
- **Loop not processing all clients:** If some clients are being skipped, check the filter in the Old Client Database node — make sure the "Manually Stop Workflow (STOP)" column filter only excludes rows where the value is explicitly set.
- **Rate limit errors on Gmail:** Increase the wait time in the Rate Limit Wait node from 1 minute to 2–3 minutes if you have a large client list or are hitting Gmail's sending limits.

## Resources

- n8n Documentation
- Anthropic Claude API
- Gmail API Reference
- Google Sheets API
- n8n LangChain Agent Node

## Important Notes

This workflow is designed specifically for re-engagement outreach to past or inactive clients. It does not handle inbound replies — once a client responds, the workflow flags them for manual handling and stops automation for that contact. Make sure your Google Sheet is properly structured with all required columns before activating, as missing fields will make the AI prompt incomplete and degrade email quality. Always test with a small batch of 2–3 clients before activating at full scale.

## Support

Need help setting this up or want a custom version built for your specific use case?

📧 Email: info@isawow.com
🌐 Website: https://isawow.com/
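The structured output described in Step 11 — subject, greeting, content, closing — is assembled into a sendable email roughly like this (a sketch; the field names follow the step description, and the sample values are made up):

```javascript
// Assemble a sendable email from the parsed structured output.
// The four field names mirror the structure described in Step 11.
function assembleEmail(parsed) {
  const { subject, greeting, content, closing } = parsed;
  return { subject, body: [greeting, content, closing].join("\n\n") };
}

// Hypothetical parsed output for illustration
const email = assembleEmail({
  subject: "Picking up where we left off",
  greeting: "Hi Sam,",
  content: "It's been a while since we wrapped up your SEO project...",
  closing: "Best,\nYour Agency",
});
console.log(email.subject); // "Picking up where we left off"
```

Keeping the parts separate until this point is what lets the Structured Output Parser validate each field individually before anything reaches Gmail.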
by Davide
This workflow automates the collection, analysis, and reporting of Trustpilot reviews for a specific company using ScrapeGraphAI, transforming unstructured customer feedback into structured insights and actionable intelligence.

Key Advantages

1. ✅ End-to-End Automation: The entire process—from scraping reviews to delivering a polished management report—is fully automated, eliminating manual data collection and analysis.
2. ✅ Structured Insights from Unstructured Data: The workflow transforms raw, unstructured review text into structured fields and standardized sentiment categories, making analysis reliable and repeatable.
3. ✅ Company-Level Reputation Intelligence: Instead of focusing on individual products, the analysis evaluates the overall brand, service quality, customer experience, and operational performance, which is critical for leadership and strategic teams.
4. ✅ Action-Oriented Outputs: The AI-generated report goes beyond summaries by identifying reputational risks, highlighting improvement opportunities, and proposing concrete actions with priorities, effort estimates, and KPIs.
5. ✅ Visual & Executive-Friendly Reporting: Automatic sentiment charts and structured executive summaries make insights immediately understandable for non-technical stakeholders.
6. ✅ Scalable and Configurable: Easily adaptable to different companies or review volumes; page limits and batching protect against rate limits and excessive API usage.
7. ✅ Cross-Team Value: The output is tailored for multiple internal teams: Management, Marketing, Customer Support, Operations, and Product & UX.

Ideal Use Cases

- Brand reputation monitoring
- Voice-of-the-customer programs
- Executive reporting
- Customer experience optimization
- Competitive benchmarking (by reusing the workflow across brands)

How It Works

This workflow automates the complete process of scraping Trustpilot reviews, extracting structured data, analyzing sentiment, and generating comprehensive reports.
The workflow follows this sequence:

1. Trigger & Configuration: The workflow starts with a manual trigger, allowing users to set the target company URL and the number of review pages to scrape.
2. Review Scraping: An HTTP request node fetches review pages from Trustpilot with pagination support, extracting review links from the HTML content.
3. Review Processing: The workflow processes individual review pages in batches (limited to 5 reviews per execution for efficiency). Each review page is converted to clean markdown using ScrapeGraphAI.
4. Data Extraction: An information extractor using OpenAI's GPT-4.1-mini model parses the markdown to extract structured review data including author, rating, date, title, text, review count, and country.
5. Sentiment Analysis: Another OpenAI model performs sentiment classification on each review text, categorizing it as Positive, Neutral, or Negative.
6. Data Aggregation: Processed reviews are collected and compiled into a structured dataset.
7. Analytics & Visualization: A pie chart is generated showing sentiment distribution, and a comprehensive reputation analysis report is created using an AI agent that evaluates company-level insights and recurring themes and provides actionable recommendations.
8. Reporting & Delivery: The analysis is converted to HTML format and sent via email, providing stakeholders with immediate insights into customer feedback and company reputation.
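The pagination step of the review-scraping stage can be sketched as a simple URL builder. This is an illustration only: the real workflow does this inside an HTTP Request node, and the `page` query parameter is an assumption about the URL pattern.

```javascript
// Illustrative sketch of paginated scraping: build one review-page URL per
// page, up to the configured max_page value. The query parameter is assumed.
function buildReviewPageUrls(companyUrl, maxPage) {
  const urls = [];
  for (let page = 1; page <= maxPage; page++) {
    urls.push(`${companyUrl}?page=${page}`);
  }
  return urls;
}
```

The real workflow additionally waits 5 seconds between these paginated requests to comply with Trustpilot's terms of service.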
Set Up Steps

To configure and run this workflow:

1. Credential Setup: Configure OpenAI API credentials for the chat models and information extraction, set up ScrapeGraphAI credentials for webpage-to-markdown conversion, and configure Gmail OAuth2 credentials for email notifications.
2. Company Configuration: In the "Set Parameters" node, update company_id to the target Trustpilot company URL and adjust max_page to control how many review pages to scrape.
3. Review Processing Limits: The "Limit" node restricts processing to 5 reviews per execution to manage API costs and processing time. Adjust this value based on your needs and OpenAI usage limits.
4. Email Configuration: Update the "Send a message" node with the recipient email address, and customize the email subject and content as needed.
5. Analysis Customization: Modify the prompt in the "Company Reputation Analyst" node to tailor the report format, and adjust the sentiment analysis categories if a different classification is needed.
6. Execution: Click "Test workflow" to execute the manual trigger, monitor execution in the n8n editor to ensure all API calls succeed, and check the configured email inbox for the generated report.

Note: Be mindful of API rate limits and costs associated with OpenAI and ScrapeGraphAI services when processing large numbers of reviews. The workflow includes a 5-second delay between paginated requests to comply with Trustpilot's terms of service.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on Linkedin.
by Jitesh Dugar
Transform procurement from manual chaos to intelligent automation - AI-powered supplier selection analyzes urgency, cost, and delivery requirements to recommend optimal vendors, then automatically generates professional POs, manages approval workflows, and tracks delivery while maintaining complete audit trails. What This Workflow Does Revolutionizes purchase order management with AI-driven supplier optimization and automated procurement workflows: Webhook-Triggered Generation** - Automatically creates POs from inventory systems, manual requests, or threshold alerts Smart Data Validation** - Verifies item details, quantities, pricing, and calculates totals with tax and shipping AI Supplier Selection** - OpenAI agent analyzes order requirements and recommends optimal supplier based on multiple factors Intelligent Analysis** - AI considers urgency level, total value, item categories, delivery requirements, and cost optimization Multi-Supplier Database** - Maintains supplier profiles with contact details, payment terms, delivery times, and specializations Approval Workflow** - Routes high-value orders (>$5000) for management approval before supplier notification Professional PO Generation** - Creates beautifully formatted purchase orders with company branding and complete details AI Insights Display** - Shows supplier selection reasoning, cost optimization notes, and alternative supplier recommendations PDF Conversion** - Transforms HTML into print-ready, professional-quality purchase order documents Automated Email Distribution** - Sends POs directly to selected suppliers with all necessary attachments Google Drive Archival** - Automatically saves POs to organized folders with searchable filenames Procurement System Logging** - Records complete PO details, supplier info, and status in centralized system Delivery Tracking** - Monitors order status from placement through delivery confirmation Slack Team Notifications** - Real-time alerts to procurement team with PO 
details and AI recommendations Urgency Classification** - Prioritizes orders based on urgency (urgent, normal) affecting supplier selection Cost Optimization** - AI identifies opportunities for savings or faster delivery based on requirements Key Features AI-Powered Supplier Matching**: Machine learning analyzes order characteristics and recommends best supplier from database based on delivery speed, cost, and specialization Intelligent Trade-Off Analysis**: AI balances cost vs delivery time vs supplier capabilities to find optimal choice for specific order requirements Automatic PO Numbering**: Generates unique sequential purchase order numbers with format PO-YYYYMM-#### for tracking and reference Approval Threshold Management**: Configurable dollar thresholds trigger approval workflows for high-value purchases requiring management authorization Multi-Criteria Supplier Selection**: Considers urgency level, order value, item categories, delivery requirements, and historical performance Supplier Specialization Matching**: Routes technology orders to tech suppliers, construction materials to building suppliers, etc. Cost vs Speed Optimization**: AI recommends premium suppliers for urgent orders and budget suppliers for standard delivery timelines Alternative Supplier Suggestions**: Provides backup supplier recommendations in case primary choice is unavailable Real-Time Pricing Calculations**: Automatically computes line items, subtotals, taxes, shipping, and grand totals Payment Terms Automation**: Pulls supplier-specific payment terms (Net 30, Net 45, etc.) 
from supplier database Shipping Address Management**: Maintains multiple delivery locations with automatic address population Special Instructions Field**: Captures custom requirements, delivery notes, or handling instructions for suppliers Item Catalog Integration**: Supports product codes, descriptions, quantities, and unit pricing for accurate ordering Audit Trail Generation**: Complete activity log tracking PO creation, approvals, supplier notification, and delivery Status Tracking System**: Monitors PO lifecycle from creation through delivery confirmation with real-time updates Multi-Department Support**: Tracks requesting department for budget allocation and accountability Perfect For Retail Stores** - Automated inventory reordering when stock reaches threshold levels Manufacturing Companies** - Raw material procurement with delivery scheduling for production planning Restaurant Chains** - Food and supplies ordering with vendor rotation and cost optimization IT Departments** - Equipment purchasing with approval workflows for technology investments Construction Companies** - Materials procurement with urgency-based supplier selection for project timelines Healthcare Facilities** - Medical supplies ordering with compliance tracking and vendor management Educational Institutions** - Procurement for facilities, supplies, and equipment across departments E-commerce Businesses** - Inventory replenishment with AI-optimized supplier selection for margins Hospitality Industry** - Supplies procurement for hotels and resorts with cost control Government Agencies** - Compliant procurement workflows with approval chains and audit trails What You Will Need Required Integrations OpenAI API** - AI agent for intelligent supplier selection and optimization (API key required) HTML to PDF API** - PDF conversion service for professional PO documents (approximately 1-5 cents per PO) Gmail or SMTP** - Email delivery for sending POs to suppliers and approval requests Google Drive** 
- Cloud storage for PO archival and compliance documentation Optional Integrations Slack Webhook** - Procurement team notifications with PO details and AI insights Procurement Software** - ERP/procurement system API for automatic logging and tracking Inventory Management** - Connect to inventory systems for automated reorder triggers Accounting Software** - QuickBooks, Xero integration for expense tracking and reconciliation Supplier Portal** - Direct integration with supplier order management systems Approval Software** - Connect to approval management platforms for workflow automation Quick Start Import Template - Copy JSON workflow and import into your n8n instance Configure OpenAI - Add OpenAI API credentials for AI supplier selection agent Setup PDF Service - Add HTML to PDF API credentials in the HTML to PDF node Configure Gmail - Connect Gmail OAuth2 credentials and update sender email Connect Google Drive - Add Google Drive OAuth2 credentials and set folder ID for PO archival Customize Company Info - Edit company data with your company name, address, contact details Update Supplier Database - Modify supplier information in enrichment node with actual vendor details Set Approval Threshold - Adjust dollar amount requiring management approval ($5000 default) Configure Email Templates - Customize supplier email and approval request messages Add Slack Webhook - Configure Slack notification URL for procurement team alerts Test AI Agent - Submit sample order to verify AI supplier selection logic Test Complete Workflow - Run end-to-end test with real PO data to verify all integrations Customization Options Supplier Scoring Algorithm** - Adjust AI weighting for cost vs delivery speed vs quality factors Multi-Location Support** - Add multiple shipping addresses for different facilities or warehouses Budget Tracking** - Integrate departmental budgets with automatic budget consumption tracking Volume Discounts** - Configure automatic discount calculations based on 
order quantities Contract Compliance** - Enforce existing vendor contracts and preferred supplier agreements Multi-Currency Support** - Handle international suppliers with currency conversion and forex rates RFQ Generation** - Extend workflow to generate requests for quotes for new items Delivery Scheduling** - Integrate calendar for scheduled deliveries and receiving coordination Quality Tracking** - Add supplier performance scoring based on delivery time and quality Return Management** - Create return authorization workflows for defective items Recurring Orders** - Automate standing orders with scheduled generation Inventory Forecasting** - AI predicts reorder points based on historical consumption patterns Supplier Negotiation** - Track pricing history and flag opportunities for renegotiation Compliance Documentation** - Attach required certifications, insurance, or regulatory documents Multi-Approver Chains** - Configure complex approval hierarchies for different dollar thresholds Expected Results 90% time savings** - Reduce PO creation from 30 minutes to 3 minutes per order 50% faster supplier selection** - AI recommends optimal vendor instantly vs manual research Elimination of stockouts** - Automated reordering prevents inventory shortages 20-30% cost savings** - AI optimization identifies better pricing and supplier options 100% approval compliance** - No high-value orders bypass required approvals Zero lost POs** - Complete digital trail with automatic archival Improved supplier relationships** - Professional, consistent POs with clear requirements Faster order processing** - Suppliers receive clear POs immediately enabling faster fulfillment Better delivery predictability** - AI matches urgency to supplier capabilities reducing delays Reduced procurement overhead** - Automation eliminates manual data entry and follow-up Pro Tips Train AI with Historical Data** - Feed past successful orders to improve AI supplier recommendations Maintain Supplier 
Performance Scores** - Track delivery times and quality to enhance AI selection accuracy Set Smart Thresholds** - Adjust approval amounts based on department budgets and risk tolerance Use Urgency Levels Strategically** - Reserve "urgent" classification for true emergencies to optimize costs Monitor AI Recommendations** - Review AI reasoning regularly to validate supplier selection logic Integrate Inventory Triggers** - Connect to inventory systems for automatic PO generation at reorder points Establish Preferred Vendors** - Flag preferred suppliers in database for AI to prioritize when suitable Document Special Requirements** - Use special instructions field consistently for better supplier compliance Track Cost Trends** - Export PO data to analyze spending patterns and negotiation opportunities Review Alternative Suppliers** - Keep AI's alternative recommendations for backup when primary unavailable Schedule Recurring Orders** - Set up automated triggers for regular supply needs Centralize Receiving** - Use consistent ship-to addresses to simplify delivery coordination Archive Systematically** - Organize Drive folders by fiscal year, department, or supplier Test Approval Workflow** - Verify approval routing works before deploying to production Communicate AI Benefits** - Help the procurement team understand AI recommendations to build trust Business Impact Metrics Track these key metrics to measure workflow success: PO Generation Time** - Average minutes from request to supplier notification (target: under 5 minutes) Supplier Selection Accuracy** - Percentage of AI recommendations that meet delivery and cost expectations (target: 90%+) Approval Workflow Speed** - Average hours for high-value PO approvals (target: under 4 hours) Stockout Prevention** - Reduction in inventory shortages due to faster PO processing Cost Savings** - Percentage reduction in procurement costs from AI optimization (typical: 15-25%) Order Accuracy** - Reduction in PO errors requiring correction
or cancellation Supplier On-Time Delivery** - Improvement in delivery performance from better supplier matching Procurement Productivity** - Number of POs processed per procurement staff member Budget Compliance** - Percentage of POs staying within approved departmental budgets Audit Readiness** - Time required to produce PO documentation for audits (target: under 5 minutes) Template Compatibility Compatible with n8n version 1.0 and above Requires OpenAI API access for AI agent functionality Works with n8n Cloud and Self-Hosted instances Requires HTML to PDF API service subscription No coding required for basic setup Fully customizable supplier database and selection criteria Integrates with major procurement and ERP systems via API Supports unlimited suppliers and product categories Scales to handle thousands of POs monthly Ready to transform your procurement process? Import this template and start generating intelligent purchase orders with AI-powered supplier selection, automated approval workflows, and complete procurement tracking - eliminating manual processes, preventing stockouts, and optimizing costs across your entire supply chain!
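As a rough illustration of two pieces described above, the PO-YYYYMM-#### numbering scheme and the configurable $5,000 approval threshold could look like this in an n8n Code node. The source of the sequence counter (e.g. a running count in your procurement log) is an assumption, not part of the template's actual code.

```javascript
// Sketch of the PO-YYYYMM-#### numbering convention described above.
function makePoNumber(date, sequence) {
  const yyyymm =
    `${date.getUTCFullYear()}${String(date.getUTCMonth() + 1).padStart(2, '0')}`;
  return `PO-${yyyymm}-${String(sequence).padStart(4, '0')}`;
}

// High-value orders (over the threshold) are routed for management approval.
const APPROVAL_THRESHOLD = 5000; // configurable dollar amount

function needsApproval(grandTotal) {
  return grandTotal > APPROVAL_THRESHOLD;
}
```

Keeping the threshold in a single constant makes it easy to adjust per department budget and risk tolerance, as the pro tips suggest.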
by isaWOW
Paste your interview recording URL into a simple form, describe the moments you want to find, and the workflow takes care of everything else. WayinVideo AI scans the full recording and extracts only the clips that match your search — then downloads each one, uploads it to Google Drive, and logs every detail into a Google Sheet library. You receive a summary email with a direct link to your library the moment all clips are saved. Built for podcast producers, content teams, and agencies who want to build a searchable clip archive from interview recordings without any manual editing. What This Workflow Does AI moment extraction** — Sends your interview URL and a plain-English query to WayinVideo, which finds and exports only the clips matching your description Smart file naming** — Each clip is saved to Google Drive with a filename that includes the guest name, clip number, and relevance score — so your folder stays organised from day one Searchable clip library** — Saves every clip's title, description, timestamp, score, tags, and Drive link to a Google Sheet — so your whole team can search and reuse clips later Auto-polling with retry** — Waits 90 seconds, then checks if clips are ready, and loops back every 30 seconds automatically until processing is complete Permanent Drive storage** — Clips are uploaded to Google Drive immediately — no risk of losing them when WayinVideo export links expire after 24 hours Summary email on completion** — Sends a confirmation email with a link to your Google Sheet library as soon as all clips are saved Form-based input** — Anyone on your team can submit an interview and query through a web form — no access to n8n required Setup Requirements Tools you'll need: Active n8n instance (self-hosted or n8n Cloud) WayinVideo account + API key Google account connected to n8n via Google Drive OAuth2 Google account connected to n8n via Google Sheets OAuth2 Google account connected to n8n via Gmail OAuth2 A Google Sheet with a tab named 
exactly: Interview Clip Library Estimated Setup Time: 15–20 minutes Step-by-Step Setup Get your WayinVideo API key Log in at WayinVideo, go to your account settings or developer section, and copy your API key. Paste the API key into node "2. WayinVideo — Submit Find Moments" Open this node, find the Authorization header value, and replace YOUR_WAYIN_API_KEY with your actual key. Paste the API key into node "4. WayinVideo — Get Clips Result" Open this node, find the same Authorization header, and replace YOUR_WAYIN_API_KEY again. > ⚠️ This key appears in 2 nodes — you must replace it in both "2. WayinVideo — Submit Find Moments" and "4. WayinVideo — Get Clips Result". Missing either one will cause the workflow to fail. Set your Google Drive folder ID in node "9. Google Drive — Upload Clip" Open this node and replace YOUR_GDRIVE_FOLDER_ID with your actual folder ID. To find it: open your target Google Drive folder in a browser — the folder ID is the string of letters and numbers at the end of the URL after /folders/. Then connect your Google Drive credential via OAuth2 in the same node. Set your Google Sheet ID in node "10. Google Sheets — Save to Library" Open this node and replace YOUR_GOOGLE_SHEET_ID with your actual Sheet ID. To find it: open your Google Sheet in a browser — the Sheet ID is the long string in the URL between /d/ and /edit. Then connect your Google Sheets credential via OAuth2 in the same node. > ⚠️ Before running the workflow, make sure your Google Sheet has a tab named exactly Interview Clip Library — the sheet name must match this exactly or the workflow will fail to save rows. Connect your Gmail account in node "11. Gmail — Send Summary Email" Open this node, click the credential field, and connect your Google account via Gmail OAuth2. Follow the on-screen prompts to authorise n8n. Activate the workflow Toggle the workflow to Active. Open the form URL generated by node "1. 
Form — Interview URL + Details" and submit a test interview URL to confirm the full workflow runs end to end. How It Works (Step by Step) Step 1 — Form Trigger (Web Form) The workflow starts when someone fills in the web form. You enter five things: the interview recording URL (Zoom, YouTube, or any direct link), the guest's name, the interview topic or category, a plain-English description of what moments to find, and the email address for the summary. The form includes query tips to help users write better searches — the form description gives examples like "career advice and turning point moments" to guide them. Step 2 — Submit to WayinVideo Find Moments The recording URL, search query, and project name (built from the guest name and topic) are sent to WayinVideo's Find Moments endpoint. The request asks for up to 5 matching clips at HD 720p quality with original-language captions. WayinVideo scans the video and returns a task ID to track the job. Step 3 — Wait 90 Seconds The workflow pauses for 90 seconds to give WayinVideo time to scan the recording before checking for results. This prevents the workflow from requesting results before they are ready. Step 4 — Poll WayinVideo for Results The workflow calls WayinVideo's results endpoint using the task ID from Step 2. It checks whether the clips have been found and exported yet, and receives either a completed clips list or a status showing the job is still running. Step 5 — Check: Status SUCCEEDED? (YES / NO branch) YES** — If the status equals SUCCEEDED, the workflow moves forward to split and process each clip. NO** — If the job is still running, the workflow routes to a 30-second wait and then loops back to Step 4 to check again. This repeats automatically until the clips are ready. > ⚠️ Infinite Loop Risk: If WayinVideo never returns SUCCEEDED — due to an invalid URL, private video, or API error — this loop runs forever. Add a retry counter to stop after 8–10 attempts and send an error notification instead. 
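The retry-counter fix suggested in the warning above can be sketched as a small routing helper. The action names here are illustrative, standing in for the workflow branches (process clips, wait and re-poll, or send a failure notification).

```javascript
// Hedged sketch of capped polling: instead of looping forever, stop after
// MAX_ATTEMPTS and route to an error notification. Names are illustrative.
const MAX_ATTEMPTS = 10;

function nextPollAction(status, attempt) {
  if (status === 'SUCCEEDED') return 'process_clips';       // Step 7 onwards
  if (attempt >= MAX_ATTEMPTS) return 'send_failure_email'; // give up safely
  return 'wait_and_retry';                                  // 30-second wait
}
```

A Set node incrementing `attempt` on each loop pass, plus one extra IF check, is enough to implement this in the workflow itself.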
Step 6 — Wait 30 Seconds (Retry) When clips are not ready yet, the workflow pauses 30 seconds before checking again. This gap prevents too many requests hitting the WayinVideo API in quick succession. Step 7 — Split Each Clip (Code) Once clips are ready, this step reads the full clips list and splits it into individual results — one per clip. Each result carries the clip title, download link, relevance score, description, tags, start time, end time, and clip index number. Step 8 — Download Each Clip File For each clip, the workflow fetches the video file from WayinVideo's export link and downloads it as a binary file — ready to be uploaded to Google Drive. Step 9 — Upload to Google Drive Each downloaded clip is uploaded to your specified Google Drive folder. The filename is built automatically using the guest name, clip number, and relevance score — for example: Raj_Sharma_Clip_1_Score_87.mp4. This keeps your Drive folder organised and sortable by score. Step 10 — Save to Google Sheets Library After each clip is uploaded, the workflow adds a new row to your Interview Clip Library sheet. It records the guest name, topic, query used, clip title, description, timestamp range, relevance score, tags, a direct Google Drive view link, the Drive file ID, the original interview URL, and the date. This creates a searchable, permanent library your whole team can use. Step 11 — Gmail Sends Summary Email After all clips are saved to the sheet, a summary email is sent to the address you entered in the form. The email confirms that clips have been uploaded to Google Drive and saved to the library, and reminds you to open the Interview Clip Library sheet to access all the Drive links. The final result is a set of clips in Google Drive, a fully logged library row per clip in Google Sheets, and a confirmation email — all from one form submission. 
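The Step 9 filename convention (guest name, clip number, relevance score) can be sketched as below; the exact whitespace handling is an assumption.

```javascript
// Sketch of the self-sorting filename scheme from Step 9, producing names
// like Raj_Sharma_Clip_1_Score_87.mp4. Whitespace handling is assumed.
function clipFilename(guestName, clipIndex, score) {
  const safeName = guestName.trim().replace(/\s+/g, '_');
  return `${safeName}_Clip_${clipIndex}_Score_${score}.mp4`;
}
```

Because the score is embedded in the name, sorting the Drive folder alphabetically groups each guest's clips and makes relevance visible at a glance.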
Key Features ✅ Plain-English search queries — Describe moments in normal language — "failure and comeback story" or "key career turning point" — and WayinVideo finds them ✅ Score-based file naming — Every clip filename includes the guest name and relevance score — so your Drive folder is self-sorting and instantly readable ✅ Permanent clip archive — Clips are saved to Google Drive immediately — no 24-hour expiry risk because the file is stored before the export link dies ✅ Full metadata in Google Sheets — Every clip's title, description, timestamp, score, tags, and Drive link are logged — your team can search and reuse clips without rewatching recordings ✅ Auto-retry polling — The workflow keeps checking until clips are ready — no manual monitoring needed ✅ Guest + topic project naming — WayinVideo jobs are named using the guest name and topic you enter — so your WayinVideo project dashboard stays organised too ✅ Captions on every clip — Original-language captions are embedded in each exported clip — useful for accessibility and silent viewing ✅ One summary email per run — A single confirmation email is sent after all clips are saved — clean and non-spammy Customisation Options Extract more clips per search In node "2. WayinVideo — Submit Find Moments", change "limit": 5 to a higher number — use 10 to get a broader set of matching moments from longer recordings. Upgrade to Full HD resolution In node "2. WayinVideo — Submit Find Moments", change "resolution": "HD_720" to "FULL_HD_1080" for higher quality exports — useful for client-facing or broadcast use. Generate vertical clips for social media In node "2. WayinVideo — Submit Find Moments", add "enable_ai_reframe": true and "ratio": "RATIO_9_16" to automatically reframe clips to 9:16 vertical format — ready for Reels, Shorts, or TikTok. Add a Slack notification when the library is updated Insert a Slack node after "11. 
Gmail — Send Summary Email" to post a channel message with the guest name, topic, and number of clips saved — so your team gets notified without checking email. Sort clips into guest-specific Drive subfolders In node "9. Google Drive — Upload Clip", change the folder ID to a dynamic expression using the guest name field from the form — so each guest's clips automatically go into their own subfolder. Add a retry limit to stop infinite loops Add a Set step before "6. Wait — 30 Seconds Retry" to track a retry counter, then add a second IF check to stop after 10 attempts and route to "11. Gmail — Send Summary Email" with a failure message instead of looping forever. Troubleshooting WayinVideo API key not working: Check that you replaced YOUR_WAYIN_API_KEY in both "2. WayinVideo — Submit Find Moments" and "4. WayinVideo — Get Clips Result" — missing either one causes the workflow to fail Confirm your WayinVideo account is active and the key has not expired or been revoked Make sure there are no extra spaces before or after the key when pasting Workflow stuck in the retry loop: Check that the interview recording URL is publicly accessible — private, password-protected, or region-blocked videos will not be processed by WayinVideo Open the output of "4. WayinVideo — Get Clips Result" and inspect the raw response — WayinVideo may have returned an error message explaining the failure If the status never reaches SUCCEEDED, deactivate and reactivate the workflow, fix the video URL, and resubmit the form Google Drive upload failing: Open node "9. Google Drive — Upload Clip" and confirm the Google Drive OAuth2 credential is connected and not expired — reconnect it if needed Confirm YOUR_GDRIVE_FOLDER_ID was replaced with just the folder ID string (e.g. 
1BxiMVs0XRA5nFMdKvBdBZjgmUUqptlbs), not the full URL Check that your Google account has write permission for the target folder Google Sheets not saving rows: Confirm your Google Sheet has a tab named exactly Interview Clip Library — the name must match character-for-character, including capitalisation Open node "10. Google Sheets — Save to Library" and check that YOUR_GOOGLE_SHEET_ID was replaced with just the Sheet ID (the string between /d/ and /edit in the Sheet URL) Check that the Google Sheets OAuth2 credential is connected and not expired — reconnect it in n8n credentials if needed No clips found (empty results): Your query may be too short or too vague — WayinVideo needs descriptive phrases to find relevant moments; try "career advice and lessons learned" instead of just "advice" Confirm the recording has clear spoken audio — low-quality audio or heavy background noise reduces accuracy Check that the recording URL is a direct video link and not a login-required viewer or landing page Support Need help setting this up or want a custom version built for your team or agency? 📧 Email: info@isawow.com 🌐 Website: https://isawow.com
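Pasting the full URL instead of just the ID is a common cause of the YOUR_GDRIVE_FOLDER_ID and YOUR_GOOGLE_SHEET_ID failures covered above. A small helper can extract just the ID from either URL (a sketch, assuming Google IDs are strings of letters, digits, hyphens, and underscores):

```javascript
// Extract the folder ID from a Drive folder URL (the part after /folders/).
function driveFolderId(url) {
  const m = url.match(/\/folders\/([A-Za-z0-9_-]+)/);
  return m ? m[1] : null;
}

// Extract the Sheet ID from a Google Sheet URL (between /d/ and /edit).
function sheetId(url) {
  const m = url.match(/\/d\/([A-Za-z0-9_-]+)/);
  return m ? m[1] : null;
}
```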
by explorium
Outbound Agent - AI-Powered Lead Generation with Natural Language Prospecting This n8n workflow transforms natural language queries into targeted B2B prospecting campaigns by combining Explorium's data intelligence with AI-powered research and personalized email generation. Simply describe your ideal customer profile in plain English, and the workflow automatically finds prospects, enriches their data, researches them, and creates personalized email drafts. DEMO Template Demo Credentials Required To use this workflow, set up the following credentials in your n8n environment: Anthropic API Type:** API Key Used for:** AI Agent query interpretation, email research, and email writing Get your API key at Anthropic Console Explorium API Type:** Generic Header Auth Header:** Authorization Value:** Bearer YOUR_API_KEY Used for:** Prospect matching, contact enrichment, professional profiles, and MCP research Get your API key at Explorium Dashboard Explorium MCP Type:** HTTP Header Auth Used for:** Real-time company and prospect intelligence research Connect to: https://mcp.explorium.ai/mcp Gmail Type:** OAuth2 Used for:** Creating email drafts Alternative options: Outlook, Mailchimp, SendGrid, Lemlist Go to Settings → Credentials, create these credentials, and assign them in the respective nodes before running the workflow. Workflow Overview Node 1: When chat message received This node creates an interactive chat interface where users can describe their prospecting criteria in natural language. 
Type:** Chat Trigger Purpose:** Accept natural language queries like "Get 5 marketing leaders at fintech startups who joined in the past year and have valid contact information" Example Prompts:** "Find SaaS executives in New York with 50-200 employees" "Get marketing directors at healthcare companies" "Show me VPs at fintech startups with recent funding" Node 2: Chat or Refinement This code node manages the conversation flow, handling both initial user queries and validation error feedback. Function:** Routes either the original chat input or validation error messages to the AI Agent Dynamic Input:** Combines chatInput and errorInput fields Purpose:** Creates a feedback loop for validation error correction Node 3: AI Agent The core intelligence node that interprets natural language and generates structured API calls. Functionality: Interprets user intent from natural language queries Maps concepts to Explorium API filters (job levels, departments, company size, revenue, location, etc.) Generates valid JSON requests with precise filter criteria Handles off-topic queries with helpful guidance Connected to MCP Client for real-time filter specifications AI Components: Anthropic Chat Model:** Claude Sonnet 4 for query interpretation Simple Memory:** Maintains conversation context (100 message window) Output Parser:** Structured JSON output with schema validation MCP Client:** Connected to https://mcp.explorium.ai/mcp for Explorium specifications System Instructions: Expert in converting natural language to Explorium API filters Can revise previous responses based on validation errors Strict adherence to allowed filter values and formats Default settings: mode: "full", size: 10000, page_size: 100, has_email: true Node 4: API Call Validation This code node validates the AI-generated API request against Explorium's filter specifications. 
Validation Checks: Filter key validity (only allowed filters from approved list) Value format correctness (enums, ranges, country codes) No duplicate values in arrays Proper range structure for experience fields (total_experience_months, current_role_months) Required field presence Allowed Filters: country_code, region_country_code, company_country_code, company_region_country_code company_size, company_revenue, company_age, number_of_locations google_category, naics_category, linkedin_category, company_name city_region_country, website_keywords has_email, has_phone_number job_level, job_department, job_title business_id, total_experience_months, current_role_months Output: isValid: Boolean validation status validationErrors: Array of specific error messages Node 5: Is API Call Valid? Conditional routing node that determines the next step based on validation results. If Valid:** Proceed to Explorium API: Fetch Prospects If Invalid:** Route to Validation Prompter for correction Node 6: Validation Prompter Generates detailed error feedback for the AI Agent when validation fails. This creates a self-correcting loop where the AI learns from validation errors and regenerates compliant requests by routing back to Node 2 (Chat or Refinement). Node 7: Explorium API: Fetch Prospects Makes the validated API call to Explorium's prospect database. Method:** POST Endpoint:** /v1/prospects/fetch Authentication:** Header Auth (Bearer token) Input:** JSON with filters, mode, size, page_size, page Returns:** Array of matched prospects with prospect IDs based on filter criteria Node 8: Pull Prospect IDs Extracts prospect IDs from the fetch response for bulk enrichment. Input:** Full fetch response with prospect data Output:** Array of prospect_id values formatted for enrichment API Node 9: Explorium API: Contact Enrichment Single enrichment node that enhances prospect data with both contact and profile information. 
Method:** POST Endpoint:** /v1/prospects/enrich Enrichment Types:** contacts, profiles Authentication:** Header Auth (Bearer token) Input:** Array of prospect IDs from Node 8 Returns: Contacts:** Professional emails (current, verified), phone numbers (mobile, work), email validation status, all available email addresses Profiles:** Full professional history, current role details, company information, skills and expertise, education background, experience timeline, job titles and seniority levels Node 10: Clean Output Data Transforms and structures the enriched data for downstream processing. Node 11: Loop Over Items Iterates through each prospect to generate individualized research and emails. Batch Size:** 1 (processes prospects one at a time) Purpose:** Enable personalized research and email generation for each prospect Loop Control:** Processes until all prospects are complete Node 12: Research Email AI-powered research agent that investigates each prospect using Explorium MCP. Input Data: Prospect name, job title, company name, company website LinkedIn URL, job department, skills Research Focus: Company automation tool usage (n8n, Zapier, Make, HubSpot, Salesforce) Data enrichment practices Tech stack and infrastructure (Snowflake, Segment, etc.) Recent company activity and initiatives Pain points related to B2B data (outdated CRM data, manual enrichment, static workflows) Public content (speaking engagements, blog posts, thought leadership) AI Components: Anthropic Chat Model1:** Claude Sonnet 4 for research Simple Memory1:** Maintains research context Explorium MCP1:** Connected to https://mcp.explorium.ai/mcp for real-time intelligence Output: Structured JSON with research findings including automation tools, pain points, personalization notes Node 13: Email Writer Generates personalized cold email drafts based on research findings. 
Input Data: Contact info from Loop Over Items Current experience and skills Research findings from Research Email agent Company data (name, website) AI Components: Anthropic Chat Model3:** Claude Sonnet 4 for email writing Structured Output Parser:** Enforces JSON schema with email, subject, message fields Output Schema: email: Selected prospect email address (professional preferred) subject: Compelling, personalized subject line message: HTML formatted email body Node 14: Create a draft (Gmail) Creates email drafts in Gmail for review before sending. Resource:** Draft Subject:** From Email Writer output Message:** HTML formatted email body Send To:** Selected prospect email address Authentication:** Gmail OAuth2 After Creation: Loops back to Node 11 (Loop Over Items) to process next prospect Alternative Output Options: Outlook:** Create drafts in Microsoft Outlook Mailchimp:** Add to email campaign SendGrid:** Queue for sending Lemlist:** Add to cold email sequence Workflow Flow Summary Input: User describes target prospects in natural language via chat interface Interpret: AI Agent converts query to structured Explorium API filters using MCP Validate: API call validation ensures filter compliance Refine: If invalid, error feedback loop helps AI correct the request Fetch: Retrieve matching prospect IDs from Explorium database Enrich: Parallel bulk enrichment of contact details and professional profiles Clean: Transform and structure enriched data Loop: Process each prospect individually Research: AI agent uses Explorium MCP to gather company and prospect intelligence Write: Generate personalized email based on research Draft: Create reviewable email drafts in preferred platform This workflow eliminates manual prospecting work by combining natural language processing, intelligent data enrichment, automated research, and personalized email generation—taking you from "I need marketing leaders at fintech companies" to personalized, research-backed email drafts in 
minutes. Customization Options Flexible Triggers The chat interface can be replaced with: Scheduled runs for recurring prospecting Webhook triggers from CRM updates Manual execution for ad-hoc campaigns Scalable Enrichment Adjust enrichment depth by: Adding more Explorium API endpoints (technographics, funding, news) Configuring prospect batch sizes Customizing data cleaning logic Output Destinations Route emails to your preferred platform: Email Platforms:** Gmail, Outlook, SendGrid, Mailchimp Sales Tools:** Lemlist, Outreach, SalesLoft CRM Integration:** Salesforce, HubSpot (create leads with research) Collaboration:** Slack notifications, Google Docs reports AI Model Flexibility Swap AI providers based on your needs: Default: Anthropic Claude (Sonnet 4) Alternatives: OpenAI GPT-4, Google Gemini Setup Notes Domain Filtering: The workflow prioritizes professional emails—customize email selection logic in the Clean Output Data node MCP Configuration: Explorium MCP requires Header Auth setup—ensure credentials are properly configured Rate Limits: Adjust Loop Over Items batch size if hitting API rate limits Memory Context: Simple Memory maintains conversation history—increase window length for longer sessions Validation: The AI self-corrects through validation loops—monitor early runs to ensure filter accuracy This workflow represents a complete AI-powered sales development representative (SDR) that handles prospecting, research, and personalized outreach with minimal human intervention.
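The filter validation described in Node 4 can be sketched roughly like this — a hypothetical reconstruction using the allowed-filter list and checks from the node description; the template's actual Code node may differ:

```javascript
// Hypothetical sketch of the Node 4 validation logic. Checks filter keys
// against the allowed list, rejects duplicate array values, and verifies
// range structure for the experience filters.
const ALLOWED_FILTERS = new Set([
  "country_code", "region_country_code", "company_country_code",
  "company_region_country_code", "company_size", "company_revenue",
  "company_age", "number_of_locations", "google_category", "naics_category",
  "linkedin_category", "company_name", "city_region_country",
  "website_keywords", "has_email", "has_phone_number", "job_level",
  "job_department", "job_title", "business_id",
  "total_experience_months", "current_role_months",
]);
const RANGE_FILTERS = new Set(["total_experience_months", "current_role_months"]);

function validateApiCall(request) {
  const validationErrors = [];
  for (const [key, value] of Object.entries(request.filters ?? {})) {
    if (!ALLOWED_FILTERS.has(key)) {
      validationErrors.push(`Unknown filter: ${key}`);
      continue;
    }
    if (Array.isArray(value) && new Set(value).size !== value.length) {
      validationErrors.push(`Duplicate values in ${key}`);
    }
    if (RANGE_FILTERS.has(key) && (typeof value !== "object" || Array.isArray(value) || value === null)) {
      validationErrors.push(`${key} must be a range object, e.g. { gte: 12 }`);
    }
  }
  return { isValid: validationErrors.length === 0, validationErrors };
}
```

When `isValid` is false, the `validationErrors` array is what the Validation Prompter feeds back to the AI Agent so it can regenerate a compliant request.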
by Zac Nielsen
Automatically draft email replies using AI. This workflow monitors your Gmail inbox, filters out automated emails (newsletters, receipts, notifications), and uses AI to create draft responses only for emails that genuinely need your attention. Who is this for? Freelancers, consultants, and business owners who want to reduce email response time while maintaining quality responses. How it works Gmail Trigger polls your inbox for new emails AI Classification (OpenRouter) determines if the email needs a human response - automatically filters out newsletters, receipts, system notifications, and marketing emails Email Draft Agent (OpenAI) generates a contextual draft reply matching your writing style Gmail saves the draft to your drafts folder Telegram sends you a notification when drafts are created Prerequisites Gmail account with OAuth2 access OpenAI API key OpenRouter API key Telegram bot token and chat ID (Optional) Supabase account for vector store Setup steps Import the workflow and connect your Gmail OAuth2 credentials Add your OpenAI and OpenRouter API keys Create a Telegram bot via BotFather and add your chat ID to the notification nodes Customise the Email Draft Agent system prompt with your business context and example emails Test with a few emails before activating
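As a rough illustration of the filtering step, a deterministic pre-filter in a Code node could catch obviously automated mail before (or alongside) the AI classification. The signals and function name below are illustrative, not part of the template:

```javascript
// Hypothetical heuristic pre-filter mirroring what the AI classifier decides:
// automated senders, bulk-mail headers, and receipt-style subjects usually
// mean no human reply (and therefore no draft) is needed.
function needsHumanReply(email) {
  const automatedSender = /no-?reply|notifications?@|newsletter@/i.test(email.from);
  const bulkHeader = Boolean(email.headers?.["list-unsubscribe"]);
  const receiptSubject = /receipt|invoice paid|order confirmation/i.test(email.subject);
  return !(automatedSender || bulkHeader || receiptSubject);
}
```

A pre-filter like this can reduce OpenRouter API calls, while the AI classification still handles the ambiguous cases.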
by Feras Dabour
Who’s it for This template is for founders, finance teams, and solo operators who receive lots of invoices by email and want them captured automatically in a single, searchable source of truth. If you’re tired of hunting through your inbox for invoice PDFs or “that one receipt from three months ago,” this is for you. What it does / How it works The workflow polls your Gmail inbox on a schedule and fetches new messages including their attachments. A JavaScript Code node restructures all attachments, and a PDF extraction node reads any attached PDFs. An AI “Invoice Recognition Agent” then analyzes the email body and attachments to decide whether the email actually contains an invoice. If not, the workflow stops. If it is an invoice, a second AI “Invoice Data Extractor” pulls structured fields such as date_email, date_invoice, invoice_nr, description, provider, net_amount, vat, gross_amount, label (saas/hardware/other), and currency. Depending on whether the invoice is in an attachment or directly in the email text, the workflow either: uploads the invoice file to Google Drive, or records a direct link to the email, then appends/updates a row in Google Sheets with all invoice parameters plus a Drive link, and finally marks the Gmail message as read. How to set up Add and authenticate: Gmail credentials Google Sheets credentials Google Drive credentials OpenAI (or compatible) credentials for the AI nodes Create or select a Google Sheet with the expected columns (date_email, date_invoice, invoice_nr, description, provider, net_amount, vat, gross_amount, label, currency, link). Create or select a Google Drive folder where invoices/docs should be stored. Adjust the Gmail Trigger filters (labels, search query, polling interval) to match the mailbox you want to process. Update node credentials and resource IDs (Sheet, Drive folder) via the node UIs, not hardcoded in HTTP nodes. 
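Using the field names above, one extracted record appended to the Sheet might look like this (all values are made up for illustration):

```javascript
// Illustrative example of one extracted invoice row (field names from the
// template's schema; the values are invented).
const invoiceRow = {
  date_email: "2024-05-02",
  date_invoice: "2024-04-30",
  invoice_nr: "INV-2024-0117",
  description: "Team plan, monthly subscription",
  provider: "Acme Cloud GmbH",
  net_amount: 84.0,
  vat: 15.96,
  gross_amount: 99.96,
  label: "saas",    // one of: saas | hardware | other
  currency: "EUR",
  link: "https://drive.google.com/file/d/EXAMPLE/view", // Drive file or Gmail link
};

// A quick sanity check (hypothetical) before appending the row to the Sheet:
function amountsConsistent(row, tolerance = 0.01) {
  return Math.abs(row.net_amount + row.vat - row.gross_amount) <= tolerance;
}
```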
Requirements n8n instance (cloud or self-hosted) Gmail account with OAuth2 setup Google Drive and Google Sheets access OpenAI (or compatible) API key configured in n8n Sufficient permissions to read emails, read/write Drive files, and edit the target Sheet How to customize the workflow Change invoice categories**: Extend the label enum (e.g., add “services”, “subscriptions”) in the extraction schema and adjust any downstream logic. Refine invoice detection**: Tweak the AI prompts to be more or less strict about what counts as an invoice or receipt. Add notifications**: After updating the Sheet, send a Slack/Teams message or email summary for high-value invoices. Filter by sender or subject**: Narrow the Gmail Trigger to specific vendors, labels, or keywords. Extend the data model**: Add fields (e.g., cost center, project code) to the extractor prompt and Sheet mapping to fit your bookkeeping setup.
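For the "Change invoice categories" customization, extending the label enum in a JSON-Schema-style extractor definition might look like this (a hypothetical shape — adapt it to the extractor node's actual schema format):

```javascript
// Sketch of a structured-output schema with an extended label enum.
// "services" and "subscriptions" are the added categories.
const extractionSchema = {
  type: "object",
  properties: {
    label: {
      type: "string",
      enum: ["saas", "hardware", "services", "subscriptions", "other"],
    },
    // ...other invoice fields (invoice_nr, net_amount, etc.)
  },
  required: ["label"],
};
```

Remember to mirror any new categories in downstream logic (Sheet filters, notification rules) so rows stay consistent.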
by Vigh Sandor
Network Vulnerability Scanner (using Nmap as its engine) with Automated CVE Report Workflow Overview This n8n workflow provides comprehensive network vulnerability scanning with automated CVE enrichment and professional report generation. It performs Nmap scans, queries the National Vulnerability Database (NVD) for CVE information, generates detailed HTML/PDF reports, and distributes them via Telegram and email. Key Features Automated Network Scanning**: Full Nmap service and version detection scan CVE Enrichment**: Automatic vulnerability lookup using NVD API CVSS Scoring**: Vulnerability severity assessment with CVSS v3.1/v3.0 scores Professional Reporting**: HTML reports with detailed findings and recommendations PDF Generation**: Password-protected PDF reports using Prince XML Multi-Channel Distribution**: Telegram and email delivery Multiple Triggers**: Webhook API, web form, manual execution, scheduled scans Rate Limiting**: Respects NVD API rate limits Comprehensive Data**: Service detection, CPE matching, CVE details with references Use Cases Regular security audits of network infrastructure Compliance scanning for vulnerability management Penetration testing reconnaissance phase Asset inventory with vulnerability context Continuous security monitoring Vulnerability assessment reporting for management DevSecOps integration for infrastructure testing Setup Instructions Prerequisites Before setting up this workflow, ensure you have: System Requirements n8n instance (self-hosted) with command execution capability Alpine Linux base image (or compatible Linux distribution) Minimum 2 GB RAM (4 GB recommended for large scans) 2 GB free disk space for dependencies Network access to scan targets Internet connectivity for NVD API Required Knowledge Basic networking concepts (IP addresses, ports, protocols) Understanding of CVE/CVSS vulnerability scoring Nmap scanning basics External Services Telegram Bot (optional, for Telegram notifications) Email server / SMTP 
credentials (optional, for email reports) NVD API access (public, no API key required but rate-limited) Step 1: Understanding the Workflow Components Core Dependencies Nmap: Network scanner Purpose: Port scanning, service detection, version identification Usage: Performs TCP SYN scan with service/version detection nmap-helper: JSON conversion tool Repository: https://github.com/net-shaper/nmap-helper Purpose: Converts Nmap XML output to JSON format Prince XML: HTML to PDF converter Website: https://www.princexml.com Version: 16.1 (Alpine 3.20) Purpose: Generates professional PDF reports from HTML Features: Password protection, print-optimized formatting NVD API: Vulnerability database Endpoint: https://services.nvd.nist.gov/rest/json/cves/2.0 Purpose: CVE information, CVSS scores, vulnerability descriptions Rate Limit: Public API allows limited requests per minute Documentation: https://nvd.nist.gov/developers Step 2: Telegram Bot Configuration (Optional) If you want to receive reports via Telegram: Create Telegram Bot Open Telegram and search for @BotFather Start a chat and send /newbot Follow prompts: Bot name: Network Scanner Bot (or your choice) Username: network_scanner_bot (must end with 'bot') BotFather will provide: Bot token: 123456789:ABCdefGHIjklMNOpqrsTUVwxyz (save this) Bot URL: https://t.me/your_bot_username Get Your Chat ID Start a chat with your new bot Send any message to the bot Visit: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates Find your chat ID in the response Save this chat ID (e.g., 123456789) Alternative: Group Chat ID For sending to a group: Add bot to your group Send a message in the group Check getUpdates URL Group chat IDs are negative: -1001234567890 Add Credentials to n8n Navigate to Credentials in n8n Click Add Credential Select Telegram API Fill in: Access Token: Your bot token from BotFather Click Save Test connection if available Step 3: Email Configuration (Optional) If you want to receive reports via email: Add SMTP 
Credentials to n8n Navigate to Credentials in n8n Click Add Credential Select SMTP Fill in: Host: SMTP server address (e.g., smtp.gmail.com) Port: SMTP port (587 for TLS, 465 for SSL, 25 for unencrypted) User: Your email username Password: Your email password or app password Secure: Enable for TLS/SSL Click Save Gmail Users: Enable 2-factor authentication Generate app-specific password: https://myaccount.google.com/apppasswords Use app password in n8n credential Step 4: Import and Configure Workflow Configure Basic Parameters Locate "1. Set Parameters" Node: Click the node to open settings Default configuration: network: Input from webhook/form/manual trigger timestamp: Auto-generated (format: yyyyMMdd_HHmmss) report_password: Almafa123456 (change this!) Change Report Password: Edit report_password assignment Set strong password: 12+ characters, mixed case, numbers, symbols This password will protect the PDF report Save changes Step 5: Configure Notification Endpoints Telegram Configuration Locate "14/a. Send Report in Telegram" Node: Open node settings Update fields: Chat ID: Replace -123456789012 with your actual chat ID Credentials: Select your Telegram credential Save changes Message customization: Current: Sends PDF as document attachment Automatic filename: vulnerability_report_<timestamp>.pdf No caption by default (add if needed) Email Configuration Locate "14/b. 
Send Report in Email with SMTP" Node: Open node settings Update fields: From Email: report.creator@example.com → Your sender email To Email: report.receiver@example.com → Your recipient email Subject: Customize if needed (default includes network target) Text: Email body message Credentials: Select your SMTP credential Save changes Multiple Recipients: Change toEmail field to comma-separated list: admin@example.com, security@example.com, manager@example.com Add CC/BCC: In node options, add: cc: Carbon copy recipients bcc: Blind carbon copy recipients Step 6: Configure Triggers The workflow supports 4 trigger methods: Trigger 1: Webhook API (Production) Locate "Webhook" Node: Path: /vuln-scan Method: POST Response: Immediate acknowledgment "Process started!" Async: Scan runs in background Trigger 2: Web Form (User-Friendly) Locate "On form submission" Node: Path: /webhook-test/form/target Method: GET (form display), POST (form submit) Form Title: "Add scan parameters" Field: network (required) Form URL: https://your-n8n-domain.com/webhook-test/form/target Users can: Open form URL in browser Enter target network/IP Click submit Receive confirmation Trigger 3: Manual Execution (Testing) Locate "Manual Trigger" Node: Click to activate Opens workflow with "Pre-Set-Target" node Default target: scanme.nmap.org (Nmap's official test server) To change default target: Open "Pre-Set-Target" node Edit network value Enter your test target Save changes Trigger 4: Scheduled Scans (Automated) Locate "Schedule Trigger" Node: Default: Daily at 1:00 AM Uses "Pre-Set-Target" for network To change schedule: Open node settings Modify trigger time: Hour: 1 (1 AM) Minute: 0 Day of week: All days (or select specific days) Save changes Schedule Examples: Every day at 3 AM: Hour: 3, Minute: 0 Weekly on Monday at 2 AM: Hour: 2, Day: Monday Twice daily (8 AM, 8 PM): Create two Schedule Trigger nodes Step 7: Test the Workflow Recommended Test Target Use Nmap's official test server for initial 
testing: Target**: scanme.nmap.org Purpose**: Official Nmap testing server Safe**: Designed for scanning practice Permissions**: Public permission to scan Important: Never scan targets without permission. Unauthorized scanning is illegal. Manual Test Execution Open workflow in n8n editor Click Manual Trigger node to select it Click Execute Workflow button Workflow will start with scanme.nmap.org as target Monitor Execution Watch nodes turn green as they complete: Need to Add Helper?: Checks if nmap-helper installed Add NMAP-HELPER: Installs helper (if needed, ~2-3 minutes) Optional Params Setter: Sets scan parameters 2. Execute Nmap Scan: Runs scan (5-30 minutes depending on target) 3. Parse NMAP JSON to Services: Extracts services (~1 second) 5. CVE Enrichment Loop: Queries NVD API (1 second per service) 8-10. Report Generation: Creates HTML/PDF reports (~5-10 seconds) 12. Convert to PDF: Generates password-protected PDF (~10 seconds) 14a/14b. Distribution: Sends reports Check Outputs Click nodes to view outputs: Parse NMAP JSON**: View discovered services CVE Enrichment**: See vulnerabilities found Prepare Report Structure**: Check statistics Read Report PDF**: Download report to verify Verify Distribution Telegram: Open Telegram chat with your bot Check for PDF document Download and open with password Email: Check inbox for report email Verify subject line includes target network Download PDF attachment Open with password How to Use Understanding the Scan Process Initiating Scans Method 1: Webhook API Use curl or any HTTP client and add "network" parameter in a POST request. Response: Process started! Scan runs asynchronously. You'll receive results via configured channels (Telegram/Email). 
Method 2: Web Form Open form URL in browser: https://your-n8n.com/webhook-test/form/target Fill in form: network: Enter target (IP, range, domain) Click Submit Receive confirmation Wait for report delivery Advantages: No command line needed User-friendly interface Input validation Good for non-technical users Method 3: Manual Execution For testing or one-off scans: Open workflow in n8n Edit "Pre-Set-Target" node: Change network value to your target Click Manual Trigger node Click Execute Workflow Monitor progress in real-time Advantages: See execution in real-time Debug issues immediately Test configuration changes View intermediate outputs Method 4: Scheduled Scans For regular, automated security audits: Configure "Schedule Trigger" node with desired time Configure "Pre-Set-Target" node with default target Activate workflow Scans run automatically on schedule Advantages: Automated security monitoring Regular compliance scans No manual intervention needed Consistent scheduling Scan Targets Explained Supported Target Formats Single IP Address: 192.168.1.100 10.0.0.50 CIDR Notation (Subnet): 192.168.1.0/24 # Scans 192.168.1.0-255 (254 hosts) 10.0.0.0/16 # Scans 10.0.0.0-255.255 (65534 hosts) 172.16.0.0/12 # Scans entire 172.16-31.x.x range IP Range: 192.168.1.1-50 # Scans 192.168.1.1 to 192.168.1.50 10.0.0.1-10.0.0.100 # Scans across range Multiple Targets: 192.168.1.1,192.168.1.2,192.168.1.3 Hostname/Domain: scanme.nmap.org example.com server.local Choosing Appropriate Targets Development/Testing: Use scanme.nmap.org (official test target) Use your own isolated lab network Never scan public internet without permission Internal Networks: Use CIDR notation for entire subnets Scan DMZ networks separately from internal Consider network segmentation in scan design Understanding Report Contents Report Structure The generated report includes: 1. 
Executive Summary: Total hosts discovered Total services identified Total vulnerabilities found Severity breakdown (Critical, High, Medium, Low, Info) Scan date and time Target network 2. Overall Statistics: Visual dashboard with key metrics Severity distribution chart Quick risk assessment 3. Detailed Findings by Host: For each discovered host: IP address Hostname (if resolved) List of open ports and services Service details: Port number and protocol Service name (e.g., http, ssh, mysql) Product (e.g., Apache, OpenSSH, MySQL) Version (e.g., 2.4.41, 8.2p1, 5.7.33) CPE identifier 4. Vulnerability Details: For each vulnerable service: CVE ID**: Unique vulnerability identifier (e.g., CVE-2021-44228) Severity**: CRITICAL / HIGH / MEDIUM / LOW / INFO CVSS Score**: Numerical score (0.0-10.0) Published Date**: When vulnerability was disclosed Description**: Detailed vulnerability explanation References**: Links to advisories, patches, exploits 5. Recommendations: Immediate actions (patch critical/high severity) Long-term improvements (security processes) Best practices Vulnerability Severity Levels CRITICAL (CVSS 9.0-10.0): Color: Red Characteristics: Remote code execution, full system compromise Action: Immediate patching required Examples: Log4Shell, EternalBlue, Heartbleed HIGH (CVSS 7.0-8.9): Color: Orange Characteristics: Significant security impact, data exposure Action: Patch within days Examples: SQL injection, privilege escalation, authentication bypass MEDIUM (CVSS 4.0-6.9): Color: Yellow Characteristics: Moderate security impact Action: Patch within weeks Examples: Information disclosure, denial of service, XSS LOW (CVSS 0.1-3.9): Color: Green Characteristics: Minor security impact Action: Patch during regular maintenance Examples: Path disclosure, weak ciphers, verbose error messages INFO (CVSS 0.0): Color: Blue Characteristics: No vulnerability found or informational Action: No action required, awareness only Examples: Service version detected, no known CVEs 
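The severity buckets above map to CVSS v3 scores with a simple threshold function; a sketch (the function name is illustrative, not from the workflow):

```javascript
// CVSS v3 score -> severity bucket, matching the thresholds listed above:
// CRITICAL 9.0-10.0, HIGH 7.0-8.9, MEDIUM 4.0-6.9, LOW 0.1-3.9, INFO 0.0.
function severityFromCvss(score) {
  if (score >= 9.0) return "CRITICAL";
  if (score >= 7.0) return "HIGH";
  if (score >= 4.0) return "MEDIUM";
  if (score > 0.0) return "LOW";
  return "INFO";
}
```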
Understanding CPE CPE (Common Platform Enumeration): Standard naming scheme for IT products Used for CVE lookup in NVD database Workflow CPE Handling: Nmap detects service and version Nmap provides CPE (if in database) Workflow uses CPE to query NVD API NVD returns CVEs associated with that CPE Special case: nginx vendor fixed from igor_sysoev to nginx Working with Reports Accessing HTML Report Location: /tmp/vulnerability_report_<timestamp>.html Viewing: Open in web browser directly from n8n Click "11. Read Report for Output" node Download HTML file Open locally in any browser Advantages: Interactive (clickable links) Searchable text Easy to edit/customize Smaller file size Accessing PDF Report Location: /tmp/vulnerability_report_<timestamp>.pdf Password: Default: Almafa123456 (configured in "1. Set Parameters") Change in workflow before production use Required to open PDF Opening PDF: Receive PDF via Telegram or Email Open with PDF reader (Adobe, Foxit, Browser) Enter password when prompted View, print, or share Advantages: Professional appearance Print-optimized formatting Password protection Portable (works anywhere) Preserves formatting Report Customization Change Report Title: Open "8. Prepare Report Structure" node Find metadata object Edit title and subtitle fields Customize Styling: Open "9. Generate HTML Report" node Modify CSS in <style> section Change colors, fonts, layout Add Company Logo: Edit HTML generation code Add an `<img>` tag in the header section Include base64-encoded logo or URL Modify Recommendations: Open "9. 
Generate HTML Report" node Find Recommendations section Edit recommendation text Scanning Ethics and Legality Authorization is Mandatory: Never scan networks without explicit written permission Unauthorized scanning is illegal in most jurisdictions Can result in criminal charges and civil liability Scope Definition: Document approved scan scope Exclude out-of-scope systems Maintain scan authorization documents Notification: Inform network administrators before scans Provide scan window and source IPs Have emergency contact procedures Safe Targets for Testing: scanme.nmap.org: Official Nmap test server Your own isolated lab network Cloud instances you own Explicitly authorized environments Compliance Considerations PCI DSS: Quarterly internal vulnerability scans required Scan all system components Re-scan after significant changes Document scan results HIPAA: Regular vulnerability assessments required Risk analysis and management Document remediation efforts ISO 27001: Vulnerability management process Regular technical vulnerability scans Document procedures NIST Cybersecurity Framework: Identify vulnerabilities (DE.CM-8) Maintain inventory Implement vulnerability management License and Credits Workflow: Created for n8n workflow automation Free for personal and commercial use Modify and distribute as needed No warranty provided Dependencies: Nmap**: GPL v2 - https://nmap.org nmap-helper**: Open source - https://github.com/net-shaper/nmap-helper Prince XML**: Commercial license required for production use - https://www.princexml.com NVD API**: Public API by NIST - https://nvd.nist.gov Third-Party Services: Telegram Bot API: https://core.telegram.org/bots/api SMTP: Standard email protocol Support For Nmap issues: Documentation: https://nmap.org/book/ Community: https://seclists.org/nmap-dev/ For NVD API issues: Status page: https://nvd.nist.gov Contact: https://nvd.nist.gov/general/contact For Prince XML issues: Documentation: https://www.princexml.com/doc/ Support: 
https://www.princexml.com/doc/help/ Workflow Metadata External Dependencies**: Nmap, nmap-helper, Prince XML, NVD API License**: Open for modification and commercial use Security Disclaimer This workflow is provided for legitimate security testing and vulnerability assessment purposes only. Users are solely responsible for ensuring they have proper authorization before scanning any network or system. Unauthorized network scanning is illegal and unethical. The authors assume no liability for misuse of this workflow or any damages resulting from its use. Always obtain written permission before conducting security assessments.