by WeblineIndia
# Snyk Vulnerability Automation Workflow with Webhook, Jira, Slack & Airtable

This workflow receives vulnerability data from Snyk (the same pattern works for Dependabot or any security scanner) through a webhook, standardizes and validates the payload, checks Jira for duplicates using a unique vulnerability key, and either updates an existing Jira issue or creates a new one. It also sends real-time alerts to Slack and stores every new vulnerability in Airtable for reporting and auditing. The workflow ensures fast triage, prevents duplicate Jira tickets and centralizes all data for easy tracking.

## Quick Start – Implementation Steps

1. Add the n8n Webhook URL to Snyk.
2. Configure Jira, Slack and Airtable credentials in n8n.
3. Adjust severity rules or Jira fields if required.
4. Activate the workflow — vulnerability triage becomes fully automated.

## What It Does

This workflow automates how your team processes vulnerabilities reported by Snyk. When a new vulnerability arrives, the system first normalizes the payload into a clean, consistent format. It then validates required fields such as the vulnerability ID, CVSS score, title and URL. If anything is missing, the workflow instantly sends a Slack alert so the team can review.

If the payload is valid, the workflow assigns a severity level and generates a unique “vulnerability key.” This key is used to search Jira for existing issues. If a match is found, the workflow updates the existing Jira ticket and notifies the team. If no match exists, the workflow creates a brand-new Jira issue, sends a Slack alert and also writes the data into Airtable for centralized tracking and analytics.

This ensures accurate documentation, avoids duplicates and gives teams visibility through both Jira and Airtable.
## Who’s It For

This workflow is ideal for:

- DevOps and platform engineering teams
- Security engineers
- QA and development teams
- Companies using Snyk for vulnerability scanning
- Teams needing automated Jira creation and Airtable reporting

## Requirements to Use This Workflow

To fully use this workflow, you need:

- An n8n instance (cloud or self-hosted)
- A Snyk webhook configured to send vulnerability notifications
- A Jira Software Cloud account
- A Slack workspace with bot permissions
- An Airtable base and personal access token
- Basic understanding of JSON fields

## How It Works

1. **Receive Vulnerability** – Snyk posts data to an n8n webhook.
2. **Normalize Payload** – Converts inconsistent Snyk formats into a standard structure.
3. **Validate Required Fields** – Missing fields trigger a Slack alert.
4. **Assign Severity** – CVSS score is mapped to Low/Medium/High/Critical.
5. **Generate Vulnerability Key** – Used for deduplication (e.g., vuln-SNYK-12345).
6. **Check Jira for Matches** – Searches by label to detect duplicates.
7. **Duplicate Handling** – Updates the existing Jira issue and sends a Slack notification.
8. **Create New Issue** – If no duplicate exists, creates a new Jira ticket.
9. **Store in Airtable** – Adds a new vulnerability row for reporting and history.
10. **Slack Alerts** – Notifies the team of new or updated vulnerabilities.

## Setup Steps

1. Import the workflow JSON file into n8n.
2. Configure credentials: Jira, Slack, Airtable.
3. Add the generated webhook URL inside your Snyk project settings.
4. Update Jira project ID, issue type, or description fields as needed.
5. Map Airtable fields (Title, CVSS, Severity, URL, Key, etc.).
6. Update Slack channel IDs.
7. Activate the workflow.
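For reference, the severity assignment and vulnerability-key generation described above could be expressed as a small n8n Code node like this. This is a hedged sketch: the field names `vulnId` and `cvssScore` are assumptions for illustration, not the template's exact schema, and the CVSS thresholds follow the common CVSS v3 bands.

```javascript
// Sketch of the Assign Severity + Generate Vulnerability Key steps.
// Field names (vulnId, cvssScore) are illustrative assumptions.
function toSeverity(cvss) {
  if (cvss >= 9.0) return 'Critical';
  if (cvss >= 7.0) return 'High';
  if (cvss >= 4.0) return 'Medium';
  return 'Low';
}

function enrich(item) {
  const { vulnId, cvssScore } = item;
  return {
    ...item,
    severity: toSeverity(Number(cvssScore)),
    vulnerabilityKey: `vuln-${vulnId}`, // e.g. "vuln-SNYK-12345"
  };
}

// In an n8n Code node this would typically be:
// return items.map(i => ({ json: enrich(i.json) }));
```

Keeping the key deterministic (derived only from the scanner's vulnerability ID) is what makes the later Jira duplicate search reliable.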
## How To Customize Nodes

### Customize Severity Rules

Modify the node that maps CVSS score ranges:

- Change thresholds
- Add custom severity levels
- Map severity to Jira priority

### Customize Jira Fields

Inside the Create or Update Jira Issue nodes, you can modify:

- Project ID
- Issue type
- Labels
- Description template
- Assigned user

### Customize Slack Messages

Adjust Slack text blocks to:

- Change formatting
- Add emojis or styling
- Mention specific users or teams
- Send different messages based on severity

### Customize Airtable Storage

Update the Airtable node to:

- Add new columns
- Save timestamps
- Link vulnerabilities to other Airtable tables
- Store more metadata for reporting

## Add-Ons (Optional Enhancements)

You can extend this workflow with:

- Auto-closing Jira tickets when Snyk marks vulnerabilities as “fixed”.
- Severity-based Slack routing (e.g., Critical → #security-alerts).
- Email notifications for high-risk vulnerabilities.
- Google Sheets or Notion logging for long-term tracking.
- A weekly summary report generated using OpenAI.
- Mapping vulnerabilities to microservices or repositories.
- Automated dashboards using Airtable Interfaces.

## Use Case Examples

- **Automatic Vulnerability Triage** – Instantly logs new Snyk findings into Jira.
- **Duplicate Prevention** – Ensures every vulnerability is tracked only once.
- **Slack Alerts** – Real-time notifications for new or updated issues.
- **Airtable Reporting** – Creates a central, filterable database for analysis.
- **Security Team Automation** – Reduces manual reviews and saves time.
## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|---|---|---|
| Slack alert not sent | Wrong API credentials or channel ID | Re-check Slack configuration |
| Jira issue not created | Incorrect project ID / issue type | Update Jira node details |
| Duplicate detection not working | Vulnerability key or label mismatch | Confirm key generation and JQL settings |
| Airtable row not added | Wrong base or field mapping | Reconfigure Airtable node |
| Webhook not triggered | Snyk not pointing to correct URL | Re-add the n8n webhook in Snyk |
| Severity not correct | CVSS parsing error | Check normalization and mapping node |

## Need Help?

If you need help setting up this workflow, customizing the logic, integrating new nodes or adding advanced reporting, feel free to reach out to our n8n automation development team at WeblineIndia. We can help automate advanced security processes, build dashboards, integrate additional tools or expand the workflow as per your business needs.
by Adnan Azhar
## Template Overview

This n8n workflow provides an intelligent, timezone-aware AI voice calling system for e-commerce businesses to automatically confirm customer orders via phone calls. The system uses VAPI (Voice AI Platform) to make natural, conversational calls while respecting customer time zones and business hours.

## 🎯 Use Case

Perfect for e-commerce businesses that want to:

- Automatically confirm high-value or important orders via phone
- Reduce order cancellations and disputes
- Provide personalized customer service at scale
- Maintain human-like interactions while automating the process
- Respect customer time zones and calling hours

## ✨ Key Features

### Timezone Intelligence

- Automatically detects the customer timezone from the shipping address or phone number
- Only calls during appropriate business hours (10 AM – 3 PM local time, weekdays)
- Schedules calls for appropriate times when outside calling hours
- Uses timezone-aware greetings (Good morning/afternoon/evening)

### AI-Powered Conversations

- Natural, context-aware conversations using VAPI
- Personalized greetings with customer names and local time awareness
- Intelligent confirmation detection from call transcripts
- Handles customer concerns and change requests gracefully

### Smart Call Management

- Automatic retry logic with attempt tracking
- Call quality assessment and cost tracking
- Detailed transcript analysis and sentiment detection
- Follow-up alerts for calls requiring human intervention

### Comprehensive Tracking

- Complete call history and analytics in Airtable
- Real-time status updates throughout the process
- Detailed reporting on confirmation rates and call quality
- Cost tracking and ROI analysis

## 🏗️ Workflow Architecture

### Main Flow (Order Confirmation)

1. **Order Webhook** – Receives order data from the e-commerce platform
2. **Data Validation** – Validates required fields (phone, status)
3. **Timezone Detection** – Determines customer timezone and calling eligibility
4. **Call Routing** – Either initiates an immediate call or schedules one for later
5. **VAPI Integration** – Makes the actual AI voice call
6. **Status Tracking** – Updates the database with call results

### Scheduled Flow (Retry System)

- Runs every 15 minutes to check for scheduled calls
- Respects retry limits and calling hours
- Automatically processes queued confirmations

### Webhook Handler (Results Processing)

- Receives VAPI call completion webhooks
- Analyzes call transcripts for confirmation status
- Sends follow-up alerts or confirmation emails
- Updates the final order status

## 🔧 Prerequisites & Setup

### Required Services

- **VAPI Account** – For AI voice calling functionality
- **Airtable Base** – For order tracking and analytics
- **SMTP Server** – For email notifications
- **n8n Instance** – Self-hosted or cloud
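The calling-eligibility check in the Timezone Detection step could look roughly like the sketch below. This is an assumption-laden illustration, not the template's actual node code: it presumes the customer's IANA timezone string has already been resolved upstream, and it encodes the 10 AM – 3 PM weekday window described above.

```javascript
// Sketch: is it currently OK to call this customer?
// `timezone` is an IANA zone name, e.g. "America/New_York" (assumed resolved upstream).
function isWithinCallingHours(timezone, now = new Date()) {
  const parts = new Intl.DateTimeFormat('en-US', {
    timeZone: timezone,
    hour: 'numeric',
    hour12: false,
    weekday: 'short',
  }).formatToParts(now);

  const hour = Number(parts.find((p) => p.type === 'hour').value);
  const weekday = parts.find((p) => p.type === 'weekday').value;
  const isWeekday = !['Sat', 'Sun'].includes(weekday);

  // 10 AM inclusive to 3 PM exclusive, local time, weekdays only
  return isWeekday && hour >= 10 && hour < 15;
}
```

If this returns false, the Call Routing branch would schedule the call instead of dialing immediately.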
by Cheng Siong Chin
## How It Works

Automates fraud risk detection for financial transactions by analyzing real-time webhook events through AI-powered scoring. Target audience: fintech companies, payment processors, and banking teams preventing fraud losses. Problem solved: manual fraud checks are reactive and slow; automated detection catches suspicious transactions instantly.

The workflow receives transactions via webhook, configures processing parameters, runs OpenAI GPT-4 fraud analysis, calculates risk scores, branches on risk level, holds high-risk transactions, alerts fraud teams, logs incidents, and documents evidence for compliance investigations.

## Setup Steps

1. Configure the webhook endpoint for transaction ingestion.
2. Set the OpenAI API key and fraud detection prompts.
3. Connect Google Sheets for incident logging.
4. Enable email alerts to the fraud team distribution list.
5. Map risk thresholds (high/low).

## Prerequisites

OpenAI API key, webhook-capable transaction source, Gmail for alerts, Google Sheets access, incident tracking database.

## Use Cases

Payment processors detecting card fraud; fintech platforms catching account takeovers.

## Customization

Adjust risk thresholds and scoring logic. Add phone/SMS alerts for urgency.

## Benefits

Detects fraud within seconds, reduces financial losses by up to 90%.
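The "branches on risk level" step amounts to a threshold comparison on the AI-produced score. A minimal sketch, assuming the analysis returns a numeric `riskScore` in [0, 1] (the field name and the 0.8 cutoff are illustrative, not the template's exact values):

```javascript
// Sketch of the risk-branching step: hold-and-alert vs. approve.
// HIGH_RISK_THRESHOLD is an illustrative value; tune it in the workflow.
const HIGH_RISK_THRESHOLD = 0.8;

function routeTransaction(tx) {
  if (tx.riskScore >= HIGH_RISK_THRESHOLD) {
    return { ...tx, action: 'hold', alertFraudTeam: true };
  }
  return { ...tx, action: 'approve', alertFraudTeam: false };
}
```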
by Abdullah Alshiekh
## What Problem Does It Solve?

SEO professionals and marketers spend hours manually searching keywords to analyze competitor content. Copying and pasting SERP results into spreadsheets is tedious and prone to formatting errors. Analyzing *why* a page ranks requires significant mental effort and time for every single keyword.

This workflow solves these problems by:

- Automatically fetching live Google search results for a list of keywords.
- Using AI to instantly analyze the top-ranking pages for intent, strengths, and weaknesses.
- Delivering a consolidated, strategic SEO report directly to your email inbox.

## How to Configure It

1. **API Setup:**
   - Connect your Decodo credentials (for scraping Google results).
   - Connect your Google Gemini credentials (for the AI analysis).
   - Connect your Gmail account (to send the final report).
2. **Keyword Input:** Open the "Edit Fields" node and replace the placeholder items (keyword_1, etc.) with the actual search terms you want to track.
3. **Email Recipient:** Update the "Send a message" node with your email address.

## How It Works

1. The workflow triggers manually (or can be scheduled).
2. It loops through your defined list of keywords one by one.
3. Decodo performs a real-time Google search for each term and extracts organic results.
4. A JavaScript node cleans the data, removing ads and irrelevant snippets.
5. The AI Agent acts as an expert SEO analyst, processing the top results to generate a concise audit.
6. Finally, the workflow compiles all insights into a single email report and sends it to you.

## Customization Ideas

- **Change the output:** Save the analysis to a Google Sheet or Notion database instead of email.
- **Adjust the AI persona:** Modify the system prompt to focus on specific metrics (e.g., content gaps or backlink opportunities).
- **Automate the input:** Connect a Google Sheet to dynamically pull new keywords every week.
- **Schedule it:** Replace the Manual Trigger with a Cron node to run this report automatically every Monday morning.

## If you need any help

Get in Touch
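The cleanup step performed by the JavaScript node could be sketched as below. The result shape (`title`, `url`, `snippet`, `isAd`) is an assumption about the scraper's output, not Decodo's documented schema; the point is simply to drop ads and trim to the top organic results before the AI sees them.

```javascript
// Sketch of the SERP-cleaning step: drop ads/incomplete rows, keep top N,
// and strip everything except the fields the AI prompt needs.
function cleanSerpResults(results, limit = 5) {
  return results
    .filter((r) => !r.isAd && r.url && r.snippet)
    .slice(0, limit)
    .map(({ title, url, snippet }) => ({ title, url, snippet }));
}
```

Trimming to a small, consistent payload also keeps the per-keyword AI analysis cheap and focused.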
by oka hironobu
# TimeRex AI-Powered Booking Automation

Transform your TimeRex booking management with AI-powered automation. This workflow automatically processes bookings, enriches data with AI insights, and keeps your team informed via Slack, all in real time.

## What This Workflow Does

### 🤖 AI-Powered Intelligence

- **Smart Company Detection**: Automatically identifies company names from guest email domains
- **Booking Categorization**: Uses Google Gemini to classify bookings (Sales/Support/Interview/Partnership/Media)
- **Meeting Brief Generation**: AI creates actionable preparation notes for hosts before each meeting

### ⚡ Automated Processing

- Receives webhooks from TimeRex for confirmed and cancelled bookings
- Validates requests with security token verification
- Logs enriched booking data to Google Sheets
- Sends detailed Slack notifications with AI-generated insights

### 🛡️ Security & Reliability

- Token-based webhook authentication
- Security alerts for unauthorized access attempts
- Automatic cancellation handling with data cleanup

## Use Cases

- **Sales Teams**: Automatically categorize leads and prepare meeting briefs
- **Recruitment**: Streamline interview scheduling with AI-powered candidate insights
- **Customer Success**: Track support meetings and prepare context for calls
- **Media Relations**: Manage press interviews with automated briefings

## How It Works

1. TimeRex sends a webhook when a booking is confirmed or cancelled.
2. The security token is verified (failed attempts trigger Slack alerts).
3. For confirmed bookings:
   - The media source is detected from the calendar name
   - The company name is extracted from the email domain
   - AI categorizes the booking purpose
   - AI generates a meeting preparation brief
   - Enriched data is saved to Google Sheets
   - A Slack notification is sent with AI insights
4. For cancellations:
   - The booking is found by Event ID
   - The row is deleted from Google Sheets
   - A cancellation alert is sent to Slack

## Setup Instructions

1. **Webhook Configuration**: Copy the webhook URL from the "TimeRex Webhook" node and paste it in TimeRex Settings → Webhook.
2. **Security Token**: Copy your TimeRex security token and update the Verify Security Token node with it.
3. **Google Sheets**: Create a spreadsheet with these columns: event_id, booking_date, guest_name, guest_email, calendar_name, meeting_url, host_name, media_source, company_name, booking_category, ai_meeting_brief, created_at. Update all Google Sheets nodes with your Sheet ID.
4. **AI Credentials**: Connect your Google Gemini API credentials to both AI model nodes.
5. **Slack**: Connect your Slack account and select your notification channel in all Slack nodes.
6. **Activate**: Turn on the workflow and start receiving AI-enhanced booking notifications!

## Requirements

- TimeRex account with webhook access
- Google Cloud account (for Sheets & Gemini API)
- Slack workspace
- n8n instance (self-hosted or cloud)

## Customization Tips

- Modify the Filter by Calendar Type node to match your calendar naming convention
- Adjust AI prompts in the LLM Chain nodes for different categorization or brief styles
- Add more media sources to the Media Master sheet for accurate source tracking
- Extend the workflow with email confirmations or calendar event creation

## Short Description (100 characters max)

Automate TimeRex bookings with AI-powered categorization, meeting briefs, and Slack notifications.

## Categories

Sales, Productivity, AI, Scheduling

## Tags

TimeRex, Booking, AI, Google Gemini, Slack, Google Sheets, Automation, Meeting Management, LLM, Scheduling
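The Smart Company Detection step (deriving a company name from the guest's email domain) could be sketched as follows. The free-provider list and the capitalize-first-label heuristic are illustrative assumptions, not the template's exact logic:

```javascript
// Sketch: derive a company name from a guest email domain.
// FREE_PROVIDERS is an illustrative, incomplete list.
const FREE_PROVIDERS = ['gmail.com', 'yahoo.com', 'outlook.com', 'hotmail.com', 'icloud.com'];

function detectCompany(email) {
  const domain = email.split('@')[1]?.toLowerCase() ?? '';
  if (!domain || FREE_PROVIDERS.includes(domain)) {
    return { domain, companyName: null }; // personal address: nothing to detect
  }
  // "acme.com" -> "Acme" (first label, capitalized)
  const label = domain.split('.')[0];
  return { domain, companyName: label.charAt(0).toUpperCase() + label.slice(1) };
}
```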
by Yaron Been
## Overview

Track companies adopting tools that complement yours and send AI-drafted co-marketing outreach emails to new adopters.

This workflow reads a list of complementary tools (with their PredictLeads technology IDs) from Google Sheets, discovers companies that recently adopted each tool via the PredictLeads Technology Detections API, compares against previously scanned domains to find new adopters, enriches each new company, and uses OpenAI to draft a personalized co-marketing partnership email.

## How it works

1. A schedule trigger runs the workflow daily at 8 AM.
2. Reads complementary tool names and PredictLeads tech IDs from Google Sheets.
3. Loops through each tool and discovers recent technology adopters via PredictLeads.
4. Reads previously scanned domains from a separate Google Sheets tab.
5. Compares current detections against previous scans to identify new adopters only.
6. Limits processing to 2 new companies per tool per run (adjustable).
7. Enriches each new adopter with PredictLeads company data.
8. Builds a structured prompt and sends it to OpenAI to draft a co-marketing email.
9. Sends the email via Gmail.
10. Logs the domain, tool name, and timestamp to the Previous Scan sheet to prevent duplicates.

## Setup

1. Create a Google Sheet with two tabs:
   - "Complementary Tools" with columns: tool_name, tech_id (PredictLeads technology ID).
   - "Previous Scan" with columns: domain, tool_name, detected_at, email_sent.
2. Connect your Gmail account (OAuth2) for sending outreach emails.
3. Add your OpenAI API key in the Draft Co-Marketing Email HTTP Request node.
4. Add your PredictLeads API credentials (X-Api-Key and X-Api-Token headers).

## Requirements

- Google Sheets OAuth2 credentials.
- Gmail OAuth2 credentials.
- OpenAI API account (uses gpt-4o-mini, ~$0.003–0.008 per call).
- PredictLeads API account (https://docs.predictleads.com).

## Notes

- The Limit node caps outreach at 2 companies per tool per run; adjust as needed.
- Technology IDs for the complementary tools can be found via the PredictLeads API.
- The Previous Scan tab prevents the same company from being contacted twice.
- PredictLeads Technology Detections and Company API docs: https://docs.predictleads.com
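The new-adopter comparison (step 5 above) is essentially a set-difference keyed on tool plus domain, matching the Previous Scan tab's `tool_name`/`domain` columns. A minimal sketch, assuming detections carry the same two fields:

```javascript
// Sketch: keep only detections not already logged in the Previous Scan tab.
// Keyed on tool_name + domain so the same company can still be contacted
// about a *different* complementary tool.
function findNewAdopters(detections, previousScan) {
  const seen = new Set(previousScan.map((row) => `${row.tool_name}:${row.domain}`));
  return detections.filter((d) => !seen.has(`${d.tool_name}:${d.domain}`));
}
```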
by Rajeet Nair
## Overview

This workflow automates bulk email campaigns with built-in validation, deliverability protection, and smart send-time optimization. It processes CSV leads, validates emails, enriches data, and schedules campaigns intelligently. Emails are sent using controlled inbox rotation, while engagement tracking and analytics continuously improve performance.

## How It Works

1. **Campaign Input** – Receives campaign data and CSV leads via webhook.
2. **Lead Processing** – Extracts CSV data, splits leads, and validates email format.
3. **Domain & Quality Checks** – Verifies domains using MX records and filters invalid leads.
4. **Lead Enrichment** – Adds timezone, domain type, and engagement score for better targeting.
5. **Lead Storage** – Stores valid leads and separates invalid ones for tracking.
6. **Campaign Execution** – A scheduler fetches active campaigns and selects top leads.
7. **Send Optimization** – Calculates the best send time per lead based on timezone and historical performance, while selecting inboxes within sending limits.
8. **Email Delivery** – Waits until the optimal time and sends emails using the selected inbox.
9. **Tracking & Logging** – Logs sent emails and updates inbox usage statistics.
10. **Event Tracking** – Captures opens, clicks, replies, and bounces via webhook.
11. **Performance Analytics** – Updates campaign stats and analyzes engagement trends.
12. **Continuous Optimization** – Updates send-time rules to improve future campaign performance.
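Steps 2–3 (format validation plus MX verification) could be sketched like this, using Google's public DNS-over-HTTPS JSON API, which is the kind of MX lookup service named in the requirements below. The regex is a deliberately loose format check, not a full RFC 5322 validator:

```javascript
// Sketch of the validation steps: cheap format check first, then an MX
// lookup via https://dns.google/resolve?name=<domain>&type=MX
function isValidFormat(email) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

async function hasMxRecord(domain) {
  const res = await fetch(`https://dns.google/resolve?name=${domain}&type=MX`);
  const data = await res.json();
  // A domain with no Answer array (or an empty one) cannot receive mail.
  return Array.isArray(data.Answer) && data.Answer.length > 0;
}
```

Running the regex first avoids spending a DNS round trip on leads that are obviously malformed.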
## Setup Instructions

1. Connect the webhook for campaign and CSV upload.
2. Configure send limits, delays, and the MX API.
3. Set up Postgres with the required tables.
4. Connect Gmail or SMTP for sending.
5. Configure the event webhook for tracking.
6. Enable the campaign and analytics schedulers.
7. Test with a sample campaign before activating.

## Use Cases

- Running cold email campaigns at scale
- Improving email deliverability and sender reputation
- Automating lead validation and enrichment
- Optimizing send times based on engagement data
- Managing multi-inbox outbound systems

## Requirements

- n8n instance with webhook support
- Postgres database
- Gmail or SMTP email account(s)
- MX record lookup API (e.g., Google DNS)
- Email tracking system or webhook integration

## Notes

- Inbox rotation and throttling help prevent spam and protect reputation.
- Engagement-based lead scoring improves campaign performance.
- Send-time optimization is continuously refined using real data.
- You can extend this workflow with personalization or AI-generated emails.
by Databox
Your paid ads and website analytics live in separate tools. This workflow bridges both via Databox MCP, runs three specialized AI agents in sequence, and emails a daily intelligence report with a correlation layer that surfaces insights neither dataset could show alone.

## Who's it for

- **Performance marketers** who want to understand how ads influence website quality
- **Growth teams** looking for daily cross-channel signals without building custom dashboards
- **Marketing managers** who need one morning briefing covering paid spend and website behavior

## How it works

1. A Schedule Trigger fires every day at 8 AM.
2. Agent 1 fetches website performance from Databox: sessions, bounce rate, goal completions, conversion rate.
3. Agent 2 fetches paid channel data from Databox: spend, CPC, CTR, ROAS per platform.
4. Agent 3 synthesizes both outputs: it ranks channel efficiency, estimates cost per quality visit, and writes 3 actionable recommendations.
5. A styled HTML email report is delivered to your inbox.

## Requirements

- **Databox account** with website analytics and at least one paid ads platform connected (free plan works)
- OpenAI API key (or Anthropic)
- Gmail account

## How to set up

1. Click each Databox MCP Tool node, set Authentication to OAuth2, and authorize.
2. Add your OpenAI API key to each of the three Chat Model nodes.
3. Connect Gmail and set the recipient address in the Send Email node.
4. Activate; your first report arrives tomorrow at 8 AM.
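One way to picture the correlation metric Agent 3 estimates is "cost per quality visit", where a quality visit is taken to be a non-bounced session. This definition and the input shape are assumptions for illustration only; the agent derives its estimate from the Databox data it is handed:

```javascript
// Sketch of a cost-per-quality-visit estimate.
// Assumes: spend in currency units, sessions as a count, bounceRate in [0, 1].
function costPerQualityVisit(channel) {
  const qualityVisits = channel.sessions * (1 - channel.bounceRate);
  return qualityVisits > 0 ? channel.spend / qualityVisits : Infinity;
}
```

Ranking channels by this number is what lets the report say things neither the ads platform nor the analytics tool could say alone.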
by Cheng Siong Chin
## How It Works

This workflow automates end-to-end carbon emissions monitoring, strategy optimisation, and ESG reporting using a multi-agent AI supervisor architecture in n8n. Designed for sustainability managers, ESG teams, and operations leads, it eliminates the manual effort of tracking emissions, evaluating reduction strategies, and producing compliance reports.

Data enters via scheduled pulls and real-time webhooks, then merges into a unified feed processed by a Carbon Supervisor Agent. Sub-agents handle monitoring, optimisation, policy enforcement, and ESG reporting. Approved strategies are auto-executed or routed for human sign-off. Outputs are consolidated and pushed to Slack, Google Sheets, and email, keeping all stakeholders informed. The workflow closes the loop from raw sensor data to actionable ESG dashboards with minimal human intervention.

## Setup Steps

1. Connect the scheduled trigger and webhook nodes to your emissions data sources.
2. Add credentials for Slack (bot token), Gmail (OAuth2), and Google Sheets (service account).
3. Configure the Carbon Supervisor Agent with your preferred LLM (OpenAI or compatible).
4. Set approval thresholds in the Check Approval Required node.
5. Map the Google Sheets document ID for the ESG report and KPI dashboard nodes.

## Prerequisites

- OpenAI or compatible LLM API key
- Slack bot token
- Gmail OAuth2 credentials
- Google Sheets service account

## Use Cases

Corporate sustainability teams automating monthly ESG reporting.

## Customisation

Swap LLM models per agent for cost or accuracy trade-offs.

## Benefits

Eliminates manual emissions data aggregation and report generation.
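The Check Approval Required node mentioned in the setup steps boils down to comparing a proposed strategy against a configured threshold. A minimal sketch, where the threshold value and field name are placeholders for whatever you configure in the node:

```javascript
// Sketch of the approval gate: large-impact strategies go to a human,
// smaller ones are auto-executed. Threshold is a placeholder.
const AUTO_APPROVE_LIMIT_TONNES = 50;

function needsHumanApproval(strategy) {
  return strategy.projectedReductionTonnes > AUTO_APPROVE_LIMIT_TONNES;
}
```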
by Anwar Bouilouta
If your team shares an inbox and someone has to manually read every email to decide where it goes, this workflow takes that off your plate. Emails come in, AI reads them, and the right Slack channel gets notified with a summary and priority level.

## Who is this for

Teams that receive a mix of sales inquiries, support requests, and billing questions through a shared inbox. Instead of someone triaging emails by hand every morning, the AI handles it in real time.

## How it works

The workflow polls Gmail for new emails every minute. Each email gets sent to OpenAI (gpt-4o-mini by default) with a prompt that classifies it into one of four categories: sales, support, billing, or spam. The AI also writes a short summary and assigns a priority level (low, medium, or high).

A Switch node then routes the message to the correct Slack channel. Spam gets quietly dropped so your team never sees it. Every email that comes through gets logged to a Google Sheet with its timestamp, sender, subject, category, priority, and AI summary, so you always have a record of what came in and how it was handled.

## How to set it up

1. Connect your Gmail, OpenAI, Slack, and Google Sheets credentials in n8n.
2. Create a Google Sheet with these columns: Timestamp, From, Subject, Category, Priority, Summary.
3. Open the "Configure Settings" node and fill in your Slack channel IDs for sales, support, and billing, plus your Google Sheet ID.
4. Activate the workflow and send yourself a test email.

## Requirements

- Gmail account with OAuth connected in n8n
- OpenAI API key (gpt-4o-mini costs roughly $0.15 per 1M input tokens, so classification is very cheap)
- Slack workspace with OAuth connected in n8n
- Google Sheets for logging

## Customizing the workflow

Want more categories? Edit the system prompt in the "Classify Email Intent" node to add new categories (e.g., "partnerships", "recruitment"), then add matching outputs to the Switch node and wire them to new Slack channels.

You can also change the polling interval on the Gmail trigger if every minute is too frequent. And if you want more accurate classification, swap gpt-4o-mini for gpt-4o in the OpenAI node, though it'll cost a bit more per email.
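The Switch-node routing described above can be pictured as a plain category-to-channel map. The channel IDs below are placeholders for the ones you fill in via the "Configure Settings" node, and the classification object shape is an assumption about what the AI step returns:

```javascript
// Sketch of the routing step: category -> Slack channel, spam -> dropped.
// Channel IDs are placeholders for your configured values.
const CHANNELS = {
  sales: 'C_SALES_ID',
  support: 'C_SUPPORT_ID',
  billing: 'C_BILLING_ID',
};

function routeEmail(classification) {
  if (classification.category === 'spam') {
    return null; // quietly dropped, never reaches Slack
  }
  return {
    channel: CHANNELS[classification.category],
    text: `*${classification.priority.toUpperCase()}* | ${classification.summary}`,
  };
}
```

Adding a new category is then just a new `CHANNELS` entry plus the matching Switch output.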
by Avkash Kakdiya
## How it works

This workflow starts whenever a new lead comes in through Typeform (form submission) or Calendly (meeting booking). It captures the lead’s information, standardizes it into a clean format, and checks the email domain. If it’s a business domain, the workflow uses AI to enrich the lead with company details such as industry, headquarters, size, and website. Finally, it merges all the data and automatically saves the enriched contact in HubSpot CRM.

## Step-by-step

1. **Capture Leads** – The workflow listens for new form responses in Typeform or new invitees in Calendly. Both sources are merged into a single stream of leads.
2. **Standardize Data** – All incoming data is cleaned and formatted into a consistent structure: Name, Email, Phone, Message, and Domain.
3. **Filter Domains** – Checks the email domain. If it’s a free/public domain (like Gmail or Yahoo), the lead is ignored. If it’s a business domain, the workflow continues.
4. **AI Company Enrichment** – Sends the domain to an AI Agent (OpenAI GPT-4o-mini), which returns structured company details: company name, industry, headquarters (city & country), employee count, website, LinkedIn profile, and a short company description.
5. **Merge Lead & AI Data** – Combines the original lead details with the AI-enriched company information and adds metadata like timestamp and workflow ID.
6. **Save to HubSpot CRM** – Creates or updates a contact record in HubSpot, mapping enriched fields like company name, LinkedIn, website, and description.

## Why use this?

- Automatically enriches every qualified lead with valuable company intelligence.
- Filters out unqualified leads with personal email addresses.
- Keeps your CRM updated without manual research.
- Saves time by centralizing lead capture, enrichment, and CRM sync in one flow.
- Helps sales teams focus on warm, high-value prospects instead of raw, unverified leads.
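The Standardize Data and Filter Domains steps could be sketched as one small normalization function. The raw field names are assumptions (Typeform and Calendly payloads differ, which is exactly why the workflow standardizes them), and the public-domain list is illustrative:

```javascript
// Sketch: normalize a raw lead into {name, email, phone, message, domain}
// and flag whether the domain looks like a business domain.
const PUBLIC_DOMAINS = ['gmail.com', 'yahoo.com', 'hotmail.com', 'outlook.com'];

function standardizeLead(raw) {
  const email = (raw.email || '').trim().toLowerCase();
  const domain = email.split('@')[1] || '';
  return {
    name: (raw.name || '').trim(),
    email,
    phone: (raw.phone || '').trim(),
    message: (raw.message || '').trim(),
    domain,
    isBusinessDomain: Boolean(domain) && !PUBLIC_DOMAINS.includes(domain),
  };
}
```

Leads with `isBusinessDomain: false` are the ones the Filter Domains step drops before any AI enrichment cost is incurred.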