by Davide
This workflow automatically processes Fireflies.ai meeting recap emails, extracts the meeting transcript, generates a structured summary email, and sends it to a designated recipient.

## Key Advantages

**1. ✅ Full Automation of Meeting Summaries**

The workflow eliminates all manual steps from receiving the Fireflies email to sending a polished summary. This ensures:
- No delays
- No forgotten recaps
- No repetitive manual tasks

**2. ✅ Accurate Extraction of Meeting Information**

Using AI-based information extraction and custom parsing, the workflow reliably identifies:
- The correct meeting link
- The Fireflies meeting ID
- Relevant transcript data

This avoids human error and ensures consistency.

**3. ✅ High-Quality, AI-Generated Email Summaries**

The Gemini-powered summary generator:
- Produces well-structured, readable emails
- Includes decisions, action items, and discussion points
- Automatically crafts a professional subject line
- Uses real content (no placeholders)

This results in clear, usable communication for recipients.

**4. ✅ Robust, Error-Free Data Handling**

The workflow integrates custom JavaScript steps to:
- Parse URLs safely
- Convert AI responses into valid JSON
- Ensure correct formatting before email delivery

This guarantees the message is always properly structured.

**5. ✅ Professional Formatting**

By converting Markdown to HTML, the summary:
- Is visually clear
- Displays well on all email clients
- Enhances readability for recipients

**6. ✅ Easily Scalable and Adaptable**

The workflow can be expanded to:
- Send summaries to multiple recipients
- Add storage (e.g., Google Drive)
- Trigger based on additional conditions
- Integrate with CRMs or project management tools

## How It Works

**Trigger**: The workflow starts with a Gmail Trigger that checks every hour for new emails from fred@fireflies.ai with the subject "Your meeting recap".
**Email Processing**: When a matching email is found, the workflow retrieves the full email content and extracts the meeting recap URL using an Information Extractor node powered by OpenAI GPT-4.1-mini.

**Meeting ID Extraction**: A Code node extracts the meeting ID from the Fireflies URL (between `::` and `?`) for use in the next step.

**Transcript Fetching**: The meeting ID is sent to the Fireflies node, which retrieves the full transcript and summary data (short summary, short overview, and full overview).

**AI-Powered Email Generation**: The meeting summary data is passed to a Google Gemini node, which generates a complete meeting summary email with a subject line and body in JSON format.

**Data Formatting**: The raw JSON output is parsed in a Code node, and the email body is converted from Markdown to HTML using the Markdown node.

**Email Delivery**: Finally, the email is sent via Gmail with the AI-generated subject and HTML body.

## Set Up Steps

1. **Configure Credentials**: Set up Gmail OAuth2 credentials for email triggering and sending. Add Fireflies.ai API credentials for fetching transcripts. Configure OpenAI and Google Gemini API keys for AI processing.
2. **Adjust Email Filters**: Update the Gmail Trigger filters (subject and sender) if Fireflies.ai uses a different sender or subject format.
3. **Customize Output Email**: Change the recipient email in the Send email node to the desired address.
4. **Optional: Modify AI Prompts**: Adjust the system prompts in the Information Extractor and Email Agent nodes to change extraction behavior or email tone.
5. **Activate Workflow**: Ensure the workflow is set to Active in n8n, and test it by sending a sample Fireflies recap email to your connected Gmail account.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
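The meeting-ID extraction step (the Code node that pulls the ID from between `::` and `?`) can be sketched in plain JavaScript. This is a minimal illustration, not the template's exact code; the sample URL and the `meetingUrl` field name are assumptions.

```javascript
// Extract the Fireflies meeting ID, which sits between "::" and "?"
// in the recap URL, e.g. https://app.fireflies.ai/view/weekly-sync::abc123?utm=email
function extractMeetingId(url) {
  const start = url.indexOf("::");
  if (start === -1) return null;            // no "::" marker found
  const rest = url.slice(start + 2);
  const qs = rest.indexOf("?");
  return qs === -1 ? rest : rest.slice(0, qs);
}

// Inside the n8n Code node this would map over the node's `items`;
// here a sample item stands in for the node input.
const items = [
  { json: { meetingUrl: "https://app.fireflies.ai/view/weekly-sync::abc123?utm=email" } },
];
const out = items.map((item) => ({
  json: { ...item.json, meetingId: extractMeetingId(item.json.meetingUrl) },
}));
```

The downstream Fireflies node can then reference `{{ $json.meetingId }}` (the field name is illustrative).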
by Amit Kumar
## Who This Workflow Is For

This workflow is ideal for YouTube creators, automation builders, and marketers who want to produce short AI-generated videos automatically. It's especially useful for channels that publish frequent Shorts-style content or want to automate the entire video creation and posting process without manual scripting, editing, or uploading.

## What This Workflow Does

This automation creates short AI videos by combining Gemini-generated scripts with KIE AI's text-to-video rendering. It generates a title, description, and video prompt, sends the prompt to KIE AI to create the video, and then automatically uploads the finished result to your YouTube channel using Blotato. Each run generates a new video concept selected from a predefined set of templates, providing ongoing variety and fresh content. The workflow handles idea generation, video rendering, polling, media upload, and publishing from start to finish.

## How It Works

1. **Schedule Trigger** starts the workflow based on your chosen frequency.
2. **Randomizer** selects one creative template from several predefined options.
3. **Gemini Prompter** generates a title, description, and structured video prompt.
4. **KIE AI** renders the video using the Sora-style text-to-video model.
5. **Polling + Wait** retrieves the completed video once rendering finishes.
6. **Blotato** uploads and publishes the final video to your connected YouTube channel.

## How to Set Up

1. Add your Google Gemini, KIE AI, and Blotato API credentials.
2. Connect your YouTube channel inside Blotato.
3. Adjust the schedule (e.g., every 6–12 hours).
4. Edit or expand prompt templates inside the Prompter node.
5. Activate the workflow to allow fully automated video generation and publishing.

## Customization Ideas

- Add logging to Google Sheets or Notion.
- Add Telegram, email, or Discord notifications when a new video is posted.
- Change video length, aspect ratio, or watermark settings in the Create video node.
- Expand your creative template list to increase content variety.
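The Randomizer step can be sketched as a small Code node that picks one template per run. The template strings below are placeholders, not the ones shipped with the workflow.

```javascript
// Illustrative template pool; replace with your own creative templates.
const templates = [
  "A cinematic drone shot of {topic} at golden hour",
  "A stop-motion style clip explaining {topic} in 20 seconds",
  "A futuristic neon animation about {topic}",
];

// Pick one template uniformly at random each run.
function pickTemplate(list) {
  return list[Math.floor(Math.random() * list.length)];
}

const chosen = pickTemplate(templates);
```

Because each run draws a fresh template, scheduled executions naturally rotate through your creative styles; expanding the `templates` array directly increases content variety.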
by Dinakar Selvakumar
## 📌 Workflow Overview

This workflow enables multi-platform social media posting using Google Sheets as the control center. Whenever a new row is added to the sheet, the workflow automatically posts the content to Instagram, Facebook, and/or LinkedIn based on platform flags, then updates the post status to prevent duplicates.

**Supported Platforms**
- Instagram (Business)
- Facebook Pages
- LinkedIn Pages

## 🧠 Key Concept

Google Sheets acts as a lightweight CMS and automation trigger. Each row represents one post, and simple TRUE/FALSE columns decide where that post should be published.

## 📄 Required Google Sheets Columns

The content sheet must include the following columns:

- **Content** – Text to publish
- **Instagram** – TRUE / FALSE
- **Facebook** – TRUE / FALSE
- **LinkedIn** – TRUE / FALSE
- **Status** – Updated after posting
- **Row Number** – Used for precise updates

## ⚙️ How This Workflow Works

### 1️⃣ Trigger: New Content Added

The workflow starts when a new row is added to Google Sheets. This allows near real-time publishing without manual execution.

### 2️⃣ Configuration Setup

Platform-specific values like:
- Instagram Business Account ID
- Facebook Page ID

are defined once in a configuration node for easy reuse and maintenance.

### 3️⃣ Platform Routing Logic

IF nodes check each platform column:
- Instagram = TRUE → post to Instagram
- Facebook = TRUE → post to Facebook
- LinkedIn = TRUE → post to LinkedIn

One row can trigger posting to multiple platforms.

### 4️⃣ Platform Posting

Posts are published using:
- Facebook Graph API (Instagram + Facebook)
- LinkedIn API (LinkedIn Pages)

The Content column is used directly as the post body.

### 5️⃣ Status Update (Per Platform)

After posting, the workflow updates the same row using Row Number and marks the post as completed for that platform. This prevents duplicate or accidental re-posts.
## 🔄 Current Capabilities

- Multi-platform posting from one sheet
- Platform-specific routing logic
- Real-time execution on new content
- Safe status updates using row matching

## 🚀 Designed for Easy Expansion

This workflow is intentionally modular and can be extended with:

- Scheduled posting (date/time columns)
- Image & media handling
- AI-generated captions
- Hashtag optimization
- Engagement analytics
- Retry & error-handling logic

## ✅ Best Practices

- Use TRUE / FALSE consistently in platform columns
- Keep Google Sheets as the single source of truth
- Add validation or approval columns if used by teams

## 📦 Ideal Use Cases

- Social media managers
- Marketing teams
- Founders & creators
- Agencies handling multiple platforms

This workflow provides a scalable foundation for social media automation while remaining simple, transparent, and easy to maintain.
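The platform routing logic above (one IF check per TRUE/FALSE column) can be sketched as a single function. This is a simplified stand-in for the separate IF nodes; the column names match the sheet layout described in this template.

```javascript
// Given one sheet row, return the list of platforms flagged TRUE.
// Accepts "TRUE"/"true"/boolean true defensively, since sheet cells
// are often strings.
function targetPlatforms(row) {
  const isTrue = (v) => String(v).trim().toUpperCase() === "TRUE";
  return ["Instagram", "Facebook", "LinkedIn"].filter((p) => isTrue(row[p]));
}

const row = {
  Content: "Launch day!",
  Instagram: "TRUE",
  Facebook: "FALSE",
  LinkedIn: "TRUE",
};
const targets = targetPlatforms(row); // ["Instagram", "LinkedIn"]
```

In the actual workflow each platform has its own IF node so the posting branches can run in parallel, but the boolean test is the same.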
by Rahul Joshi
## Description

Boost your LinkedIn influence with AI-curated daily content ideas! This n8n automation fetches trending professional topics from LinkedIn, analyzes them with Azure OpenAI (GPT-4o-mini), and delivers a ready-to-use, Outlook-compatible email report with:

- Engagement scoring
- AI-generated hashtags
- Concise content suggestions

Perfect for influencers, marketers, and thought leaders, this template ensures you never run out of fresh, relevant post ideas tailored to boost reach and engagement.

## Step-by-Step Workflow

📅 **Manual or Scheduled Trigger** – Run on demand or set it to execute daily for fresh content ideas.

🤖 **AI Topic Extraction (Basic LLM Chain)** – Pulls 3–5 trending LinkedIn topics with short professional descriptions and ensures relevance for a business/corporate audience.

🧠 **AI Processing & Optimization (Code Node)** – Generates high-impact hashtags based on each topic and description, calculates an Engagement Potential Score (0–100%) for prioritization, and creates short, copy-ready content suggestions.

📊 **HTML Report Generation (Outlook-Compatible)** – Professionally styled with topic ranking, engagement percentage, hashtags, and ready-to-post snippets.

📧 **Automated Email Delivery (Gmail Node)** – Sends the formatted daily report directly to your inbox, optimized for Outlook, Gmail, and mobile viewing.

## Perfect For

- **LinkedIn Influencers** – Daily inspiration for posts that trend.
- **Marketing Teams** – Streamlined trend analysis and content ideation.
- **Brand Managers** – Stay ahead with data-driven post suggestions.
- **Thought Leaders** – Maintain a consistent posting cadence with minimal effort.

## Built With

- **Azure OpenAI GPT-4o-mini** – AI topic generation & optimization.
- **n8n Code Node** – Hashtag generation, scoring & formatting.
- **Gmail API** – Automated report delivery.
- **HTML Email Template** – Fully mobile- and Outlook-compatible.

## Key Benefits

- ✅ Saves hours of manual trend research.
- 📈 Maximizes reach with AI-optimized hashtags.
- 🧠 Prioritizes high-engagement topics for better ROI.
- 🛠 Fully no-code & customizable to match your niche.
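The Code-node post-processing step can be sketched as below. This is an illustrative version, assuming the template's own scoring heuristic and hashtag rules may differ; the word-length filter and the 0–100 clamp are assumptions.

```javascript
// Derive hashtags from a topic string: keep words longer than 3
// characters, cap at 5 tags, and strip non-alphanumeric characters.
function makeHashtags(topic) {
  return topic
    .split(/\s+/)
    .filter((w) => w.length > 3)
    .slice(0, 5)
    .map((w) => "#" + w.replace(/[^a-zA-Z0-9]/g, ""));
}

// Clamp a raw model-provided score into the 0-100 range the report uses.
function engagementScore(raw) {
  return Math.max(0, Math.min(100, Math.round(raw)));
}

const tags = makeHashtags("Remote Work Productivity Trends");
// ["#Remote", "#Work", "#Productivity", "#Trends"]
```

Keeping scoring and hashtag generation in one Code node means the HTML report step downstream only has to render fields, not compute them.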
by Robert Breen
This n8n workflow automates bulk AI video generation using Freepik's Image-to-Video API powered by Minimax Hailuo-02-768p. It reads video prompts from a Google Sheet, generates multiple variations of each video using Freepik's AI, handles asynchronous video processing with intelligent polling, and automatically uploads completed videos to Google Drive with organized file names. This is perfect for content creators, marketers, or video producers who need to generate multiple AI videos in bulk and store them systematically.

## Key Features

- Bulk video generation from Google Sheets prompts
- Multiple variations per prompt (configurable duplicates)
- Asynchronous processing with smart status polling
- Automatic retry mechanism for processing delays
- Direct upload to Google Drive with organized naming
- Freepik Minimax Hailuo-02 AI-powered video generation (768p quality)
- Intelligent wait/retry system for video rendering

## Step-by-Step Implementation Guide

### Prerequisites

Before setting up this workflow, you'll need:

- An n8n instance (cloud or self-hosted)
- A Freepik API account with Video Generation access
- A Google account with access to Sheets and Drive
- A Google Sheet with your video prompts

### Step 1: Set Up Freepik API Credentials

1. Go to the Freepik API Developer Portal.
2. Create an account or sign in.
3. Navigate to your API dashboard.
4. Generate an API key with Video Generation permissions.
5. Copy the API key and save it securely.
6. In n8n, go to Credentials → Add Credential → HTTP Header Auth and configure as follows:
   - Name: "Header Auth account"
   - Header Name: `x-freepik-api-key`
   - Header Value: your Freepik API key

### Step 2: Set Up Google Credentials

**Google Sheets access:**

1. Go to the Google Cloud Console.
2. Create a new project or select an existing one.
3. Enable the Google Sheets API.
4. Create OAuth2 credentials.
5. In n8n, go to Credentials → Add Credential → Google Sheets OAuth2 API.
6. Enter your OAuth2 credentials and authorize with the `spreadsheets.readonly` scope.

**Google Drive access:**

1. In the Google Cloud Console, enable the Google Drive API.
2. In n8n, go to Credentials → Add Credential → Google Drive OAuth2 API.
3. Enter your OAuth2 credentials and authorize.

### Step 3: Create Your Google Sheet

1. Create a new Google Sheet at sheets.google.com.
2. Set up your sheet with these columns:
   - Column A: Prompt (your video generation prompts)
   - Column B: Name (identifier for file naming)
3. Copy the Sheet ID from the URL (the long string between `/d/` and `/edit`).

Example data:

| Prompt | Name |
|--------|------|
| A butterfly landing on a flower in slow motion | butterfly-01 |
| Ocean waves crashing on rocky coastline | ocean-waves |
| Time-lapse of clouds moving across blue sky | clouds-timelapse |

### Step 4: Set Up Google Drive Folder

1. Create a folder in Google Drive for your generated videos.
2. Copy the Folder ID from the URL when viewing the folder.

Note: the workflow is configured to use a folder called "n8n workflows".

### Step 5: Import and Configure the Workflow

1. Copy the provided workflow JSON.
2. In n8n, click Import from File or Import from Clipboard.
3. Paste the workflow JSON.
4. Configure each node as detailed below.

**Node configuration details:**

**Get prompt from google sheet (Google Sheets)**
- **Document ID**: your Google Sheet ID (from Step 3)
- **Sheet Name**: Sheet1 (or your sheet name)
- **Operation**: Read
- **Credentials**: select your "Google Sheets account"

**Duplicate Rows2 (Code node)**
- **Purpose**: creates multiple variations of each prompt
- **JavaScript code**:

```js
const original = items[0].json;
return [
  { json: { ...original, run: 1 } },
  { json: { ...original, run: 2 } },
];
```

- **Customization**: add more runs for additional variations

**Loop Over Items (Split in Batches)**
- Processes items in batches to manage API rate limits
- **Options**: keep default settings
- **Reset**: false

**Create Video (HTTP Request)**
- **Method**: POST
- **URL**: `https://api.freepik.com/v1/ai/image-to-video/minimax-hailuo-02-768p`
- **Authentication**: Generic → HTTP Header Auth
- **Credentials**: select your "Header Auth account"
- **Send Body**: true
- **Body Parameters**: Name: `prompt`, Value: `={{ $json.Prompt }}`

**Get Video URL (HTTP Request)**
- **Method**: GET
- **URL**: `https://api.freepik.com/v1/ai/image-to-video/minimax-hailuo-02-768p/{{ $json.data.task_id }}`
- **Authentication**: Generic → HTTP Header Auth
- **Credentials**: select your "Header Auth account"
- **Timeout**: 120000 (2 minutes)
- **Purpose**: polls the API for video completion status

**Switch (Switch node)**
- **Purpose**: routes the workflow based on video generation status
- **Conditions**:
  - Completed: `{{ $json.data.status }}` equals COMPLETED
  - Failed: `{{ $json.data.status }}` equals FAILED
  - Created: `{{ $json.data.status }}` equals CREATED
  - In Progress: `{{ $json.data.status }}` equals IN_PROGRESS

**Wait (Wait node)**
- **Amount**: 30 seconds
- **Purpose**: waits before re-checking video status
- **Webhook ID**: auto-generated for resume functionality

**Download Video as Base64 (HTTP Request)**
- **Method**: GET
- **URL**: `={{ $json.data.generated[0] }}`
- **Purpose**: downloads the completed video file

**Upload to Google Drive1 (Google Drive)**
- **Operation**: Upload
- **Name**: `=video - {{ $('Get prompt from google sheet').item.json.Name }} - {{ $('Duplicate Rows2').item.json.run }}`
- **Drive ID**: My Drive
- **Folder ID**: your Google Drive folder ID (from Step 4)
- **Credentials**: select your "Google Drive account"

### Step 6: Customize for Your Use Case

- **Modify duplicate count**: edit the "Duplicate Rows2" code to create more variations.
- **Update file naming**: change the naming pattern in the Google Drive upload node.
- **Adjust wait time**: modify the Wait node duration based on typical processing times.
- **Add video parameters**: enhance the Create Video request with additional Freepik parameters.

### Step 7: Test the Workflow

1. Ensure your Google Sheet has test data.
2. Click Execute Workflow on the manual trigger (if present).
3. Monitor the execution flow; note that video generation takes time.
4. Watch the Switch node handle different status responses.
5. Verify videos are uploaded to Google Drive when completed.

### Step 8: Production Deployment

- Set up error handling for API failures and timeouts.
- Configure appropriate batch sizes based on your Freepik API limits.
- Add logging for successful uploads and failed generations.
- Consider webhook triggers for automated execution.
- Set up monitoring for stuck or failed video generations.

## Freepik Video API Details

**Video generation process:**

1. **Submit request**: send a prompt to generate a video.
2. **Get task ID**: receive a `task_id` for tracking.
3. **Poll status**: check generation status periodically.
4. **Download**: retrieve the completed video URL.

**Status types:**

- `CREATED`: video generation task created
- `IN_PROGRESS`: video is being generated
- `COMPLETED`: video ready for download
- `FAILED`: generation failed

**Model specifications:**

- **Model**: minimax-hailuo-02-768p
- **Resolution**: 768p
- **Duration**: typically 5–10 seconds
- **Format**: MP4

**Example enhanced parameters:**

```json
{
  "prompt": "{{ $json.Prompt }}",
  "duration": 5,
  "aspect_ratio": "16:9",
  "fps": 24
}
```

## Workflow Flow Summary

1. **Start** → read prompts from Google Sheets
2. **Duplicate** → create multiple runs for variations
3. **Loop** → process items in batches
4. **Generate** → submit the video generation request to Freepik
5. **Poll** → check video generation status
6. **Switch** → route based on status:
   - Completed → download video
   - Processing/Created → wait and retry
   - Failed → handle error
7. **Download** → retrieve the completed video file
8. **Upload** → save to Google Drive with organized naming
9. **Continue** → process the next batch

## Troubleshooting Tips

**Common issues:**

- **Long processing times**: video generation can take 2–5 minutes per video.
- **Timeout errors**: increase the timeout in the "Get Video URL" node.
- **Rate limits**: reduce batch size and add longer waits between requests.
- **Failed generations**: check prompt complexity and API limits.
- **Upload failures**: verify Google Drive folder permissions.

**Error handling:**

- Add Try/Catch nodes around API calls.
- Implement exponential backoff for retries.
- Log failed generations to Google Sheets.
- Set up email notifications for critical failures.

**Performance optimization:**

- Adjust wait times based on typical generation duration.
- Use smaller batch sizes for more reliable processing.
- Monitor API usage and costs in the Freepik dashboard.

## Cost Considerations

**Freepik API:**

- Video generation typically costs more than image generation.
- Check your plan's video generation limits.
- Monitor usage through the Freepik dashboard.
- Consider upgrading for higher volume needs.

**Processing time:**

- Each video can take 2–5 minutes to generate.
- Plan workflow execution time accordingly.
- Consider running during off-peak hours for large batches.

## Contact Information

Robert A Ynteractive

For support, customization, or questions about this workflow:

- 📧 Email: rbreen@ynteractive.com
- 🌐 Website: https://ynteractive.com/
- 💼 LinkedIn: https://www.linkedin.com/in/robert-breen-29429625/

Need help implementing this workflow or want custom automation solutions? Get in touch for professional n8n consulting and workflow development services.
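The Switch-node routing in the flow summary can be sketched as a single mapping function. The status strings are the Freepik API values described in this template; the action labels are illustrative names for the workflow branches.

```javascript
// Map a Freepik task status to the next workflow action.
function nextAction(status) {
  switch (status) {
    case "COMPLETED":
      return "download";          // video ready: fetch and upload it
    case "FAILED":
      return "handle-error";      // generation failed: log / notify
    case "CREATED":
    case "IN_PROGRESS":
      return "wait-and-retry";    // still rendering: wait 30s, poll again
    default:
      return "wait-and-retry";    // unknown status: retry conservatively
  }
}
```

Treating an unknown status as "wait and retry" rather than an error keeps the loop resilient if the API ever adds intermediate states, though you may prefer to fail fast instead.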
by Meelioo
## How it Works

This is a Telegram AI-to-Human Handover System that seamlessly transitions customer support conversations between an AI agent and human operators:

1. **AI-First Response**: When users message the Telegram bot, an AI agent handles the conversation initially, using memory to maintain context across messages.
2. **Smart Handover Detection**: The AI recognizes when users request human assistance and triggers a two-step confirmation process (user approval, then operator availability check).
3. **Topic-Based Routing**: Once confirmed, the system creates a dedicated Telegram Forum topic named after the user's ID, where operators can respond. Messages are automatically forwarded between the user's private chat and the operator's topic.
4. **Session Management**: A data table tracks conversation states ('ai', 'human', 'open', 'closed'), ensuring messages route correctly and maintaining conversation history.
5. **Clean Closure**: Operators type "exit" in the topic to close the conversation, updating the database and closing the forum topic.

## Set-up Steps

Estimated time: 30–45 minutes (first-time setup). You'll need to:

1. Create and configure a Telegram bot via BotFather.
2. Set up a Telegram group with Topics enabled and add your bot as admin.
3. Configure SMTP credentials (a Gmail app password is recommended).
4. Create an n8n Data Table with specific columns (type, status, topic, user).
5. Add your bot token to multiple HTTP Request nodes.
6. Set up AI model credentials (OpenRouter or Azure OpenAI).
7. Fill in the Configuration node with your IDs and email addresses.
8. Test the flow using the included Personal Trigger to capture your group/user IDs.

Note: The template includes detailed video guides (a 1-minute overview and a 10-minute setup walkthrough) plus extensive documentation in sticky notes covering every node and credential setup.
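The session-routing decision can be sketched as below. This is an assumption-laden illustration: it supposes the data table's `type` column holds 'ai' or 'human' and the `status` column holds 'open' or 'closed', which matches the states listed above but may not be exactly how the template encodes them.

```javascript
// Decide where an incoming user message should go, given the session
// record from the data table (or null when the user has no record yet).
function routeMessage(session) {
  if (!session) return "ai";                    // new user: AI handles it
  if (session.status === "closed") return "ai"; // conversation was closed
  return session.type === "human" ? "operator-topic" : "ai";
}
```

With this shape, the "exit" command an operator types simply flips `status` to 'closed', and the very next user message falls back to the AI branch.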
by Calistus Christian
## Summary

Turns the latest CVEs from NVD into a clean, sortable email digest (table + plaintext) and sends it via Gmail. The flow pulls the newest CVEs, extracts vendor / product / version, severity, and CVSS, highlights public exploit references, drafts an HTML table, then asks OpenAI to tighten the copy before emailing it. Optionally, you can swap the Gmail node for Signal, Slack, Microsoft Teams, etc.

Perfect for SecOps leads who want a low-noise digest of what changed recently, grouped and ranked by severity.

## What This Workflow Does

1. Triggers on a schedule (every 30 minutes by default).
2. Calls the NVD 2.0 API to fetch recent CVEs.
3. Parses each CVE to extract:
   - Vendor / product / version(s) (from CPE 2.3 where available, with a text fallback)
   - Severity + CVSS (v3.1/v3.0/v2 fallback) and vector string
   - Exploit signal (tags/links like Exploit-DB, GitHub PoCs, etc.)
   - A short English summary and a direct NVD link
4. Builds an HTML email (and a plaintext fallback) ranked by severity, then score.
5. Uses OpenAI to polish the subject line and copy into a concise, professional digest (JSON-only contract).
6. Sends the digest with the Gmail node.

## Prerequisites

- **NVD API key** (free): create one at https://nvd.nist.gov/developers/request-an-api-key
- **OpenAI API key** with access to gpt-4o-mini (or change the model)
- **Email sending**: a Gmail node with OAuth2 (recommended), or swap in the generic Email Send (SMTP) node if you prefer

## Quick Start

1. Import the workflow JSON below.
2. Open HTTP Request → Headers and confirm `apiKey` uses `{{$env.NVD_API_KEY}}`.
3. Open Send a message (Gmail) and set To to `{{$env.RECIPIENT_EMAIL}}` (or your address).
4. Open OpenAI Email Crafter and connect your OpenAI credential (or change the model if needed).
5. Hit Execute to test, then Activate when happy.

## Credits

Created by ca7ai (n8n Creator).

Tags: security, cve, cisa, nvd, email, monitoring, openai, gmail, automation
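The v3.1 → v3.0 → v2 CVSS fallback described above can be sketched as a small helper. The object shape mirrors the NVD 2.0 API's `metrics` field, simplified for illustration; the exact parsing in the workflow may differ.

```javascript
// Pick the best available CVSS metric for one CVE, preferring v3.1,
// then v3.0, then v2. NVD nests the score data under `cvssData`; for
// v2 the qualitative severity lives on the metric object itself.
function pickCvss(metrics) {
  const m =
    (metrics.cvssMetricV31 && metrics.cvssMetricV31[0]) ||
    (metrics.cvssMetricV30 && metrics.cvssMetricV30[0]) ||
    (metrics.cvssMetricV2 && metrics.cvssMetricV2[0]);
  if (!m) return { severity: "UNKNOWN", score: null, vector: null };
  const d = m.cvssData;
  return {
    severity: d.baseSeverity || m.baseSeverity || "UNKNOWN",
    score: d.baseScore,
    vector: d.vectorString,
  };
}
```

Ranking the digest then becomes a sort on `(severity, score)` over the parsed CVE list, with "UNKNOWN" entries sinking to the bottom.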
by Satoshi
## Overview

This workflow builds an AI meeting assistant that sends information-dense pre-meeting notifications for a user's upcoming meetings.

## How It Works

1. A scheduled trigger fires hourly and checks for upcoming meetings within the hour.
2. When a meeting is found, a search for last correspondence and recent activity is performed for each attendee.
3. Using the available correspondence, an AI/LLM summarizes this information and generates a short notification message that helps the user prepare for the meeting.
4. The notification is sent to the user's Slack.

## Set up Steps

**Google Cloud**
- Create the credentials and replace them in the workflow.
- Enable the following APIs: Gmail API, Google Calendar API.

**OpenAI**
- Create the credentials as instructed.
- Replace your credentials and connect.

**Slack**
- Create the credentials as instructed.
- Replace your credentials and connect.
by AppStoneLab Technologies LLP
## 🤖 AI Support Bot for WooCommerce with Gemini & GPT (Telegram & Gmail)

Managing customer support across multiple platforms like email and chat can be a huge time sink. Answering the same questions about order status repeatedly takes your focus away from growing your business. This workflow solves that problem by deploying a 24/7 conversational AI agent to act as the first line of support for your WooCommerce store.

This AI-powered bot can handle customer inquiries from both Telegram and Gmail, understand conversational follow-ups, and use a tool to fetch live order data directly from your WooCommerce store. It's designed to be reliable, with a primary/fallback AI model setup, and robust enough to prevent the common pitfalls of email automation, such as infinite reply loops.

## How It Works ⚙️

The workflow operates in a clear, logical sequence:

📢 **Multi-Channel Ingestion**: The workflow starts when it receives a message from one of two sources:
- Telegram: an instant webhook trigger fires for every new message.
- Gmail: a polling trigger checks your inbox every minute for new, unread emails.

💎 **Data Normalization**: All incoming requests are merged and processed by a Set node. This crucial step transforms the platform-specific data into a universal format that the rest of the workflow can understand (e.g., platform, sender_id, query_text).

🧠 **AI Processing**: The standardized query is sent to a LangChain Agent. This agent is the "brain" of the operation. It uses Conversational Memory to understand the context of the conversation (such as when a user provides an order ID in a follow-up message).

🛠️ **Tool Usage**: Based on its prompt, the AI Agent determines whether it has enough information to use its one available tool: Get an order in WooCommerce. If a valid Order ID is present, it calls the tool to fetch live order details.

📮 **Response & Routing**: The agent formulates a natural-language response. A Switch node then inspects the platform field and routes the response to the correct channel.
✅ **Cleanup**: For the Gmail path, two final actions occur in parallel: the reply is sent, and the original incoming email is marked as 'Read'. This is a critical step to prevent the workflow from re-triggering on the same email in an infinite loop.

## Nodes Used 🔗

This workflow uses a combination of standard nodes and AI nodes to achieve its goal:

- **Telegram Trigger**: receives messages from Telegram in real time.
- **Gmail Trigger**: polls for new unread emails.
- **Merge**: combines inputs from multiple triggers.
- **Set**: normalizes data into a consistent format.
- **LangChain Agent**: the core AI "brain" that orchestrates the logic, memory, and tools.
- **Google Gemini & OpenAI**: used as the primary and fallback language models for the agent.
- **WooCommerce Tool**: the tool the AI agent uses to fetch order data.
- **Switch**: routes the final reply to the correct platform.
- **Telegram**: sends the final response to Telegram.
- **Gmail**: replies to emails and marks them as read.

## Prerequisites 🔑

To use this workflow, you will need:

- An active n8n instance (self-hosted or cloud).
- A Telegram Bot account and its API token.
- A Gmail account with OAuth2 credentials configured in n8n.
- A WooCommerce store with API credentials (Consumer Key and Secret).
- An OpenAI API key.
- A Google AI (Gemini) API key.

## Usage 🚀

Follow these steps to set up the workflow:

1. **Download the workflow**: Import the workflow JSON file into your n8n instance.
2. **Configure credentials**:
   - Telegram: select your Telegram API credentials in the Fetch user query and Send Telegram Response nodes.
   - Gmail: select your Gmail OAuth2 credentials in the Fetch support mail, Send Response via Mail, and Mark received mail as read nodes.
   - WooCommerce: select your WooCommerce API credentials in the Get an order in WooCommerce node.
   - AI models: select your OpenAI and Google AI credentials in the Fallback Model and Primary Model nodes, respectively.
3. **Activate the Telegram webhook**: Open the Fetch user query (Telegram Trigger) node, copy the Webhook URL, and register it with your Telegram bot using the /setWebhook command in the BotFather chat.
4. **Customize the AI prompt (optional)**: Open the WooCommerce Customer support Agent1 node. You can edit the prompt in the Text field to change the AI's personality, rules, or language.
5. **Activate the workflow**: Save the workflow and toggle the "Active" switch ON.

Your multi-channel AI support agent is now live! Send a message to your Telegram bot or a new, unread email to your connected Gmail account to test it out.

## Resources 📚

- n8n Documentation
- n8n Community Forum
- LangChain in n8n
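The data-normalization step (the Set node that produces `platform`, `sender_id`, `query_text`) can be sketched as below. The incoming field names are representative of Telegram and Gmail trigger payloads, simplified for illustration; the real payloads carry many more fields.

```javascript
// Map a platform-specific payload into the universal shape used by the
// rest of the workflow.
function normalize(source, payload) {
  if (source === "telegram") {
    return {
      platform: "telegram",
      sender_id: String(payload.message.chat.id),
      query_text: payload.message.text,
    };
  }
  // Gmail path: combine subject and body so the agent sees full context.
  return {
    platform: "gmail",
    sender_id: payload.from,
    query_text: payload.subject + "\n" + payload.textPlain,
  };
}
```

Because both channels converge on the same three fields, the LangChain Agent, its memory key (`sender_id`), and the final Switch node (`platform`) never need channel-specific logic.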
by Marth
## How It Works: The 5-Node Security Flow

This workflow performs an efficient, scheduled data breach scan.

### 1. Scheduled Check (Cron Node)

This is the workflow's trigger. It schedules the workflow to run at a specific, regular interval.

- **Function:** Continuously runs on a set schedule, for example, every Monday morning.
- **Process:** The **Cron** node automatically initiates the workflow, ensuring routine data breach scans are performed without manual intervention.

### 2. List Emails to Check (Code Node)

This node acts as your static database, defining which email addresses to monitor for breaches.

- **Function:** Stores a list of email addresses from your team or customers in a single, easy-to-update array.
- **Process:** It configures the list of emails that are then processed by the subsequent nodes. This makes it simple to add or remove addresses as needed.

### 3. Query HIBP API (HTTP Request Node)

This node connects to the HaveIBeenPwned (HIBP) API to check for breaches.

- **Function:** Queries the HIBP API for each email address on your list.
- **Process:** It sends a request to the HIBP API. The API responds with a list of data breaches that the email was found in, if any.

### 4. Is Breached? (If Node)

This is the core detection logic. It checks the API response to see whether any breach data was returned.

- **Function:** Compares the API's response to an empty array.
- **Process:** If the API response is **not empty**, a breach has been found and the workflow is routed to the notification node. If the response is empty, the workflow ends safely.

### 5. Send High-Priority Alert (Slack Node) / End Workflow (No-Op Node)

These nodes represent the final action of the workflow.

- **Function:** Responds to a detected breach.
- **Process:** If a breach is found, the **Slack** node sends an urgent alert to your team's security channel, notifying them of the compromised email. If no breaches are found, the **No-Op** node ends the workflow without any notification.
## How to Set Up

Implementing this essential cybersecurity monitor in your n8n instance is quick and straightforward.

### 1. Prepare Your Credentials & API

Before building the workflow, ensure all necessary accounts are set up and their credentials are ready.

- **HIBP API key:** Get an **API key** from haveibeenpwned.com. This key is required to access the API.
- **Slack credential:** Set up a **Slack credential** in n8n and note the **Channel ID** of your security alert channel (e.g., #security-alerts).

### 2. Import the Workflow JSON

Get the workflow structure into your n8n instance.

- **Import:** In your n8n instance, navigate to the "Workflows" section. Click the "New" or "+" icon, then select "Import from JSON." Paste the provided JSON code into the import dialog and import the workflow.

### 3. Configure the Nodes

Customize the imported workflow to fit your specific monitoring needs.

- **Scheduled Check (Cron):** Set the schedule according to your preference (e.g., every Monday at 8:00 AM).
- **List Emails to Check (Code):** Open this node and edit the `emailsToCheck` array. Enter the list of company email addresses you want to monitor.
- **Query HIBP API (HTTP Request):** Open this node and, in the "Headers" section, add the header `hibp-api-key` with your HIBP API key as its value.
- **Send High-Priority Alert (Slack):** Select your **Slack credential** and replace YOUR_SECURITY_ALERT_CHANNEL_ID with your actual **Channel ID**.

### 4. Test and Activate

Verify that your workflow is working correctly before setting it live.

- **Manual test:** Run the workflow manually. You can test with a known breached email address (examples can be found online) to ensure the alert is triggered.
- **Verify:** Check your specified Slack channel to confirm that the alert is sent with the correct information.
- **Activate:** Once you're confident in its function, activate the workflow. n8n will now automatically monitor your important accounts for data breaches on the schedule you set.
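The "Is Breached?" check and the alert text can be sketched as below. HIBP returns a JSON array of breach objects when an address is found (each with a `Name` field); a clean address yields no breach data, so the If node's empty-array comparison routes the flow. The alert wording is illustrative.

```javascript
// True when the HIBP response contains at least one breach record.
function isBreached(apiResponse) {
  return Array.isArray(apiResponse) && apiResponse.length > 0;
}

// Build the Slack alert text from the breach records.
function alertText(email, breaches) {
  const names = breaches.map((b) => b.Name).join(", ");
  return `🚨 ${email} found in ${breaches.length} breach(es): ${names}`;
}
```

Note that the live HIBP API signals "not breached" with an HTTP 404 rather than an empty body, so in practice the HTTP Request node should be set to not fail on 404 (or the error branch treated as "clean") before this empty-array check applies.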
by Oneclick AI Squad
This automated n8n workflow tracks hourly cloud spending across AWS, Azure, and GCP. It detects cost spikes or budget overruns in real time, tags affected resources, and sends alerts via email, WhatsApp, or Slack. This ensures proactive cost management and prevents budget breaches.

## Good to Know

- AWS, Azure, and GCP APIs must have read access to billing data.
- Use secure credentials for API keys or service accounts.
- The workflow runs every hour for near real-time cost tracking.
- Alerts can be sent to multiple channels (Email, WhatsApp, Slack).
- Tags are applied automatically to affected resources for easy tracking.

## How It Works

1. **Hourly Cron Trigger** – starts the workflow every hour to fetch updated billing data.
2. **AWS Billing Fetch** – retrieves the latest cost and usage data via the AWS Cost Explorer API.
3. **Azure Billing Fetch** – retrieves subscription cost data from the Azure Cost Management API.
4. **GCP Billing Fetch** – retrieves project-level spend data using the GCP Cloud Billing API.
5. **Data Parser** – combines and cleans data from all three clouds into a unified format.
6. **Cost Spike Detector** – identifies unusual spending patterns or budget overruns.
7. **Owner Identifier** – matches resources to their respective owners or teams.
8. **Auto-Tag Resource** – tags the affected resource for quick identification and follow-up.
9. **Alert Sender** – sends notifications through Email, WhatsApp, and Slack with detailed cost reports.

## How to Use

1. Import the workflow into n8n.
2. Configure credentials for the AWS, Azure, and GCP billing APIs.
3. Set your budget threshold in the Cost Spike Detector node.
4. Test the workflow to ensure all APIs fetch data correctly.
5. Adjust the Cron Trigger for your preferred monitoring frequency.
6. Monitor alert logs to track and manage cost spikes.

## Requirements

- AWS Access Key & Secret Key with Cost Explorer read permissions.
- Azure Client ID, Tenant ID, and Client Secret with the Cost Management Reader role.
- GCP Service Account JSON key with the Billing Account Viewer role.
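The Cost Spike Detector step can be sketched as a Code-node function. This is a minimal sketch under assumptions: the percentage threshold, the `cost`/`baseline` field names, and the function names are illustrative, not the template's actual parameters.

```javascript
// Flag a spike when the current hour's cost exceeds the baseline by
// more than thresholdPct percent (assumed default: 30%).
function detectSpike(currentHourCost, baselineHourlyCost, thresholdPct = 30) {
  // No baseline yet (e.g., a brand-new resource): any spend is notable.
  if (baselineHourlyCost <= 0) return currentHourCost > 0;
  const changePct = ((currentHourCost - baselineHourlyCost) / baselineHourlyCost) * 100;
  return changePct >= thresholdPct;
}

// Apply the check to the unified records produced by the Data Parser.
function flagSpikes(records, thresholdPct = 30) {
  return records.filter((r) => detectSpike(r.cost, r.baseline, thresholdPct));
}
```

Only the flagged records would flow on to the Owner Identifier and Alert Sender nodes, so quiet hours produce no notifications.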
## Customizing This Workflow

- Change the trigger frequency in the Cron node (e.g., every 15 minutes for faster alerts).
- Modify alert channels to include additional messaging platforms.
- Adjust cost spike detection thresholds to suit your organization's budget rules.
- Extend the Data Parser to generate more detailed cost breakdowns.

Want a tailored workflow for your business? Our experts can craft it quickly. Contact our team.
by Tushar Mishra
## Short Description

Automatically triage incoming chat messages into Incidents, Service Requests, or Other using an LLM-powered classifier; create Incidents in ServiceNow, submit Service Catalog requests (HTTP), and route everything else to an AI Agent with web search and memory. Includes an optional summarization step for ticket context.

## Full Description

This n8n template wires a chat trigger to an LLM-based Text Classifier and then routes messages to the appropriate downstream action:

- **Trigger:** *When chat message received* – incoming messages from your chat channel.
- **Text Classifier:** a small LLM prompt/classifier that returns one of three labels: Incident, Request, or Everything Else.
- **Create Incident (ServiceNow connector):** when labeled Incident, the workflow creates a ServiceNow Incident record (short fields: `short_description`, `description`, `priority`, `caller`).
- **Submit General Request (HTTP Request):** when labeled Request, the workflow calls your Service Catalog API (POST) to place a catalog item / submit a request.
- **AI Agent:** when labeled Everything Else, routes to an AI Agent node that:
  - uses an OpenAI chat model for contextual replies,
  - can consult SerpAPI (web search) as a tool,
  - saves relevant context to Simple Memory for future conversations.
- **Summarization Chain:** an optional chain to summarize long chat threads into concise ticket descriptions before creating incidents/requests.

This template is ideal for support desks that want automated triage with human-quality context and searchable memory.

## Key Highlights

- **Three-way LLM triage:** ensures messages are routed automatically to the correct backend action (Incident vs. Service Request vs. AI handling).
- **ServiceNow native connector:** uses the ServiceNow node to create Incidents (safer than raw HTTP for incidents).
- **Service Catalog via HTTP:** flexible; supports organizations using RESTful catalog endpoints.
- **Summarization before ticket creation:** produces concise, high-quality `short_description` and `description` fields.
- **AI Agent + Memory + Web Search:** handles non-ticket queries with web-augmented answers and stores context for follow-ups.
- **Failover & logging:** includes an optional catch node that logs failures and notifies admins.

## Required Credentials & Inputs (Must Configure)

- **ServiceNow:** instance URL + API user (must have rights to create incidents).
- **Service Catalog HTTP endpoint:** URL + API key / auth header (for POST).
- **OpenAI API key** (or other LLM provider): for the Text Classifier, Summarization Chain, and AI Agent.
- **SerpAPI key** (optional): for web search tools inside the AI Agent.
- **Memory store:** Simple Memory node (or external DB) for conversation history.

## Nodes Included (Quick Map)

- Trigger: When chat message received
- Processor: Text Classifier (OpenAI/LLM)
- Branch A: ServiceNow (Create Incident)
- Branch B: HTTP Request (Service Catalog POST)
- Branch C: AI Agent (OpenAI + SerpAPI + Simple Memory)
- Shared: Summarization Chain (used before A or B where enabled)
- Optional: Error / audit logging node, Slack/email notifications

## Recommended n8n Settings & Tips

- **Use structured outputs** from the classifier (e.g., `{ "label": "Incident", "confidence": 0.92 }`) so you can implement confidence thresholds.
- **If confidence < 0.7,** route to a human review queue instead of auto-creating a ticket.
- **Sanitize user PII** before storing it in memory or sending it to external APIs.
- **Rate-limit** OpenAI/SerpAPI calls to avoid unexpected bills.
- **Test the Service Catalog POST body** in Postman first, using the sample variables JSON.

## Short Sample Variables JSON (Service Catalog POST)

```json
{
  "sysparm_quantity": 1,
  "variables": {
    "description": "User reports VPN timeout on Windows machine; error code 1234"
  }
}
```
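The confidence-threshold recommendation above can be sketched as the routing function a Code node (or Switch node expressions) would implement. The label names match the template; the route names and the `routeMessage` helper are hypothetical.

```javascript
// Route a classifier result to a workflow branch, falling back to
// human review when the model is unsure (assumed cutoff: 0.7).
function routeMessage({ label, confidence }, minConfidence = 0.7) {
  if (confidence < minConfidence) return 'human-review';
  switch (label) {
    case 'Incident': return 'servicenow-incident';   // Branch A
    case 'Request':  return 'service-catalog-post';  // Branch B
    default:         return 'ai-agent';              // Branch C (Everything Else)
  }
}
```

Keeping the threshold in one place makes it easy to tune later, e.g. raising it once you have audit logs showing how often low-confidence auto-tickets were wrong.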