by Angel Menendez
# n8n Workflow: Automate SIEM Alert Enrichment with MITRE ATT&CK & Qdrant

## Who is this for?
This workflow is ideal for:
- **Cybersecurity teams & SOC analysts** who want to automate **SIEM alert enrichment**.
- **IT security professionals** looking to integrate **MITRE ATT&CK intelligence** into their ticketing system.
- **Organizations using Zendesk for security incidents** who need enhanced **contextual threat data**.
- **Anyone using n8n and Qdrant** to build **AI-powered security workflows**.

## What problem does this workflow solve?
Security teams receive large volumes of raw SIEM alerts that lack actionable context. Investigating every alert manually is time-consuming and can lead to delayed response times. This workflow solves the problem by:
✔ Automatically enriching SIEM alerts with MITRE ATT&CK TTPs.
✔ Tagging & classifying alerts based on known attack techniques.
✔ Providing remediation steps to guide the response team.
✔ Enhancing security tickets in Zendesk with relevant threat intelligence.

## What this workflow does
1️⃣ Ingests SIEM alerts (via chatbot or a ticketing system like Zendesk).
2️⃣ Queries a Qdrant vector store containing MITRE ATT&CK techniques.
3️⃣ Extracts relevant TTPs (Tactics, Techniques & Procedures) from the alert.
4️⃣ Generates remediation steps using AI-powered enrichment.
5️⃣ Updates Zendesk tickets with threat intelligence & recommended actions.
6️⃣ Provides structured alert data for further automation or reporting.

## Setup Guide

### Prerequisites
- **n8n instance** (Cloud or self-hosted).
- **Qdrant vector store** with MITRE ATT&CK data embedded.
- **OpenAI API key** (for AI-based threat processing).
- **Zendesk account** (for ticket enrichment, if applicable).

Resources:
- Clean Mitre Data Python Script
- Cleaned Mitre Data
- Full Mitre Data

### Steps to Set Up
1️⃣ **Embed MITRE ATT&CK data into Qdrant.** The workflow pulls MITRE ATT&CK data from Google Drive and loads it into Qdrant. The data is vectorized using OpenAI embeddings for fast retrieval.
2️⃣ **Deploy the n8n chatbot.** The chatbot listens for SIEM alerts and sends them to the AI processing pipeline, where they are analyzed by an AI agent trained on MITRE ATT&CK (a sample alert payload is sketched at the end of this section).
3️⃣ **Enrich Zendesk tickets.** The workflow extracts MITRE ATT&CK techniques from alerts, updates Zendesk tickets with contextual threat intelligence, and adds remediation steps as internal notes for SOC teams.

## How to Customize This Workflow
🔧 **Modify the chatbot trigger:** Adapt the chatbot node to receive alerts from Slack, Microsoft Teams, or any other tool.
🔧 **Change the SIEM input source:** Connect the workflow to Splunk, Elastic SIEM, or Chronicle Security.
🔧 **Customize remediation steps:** Use a custom AI model to tailor remediation responses to organization-specific security policies.
🔧 **Extend ticketing integration:** Modify the Zendesk node to also work with Jira, ServiceNow, or another ITSM platform.

## Why This Workflow is Powerful
✅ **Saves time:** Automates alert triage & classification.
✅ **Improves security posture:** Helps SOC teams act faster on threats.
✅ **Leverages AI & vector search:** Uses LLM-powered enrichment for real-time context.
✅ **Works across platforms:** Supports n8n Cloud, self-hosted, and Qdrant.

🚀 Get Started Now!
📖 Watch the Setup Video
💬 Have questions? Join the discussion in the YouTube comments!
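### Example alert payload
Purely as an illustration of the kind of SIEM alert JSON the chatbot or webhook might receive — the field names here are hypothetical and depend on your SIEM's export format:

```json
{
  "alertId": "SIEM-2025-04821",
  "severity": "high",
  "rule": "Possible credential dumping via LSASS access",
  "host": "WIN-DC01",
  "user": "svc-backup",
  "rawEvent": "Process mimikatz.exe accessed lsass.exe memory"
}
```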
by Markhah
## Overview
This workflow generates automated revenue and expense comparison reports from a structured Google Sheet. It lets users compare financial data across the current period, last month, and last year, then uses an AI agent to analyze and summarize the results for business reporting.

## 1. Prerequisites
- A connected Google Sheets OAuth2 credential.
- A valid DeepSeek AI API key (or another chat model of your choice).
- A sub-workflow (child workflow) that handles the processing logic.
- Properly structured Google Sheets data (see below).

## 2. Required Google Sheet Structure
- Column headers must include at least: Date, Amount, Type.
- The Date column must use the dd/MM/yyyy or dd-MM-yyyy format.
- Entries should span multiple time periods (e.g., current month, last month, last year).

## 3. Setup Steps
1. Import the workflow into your n8n instance.
2. Connect your Google Sheets and DeepSeek API credentials.
3. Update:
   - The Sheet ID and tab name (already embedded in the node: Get revenual from google sheet).
   - The custom sub-workflow ID (in the Call n8n Workflow Tool node).
4. Optionally, configure the chatbot webhook in the When chat message received node.

## 4. What the Workflow Does
- Accepts date inputs via an AI chat interface (ChatTrigger + AI Agent).
- Fetches raw transaction data from Google Sheets.
- Segments and pivots revenue by classification for:
  - the current period
  - last month
  - last year
- Aggregates totals and applies custom titles for comparison.
- Merges all summaries into a final unified JSON report.

## 5. Customization Options
- Replace DeepSeek with OpenAI or other LLMs.
- Change the date fields or cycle comparisons (e.g., quarterly, weekly).
- Add more AI analysis steps such as sentiment scoring or forecasting.
- Modify the pivot logic to suit specific KPI tags or labels.

## 6. Troubleshooting Tips
- If the Google Sheets fetch fails: ensure the document is shared with your n8n Google credential.
- If you hit parsing errors: verify that all dates follow the expected format.
- The sub-workflow must be active and configured to accept the correct inputs (6 dates; see the sample at the end of this section).

## 7. SEO Keywords
google sheets report, AI financial report, compare revenue by month, expense analysis automation, chatbot n8n report generator, n8n Google Sheet integration
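### Example: sub-workflow date inputs
As a minimal sketch of the six date inputs the sub-workflow might receive — the field names are hypothetical, so match them to your own sub-workflow's input definition; the values follow the dd/MM/yyyy format required by the sheet:

```json
{
  "currentStart": "01/06/2025",
  "currentEnd": "30/06/2025",
  "lastMonthStart": "01/05/2025",
  "lastMonthEnd": "31/05/2025",
  "lastYearStart": "01/06/2024",
  "lastYearEnd": "30/06/2024"
}
```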
by PUQcloud
# Setting up the n8n workflow

## Overview
The Docker n8n WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

### Install the Required Workflow on n8n
You have two options:
- **Option 1: Use the latest version from the n8n marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- **Option 2: Manual installation.** Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

## n8n Workflow API Backend Setup for WHMCS/WISECP

### Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

### Modify Template Parameters
In the Parameters block of the template, update the following settings (a sample configuration is sketched at the end of this section):
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

### Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

### nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server.
- The main section allows you to add custom parameters to the server block of the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here you can define custom headers and other parameters specific to the root location.

### Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
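### Example parameter values
For orientation, a hypothetical Parameters block might look like the sketch below. The values are placeholders for illustration, not defaults shipped with the module — substitute your own domain and directories:

```json
{
  "server_domain": "docker01.example.com",
  "clients_dir": "/opt/docker-clients",
  "mount_dir": "/mnt/client-disks"
}
```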
by Jez
## Summary
This n8n workflow implements an AI-powered agent that intelligently uses the Brave Search API (via an external MCP service like Smithery) to perform both web and local searches. It understands natural language queries, selects the appropriate search tool, and exposes this enhanced capability as a single, callable MCP tool.

## Key Features
- 🤖 **Intelligent Tool Selection:** The AI agent decides between Brave's web search and local search tools based on the user query's context.
- 🌐 **MCP Microservice:** Exposes complex search logic as a single, easy-to-integrate MCP tool (call_brave_search_agent).
- 🧠 **Powered by Google Gemini:** Utilizes the gemini-2.5-flash-preview-05-20 LLM for advanced reasoning.
- 🗣️ **Conversational Memory:** Remembers context within a single execution flow.
- 📝 **Customizable System Prompt:** Tailor the AI's behavior and responses.
- 🧩 **Modular Design:** Connects to external Brave Search MCP tools (e.g., from Smithery).

## Benefits
- 🔌 **Simplified Integration:** Easily add advanced, AI-driven search capabilities to other applications or agent systems.
- 💸 **Reduced Client-Side LLM Costs:** Offloads complex prompting and tool orchestration to n8n, minimizing token usage for client-side LLMs.
- 🔧 **Centralized Logic:** Manage and update search strategies and AI behavior in one place.
- 🚀 **Extensible:** Can be adapted to use other search tools or incorporate more complex decision-making.

## Nodes Used
- @n8n/n8n-nodes-langchain.mcpTrigger (MCP Server Trigger)
- @n8n/n8n-nodes-langchain.toolWorkflow
- @n8n/n8n-nodes-langchain.agent (AI Agent)
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini (Google Gemini Chat Model)
- n8n-nodes-mcp.mcpClientTool (MCP Client Tool, for Brave Search)
- @n8n/n8n-nodes-langchain.memoryBufferWindow (Simple Memory)
- n8n-nodes-base.executeWorkflowTrigger (Workflow Start, for direct execution/testing)

## Prerequisites
- An active n8n instance (v1.22.5+ recommended).
- A Google AI API key for using the Gemini LLM.
- Access to an external MCP service that provides Brave Search tools (e.g., a Smithery account configured with their Brave Search MCP). This includes the MCP endpoint URL and any necessary authentication (such as an API key for Smithery).

## Setup Instructions
1. **Import Workflow:** Download the Brave_Search_Smithery_AI_Agent_MCP_Server.json file and import it into your n8n instance.
2. **Configure LLM Credential:** Locate the 'Google Gemini Chat Model' node. Select or create an n8n credential for "Google Palm API" (used for Gemini), providing your Google AI API key.
3. **Configure Brave Search MCP Credential:**
   - Locate the 'brave_web_search' and 'brave_local_search' (MCP Client) nodes.
   - Create a new n8n credential of type "MCP Client HTTP API".
   - Name: e.g., Smithery Brave Search Access
   - Base URL: Enter the URL of your Brave Search MCP endpoint from your provider (e.g., https://server.smithery.ai/@YOUR_PROFILE/brave-search/mcp).
   - Authentication: If your MCP provider requires an API key, select "Header Auth". Add a header with the name (e.g., X-API-Key) and value provided by your MCP service.
   - Assign this newly created credential to both the 'brave_web_search' and 'brave_local_search' nodes.
4. **Note MCP Trigger Path:** Open the 'Brave Search MCP Server Trigger' node and copy its unique 'Path' (e.g., /cc8cc827-3e72-4029-8a9d-76519d1c136d). Combine this with your n8n instance's base URL to get the full endpoint URL for clients.

## How to Use
This workflow exposes an MCP tool named call_brave_search_agent. External clients can call this tool via the URL derived from the 'Brave Search MCP Server Trigger'.
### Example Client MCP Configuration (e.g., for Roo Code)
```json
"n8n-brave-search-agent": {
  "url": "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d/sse",
  "alwaysAllow": ["call_brave_search_agent"]
}
```
Replace YOUR_N8N_INSTANCE with your n8n instance's public URL and ensure the path matches your trigger node.

### Example Request
Send a POST request to the trigger URL with a JSON body:
```json
{
  "input": {
    "query": "best coffee shops in London"
  }
}
```
The agent will stream its response, including the summarized search results.

## Customization
- **AI Behavior:** Modify the system prompt within the **'Brave Search AI Agent'** node to fine-tune its decision-making, response style, or how it uses the search tools.
- **LLM Choice:** Replace the **'Google Gemini Chat Model'** node with any other compatible LLM node supported by n8n.
- **Search Tools:** Adapt the workflow to use different or additional search tools by modifying the MCP Client nodes and updating the AI Agent's system prompt and tool definitions.

## Further Information
- GitHub Repository: https://github.com/jezweb/n8n
- The workflow includes extensive sticky notes for in-canvas documentation.

## Author
Jeremy Dawes (Jezweb)
by Khairul Muhtadin
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Cold Calling Automation - End-to-End Automated Cold Calling with Apify, RAG, and WhatsApp

The "Cold Calling Automation" workflow fully automates the end-to-end cold calling process by intelligently combining web scraping, AI-powered research, and WhatsApp messaging. Leveraging Apify for data scraping, RAG (Retrieval-Augmented Generation) for intelligent content creation, and WhatsApp integration for automated outreach, it transforms raw prospect data into personalized, high-converting cold calling campaigns with minimal manual intervention.

## 💡 Why Use Cold Calling Automation?
- **Scale Your Outreach:** Automate hundreds of personalized cold calls without manual effort or additional staff.
- **Intelligent Personalization:** RAG technology creates highly relevant, personalized messages based on prospect research.
- **Multi-Channel Approach:** Seamlessly integrate WhatsApp messaging with traditional cold calling methods.
- **Real-Time Optimization:** Continuously improve message performance and conversion rates through AI analysis.
- **Cost-Effective:** Reduce cold calling costs while dramatically increasing reach and response rates.

## ⚡ Who Is This For?
- **Sales Teams** looking to scale their cold calling efforts with intelligent automation and personalization.
- **Lead Generation Agencies** needing to deliver high-volume, high-quality cold calling services to clients.
- **Business Development Professionals** seeking to maximize outreach efficiency while maintaining a personal touch.
- **Small Business Owners** who want professional-grade cold calling capabilities without hiring an expensive sales team.
- **Marketing Agencies** offering comprehensive lead generation and conversion services to clients.

## ❓ What Problem Does It Solve?
Traditional cold calling is time-consuming, expensive, and often ineffective due to a lack of personalization and poor timing. Manual prospect research, script writing, and call execution create bottlenecks that limit outreach scale, and generic messages result in low response rates and a damaged brand reputation. This workflow automates the entire cold calling pipeline - from prospect identification and research to personalized message creation and delivery - while maintaining the quality and relevance that converts prospects into qualified leads.

## 🔧 What This Workflow Does
- ⏱ **Prospect Scraping:** Uses Apify to automatically scrape and identify high-quality prospects based on your target criteria.
- 🔍 **Intelligent Research:** Employs RAG technology to research each prospect and gather relevant business intelligence.
- ✍️ **Personalized Content:** Automatically generates custom messages, scripts, and talking points for each prospect.
- 📱 **WhatsApp Integration:** Delivers personalized messages through WhatsApp automation for maximum engagement.
- 📊 **Performance Tracking:** Monitors response rates, engagement metrics, and conversion data for continuous optimization.
- 🤖 **AI-Powered Follow-up:** Automatically handles initial responses and schedules appropriate follow-up actions.
- 📈 **Campaign Analytics:** Provides detailed insights on campaign performance and ROI metrics.
- 🔄 **Continuous Learning:** Improves message effectiveness and targeting based on campaign results.

This workflow also uses the community node `@devlikeapro/n8n-nodes-waha`.

## 🔐 Setup Instructions
1. Import the provided workflow JSON into your n8n instance (Cloud or self-hosted).
2. Set up credentials:
   - Apify API credentials for prospect scraping
   - OpenAI API key for RAG and content generation
   - WhatsApp Business API credentials or WAHA integration
   - Database credentials for prospect and campaign tracking
   - Email credentials for notifications and reporting
3. Customize parameters:
   - Target prospect criteria and scraping parameters
   - Message templates and personalization rules
   - Campaign timing and frequency settings
   - Response handling and follow-up logic
   - Performance tracking and reporting preferences
4. Test the complete workflow with a small prospect list to verify scraping, personalization, and delivery (a sample prospect record is sketched at the end of this section).

## 🧩 Pre-Requirements
- Active n8n instance (Cloud or self-hosted)
- Apify account with appropriate scraping credits
- OpenAI API key with sufficient usage limits
- WhatsApp Business account or WAHA setup
- Database system for prospect and campaign management
- Basic understanding of your target audience and value proposition

## 🛠️ Customize It Further
- Integrate with CRM systems to sync prospects and track conversion through the sales pipeline.
- Add voice calling capabilities using VoIP services for complete omnichannel outreach.
- Implement A/B testing for message templates and timing optimization.
- Connect with social media platforms for multi-channel prospecting and engagement.
- Add sentiment analysis to optimize message tone and approach for different prospect types.
- Integrate with calendar systems for automatic meeting scheduling from qualified responses.

## 🧠 Nodes Used
- Apify nodes for prospect scraping and data collection
- OpenAI Chat Model and Embeddings for RAG implementation
- WhatsApp/WAHA nodes for message delivery and response handling
- Database nodes for prospect storage and campaign tracking
- HTTP Request nodes for API integrations and webhooks
- Code nodes for data processing and personalization logic
- Schedule Trigger for automated campaign execution
- Conditional nodes for response handling and follow-up logic
- Set nodes for parameter configuration and data transformation
- Split In Batches for efficient bulk processing

## 📊 Expected Results
- **50-80% increase** in cold calling efficiency and prospect reach
- **25-40% higher** response rates compared to generic cold calling
- **60-75% reduction** in manual research and message preparation time
- **Real-time insights** into campaign performance and prospect engagement
- **Scalable system** that grows with your business needs

## 📞 Support
Made by: khaisa Studio
Tags: automation, cold calling, lead generation, apify, RAG, whatsapp, AI, sales automation, outreach
Category: Sales Automation & Lead Generation
Need a custom workflow? Contact me for more tailored templates.
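### Sample prospect record
To illustrate what flows between the scraping and personalization stages, here is a hypothetical prospect record. The field names are illustrative only — the exact schema depends on the Apify actor and personalization logic you configure:

```json
{
  "name": "Jane Doe",
  "company": "Acme Logistics",
  "phone": "+6281234567890",
  "website": "https://acme-logistics.example",
  "researchSummary": "Mid-size freight forwarder expanding into cold-chain delivery.",
  "personalizedMessage": "Hi Jane, congrats on Acme's cold-chain expansion..."
}
```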
by David Olusola
## How It Works
The workflow is an automated appointment reminder system built on n8n. Here is a step-by-step breakdown of its process:

1. **Reminder Webhook:** This node acts as the entry point for the workflow. It's a unique URL that waits for data to be sent to it from an external application, such as a booking or scheduling platform. When a new appointment is created in that system, it sends a JSON payload to this webhook (a sample payload is sketched at the end of this section).
2. **Extract Appointment Data:** This is a Code node that processes the incoming data. It's a critical step that:
   - Extracts the customer's name, phone number, appointment time, and service from the webhook's JSON payload.
   - Includes validation to ensure a phone number is present, throwing an error if it's missing.
   - Formats the raw appointment time into a human-readable string for the SMS message.
3. **Send SMS Reminder:** This node uses your Twilio credentials to send an SMS message. It dynamically constructs the message from the data extracted in the previous step, personalizing it with the customer's name and the formatted appointment details.

## Setup Instructions
1. **Import the Workflow:** Copy the JSON code from the canvas and import it into your n8n instance.
2. **Connect Your Twilio Account:** Click the "Send SMS Reminder" node. In the "Credentials" section, either select your existing Twilio account or add new credentials using the Account SID and Auth Token from your Twilio console.
3. **Find the Webhook URL:** Click the "Reminder Webhook" node. The unique URL for this workflow will be displayed; copy it.
4. **Configure Your Booking System:** In your booking or scheduling platform (e.g., Calendly, Acuity), find where you can add a new webhook in the settings or integrations section and paste the URL you copied from n8n. You'll need to map the data fields from your booking system (customer name, phone, etc.) to the expected format shown in the comments of the "Extract Appointment Data" node.

Once these steps are complete, the workflow will automatically send SMS reminders whenever a new appointment is created.
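### Sample webhook payload
Here is a minimal sketch of the kind of payload the webhook expects. The exact field names depend on how you map them in your booking platform — these are hypothetical placeholders matching the four values the Code node extracts:

```json
{
  "customerName": "Alex Kim",
  "phone": "+15551234567",
  "appointmentTime": "2025-07-14T15:30:00Z",
  "service": "Dental cleaning"
}
```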
by Nabin Bhandari
This n8n template uses AI to automatically classify incoming Gmail messages into five categories and route them to the right people or departments. It can also reply automatically and send WhatsApp alerts for urgent or relevant messages. This helps ensure high-priority emails never get missed, while other messages are handled efficiently.

## How It Works
1. **Trigger:** A new email in Gmail triggers the workflow.
2. **Classification (OpenAI GPT):** The email is analyzed by an OpenAI GPT model and classified into one of:
   - High Priority
   - Customer Support
   - Promotion
   - Finance/Billing
   - Random/Other
3. **Conditional Logic & Actions:**
   - High Priority → Create draft reply + send WhatsApp alert.
   - Customer Support → Auto-reply + send WhatsApp confirmation alert.
   - Promotion → Summarize email + send WhatsApp promotional alert.
   - Finance/Billing → Forward to finance team + send WhatsApp finance alert.
   - Random/Other → Label and log only.
4. **Multi-Channel Output:** Responses are sent via Gmail; alerts are sent via WhatsApp (or another compatible API). A sketch of the classifier output that drives this routing appears at the end of this section.

## Setup Instructions
**Step 1: Gmail Authorization**
- Add a Gmail node in n8n.
- Connect using OAuth2 and grant read/send permissions.

**Step 2: OpenAI API Key**
- Get your API key from OpenAI.
- Add it to the n8n credentials for the OpenAI node.

**Step 3: WhatsApp Integration**
- Use your WhatsApp Business API or a provider like Twilio or 360Dialog.
- Replace the placeholders with your details:
  - [YOUR_WHATSAPP_NUMBER]
  - [YOUR_FINANCE_TEAM_NUMBER]
  - [YOUR_SUPPORT_TEAM_NUMBER]

**Step 4: Import & Run**
- Import the workflow JSON into n8n.
- Adjust prompts, labels, and routing logic as needed.
- Execute and monitor the results.

## Good to Know
- Fully customizable: add or remove categories, adjust responses, and change alert channels.
- Can be integrated with Slack, Discord, Trello, Notion, Jira, or CRM systems.
- Scales easily across teams and departments.

## Requirements
- Gmail account with OAuth2 credentials set up in n8n
- OpenAI API key for classification and content generation
- WhatsApp (or other messaging service) integration
- Optional: Slack, Notion, CRM, or accounting tool integrations

## Customization Ideas
- Create support tickets in Trello, Notion, or Jira from Customer Support emails.
- Sync Finance emails with QuickBooks, Stripe, or Google Sheets.
- Replace WhatsApp alerts with Slack or Discord messages.
- Use Zapier/Make for cross-platform automations.
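### Sample classifier output
For reference, a hedged sketch of the structured output the OpenAI classification step might return. The exact fields depend on how you write the classification prompt — these names are hypothetical:

```json
{
  "category": "Finance/Billing",
  "confidence": 0.92,
  "summary": "Invoice #4821 overdue by 14 days; vendor requests payment status.",
  "suggestedAction": "forward_to_finance"
}
```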
by PUQcloud
# Setting up the n8n workflow

## Overview
The Docker Grafana WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

### Install the Required Workflow on n8n
You have two options:
- **Option 1: Use the latest version from the n8n marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- **Option 2: Manual installation.** Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

## n8n Workflow API Backend Setup for WHMCS/WISECP

### Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

### Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

### Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

### nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server.
- The main section allows you to add custom parameters to the server block of the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here you can define custom headers and other parameters specific to the root location.

### Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed (an illustrative response is sketched below).
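### Illustrative script response
The scripts' JSON responses are what the module consumes. Purely as a hedged illustration — this is not the module's actual schema, just a sketch of the shape such a response could take — a container status script might return:

```json
{
  "status": "success",
  "container": "grafana_client42",
  "state": "running",
  "disk_usage_mb": 512
}
```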
by Raphael De Carvalho Florencio
## What this template does
Transforms provider documentation (URLs) into an auditable, enforceable multicloud security control baseline. It:
- Fetches and sanitizes HTML
- Uses AI to extract security requirements (strict 3-line TXT blocks)
- **Composes enforceable controls** (strict 7-line TXT blocks with true-equivalence consolidation)
- Builds the final baseline (TXT or JSON; see **Outputs**) with a Technology: header
- Returns a downloadable artifact via webhook and can append/create the file in Google Drive

## Why it's useful
Eliminates manual copy-paste and produces a consistent, portable baseline ready for review, audit, or enforcement tooling - ideal for rapidly generating or refreshing baselines across cloud providers and services.

## Multicloud support
The workflow is multicloud by design. Provide the target cloud in the request and run the same pipeline for:
- **AWS**, **Azure**, **GCP** (out of the box)
- Extensible to other providers/services by adjusting prompts and routing logic

## How it works (high level)
1. POST /create (Basic Auth) with { cloudProvider, technology, urls[] }
2. Input validation → generate uuid → resolve Google Drive folder (search-or-create)
3. Download & sanitize each URL
4. AI pipeline: Extractor → Composer → Baseline Builder → (optional) Baseline Auditor
5. Append/create file in Drive and return a downloadable artifact (TXT/JSON) via webhook

## Request (webhook)
- Method: POST
- URL: https://<your-n8n>/webhook/create
- Auth: Basic Auth
- Headers: Content-Type: application/json

Example input (Postman/CLI):
```json
{
  "cloudProvider": "aws",
  "technology": "Amazon S3",
  "urls": [
    "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html",
    "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/S3/",
    "https://repost.aws/knowledge-center/secure-s3-resources"
  ]
}
```

## Field reference
- cloudProvider (string, required) - case-insensitive. Supported: aws, azure, gcp.
- technology (string, required) - e.g., "Amazon S3", "Azure Storage", "Google Cloud Storage".
- urls (string[], required) - 1-20 http(s) URLs (official/reputable docs).

Optional (Google Drive destination):
- gdriveTargetId (string) - Google Drive folderId used for append/create.
- gdrivePath (string) - a path like "DefySec/Baselines" (folders are created if missing).
- gdriveTargetName (string) - folder name to find/create under root.

Optional (Assistant overrides):
- assistantExtractorId, assistantComposerId, assistantBaselineId, assistantAuditorId (strings)

Resolution precedence:
- Drive: gdriveTargetId → gdrivePath → gdriveTargetName → default folder.
- Assistants: explicit IDs above → dynamic resolution by name (expects 1_DefySec_Extractor, 2_DefySec_Control_Composer, 3_DefySec Baseline Builder, 4_DefySec_Baseline_Auditor).

## Validation
- Rejects an empty urls array and non-http(s) schemes; normalizes cloudProvider to aws|azure|gcp.
- Sanitizes fetched HTML (removes scripts/styles/headers) before the AI steps.

## Outputs
- Primary: a downloadable **TXT** file controls_<technology>_<timestamp>.txt (via webhook).
- Composer outcomes: if there are no groups to consolidate → NO_CONTROLS_TO_BE_CONSOLIDATED; if nothing valid remains → NO_CONTROLS_FOUND.
- JSON path: when the Builder stage is configured for **JSON-only** output (strict schema), the workflow returns a .json artifact and the Auditor validates it (see the next section).
## Techniques used (from the built-in assistants)
- **Provider-aware extraction with a strict TXT contract (3 lines):** The Extractor limits itself to the declared provider/technology, outputs only Description/Reference/SecurityObjective, and applies a **reflexive quality check** before emitting.
- **Normalization & strict header parsing:** The Composer normalizes whitespace/fences, requires the CloudProvider/Technology header, and ignores anything outside the exact 3-line block shape.
- **True-equivalence grouping & consolidation:** The Composer groups **only** when intent, enforcement locus/mechanism, scope, and mode/setting all match - otherwise items remain distinct.
- **7-line enforceable control format:** The Composer renders each (consolidated or unique) control in **exactly seven labeled lines** to keep results auditable and automatable.
- **Builder with a JSON-only schema & technology inference:** The Builder parses 7-line blocks, infers the technology, consolidates true equivalents again if needed, and returns **pure JSON** matching a canonical schema, with counters in meta (an illustrative sketch appears at the end of this section).
- **Self-evaluation loop (Auditor):** The Auditor **unwraps transport**, validates **schema & content**, checks provider terminology/scope/automation, and returns either GOOD_ENOUGH or a JSON instruction set for the Builder to fix and re-emit - enabling reflective improvement.
- **Reference prioritization:** Across stages, official provider documentation is preferred in References (AWS/Azure/GCP).

## Customization & extensions
- **Prompt-reflective techniques:** Keep (or extend) the Auditor loop to add more review passes and quality gates.
- **Compliance assistants:** Add assistants to analyze/label controls for **HIPAA, PCI DSS, SOX** (and others), emitting mappings, gaps, and remediation notes.
- **Implementation context:** Feed in internal implementation docs, runbooks, or **Architecture Decision Records (ADRs)** and use them as grounding to generate or refine controls (works with local/self-hosted LLMs, too).
- **Local/self-hosted LLMs:** Swap the OpenAI nodes for your on-prem LLM endpoint while keeping the pipeline.
- **Provider-specific outputs:** Extend the final stage to export Policy-as-Code or IaC snippets (Rego/Sentinel, CloudFormation Guard, Bicep/ARM, Terraform validations).

## Assistant configuration & prompts
Full assistant configurations and prompts (Extractor, Composer, Baseline Builder, Baseline Auditor) are available here:
https://github.com/followdrabbit/n8nlabs/tree/main/Lab03%20-%20Multicloud%20AI%20Security%20Control%20Baseline%20Builder/Assistants

## Security & privacy
- No hardcoded secrets in HTTP nodes; use n8n's Credential Manager.
- Drive operations are optional and folder-scoped.
- For sensitive environments, switch to a local LLM and provide only sanitized/approved inputs.

## Quick test (curl)
```bash
curl -X POST "https://<your-n8n>/webhook/create" \
  -u "<user>:<pass>" \
  -H "Content-Type: application/json" \
  -d '{
    "cloudProvider":"aws",
    "technology":"Amazon S3",
    "urls":[
      "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html"
    ]
  }' \
  -OJ
```
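### Illustrative JSON artifact
The canonical schema lives with the assistant prompts in the repository above. Purely as a hedged sketch — the field names here are illustrative, not the actual schema — a returned JSON baseline might look like:

```json
{
  "meta": {
    "technology": "Amazon S3",
    "cloudProvider": "aws",
    "controlCount": 12,
    "consolidatedCount": 3
  },
  "controls": [
    {
      "id": "S3-001",
      "description": "Block public access at the account and bucket level.",
      "reference": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html",
      "securityObjective": "Prevent unintended public exposure of objects."
    }
  ]
}
```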
by SIENNA
# Automated AWS S3 / Azure / Google to Local MinIO Object Backup with Scheduling

## What does this workflow do?
This workflow performs automated, periodic backups of objects from an AWS S3 bucket, an Azure container, or a Google Cloud Storage bucket to a MinIO S3 bucket running locally or on a dedicated container/VM/server. It also works if the MinIO bucket is hosted on a remote cloud provider's infrastructure; you just need to change the URL and keys (see the sketch at the end of this section).

## Who's this intended for?
Storage administrators, cloud architects, or DevOps engineers who need a simple and scalable solution for retrieving data from AWS, Azure, or GCP.

## How it works
The workflow uses the official AWS S3 API (or the Azure Blob API) to list and download objects from a specific bucket, then sends them to MinIO using MinIO's version of the S3 API.

## Requirements
None beyond a source bucket on your cloud storage provider and a destination bucket on MinIO. You'll also need MinIO up and running.

Using Proxmox VE? Create a MinIO LXC container: https://community-scripts.github.io/ProxmoxVE/scripts?id=minio

## Need a backup from another cloud storage provider?
Check out our templates: we've done this with AWS, Azure, and GCP, and we even have a version for FTP/SFTP servers! For a dedicated source cloud storage provider, please contact us.

These workflows can be integrated into bigger ones and modified to best suit your needs! You can, for example, replace the MinIO node with another S3 bucket from another cloud storage provider (Backblaze, Wasabi, Scaleway, OVH, ...).
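### Example MinIO connection settings
Pointing an S3-compatible credential at a local MinIO instance typically needs only an endpoint URL and a key pair. A hypothetical sketch — all values are placeholders, and the exact credential field names depend on your n8n version:

```json
{
  "endpoint": "http://192.168.1.50:9000",
  "accessKeyId": "minio-backup-user",
  "secretAccessKey": "<your-secret-key>",
  "region": "us-east-1",
  "forcePathStyle": true
}
```

Path-style addressing is the usual choice for self-hosted MinIO, since virtual-hosted bucket URLs require matching DNS entries.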
by Trung Tran
# Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below:

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

## Who's it for
This workflow is ideal for:
- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring **AI-driven workflows** integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate **image content pipelines** with AWS S3 storage.

## How it works / What it does
1. **Trigger:** The workflow starts manually when you click "Execute Workflow".
2. **Edit Fields:** You provide input fields such as the image description, resolution, or naming convention (a sample input is sketched at the end of this section).
3. **Create AWS S3 Bucket:** Automatically creates a new S3 bucket if it doesn't exist.
4. **Create a Folder:** A folder is created inside the bucket to organize generated images.
5. **Prompt Generation Agent:** An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. **Generate an Image:** The refined prompt is used to generate an image with AI.
7. **Upload File to S3:** The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI and cloud storage seamlessly in an automated pipeline.

## How to set up
1. Import the workflow into n8n.
2. Configure the following credentials:
   - AWS S3 (Access Key, Secret Key, Region).
   - OpenAI API key (for the Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test it by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

## Requirements
- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

## How to customize the workflow
- **Custom buckets:** Replace the auto-create step with an existing S3 bucket.
- **Image variations:** Generate multiple image variations per prompt by looping the image generation step.
- **File naming:** Adjust file naming conventions (e.g., timestamp, user input).
- **Metadata:** Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative storage:** Swap AWS S3 for **Google Cloud Storage, Azure Blob, or Dropbox**.
- **Trigger options:** Replace the manual trigger with a **Webhook, Form Submission, or Scheduler** for full automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
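### Sample Edit Fields input
As a hedged sketch of what the Edit Fields node might hold — the field names are placeholders to adapt to your own node configuration, and the file name shows where you could inject a timestamp expression:

```json
{
  "imageDescription": "Futuristic city skyline in watercolor style",
  "resolution": "1024x1024",
  "bucketName": "ai-generated-images",
  "folderName": "watercolor-series",
  "fileName": "skyline-<timestamp>.png"
}
```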
by Haruki Kuwai
## 🧠 About this workflow
This workflow automatically generates personalized B2B outreach email messages by combining AI-based company research and text generation. It's designed to help sales and marketing professionals automate the creation of tailored cold emails for prospects.

## ⚙️ How it works
1. **Get rows from Google Sheets:** Retrieves companies marked as "ready" for outreach (a sample row is sketched at the end of this section).
2. **Loop Over Items:** Processes each company individually.
3. **Company Research (LangChain Agent):** Uses the Tavily search tool to collect key company insights such as an overview, offerings, and recent news.
4. **Generate Outreach Message (LLM Chain):** Drafts a professional, concise, fully personalized email body in English using the AI training context from YOUR_COMPANY_NAME. This example uses an AI training and automation service context, but you can easily modify the prompt to fit your own company's products, services, or industry.
5. **Add to Google Sheets:** Writes the generated messages back into the sheet.
6. **(Optional) Add to Instantly.ai:** Sends the finalized lead data to your Instantly campaign for cold email distribution.

## 👥 Use Cases
- 💼 Sales & CRM: Generate tailored first-touch emails for every prospect in your pipeline.
- 🏢 Lead Generation Agencies: Run personalized outreach at scale for multiple clients from a single sheet.
- 📧 Marketing Teams: Collect, enrich, and engage leads efficiently.
- 📚 AI Research: Build structured company datasets for training AI models or internal automation.

## 🧩 Troubleshooting
If the workflow does not generate emails or data fails to appear in Google Sheets, check the following:
- **Google Sheets credentials:** Ensure that the connected account has edit permissions and that the document ID and sheet name are set correctly.
- **API keys:** Verify that your OpenRouter and Tavily API credentials are valid and not expired.
- **Rate limits:** Tavily and OpenRouter may throttle requests when processing many records. Try lowering the batch size in the "Limit" node.
- **Empty company background:** If the "Company Research" node returns no output, make sure the input company name is correct and includes sufficient context (e.g., the full company name, not an abbreviation).
- **LLM output format:** Ensure the "Generate Outreach Message" node is set to return plain text, not JSON or markdown.
- **Instantly.ai integration (optional):** If leads are not added, confirm that your API key and campaign ID are valid and that the node is not disabled.

If the issue persists, enable "Always Output Data" in key nodes (such as Company Research and Generate Outreach Message) to debug intermediate results. You can also use the Execution Log to inspect where the flow stops or returns empty output.

## ⚠️ Disclaimer
This workflow uses AI language models and third-party APIs (OpenRouter, Tavily). Add your own API credentials securely and verify all AI-generated content before sending emails.
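### Sample sheet row
For reference, a hypothetical row as the workflow might read it from Google Sheets. The column names are illustrative and should be adapted to match your own sheet; the Status column marks which companies are "ready" for outreach:

```json
{
  "Company": "Acme Robotics",
  "Website": "https://acme-robotics.example",
  "ContactName": "Maria Lopez",
  "Email": "maria@acme-robotics.example",
  "Status": "ready",
  "GeneratedMessage": ""
}
```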