by n8n Team
This workflow sends a new Clockify invoice to a Notion database of your choosing when a new invoice is created in Clockify.

## Prerequisites

- Notion account and Notion credentials.
- Clockify account.

## How it works

1. The **On new invoice in Clockify** webhook node triggers when a new invoice is created in Clockify. Some setup is involved (see below).
2. The **Create database page** Notion node creates a database page with the information from the Clockify trigger. You can add additional fields if required by following the setup.

## Setup

This workflow requires that you set up a webhook in Clockify:

1. Go to the webhooks section in Clockify.
2. Create a webhook for the "Invoice created" event and paste in the URL provided by the **On new invoice in Clockify** webhook step.

You will also have to set up a Notion database:

1. In Notion, create a new database.
2. Add the following columns to the database:
   - Invoice number (renamed from "Name")
   - Issue date (with type "Date")
   - Due date (with type "Date")
   - Amount (with type "Number")
3. Add any other fields you require to the database.
4. Share the database with n8n.

By default, the workflow fills all the fields listed above, but not any additional fields you add.
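If you want to check the Notion mapping before the first real invoice arrives, you can post a test payload to the webhook URL. This is a minimal sketch only: the path and field names below are placeholders, not Clockify's documented webhook schema, so inspect a real "Invoice created" delivery before mapping fields in the Notion node.

```bash
# Illustrative test call to the n8n webhook. The URL path and the payload
# fields are assumptions for demonstration, not Clockify's actual schema.
curl -X POST "https://YOUR_N8N_INSTANCE/webhook/clockify-new-invoice" \
  -H "Content-Type: application/json" \
  -d '{
        "number": "INV-001",
        "issuedDate": "2024-05-01",
        "dueDate": "2024-05-31",
        "amount": 1250.00
      }'
```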
by PUQcloud
## Overview

The Docker InfluxDB WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

**Install the required workflow on n8n.** You have two options:

- **Option 1: Use the latest version from the n8n marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile: PUQcloud on n8n
- **Option 2: Manual installation.** Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

## n8n Workflow API Backend Setup for WHMCS/WISECP

**1. Configure API Webhook and SSH Access**
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

**2. Modify Template Parameters**
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

## Deploy-docker-compose

In the Deploy-docker-compose element, you can modify the Docker Compose configuration. This is generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

## nginx

In the nginx element, you can modify configuration parameters of the web interface proxy server:
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the configuration. Here, you can define custom headers and parameters.

## Bash Scripts

Management of Docker containers and related procedures is done by executing Bash scripts generated in n8n. These scripts return either JSON or plain strings. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed; an illustrative sketch follows below.
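As an illustration of the kind of script these SSH-connected elements hold, here is a minimal sketch, not taken from the module itself: a Bash script that checks a container's state and returns JSON. The container argument and JSON field names are assumptions for demonstration.

```bash
#!/usr/bin/env bash
# Minimal sketch of a status-check script in the style described above.
# The JSON fields are illustrative assumptions, not the module's actual output.
CONTAINER="$1"

# Query the container state; suppress the error if the container is missing.
STATUS=$(docker inspect -f '{{.State.Status}}' "$CONTAINER" 2>/dev/null)

if [ -z "$STATUS" ]; then
  echo '{"status":"error","message":"container not found"}'
else
  echo "{\"status\":\"success\",\"state\":\"$STATUS\"}"
fi
```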
by PUQcloud
Setting up n8n workflow

## Overview

The Docker n8n WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

**Install the required workflow on n8n.** You have two options:

- **Option 1: Use the latest version from the n8n marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- **Option 2: Manual installation.** Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

## n8n Workflow API Backend Setup for WHMCS/WISECP

**Configure API Webhook and SSH Access**
- Create a Basic Auth credential for the Webhook API block in n8n (a sample test call appears at the end of this description).
- Create an SSH credential for accessing a server with Docker installed.

**Modify Template Parameters**
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

## Deploy-docker-compose

In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which will be generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

## nginx

In the nginx element, you can modify the configuration parameters of the web interface proxy server:
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

## Bash Scripts

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
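To verify that the webhook API and the Basic Auth credential are wired up correctly, a test call can be sent from the command line. This is a minimal sketch under assumptions: the webhook path and the "command" payload are placeholders, since the real commands are sent by the WHMCS/WISECP module itself.

```bash
# Minimal sketch: test the Webhook API block with Basic Auth.
# The path and the "command" payload are illustrative placeholders.
curl -X POST "https://YOUR_N8N_SERVER/webhook/docker-n8n" \
  -u "api_user:api_password" \
  -H "Content-Type: application/json" \
  -d '{"command": "test_connection"}'
```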
by Jez
## Summary

This n8n workflow implements an AI-powered agent that intelligently uses the Brave Search API (via an external MCP service like Smithery) to perform both web and local searches. It understands natural language queries, selects the appropriate search tool, and exposes this enhanced capability as a single, callable MCP tool.

## Key Features

- 🤖 **Intelligent Tool Selection**: The AI agent decides between Brave's web search and local search tools based on the user's query context.
- 🌐 **MCP Microservice**: Exposes complex search logic as a single, easy-to-integrate MCP tool (call_brave_search_agent).
- 🧠 **Powered by Google Gemini**: Utilizes the gemini-2.5-flash-preview-05-20 LLM for advanced reasoning.
- 🗣️ **Conversational Memory**: Remembers context within a single execution flow.
- 📝 **Customizable System Prompt**: Tailor the AI's behavior and responses.
- 🧩 **Modular Design**: Connects to external Brave Search MCP tools (e.g., from Smithery).

## Benefits

- 🔌 **Simplified Integration**: Easily add advanced, AI-driven search capabilities to other applications or agent systems.
- 💸 **Reduced Client-Side LLM Costs**: Offloads complex prompting and tool orchestration to n8n, minimizing token usage for client-side LLMs.
- 🔧 **Centralized Logic**: Manage and update search strategies and AI behavior in one place.
- 🚀 **Extensible**: Can be adapted to use other search tools or incorporate more complex decision-making.

## Nodes Used

- @n8n/n8n-nodes-langchain.mcpTrigger (MCP Server Trigger)
- @n8n/n8n-nodes-langchain.toolWorkflow
- @n8n/n8n-nodes-langchain.agent (AI Agent)
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini (Google Gemini Chat Model)
- n8n-nodes-mcp.mcpClientTool (MCP Client Tool – for Brave Search)
- @n8n/n8n-nodes-langchain.memoryBufferWindow (Simple Memory)
- n8n-nodes-base.executeWorkflowTrigger (Workflow Start – for direct execution/testing)

## Prerequisites

- An active n8n instance (v1.22.5+ recommended).
- A Google AI API key for using the Gemini LLM.
- Access to an external MCP service that provides Brave Search tools (e.g., a Smithery account configured with their Brave Search MCP). This includes the MCP endpoint URL and any necessary authentication (such as an API key for Smithery).

## Setup Instructions

1. **Import Workflow**: Download the Brave_Search_Smithery_AI_Agent_MCP_Server.json file and import it into your n8n instance.
2. **Configure LLM Credential**: Locate the 'Google Gemini Chat Model' node. Select or create an n8n credential for "Google Palm API" (used for Gemini), providing your Google AI API key.
3. **Configure Brave Search MCP Credential**: Locate the 'brave_web_search' and 'brave_local_search' (MCP Client) nodes. Create a new n8n credential of type "MCP Client HTTP API":
   - Name: e.g., Smithery Brave Search Access
   - Base URL: the URL of your Brave Search MCP endpoint from your provider (e.g., https://server.smithery.ai/@YOUR_PROFILE/brave-search/mcp)
   - Authentication: if your MCP provider requires an API key, select "Header Auth" and add a header with the name (e.g., X-API-Key) and value provided by your MCP service.
   Assign this newly created credential to both the 'brave_web_search' and 'brave_local_search' nodes.
4. **Note MCP Trigger Path**: Open the 'Brave Search MCP Server Trigger' node and copy its unique 'Path' (e.g., /cc8cc827-3e72-4029-8a9d-76519d1c136d). Combine this with your n8n instance's base URL to get the full endpoint URL for clients.

## How to Use

This workflow exposes an MCP tool named call_brave_search_agent. External clients can call this tool via the URL derived from the 'Brave Search MCP Server Trigger'.
Example client MCP configuration (e.g., for Roo Code):

```json
"n8n-brave-search-agent": {
  "url": "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d/sse",
  "alwaysAllow": [
    "call_brave_search_agent"
  ]
}
```

Replace YOUR_N8N_INSTANCE with your n8n instance's public URL and ensure the path matches your trigger node.

Example request: send a POST request to the trigger URL with a JSON body:

```json
{
  "input": {
    "query": "best coffee shops in London"
  }
}
```

The agent will stream its response, including the summarized search results.

## Customization

- **AI Behavior**: Modify the System Prompt within the **'Brave Search AI Agent'** node to fine-tune its decision-making, response style, or how it uses the search tools.
- **LLM Choice**: Replace the **'Google Gemini Chat Model'** node with any other compatible LLM node supported by n8n.
- **Search Tools**: Adapt the workflow to use different or additional search tools by modifying the MCP Client nodes and updating the AI Agent's system prompt and tool definitions.

## Further Information

GitHub Repository: https://github.com/jezweb/n8n

The workflow includes extensive sticky notes for in-canvas documentation.

## Author

Jeremy Dawes (Jezweb)
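For a quick smoke test, the example request above could be sent with curl. This is only a rough sketch: depending on how the MCP transport (SSE) handshake is handled, a plain curl call may not complete the full exchange, so treat it as a way to confirm the endpoint is reachable rather than as a client implementation; the host and path are placeholders.

```bash
# Rough reachability check for the MCP trigger endpoint. Replace the host and
# path with your own n8n URL and trigger path; a real client should speak the
# MCP protocol rather than a single raw POST.
curl -X POST "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d" \
  -H "Content-Type: application/json" \
  -d '{"input": {"query": "best coffee shops in London"}}'
```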
by Markhah
## Overview

This workflow generates automated revenue and expense comparison reports from a structured Google Sheet. It lets users compare financial data across the current period, last month, and last year, then uses an AI agent to analyze and summarize the results for business reporting.

## 1. Prerequisites

- A connected Google Sheets OAuth2 credential.
- A valid DeepSeek AI API key (or another chat model of your choice).
- A sub-workflow (child workflow) that handles the processing logic.
- Properly structured Google Sheets data (see below).

## 2. Required Google Sheet Structure

- Column headers must include at least: Date, Amount, Type.
- Dates must be in dd/MM/yyyy or dd-MM-yyyy format.
- Entries should span multiple time periods (e.g., current month, last month, last year).
- See the sample rows at the end of this description.

## 3. Setup Steps

1. Import the workflow into your n8n instance.
2. Connect your Google Sheets and DeepSeek API credentials.
3. Update:
   - The Sheet ID and tab name (already embedded in the node "Get revenual from google sheet").
   - The custom sub-workflow ID (in the "Call n8n Workflow Tool" node).
4. Optionally configure the chatbot webhook in the "When chat message received" node.

## 4. What the Workflow Does

- Accepts date inputs via the AI chat interface (ChatTrigger + AI Agent).
- Fetches raw transaction data from Google Sheets.
- Segments and pivots revenue by classification for:
  - the current period
  - last month
  - last year
- Aggregates totals and applies custom titles for comparison.
- Merges all summaries into a final unified JSON report.

## 5. Customization Options

- Replace DeepSeek with OpenAI or other LLMs.
- Change the date fields or cycle comparisons (e.g., quarterly, weekly).
- Add more AI analysis steps such as sentiment scoring or forecasting.
- Modify the pivot logic to suit specific KPI tags or labels.

## 6. Troubleshooting Tips

- If the Google Sheets fetch fails: ensure the document is shared with your n8n Google credential.
- If you see parsing errors: verify that all dates follow the expected format.
- The sub-workflow must be active and configured to accept the correct inputs (6 dates).

## 7. SEO Keywords

google sheets report, AI financial report, compare revenue by month, expense analysis automation, chatbot n8n report generator, n8n Google Sheet integration
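For reference, rows in the source sheet might look like the sample below. The Date, Amount, and Type column names come from the structure described above; the values and the "Revenue"/"Expense" labels are illustrative assumptions, so adapt them to whatever classifications your sheet actually uses.

```text
Date        | Amount   | Type
05/01/2025  | 1500.00  | Revenue
12/01/2025  | 320.50   | Expense
05/12/2024  | 1400.00  | Revenue
05/01/2024  | 1150.00  | Revenue
```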
by Nabin Bhandari
This n8n template uses AI to automatically classify incoming Gmail messages into five categories and route them to the right people or departments. It can also reply automatically and send WhatsApp alerts for urgent or relevant messages. This helps ensure high-priority emails never get missed, while other messages are handled efficiently.

## How It Works

1. **Trigger**: A new email in Gmail triggers the workflow.
2. **Classification (OpenAI GPT)**: The email is analyzed by an OpenAI GPT model and classified into one of:
   - High Priority
   - Customer Support
   - Promotion
   - Finance/Billing
   - Random/Other
3. **Conditional Logic & Actions**:
   - High Priority → create a draft reply + send a WhatsApp alert.
   - Customer Support → auto-reply + send a WhatsApp confirmation alert.
   - Promotion → summarize the email + send a WhatsApp promotional alert.
   - Finance/Billing → forward to the finance team + send a WhatsApp finance alert.
   - Random/Other → label and log only.
4. **Multi-Channel Output**: Responses are sent via Gmail; alerts are sent via WhatsApp (or another compatible API).

## Setup Instructions

**Step 1: Gmail Authorization**
- Add a Gmail node in n8n.
- Connect using OAuth2 and grant read/send permissions.

**Step 2: OpenAI API Key**
- Get your API key from OpenAI.
- Add it to n8n credentials for the OpenAI node.

**Step 3: WhatsApp Integration**
- Use your WhatsApp Business API or a provider like Twilio or 360Dialog (a sample provider call is sketched at the end of this description).
- Replace placeholders with your details:
  - [YOUR_WHATSAPP_NUMBER]
  - [YOUR_FINANCE_TEAM_NUMBER]
  - [YOUR_SUPPORT_TEAM_NUMBER]

**Step 4: Import & Run**
- Import the workflow JSON into n8n.
- Adjust prompts, labels, and routing logic as needed.
- Execute and monitor results.

## Good to Know

- Fully customizable: add or remove categories, adjust responses, and change alert channels.
- Can be integrated with Slack, Discord, Trello, Notion, Jira, or CRM systems.
- Scales easily across teams and departments.

## Requirements

- Gmail account with OAuth2 credentials set up in n8n
- OpenAI API key for classification and content generation
- WhatsApp (or other messaging service) integration
- Optional: Slack, Notion, CRM, or accounting tool integrations

## Customization Ideas

- Create support tickets in Trello, Notion, or Jira from Customer Support emails.
- Sync Finance emails with QuickBooks, Stripe, or Google Sheets.
- Replace WhatsApp alerts with Slack or Discord messages.
- Use Zapier/Make for cross-platform automations.
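If you use Twilio as the WhatsApp provider mentioned in Step 3, the alert the workflow sends corresponds roughly to a call like the one below. This is a minimal sketch, not the template's exact node configuration: the sender number shown is Twilio's sandbox number and the other values are placeholders.

```bash
# Minimal sketch of a WhatsApp alert via Twilio's Messages API (one provider
# option among those mentioned above). SID, token, and numbers are placeholders.
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Messages.json" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=whatsapp:+14155238886" \
  --data-urlencode "To=whatsapp:[YOUR_WHATSAPP_NUMBER]" \
  --data-urlencode "Body=High priority email received: <subject line here>"
```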
by PUQcloud
Setting up n8n workflow

## Overview

The Docker Grafana WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

**Install the required workflow on n8n.** You have two options:

- **Option 1: Use the latest version from the n8n marketplace.** The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- **Option 2: Manual installation.** Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

## n8n Workflow API Backend Setup for WHMCS/WISECP

**Configure API Webhook and SSH Access**
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

**Modify Template Parameters**
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:
- screen_left
- screen_right

## Deploy-docker-compose

In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which will be generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

## nginx

In the nginx element, you can modify the configuration parameters of the web interface proxy server:
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

## Bash Scripts

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
by Angel Menendez
n8n Workflow: Automate SIEM Alert Enrichment with MITRE ATT&CK & Qdrant

## Who is this for?

This workflow is ideal for:
- **Cybersecurity teams & SOC analysts** who want to automate **SIEM alert enrichment**.
- **IT security professionals** looking to integrate **MITRE ATT&CK intelligence** into their ticketing system.
- **Organizations using Zendesk for security incidents** that need enhanced **contextual threat data**.
- **Anyone using n8n and Qdrant** to build **AI-powered security workflows**.

## What problem does this workflow solve?

Security teams receive large volumes of raw SIEM alerts that lack actionable context. Investigating every alert manually is time-consuming and can lead to delayed response times. This workflow solves this problem by:
✔ Automatically enriching SIEM alerts with MITRE ATT&CK TTPs.
✔ Tagging and classifying alerts based on known attack techniques.
✔ Providing remediation steps to guide the response team.
✔ Enhancing security tickets in Zendesk with relevant threat intelligence.

## What this workflow does

1️⃣ Ingests SIEM alerts (via chatbot or a ticketing system like Zendesk).
2️⃣ Queries a Qdrant vector store containing MITRE ATT&CK techniques.
3️⃣ Extracts relevant TTPs (Tactics, Techniques, & Procedures) from the alert.
4️⃣ Generates remediation steps using AI-powered enrichment.
5️⃣ Updates Zendesk tickets with threat intelligence and recommended actions.
6️⃣ Provides structured alert data for further automation or reporting.

## Setup Guide

### Prerequisites

- **n8n instance** (Cloud or self-hosted).
- **Qdrant vector store** with MITRE ATT&CK data embedded.
- **OpenAI API key** (for AI-based threat processing).
- **Zendesk account** (for ticket enrichment, if applicable).
- Clean Mitre Data Python Script
- Cleaned Mitre Data
- Full Mitre Data

### Steps to Set Up

1️⃣ **Embed MITRE ATT&CK data into Qdrant**
- This workflow pulls MITRE ATT&CK data from Google Drive and loads it into Qdrant.
- The data is vectorized using OpenAI embeddings for fast retrieval (an illustrative query is sketched at the end of this description).

2️⃣ **Deploy the n8n chatbot**
- The chatbot listens for SIEM alerts and sends them to the AI processing pipeline.
- Alerts are analyzed using an AI agent trained on MITRE ATT&CK.

3️⃣ **Enrich Zendesk tickets**
- The workflow extracts MITRE ATT&CK techniques from alerts.
- It updates Zendesk tickets with contextual threat intelligence.
- The remediation steps are included as internal notes for SOC teams.

## How to Customize This Workflow

🔧 **Modify the chatbot trigger**: Adapt the chatbot node to receive alerts from Slack, Microsoft Teams, or any other tool.
🔧 **Change the SIEM input source**: Connect your workflow to Splunk, Elastic SIEM, or Chronicle Security.
🔧 **Customize remediation steps**: Use a custom AI model to tailor remediation responses to organization-specific security policies.
🔧 **Extend ticketing integration**: Modify the Zendesk node to also work with Jira, ServiceNow, or another ITSM platform.

## Why This Workflow is Powerful

✅ **Saves time**: Automates alert triage and classification.
✅ **Improves security posture**: Helps SOC teams act faster on threats.
✅ **Leverages AI & vector search**: Uses LLM-powered enrichment for real-time context.
✅ **Works across platforms**: Supports n8n Cloud, self-hosted, and Qdrant.

🚀 **Get Started Now!**
📖 Watch the Setup Video
💬 Have questions? Join the discussion in the YouTube comments!
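To give a feel for the retrieval step, the following is a minimal sketch of the kind of similarity search the Qdrant stage performs. The collection name, host, and vector values are placeholders; in the workflow the alert text is first embedded with OpenAI (1,536-dimensional vectors), and the vector store node issues the query, not a manual curl.

```bash
# Illustrative Qdrant similarity search against a MITRE ATT&CK collection.
# The vector is truncated for readability; a real OpenAI embedding has 1,536 dimensions.
curl -X POST "http://localhost:6333/collections/mitre_attack/points/search" \
  -H "Content-Type: application/json" \
  -d '{
        "vector": [0.12, -0.03, 0.48],
        "limit": 3,
        "with_payload": true
      }'
```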
by Raphael De Carvalho Florencio
## What this template does

Transforms provider documentation (URLs) into an auditable, enforceable multicloud security control baseline. It:
- Fetches and sanitizes HTML
- Uses AI to extract security requirements (strict 3-line TXT blocks)
- Composes **enforceable controls** (strict 7-line TXT blocks with true-equivalence consolidation)
- Builds the final baseline (TXT or JSON, see **Outputs**) with a Technology: header
- Returns a downloadable artifact via webhook and can append/create the file in Google Drive

## Why it's useful

Eliminates manual copy-paste and produces a consistent, portable baseline ready for review, audit, or enforcement tooling, ideal for rapidly generating or refreshing baselines across cloud providers and services.

## Multicloud support

The workflow is multicloud by design. Provide the target cloud in the request and run the same pipeline for:
- **AWS**, **Azure**, **GCP** (out of the box)
- Extensible to other providers/services by adjusting prompts and routing logic

## How it works (high level)

1. POST /create (Basic Auth) with { cloudProvider, technology, urls[] }
2. Input validation → generate uuid → resolve Google Drive folder (search-or-create)
3. Download & sanitize each URL
4. AI pipeline: Extractor → Composer → Baseline Builder → (optional) Baseline Auditor
5. Append/create file in Drive and return a downloadable artifact (TXT/JSON) via webhook

## Request (webhook)

- Method: POST
- URL: https://<your-n8n>/webhook/create
- Auth: Basic Auth
- Headers: Content-Type: application/json

Example input (Postman/CLI):

```json
{
  "cloudProvider": "aws",
  "technology": "Amazon S3",
  "urls": [
    "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html",
    "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/S3/",
    "https://repost.aws/knowledge-center/secure-s3-resources"
  ]
}
```

## Field reference

- cloudProvider (string, required) – case-insensitive. Supported: aws, azure, gcp.
- technology (string, required) – e.g., "Amazon S3", "Azure Storage", "Google Cloud Storage".
- urls (string[], required) – 1–20 http(s) URLs (official/reputable docs).

Optional (Google Drive destination):
- gdriveTargetId (string) – Google Drive folderId used for append/create.
- gdrivePath (string) – path like "DefySec/Baselines" (folders are created if missing).
- gdriveTargetName (string) – folder name to find/create under root.

Optional (assistant overrides):
- assistantExtractorId, assistantComposerId, assistantBaselineId, assistantAuditorId (strings)

Resolution precedence:
- Drive: gdriveTargetId → gdrivePath → gdriveTargetName → default folder.
- Assistants: explicit IDs above → dynamic resolution by name (expects 1_DefySec_Extractor, 2_DefySec_Control_Composer, 3_DefySec Baseline Builder, 4_DefySec_Baseline_Auditor).

## Validation

- Rejects empty urls or non-http(s) schemes; normalizes cloudProvider to aws|azure|gcp.
- Sanitizes fetched HTML (removes scripts/styles/headers) before AI steps.

## Outputs

- **Primary**: downloadable **TXT** file controls_<technology>_<timestamp>.txt (via webhook).
- **Composer outcomes**: if there are no groups to consolidate → NO_CONTROLS_TO_BE_CONSOLIDATED; if nothing valid remains → NO_CONTROLS_FOUND.
- **JSON path**: when the Builder stage is configured for **JSON-only** output (strict schema), the workflow returns a .json artifact and the Auditor validates it (see next section).
## Techniques used (from the built-in assistants)

- **Provider-aware extraction with a strict TXT contract (3 lines)**: The Extractor limits itself to the declared provider/technology, outputs only Description/Reference/SecurityObjective, and applies a **reflexive quality check** before emitting (see the illustrative block at the end of this description).
- **Normalization & strict header parsing**: The Composer normalizes whitespace/fences, requires the CloudProvider/Technology header, and ignores anything outside the exact 3-line block shape.
- **True-equivalence grouping & consolidation**: The Composer groups **only** when intent, enforcement locus/mechanism, scope, and mode/setting all match; otherwise items remain distinct.
- **7-line enforceable control format**: The Composer renders each (consolidated or unique) control in **exactly seven labeled lines** to keep results auditable and automatable.
- **Builder with JSON-only schema & technology inference**: The Builder parses 7-line blocks, infers the technology, consolidates true equivalents again if needed, and returns **pure JSON** matching a canonical schema (with counters in meta).
- **Self-evaluation loop (Auditor)**: The Auditor **unwraps transport**, validates **schema & content**, checks provider terminology/scope/automation, and returns either GOOD_ENOUGH or a JSON instruction set for the Builder to fix and re-emit, enabling reflective improvement.
- **Reference prioritization**: Across stages, official provider documentation is preferred in References (AWS/Azure/GCP).

## Customization & extensions

- **Prompt-reflective techniques**: Keep (or extend) the Auditor loop to add more review passes and quality gates.
- **Compliance assistants**: Add assistants to analyze/label controls for **HIPAA, PCI DSS, SOX** (and others), emitting mappings, gaps, and remediation notes.
- **Implementation context**: Feed internal implementation docs, runbooks, or **Architecture Decision Records (ADRs)**; use these as **grounding** to generate or refine controls (works with local/self-hosted LLMs, too).
- **Local/self-hosted LLMs**: Swap OpenAI nodes for your on-prem LLM endpoint while keeping the pipeline.
- **Provider-specific outputs**: Extend the final stage to export Policy-as-Code or IaC snippets (Rego/Sentinel, CloudFormation Guard, Bicep/ARM, Terraform validations).

## Assistant configuration & prompts

Full assistant configurations and prompts (Extractor, Composer, Baseline Builder, Baseline Auditor) are available here:
https://github.com/followdrabbit/n8nlabs/tree/main/Lab03%20-%20Multicloud%20AI%20Security%20Control%20Baseline%20Builder/Assistants

## Security & privacy

- No hardcoded secrets in HTTP nodes; use n8n's Credential Manager.
- Drive operations are optional and folder-scoped.
- For sensitive environments, switch to a local LLM and provide only sanitized/approved inputs.

## Quick test (curl)

```bash
curl -X POST "https://<your-n8n>/webhook/create" \
  -u "<user>:<pass>" \
  -H "Content-Type: application/json" \
  -d '{
    "cloudProvider":"aws",
    "technology":"Amazon S3",
    "urls":[
      "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html"
    ]
  }' \
  -OJ
```
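For orientation only, an extracted requirement in the 3-line shape described above might look like the block below. The three field names (Description, Reference, SecurityObjective) and the CloudProvider/Technology header are taken from this description; the exact syntax and wording rules are defined by the assistant prompts in the linked GitHub repository, and the values here are invented.

```text
CloudProvider: AWS
Technology: Amazon S3

Description: Require server-side encryption for all objects stored in the bucket.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html
SecurityObjective: Protect data at rest from unauthorized disclosure.
```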
by SIENNA
Automated AWS S3 / Azure / Google to local MinIO Object Backup with Scheduling

## What does this workflow do?

This workflow performs automated, periodic backups of objects from an AWS S3 bucket, an Azure container, or a Google Cloud Storage bucket to a MinIO S3 bucket running locally or on a dedicated container/VM/server. It also works if the MinIO bucket is running on a remote cloud provider's infrastructure; you just need to change the URL and keys.

## Who is this intended for?

Storage administrators, cloud architects, or DevOps engineers who need a simple and scalable solution for retrieving data from AWS, Azure, or GCP.

## How it works

This workflow uses the official AWS S3 API (or the Azure Blob API) to list and download objects from a specific bucket, then sends them to MinIO using MinIO's implementation of the S3 API. A command-line equivalent is sketched below.

## Requirements

None, beyond a source bucket on your cloud storage provider and a destination bucket on MinIO. You'll also need MinIO running.

Using Proxmox VE? Create a MinIO LXC container: https://community-scripts.github.io/ProxmoxVE/scripts?id=minio

## Need a backup from another cloud storage provider?

Check out our templates: we've covered AWS, Azure, and GCP, and we even have a version for FTP/SFTP servers! For a dedicated source cloud storage provider, please contact us.

These workflows can be integrated into bigger ones and modified to best suit your needs. You can, for example, replace the MinIO node with another S3 bucket from another cloud storage provider (Backblaze, Wasabi, Scaleway, OVH, ...).
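As a sanity check for credentials and connectivity outside n8n, the same copy can be reproduced with the MinIO client (mc). This is a minimal sketch under assumptions: alias names, endpoints, and bucket names are placeholders, and the workflow itself uses the S3/Blob nodes rather than mc.

```bash
# Register the source (AWS S3) and destination (local MinIO) endpoints,
# then mirror one bucket into the other. All names are placeholders.
mc alias set aws https://s3.amazonaws.com "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY"
mc alias set backup http://localhost:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
mc mirror aws/source-bucket backup/destination-bucket
```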
by Trung Tran
Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below:

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

## Who's it for

This workflow is ideal for:
- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring **AI-driven workflows** integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate **image content pipelines** with AWS S3 storage.

## How it works / What it does

1. **Trigger**: The workflow starts manually when you click "Execute Workflow".
2. **Edit Fields**: You can provide input fields such as image description, resolution, or naming convention.
3. **Create AWS S3 Bucket**: Automatically creates a new S3 bucket if it doesn't exist.
4. **Create a Folder**: Inside the bucket, a folder is created to organize generated images.
5. **Prompt Generation Agent**: An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. **Generate an Image**: The refined prompt is used to generate an image using AI.
7. **Upload File to S3**: The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI and cloud storage seamlessly in an automated pipeline (a CLI equivalent of the S3 steps is sketched at the end of this description).

## How to set up

1. Import the workflow into n8n.
2. Configure the following credentials:
   - AWS S3 (Access Key, Secret Key, Region).
   - OpenAI API Key (for Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

## Requirements

- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

## How to customize the workflow

- **Custom buckets**: Replace the auto-create step with an existing S3 bucket.
- **Image variations**: Generate multiple image variations per prompt by looping the image generation step.
- **File naming**: Adjust file naming conventions (e.g., timestamp, user input).
- **Metadata**: Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative storage**: Swap AWS S3 with **Google Cloud Storage, Azure Blob, or Dropbox**.
- **Trigger options**: Replace the manual trigger with a **Webhook, Form Submission, or Scheduler** for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
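For readers who want to verify their AWS credentials and permissions before running the workflow, the bucket, folder, and upload steps correspond roughly to the AWS CLI commands below. This is a minimal sketch: bucket name, region, prefix, and file names are placeholders, and the n8n nodes perform these operations for you.

```bash
# Create the bucket (names must be globally unique).
aws s3 mb s3://my-ai-image-bucket --region us-east-1

# Create the "folder" by writing a zero-byte object with a trailing-slash key.
aws s3api put-object --bucket my-ai-image-bucket --key generated-images/

# Upload a generated image into that folder.
aws s3 cp ./skyline.png s3://my-ai-image-bucket/generated-images/skyline.png
```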
by Sona Labs
Automatically enrich company records with comprehensive firmographic data by pulling domains from Google Sheets, setting up custom HubSpot fields, enriching through the Sona API, and syncing complete profiles to HubSpot CRM with custom property mapping.

Import company domains from a Google Sheet, configure custom HubSpot fields for Sona data, automatically enrich domains with detailed firmographic intelligence, and create fully populated company records in HubSpot, so you can build rich prospect databases without manual research.

## How it works

**Step 1: Get Company List**
- Reads company domains from your Google Sheet
- Aggregates all domains into a single array
- Prepares data for batch processing

**Step 2: Setup HubSpot Fields**
- Creates custom Sona fields in HubSpot CRM
- Defines all enrichment data fields needed
- Ensures proper field mapping for incoming data

**Step 3: Prepare for Processing**
- Converts aggregated domains into individual items
- Sets data for batch loop processing
- Readies each company for enrichment

**Step 4: Enrich & Sync to HubSpot**
- Loops through each company domain
- Calls the Sona API for enrichment data
- Creates the company in HubSpot with standard fields
- Formats and updates custom Sona properties
- Combines firmographics + tech data in one profile
- Includes a 2-second wait between operations for rate limiting

## What you'll get

The workflow enriches each company record with:
- **Firmographic data**: Company size, employee count, revenue estimates, headquarters location, and founding year
- **Contact information**: Phone numbers, social media profiles, and timezone details
- **Business intelligence**: Company descriptions and industry positioning
- **Custom HubSpot properties**: All Sona data mapped to dedicated custom fields
- **Organized CRM records**: All data automatically synced to HubSpot for immediate use
- **Domain tracking**: Companies linked to their websites for future reference

## Why use this

- **Eliminate manual research**: Save 10–15 minutes per company by automating firmographic lookups
- **Build rich databases**: Transform basic domain lists into comprehensive company profiles
- **Custom field management**: Automatically creates and populates HubSpot custom properties
- **Improve targeting**: Segment and prioritize accounts based on size, location, and other firmographics
- **Keep data current**: Run scheduled enrichments to maintain up-to-date company information
- **Scale your prospecting**: Process hundreds of companies in minutes instead of days
- **Better lead qualification**: Make informed decisions with complete company intelligence
- **Streamlined workflow**: One-click enrichment from spreadsheet to CRM with custom field setup

## Setup instructions

Before you start, you'll need:
- A Google Sheet with a column named website_Domain containing company domains (e.g., example.com)
- A HubSpot account & app token. Get an app token by creating a legacy app:
  1. Go to HubSpot Settings → Integrations → Legacy Apps
  2. Click Create Legacy App
  3. Select Private (for one account)
  4. In the scopes section, enable the following permissions:
     - crm.schemas.companies.write
     - crm.objects.companies.write
     - crm.schemas.companies.read
     - crm.objects.companies.read
  5. Click Create
  6. Copy the access token from the Auth tab
- A Sona API Key (for company enrichment):
  - Sign up at https://app.sonalabs.com
  - A free tier is available for testing

Configuration steps:
1. **Prepare your data**: Create a Google Sheet with a "website_Domain" column and add 2–3 test companies (e.g., example.com, anthropic.com)
2. **Connect Google Sheets**: In the "Get Company List from Sheet" node, authenticate with Google and select your spreadsheet and sheet name
3. **Configure HubSpot field creation**: In the "Create Custom HubSpot Fields" node (Step 2), authenticate with your HubSpot access token and review the custom Sona fields that will be created
4. **Add Sona credentials**: In the "Sona Enrich" node, authenticate with your Sona API key
5. **Connect HubSpot for company creation**: In the "Create HubSpot Company" and "Update Company with AI Data" nodes, authenticate using your HubSpot access token (a sample API call is sketched at the end of this description)
6. **Test with sample data**: Run the workflow with 2–3 test companies and verify:
   - Custom fields are created in HubSpot
   - Company records appear correctly in HubSpot
   - All firmographic data is populated in custom properties
7. **Add error handling**: Configure notifications for failed enrichments or API errors (optional but recommended)
8. **Scale and automate**: Process your full company list, then optionally add a Schedule Trigger for automatic daily or weekly enrichment to keep your CRM data fresh
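To verify the app token and scopes outside n8n, a company record can be created directly against HubSpot's CRM v3 API. This is a minimal sketch: the property values are placeholders, only standard properties (name, domain) are shown, and the workflow would additionally populate the custom Sona properties it creates in Step 2.

```bash
# Create a bare company record in HubSpot to confirm the token and scopes work.
curl -X POST "https://api.hubapi.com/crm/v3/objects/companies" \
  -H "Authorization: Bearer $HUBSPOT_APP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "properties": {
          "name": "Example Inc",
          "domain": "example.com"
        }
      }'
```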