by PUQcloud
## Overview

The Docker InfluxDB WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

## Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

## Installation Steps

### Install the Required Workflow on n8n

You have two options:

**Option 1: Use the Latest Version from the n8n Marketplace**
The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile: PUQcloud on n8n

**Option 2: Manual Installation**
Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

## n8n Workflow API Backend Setup for WHMCS/WISECP

### 1. Configure API Webhook and SSH Access

- Create a Basic Auth Credential for the Webhook API block in n8n.
- Create an SSH Credential for accessing a server with Docker installed.

### 2. Modify Template Parameters

In the Parameters block of the template, update the following settings:

- `server_domain` – Must match the domain of the WHMCS/WISECP Docker server.
- `clients_dir` – Directory where user data related to Docker and disks will be stored.
- `mount_dir` – Default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:

- `screen_left`
- `screen_right`

### Deploy-docker-compose

In the Deploy-docker-compose element, you can modify the Docker Compose configuration. This is generated in the following scenarios:

- When the service is created
- When the service is unlocked
- When the service is updated

### nginx

In the nginx element, you can modify configuration parameters of the web interface proxy server.

- The `main` section allows you to add custom parameters to the `server` block in the proxy server configuration file.
- The `main_location` section contains settings that will be added to the `location /` block of the configuration. Here, you can define custom headers and parameters.

### Bash Scripts

Management of Docker containers and related procedures is done by executing Bash scripts generated in n8n. These scripts return either JSON or plain strings. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
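For illustration, the snippet below is a minimal sketch of the kind of script those SSH-connected elements run: a Bash check that inspects a container and returns a JSON result. The container name and JSON fields are hypothetical and are not the module's actual contract.

```bash
#!/bin/bash
# Minimal sketch only: container name and JSON fields are illustrative,
# not the script shipped with the module's workflow template.
CONTAINER="influxdb_client1"   # hypothetical container name

STATUS=$(docker inspect --format '{{.State.Status}}' "$CONTAINER" 2>/dev/null)

if [ -z "$STATUS" ]; then
  # Container does not exist on this Docker host.
  echo '{"status":"error","message":"container not found"}'
else
  STARTED=$(docker inspect --format '{{.State.StartedAt}}' "$CONTAINER")
  echo "{\"status\":\"success\",\"state\":\"$STATUS\",\"started_at\":\"$STARTED\"}"
fi
```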
by PUQcloud
## Setting up n8n workflow

### Overview

The Docker n8n WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

### Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

### Installation Steps

Install the Required Workflow on n8n. You have two options:

**Option 1: Use the Latest Version from the n8n Marketplace**
The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n

**Option 2: Manual Installation**
Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

### n8n Workflow API Backend Setup for WHMCS/WISECP

**Configure API Webhook and SSH Access**

- Create a Basic Auth Credential for the Webhook API Block in n8n.
- Create an SSH Credential for accessing a server with Docker installed.

**Modify Template Parameters**

In the Parameters block of the template, update the following settings:

- `server_domain` – Must match the domain of the WHMCS/WISECP Docker server.
- `clients_dir` – Directory where user data related to Docker and disks will be stored.
- `mount_dir` – Default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:

- `screen_left`
- `screen_right`

### Deploy-docker-compose

In the Deploy-docker-compose element, you have the ability to modify the Docker Compose configuration, which will be generated in the following scenarios:

- When the service is created
- When the service is unlocked
- When the service is updated

### nginx

In the nginx element, you can modify the configuration parameters of the web interface proxy server.

- The `main` section allows you to add custom parameters to the `server` block in the proxy server configuration file.
- The `main_location` section contains settings that will be added to the `location /` block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

### Bash Scripts

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
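As an illustration of how the module side talks to this workflow, the hedged curl sketch below sends a command to the webhook using the Basic Auth credential. The webhook path and the payload fields are assumptions for demonstration only; the module and the imported template define the actual endpoint and command format.

```bash
# Hypothetical example: the webhook path and JSON fields are assumptions,
# not the module's documented API contract.
curl -X POST "https://your-n8n-server/webhook/docker-n8n" \
  -u "webhook_user:webhook_password" \
  -H "Content-Type: application/json" \
  -d '{"command": "create", "domain": "client1.example.com"}'
```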
by Jez
## Summary

This n8n workflow implements an AI-powered agent that intelligently uses the Brave Search API (via an external MCP service like Smithery) to perform both web and local searches. It understands natural language queries, selects the appropriate search tool, and exposes this enhanced capability as a single, callable MCP tool.

## Key Features

- 🤖 **Intelligent Tool Selection:** AI agent decides between Brave's web search and local search tools based on user query context.
- 🌐 **MCP Microservice:** Exposes complex search logic as a single, easy-to-integrate MCP tool (`call_brave_search_agent`).
- 🧠 **Powered by Google Gemini:** Utilizes the gemini-2.5-flash-preview-05-20 LLM for advanced reasoning.
- 🗣️ **Conversational Memory:** Remembers context within a single execution flow.
- 📝 **Customizable System Prompt:** Tailor the AI's behavior and responses.
- 🧩 **Modular Design:** Connects to external Brave Search MCP tools (e.g., from Smithery).

## Benefits

- 🔌 **Simplified Integration:** Easily add advanced, AI-driven search capabilities to other applications or agent systems.
- 💸 **Reduced Client-Side LLM Costs:** Offloads complex prompting and tool orchestration to n8n, minimizing token usage for client-side LLMs.
- 🔧 **Centralized Logic:** Manage and update search strategies and AI behavior in one place.
- 🚀 **Extensible:** Can be adapted to use other search tools or incorporate more complex decision-making.

## Nodes Used

- @n8n/n8n-nodes-langchain.mcpTrigger (MCP Server Trigger)
- @n8n/n8n-nodes-langchain.toolWorkflow
- @n8n/n8n-nodes-langchain.agent (AI Agent)
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini (Google Gemini Chat Model)
- n8n-nodes-mcp.mcpClientTool (MCP Client Tool - for Brave Search)
- @n8n/n8n-nodes-langchain.memoryBufferWindow (Simple Memory)
- n8n-nodes-base.executeWorkflowTrigger (Workflow Start - for direct execution/testing)

## Prerequisites

- An active n8n instance (v1.22.5+ recommended).
- A Google AI API key for using the Gemini LLM.
- Access to an external MCP service that provides Brave Search tools (e.g., a Smithery account configured with their Brave Search MCP). This includes the MCP endpoint URL and any necessary authentication (like an API key for Smithery).

## Setup Instructions

1. **Import Workflow:** Download the Brave_Search_Smithery_AI_Agent_MCP_Server.json file and import it into your n8n instance.
2. **Configure LLM Credential:** Locate the 'Google Gemini Chat Model' node. Select or create an n8n credential for "Google Palm API" (used for Gemini), providing your Google AI API key.
3. **Configure Brave Search MCP Credential:** Locate the 'brave_web_search' and 'brave_local_search' (MCP Client) nodes. Create a new n8n credential of type "MCP Client HTTP API".
   - Name: e.g., Smithery Brave Search Access
   - Base URL: Enter the URL of your Brave Search MCP endpoint from your provider (e.g., https://server.smithery.ai/@YOUR_PROFILE/brave-search/mcp).
   - Authentication: If your MCP provider requires an API key, select "Header Auth". Add a header with the name (e.g., X-API-Key) and value provided by your MCP service.
   - Assign this newly created credential to both the 'brave_web_search' and 'brave_local_search' nodes.
4. **Note MCP Trigger Path:** Open the 'Brave Search MCP Server Trigger' node. Copy its unique 'Path' (e.g., /cc8cc827-3e72-4029-8a9d-76519d1c136d). You will combine this with your n8n instance's base URL to get the full endpoint URL for clients.

## How to Use

This workflow exposes an MCP tool named `call_brave_search_agent`. External clients can call this tool via the URL derived from the 'Brave Search MCP Server Trigger'.
**Example Client MCP Configuration (e.g., for Roo Code):**

```json
"n8n-brave-search-agent": {
  "url": "https://YOUR_N8N_INSTANCE/mcp/cc8cc827-3e72-4029-8a9d-76519d1c136d/sse",
  "alwaysAllow": [
    "call_brave_search_agent"
  ]
}
```

Replace YOUR_N8N_INSTANCE with your n8n's public URL and ensure the path matches your trigger node.

**Example Request:** Send a POST request to the trigger URL with a JSON body:

```json
{
  "input": {
    "query": "best coffee shops in London"
  }
}
```

The agent will stream its response, including the summarized search results.

## Customization

- **AI Behavior:** Modify the System Prompt within the 'Brave Search AI Agent' node to fine-tune its decision-making, response style, or how it uses the search tools.
- **LLM Choice:** Replace the 'Google Gemini Chat Model' node with any other compatible LLM node supported by n8n.
- **Search Tools:** Adapt the workflow to use different or additional search tools by modifying the MCP Client nodes and updating the AI Agent's system prompt and tool definitions.

## Further Information

- GitHub Repository: https://github.com/jezweb/n8n
- The workflow includes extensive sticky notes for in-canvas documentation.

## Author

Jeremy Dawes (Jezweb)
by NeurochainAI
This template provides a workflow to integrate a Telegram bot with NeurochainAI's inference capabilities, supporting both text processing and image generation. Follow these steps to get started:

> Purpose: Enables seamless integration between your Telegram bot and NeurochainAI for advanced AI-driven text and image tasks.

## Requirements

- Telegram Bot Token.
- NeurochainAI API Key.
- Sufficient credits to utilize NeurochainAI services.

## Features

- Text processing through NeurochainAI's inference engine.
- AI-powered image generation (Flux).
- Easy customization and scalability for your use case.

## Setup

1. Import the template into n8n.
2. Add your Telegram Bot Token and NeurochainAI API Key where prompted.
3. Follow the step-by-step instructions embedded in the template for configuration.

[NeurochainAI Website](https://www.neurochain.ai/)
NeurochainAI Guides
by Markhah
## Overview

This workflow generates automated revenue and expense comparison reports from a structured Google Sheet. It enables users to compare financial data across the current period, last month, and last year, then uses an AI agent to analyze and summarize the results for business reporting.

## 1. Prerequisites

- A connected Google Sheets OAuth2 credential.
- A valid DeepSeek AI API key (or replace it with another Chat Model).
- A sub-workflow (child workflow) that handles the processing logic.
- Properly structured Google Sheets data (see below).

## 2. Required Google Sheet Structure

- Column headers must include at least: Date, Amount, Type.
- The Date column must use the dd/MM/yyyy or dd-MM-yyyy format.
- Entries should span multiple time periods (e.g., current month, last month, last year).

## 3. Setup Steps

1. Import the workflow into your n8n instance.
2. Connect your Google Sheets and DeepSeek API credentials.
3. Update:
   - Sheet ID and Tab Name (already embedded in node: Get revenual from google sheet).
   - Custom sub-workflow ID (in the Call n8n Workflow Tool node).
4. Optionally configure the chatbot webhook in the When chat message received node.

## 4. What the Workflow Does

- Accepts date inputs via an AI chat interface (ChatTrigger + AI Agent).
- Fetches raw transaction data from Google Sheets.
- Segments and pivots revenue by classification for:
  - Current period
  - Last month
  - Last year
- Aggregates totals and applies custom titles for comparison.
- Merges all summaries into a final unified JSON report.

## 5. Customization Options

- Replace DeepSeek with OpenAI or other LLMs.
- Change the date fields or cycle comparisons (e.g., quarterly, weekly).
- Add more AI analysis steps such as sentiment scoring or forecasting.
- Modify the pivot logic to suit specific KPI tags or labels.

## 6. Troubleshooting Tips

- If the Google Sheets fetch fails: ensure the document is shared with your n8n Google credential.
- If parsing errors occur: verify that all dates follow the expected format.
- The sub-workflow must be active and configured to accept the correct inputs (6 dates).

## 7. SEO Keywords

google sheets report, AI financial report, compare revenue by month, expense analysis automation, chatbot n8n report generator, n8n Google Sheet integration
by Nabin Bhandari
This n8n template uses AI to automatically classify incoming Gmail messages into five categories and route them to the right people or departments. It can also reply automatically and send WhatsApp alerts for urgent or relevant messages. This helps ensure high-priority emails never get missed, while other messages are handled efficiently.

## How It Works

1. **Trigger** – A new email in Gmail triggers the workflow.
2. **Classification (OpenAI GPT)** – The email is analyzed by an OpenAI GPT model and classified into one of: High Priority, Customer Support, Promotion, Finance/Billing, Random/Other (a conceptual sketch of this call appears at the end of this description).
3. **Conditional Logic & Actions**
   - High Priority → Create draft reply + send WhatsApp alert.
   - Customer Support → Auto-reply + send WhatsApp confirmation alert.
   - Promotion → Summarize email + send WhatsApp promotional alert.
   - Finance/Billing → Forward to finance team + send WhatsApp finance alert.
   - Random/Other → Label and log only.
4. **Multi-Channel Output** – Responses are sent via Gmail. Alerts are sent via WhatsApp (or another compatible API).

## Setup Instructions

**Step 1: Gmail Authorization**
- Add a Gmail node in n8n.
- Connect using OAuth2 and grant read/send permissions.

**Step 2: OpenAI API Key**
- Get your API key from OpenAI.
- Add it to n8n credentials for the OpenAI node.

**Step 3: WhatsApp Integration**
- Use your WhatsApp Business API or a provider like Twilio or 360Dialog.
- Replace placeholders with your details: [YOUR_WHATSAPP_NUMBER], [YOUR_FINANCE_TEAM_NUMBER], [YOUR_SUPPORT_TEAM_NUMBER]

**Step 4: Import & Run**
- Import the workflow JSON into n8n.
- Adjust prompts, labels, and routing logic as needed.
- Execute and monitor results.

## Good to Know

- Fully customizable — add or remove categories, adjust responses, and change alert channels.
- Can be integrated with Slack, Discord, Trello, Notion, Jira, or CRM systems.
- Scales easily across teams and departments.

## Requirements

- Gmail account with OAuth2 credentials set up in n8n
- OpenAI API key for classification and content generation
- WhatsApp (or other messaging service) integration
- Optional: Slack, Notion, CRM, or accounting tool integrations

## Customization Ideas

- Create support tickets in Trello, Notion, or Jira from Customer Support emails.
- Sync Finance emails with QuickBooks, Stripe, or Google Sheets.
- Replace WhatsApp alerts with Slack or Discord messages.
- Use Zapier/Make for cross-platform automations.
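To illustrate what the classification step does conceptually, the hedged curl sketch below asks the OpenAI Chat Completions API to put an email into one of the five categories. The model name, prompt wording, and sample email are assumptions for demonstration; inside n8n the equivalent call is made by the OpenAI node with the workflow's own prompt.

```bash
# Conceptual sketch of the classification call; model and prompt are illustrative,
# not the exact configuration used in the workflow's OpenAI node.
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {"role": "system", "content": "Classify the email into exactly one of: High Priority, Customer Support, Promotion, Finance/Billing, Random/Other. Reply with the category name only."},
      {"role": "user", "content": "Subject: Invoice overdue. Our payment for invoice #1042 failed, please advise."}
    ]
  }'
```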
by PUQcloud
## Setting up n8n workflow

### Overview

The Docker Grafana WHMCS module uses a specially designed workflow for n8n to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

### Prerequisites

You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

### Installation Steps

Install the Required Workflow on n8n. You have two options:

**Option 1: Use the Latest Version from the n8n Marketplace**
The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n

**Option 2: Manual Installation**
Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

### n8n Workflow API Backend Setup for WHMCS/WISECP

**Configure API Webhook and SSH Access**

- Create a Basic Auth Credential for the Webhook API Block in n8n.
- Create an SSH Credential for accessing a server with Docker installed.

**Modify Template Parameters**

In the Parameters block of the template, update the following settings:

- `server_domain` – Must match the domain of the WHMCS/WISECP Docker server.
- `clients_dir` – Directory where user data related to Docker and disks will be stored.
- `mount_dir` – Default mount point for the container disk (recommended not to change).

Do not modify the following technical parameters:

- `screen_left`
- `screen_right`

### Deploy-docker-compose

In the Deploy-docker-compose element, you have the ability to modify the Docker Compose configuration, which will be generated in the following scenarios:

- When the service is created
- When the service is unlocked
- When the service is updated

### nginx

In the nginx element, you can modify the configuration parameters of the web interface proxy server.

- The `main` section allows you to add custom parameters to the `server` block in the proxy server configuration file.
- The `main_location` section contains settings that will be added to the `location /` block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

### Bash Scripts

Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
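As a loose illustration of what the Deploy-docker-compose element produces, the Bash sketch below writes a minimal Compose file and starts it over SSH. The image, port, and directory values are placeholders and are not the configuration generated by the module's workflow.

```bash
# Hypothetical sketch only: the real Compose file is generated by the
# Deploy-docker-compose element with the module's own values.
CLIENT_DIR=/opt/clients/client1     # placeholder for a clients_dir subfolder

mkdir -p "$CLIENT_DIR"

cat > "$CLIENT_DIR/docker-compose.yml" <<'EOF'
services:
  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - ./data:/var/lib/grafana
EOF

docker compose -f "$CLIENT_DIR/docker-compose.yml" up -d
```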
by Angel Menendez
# n8n Workflow: Automate SIEM Alert Enrichment with MITRE ATT&CK & Qdrant

## Who is this for?

This workflow is ideal for:

- **Cybersecurity teams & SOC analysts** who want to automate **SIEM alert enrichment**.
- **IT security professionals** looking to integrate **MITRE ATT&CK intelligence** into their ticketing system.
- **Organizations using Zendesk for security incidents** who need enhanced **contextual threat data**.
- **Anyone using n8n and Qdrant** to build **AI-powered security workflows**.

## What problem does this workflow solve?

Security teams receive large volumes of raw SIEM alerts that lack actionable context. Investigating every alert manually is time-consuming and can lead to delayed response times. This workflow solves this problem by:

✔ Automatically enriching SIEM alerts with MITRE ATT&CK TTPs.
✔ Tagging & classifying alerts based on known attack techniques.
✔ Providing remediation steps to guide the response team.
✔ Enhancing security tickets in Zendesk with relevant threat intelligence.

## What this workflow does

1️⃣ Ingests SIEM alerts (via chatbot or ticketing system like Zendesk).
2️⃣ Queries a Qdrant vector store containing MITRE ATT&CK techniques.
3️⃣ Extracts relevant TTPs (Tactics, Techniques, & Procedures) from the alert.
4️⃣ Generates remediation steps using AI-powered enrichment.
5️⃣ Updates Zendesk tickets with threat intelligence & recommended actions.
6️⃣ Provides structured alert data for further automation or reporting.

## Setup Guide

### Prerequisites

- **n8n instance** (Cloud or Self-hosted).
- **Qdrant vector store** with MITRE ATT&CK data embedded.
- **OpenAI API key** (for AI-based threat processing).
- **Zendesk account** (for ticket enrichment, if applicable).
- Clean Mitre Data Python Script
- Cleaned Mitre Data
- Full Mitre Data

### Steps to Set Up

1️⃣ **Embed MITRE ATT&CK data into Qdrant**
- This workflow pulls MITRE ATT&CK data from Google Drive and loads it into Qdrant.
- The data is vectorized using OpenAI embeddings for fast retrieval.

2️⃣ **Deploy the n8n Chatbot**
- The chatbot listens for SIEM alerts and sends them to the AI processing pipeline.
- Alerts are analyzed using an AI agent trained on MITRE ATT&CK.

3️⃣ **Enrich Zendesk Tickets**
- The workflow extracts MITRE ATT&CK techniques from alerts.
- It updates Zendesk tickets with contextual threat intelligence.
- The remediation steps are included as internal notes for SOC teams.

## How to Customize This Workflow

🔧 **Modify the chatbot trigger:** Adapt the chatbot node to receive alerts from Slack, Microsoft Teams, or any other tool.
🔧 **Change the SIEM input source:** Connect your workflow to Splunk, Elastic SIEM, or Chronicle Security.
🔧 **Customize remediation steps:** Use a custom AI model to tailor remediation responses based on organization-specific security policies.
🔧 **Extend ticketing integration:** Modify the Zendesk node to also work with Jira, ServiceNow, or another ITSM platform.

## Why This Workflow is Powerful

✅ **Saves time:** Automates alert triage & classification.
✅ **Improves security posture:** Helps SOC teams act faster on threats.
✅ **Leverages AI & vector search:** Uses LLM-powered enrichment for real-time context.
✅ **Works across platforms:** Supports n8n Cloud, Self-hosted, and Qdrant.

## 🚀 Get Started Now!

📖 Watch the Setup Video
💬 Have Questions? Join the Discussion in the YouTube Comments!
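For reference, the hedged sketch below shows the kind of lookup the MITRE ATT&CK step performs under the hood, expressed as a direct call to Qdrant's REST search endpoint. The collection name, vector values, and host are placeholders; in the workflow this lookup is handled by the Qdrant vector store node using OpenAI embeddings of the alert text.

```bash
# Illustrative only: collection name, host, and vector values are placeholders.
# The workflow performs the equivalent lookup through its Qdrant node.
curl -X POST "http://localhost:6333/collections/mitre_attack/points/search" \
  -H "Content-Type: application/json" \
  -d '{
    "vector": [0.01, -0.12, 0.33],
    "limit": 3,
    "with_payload": true
  }'
```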
by Raphael De Carvalho Florencio
## What this template does

Transforms provider documentation (URLs) into an auditable, enforceable multicloud security control baseline. It:

- Fetches and sanitizes HTML
- Uses AI to extract security requirements (strict 3-line TXT blocks)
- Composes **enforceable controls** (strict 7-line TXT blocks with true-equivalence consolidation)
- Builds the final baseline (TXT or JSON, see **Outputs**) with a Technology: header
- Returns a downloadable artifact via webhook and can append/create the file in Google Drive

## Why it's useful

Eliminates manual copy-paste and produces a consistent, portable baseline ready for review, audit, or enforcement tooling—ideal for rapidly generating or refreshing baselines across cloud providers and services.

## Multicloud support

The workflow is multicloud by design. Provide the target cloud in the request and run the same pipeline for:

- AWS, Azure, GCP (out of the box)
- Extensible to other providers/services by adjusting prompts and routing logic

## How it works (high level)

1. POST /create (Basic Auth) with { cloudProvider, technology, urls[] }
2. Input validation → generate uuid → resolve Google Drive folder (search-or-create)
3. Download & sanitize each URL
4. AI pipeline: Extractor → Composer → Baseline Builder → (optional) Baseline Auditor
5. Append/create file in Drive and return a downloadable artifact (TXT/JSON) via webhook

## Request (webhook)

- Method: POST
- URL: https://<your-n8n>/webhook/create
- Auth: Basic Auth
- Headers: Content-Type: application/json

### Example input (Postman/CLI)

```json
{
  "cloudProvider": "aws",
  "technology": "Amazon S3",
  "urls": [
    "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html",
    "https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/S3/",
    "https://repost.aws/knowledge-center/secure-s3-resources"
  ]
}
```

### Field reference

- cloudProvider (string, required) — case-insensitive. Supported: aws, azure, gcp.
- technology (string, required) — e.g., "Amazon S3", "Azure Storage", "Google Cloud Storage".
- urls (string[], required) — 1–20 http(s) URLs (official/reputable docs).

Optional (Google Drive destination):

- gdriveTargetId (string) — Google Drive folderId used for append/create.
- gdrivePath (string) — Path like "DefySec/Baselines" (folders are created if missing).
- gdriveTargetName (string) — Folder name to find/create under root.

Optional (Assistant overrides):

- assistantExtractorId, assistantComposerId, assistantBaselineId, assistantAuditorId (strings)

### Resolution precedence

- Drive: gdriveTargetId → gdrivePath → gdriveTargetName → default folder.
- Assistants: explicit IDs above → dynamic resolution by name (expects 1_DefySec_Extractor, 2_DefySec_Control_Composer, 3_DefySec Baseline Builder, 4_DefySec_Baseline_Auditor).

### Validation

- Rejects empty urls or non-http(s) schemes; normalizes cloudProvider to aws|azure|gcp.
- Sanitizes fetched HTML (removes scripts/styles/headers) before AI steps.

## Outputs

- **Primary:** downloadable **TXT** file controls_<technology>_<timestamp>.txt (via webhook).
- **Composer outcomes:** if no groups to consolidate → NO_CONTROLS_TO_BE_CONSOLIDATED; if nothing valid remains → NO_CONTROLS_FOUND.
- **JSON path:** when the Builder stage is configured for **JSON-only** output (strict schema), the workflow returns a .json artifact and the Auditor validates it (see next section).
## Techniques used (from the built-in assistants)

- **Provider-aware extraction with strict TXT contract (3 lines):** Extractor limits itself to the declared provider/technology, outputs only Description/Reference/SecurityObjective, and applies a **reflexive quality check** before emitting.
- **Normalization & strict header parsing:** Composer normalizes whitespace/fences, requires the CloudProvider/Technology header, and ignores anything outside the exact 3-line block shape.
- **True-equivalence grouping & consolidation:** Composer groups **only** when intent, enforcement locus/mechanism, scope, and mode/setting all match—otherwise items remain distinct.
- **7-line enforceable control format:** Composer renders each (consolidated or unique) control in **exactly seven labeled lines** to keep results auditable and automatable.
- **Builder with JSON-only schema & technology inference:** Builder parses 7-line blocks, infers technology, consolidates true equivalents again if needed, and returns **pure JSON** matching a canonical schema (with counters in meta).
- **Self-evaluation loop (Auditor):** Auditor **unwraps transport**, validates **schema & content**, checks provider terminology/scope/automation, and returns either GOOD_ENOUGH or a JSON instruction set for the Builder to fix and re-emit—enabling reflective improvement.
- **Reference prioritization:** Across stages, official provider documentation is preferred in References (AWS/Azure/GCP).

## Customization & extensions

- **Prompt-reflective techniques:** keep (or extend) the Auditor loop to add more review passes and quality gates.
- **Compliance assistants:** add assistants to analyze/label controls for **HIPAA, PCI DSS, SOX** (and others), emitting mappings, gaps, and remediation notes.
- **Implementation context:** feed internal implementation docs, runbooks, or **Architecture Decision Records (ADRs)**; use these as grounding to generate or refine controls (works with local/self-hosted LLMs, too).
- **Local/self-hosted LLMs:** swap OpenAI nodes for your on-prem LLM endpoint while keeping the pipeline.
- **Provider-specific outputs:** extend the final stage to export Policy-as-Code or IaC snippets (Rego/Sentinel, CloudFormation Guard, Bicep/ARM, Terraform validations).

## Assistant configuration & prompts

Full assistant configurations and prompts (Extractor, Composer, Baseline Builder, Baseline Auditor) are available here:
https://github.com/followdrabbit/n8nlabs/tree/main/Lab03%20-%20Multicloud%20AI%20Security%20Control%20Baseline%20Builder/Assistants

## Security & privacy

- No hardcoded secrets in HTTP nodes; use n8n's Credential Manager.
- Drive operations are optional and folder-scoped.
- For sensitive environments, switch to a local LLM and provide only sanitized/approved inputs.

## Quick test (curl)

```bash
curl -X POST "https://<your-n8n>/webhook/create" \
  -u "<user>:<pass>" \
  -H "Content-Type: application/json" \
  -d '{
    "cloudProvider":"aws",
    "technology":"Amazon S3",
    "urls":[
      "https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html"
    ]
  }' \
  -OJ
```
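For clarity, here is a hypothetical example of the 3-line TXT block described in the Extractor's contract above. The field names come from that contract; the values are illustrative only and are not output produced by the assistants:

```
Description: Enable default encryption with SSE-KMS on all Amazon S3 buckets.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html
SecurityObjective: Protect data at rest against unauthorized access.
```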
by SIENNA
# Automated AWS S3 / Azure / Google to local MinIO Object Backup with Scheduling

## What this workflow does

This workflow performs automated, periodic backups of objects from an AWS S3 bucket, an Azure Container, or a Google Storage Space to a MinIO S3 bucket running locally or on a dedicated container/VM/server. It also works if the MinIO bucket runs on a remote cloud provider's infrastructure; you just need to change the URL and keys.

## Who's this intended for?

Storage administrators, cloud architects, or DevOps engineers who need a simple and scalable solution for retrieving data from AWS, Azure, or GCP.

## How it works

This workflow uses the official AWS S3 API (or the Azure Blob API) to list and download objects from a specific bucket, then sends them to MinIO using its version of the S3 API.

## Requirements

None, just a source bucket on your Cloud Storage Provider and a destination one on MinIO. You'll also need to get MinIO running.

Using Proxmox VE? Create a MinIO LXC Container: https://community-scripts.github.io/ProxmoxVE/scripts?id=minio

## Need a backup from another Cloud Storage Provider?

→ Check out our templates: we've done it with AWS, Azure, and GCP, and we even have a version for FTP/SFTP servers! For a dedicated source Cloud Storage Provider, please contact us!

These workflows can be integrated into bigger ones and modified to best suit your needs! You can, for example, replace the MinIO node with another S3 bucket from another Cloud Storage Provider (Backblaze, Wasabi, Scaleway, OVH, ...).
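Outside of n8n, the same copy can be reasoned about with the AWS CLI, which speaks to both AWS S3 and MinIO's S3-compatible API. The sketch below is only a conceptual equivalent of what the workflow does, assuming a locally configured `minio` CLI profile and placeholder bucket names:

```bash
# Conceptual equivalent of the workflow's copy steps (placeholder names).
# Stage 1: pull objects from the source AWS S3 bucket to a local staging dir.
aws s3 sync s3://my-source-bucket ./staging

# Stage 2: push the staged objects to the local MinIO bucket via its S3 API.
aws s3 sync ./staging s3://my-backup-bucket \
  --endpoint-url http://127.0.0.1:9000 \
  --profile minio
```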
by Trung Tran
# Automating AWS S3 Operations with n8n: Buckets, Folders, and Files

Watch the demo video below:

This tutorial walks you through setting up an automated workflow that generates AI-powered images from prompts and securely stores them in AWS S3. It leverages the new AI Tool Node and OpenAI models for prompt-to-image generation.

## Who's it for

This workflow is ideal for:

- **Designers & marketers** who need quick, on-demand AI-generated visuals.
- **Developers & automation builders** exploring **AI-driven workflows** integrated with cloud storage.
- **Educators or trainers** creating tutorials or exercises on AI image generation.
- **Businesses** looking to automate **image content pipelines** with AWS S3 storage.

## How it works / What it does

1. **Trigger:** The workflow starts manually when you click "Execute Workflow".
2. **Edit Fields:** You can provide input fields such as image description, resolution, or naming convention.
3. **Create AWS S3 Bucket:** Automatically creates a new S3 bucket if it doesn't exist.
4. **Create a Folder:** Inside the bucket, a folder is created to organize generated images.
5. **Prompt Generation Agent:** An AI agent generates or refines the image prompt using the OpenAI Chat Model.
6. **Generate an Image:** The refined prompt is used to generate an image using AI.
7. **Upload File to S3:** The generated image is uploaded to the AWS S3 bucket for secure storage.

This workflow showcases how to combine AI + Cloud Storage seamlessly in an automated pipeline.

## How to set up

1. Import the workflow into n8n.
2. Configure the following credentials:
   - AWS S3 (Access Key, Secret Key, Region).
   - OpenAI API Key (for Chat + Image models).
3. Update the Edit Fields node with your preferred input fields (e.g., image size, description).
4. Execute the workflow and test by entering a sample image prompt (e.g., "Futuristic city skyline in watercolor style").
5. Check your AWS S3 bucket to verify the uploaded image.

## Requirements

- **n8n** (latest version with AI Tool Node support).
- **AWS account** with S3 permissions to create buckets and upload files.
- **OpenAI API key** (for prompt refinement and image generation).
- Basic familiarity with AWS S3 structure (buckets, folders, objects).

## How to customize the workflow

- **Custom Buckets:** Replace the auto-create step with an existing S3 bucket.
- **Image Variations:** Generate multiple image variations per prompt by looping the image generation step.
- **File Naming:** Adjust file naming conventions (e.g., timestamp, user input).
- **Metadata:** Add metadata such as tags, categories, or owner info when uploading to S3.
- **Alternative Storage:** Swap AWS S3 with **Google Cloud Storage, Azure Blob, or Dropbox**.
- **Trigger Options:** Replace the manual trigger with **Webhook, Form Submission, or Scheduler** for automation.

✅ This workflow is a hands-on example of how to combine AI prompt engineering, image generation, and cloud storage automation into a single streamlined process.
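The S3 steps in this workflow correspond roughly to the following AWS CLI operations, shown here as a hedged sketch with placeholder names (the workflow itself performs the equivalent calls through n8n's AWS S3 nodes):

```bash
# Placeholder bucket, folder, and file names; shown only to illustrate
# what the Create Bucket / Create Folder / Upload steps do against S3.
aws s3 mb s3://my-ai-images-bucket --region us-east-1                  # create the bucket
aws s3api put-object --bucket my-ai-images-bucket --key generated/     # create a "folder" (zero-byte key)
aws s3 cp ./futuristic-city.png s3://my-ai-images-bucket/generated/    # upload the generated image
```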
by Đỗ Thành Nguyên
# Automated Facebook Page Story Video Publisher (Google Drive → Facebook → Google Sheet)

> Recommended: Self-hosted via tino.vn/vps-n8n?affid=388 — use code VPSN8N for up to 39% off.

This workflow is an automated solution for publishing video content from Google Drive to your Facebook Page Stories, while using Google Sheets as a posting queue manager.

## What This Workflow Does (Workflow Function)

This automation orchestrates a complete multi-step process for uploading and publishing videos to Facebook Stories:

1. **Queue Management:** Every 2 hours and 30 minutes, the workflow checks a Google Sheet (Get Row Sheet node) to find the first video whose Stories column is empty — meaning it hasn't been posted yet.
2. **Conditional Execution:** An If node confirms that the video's File ID exists before proceeding.
3. **Video Retrieval:** Using the File ID, the workflow downloads the video from Google Drive (Google Drive node) and calculates its binary size (Set to the total size in bytes node).
4. **Facebook 3-Step Upload:** It performs the Facebook Graph API's three-step upload process through HTTP Request nodes (a hedged curl sketch of this flow appears at the end of this description):
   - **Step 1 – Initialize Session:** Starts an upload session and retrieves the upload_url and video_id.
   - **Step 2 – Upload File:** Uploads the binary video data to the provided upload_url.
   - **Step 3 – Publish Video:** Finalizes and publishes the uploaded video as a Facebook Story.
5. **Status Update:** Once completed, the workflow updates the same row in Google Sheets (Update upload status in sheet node) using the row_number to mark the video as processed.

## Prerequisites (What You Need Before Running)

### 1. n8n Instance

> Recommended: Self-hosted via tino.vn/vps-n8n?affid=388 — use code VPSN8N for up to 39% off.

### 2. Google Services

- **Google Drive Credentials:** OAuth2 credentials for Google Drive to let n8n download video files.
- **Google Sheets Credentials:** OAuth2 credentials for Google Sheets to read the posting queue and update statuses.
- **Google Sheet:** A spreadsheet (ID: 1RnE5O06l7W6TLCLKkwEH5Oyl-EZ3OE-Uc3OWFbDohYI) containing:
  - File ID — the video's unique ID in Google Drive.
  - Stories — posting status column (leave empty for pending videos).
  - row_number — used for updating the correct row after posting.

### 3. Facebook Setup

- **Page ID:** Your Facebook Page ID (currently hardcoded as 115432036514099 in the info node).
- **Access Token:** A **Page Access Token** with permissions such as pages_manage_posts and pages_read_engagement. This token is hardcoded in the info node and again in the Step 3. Post video node.

## Usage Guide and Implementation Notes

### How to Use

1. **Queue Videos:** Add video entries to your Google Sheet. Each entry must include a valid Google Drive File ID. Leave the Stories column empty for videos that haven't been posted.
2. **Activate:** Save and activate the workflow. The Schedule Trigger will automatically handle new uploads every 2 hours and 30 minutes.

### Implementation Notes

- ⚠️ **Token Security:** Hardcoding your **Access Token** inside the info node is **not recommended**. Tokens expire and expose your Page to risk if leaked.
  👉 Action: Replace the static token with a secure Credential setup that supports token rotation.
- **Loop Efficiency:** The **"false"** output of the If node currently loops back to the Get Row Sheet node. This creates unnecessary cycles if no videos are found.
  👉 Action: Disconnect that branch so the workflow stops gracefully when no unposted videos remain.
- **Status Updates:** To prevent re-posting the same video, the final Update upload status in sheet node must update the **Stories** column (e.g., write "POSTED").
  👉 Action: Add this mapping explicitly to your Google Sheets node.
- **Automated File ID Sync:** This workflow assumes that the Google Sheet already contains valid File IDs.
  👉 You can build a secondary workflow (using Schedule Trigger1 → Search files and folders → Append or update row in sheet) to automatically populate new video File IDs from your Google Drive.

## ✅ Result

Once active, this workflow automatically:

- pulls pending videos from your Google Sheet,
- uploads them to Facebook Stories, and
- marks them as posted — all without manual intervention.
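For reference, the three-step upload described above maps roughly onto the Facebook Graph API's video stories flow, sketched below with curl. The API version, endpoint names, headers, and parameters are assumptions based on the Graph API's documented video stories upload; check the workflow's HTTP Request nodes for the exact calls it actually makes.

```bash
# Hedged sketch of the 3-step flow; endpoints and parameters are assumptions.
PAGE_ID="YOUR_PAGE_ID"
TOKEN="YOUR_PAGE_ACCESS_TOKEN"

# Step 1 – Initialize Session: returns a video_id and an upload_url.
curl -X POST "https://graph.facebook.com/v19.0/${PAGE_ID}/video_stories" \
  -F "upload_phase=start" \
  -F "access_token=${TOKEN}"

# Step 2 – Upload File: send the binary video to the returned upload_url.
curl -X POST "UPLOAD_URL_FROM_STEP_1" \
  -H "Authorization: OAuth ${TOKEN}" \
  -H "offset: 0" \
  -H "file_size: FILE_SIZE_IN_BYTES" \
  --data-binary "@video.mp4"

# Step 3 – Publish Video: finish the session to publish the Story.
curl -X POST "https://graph.facebook.com/v19.0/${PAGE_ID}/video_stories" \
  -F "upload_phase=finish" \
  -F "video_id=VIDEO_ID_FROM_STEP_1" \
  -F "access_token=${TOKEN}"
```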