by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics, handling the heavy lifting so you can focus on your customers. This workflow template listens for an Onfleet event and sends a WhatsApp message, which you can route to the delivery recipient or to your customer support numbers.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Twilio node with your own Twilio credentials. Add your own expression to the To number, or simply source the recipient's phone number from the Onfleet event (see the sample API call below).
- Toggle To Whatsapp to OFF if you want to use Twilio's SMS API instead.
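For reference, the Twilio node ultimately issues a call like the following against Twilio's Messages API. This is a minimal sketch in which the account SID, auth token, and both phone numbers are placeholders.

```
# Minimal sketch of the Twilio WhatsApp call the node performs (placeholders throughout)
curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Messages.json" \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=whatsapp:+14155238886" \
  --data-urlencode "To=whatsapp:+15551234567" \
  --data-urlencode "Body=Your Onfleet delivery has been updated."
```

Dropping the `whatsapp:` prefix from both numbers sends a regular SMS instead, which mirrors the To Whatsapp toggle described above.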
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics, handling the heavy lifting so you can focus on your customers. This workflow template listens for an Onfleet event and interacts with the QuickBooks API, so you can easily tie it to your QuickBooks invoices or other entities. Typically, you would create an invoice when an Onfleet task is created, allowing your customers to pay ahead of an upcoming delivery.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the QuickBooks Online node with your QuickBooks credentials (a sample of the underlying API call follows below).
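For context, creating an invoice through the QuickBooks Online REST API (which the QuickBooks Online node wraps) looks roughly like this. It is a minimal sketch: the realm ID, access token, and the customer/item references are placeholders, and the sandbox environment uses a different base URL.

```
# Minimal sketch: create a QuickBooks Online invoice via the REST API (placeholders throughout)
curl -X POST "https://quickbooks.api.intuit.com/v3/company/$REALM_ID/invoice" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "CustomerRef": { "value": "1" },
    "Line": [{
      "Amount": 25.00,
      "DetailType": "SalesItemLineDetail",
      "SalesItemLineDetail": { "ItemRef": { "value": "1" } }
    }]
  }'
```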
by Vigh Sandor
**Overview**

This n8n workflow provides automated CI/CD testing for Kubernetes applications using KinD (Kubernetes in Docker). It creates temporary infrastructure, runs tests, and cleans up everything automatically.

**Three-Phase Lifecycle**

INIT Phase - Infrastructure Setup (a simplified command sketch appears after Step 5 below):
- Installs dependencies (sshpass, Docker, KinD)
- Creates KinD cluster
- Installs Helm and Nginx Ingress
- Installs HAProxy for port forwarding
- Deploys ArgoCD
- Applies ApplicationSet

TEST Phase - Automated Testing:
- Downloads Robot Framework test script from GitLab
- Installs Robot Framework and Browser library
- Executes automated browser tests
- Packages test results
- Sends results via Telegram

DESTROY Phase - Complete Cleanup:
- Removes HAProxy
- Deletes KinD cluster
- Uninstalls KinD
- Uninstalls Docker
- Sends completion notification

**Execution Modes**

- Full Pipeline Mode (`progress_only = false`): automatically progresses through all phases: INIT → TEST → DESTROY
- Single Phase Mode (`progress_only = true`): executes only the specified phase and stops

**Prerequisites**

Local Environment (n8n Host):
- n8n instance version 1.0 or higher
- Community node n8n-nodes-robotframework installed
- Network access to target host and GitLab
- Minimum 4 GB RAM, 20 GB disk space

Remote Target Host:
- Linux server (Ubuntu, Debian, CentOS, Fedora, or Alpine)
- SSH access with sudo privileges
- Minimum 8 GB RAM (16 GB recommended)
- 20 GB free disk space
- Open ports: 22, 80, 60080, 60443, 56443

External Services:
- GitLab account with OAuth2 application
- Repository with test files (test.robot, config.yaml, demo-applicationSet.yaml)
- Telegram Bot for notifications
- Telegram Chat ID

**Setup Instructions**

Step 1: Install Community Node
1. In the n8n web interface, navigate to Settings → Community Nodes
2. Install n8n-nodes-robotframework
3. Restart n8n if prompted

Step 2: Configure GitLab OAuth2

Create the GitLab OAuth2 application:
1. Log in to GitLab
2. Navigate to User Settings → Applications
3. Create a new application with redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback
4. Grant scopes: read_api, read_repository, read_user
5. Copy the Application ID and Secret

Configure it in n8n:
1. Create a new GitLab OAuth2 API credential
2. Enter the GitLab server URL, Client ID, and Secret
3. Connect and authorize

Step 3: Prepare GitLab Repository

Create the repository structure:

```
your-repo/
├── test.robot
├── config.yaml
├── demo-applicationSet.yaml
└── .gitlab-ci.yml
```

Upload your:
- Robot Framework test script
- KinD cluster configuration
- ArgoCD ApplicationSet manifest

Step 4: Configure Telegram Bot

Create the bot:
1. Open Telegram and search for @BotFather
2. Send the /newbot command
3. Save the API token

Get the Chat ID (personal chat):
1. Send a message to your bot
2. Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
3. Copy the chat ID (positive number)

Get the Chat ID (group chat):
1. Add the bot to the group
2. Send a message mentioning the bot
3. Visit the getUpdates endpoint
4. Copy the group chat ID (negative number)

Configure it in n8n:
1. Create a Telegram API credential
2. Enter the bot token
3. Save the credential

Step 5: Prepare Target Host

Verify SSH access:
- Test connection: `ssh -p <port> <username>@<host_ip>`
- Verify sudo: `sudo -v`

The workflow will automatically install dependencies.
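For orientation, the INIT phase described above amounts to running commands along these lines on the target host once Docker and KinD are in place. This is a simplified sketch; the versions, namespaces, and flags the workflow actually uses may differ.

```
# Simplified sketch of the INIT phase on the target host (illustrative, not the exact workflow commands)
kind create cluster --name automate-tst --config config.yaml

# Install the Nginx ingress controller with Helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace

# Deploy ArgoCD and apply the ApplicationSet from the GitLab repository
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl apply -n argocd -f demo-applicationSet.yaml
```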
Step 6: Import and Configure Workflow

Import the workflow:
1. Copy the workflow JSON
2. In n8n, click Workflows → Import from File/URL
3. Import the JSON

Configure parameters: open the Set Parameters node and update:

| Parameter | Description | Example |
|-----------|-------------|---------|
| target_host | IP address of remote host | 192.168.1.100 |
| target_port | SSH port | 22 |
| target_user | SSH username | ubuntu |
| target_password | SSH password | your_password |
| progress | Starting phase | INIT, TEST, or DESTROY |
| progress_only | Execution mode | true or false |
| KIND_CONFIG | Path to config.yaml | config.yaml |
| ROBOT_SCRIPT | Path to test.robot | test.robot |
| ARGOCD_APPSET | Path to ApplicationSet | demo-applicationSet.yaml |

> Security: Use n8n credentials or environment variables instead of storing passwords in the workflow.

Configure the GitLab nodes. For each of the three GitLab nodes:
- Set Owner (username or organization)
- Set Repository name
- Set File Path (uses the parameter from Set Parameters)
- Set Reference (branch: main or master)
- Select Credentials (GitLab OAuth2)

Configure the Telegram nodes:
- Send ROBOT Script Export Pack node: set the Chat ID and select credentials
- Process Finish Report node: update the chat ID in the command

Step 7: Test and Execute
1. Test individual components first
2. Run the full workflow
3. Monitor execution (30-60 minutes total)

**How to Use**

Execution examples:
- Complete testing pipeline: `progress = "INIT"`, `progress_only = "false"`. Flow: INIT → TEST → DESTROY
- Setup infrastructure only: `progress = "INIT"`, `progress_only = "true"`. Flow: INIT → Stop
- Test existing infrastructure: `progress = "TEST"`, `progress_only = "false"`. Flow: TEST → DESTROY
- Cleanup only: `progress = "DESTROY"`. Flow: DESTROY → Complete

Trigger methods:
1. Manual Execution: open the workflow in n8n, set parameters, and click Execute Workflow
2. Scheduled Execution: open the Schedule Trigger node, configure the time (default: 1 AM daily), and ensure the workflow is Active
3. Webhook Trigger: configure a webhook in the GitLab repository and add the webhook URL to GitLab CI

**Monitoring Execution**

In the n8n interface:
- View progress in the Executions tab
- Watch node-by-node execution
- Check output details

Via Telegram:
- Receive test results after the TEST phase
- Receive a completion notification after the DESTROY phase

Execution timeline:

| Phase | Duration |
|-------|----------|
| INIT | 15-25 minutes |
| TEST | 5-10 minutes |
| DESTROY | 5-10 minutes |

**Understanding Test Results**

After the TEST phase, you receive testing-export-pack.tar.gz via Telegram containing:
- log.html - detailed test execution log
- report.html - test summary report
- output.xml - machine-readable results
- screenshots/ - browser screenshots

To view:
1. Download the .tar.gz from Telegram
2. Extract: `tar -xzf testing-export-pack.tar.gz`
3. Open report.html for the summary
4. Open log.html for detailed steps

Success indicators: all tests marked PASS, screenshots show expected UI states, no error messages in logs.
Failure indicators: tests marked FAIL, error messages in logs, unexpected UI states in screenshots.

**Configuration Files**

test.robot - Robot Framework test script structure:
- Uses the Browser library
- Connects to http://autotest.innersite
- Logs in with autotest/autotest
- Takes screenshots
- Runs in headless Chromium

config.yaml - KinD cluster configuration:
- 1 control-plane node
- 1 worker node
- Port mappings: 60080 (HTTP), 60443 (HTTPS), 56443 (API)
- Kubernetes version: v1.30.2

demo-applicationSet.yaml - ArgoCD Application manifest:
- Points to the Git repository
- Automatic sync enabled
- Deploys to the default namespace

gitlab-ci.yml - triggers the n8n workflow on commits:
- Installs curl
- Sends a POST request to the webhook (a minimal sketch follows below)
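The CI trigger described above reduces to a single curl call against the workflow's webhook. This is a minimal sketch, assuming an Alpine-based runner image; the webhook path and the JSON fields are placeholders you should align with your own Webhook and Set Parameters nodes.

```
# Hypothetical .gitlab-ci.yml script section: notify n8n on each commit (placeholders throughout)
apk add --no-cache curl
curl -X POST "https://your-n8n-instance.com/webhook/kind-ci-pipeline" \
  -H "Content-Type: application/json" \
  -d '{"progress": "INIT", "progress_only": "false"}'
```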
**Troubleshooting**

SSH Permission Denied
- Symptoms: `Error: Permission denied (publickey,password)`
- Solutions: verify the password is correct; check the SSH authentication method; ensure the user has sudo privileges; use SSH keys instead of passwords

Docker Installation Fails
- Symptoms: `Error: Package docker-ce is not available`
- Solutions: check OS version compatibility; verify network connectivity; manually add the Docker repository

KinD Cluster Creation Timeout
- Symptoms: `Error: Failed to create cluster: timed out`
- Solutions: check available resources (RAM/CPU/disk); verify Docker daemon status; pre-pull images; increase the timeout

ArgoCD Not Accessible
- Symptoms: `Error: Failed to connect to autotest.innersite`
- Solutions: check HAProxy status: `systemctl status haproxy`; verify the /etc/hosts entry; check Ingress: `kubectl get ingress -n argocd`; test port forwarding: `curl http://127.0.0.1:60080`

Robot Framework Tests Fail
- Symptoms: `Error: Chrome failed to start`
- Solutions: verify the Chromium installation; check the Browser library: `rfbrowser show-trace`; ensure the correct executablePath in test.robot; install missing dependencies

Telegram Notification Not Received
- Symptoms: workflow completes but no message arrives
- Solutions: verify the Chat ID; test the Telegram API manually; check the bot status; re-add the bot to the group

Workflow Hangs
- Symptoms: a node shows "Executing..." indefinitely
- Solutions: check the n8n logs; test the SSH connection manually; verify the target host status; add timeouts to commands

**Best Practices**

Development Workflow:
- Test locally first: run Robot Framework tests on a local machine and verify test script syntax
- Version control: keep all files in Git, use branches for experiments, tag stable versions
- Incremental changes: make small testable changes and test each change separately
- Backup data: export the workflow regularly, save test results, store credentials securely

Production Deployment:
- Separate environments: Dev for frequent testing, Staging for pre-production validation, Production for stable scheduled runs
- Monitoring: set up execution alerts, monitor host resources, track success/failure rates
- Disaster recovery: document cleanup procedures, keep a backup host ready, test the restoration process
- Security: use SSH keys, rotate credentials quarterly, implement network segmentation

Maintenance Schedule:

| Frequency | Tasks |
|-----------|-------|
| Daily | Review logs, check notifications |
| Weekly | Review failures, check disk space |
| Monthly | Update dependencies, test recovery |
| Quarterly | Rotate credentials, security audit |

**Advanced Topics**

Custom Configurations:
- Multi-node clusters: add more worker nodes for production-like environments, configure resource limits, add custom port mappings
- Advanced testing: load testing with multiple iterations, integration testing for the full deployment pipeline, chaos engineering with failure injection

Integration with Other Tools:
- Monitoring: Prometheus for metrics collection, Grafana for visualization
- Logging: ELK stack for log aggregation, custom dashboards
- CI/CD integration: Jenkins pipelines, GitHub Actions, custom webhooks

**Resource Requirements**

Minimum:

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 2 | 4 GB | 20 GB |
| Target Host | 4 | 8 GB | 20 GB |

Recommended:

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 4 | 8 GB | 50 GB |
| Target Host | 8 | 16 GB | 50 GB |

**Useful Commands**

KinD:
- List clusters: `kind get clusters`
- Get kubeconfig: `kind get kubeconfig --name automate-tst`
- Export logs: `kind export logs --name automate-tst`

Docker:
- List containers: `docker ps -a --filter "name=automate-tst"`
- Enter control plane: `docker exec -it automate-tst-control-plane bash`
- View logs: `docker logs automate-tst-control-plane`

Kubernetes:
- Get all resources: `kubectl get all -A`
- Describe pod: `kubectl describe pod -n argocd <pod-name>`
- View logs: `kubectl logs -n argocd <pod-name> --follow`
- Port forward: `kubectl port-forward -n argocd svc/argocd-server 8080:80`

Robot Framework:
- Run tests: `robot test.robot`
- Run a specific test: `robot -t "Test Name" test.robot`
- Generate a report: `robot --outputdir results test.robot`

**Additional Resources**

Official documentation:
- n8n: https://docs.n8n.io
- KinD: https://kind.sigs.k8s.io
- ArgoCD: https://argo-cd.readthedocs.io
- Robot Framework: https://robotframework.org
- Browser Library: https://marketsquare.github.io/robotframework-browser

Community:
- n8n Community: https://community.n8n.io
- Kubernetes Slack: https://kubernetes.slack.com
- ArgoCD Slack: https://argoproj.github.io/community/join-slack
- Robot Framework Forum: https://forum.robotframework.org

Related projects:
- k3s: lightweight Kubernetes distribution
- minikube: local Kubernetes alternative
- Flux CD: alternative GitOps tool
- Playwright: alternative browser automation
by Jan Oberhauser
This workflow allows creating a new Asana task via bash-dash.

Example usage:

```
asana My new task
```

Example bash-dash config:

```
commands[asana]="http://localhost:5678/webhook/asana"
```
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics, handling the heavy lifting so you can focus on your customers. This workflow template listens for an Onfleet event and posts a Discord message, which you can easily route to your Discord servers and users.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Discord node with your Discord server webhook URL and add your own expressions to the Text field (see the sample webhook call below).
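For reference, posting to a Discord channel webhook is a single HTTP call like the following; the webhook URL and message text are placeholders.

```
# Minimal sketch of a Discord webhook call (URL and message are placeholders)
curl -X POST "https://discord.com/api/webhooks/123456789012345678/your-webhook-token" \
  -H "Content-Type: application/json" \
  -d '{"content": "Onfleet update: delivery task completed for order #1234"}'
```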
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics, handling the heavy lifting so you can focus on your customers. This workflow template automatically updates the tags for a Shopify Order when an Onfleet event occurs.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks with Onfleet Support.
- Update the Shopify node with your Shopify credentials and add your own tags to the Shopify Order (see the sample API call below).
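Under the hood, updating an order's tags goes through Shopify's Admin REST API roughly as follows. This is a minimal sketch: the shop domain, API version, order ID, access token, and tag values are placeholders.

```
# Minimal sketch: set the tags on a Shopify order via the Admin REST API (placeholders throughout)
curl -X PUT "https://your-store.myshopify.com/admin/api/2024-01/orders/450789469.json" \
  -H "X-Shopify-Access-Token: $SHOPIFY_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"order": {"id": 450789469, "tags": "out-for-delivery, onfleet"}}'
```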
by David Olusola
📅 Auto-Log Calendly Bookings to Google Sheets

This workflow automatically captures new Calendly bookings and saves them into a structured Google Sheet. It records all the important details, such as invitee name, email, phone, event type, date, time, status, meeting link, and notes. No more manual copy-pasting from Calendly into your CRM or sheets.

⚙️ How It Works

1. Calendly Booking Webhook: listens for new bookings (the invitee.created event) and triggers every time someone schedules a meeting.
2. Normalize Booking Data: a Code node parses Calendly's payload, extracts the invitee name, email, phone number, event type, time, notes, and meeting link, and ensures a consistent data format for Sheets.
3. Save Booking to Google Sheets: the Google Sheets node appends a new row with the booking details and prevents duplicate entries using append/update mode.
4. Log Booking Success: a Code node logs the successful save; it can be extended to send confirmation emails, Slack alerts, or calendar invites.

🛠️ Setup Steps

1. Create a Google Sheet
   - In Google Sheets, create a new sheet with headers matching the example output below (Name, Email, Phone, Event Type, Date, Time, Status, Meeting Link, Notes).
   - Copy the Sheet ID from the URL.
   - Replace YOUR_GOOGLE_SHEET_ID in the workflow with your actual ID.
2. Calendly Webhook
   - In your Calendly account, go to Integrations → Webhooks.
   - Add a new webhook with the URL from the Webhook node in n8n.
   - Select event type: invitee.created (a sample test call is shown at the end of this entry).
3. Google Sheets OAuth
   - In n8n, connect your Google account credentials.
   - Grant permission for reading/writing Sheets.

📊 Example Output (Google Sheets Row)

| Name | Email | Phone | Event Type | Date | Time | Status | Meeting Link | Notes |
|------|-------|-------|------------|------|------|--------|--------------|-------|
| David mark | john@example.com | +123456789 | Demo Call | 2025-08-29 | 3:00 PM - 3:30 PM | Scheduled | https://zoom.us/j/123456789 | Wants to discuss AI |

⚡ With this workflow, every new Calendly booking is instantly logged into your Google Sheet, keeping your scheduling records accurate and centralized.
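To exercise the workflow before real bookings arrive, you can post a simulated invitee.created event to the n8n webhook. This is a minimal sketch: the webhook path is a placeholder and the payload is a trimmed, illustrative shape, so verify the exact field names against Calendly's current webhook payload before relying on them in the Code node.

```
# Hypothetical test call simulating a Calendly invitee.created event (path and fields are illustrative)
curl -X POST "https://your-n8n-instance.com/webhook/calendly-booking" \
  -H "Content-Type: application/json" \
  -d '{
    "event": "invitee.created",
    "payload": {
      "name": "David mark",
      "email": "john@example.com",
      "scheduled_event": {
        "name": "Demo Call",
        "start_time": "2025-08-29T15:00:00Z",
        "end_time": "2025-08-29T15:30:00Z"
      }
    }
  }'
```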
by mariskarthick
🚨 Are alert storms overwhelming your Security Operations workflows?

This n8n workflow supercharges your SOC by fully automating triage, analysis, and notification for Wazuh alerts, blending event-driven automation, OpenAI-powered contextual analysis, and real-time collaboration for incident response.

🔑 Key Features:

✅ Automated Triage: instantly filters Wazuh alerts by severity to focus analyst effort on the signals that matter.

🤖 AI-Driven Investigation Reports: uses OpenAI's GPT-4o-mini to auto-generate context-rich incident reports, including:
- MITRE Tactic & Technique mapping
- Impacted scope (IP addresses, hostnames)
- External artifact reputation checks
- Actionable security recommendations
- Fully customizable prompt format aligned with your SOC playbooks

📡 Multi-Channel Notification: delivers clean, actionable reports directly to your SOC team via Telegram. Easily extendable to Slack, Outlook, Gmail, Discord, or any other preferred channel.

🔇 Noise Reduction: eliminates alert fatigue using smart filters and custom AI prompts that suppress false positives and highlight real threats.

🔧 Fully Customizable: tweak severity thresholds, update prompt logic, or integrate additional data sources and channels, all with minimal effort.

⚙️ How It Works

1. Webhook: listens for incoming Wazuh alerts in real time (a sample test call is shown at the end of this entry).
2. If Condition: filters based on severity (1 low, 2 medium, etc.) or other logic you define.
3. AI Investigation (LangChain + OpenAI): summarizes the full alert logs and context using custom prompts to generate an Incident Overview, Key Indicators, Log Analysis, Threat Classification, Risk Assessment, and Security Recommendations.
4. Notification Delivery: the report is parsed, cleaned, and sent to your SOC team in real time, enabling rapid response even during high alert volumes.
5. No-Op Path: efficiently discards irrelevant alerts without breaking the flow.

🧠 Why n8n + AI?

Traditional alert triage is manual, slow, and error-prone, leading to analyst burnout and missed critical threats. This workflow shows how combining workflow automation with a tailored AI model enables your SOC to shift from reactive to proactive. Analysts can now:
- Focus on critical investigations
- Respond to alerts faster
- Eliminate copy-paste fatigue
- Get instant contextual summaries

> ⚠️ Note: We learned that generic AI isn't enough. Context-rich prompts and alignment with your actual SOC processes are key to meaningful, scalable automation.

🚀 Ready to build a smarter, less stressful SOC? Clone this workflow, adapt it to your processes, and never miss a critical alert again.

📬 Contributions welcome! Feel free to raise PRs, suggest new enhancements, or fork for your own use cases.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
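To exercise the triage path without waiting for a real alert, you can post a minimal Wazuh-style payload to the workflow's webhook. This is an illustrative sketch: the webhook path is a placeholder, and the field names follow Wazuh's typical alert JSON, so adjust them to match what your Wazuh integration actually forwards.

```
# Hypothetical test alert posted to the n8n webhook (path and field values are illustrative)
curl -X POST "https://your-n8n-instance.com/webhook/wazuh-alerts" \
  -H "Content-Type: application/json" \
  -d '{
    "rule": { "id": "5720", "level": 10, "description": "Multiple SSHD authentication failures" },
    "agent": { "name": "web-server-01", "ip": "10.0.0.12" },
    "full_log": "Oct 01 12:00:00 web-server-01 sshd[1234]: Failed password for root from 203.0.113.5 port 4821 ssh2"
  }'
```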
by Patrick Siewert
🧾 Smart Sales Invoice Processor (Data tables Edition)

Transform uploaded sales CSV files into validated, enriched invoices, all handled natively inside n8n using Data tables, validation logic, enrichment, duplicate detection, and automated email notifications. This workflow demonstrates a full ETL + business automation pattern, turning raw CSV data into structured, auditable records ready for storage and customer notifications.

✨ Features

✅ Multi-format CSV input (file upload or raw text)
✅ Validation for email, quantity, date, and required fields
✅ Automatic error handling with a 400 Bad Request JSON response for invalid CSVs
✅ Product enrichment from the Products Data table
✅ Invoice creation and storage in the Invoices Data table
✅ Automated subtotal, tax, and total calculation
✅ Duplicate order detection with a 409 Conflict response
✅ Ready-to-send email confirmations (simulated in this version)
✅ Fully native, no external integrations required

🧩 Use Cases

- E-commerce order and invoice automation
- Internal accounting or ERP data ingestion
- Migrating CSV-based legacy systems into n8n
- Automated business logic for B2B integrations

⚙️ Setup Instructions

1️⃣ Create two n8n Data tables

Products: stores your product catalog with SKU-based pricing and tax details.

| Column | Type | Example |
|--------|------|---------|
| sku | String | PROD-001 |
| name | String | Premium Widget |
| price | Number | 49.99 |
| tax_rate | Number | 0.10 |

Invoices: stores the validated, calculated invoices created by this workflow.

| Column | Type | Example |
|--------|------|---------|
| invoice_id | String | INV-20251103-001 |
| customer_email | String | john@example.com |
| order_date | Date | 2025-01-15 |
| subtotal | Number | 99.98 |
| total_tax | Number | 10.00 |
| grand_total | Number | 109.98 |
| created_at | DateTime | 2025-11-03T08:00:00Z |

2️⃣ Import Workflow

Import the provided workflow JSON file into your n8n instance.

3️⃣ Test the Workflow

Use cURL or Postman to send a test CSV to your endpoint:

```
curl -X POST \
  -H "Content-Type: text/csv" \
  --data-binary $'sku,quantity,customer_email,order_date\nPROD-001,2,john@example.com,2025-01-15\nPROD-002,1,jane@example.com,2025-01-15' \
  https://<your-n8n-url>/webhook/process-sales
```

📦 Example Responses

✅ Success (HTTP 200)

```
{
  "success": true,
  "processed_at": "2025-11-04T15:36:52.899Z",
  "invoice_count": 1,
  "invoices": {
    "to": "john@example.com",
    "subject": "Invoice INV-1762270612772-1 - Order Confirmation",
    "body": "Dear Customer,\n\nThank you for your order!\n\nInvoice ID: INV-1762270612772-1\nOrder Date: 1/14/2025\n\nSubtotal: $99.98\nTax: $10.00\nGrand Total: $109.98\n\nThank you for your business!\n\nBest regards,\nSales Team"
  },
  "email_notifications": [
    {
      "to": "jane@example.com",
      "subject": "Invoice INV-1762270612772-2 - Order Confirmation",
      "body": "Dear Customer,\n\nThank you for your order!\n\nInvoice ID: INV-1762270612772-2\nOrder Date: 1/14/2025\n\nSubtotal: $89.99\nTax: $9.00\nGrand Total: $98.99\n\nThank you for your business!\n\nBest regards,\nSales Team"
    }
  ],
  "message": "All invoices processed and customers notified"
}
```

❌ Validation Error (HTTP 400)

Occurs when the CSV file is missing required columns or contains invalid data.

```
{
  "success": false,
  "message": "CSV validation failed",
  "error": "Validation failed: [ { \"row\": 2, \"errors\": [\"Valid email is required\"] } ]"
}
```

🧠 How It Works

1. Webhook receives the uploaded CSV or raw text
2. Code node parses and validates the data
3. Data table node loads product info (price, tax rate)
4. Calculation node generates invoice totals per customer
5. Duplicate check prevents reprocessing (see the re-send test at the end of this entry)
6. Data table insert saves the invoices
7. Email preparation creates personalized confirmations
8. Webhook response returns structured JSON (200 / 400 / 409)

🔐 Requirements

- n8n version ≥ 1.41.0
- Data tables feature enabled
- Publicly accessible webhook URL (for testing)
- (Optional) Connect a real email node (Gmail or SMTP) to send messages

🏁 Result Highlights

- Full CSV → Validation → Data tables → Email → JSON Response pipeline
- Includes built-in structured error handling (400 / 409)
- 100% native n8n functionality
- Perfect example of Data tables + logic-based automation for business use cases
by Dariusz Koryto
Automated FTP File Migration with Smart Cleanup and Email Notifications

**Overview**

This n8n workflow automates the secure transfer of files between FTP servers on a scheduled basis, providing enterprise-grade reliability with comprehensive error handling and dual notification systems (email + webhook). Perfect for data migrations, automated backups, and multi-server file synchronization.

**What it does**

This workflow automatically discovers, filters, transfers, and safely removes files between FTP servers while maintaining complete audit trails and sending detailed notifications about every operation.

Key features:
- Scheduled Execution: configurable timing (daily, hourly, weekly, or custom cron expressions)
- Smart File Filtering: regex-based filtering by file type, size, date, or name patterns
- Safe Transfer Protocol: downloads → uploads → validates → cleans up source
- Dual Notifications: email alerts + webhook integration for both success and errors
- Comprehensive Logging: detailed audit trail of all operations with timestamps
- Error Recovery: automatic retry logic with exponential backoff for network issues
- Production Ready: built-in safety measures and extensive documentation

**Use Cases**

🏢 Enterprise & IT Operations
- Data Center Migration: moving files between different hosting environments
- Backup Automation: scheduled transfers to secondary storage locations
- Multi-Site Synchronization: keeping files in sync across geographic locations
- Legacy System Integration: bridging old and new systems through automated transfers

📊 Business Operations
- Document Management: automated transfer of contracts, reports, and business documents
- Media Asset Distribution: moving images, videos, and marketing materials between systems
- Data Pipeline: part of larger ETL processes for business intelligence
- Compliance Archiving: moving files to compliance-approved storage systems

🔧 Development & DevOps
- Build Artifact Distribution: deploying compiled applications across environments
- Configuration Management: synchronizing config files between servers
- Log File Aggregation: collecting logs from multiple servers for analysis
- Automated Deployment: moving release packages to production servers

**How it works**

📋 Workflow Steps
1. Schedule Trigger → initiates the workflow at specified intervals
2. File Discovery → lists files from the source FTP server with optional recursion
3. Smart Filtering → applies customizable filters (type, size, date, name patterns)
4. Secure Download → retrieves files to temporary n8n storage with retry logic
5. Safe Upload → transfers files to the destination with directory auto-creation
6. Transfer Validation → verifies successful upload before proceeding
7. Source Cleanup → removes original files only after confirmed success
8. Comprehensive Logging → records all operations with detailed metadata
9. Dual Notifications → sends email + webhook notifications for success/failure

🔄 Error Handling Flow
- Network Issues → automatic retry with exponential backoff (3 attempts)
- Authentication Problems → immediate email alert with troubleshooting steps
- Permission Errors → detailed logging with recommended actions
- Disk Space Issues → safe failure with source file preservation
- File Corruption → integrity validation with rollback capability

**Setup Requirements**

🔑 Credentials Needed

Source FTP Server
- Host, port, username, password
- Read permissions required
- SFTP recommended for security

Destination FTP Server
- Host, port, username, password
- Write permissions required
- Directory creation permissions

SMTP Email Server
- SMTP host and port (e.g., smtp.gmail.com:587)
- Authentication credentials
- For success and error notifications

Monitoring API (Optional)
- Webhook URL for system integration
- Authentication tokens if required

⚙️ Configuration Steps
1. Import Workflow → load the JSON template into your n8n instance
2. Configure Credentials → set up all required FTP and SMTP connections
3. Customize Schedule → adjust the cron expression for your timing needs
4. Set File Filters → configure regex patterns for your file types
5. Configure Paths → set source and destination directory structures
6. Test Thoroughly → run with test files before production deployment
7. Enable Monitoring → activate email notifications and logging

**Customization Options**

📅 Scheduling Examples

```
0 2 * * *      # Daily at 2 AM
0 */6 * * *    # Every 6 hours
0 8 * * 1-5    # Weekdays at 8 AM
0 0 1 * *      # Monthly on 1st
*/15 * * * *   # Every 15 minutes
```

🔍 File Filter Patterns (a quick way to smoke-test a pattern is shown at the end of this entry)

```
Documents:   \.(pdf|doc|docx|xls|xlsx)$
Images:      \.(jpg|jpeg|png|gif|svg)$
Data files:  \.(csv|json|xml|sql)$
Archives:    \.(zip|rar|7z|tar|gz)$

Size-based (add as condition):    {{ $json.size > 1048576 }}   # Files > 1MB
Date-based (recent files only):   {{ $json.date > $now.minus({days: 7}) }}
```

📁 Directory Organization

```
// Date-based structure
/files/{{ $now.format('YYYY/MM/DD') }}/

// Type-based structure
/files/{{ $json.name.split('.').pop() }}/

// User-based structure
/users/{{ $json.owner || 'system' }}/

// Hybrid approach
/{{ $now.format('YYYY-MM') }}/{{ $json.type }}/
```

**Template Features**

🛡️ Safety & Security
- Transfer Validation: confirms successful upload before source deletion
- Error Preservation: source files remain intact on any failure
- Audit Trail: complete logging of all operations with timestamps
- Credential Security: secure storage using n8n's credential system
- SFTP Support: encrypted transfers when available
- Retry Logic: automatic recovery from transient network issues

📧 Notification System

Success notifications:
- Confirmation email with transfer details
- File metadata (name, size, transfer time)
- Next scheduled execution information
- Webhook payload for monitoring systems

Error notifications:
- Immediate email alerts with error details
- Troubleshooting steps and recommendations
- Failed file information for manual intervention
- Webhook integration for incident management

📊 Monitoring & Analytics
- Execution Logs: detailed history of all workflow runs
- Performance Metrics: transfer speeds and success rates
- Error Tracking: categorized failure analysis
- Audit Reports: compliance-ready activity logs

**Production Considerations**

🚀 Performance Optimization
- File Size Limits: configure timeouts based on expected file sizes
- Batch Processing: handle multiple files efficiently
- Network Optimization: schedule transfers during off-peak hours
- Resource Monitoring: track n8n server CPU, memory, and disk usage

🔧 Maintenance
- Regular Testing: validate credentials and connectivity
- Log Review: monitor for patterns in errors or performance
- Credential Rotation: update passwords and keys regularly
- Documentation Updates: keep configuration notes current

**Testing Protocol**

🧪 Pre-Production Testing
- Phase 1: test with 1-2 small files (< 1MB)
- Phase 2: test error scenarios (invalid credentials, network issues)
- Phase 3: test with representative file sizes and volumes
- Phase 4: validate email notifications and logging
- Phase 5: full production deployment with monitoring

⚠️ Important Testing Notes
- Disable source deletion during initial testing
- Use test directories to avoid production data impact
- Monitor execution logs carefully during testing
- Validate email delivery to ensure notifications work
- Test rollback procedures before production use

**Support & Documentation**

This template includes:
- 8 comprehensive sticky notes with visual documentation
- Detailed node comments explaining every configuration option
- An error handling guide with common troubleshooting steps
- Security best practices for production deployment
- Performance tuning recommendations for different scenarios

**Technical Specifications**

- n8n Version: 1.0.0+
- Node Count: 17 functional nodes + 8 documentation sticky notes
- Execution Time: 2-10 minutes (depending on file sizes and network speed)
- Memory Usage: 50-200 MB (scales with file sizes)
- Supported Protocols: FTP, SFTP (recommended)
- File Size Limit: up to 150 MB per file (configurable)
- Concurrent Files: processes files sequentially for stability

**Who is this for?**

🎯 Primary Users
- System Administrators managing file transfers between servers
- DevOps Engineers automating deployment and backup processes
- IT Operations Teams handling data migration projects
- Business Process Owners requiring automated file management

💼 Industries & Use Cases
- Healthcare: patient data archiving and compliance reporting
- Financial Services: secure document transfer and regulatory reporting
- Manufacturing: CAD file distribution and inventory data sync
- E-commerce: product image and catalog management
- Media: asset distribution and content delivery automation
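As referenced in the File Filter Patterns section above, you can smoke-test a candidate pattern from any shell before pasting it into the workflow. This is a rough sketch; n8n evaluates the pattern with JavaScript's regex engine rather than Bash's, but for simple extension filters like these the behaviour is the same.

```
# Quick sanity check of a filter regex against sample filenames
pattern='\.(pdf|doc|docx|xls|xlsx)$'
for f in report.pdf notes.docx backup.tar.gz photo.jpg; do
  if [[ "$f" =~ $pattern ]]; then
    echo "MATCH  $f"
  else
    echo "skip   $f"
  fi
done
```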
by Anthony
**Description**

This workflow automates video distribution to 9 social platforms simultaneously using Blotato's API. It includes both a scheduled publisher (checks Google Sheets for videos marked "Ready") and a subworkflow (can be called from other workflows). Perfect for creators and marketers who want to eliminate manual posting across Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, Twitter, Bluesky, and Pinterest.

**How It Works**

Scheduled Publisher Workflow
1. Schedule Trigger – runs daily at 10 PM (configurable).
2. Fetch Video – pulls the video URL and description from Google Sheets where "ReadyToPost" = "Ready".
3. Upload to Blotato – sends the video to Blotato's media service.
4. Broadcast to 9 Platforms – publishes simultaneously to all connected social accounts.
5. Update Sheet – changes "ReadyToPost" to "Finished" so it won't repost.

Subworkflow: Video Publisher (Reusable)
1. Receive Input – gets the URL, title, and description from the parent workflow.
2. Fetch Credentials – pulls the Blotato API key from an n8n Data Table.
3. Upload & Distribute – uploads to Blotato, then posts to all platforms.
4. Completion Signal – returns to the parent workflow when done.

> 💡 Tip: The subworkflow can be called from ANY workflow - great for posting videos generated by AI workflows, webhook triggers, or manual forms.

Test Workflow (Optional)
1. Form Submission – upload a video file with a title and description.
2. Upload to Dropbox – generates a shareable URL via the "[SUB] Dropbox Upload Link" subworkflow.
3. Trigger Publisher – calls the subworkflow above to distribute the video.

**Setup Instructions**

Estimated setup time: 20-25 minutes

Step 1: Blotato Account Setup
1. Create an account at the Blotato Dashboard
2. Connect all your social media accounts (the most time-consuming step)
3. Go to Settings and copy your account IDs for each platform
4. Go to API Settings and copy your API key

Step 2: Configure Workflow

Update social IDs: open the "Assign Social Media IDs" node and replace the placeholder IDs with your actual Blotato account IDs:

```
{
  "instagram_id": "YOUR_ID",
  "youtube_id": "YOUR_ID",
  "tiktok_id": "YOUR_ID",
  ...
}
```

Create the Data Table:
- Create an n8n Data Table named "Credentials"
- Add columns: service and token
- Add a row: service = blotato, token = YOUR_API_KEY

Set up the Google Sheet:
- Create a sheet with columns: URL VIDEO, ReadyToPost, Description, Titre (Title)
- Add video data
- Set ReadyToPost to "Ready" for videos you want to post

Connect your sheet:
- Update the "Get my video" node with your Google Sheet ID

> ⚙️ Pro Tip: If you don't need the scheduled version, just use the subworkflow and call it from other workflows.

**Use Cases**

- AI Video Workflows: automatically post videos generated by Veo, Sora, or other AI models to all platforms.
- Content Schedulers: queue videos in Google Sheets and let the scheduler post them automatically.
- Batch Publishing: generate 10 videos, mark them all "Ready", and let the workflow distribute them.
- Marketing Campaigns: coordinate multi-platform launches with a single click.
- Agencies: manage multiple client accounts by swapping Blotato credentials in the Data Table.

**Customization Options**

- Remove Unused Platforms: disconnect any social media nodes you don't use (speeds up execution).
- Change Schedule: modify the Schedule Trigger to run multiple times per day or on specific days.
- Different File Hosts: replace Dropbox with Google Drive, S3, or Cloudinary in the test workflow.
- Platform-Specific Captions: add IF nodes before each platform to customize descriptions or add hashtags.
- Add Approval Step: insert a WhatsApp or Telegram notification before posting for manual review.
- Watermarks: add a Code node to overlay branding before uploading to Blotato.

**Important Notes**

⚠️ Two workflows in one file:
- Lines 1-600: scheduled publisher (checks Google Sheets)
- Lines 600+: subworkflow (called by other workflows)

⚠️ Data Table vs. hardcoding:
- Scheduled workflow: hardcoded API keys in HTTP nodes
- Subworkflow: uses the Data Table for API keys (recommended approach)

⚠️ Why use the subworkflow?
- Can be called from ANY workflow
- Easier to manage API keys (one place to update)
- More flexible for complex automation systems
by Jimleuk
Tired of being let down by the Google Drive Trigger? Rather not exhaust system resources by polling every minute? Then this workflow is for you!

Google Drive is a great storage option for automation due to its relative simplicity, low cost, and readily available integrations. Using Google Drive as a trigger is the next logical step, but many n8n users quickly realise the built-in Google Drive trigger just isn't that reliable. Disaster! Typically, the workaround is to poll the Google Drive search API at short intervals, but the trade-off is wasted server resources during inactivity. The ideal solution is, of course, push notifications, but they seem quite complicated to implement... or are they?

This template demonstrates that setting up Google push notifications for Google Drive file changes actually isn't that hard! With this approach, Google sends a POST request every time something in a drive changes, which solves both the reliability of events and the efficiency of resources.

**How it works**

- We begin by registering a notification channel (webhook) with the Google Drive API. The two key pieces of information are (a) the webhook URL that notifications will be pushed to and (b) the driveId, because we want to scope the channel to a single location. Good to know: you can register as many channels as you like using HTTP calls, but you have to manage them yourself; there's no Google dashboard for notification channels! (Sample curl calls for both API requests appear after the Requirements section.)
- The registration data, along with the startPageToken, are saved in workflowStaticData. This is a convenient persistence mechanism we can use to hold small bits of data between executions.
- Now, whenever files or folders are created or updated in our target Google Drive, Google sends push notifications to the webhook trigger in this template.
- Once triggered, we still need to call Google Drive's Changes.list API to get the actual change events that were detected; we can do this with the HTTP Request node.
- The Changes API also returns the nextPageToken, a marker establishing where to pick up the next batch of changes. It's important to use this token the next time we request from the Changes API, so we update workflowStaticData with the new value.
- Unfortunately, the changes.list API isn't able to filter change events by folder or action, so be sure to add your own filtering steps to get the files you want.
- Finally, with the valid change events, optionally fetch the file metadata, which gives you more attributes to play with. For example, you may want to know whether the change event was triggered by n8n, in which case you'll want to check the "ModifiedByMe" value.

**How to use**

1. Start with Step 1, fill in the "Set Variables" node, and click the Manual Execute Trigger. This creates a single Google Drive notification channel for a specific drive.
2. Activate the workflow to start receiving events from Google Drive.
3. To test, perform an action, e.g. create a file, on the target drive. Watch the webhook calls come pouring in!
4. Once you have the desired events, finish off this template to do something with the changed files.

**Requirements**

- Google Drive credentials. Note this workflow also works on Shared Drives.
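For reference, the two HTTP calls described in the "How it works" section map onto the Drive v3 REST API roughly as follows. This is a minimal sketch: the access token, drive ID, channel id, and webhook URL are placeholders, and the initial page token comes from the changes.getStartPageToken endpoint.

```
# 1. Register a notification channel (webhook) scoped to one drive
curl -X POST "https://www.googleapis.com/drive/v3/changes/watch?pageToken=$START_PAGE_TOKEN&driveId=$DRIVE_ID&includeItemsFromAllDrives=true&supportsAllDrives=true" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"id": "n8n-drive-channel-1", "type": "web_hook", "address": "https://your-n8n-instance.com/webhook/drive-changes"}'

# 2. After a push notification arrives, list the actual changes and note the nextPageToken
curl "https://www.googleapis.com/drive/v3/changes?pageToken=$PAGE_TOKEN&driveId=$DRIVE_ID&includeItemsFromAllDrives=true&supportsAllDrives=true" \
  -H "Authorization: Bearer $ACCESS_TOKEN"
```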
**Optimising This Workflow**

With bulk actions, you'll notice that Google gradually starts to send increasingly large numbers of push notifications, sometimes numbering in the hundreds! For cloud plan users, this could easily exhaust execution limits if lots of changes are made in the same drive daily. One approach is to implement a throttling mechanism externally to batch events before sending them to n8n. This throttling mechanism is outside the scope of this template but is quite easy to achieve with something like Supabase Edge Functions.