by James Li
Summary Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics, handling the heavy lifting so you can focus on your customers. This workflow template listens for an Onfleet event and sends a Discord message, which you can easily adapt to your own Discord servers and users. Configurations Update the Onfleet trigger node with your own Onfleet credentials; to register for an Onfleet API key, visit https://onfleet.com/signup to get started. You can easily change which Onfleet event to listen to; learn more about Onfleet webhooks with Onfleet Support. Update the Discord node with your Discord server webhook URL and add your own expressions to the Text field.
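In n8n the Discord node handles delivery, but the underlying call is a plain webhook POST with a JSON `content` field. A minimal sketch of the mapping (the `triggerName` and `data.task.shortId` fields reflect Onfleet's webhook payload; the message format is my own):

```python
import json
from urllib import request

def onfleet_event_to_discord(event: dict) -> dict:
    """Build a Discord webhook payload from an Onfleet webhook event."""
    task = event.get("data", {}).get("task", {})
    content = (
        f"Onfleet event: {event.get('triggerName', 'unknown')}\n"
        f"Task: {task.get('shortId', 'n/a')}"
    )
    return {"content": content}

def post_to_discord(webhook_url: str, payload: dict) -> None:
    # Discord webhooks accept a JSON body with a "content" field.
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)

payload = onfleet_event_to_discord(
    {"triggerName": "taskCompleted", "data": {"task": {"shortId": "abc123"}}}
)
```

In the template itself the same mapping is done with expressions in the Discord node's Text field.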
by James Li
Summary Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics, handling the heavy lifting so you can focus on your customers. This workflow template automatically updates the tags on a Shopify order when an Onfleet event occurs. Configurations Update the Onfleet trigger node with your own Onfleet credentials; to register for an Onfleet API key, visit https://onfleet.com/signup to get started. You can easily change which Onfleet event to listen to; learn more about Onfleet webhooks with Onfleet Support. Update the Shopify node with your Shopify credentials and add your own tags to the Shopify order.
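The Shopify node performs the tag update for you; conceptually it amounts to merging tags and sending a PUT to the order endpoint. A sketch under the assumption that tags are stored as one comma-separated string, which is how Shopify's Admin API represents order tags (the order id and helper names are illustrative):

```python
def merged_tags(existing: str, new_tags: list[str]) -> str:
    """Merge new tags into Shopify's comma-separated tag string,
    skipping duplicates and preserving order."""
    current = [t.strip() for t in existing.split(",") if t.strip()]
    for tag in new_tags:
        if tag not in current:
            current.append(tag)
    return ", ".join(current)

def order_update_payload(order_id: int, tags: str) -> dict:
    # Body shape for a REST Admin API order update:
    # PUT /admin/api/<version>/orders/<order_id>.json
    return {"order": {"id": order_id, "tags": tags}}

tags = merged_tags("vip, rush", ["delivered", "rush"])
payload = order_update_payload(450789469, tags)
```

Merging rather than overwriting avoids clobbering tags added by other systems.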
by Vigh Sandor
Overview This n8n workflow provides automated CI/CD testing for Kubernetes applications using KinD (Kubernetes in Docker). It creates temporary infrastructure, runs tests, and cleans up everything automatically. Three-Phase Lifecycle INIT Phase - Infrastructure Setup Installs dependencies (sshpass, Docker, KinD) Creates KinD cluster Installs Helm and Nginx Ingress Installs HAProxy for port forwarding Deploys ArgoCD Applies ApplicationSet TEST Phase - Automated Testing Downloads Robot Framework test script from GitLab Installs Robot Framework and Browser library Executes automated browser tests Packages test results Sends results via Telegram DESTROY Phase - Complete Cleanup Removes HAProxy Deletes KinD cluster Uninstalls KinD Uninstalls Docker Sends completion notification Execution Modes Full Pipeline Mode (progress_only = false) > Automatically progresses through all phases: INIT → TEST → DESTROY Single Phase Mode (progress_only = true) > Executes only the specified phase and stops Prerequisites Local Environment (n8n Host) n8n instance version 1.0 or higher Community node n8n-nodes-robotframework installed Network access to target host and GitLab Minimum 4 GB RAM, 20 GB disk space Remote Target Host Linux server (Ubuntu, Debian, CentOS, Fedora, or Alpine) SSH access with sudo privileges Minimum 8 GB RAM (16 GB recommended) 20 GB free disk space Open ports: 22, 80, 60080, 60443, 56443 External Services GitLab account with OAuth2 application Repository with test files (test.robot, config.yaml, demo-applicationSet.yaml) Telegram Bot for notifications Telegram Chat ID Setup Instructions Step 1: Install Community Node In n8n web interface, navigate to Settings → Community Nodes Install n8n-nodes-robotframework Restart n8n if prompted Step 2: Configure GitLab OAuth2 Create GitLab OAuth2 Application Log in to GitLab Navigate to User Settings → Applications Create new application with redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback 
Grant scopes: read_api, read_repository, read_user Copy Application ID and Secret Configure in n8n Create new GitLab OAuth2 API credential Enter GitLab server URL, Client ID, and Secret Connect and authorize Step 3: Prepare GitLab Repository Create repository structure: your-repo/ ├── test.robot ├── config.yaml ├── demo-applicationSet.yaml └── .gitlab-ci.yml Upload your: Robot Framework test script KinD cluster configuration ArgoCD ApplicationSet manifest Step 4: Configure Telegram Bot Create Bot Open Telegram, search for @BotFather Send /newbot command Save the API token Get Chat ID For personal chat: Send message to your bot Visit: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates Copy the chat ID (positive number) For group chat: Add bot to group Send message mentioning the bot Visit getUpdates endpoint Copy group chat ID (negative number) Configure in n8n Create Telegram API credential Enter bot token Save credential Step 5: Prepare Target Host Verify SSH access: Test connection: ssh -p <port> <username>@<host_ip> Verify sudo: sudo -v The workflow will automatically install dependencies. Step 6: Import and Configure Workflow Import Workflow Copy workflow JSON In n8n, click Workflows → Import from File/URL Import the JSON Configure Parameters Open Set Parameters node and update: | Parameter | Description | Example | |-----------|-------------|---------| | target_host | IP address of remote host | 192.168.1.100 | | target_port | SSH port | 22 | | target_user | SSH username | ubuntu | | target_password | SSH password | your_password | | progress | Starting phase | INIT, TEST, or DESTROY | | progress_only | Execution mode | true or false | | KIND_CONFIG | Path to config.yaml | config.yaml | | ROBOT_SCRIPT | Path to test.robot | test.robot | | ARGOCD_APPSET | Path to ApplicationSet | demo-applicationSet.yaml | > Security: Use n8n credentials or environment variables instead of storing passwords in the workflow. 
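The progress and progress_only parameters described above drive a simple phase state machine; a sketch of the logic (the function name is mine):

```python
VALID_PHASES = ["INIT", "TEST", "DESTROY"]

def phases_to_run(progress: str, progress_only: str) -> list[str]:
    """Return the phases the workflow will execute, mirroring the
    Full Pipeline / Single Phase modes described above."""
    if progress not in VALID_PHASES:
        raise ValueError(f"progress must be one of {VALID_PHASES}")
    if progress_only == "true":
        # Single Phase Mode: run the requested phase and stop
        return [progress]
    # Full Pipeline Mode: run from the starting phase through DESTROY
    return VALID_PHASES[VALID_PHASES.index(progress):]

print(phases_to_run("INIT", "false"))  # → ['INIT', 'TEST', 'DESTROY']
```

This also explains the "Test Existing Infrastructure" example: starting at TEST with progress_only = false still cleans up at the end.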
Configure GitLab Nodes For each of the three GitLab nodes: Set Owner (username or organization) Set Repository name Set File Path (uses parameter from Set Parameters) Set Reference (branch: main or master) Select Credentials (GitLab OAuth2) Configure Telegram Nodes Send ROBOT Script Export Pack node: Set Chat ID Select Credentials Process Finish Report node: Update chat ID in command Step 7: Test and Execute Test individual components first Run full workflow Monitor execution (30-60 minutes total) How to Use Execution Examples Complete Testing Pipeline progress = "INIT" progress_only = "false" Flow: INIT → TEST → DESTROY Setup Infrastructure Only progress = "INIT" progress_only = "true" Flow: INIT → Stop Test Existing Infrastructure progress = "TEST" progress_only = "false" Flow: TEST → DESTROY Cleanup Only progress = "DESTROY" Flow: DESTROY → Complete Trigger Methods 1. Manual Execution Open workflow in n8n Set parameters Click Execute Workflow 2. Scheduled Execution Open Schedule Trigger node Configure time (default: 1 AM daily) Ensure workflow is Active 3. 
Webhook Trigger Configure webhook in GitLab repository Add webhook URL to GitLab CI Monitoring Execution In n8n Interface: View progress in Executions tab Watch node-by-node execution Check output details Via Telegram: Receive test results after TEST phase Receive completion notification after DESTROY phase Execution Timeline: | Phase | Duration | |-------|----------| | INIT | 15-25 minutes | | TEST | 5-10 minutes | | DESTROY | 5-10 minutes | Understanding Test Results After TEST phase, receive testing-export-pack.tar.gz via Telegram containing: log.html - Detailed test execution log report.html - Test summary report output.xml - Machine-readable results screenshots/ - Browser screenshots To view: Download .tar.gz from Telegram Extract: tar -xzf testing-export-pack.tar.gz Open report.html for summary Open log.html for detailed steps Success indicators: All tests marked PASS Screenshots show expected UI states No error messages in logs Failure indicators: Tests marked FAIL Error messages in logs Unexpected UI states in screenshots Configuration Files test.robot Robot Framework test script structure: Uses Browser library Connects to http://autotest.innersite Logs in with autotest/autotest Takes screenshots Runs in headless Chromium config.yaml KinD cluster configuration: 1 control-plane node 1 worker node Port mappings: 60080 (HTTP), 60443 (HTTPS), 56443 (API) Kubernetes version: v1.30.2 demo-applicationSet.yaml ArgoCD Application manifest: Points to Git repository Automatic sync enabled Deploys to default namespace .gitlab-ci.yml Triggers n8n workflow on commits: Installs curl Sends POST request to webhook Troubleshooting SSH Permission Denied Symptoms: Error: Permission denied (publickey,password) Solutions: Verify password is correct Check SSH authentication method Ensure user has sudo privileges Use SSH keys instead of passwords Docker Installation Fails Symptoms: Error: Package docker-ce is not available Solutions: Check OS version compatibility Verify
network connectivity Manually add Docker repository KinD Cluster Creation Timeout Symptoms: Error: Failed to create cluster: timed out Solutions: Check available resources (RAM/CPU/disk) Verify Docker daemon status Pre-pull images Increase timeout ArgoCD Not Accessible Symptoms: Error: Failed to connect to autotest.innersite Solutions: Check HAProxy status: systemctl status haproxy Verify /etc/hosts entry Check Ingress: kubectl get ingress -n argocd Test port forwarding: curl http://127.0.0.1:60080 Robot Framework Tests Fail Symptoms: Error: Chrome failed to start Solutions: Verify Chromium installation Check Browser library: rfbrowser show-trace Ensure correct executablePath in test.robot Install missing dependencies Telegram Notification Not Received Symptoms: Workflow completes but no message Solutions: Verify Chat ID Test Telegram API manually Check bot status Re-add bot to group Workflow Hangs Symptoms: Node shows "Executing..." indefinitely Solutions: Check n8n logs Test SSH connection manually Verify target host status Add timeouts to commands Best Practices Development Workflow Test locally first Run Robot Framework tests on local machine Verify test script syntax Version control Keep all files in Git Use branches for experiments Tag stable versions Incremental changes Make small testable changes Test each change separately Backup data Export workflow regularly Save test results Store credentials securely Production Deployment Separate environments Dev: Frequent testing Staging: Pre-production validation Production: Stable scheduled runs Monitoring Set up execution alerts Monitor host resources Track success/failure rates Disaster recovery Document cleanup procedures Keep backup host ready Test restoration process Security Use SSH keys Rotate credentials quarterly Implement network segmentation Maintenance Schedule | Frequency | Tasks | |-----------|-------| | Daily | Review logs, check notifications | | Weekly | Review failures, check disk space | | 
Monthly | Update dependencies, test recovery | | Quarterly | Rotate credentials, security audit | Advanced Topics Custom Configurations Multi-node clusters: Add more worker nodes for production-like environments Configure resource limits Add custom port mappings Advanced testing: Load testing with multiple iterations Integration testing for full deployment pipeline Chaos engineering with failure injection Integration with Other Tools Monitoring: Prometheus for metrics collection Grafana for visualization Logging: ELK stack for log aggregation Custom dashboards CI/CD Integration: Jenkins pipelines GitHub Actions Custom webhooks Resource Requirements Minimum | Component | CPU | RAM | Disk | |-----------|-----|-----|------| | n8n Host | 2 | 4 GB | 20 GB | | Target Host | 4 | 8 GB | 20 GB | Recommended | Component | CPU | RAM | Disk | |-----------|-----|-----|------| | n8n Host | 4 | 8 GB | 50 GB | | Target Host | 8 | 16 GB | 50 GB | Useful Commands KinD List clusters: kind get clusters Get kubeconfig: kind get kubeconfig --name automate-tst Export logs: kind export logs --name automate-tst Docker List containers: docker ps -a --filter "name=automate-tst" Enter control plane: docker exec -it automate-tst-control-plane bash View logs: docker logs automate-tst-control-plane Kubernetes Get all resources: kubectl get all -A Describe pod: kubectl describe pod -n argocd <pod-name> View logs: kubectl logs -n argocd <pod-name> --follow Port forward: kubectl port-forward -n argocd svc/argocd-server 8080:80 Robot Framework Run tests: robot test.robot Run specific test: robot -t "Test Name" test.robot Generate report: robot --outputdir results test.robot Additional Resources Official Documentation n8n: https://docs.n8n.io KinD: https://kind.sigs.k8s.io ArgoCD: https://argo-cd.readthedocs.io Robot Framework: https://robotframework.org Browser Library: https://marketsquare.github.io/robotframework-browser Community n8n Community: https://community.n8n.io Kubernetes Slack: https://kubernetes.slack.com ArgoCD Slack: https://argoproj.github.io/community/join-slack Robot Framework Forum: https://forum.robotframework.org Related Projects k3s: Lightweight Kubernetes distribution minikube: Local Kubernetes alternative Flux CD: Alternative GitOps tool Playwright: Alternative browser automation
by kartik ramachandran
Monitor Azure subscription resources with cost and usage tracking Template Name Monitor Azure subscription resources with cost and usage tracking Description Automatically connect to your Azure subscription to retrieve all resources and track costs. Generates formatted reports with total spending, top expensive resources, and cost breakdown by type. Who's it for DevOps Engineers, Cloud Architects, Finance Teams, IT Managers, and organizations implementing FinOps practices. How it works Authentication: OAuth2 Service Principal with Azure Resource Discovery: Queries Azure Resource Graph API Cost Retrieval: Fetches data from Cost Management API Data Processing: Merges resources with costs Report Generation: Creates text/HTML reports Output: Excel export, Power BI streaming, or API/Webhook response Set up steps Prerequisites Azure Subscription with appropriate permissions Azure Service Principal with Reader and Cost Management Reader roles Setup 1. Create Azure Service Principal Azure CLI: az ad sp create-for-rbac --name "n8n-cost-monitor" --role "Reader" --scopes /subscriptions/{sub-id} az role assignment create --assignee {client-id} --role "Cost Management Reader" --scope /subscriptions/{sub-id} Or use Azure Portal: Azure AD → App registrations → New registration → Assign roles 2. Configure n8n OAuth2 Credential Credentials → OAuth2 API Grant Type: Client Credentials Access Token URL: https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token Scope: https://management.azure.com/.default 3. Update Workflow Configuration Open "Set Configuration" node Update subscriptionId and tenantId Set timeRange (see options below) Assign OAuth2 credential to HTTP nodes Time Range Options: currentMonth - Current billing month (default) lastMonth - Previous full month last30Days - Last 30 days last90Days - Last 90 days (3 months) last6Months - Last 6 months yearToDate - From Jan 1 to today lastYear - Full previous year (365 days) custom - Manually set startDate and endDate 4. 
Enable Output Options (Optional) Excel: Enable "Export to Excel" node for .xlsx downloads Power BI: Enable "Prepare Power BI Data" and "Send to Power BI" nodes with Push URL API/Webhook: Enable "Respond to Webhook" node and change trigger to Webhook 5. Schedule (Optional) Replace Manual Trigger with Schedule Trigger (daily: 0 9 * * *) Requirements Azure Requirements Active Azure subscription Permissions to create Service Principal Permissions to assign Reader and Cost Management Reader roles n8n Requirements n8n instance (cloud or self-hosted version 1.0+) Ability to create OAuth2 credentials Optional Requirements Slack workspace (for Slack notifications) Email service credentials (for email reports) Database instance (for data storage) How to customize the workflow Enable Output Options The workflow includes three disabled output nodes. To enable them: Click on the node (Excel Export, Power BI, or Webhook Response) Click the three dots menu → Enable Configure the node settings as needed Excel Export Configuration The Excel export node is pre-configured but disabled. 
To use it: // Already configured to export: All resources with their costs Formatted as Excel (.xlsx) Filename includes current date Headers included automatically To customize: Change filename pattern in the node settings Add/remove columns by modifying the data mapping Export to CSV instead by changing the file extension Power BI Integration Step 1: Create Power BI Streaming Dataset Go to Power BI workspace Create New → Streaming dataset → API Define schema: { "summary": { "totalCost": "string", "resourceCount": "number", "period": "string" }, "resources": [ { "resourceName": "string", "resourceType": "string", "cost": "string" } ], "timestamp": "datetime", "reportType": "string" } Copy the Push URL Step 2: Configure n8n Workflow Enable "Prepare Power BI Data" node Enable "Send to Power BI" node Update the URL in "Send to Power BI" node with your Push URL Test the workflow Step 3: Create Power BI Dashboard Create visualizations from the streaming dataset Add cards for Total Cost, Resource Count Add tables for Top Resources Add charts for Cost by Type API Response / Webhook Mode To use the workflow as an API endpoint: Step 1: Change Trigger Delete "Manual Trigger" node Add "Webhook" trigger node Choose GET or POST method Copy the webhook URL Step 2: Enable Response Node Example call: import requests response = requests.get("https://your-n8n-instance.com/webhook/azure-cost-report") data = response.json() print(f"Total Cost: ${data['data']['totalCost']}") Response Format: { "status": "success", "data": { ... } } Enable Output Options Right-click disabled nodes → Enable → Configure settings Filter Resources Modify query in "Query Azure Resources" node: Resources | where resourceGroup contains 'production' | project name, type, location, resourceGroup, tags, id Change Time Range Set timeRange in "Set Configuration" node: timeRange: "last30Days" // Last 30 days timeRange: "last90Days" //
Last quarter timeRange: "last6Months" // Last 6 months timeRange: "yearToDate" // YTD from Jan 1 timeRange: "lastYear" // Previous 365 days timeRange: "custom" // Use custom dates For custom periods, set timeRange to "custom" and manually update: startDate: "2026-01-01" endDate: "2026-01-31" Troubleshooting Common Issues "Authentication failed" Verify your Tenant ID is correct Check that Client ID and Secret are valid Ensure the Service Principal has the required roles "No resources returned" Verify the Subscription ID is correct Check that Service Principal has Reader role Try running the query in Azure Resource Graph Explorer first "No cost data available" Cost data may take 24-48 hours to appear Verify the Cost Management Reader role is assigned Adjust Cost Period In "Set Configuration" node: // Last 7 days startDate: {{ $now.minus({days: 7}).format('yyyy-MM-dd') }} endDate: {{ $now.format('yyyy-MM-dd') }} Add Cost Alerts Add IF node after "Format Report": {{ parseFloat($json.summary.totalCost) > 1000 }} Sample text report excerpt: 2. Microsoft.Sql/servers/databases - 3 resources - $345.67 ... 
Generated by n8n on 1/19/2026, 10:30:00 AM HTML Report Format Includes styled HTML with: Professional color scheme (Azure blue #0078D4) Responsive tables Summary cards with highlighted metrics Sortable columns Even row highlighting for readability Excel Export Columns | Column | Type | Description | |--------|------|-------------| | resourceName | String | Name of the Azure resource | | resourceType | String | Full resource type (e.g., Microsoft.Compute/virtualMachines) | | resourceGroup | String | Resource group name | | location | String | Azure region (e.g., eastus, westus2) | | sku | Object | SKU information (name, tier) | | tags | Object | All resource tags | | cost | Number | Total cost for the period | | costDetails | Array | Detailed daily cost breakdown | Power BI Data Schema Recommended Power BI measures: Total Cost = SUM('CostData'[cost]) Avg Cost Per Resource = DIVIDE([Total Cost], COUNT('CostData'[resourceName])) Cost Variance = [Total Cost] - CALCULATE([Total Cost], DATEADD('CostData'[timestamp], -1, MONTH)) Top 5 Expensive Resources = TOPN(5, 'CostData', 'CostData'[cost], DESC) Integration Examples Python Integration import requests import pandas as pd # Call the n8n webhook url = "https://your-n8n-instance.com/webhook/azure-cost-report" response = requests.get(url) data = response.json() # Convert to DataFrame resources_df = pd.DataFrame(data['data']['allResources']) # Analyze costs print(f"Total Cost: ${data['data']['totalCost']}") print(f"Most expensive resource: {resources_df.iloc[0]}") # Export to local Excel resources_df.to_excel('azure_costs.xlsx', index=False) PowerShell Integration # Call the webhook $url = "https://your-n8n-instance.com/webhook/azure-cost-report" $response = Invoke-RestMethod -Uri $url -Method Get # Display summary Write-Host "Total Cost: $($response.data.totalCost)" -ForegroundColor Green Write-Host "Resource Count: $($response.data.resourceCount)" # Export to CSV $response.data.summary.allResources | Export-Csv -Path "azure-costs.csv" -NoTypeInformation 
# Send alert if cost exceeds threshold if ([decimal]$response.data.totalCost -gt 1000) { Send-MailMessage -To "admin@company.com" ` -Subject "Azure Cost Alert" ` -Body "Current costs: $($response.data.totalCost)" ` -SmtpServer "smtp.company.com" } JavaScript/Node.js Integration const axios = require('axios'); Troubleshooting Quick Reference "Authentication failed": Verify Tenant ID, Client ID, Secret, and Service Principal roles "No resources returned": Check Subscription ID and Reader role assignment "No cost data": Cost data takes 24-48 hours to appear. Verify Cost Management Reader role. "Rate limiting (429)": Add Wait node between API calls or reduce query frequency Resources Azure Resource Graph Cost Management API [n8n Documentation](https://docs.n8n.io/) --- Category: Cloud Management, DevOps, FinOps Difficulty: Intermediate Setup Time: 10-15 minutes n8n Version: 1.0+
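The timeRange options from the setup section can be prototyped outside n8n; a sketch (the function name and exact boundary conventions are my assumptions):

```python
from datetime import date, timedelta

def resolve_time_range(option: str, today: date) -> tuple[date, date]:
    """Map the workflow's timeRange options to (startDate, endDate)."""
    if option == "currentMonth":
        return today.replace(day=1), today
    if option == "last30Days":
        return today - timedelta(days=30), today
    if option == "last90Days":
        return today - timedelta(days=90), today
    if option == "yearToDate":
        return date(today.year, 1, 1), today
    if option == "lastMonth":
        # Last day of the previous month is the day before this month's 1st
        last_prev = today.replace(day=1) - timedelta(days=1)
        return last_prev.replace(day=1), last_prev
    raise ValueError(f"unsupported option: {option}")

start, end = resolve_time_range("lastMonth", date(2026, 1, 19))
```

For the "custom" option the workflow instead reads startDate and endDate directly from the Set Configuration node.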
by sudarshan
How it works Create a user for doing Hybrid Search. Clear existing data, if present. Add documents into the table. Create a hybrid index. Run semantic search on the Documents table for "prioritize teamwork and leadership experience". Run hybrid search for the text entered in the Chat interface. Setup Steps Download the ONNX model all_MiniLM_L12_v2_augmented.zip Extract the ZIP file on the database server into a directory, for example /opt/oracle/onnx. After extraction, the folder contents should look like: bash-4.4$ pwd /opt/oracle/onnx bash-4.4$ ls all_MiniLM_L12_v2.onnx Connect as SYSDBA and create the DBA user -- Create DBA user CREATE USER app_admin IDENTIFIED BY "StrongPassword123" DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users; -- Grant privileges GRANT DBA TO app_admin; GRANT CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE TO app_admin; Create n8n Oracle DB credentials hybridsearchuser → for hybrid search operations dbadocuser → for DBA setup (user and tablespace creation) Run the workflow Click the Manual Trigger; it displays pure semantic search results. Enter search text in the Chat interface; it displays results for vector and keyword search. Note The workflow currently creates the hybrid search user, docuser, with the password visible in plain text inside the n8n Execute SQL node. For better security, consider performing the user creation manually outside n8n. Oracle Database 23ai or 26ai is required. Reference Hybrid Search End-End Example
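For orientation, the model-loading and index-creation steps the workflow performs typically look like the following SQL sketch. The directory name, index DDL, and PARAMETERS syntax here are illustrative assumptions, not the workflow's exact statements; follow the referenced hybrid search example for the authoritative version.

```sql
-- Expose the extracted ONNX directory to the database (illustrative)
CREATE OR REPLACE DIRECTORY onnx_dir AS '/opt/oracle/onnx';

-- Load the embedding model into the database (Oracle 23ai DBMS_VECTOR)
BEGIN
  DBMS_VECTOR.LOAD_ONNX_MODEL(
    directory  => 'ONNX_DIR',
    file_name  => 'all_MiniLM_L12_v2.onnx',
    model_name => 'ALL_MINILM_L12_V2');
END;
/

-- A hybrid vector index combines vector and keyword (Oracle Text) search
CREATE HYBRID VECTOR INDEX documents_hidx ON documents (content)
  PARAMETERS ('model ALL_MINILM_L12_V2');
```

Loading the model in-database is what lets both the semantic and hybrid queries embed text without any external service.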
by Dariusz Koryto
Automated FTP File Migration with Smart Cleanup and Email Notifications Overview This n8n workflow automates the secure transfer of files between FTP servers on a scheduled basis, providing enterprise-grade reliability with comprehensive error handling and dual notification systems (email + webhook). Perfect for data migrations, automated backups, and multi-server file synchronization. What it does This workflow automatically discovers, filters, transfers, and safely removes files between FTP servers while maintaining complete audit trails and sending detailed notifications about every operation. Key Features: Scheduled Execution: Configurable timing (daily, hourly, weekly, or custom cron expressions) Smart File Filtering: Regex-based filtering by file type, size, date, or name patterns Safe Transfer Protocol: Downloads → Uploads → Validates → Cleans up source Dual Notifications: Email alerts + webhook integration for both success and errors Comprehensive Logging: Detailed audit trail of all operations with timestamps Error Recovery: Automatic retry logic with exponential backoff for network issues Production Ready: Built-in safety measures and extensive documentation Use Cases 🏢 Enterprise & IT Operations Data Center Migration: Moving files between different hosting environments Backup Automation: Scheduled transfers to secondary storage locations Multi-Site Synchronization: Keeping files in sync across geographic locations Legacy System Integration: Bridging old and new systems through automated transfers 📊 Business Operations Document Management: Automated transfer of contracts, reports, and business documents Media Asset Distribution: Moving images, videos, and marketing materials between systems Data Pipeline: Part of larger ETL processes for business intelligence Compliance Archiving: Moving files to compliance-approved storage systems 🔧 Development & DevOps Build Artifact Distribution: Deploying compiled applications across
environments Configuration Management: Synchronizing config files between servers Log File Aggregation: Collecting logs from multiple servers for analysis Automated Deployment: Moving release packages to production servers How it works 📋 Workflow Steps Schedule Trigger → Initiates workflow at specified intervals File Discovery → Lists files from source FTP server with optional recursion Smart Filtering → Applies customizable filters (type, size, date, name patterns) Secure Download → Retrieves files to temporary n8n storage with retry logic Safe Upload → Transfers files to destination with directory auto-creation Transfer Validation → Verifies successful upload before proceeding Source Cleanup → Removes original files only after confirmed success Comprehensive Logging → Records all operations with detailed metadata Dual Notifications → Sends email + webhook notifications for success/failure 🔄 Error Handling Flow Network Issues → Automatic retry with exponential backoff (3 attempts) Authentication Problems → Immediate email alert with troubleshooting steps Permission Errors → Detailed logging with recommended actions Disk Space Issues → Safe failure with source file preservation File Corruption → Integrity validation with rollback capability Setup Requirements 🔑 Credentials Needed Source FTP Server Host, port, username, password Read permissions required SFTP recommended for security Destination FTP Server Host, port, username, password Write permissions required Directory creation permissions SMTP Email Server SMTP host and port (e.g., smtp.gmail.com:587) Authentication credentials For success and error notifications Monitoring API (Optional) Webhook URL for system integration Authentication tokens if required ⚙️ Configuration Steps Import Workflow → Load the JSON template into your n8n instance Configure Credentials → Set up all required FTP and SMTP connections Customize Schedule → Adjust cron expression for your timing needs Set File Filters →
Configure regex patterns for your file types Configure Paths → Set source and destination directory structures Test Thoroughly → Run with test files before production deployment Enable Monitoring → Activate email notifications and logging Customization Options 📅 Scheduling Examples 0 2 * * * # Daily at 2 AM 0 */6 * * * # Every 6 hours 0 8 * * 1-5 # Weekdays at 8 AM 0 0 1 * * # Monthly on 1st */15 * * * * # Every 15 minutes 🔍 File Filter Patterns Documents \\.(pdf|doc|docx|xls|xlsx)$ Images \\.(jpg|jpeg|png|gif|svg)$ Data Files \\.(csv|json|xml|sql)$ Archives \\.(zip|rar|7z|tar|gz)$ Size-based (add as condition) {{ $json.size > 1048576 }} # Files > 1MB Date-based (recent files only) {{ $json.date > $now.minus({days: 7}) }} 📁 Directory Organization // Date-based structure /files/{{ $now.format('YYYY/MM/DD') }}/ // Type-based structure /files/{{ $json.name.split('.').pop() }}/ // User-based structure /users/{{ $json.owner || 'system' }}/ // Hybrid approach /{{ $now.format('YYYY-MM') }}/{{ $json.type }}/ Template Features 🛡️ Safety & Security Transfer Validation: Confirms successful upload before source deletion Error Preservation: Source files remain intact on any failure Audit Trail: Complete logging of all operations with timestamps Credential Security: Secure storage using n8n's credential system SFTP Support: Encrypted transfers when available Retry Logic: Automatic recovery from transient network issues 📧 Notification System Success Notifications: Confirmation email with transfer details File metadata (name, size, transfer time) Next scheduled execution information Webhook payload for monitoring systems Error Notifications: Immediate email alerts with error details Troubleshooting steps and recommendations Failed file information for manual intervention Webhook integration for incident management 📊 Monitoring & Analytics Execution Logs: Detailed history of all workflow runs Performance Metrics: Transfer speeds and success rates Error Tracking:
Categorized failure analysis Audit Reports: Compliance-ready activity logs Production Considerations 🚀 Performance Optimization File Size Limits: Configure timeouts based on expected file sizes Batch Processing: Handle multiple files efficiently Network Optimization: Schedule transfers during off-peak hours Resource Monitoring: Track n8n server CPU, memory, and disk usage 🔧 Maintenance Regular Testing: Validate credentials and connectivity Log Review: Monitor for patterns in errors or performance Credential Rotation: Update passwords and keys regularly Documentation Updates: Keep configuration notes current Testing Protocol 🧪 Pre-Production Testing Phase 1: Test with 1-2 small files (< 1MB) Phase 2: Test error scenarios (invalid credentials, network issues) Phase 3: Test with representative file sizes and volumes Phase 4: Validate email notifications and logging Phase 5: Full production deployment with monitoring ⚠️ Important Testing Notes Disable Source Deletion during initial testing Use test directories to avoid production data impact Monitor execution logs carefully during testing Validate email delivery to ensure notifications work Test rollback procedures before production use Support & Documentation This template includes: 8 Comprehensive Sticky Notes with visual documentation Detailed Node Comments explaining every configuration option Error Handling Guide with common troubleshooting steps Security Best Practices for production deployment Performance Tuning recommendations for different scenarios Technical Specifications n8n Version: 1.0.0+ Node Count: 17 functional nodes + 8 documentation sticky notes Execution Time: 2-10 minutes (depending on file sizes and network speed) Memory Usage: 50-200MB (scales with file sizes) Supported Protocols: FTP, SFTP (recommended) File Size Limit: Up to 150MB per file (configurable) Concurrent Files: Processes files sequentially for stability Who is this for? 
🎯 Primary Users System Administrators managing file transfers between servers DevOps Engineers automating deployment and backup processes IT Operations Teams handling data migration projects Business Process Owners requiring automated file management 💼 Industries & Use Cases Healthcare: Patient data archiving and compliance reporting Financial Services: Secure document transfer and regulatory reporting Manufacturing: CAD file distribution and inventory data sync E-commerce: Product image and catalog management Media: Asset distribution and content delivery automation
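The file filter patterns and conditions listed above can be prototyped outside n8n before wiring them into the filter node; a sketch (the combined rule and helper name are illustrative):

```python
import re
from datetime import datetime, timedelta

# "Documents" pattern and 1 MB size threshold from the template docs
DOCUMENTS = re.compile(r"\.(pdf|doc|docx|xls|xlsx)$", re.IGNORECASE)
ONE_MB = 1_048_576

def should_transfer(name: str, size: int, modified: datetime,
                    max_age_days: int = 7) -> bool:
    """Mirror the filter step: name pattern + size + recency checks."""
    recent = modified > datetime.now() - timedelta(days=max_age_days)
    return bool(DOCUMENTS.search(name)) and size > ONE_MB and recent

files = [
    ("report.pdf", 2_000_000, datetime.now()),
    ("notes.txt", 2_000_000, datetime.now()),
    ("old.pdf", 2_000_000, datetime.now() - timedelta(days=30)),
]
selected = [n for n, s, m in files if should_transfer(n, s, m)]
# Only report.pdf passes all three checks
```

Testing filters against a sample listing like this is a cheap way to follow the "disable source deletion during initial testing" advice.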
by oka hironobu
Who is this for Development teams and project maintainers who receive high volumes of GitHub issues and want to automate classification and team notifications. Perfect for open source projects, product teams, and DevOps engineers managing multiple repositories. What it does This workflow automatically triages new GitHub issues using Gemini AI classification. When an issue is created, it extracts the title and description, sends them to Gemini AI for analysis, then automatically adds appropriate labels (bug, feature, documentation, etc.) and priority tags to the issue. Critical and high-priority issues trigger alerts in your #urgent-issues Slack channel, while medium and low-priority items go to #issue-tracker. All classifications are logged to Google Sheets with timestamps for analytics. How to set up Connect your GitHub repository using webhook credentials Get a free Gemini API key from ai.google.dev and add to credentials Set up Slack bot credentials for your workspace Create a Google Sheet with columns: Date, Repository, Issue Number, Title, Author, AI Type, AI Priority, Urgency Score, Summary, URL Replace the Google Sheet ID in the final node with your sheet's ID Configure your Slack channel names (#urgent-issues and #issue-tracker) Requirements GitHub repository with admin access Google Gemini API account (free tier available) Slack workspace with bot permissions Google Sheets access for logging How to customize Modify the Gemini prompt to change classification categories or add custom labels. Adjust priority thresholds in the Switch node to change routing logic. Add additional Slack channels for different teams or severity levels. Configure the Google Sheets columns to capture additional metadata or metrics specific to your workflow.
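The routing rule described above — critical and high priorities to #urgent-issues, everything else to #issue-tracker — can be sketched as follows (the classification JSON shape is an assumption; adapt it to match your Gemini prompt):

```python
import json

def route_issue(classification_json: str) -> dict:
    """Turn a Gemini classification result into labels and a Slack channel."""
    c = json.loads(classification_json)
    urgent = c.get("priority") in ("critical", "high")
    return {
        # Labels applied to the GitHub issue (type + priority)
        "labels": [c.get("type", "needs-triage"), c.get("priority", "unknown")],
        # Switch-node routing: urgent vs. everything else
        "channel": "#urgent-issues" if urgent else "#issue-tracker",
    }

result = route_issue('{"type": "bug", "priority": "high", "urgency_score": 8}')
```

Adjusting the `urgent` tuple is the code-level equivalent of changing the priority thresholds in the Switch node.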
by Matheus Pedrosa
**Workflow Overview**
Keeping API documentation updated is a challenge, especially when your endpoints are powerful n8n webhooks. This project solves that problem by turning your n8n instance into a self-documenting API platform.

This workflow acts as a central engine that scans your entire n8n instance for designated webhooks and automatically generates a single, beautiful, and interactive HTML documentation page. By simply adding a standard Set node with specific metadata to any of your webhook workflows, you can make it instantly appear in your live documentation portal, complete with code examples and response schemas.

The final output is a single, callable URL that serves a professional, dark-themed, and easy-to-navigate documentation page for all your automated webhook endpoints.

**Key Features**
- **Automatic Discovery:** Scans all active workflows on your instance to find endpoints designated for documentation.
- **Simple Configuration via a Set Node:** No custom nodes needed! Just add a Set node named API_DOCS to any workflow you want to document and fill in a simple JSON structure.
- **Rich HTML Output:** Dynamically generates a single, responsive, dark-mode HTML page that looks professional right out of the box.
- **Interactive UI:** Uses Bootstrap accordions, allowing users to expand and collapse each endpoint to keep the view clean and organized.
- **Developer-Friendly:** Automatically generates a ready-to-use cURL command for each endpoint, making testing and integration incredibly fast.
- **Zero Dependencies:** The entire solution runs within n8n. No need to set up or maintain external documentation tools like Swagger UI or Redoc.

**Setup Instructions**
This solution has two parts: configuring the workflows you want to document, and setting up this generator workflow.

**Part 1: In Each Workflow You Want to Document**
1. Next to your Webhook trigger node, add a Set node.
2. Change its name to API_DOCS.
3. Create a single variable named jsonOutput (or docsData) and set its type to JSON.
Paste the following JSON structure into the value field and customize it with your endpoint's details:

```json
{
  "expose": true,
  "webhookPath": "PASTE_YOUR_WEBHOOK_PATH_HERE",
  "method": "POST",
  "summary": "Your Endpoint Summary",
  "description": "A clear description of what this webhook does.",
  "tags": ["Sales", "Automation"],
  "requestBody": { "exampleKey": "exampleValue" },
  "successCode": 200,
  "successResponse": { "status": "success", "message": "Webhook processed correctly." },
  "errorCode": 400,
  "errorResponse": { "status": "error", "message": "Invalid input." }
}
```

**Part 2: In This Generator Workflow**
- **n8n API Node:** Configure the GetWorkflows node with your n8n API credentials. It needs permission to read workflows.
- **Configs Node:** Customize the main settings for your documentation page, like the title (name_doc), version, and a short description.
- **Webhook Trigger:** The Webhook node at the start (default path is /api-doc) provides the final URL for your documentation page. Copy this URL and open it in your browser.

**Required Credentials**
- n8n API credentials, to allow this workflow to read your other workflows.
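The discovery step can be sketched as follows. This is an assumption-laden illustration, not the template's actual code: the exact parameter path where the Set node stores its JSON (`parameters.jsonOutput` here) depends on the Set node version, and the workflow shape mimics what the n8n API returns.

```javascript
// Hypothetical sketch: scan workflows fetched from the n8n API for a
// Set node named API_DOCS and collect every exposed endpoint definition.
function collectEndpoints(workflows) {
  const endpoints = [];
  for (const wf of workflows) {
    const docsNode = (wf.nodes || []).find((n) => n.name === "API_DOCS");
    if (!docsNode) continue;
    // Assumed storage location of the JSON value; adjust to your node version.
    const doc = JSON.parse(docsNode.parameters.jsonOutput);
    if (doc.expose) endpoints.push({ workflow: wf.name, ...doc });
  }
  return endpoints;
}

// Minimal fake workflow list shaped like the n8n API response.
const sample = [{
  name: "Create Lead",
  nodes: [{
    name: "API_DOCS",
    parameters: {
      jsonOutput: JSON.stringify({
        expose: true, webhookPath: "lead", method: "POST", summary: "Create a lead",
      }),
    },
  }],
}];
const eps = collectEndpoints(sample);
```

Each collected object then has everything the HTML generator needs to render one accordion entry and its cURL example.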
by Jimleuk
Tired of being let down by the Google Drive Trigger? Rather not exhaust system resources by polling every minute? Then this workflow is for you!

Google Drive is a great storage option for automation thanks to its relative simplicity, low cost, and readily available integrations. Using Google Drive as a trigger is the next logical step, but many n8n users quickly realise the built-in Google Drive trigger just isn't that reliable. Disaster!

Typically, the workaround is to poll the Google Drive search API in short intervals, but the trade-off is wasted server resources during inactivity. The ideal solution is, of course, push notifications, but they seem quite complicated to implement... or are they?

This template demonstrates that setting up Google push notifications for Google Drive file changes actually isn't that hard! Using this approach, Google sends a POST request every time something in a drive changes, which solves both reliability of events and efficiency of resources.

**How it works**
- We begin by registering a notification channel (webhook) with the Google Drive API. The two key pieces of information are (a) the webhook URL which notifications will be pushed to and (b) the driveId, because we want to scope to a single location. Good to know: you can register as many channels as you like using HTTP calls, but you have to manage them yourself; there's no Google dashboard for notification channels!
- The registration data, along with the startPageToken, are saved in workflowStaticData, a convenient persistence mechanism we can use to hold small bits of data between executions.
- Now, whenever files or folders are created or updated in our target Google Drive, Google will send push notifications to the webhook trigger in this template.
- Once triggered, we still need to call Google Drive's Changes.list to get the actual change events which were detected. We can do this with the HTTP Request node.
- The Changes API also returns the nextPageToken, a marker establishing where to fetch the next batch of changes. It's important to use this token the next time we call the Changes API, so we update the workflowStaticData with the new value.
- Unfortunately, the changes.list API can't filter change events by folder or action, so be sure to add your own filtering steps to get the files you want.
- Finally, with the valid change events, optionally fetch the file metadata, which gives you more attributes to play with. For example, you may want to know if the change event was triggered by n8n itself, in which case you'll want to check the "modifiedByMe" value.

**How to use**
1. Start with Step 1, fill in the "Set Variables" node, and click the Manual Execute Trigger. This creates a single Google Drive notification channel for a specific drive.
2. Activate the workflow to start receiving events from Google Drive.
3. To test, perform an action, e.g. create a file, on the target drive. Watch the webhook calls come pouring in!
4. Once you have the desired events, finish off this template to do something with the changed files.

**Requirements**
- Google Drive credentials. Note this workflow also works on Shared Drives.

**Optimising This Workflow**
With bulk actions, you'll notice that Google gradually starts to send increasingly large numbers of push notifications, sometimes numbering in the hundreds! For cloud plan users, this could easily exhaust execution limits if many changes are made in the same drive daily. One approach is to implement a throttling mechanism externally to batch events before sending them to n8n. Such a mechanism is outside the scope of this template but quite easy to achieve with something like Supabase Edge Functions.
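The channel registration described above maps onto Google's `changes.watch` endpoint. The sketch below shows the request an HTTP Request node would send, under the assumption that you have Google Drive OAuth2 credentials attached; the channel id naming is arbitrary and the webhook URL is a placeholder for your n8n webhook trigger.

```javascript
// Minimal sketch of building the notification-channel registration
// (POST drive/v3/changes/watch). Not the template's exact node config.
function buildWatchRequest(startPageToken, driveId, webhookUrl) {
  return {
    url:
      `https://www.googleapis.com/drive/v3/changes/watch` +
      `?pageToken=${startPageToken}` +
      `&driveId=${driveId}` +
      `&includeItemsFromAllDrives=true&supportsAllDrives=true`,
    body: {
      id: `n8n-drive-channel-${startPageToken}`, // any unique channel id
      type: "web_hook",
      address: webhookUrl, // your n8n webhook trigger URL
    },
  };
}

const req = buildWatchRequest(
  "token1",
  "0AExampleDriveId",
  "https://your-n8n.example.com/webhook/drive-events"
);
```

The `startPageToken` itself comes from a prior call to `changes/startPageToken`; both it and the channel details are what the template stores in workflowStaticData.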
by Gaetano Castaldo
**Web-to-Odoo Lead Funnel (UTM-ready)**
Create crm.lead records in Odoo from any webform via a secure webhook. The workflow validates required fields, resolves UTMs by name (source, medium, campaign), and writes standard lead fields in Odoo. Clean, portable, and production-ready.

**Key features**
- ✅ Secure Webhook with Header Auth (x-webhook-token)
- ✅ Required fields validation (firstname, lastname, email)
- ✅ UTM lookup by name (utm.source, utm.medium, utm.campaign)
- ✅ Clean consolidation before create (name, contact_name, email_from, phone, description, type, UTM IDs)
- ✅ Clear HTTP responses: 200 success / 400 bad request

**Prerequisites**
- Odoo with Leads enabled (CRM → Settings → Leads)
- **Odoo API Key** for your user (use it as the password)
- n8n Odoo credentials: URL, DB name, Login, API Key
- **Public URL** for the webhook (ngrok/Cloudflare/reverse proxy). Ensure WEBHOOK_URL / N8N_HOST / N8N_PROTOCOL / N8N_PORT are consistent
- **Header Auth secret** (e.g., x-webhook-token: <your-secret>)

**How it works**
1. **Ingest** – The Webhook receives a POST at /webhook(-test)/lead-webform with Header Auth.
2. **Validate** – An IF node checks required fields; if any are missing, respond with 400 Bad Request.
3. **UTM lookup** – Three Odoo getAll queries fetch IDs by name: utm.source → source_id, utm.medium → medium_id, utm.campaign → campaign_id. If a record is not found, the corresponding ID remains null.
4. **Consolidate** – Merge + Code nodes produce a single clean object: { name, contact_name, email_from, phone, description, type: "lead", campaign_id, source_id, medium_id }
5. **Create in Odoo** – Odoo node (crm.lead → create) writes the lead with standard fields + UTM Many2one IDs.
6. **Respond** – Success node returns 200 with { status: "ok", lead_id }.
**Payload (JSON)**
Required: firstname, lastname, email
Optional: phone, notes, source, medium, campaign

```json
{
  "firstname": "John",
  "lastname": "Doe",
  "email": "john.doe@example.com",
  "phone": "+393331234567",
  "notes": "Wants a demo",
  "source": "Ads",
  "medium": "Website",
  "campaign": "Spring 2025"
}
```

**Quick test**

```bash
curl -X POST "https://<host>/webhook-test/lead-webform" \
  -H "Content-Type: application/json" \
  -H "x-webhook-token: <secret>" \
  -d '{"firstname":"John","lastname":"Doe","email":"john@ex.com", "phone":"+39333...", "notes":"Demo", "source":"Ads","medium":"Website","campaign":"Spring 2025"}'
```

**Notes**
- Recent Odoo versions do not use the mobile field on leads/partners: use phone instead.
- Keep secrets and credentials out of the template; the user will set their own after import.
- If you want to auto-create missing UTM records, add an IF after each getAll and a create on utm.*.
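The "Consolidate" step can be sketched as the Code node below. It is illustrative rather than the template's exact code: input shapes are assumed from the payload spec above, and missing UTM lookups fall back to null as described.

```javascript
// Sketch of the consolidation Code node: merge the webform payload with
// the three UTM lookup results into the object crm.lead -> create expects.
function consolidate(form, sourceId, mediumId, campaignId) {
  const fullName = `${form.firstname} ${form.lastname}`;
  return {
    name: fullName,           // lead title
    contact_name: fullName,   // contact on the lead
    email_from: form.email,
    phone: form.phone || null,
    description: form.notes || "",
    type: "lead",
    source_id: sourceId ?? null,     // Many2one IDs from the getAll lookups;
    medium_id: mediumId ?? null,     // null when no matching utm.* record
    campaign_id: campaignId ?? null, // was found by name
  };
}

const lead = consolidate(
  { firstname: "John", lastname: "Doe", email: "john@ex.com" },
  7,    // utm.source id found by name
  null, // no utm.medium match
  null  // no utm.campaign match
);
```

The Odoo node then writes this object directly; only `source_id`, `medium_id`, and `campaign_id` need to be integers (or null), since they are Many2one fields.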
by Artem Makarov
**About this template**
This template demonstrates how to trace the observations per execution ID in Langfuse via the ingestion API.

**Good to know**
- Endpoint: https://cloud.langfuse.com/api/public/ingestion
- Auth is a Generic Credential Type with Basic Auth: username = your_public_key, password = your_secret_key.

**How it works**
1. **Trigger**: the workflow is executed by another workflow after an AI run finishes (input parameter execution_id).
2. **Remove duplicates**: ensures we only process each execution_id once (optional but recommended).
3. **Wait to get execution data**: delay (60-80 secs) so totals and per-step metrics are available.
4. **Get execution**: fetches workflow metadata and token totals.
5. **Code: structure execution data**: normalizes your run into an array of perModelRuns with model, tokens, latency, and text previews.
6. **Split Out → Loop Over Items**: iterates each run step.
7. **Code: prepare JSON for Langfuse**: builds a batch with:
   - trace-create (stable id trace-<executionId>, grouped into session-<workflowId>)
   - generation-create (model, input/output, usage, timings from latency)
8. **HTTP Request to Langfuse**: posts the batch. Optional short Wait between sends.

**Requirements**
- Langfuse Cloud project and API keys
- n8n instance with the HTTP node

**Customizing**
- Add span-create and set parentObservationId on the generation to nest under spans.
- Add scores or feedback later via score-create.
- Replace the sessionId strategy (per workflow, per user, etc.).
- If some steps don't produce tokens, compute and set usage yourself before sending.
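The "prepare JSON for Langfuse" step can be sketched as below. This follows the trace-create / generation-create event structure of the Langfuse ingestion API, but the per-run field names (`inputPreview`, `promptTokens`, etc.) are assumptions about what the normalization step produces, not the template's exact code.

```javascript
// Sketch: build one ingestion batch per run step, keyed to the n8n
// execution id (stable trace id) and workflow id (session grouping).
function buildBatch(executionId, workflowId, run) {
  const now = new Date().toISOString();
  return {
    batch: [
      {
        id: `evt-trace-${executionId}`, // event id (deduplicated by Langfuse)
        type: "trace-create",
        timestamp: now,
        body: {
          id: `trace-${executionId}`,          // stable per-execution trace
          sessionId: `session-${workflowId}`,  // groups traces per workflow
        },
      },
      {
        id: `evt-gen-${executionId}`,
        type: "generation-create",
        timestamp: now,
        body: {
          traceId: `trace-${executionId}`,
          model: run.model,
          input: run.inputPreview,
          output: run.outputPreview,
          usage: { input: run.promptTokens, output: run.completionTokens },
        },
      },
    ],
  };
}

const batch = buildBatch("123", "wf-9", {
  model: "gpt-4o-mini",
  inputPreview: "Summarize this text...",
  outputPreview: "Here is the summary...",
  promptTokens: 52,
  completionTokens: 31,
});
```

The HTTP Request node then POSTs this object to the ingestion endpoint with Basic Auth; because the trace id is stable, repeated sends for the same execution attach to the same trace.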
by Masaki Go
**About This Template**
This workflow automatically fetches the Nikkei 225 closing price every weekday and sends a formatted message to a list of users on LINE. This is perfect for individuals or teams who need to track the market's daily performance without manual data checking.

**How It Works**
1. **Schedule Trigger**: runs the workflow automatically every weekday at 4 PM JST (Tokyo time), just after the market closes.
2. **Get Data**: an HTTP Request node fetches the latest Nikkei 225 data (closing price, change, %) from a data API.
3. **Prepare Payload**: a Code node formats this data into a user-friendly message and prepares the JSON payload for the LINE Messaging API, including a list of user IDs.
4. **Send to LINE**: an HTTP Request node sends the formatted message to all specified users via the LINE multicast API endpoint.

**Who It's For**
- Anyone who wants to receive daily stock market alerts.
- Teams that need to share financial data internally.
- Developers looking for a simple example of an API-to-LINE workflow.

**Requirements**
- An n8n account.
- A LINE Official Account & Messaging API access token.
- An API endpoint to get Nikkei 225 data. (The one in the template is a temporary example.)

**Setup Steps**
1. **Add LINE Credentials**: in the "Send to LINE via HTTP" node, edit the "Authorization" header to include your own LINE Messaging API Bearer Token.
2. **Add User IDs**: in the "Prepare LINE API Payload" (Code) node, edit the userIds array to add all the LINE User IDs you want to send messages to.
3. **Update Data API**: the URL in the "Get Nikkei 225 Data" node is a temporary example. Replace it with your own persistent API URL (e.g., from a public provider or your own server).

**Customization Options**
- **Change Schedule:** edit the "Every Weekday at 4 PM JST" node to run at a different time. (Note: 4 PM JST is 07:00 UTC, which is what the cron 0 7 * * 1-5 means.)
- **Change Message Format:** edit the message variable inside the "Prepare LINE API Payload" (Code) node to change the text of the LINE message.
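The "Prepare LINE API Payload" Code node can be sketched like this. The message wording and the input field names (`close`, `change`, `changePercent`) are illustrative assumptions about what the data API returns; the body shape (`to` + `messages`) matches LINE's multicast endpoint.

```javascript
// Sketch of the payload-building Code node: format the market data into
// a text message and wrap it in the body LINE's multicast API expects.
function buildLinePayload(data, userIds) {
  const sign = data.change >= 0 ? "+" : ""; // negatives already carry "-"
  const message =
    `📈 Nikkei 225 Close\n` +
    `Price: ${data.close} JPY\n` +
    `Change: ${sign}${data.change} (${sign}${data.changePercent}%)`;
  return {
    to: userIds, // LINE allows up to 500 user IDs per multicast call
    messages: [{ type: "text", text: message }],
  };
}

const payload = buildLinePayload(
  { close: 38900.5, change: -120.3, changePercent: -0.31 },
  ["U1234567890abcdef", "Ufedcba0987654321"] // placeholder LINE user IDs
);
```

The downstream HTTP Request node then POSTs this payload to https://api.line.me/v2/bot/message/multicast with your Bearer token in the Authorization header.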