by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template automatically updates the tags on a Shopify order when an Onfleet event occurs.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks from Onfleet Support.
- Update the Shopify node with your Shopify credentials and add your own tags to the Shopify order.
by Harshil Agrawal
This workflow translates cocktail instructions using DeepL.

- HTTP Request node: Makes a GET request to https://www.thecocktaildb.com/api/json/v1/1/random.php to fetch a random cocktail and passes the result on to the next node in the workflow. Based on your use case, replace this node with the node from which you receive your data.
- DeepL node: Translates the cocktail instructions received from the previous node into French. To translate the instructions into a different language, select your language instead.
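For reference, a quick way to inspect the data the HTTP Request node returns before it reaches DeepL; the strDrink and strInstructions field names are from TheCocktailDB's public API and worth confirming against a live response:

```bash
# Fetch one random cocktail and show the fields the DeepL node would translate
curl -s https://www.thecocktaildb.com/api/json/v1/1/random.php | jq '.drinks[0] | {strDrink, strInstructions}'
```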
by Vigh Sandor
**Overview**

This n8n workflow provides automated CI/CD testing for Kubernetes applications using KinD (Kubernetes in Docker). It creates temporary infrastructure, runs tests, and cleans up everything automatically.

**Three-Phase Lifecycle**

INIT Phase - Infrastructure Setup
- Installs dependencies (sshpass, Docker, KinD)
- Creates KinD cluster
- Installs Helm and Nginx Ingress
- Installs HAProxy for port forwarding
- Deploys ArgoCD
- Applies ApplicationSet

TEST Phase - Automated Testing
- Downloads Robot Framework test script from GitLab
- Installs Robot Framework and Browser library
- Executes automated browser tests
- Packages test results
- Sends results via Telegram

DESTROY Phase - Complete Cleanup
- Removes HAProxy
- Deletes KinD cluster
- Uninstalls KinD
- Uninstalls Docker
- Sends completion notification

**Execution Modes**

Full Pipeline Mode (progress_only = false)
> Automatically progresses through all phases: INIT → TEST → DESTROY

Single Phase Mode (progress_only = true)
> Executes only the specified phase and stops

**Prerequisites**

Local Environment (n8n Host)
- n8n instance version 1.0 or higher
- Community node n8n-nodes-robotframework installed
- Network access to target host and GitLab
- Minimum 4 GB RAM, 20 GB disk space

Remote Target Host
- Linux server (Ubuntu, Debian, CentOS, Fedora, or Alpine)
- SSH access with sudo privileges
- Minimum 8 GB RAM (16 GB recommended)
- 20 GB free disk space
- Open ports: 22, 80, 60080, 60443, 56443

External Services
- GitLab account with OAuth2 application
- Repository with test files (test.robot, config.yaml, demo-applicationSet.yaml)
- Telegram Bot for notifications
- Telegram Chat ID

**Setup Instructions**

Step 1: Install Community Node
1. In the n8n web interface, navigate to Settings → Community Nodes
2. Install n8n-nodes-robotframework
3. Restart n8n if prompted

Step 2: Configure GitLab OAuth2

Create a GitLab OAuth2 application:
1. Log in to GitLab
2. Navigate to User Settings → Applications
3. Create a new application with redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback
4. Grant scopes: read_api, read_repository, read_user
5. Copy the Application ID and Secret

Configure in n8n:
1. Create a new GitLab OAuth2 API credential
2. Enter the GitLab server URL, Client ID, and Secret
3. Connect and authorize

Step 3: Prepare GitLab Repository

Create the repository structure:

    your-repo/
    ├── test.robot
    ├── config.yaml
    ├── demo-applicationSet.yaml
    └── .gitlab-ci.yml

Upload your:
- Robot Framework test script
- KinD cluster configuration (sketched below)
- ArgoCD ApplicationSet manifest
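For orientation, a minimal sketch of what the config.yaml KinD cluster definition might look like, matching the topology and port mappings described later in this template (one control-plane node, one worker, host ports 60080/60443/56443, Kubernetes v1.30.2). Everything here, including the cluster name taken from the Useful Commands section, is illustrative rather than the author's exact file:

```yaml
# Hypothetical config.yaml for the KinD cluster used by this workflow.
# The node image tag for Kubernetes v1.30.2 is an assumption; check the KinD
# release notes for the exact kindest/node tag to use.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: automate-tst
nodes:
  - role: control-plane
    image: kindest/node:v1.30.2
    extraPortMappings:
      - containerPort: 80     # Ingress HTTP, exposed on host port 60080
        hostPort: 60080
        protocol: TCP
      - containerPort: 443    # Ingress HTTPS, exposed on host port 60443
        hostPort: 60443
        protocol: TCP
  - role: worker
    image: kindest/node:v1.30.2
networking:
  apiServerAddress: "0.0.0.0"
  apiServerPort: 56443        # Kubernetes API exposed on host port 56443
```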
Step 4: Configure Telegram Bot

Create the bot:
1. Open Telegram and search for @BotFather
2. Send the /newbot command
3. Save the API token

Get the Chat ID:
- For a personal chat: send a message to your bot, visit https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates, and copy the chat ID (a positive number)
- For a group chat: add the bot to the group, send a message mentioning the bot, visit the getUpdates endpoint, and copy the group chat ID (a negative number)

Configure in n8n:
1. Create a Telegram API credential
2. Enter the bot token
3. Save the credential

Step 5: Prepare Target Host

Verify SSH access:
- Test the connection: ssh -p <port> <username>@<host_ip>
- Verify sudo: sudo -v

The workflow will automatically install dependencies.

Step 6: Import and Configure Workflow

Import the workflow:
1. Copy the workflow JSON
2. In n8n, click Workflows → Import from File/URL
3. Import the JSON

Configure parameters: open the Set Parameters node and update the following.

| Parameter | Description | Example |
|-----------|-------------|---------|
| target_host | IP address of remote host | 192.168.1.100 |
| target_port | SSH port | 22 |
| target_user | SSH username | ubuntu |
| target_password | SSH password | your_password |
| progress | Starting phase | INIT, TEST, or DESTROY |
| progress_only | Execution mode | true or false |
| KIND_CONFIG | Path to config.yaml | config.yaml |
| ROBOT_SCRIPT | Path to test.robot | test.robot |
| ARGOCD_APPSET | Path to ApplicationSet | demo-applicationSet.yaml |

> Security: Use n8n credentials or environment variables instead of storing passwords in the workflow.

Configure the GitLab nodes. For each of the three GitLab nodes:
- Set Owner (username or organization)
- Set Repository name
- Set File Path (uses the parameter from Set Parameters)
- Set Reference (branch: main or master)
- Select Credentials (GitLab OAuth2)

Configure the Telegram nodes:
- Send ROBOT Script Export Pack node: set the Chat ID and select credentials
- Process Finish Report node: update the chat ID in the command

Step 7: Test and Execute
1. Test individual components first
2. Run the full workflow
3. Monitor execution (30-60 minutes total)

**How to Use**

Execution Examples

Complete testing pipeline:

    progress = "INIT"
    progress_only = "false"

Flow: INIT → TEST → DESTROY

Setup infrastructure only:

    progress = "INIT"
    progress_only = "true"

Flow: INIT → Stop

Test existing infrastructure:

    progress = "TEST"
    progress_only = "false"

Flow: TEST → DESTROY

Cleanup only:

    progress = "DESTROY"

Flow: DESTROY → Complete

Trigger Methods
1. Manual Execution: open the workflow in n8n, set the parameters, and click Execute Workflow
2. Scheduled Execution: open the Schedule Trigger node, configure the time (default: 1 AM daily), and ensure the workflow is Active
3. Webhook Trigger: configure a webhook in the GitLab repository and add the webhook URL to GitLab CI (see the sketch below)
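As a sketch of the webhook trigger method, a minimal .gitlab-ci.yml along these lines would call the n8n webhook on each commit to the main branch. The job name, image, and the N8N_WEBHOOK_URL CI/CD variable are assumptions for illustration; the template's own gitlab-ci.yml (described under Configuration Files) follows the same install-curl-then-POST pattern:

```yaml
# Hypothetical .gitlab-ci.yml job that triggers the n8n workflow on commits.
# N8N_WEBHOOK_URL would be defined as a CI/CD variable in the GitLab project.
stages:
  - trigger

trigger-n8n-tests:
  stage: trigger
  image: alpine:latest
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - apk add --no-cache curl
    - |
      curl -X POST --fail "$N8N_WEBHOOK_URL" \
        -H "Content-Type: application/json" \
        --data '{"source": "gitlab-ci"}'
```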
**Monitoring Execution**

In the n8n interface:
- View progress in the Executions tab
- Watch node-by-node execution
- Check output details

Via Telegram:
- Receive test results after the TEST phase
- Receive a completion notification after the DESTROY phase

Execution timeline:

| Phase | Duration |
|-------|----------|
| INIT | 15-25 minutes |
| TEST | 5-10 minutes |
| DESTROY | 5-10 minutes |

**Understanding Test Results**

After the TEST phase, you receive testing-export-pack.tar.gz via Telegram, containing:
- log.html - detailed test execution log
- report.html - test summary report
- output.xml - machine-readable results
- screenshots/ - browser screenshots

To view:
1. Download the .tar.gz from Telegram
2. Extract it: tar -xzf testing-export-pack.tar.gz
3. Open report.html for the summary
4. Open log.html for detailed steps

Success indicators:
- All tests marked PASS
- Screenshots show expected UI states
- No error messages in logs

Failure indicators:
- Tests marked FAIL
- Error messages in logs
- Unexpected UI states in screenshots

**Configuration Files**

test.robot (Robot Framework test script):
- Uses the Browser library
- Connects to http://autotest.innersite
- Logs in with autotest/autotest
- Takes screenshots
- Runs in headless Chromium

config.yaml (KinD cluster configuration):
- 1 control-plane node
- 1 worker node
- Port mappings: 60080 (HTTP), 60443 (HTTPS), 56443 (API)
- Kubernetes version: v1.30.2

demo-applicationSet.yaml (ArgoCD Application manifest):
- Points to the Git repository
- Automatic sync enabled
- Deploys to the default namespace

.gitlab-ci.yml (triggers the n8n workflow on commits):
- Installs curl
- Sends a POST request to the webhook

**Troubleshooting**

SSH Permission Denied
- Symptoms: Error: Permission denied (publickey,password)
- Solutions: verify the password is correct; check the SSH authentication method; ensure the user has sudo privileges; use SSH keys instead of passwords

Docker Installation Fails
- Symptoms: Error: Package docker-ce is not available
- Solutions: check OS version compatibility; verify network connectivity; manually add the Docker repository

KinD Cluster Creation Timeout
- Symptoms: Error: Failed to create cluster: timed out
- Solutions: check available resources (RAM/CPU/disk); verify the Docker daemon status; pre-pull images; increase the timeout

ArgoCD Not Accessible
- Symptoms: Error: Failed to connect to autotest.innersite
- Solutions: check HAProxy status (systemctl status haproxy); verify the /etc/hosts entry; check the Ingress (kubectl get ingress -n argocd); test port forwarding (curl http://127.0.0.1:60080)

Robot Framework Tests Fail
- Symptoms: Error: Chrome failed to start
- Solutions: verify the Chromium installation; check the Browser library (rfbrowser show-trace); ensure the correct executablePath in test.robot; install missing dependencies

Telegram Notification Not Received
- Symptoms: the workflow completes but no message arrives
- Solutions: verify the Chat ID; test the Telegram API manually (see the sketch below); check the bot status; re-add the bot to the group
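To test the Telegram API manually, as suggested above, a quick check against the Bot API looks like this; substitute your own bot token and chat ID:

```bash
# Should return {"ok":true,...} and deliver a message if the token and chat ID are valid
curl -s "https://api.telegram.org/bot<YOUR_TOKEN>/sendMessage" \
  -d chat_id=<YOUR_CHAT_ID> \
  -d text="n8n connectivity test"
```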
Workflow Hangs
- Symptoms: a node shows "Executing..." indefinitely
- Solutions: check the n8n logs; test the SSH connection manually; verify the target host status; add timeouts to commands

**Best Practices**

Development Workflow
- Test locally first: run Robot Framework tests on a local machine and verify the test script syntax
- Version control: keep all files in Git, use branches for experiments, tag stable versions
- Incremental changes: make small, testable changes and test each change separately
- Backup data: export the workflow regularly, save test results, store credentials securely

Production Deployment
- Separate environments (Dev: frequent testing; Staging: pre-production validation; Production: stable scheduled runs)
- Monitoring: set up execution alerts, monitor host resources, track success/failure rates
- Disaster recovery: document cleanup procedures, keep a backup host ready, test the restoration process
- Security: use SSH keys, rotate credentials quarterly, implement network segmentation

Maintenance Schedule

| Frequency | Tasks |
|-----------|-------|
| Daily | Review logs, check notifications |
| Weekly | Review failures, check disk space |
| Monthly | Update dependencies, test recovery |
| Quarterly | Rotate credentials, security audit |

**Advanced Topics**

Custom Configurations

Multi-node clusters:
- Add more worker nodes for production-like environments
- Configure resource limits
- Add custom port mappings

Advanced testing:
- Load testing with multiple iterations
- Integration testing for the full deployment pipeline
- Chaos engineering with failure injection

Integration with Other Tools

Monitoring:
- Prometheus for metrics collection
- Grafana for visualization

Logging:
- ELK stack for log aggregation
- Custom dashboards

CI/CD integration:
- Jenkins pipelines
- GitHub Actions
- Custom webhooks

**Resource Requirements**

Minimum:

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 2 | 4 GB | 20 GB |
| Target Host | 4 | 8 GB | 20 GB |

Recommended:

| Component | CPU | RAM | Disk |
|-----------|-----|-----|------|
| n8n Host | 4 | 8 GB | 50 GB |
| Target Host | 8 | 16 GB | 50 GB |

**Useful Commands**

KinD:
- List clusters: kind get clusters
- Get kubeconfig: kind get kubeconfig --name automate-tst
- Export logs: kind export logs --name automate-tst

Docker:
- List containers: docker ps -a --filter "name=automate-tst"
- Enter the control plane: docker exec -it automate-tst-control-plane bash
- View logs: docker logs automate-tst-control-plane

Kubernetes:
- Get all resources: kubectl get all -A
- Describe a pod: kubectl describe pod -n argocd <pod-name>
- View logs: kubectl logs -n argocd <pod-name> --follow
- Port forward: kubectl port-forward -n argocd svc/argocd-server 8080:80

Robot Framework:
- Run tests: robot test.robot
- Run a specific test: robot -t "Test Name" test.robot
- Generate a report: robot --outputdir results test.robot

**Additional Resources**

Official documentation:
- n8n: https://docs.n8n.io
- KinD: https://kind.sigs.k8s.io
- ArgoCD: https://argo-cd.readthedocs.io
- Robot Framework: https://robotframework.org
- Browser Library: https://marketsquare.github.io/robotframework-browser

Community:
- n8n Community: https://community.n8n.io
- Kubernetes Slack: https://kubernetes.slack.com
- ArgoCD Slack: https://argoproj.github.io/community/join-slack
- Robot Framework Forum: https://forum.robotframework.org

Related projects:
- k3s: lightweight Kubernetes distribution
- minikube: local Kubernetes alternative
- Flux CD: alternative GitOps tool
- Playwright: alternative browser automation
by malgamves
A workflow that lets you receive daily affirmations via Telegram by querying a REST API. The workflow is triggered by a Cron node. I used the affirmations.dev API.
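For reference, a minimal sketch of the call the HTTP Request node makes. The response shape (a single affirmation field) reflects my reading of the affirmations.dev API and is worth verifying against a live response:

```bash
# Fetch one affirmation; extract the text that the Telegram node would send
curl -s https://www.affirmations.dev/ | jq -r '.affirmation'
```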
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template automatically creates an Onfleet delivery task when you add a new row in Airtable.

**Configurations**

- Update the Airtable trigger node with your own Airtable Base ID and table name.
- Configure how often the Airtable trigger polls; the default in this template is every 10 minutes.
- Update the Onfleet node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change how the Onfleet task is created by mapping additional data from the Airtable record.
- The Airtable format should adhere to Onfleet's task import functionality; for more details, visit the Onfleet Support Center.
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template automatically creates an Onfleet delivery task when a new fulfillment is created for a Shopify order.

**Configurations**

- Update the Shopify trigger node with your own Shopify credentials.
- Update the Onfleet node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change how the Onfleet task is created by mapping additional data from the Shopify fulfillment object.
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template listens to an Onfleet event and sends a WhatsApp message. You can easily streamline this with the recipient of the delivery or your customer support numbers.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks from Onfleet Support.
- Update the Twilio node with your own Twilio credentials, add your own expression for the To number, or simply source the recipient's phone number from the Onfleet event.
- Toggle To Whatsapp to OFF if you want to use Twilio's SMS API instead.
by James Li
**Summary**

Onfleet is last-mile delivery software that provides end-to-end route planning, dispatch, communication, and analytics to handle the heavy lifting so you can focus on your customers. This workflow template listens to an Onfleet event and interacts with the QuickBooks API. You can easily streamline this with your QuickBooks invoices or other entities. Typically, you would create an invoice when an Onfleet task is created, allowing your customers to pay ahead of an upcoming delivery.

**Configurations**

- Update the Onfleet trigger node with your own Onfleet credentials. To register for an Onfleet API key, visit https://onfleet.com/signup to get started.
- You can easily change which Onfleet event to listen to. Learn more about Onfleet webhooks from Onfleet Support.
- Update the QuickBooks Online node with your QuickBooks credentials.
by sudarshan
**How it works**

1. Create a user for performing hybrid search.
2. Clear existing data, if present.
3. Add documents into the table.
4. Create a hybrid index.
5. Run a semantic search on the Documents table for "prioritize teamwork and leadership experience".
6. Run a hybrid search for the text entered in the Chat interface.

**Setup Steps**

1. Download the ONNX model all_MiniLM_L12_v2_augmented.zip.
2. Extract the ZIP file on the database server into a directory, for example /opt/oracle/onnx. After extraction, the folder contents should look like:

        bash-4.4$ pwd
        /opt/oracle/onnx
        bash-4.4$ ls
        all_MiniLM_L12_v2.onnx

3. Connect as SYSDBA and create the DBA user:

        -- Create DBA user
        CREATE USER app_admin IDENTIFIED BY "StrongPassword123"
          DEFAULT TABLESPACE users
          TEMPORARY TABLESPACE temp
          QUOTA UNLIMITED ON users;

        -- Grant privileges
        GRANT DBA TO app_admin;
        GRANT CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE TO app_admin;

4. Create n8n Oracle DB credentials:
   - hybridsearchuser → for hybrid search operations
   - dbadocuser → for DBA setup (user and tablespace creation)
5. Run the workflow:
   - Click the manual trigger: it displays pure semantic search results.
   - Enter search text in the Chat interface: it displays results for vector and keyword search.

**Note**

- The workflow currently creates the hybrid search user, docuser, with the password visible in plain text inside the n8n Execute SQL node. For better security, consider performing the user creation manually outside n8n.
- Oracle Database 23ai or 26ai must be used.

**Reference**

Hybrid Search End-to-End Example
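As a rough orientation for what the Execute SQL nodes do after the DBA setup above, the sketch below loads the extracted ONNX model and creates a hybrid vector index. Table and column names are illustrative, and the exact DBMS_VECTOR.LOAD_ONNX_MODEL parameters and HYBRID VECTOR INDEX syntax should be checked against the Oracle 23ai documentation:

```sql
-- Expose the directory holding the extracted ONNX model (run as a privileged user)
CREATE OR REPLACE DIRECTORY onnx_dir AS '/opt/oracle/onnx';

-- Load the embedding model into the database under a model name
BEGIN
  DBMS_VECTOR.LOAD_ONNX_MODEL(
    directory  => 'ONNX_DIR',
    file_name  => 'all_MiniLM_L12_v2.onnx',
    model_name => 'ALL_MINILM_L12_V2');
END;
/

-- Illustrative documents table plus a hybrid index combining vector and keyword search
CREATE TABLE documents (id NUMBER PRIMARY KEY, content VARCHAR2(4000));

CREATE HYBRID VECTOR INDEX documents_hybrid_idx
  ON documents (content)
  PARAMETERS ('MODEL ALL_MINILM_L12_V2');
```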
by Dariusz Koryto
**Automated FTP File Migration with Smart Cleanup and Email Notifications**

**Overview**

This n8n workflow automates the secure transfer of files between FTP servers on a scheduled basis, providing enterprise-grade reliability with comprehensive error handling and dual notification systems (email + webhook). Perfect for data migrations, automated backups, and multi-server file synchronization.

**What it does**

This workflow automatically discovers, filters, transfers, and safely removes files between FTP servers while maintaining complete audit trails and sending detailed notifications about every operation.

Key Features:
- Scheduled Execution: Configurable timing (daily, hourly, weekly, or custom cron expressions)
- Smart File Filtering: Regex-based filtering by file type, size, date, or name patterns
- Safe Transfer Protocol: Downloads → Uploads → Validates → Cleans up source
- Dual Notifications: Email alerts + webhook integration for both success and errors
- Comprehensive Logging: Detailed audit trail of all operations with timestamps
- Error Recovery: Automatic retry logic with exponential backoff for network issues
- Production Ready: Built-in safety measures and extensive documentation

**Use Cases**

🏢 Enterprise & IT Operations
- Data Center Migration: Moving files between different hosting environments
- Backup Automation: Scheduled transfers to secondary storage locations
- Multi-Site Synchronization: Keeping files in sync across geographic locations
- Legacy System Integration: Bridging old and new systems through automated transfers

📊 Business Operations
- Document Management: Automated transfer of contracts, reports, and business documents
- Media Asset Distribution: Moving images, videos, and marketing materials between systems
- Data Pipeline: Part of larger ETL processes for business intelligence
- Compliance Archiving: Moving files to compliance-approved storage systems

🔧 Development & DevOps
- Build Artifact Distribution: Deploying compiled applications across environments
- Configuration Management: Synchronizing config files between servers
- Log File Aggregation: Collecting logs from multiple servers for analysis
- Automated Deployment: Moving release packages to production servers

**How it works**

📋 Workflow Steps
1. Schedule Trigger → Initiates the workflow at specified intervals
2. File Discovery → Lists files from the source FTP server with optional recursion
3. Smart Filtering → Applies customizable filters (type, size, date, name patterns); see the filter sketch below
4. Secure Download → Retrieves files to temporary n8n storage with retry logic
5. Safe Upload → Transfers files to the destination with directory auto-creation
6. Transfer Validation → Verifies successful upload before proceeding
7. Source Cleanup → Removes original files only after confirmed success
8. Comprehensive Logging → Records all operations with detailed metadata
9. Dual Notifications → Sends email + webhook notifications for success/failure

🔄 Error Handling Flow
- Network Issues → Automatic retry with exponential backoff (3 attempts)
- Authentication Problems → Immediate email alert with troubleshooting steps
- Permission Errors → Detailed logging with recommended actions
- Disk Space Issues → Safe failure with source file preservation
- File Corruption → Integrity validation with rollback capability
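As an illustration of the Smart Filtering step above, a single filter expression in the same style as the patterns under Customization Options could combine an extension check with a size cap. The $json.name and $json.size fields are assumed to come from the FTP list operation, as in the size-based example later in this description:

```
{{ /\.(csv|json|xml|sql)$/i.test($json.name) && $json.size < 104857600 }}
```

Here 104857600 bytes is roughly 100 MB; adjust the extension list and threshold to match your own file set.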
**Setup Requirements**

🔑 Credentials Needed

Source FTP Server:
- Host, port, username, password
- Read permissions required
- SFTP recommended for security

Destination FTP Server:
- Host, port, username, password
- Write permissions required
- Directory creation permissions

SMTP Email Server:
- SMTP host and port (e.g., smtp.gmail.com:587)
- Authentication credentials
- Used for success and error notifications

Monitoring API (Optional):
- Webhook URL for system integration
- Authentication tokens if required

⚙️ Configuration Steps
1. Import Workflow → Load the JSON template into your n8n instance
2. Configure Credentials → Set up all required FTP and SMTP connections
3. Customize Schedule → Adjust the cron expression for your timing needs
4. Set File Filters → Configure regex patterns for your file types
5. Configure Paths → Set source and destination directory structures
6. Test Thoroughly → Run with test files before production deployment
7. Enable Monitoring → Activate email notifications and logging

**Customization Options**

📅 Scheduling Examples

    0 2 * * *       # Daily at 2 AM
    0 */6 * * *     # Every 6 hours
    0 8 * * 1-5     # Weekdays at 8 AM
    0 0 1 * *       # Monthly on the 1st
    */15 * * * *    # Every 15 minutes

🔍 File Filter Patterns

    Documents:   \\.(pdf|doc|docx|xls|xlsx)$
    Images:      \\.(jpg|jpeg|png|gif|svg)$
    Data Files:  \\.(csv|json|xml|sql)$
    Archives:    \\.(zip|rar|7z|tar|gz)$

    Size-based (add as a condition):  {{ $json.size > 1048576 }}   # Files > 1 MB
    Date-based (recent files only):   {{ $json.date > $now.minus({days: 7}) }}

📁 Directory Organization

    // Date-based structure
    /files/{{ $now.format('YYYY/MM/DD') }}/

    // Type-based structure
    /files/{{ $json.name.split('.').pop() }}/

    // User-based structure
    /users/{{ $json.owner || 'system' }}/

    // Hybrid approach
    /{{ $now.format('YYYY-MM') }}/{{ $json.type }}/

**Template Features**

🛡️ Safety & Security
- Transfer Validation: Confirms successful upload before source deletion
- Error Preservation: Source files remain intact on any failure
- Audit Trail: Complete logging of all operations with timestamps
- Credential Security: Secure storage using n8n's credential system
- SFTP Support: Encrypted transfers when available
- Retry Logic: Automatic recovery from transient network issues

📧 Notification System

Success notifications:
- Confirmation email with transfer details
- File metadata (name, size, transfer time)
- Next scheduled execution information
- Webhook payload for monitoring systems

Error notifications:
- Immediate email alerts with error details
- Troubleshooting steps and recommendations
- Failed file information for manual intervention
- Webhook integration for incident management

📊 Monitoring & Analytics
- Execution Logs: Detailed history of all workflow runs
- Performance Metrics: Transfer speeds and success rates
- Error Tracking: Categorized failure analysis
- Audit Reports: Compliance-ready activity logs

**Production Considerations**

🚀 Performance Optimization
- File Size Limits: Configure timeouts based on expected file sizes
- Batch Processing: Handle multiple files efficiently
- Network Optimization: Schedule transfers during off-peak hours
- Resource Monitoring: Track n8n server CPU, memory, and disk usage

🔧 Maintenance
- Regular Testing: Validate credentials and connectivity
- Log Review: Monitor for patterns in errors or performance
- Credential Rotation: Update passwords and keys regularly
- Documentation Updates: Keep configuration notes current

**Testing Protocol**

🧪 Pre-Production Testing
- Phase 1: Test with 1-2 small files (< 1 MB)
- Phase 2: Test error scenarios (invalid credentials, network issues)
- Phase 3: Test with representative file sizes and volumes
- Phase 4: Validate email notifications and logging
- Phase 5: Full production deployment with monitoring

⚠️ Important Testing Notes
- Disable source deletion during initial testing
- Use test directories to avoid production data impact
- Monitor execution logs carefully during testing
- Validate email delivery to ensure notifications work
- Test rollback procedures before production use

**Support & Documentation**

This template includes:
- 8 comprehensive sticky notes with visual documentation
- Detailed node comments explaining every configuration option
- Error handling guide with common troubleshooting steps
- Security best practices for production deployment
- Performance tuning recommendations for different scenarios

**Technical Specifications**

- n8n Version: 1.0.0+
- Node Count: 17 functional nodes + 8 documentation sticky notes
- Execution Time: 2-10 minutes (depending on file sizes and network speed)
- Memory Usage: 50-200 MB (scales with file sizes)
- Supported Protocols: FTP, SFTP (recommended)
- File Size Limit: Up to 150 MB per file (configurable)
- Concurrent Files: Processes files sequentially for stability

**Who is this for?**

🎯 Primary Users
- System Administrators managing file transfers between servers
- DevOps Engineers automating deployment and backup processes
- IT Operations Teams handling data migration projects
- Business Process Owners requiring automated file management

💼 Industries & Use Cases
- Healthcare: Patient data archiving and compliance reporting
- Financial Services: Secure document transfer and regulatory reporting
- Manufacturing: CAD file distribution and inventory data sync
- E-commerce: Product image and catalog management
- Media: Asset distribution and content delivery automation
by Matheus Pedrosa
**Workflow Overview**

Keeping API documentation updated is a challenge, especially when your endpoints are powerful n8n webhooks. This project solves that problem by turning your n8n instance into a self-documenting API platform.

This workflow acts as a central engine that scans your entire n8n instance for designated webhooks and automatically generates a single, beautiful, and interactive HTML documentation page. By simply adding a standard Set node with specific metadata to any of your webhook workflows, you can make it instantly appear in your live documentation portal, complete with code examples and response schemas.

The final output is a single, callable URL that serves a professional, dark-themed, and easy-to-navigate documentation page for all your automated webhook endpoints.

**Key Features**

- Automatic Discovery: Scans all active workflows on your instance to find endpoints designated for documentation.
- Simple Configuration via a Set Node: No custom nodes needed! Just add a Set node named API_DOCS to any workflow you want to document and fill in a simple JSON structure.
- Rich HTML Output: Dynamically generates a single, responsive, dark-mode HTML page that looks professional right out of the box.
- Interactive UI: Uses Bootstrap accordions, allowing users to expand and collapse each endpoint to keep the view clean and organized.
- Developer-Friendly: Automatically generates a ready-to-use cURL command for each endpoint, making testing and integration incredibly fast.
- Zero Dependencies: The entire solution runs within n8n. No need to set up or maintain external documentation tools like Swagger UI or Redoc.

**Setup Instructions**

This solution has two parts: configuring the workflows you want to document, and setting up this generator workflow.

Part 1: In each workflow you want to document
1. Next to your Webhook trigger node, add a Set node.
2. Change its name to API_DOCS.
3. Create a single variable named jsonOutput (or docsData) and set its type to JSON.
4. Paste the following JSON structure into the value field and customize it with your endpoint's details:

        {
          "expose": true,
          "webhookPath": "PASTE_YOUR_WEBHOOK_PATH_HERE",
          "method": "POST",
          "summary": "Your Endpoint Summary",
          "description": "A clear description of what this webhook does.",
          "tags": ["Sales", "Automation"],
          "requestBody": { "exampleKey": "exampleValue" },
          "successCode": 200,
          "successResponse": { "status": "success", "message": "Webhook processed correctly." },
          "errorCode": 400,
          "errorResponse": { "status": "error", "message": "Invalid input." }
        }

Part 2: In this generator workflow
1. n8n API Node: Configure the GetWorkflows node with your n8n API credentials. It needs permission to read workflows.
2. Configs Node: Customize the main settings for your documentation page, like the title (name_doc), version, and a short description.
3. Webhook Trigger: The Webhook node at the start (default path is /api-doc) provides the final URL for your documentation page. Copy this URL and open it in your browser.

**Required Credentials**

- n8n API credentials: to allow this workflow to read your other workflows.
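As a sketch of what the Developer-Friendly feature produces, the generated cURL for the example metadata above would look roughly like the following; the base URL and webhook path are placeholders for your own instance:

```bash
curl -X POST "https://your-n8n-instance.com/webhook/PASTE_YOUR_WEBHOOK_PATH_HERE" \
  -H "Content-Type: application/json" \
  -d '{"exampleKey": "exampleValue"}'
```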
by Jimleuk
Tired of being let down by the Google Drive Trigger? Rather not exhaust system resources by polling every minute? Then this workflow is for you!

Google Drive is a great storage option for automation due to its relative simplicity, cheap costs and readily-available integrations. Using Google Drive as a trigger is the next logical step, but many n8n users quickly realise the built-in Google Drive trigger just isn't that reliable. Disaster! Typically, the workaround is to poll the Google Drive search API at short intervals, but the trade-off is wasted server resources during inactivity. The ideal solution is, of course, push notifications, but they seem quite complicated to implement... or are they?

This template demonstrates that setting up Google push notifications for Google Drive file changes actually isn't that hard! Using this approach, Google sends a POST request every time something in a drive changes, which solves both reliability of events and efficiency of resources.

**How it works**

- We begin by registering a notification channel (webhook) with the Google Drive API. The 2 key pieces of information are (a) the webhook URL which notifications will be pushed to and (b), because we want to scope to a single location, the driveId. Good to know: you can register as many channels as you like using HTTP calls, but you have to manage them yourself; there's no Google dashboard for notification channels!
- The registration data, along with the startPageToken, is saved in workflowStaticData, a convenient persistence mechanism we can use to hold small bits of data between executions.
- Now, whenever files or folders are created or updated in our target Google Drive, Google will send push notifications to the webhook trigger in this template.
- Once triggered, we still need to call Google Drive's Changes.list to get the actual change events which were detected. We can do this with the HTTP Request node.
- The Changes API will also return the nextPageToken, a marker establishing where to pick up the next batch of changes. It's important that we use this token the next time we request from the Changes API, so we update the workflowStaticData with this new value.
- Unfortunately, the changes.list API isn't able to filter change events by folder or action, so be sure to add your own filtering steps to get the files you want.
- Finally, with the valid change events, optionally fetch the file metadata, which gives you more attributes to work with. For example, you may want to know if the change event was triggered by n8n itself, in which case you'll want to check the "ModifiedByMe" value.

**How to use**

- Start with Step 1: fill in the "Set Variables" node and click the manual execute trigger. This creates a single Google Drive notification channel for a specific drive.
- Activate the workflow to start receiving events from Google Drive.
- To test, perform an action on the target drive, e.g. create a file. Watch the webhook calls come pouring in!
- Once you have the desired events, finish off this template to do something with the changed files.

**Requirements**

- Google Drive credentials. Note this workflow also works on Shared Drives.

**Optimising This Workflow**

With bulk actions, you'll notice that Google gradually starts to send increasingly large numbers of push notifications, sometimes numbering in the hundreds! For cloud plan users, this could easily exhaust execution limits if lots of changes are made in the same drive daily.
One approach is to implement a throttling mechanism externally to batch events before sending them to n8n. This throttling mechanism is outside the scope of this template but quite easy to achieve with something like Supabase Edge Functions.
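For reference, a rough sketch of the three Drive API calls the template wraps in HTTP Request nodes: fetching a startPageToken, registering the notification channel against changes.watch, and reading changes.list with the saved token. The channel id, drive id and webhook address are placeholders, and the exact query parameters are worth checking against the Drive v3 reference:

```bash
# 1. Get a starting page token for the target (shared) drive
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/changes/startPageToken?driveId=<DRIVE_ID>&supportsAllDrives=true"

# 2. Register the notification channel; push notifications will be POSTed to the webhook address
curl -s -X POST -H "Authorization: Bearer $ACCESS_TOKEN" -H "Content-Type: application/json" \
  "https://www.googleapis.com/drive/v3/changes/watch?pageToken=<START_PAGE_TOKEN>&driveId=<DRIVE_ID>&includeItemsFromAllDrives=true&supportsAllDrives=true" \
  -d '{"id": "<UNIQUE_CHANNEL_ID>", "type": "web_hook", "address": "https://your-n8n-instance.com/webhook/drive-changes"}'

# 3. When the webhook fires, list the actual changes and save the returned nextPageToken for the next call
curl -s -H "Authorization: Bearer $ACCESS_TOKEN" \
  "https://www.googleapis.com/drive/v3/changes?pageToken=<SAVED_PAGE_TOKEN>&driveId=<DRIVE_ID>&includeItemsFromAllDrives=true&supportsAllDrives=true"
```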