by Alex Kim
**Overview**

The n8n Workflow Cloner is an automation tool designed to copy, sync, and migrate workflows across different n8n instances or projects. Whether you're managing multiple environments (development, staging, production) or organizing workflows within a team, this workflow automates the transfer process, ensuring seamless deployment with minimal manual effort. By automatically detecting and copying only the missing workflows, it helps maintain consistency, improve collaboration, and streamline workflow migration between projects or instances.

**How to Use**

1️⃣ **Set Up API Credentials** – Configure API credentials for both the source and destination n8n instances. Ensure the credentials have read and write access to manage workflows.
2️⃣ **Select Source & Destination** – Update the "GET - Workflows" node to define the source instance, and set the "CREATE - Workflow" node to specify the destination instance.
3️⃣ **Run the Workflow** – Click "Test Workflow" to start the transfer. The system fetches all workflows from the source, compares them with the destination, and copies any missing workflows (see the sketch after this description).
4️⃣ **Change the Destination Project (Optional)** – By default, workflows are moved to the "KBB Workflows" project. Modify the "Filter" node to transfer workflows to a different project.
5️⃣ **Monitor & Verify** – The Loop Over Items node ensures batch processing for multiple workflows, and log outputs provide details on transferred workflows and their statuses.

**Key Benefits**

✅ **Automate Workflow Transfers** – No more manual exports and imports.
✅ **Sync Workflows Across Environments** – Keep workflows up to date in dev, staging, and production.
✅ **Effortless Team Collaboration** – Share workflows across projects seamlessly.
✅ **Backup & Migration Ready** – Easily move workflows between n8n instances.

**Use Cases**

🔹 **CI/CD for Workflows** – Deploy workflows between development and production environments.
🔹 **Team Workflow Sharing** – Share workflows across multiple n8n projects.
🔹 **Workflow Backup Solution** – Store copies of workflows in a dedicated backup project.

**Tags**

🚀 Workflow Migration 🚀 n8n Automation 🚀 Sync Workflows 🚀 Backup & Deployment
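For readers who want to see what the fetch-compare-create loop amounts to, here is a minimal sketch in plain Python. It assumes the standard n8n public API (GET/POST `/api/v1/workflows`, authenticated with the `X-N8N-API-KEY` header); the URLs and key names are placeholders, and the workflow itself does this with n8n nodes rather than a script.

```python
# Minimal sketch of the fetch-compare-create logic, assuming the standard n8n public API.
import requests

SOURCE = {"url": "https://source.example.com", "key": "SOURCE_API_KEY"}  # placeholders
DEST = {"url": "https://dest.example.com", "key": "DEST_API_KEY"}

def api_get(instance, path):
    """GET a path on an n8n instance using its API key."""
    resp = requests.get(f"{instance['url']}{path}",
                        headers={"X-N8N-API-KEY": instance["key"]})
    resp.raise_for_status()
    return resp.json()

source_list = api_get(SOURCE, "/api/v1/workflows")["data"]
dest_names = {wf["name"] for wf in api_get(DEST, "/api/v1/workflows")["data"]}

for summary in source_list:
    if summary["name"] in dest_names:
        continue  # already present on the destination, skip it
    full = api_get(SOURCE, f"/api/v1/workflows/{summary['id']}")  # full definition
    payload = {
        "name": full["name"],
        "nodes": full["nodes"],
        "connections": full["connections"],
        "settings": full.get("settings", {}),
    }
    requests.post(
        f"{DEST['url']}/api/v1/workflows",
        headers={"X-N8N-API-KEY": DEST["key"]},
        json=payload,
    ).raise_for_status()
    print(f"Copied: {full['name']}")
```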
by Nskha
**Overview**

The [n8n] YouTube Channel Advanced RSS Feeds Generator workflow generates various RSS feed formats for YouTube channels without requiring API access or administrative permissions. It utilizes third-party services to extract the data, making it user-friendly and accessible.

**Key Use Cases and Benefits**

- **Content Aggregation**: Easily gather and syndicate content from any public YouTube channel.
- **No API Key Required**: Avoid the complexities and limitations of Google's API.
- **Multiple Formats**: Supports ATOM, JSON, MRSS, Plaintext, Sfeed, and direct YouTube XML feeds.
- **Flexibility**: Input can be a YouTube channel or video URL, ID, or username.

**Services/APIs Utilized**

This workflow integrates with:

- **commentpicker.com**: For retrieving YouTube channel IDs.
- **rss-bridge.org**: To generate the various RSS formats.

**Configuration Instructions**

1. Start the Workflow: Activate the workflow in your n8n instance.
2. Input Details: Enter the YouTube channel or video URL, ID, or username via the provided form trigger.
3. Run the Workflow: Execute the workflow to receive links to 13 different RSS feeds, including community and video content feeds.

**Additional Notes**

- **Customization**: You can modify the RSS feed formats or integrate additional services as needed.

**Support and Contributions**

For support, questions, or contributions, please visit the n8n community forum or the GitHub repository. We welcome contributions from the community!
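To make the "no API key" idea concrete, here is a rough illustration of how feed URLs can be built once a channel ID has been resolved (the workflow uses commentpicker.com for that step). The YouTube XML feed URL is standard; the rss-bridge query parameters below are an assumption and may differ from the links the workflow actually returns.

```python
# Illustration only: building feed URLs from a channel ID without the YouTube API.
from urllib.parse import urlencode

channel_id = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder channel ID

# Direct YouTube XML feed (no API key needed)
youtube_xml = f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

# rss-bridge feeds in several formats (parameter names assumed, not verified)
formats = ["Atom", "Json", "Mrss", "Plaintext", "Sfeed"]
rss_bridge_feeds = [
    "https://rss-bridge.org/bridge01/?" + urlencode({
        "action": "display",
        "bridge": "Youtube",
        "context": "By channel id",
        "c": channel_id,
        "format": fmt,
    })
    for fmt in formats
]

print(youtube_xml)
print(*rss_bridge_feeds, sep="\n")
```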
by Omar Akoudad
This n8n workflow helps eCommerce businesses (especially in the Cash on Delivery space) send real-time order events to the Meta (Facebook) Conversions API, ensuring accurate event tracking and better ad attribution.

**Features**

- **Webhook Listener**: Accepts incoming order data (name, phone, IP, user agent, etc.) via HTTP POST/GET.
- **Data Normalization**: Cleans and formats first_name, last_name, phone, and event_time according to Facebook's strict specs.
- **Data Hashing**: Securely hashes sensitive user data (SHA-256), as required by Meta.
- **Full Custom Data Support**: Pass order value, currency, and more.

**Ideal For**

- Shopify, WooCommerce, and custom stores (Laravel, Node, etc.)
- Businesses using Meta Ads that need high-quality server-side tracking
- Teams without full dev resources that use n8n for automation

**How It Works**

1. Receive Order from your store via Webhook or API.
2. Format & Normalize fields to match Facebook's expected structure.
3. Hash Sensitive Fields using SHA-256 (name, phone, email).
4. Send to Facebook via the Conversions API endpoint.

A minimal sketch of the normalize, hash, and send steps follows below.

**Requirements**

- A Meta Business Manager account with Conversions API access
- Your Access Token and Pixel ID set up in n8n credentials
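The sketch below shows the same normalize, hash, and send steps in plain Python, following Meta's documented user_data field names (fn, ln, ph hashed with SHA-256; IP and user agent sent unhashed). The Graph API version, pixel ID, token, and the example order payload are placeholders, and the workflow performs these steps with n8n nodes rather than a script.

```python
# Minimal sketch of normalize -> hash -> send for the Meta Conversions API.
import hashlib
import re
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholder
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def norm_and_hash(value: str) -> str:
    """Lowercase, trim, and SHA-256 hash a user-data field, per Meta's spec."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def norm_phone(phone: str) -> str:
    """Keep digits only (including country code) before hashing."""
    return hashlib.sha256(re.sub(r"\D", "", phone).encode("utf-8")).hexdigest()

order = {  # example payload as it might arrive on the webhook
    "first_name": " Jane ", "last_name": "Doe", "phone": "+212 600-000000",
    "ip": "203.0.113.7", "user_agent": "Mozilla/5.0", "value": 349.0, "currency": "MAD",
}

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "user_data": {
        "fn": norm_and_hash(order["first_name"]),
        "ln": norm_and_hash(order["last_name"]),
        "ph": norm_phone(order["phone"]),
        # IP address and user agent are sent as-is (not hashed)
        "client_ip_address": order["ip"],
        "client_user_agent": order["user_agent"],
    },
    "custom_data": {"currency": order["currency"], "value": order["value"]},
}

resp = requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",  # version may differ
    json={"data": [event]},
    params={"access_token": ACCESS_TOKEN},
)
print(resp.status_code, resp.json())
```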
by bswlife
**Disclaimer**

The Execute Command node is only supported on self-hosted (local) instances of n8n.

**Introduction**

Kokoro TTS is a compact yet powerful text-to-speech model, currently available on Hugging Face and GitHub. Despite its modest size (trained on less than 100 hours of audio), it delivers impressive results, consistently topping the TTS leaderboard on Hugging Face. Unlike larger systems, Kokoro TTS can run locally, even on devices without GPUs, making it accessible to a wide range of users.

**Who will benefit from this integration?**

It will be useful for video bloggers and TikTokers, and it also makes a free voice chatbot possible. Most TTS models are currently paid, but this integration allows fully free voice generation. The possibilities are limited only by your imagination.

**Note**

Unfortunately, we can't interact with the Kokoro API via a browser URL (GET/POST), but we can run a Python script through n8n and pass any variables to it. The tutorial uses the D drive, but you can adapt the paths as needed, including to the C drive.

**Step 1**

You need to have Python installed (link). Also, download and extract the portable version of Kokoro from GitHub. Create a file named voicegen.py with the following code in the Kokoro folder (C:\KOKORO). As you can see, the output path is D:\output.mp3.

```python
import sys
import shutil
from gradio_client import Client

# Set UTF-8 encoding for stdout
sys.stdout.reconfigure(encoding='utf-8')

# Get arguments from the command line
text = sys.argv[1]          # First argument: input text
voice = sys.argv[2]         # Second argument: voice
speed = float(sys.argv[3])  # Third argument: speed (converted to float)

print(f"Received text: {text}")
print(f"Voice: {voice}")
print(f"Speed: {speed}")

# Connect to the local Gradio server
client = Client("http://localhost:7860/")

# Generate speech using the API
result = client.predict(
    text=text,
    voice=voice,
    speed=speed,
    api_name="/generate_speech"
)

# Define the output path
output_path = r"D:\output.mp3"

# Move the generated file
shutil.move(result[1], output_path)

# Print the output path
print(output_path)
```

**Step 2**

Go to n8n and create the following workflow.

**Step 3**

Edit Fields module:

```json
{
  "voice": "af_sarah",
  "text": "Hello world!"
}
```

**Step 4**

We'll need an Execute Command module with the command:

```
python C:\KOKORO\voicegen.py "{{ $json.text }}" "{{ $json.voice }}" 1
```

**Step 5**

The script is already working; to listen to the result, connect a Read Binary File node pointing to the generated MP3 file (D:/output.mp3).

**Step 6**

Click "Test Workflow" and enjoy the result. There are more voices and accents than in ChatGPT, plus it's free.

P.S. If you want, there is a detailed tutorial on my blog.
by CustomJS
This n8n template shows how to extract selected pages from a PDF with the PDF Toolkit by www.customjs.space (@custom-js/n8n-nodes-pdf-toolkit).

**Notice**

Community nodes can only be installed on self-hosted instances of n8n.

**What this workflow does**

- **Downloads** each PDF using an HTTP Request.
- **Extracts** the required pages from the PDF file.

**Requirements**

- **Self-hosted** n8n instance
- **CustomJS API key** for processing PDF files
- **PDF files** from which pages will be extracted

**Workflow Steps**

1. Manual Trigger: Runs with user interaction.
2. Download PDF File: Pass URLs of the PDF files to process.
3. Extract Pages from PDF: Extract the selected pages from the downloaded PDF.

**Usage**

Get an API key from CustomJS:

1. Sign up to the CustomJS platform.
2. Navigate to your profile page.
3. Press the "Show" button to get your API key.

Set credentials for the CustomJS API in n8n by copying and pasting the API key generated from CustomJS.

Design the workflow:

1. A Manual Trigger for starting the workflow.
2. HTTP Request nodes for downloading the PDF files.
3. Extract Pages from PDF.

You can replace the logic for triggering and returning results. For example, you can trigger this workflow by calling a webhook and return the result as the webhook response; simply replace the Manual Trigger and Write to Disk nodes.

**Perfect for**

- Taking note of specific pages from PDF files.
- Splitting a PDF file into multiple parts.
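For context, this is what "extract selected pages" means when done locally with the pypdf library. It is not the CustomJS node's API, just a plain-Python illustration of the same operation; the file names and page numbers are placeholders.

```python
# Illustration of page extraction with pypdf (not the CustomJS PDF Toolkit).
from pypdf import PdfReader, PdfWriter

reader = PdfReader("input.pdf")
writer = PdfWriter()

# Keep only pages 1 and 3 (0-based indices 0 and 2)
for index in (0, 2):
    writer.add_page(reader.pages[index])

with open("extracted.pdf", "wb") as f:
    writer.write(f)
```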
by jason
This workflow looks for a Close Date value using a regular expression in the IF node. If it finds a valid value, it passes that value on; if it does not, it generates a value three weeks from the present time. The final result shows up in the NoOp node. You can test this execution by enabling and disabling the Set node before you run it. The regex-or-fallback logic is sketched below.
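Here is a minimal sketch of that logic in plain Python, assuming the Close Date is expected in YYYY-MM-DD form (the actual pattern used in the IF node may differ).

```python
# Sketch of the regex-or-fallback logic: use the Close Date if valid, else now + 3 weeks.
import re
from datetime import datetime, timedelta

def resolve_close_date(raw_value: str) -> str:
    """Return the Close Date if it matches the expected pattern, else today + 3 weeks."""
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", raw_value or ""):
        return raw_value
    return (datetime.now() + timedelta(weeks=3)).strftime("%Y-%m-%d")

print(resolve_close_date("2024-06-30"))  # passes through
print(resolve_close_date("not a date"))  # falls back to today + 3 weeks
```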
by Harshil Agrawal
This workflow demonstrates how to use the Netlify Trigger node to capture form submissions and add them to Airtable. You can reuse the workflow to add the data to another, similar database by replacing the Airtable node with the corresponding node.

- Netlify Trigger node: This node triggers the workflow when a new form is submitted. Select your site from the Site Name/ID dropdown list and the form from the Form ID dropdown list.
- Set node: This node extracts the required data from the Netlify Trigger node. In this example, we only want to add the Name, Email, and Role of the user.
- Airtable node: This node appends the data to Airtable (a sketch of the underlying API call is shown below). If you want to send the data to Google Sheets or a database instead, replace this node with the corresponding node.
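For reference, this is roughly what appending one record looks like against Airtable's REST API. The base ID, table name, field names, and token are placeholders; match them to your own base, and note the workflow itself uses the Airtable node rather than raw HTTP.

```python
# Sketch of appending one record via Airtable's REST API.
import requests

AIRTABLE_TOKEN = "YOUR_AIRTABLE_TOKEN"  # placeholder
BASE_ID = "appXXXXXXXXXXXXXX"           # placeholder
TABLE_NAME = "Signups"                  # placeholder

submission = {"Name": "Jane Doe", "Email": "jane@example.com", "Role": "Designer"}

resp = requests.post(
    f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}",
    headers={"Authorization": f"Bearer {AIRTABLE_TOKEN}"},
    json={"fields": submission},
)
resp.raise_for_status()
print(resp.json()["id"])  # ID of the newly created record
```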
by Harshil Agrawal
This workflow allows you to trigger a build in Travis CI when code changes are pushed to a GitHub repo or a pull request is opened.

- GitHub Trigger node: This node triggers the workflow when changes are pushed or when a pull request is created, updated, or deleted.
- IF node: This node checks the action type. We want to trigger a build when code changes are pushed or when a pull request is opened; we don't want to build the project when a PR is closed or updated. (A sketch of this check is shown below.)
- TravisCI node: This node triggers the build in Travis CI. If you're using CircleCI in your pipeline, replace this node with the CircleCI node.
- NoOp node: Adding this node is optional.
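The sketch below shows the kind of decision the IF node makes, based on the shape of GitHub's webhook payloads: push events carry no "action" field, while pull_request events carry one ("opened", "closed", "synchronize", and so on). The exact expression configured in the workflow's IF node may differ.

```python
# Sketch of the build/no-build decision over a GitHub webhook payload.
def should_trigger_build(event: dict) -> bool:
    """Build on pushes and on newly opened pull requests; skip other PR actions."""
    action = event.get("action")
    if action is None:
        return True            # a push event: always build
    return action == "opened"  # build only when a PR is first opened

print(should_trigger_build({"ref": "refs/heads/main"}))          # True  (push)
print(should_trigger_build({"action": "opened", "number": 42}))  # True  (PR opened)
print(should_trigger_build({"action": "closed", "number": 42}))  # False (PR closed)
```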
by Harshil Agrawal
This workflow allows you to compress binary files to the zip format.

- HTTP Request node: The workflow uses the HTTP Request node to fetch files from the internet. If you want to fetch files from your local machine, replace it with the Read Binary File or Read Binary Files node.
- Compression node: The Compression node compresses the files into a zip archive. If you want to compress the files to gzip instead, select the gzip format.

Based on your use case, you may want to write the files to disk or upload them to Google Drive or Box. If you want to write the compressed file to disk, replace the Dropbox node with the Write Binary File node; if you want to upload the file to a different service, use the respective node. A minimal zip/gzip example outside n8n is sketched below.
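For reference, here is the same zip-versus-gzip choice in plain Python using only the standard library. The file names are placeholders; this is not the Compression node's code, just an illustration of the two output formats it offers.

```python
# zip bundles multiple files into one archive; gzip compresses a single file.
import gzip
import shutil
import zipfile

# Bundle several files into a single zip archive
with zipfile.ZipFile("archive.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for name in ("report.pdf", "logo.png"):
        zf.write(name)

# gzip, by contrast, compresses a single file (no bundling)
with open("report.pdf", "rb") as src, gzip.open("report.pdf.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)
```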
by chaufnet
A webhook that reports phishing websites to Steam and to Cloudflare (if the domain is on Cloudflare) by email via Mailgun.

- Set the credentials for the Webhook and Mailgun nodes.
- Set the "from" email address for Mailgun.

This assumes the workflow runs in n8n's Docker image, where bind-tools is not available by default but can be installed. A sketch of the kind of nameserver check bind-tools enables is shown below.
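The bind-tools requirement suggests the workflow shells out to dig (or host) to decide whether a domain is served by Cloudflare. The snippet below is an assumed reconstruction of that kind of check, not the workflow's actual command: domains behind Cloudflare normally list *.ns.cloudflare.com nameservers.

```python
# Assumed reconstruction: check a domain's NS records for Cloudflare nameservers.
import subprocess

def is_on_cloudflare(domain: str) -> bool:
    """Return True if the domain's NS records point at Cloudflare."""
    result = subprocess.run(
        ["dig", "+short", "NS", domain],
        capture_output=True, text=True, check=False,
    )
    return "ns.cloudflare.com" in result.stdout.lower()

print(is_on_cloudflare("example.com"))
```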
by Harshil Agrawal
This workflow allows you to store the output of a phantom in Airtable. It uses the LinkedIn Profile Scraper phantom; configure and launch this phantom from your Phantombuster account before executing the workflow. The workflow uses the following nodes:

- Phantombuster node: The Phantombuster node gets the output of the LinkedIn Profile Scraper phantom that ran earlier. You can select a different phantom from the Agent dropdown list, but make sure to configure the workflow accordingly.
- Set node: Using the Set node, we set the data for the workflow. The data set in this node is used by the following nodes in the workflow. Modify this node based on your use case.
- Airtable node: The Airtable node appends the data to an Airtable base. Based on your use case, you can replace this node with any other node: instead of storing the data in Airtable, you can store it in a database or a Google Sheet, or send it as an email using the Send Email node, Gmail node, or Microsoft Outlook node.
by Marcel Claus-Ahrens
**Instructions**

This automation overlays a background image with another image, making it easy to add watermarks or logos. You can use it to watermark your images by overlaying them with a transparent version of your logo. If you'd like to place your logo in a specific corner, feel free to adjust the position of the overlay image in the Code node.

**How it Works**

1. Both images are downloaded, so we can process binary files (though you can modify the source).
2. We extract metadata, focusing on the dimensions of each image.
3. The position of the overlay image is calculated (default: dead center of the background image).
4. The two images are composited together. (A sketch of this compositing step is shown below.)

**Limitations and Optimisation Opportunities**

- The overlay image must be the same size or smaller than the background image for proper alignment.
- The overlay image does not automatically scale to match the proportions of the background image.

Enjoy the workflow! ❤️

let the work flow — Workflow Automation & Development
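As a quick illustration of the centered-overlay compositing (using Pillow, outside n8n, rather than the workflow's own Code node), the sketch below pastes a transparent logo at the dead center of a background image. File names are placeholders; change the x/y offsets to pin the logo to a corner instead, as suggested above.

```python
# Illustration of centered overlay compositing with Pillow.
from PIL import Image

background = Image.open("background.jpg").convert("RGBA")
overlay = Image.open("logo.png").convert("RGBA")  # transparent logo/watermark

# Default position: dead center of the background image
x = (background.width - overlay.width) // 2
y = (background.height - overlay.height) // 2

# Composite the overlay onto the background, using its alpha channel as the mask
background.paste(overlay, (x, y), overlay)
background.convert("RGB").save("watermarked.jpg")
```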