by Nazmy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

OAuth Token Generator and Validator

This n8n template helps you generate, validate, and store tokens for your customers securely using:
- **n8n** as your backend automation engine
- **Airtable** as your lightweight client and token store

🚀 What It Does
- Accepts client_id and client_secret via POST webhook.
- Validates client credentials against Airtable.
- Generates a long token on success.
- Stores the generated token in Airtable with metadata.
- Responds with a JSON containing the token, expiry, and type.
- Returns clear error messages if validation fails.

How It Works
1. Webhook node receives client_id and client_secret.
2. Validator (Code node) checks that the body contains only client_id and client_secret, and rejects missing or extra fields.
3. Airtable search: looks up the client_id and rejects the request if it is not found.
4. Secret validation (If node): compares the provided client_secret with the stored value and rejects it if incorrect.
5. Token generation (Code node): generates a 128-character secure token (see the sketch after this description).
6. Airtable create: stores the token, client ID, creation date, and type.
7. Webhook response: returns JSON { access_token, expires_in, token_type } on success, or an appropriate JSON error message on failure.

Related Workflow
You can also use it with the published Bearer Token Validation workflow:
👉 Validate API Requests with Bearer Token Authentication and Airtable, to securely validate tokens you generate with this workflow across your protected endpoints.

Why Use This
- Provides OAuth-like flows without a complex backend.
- Uses n8n + Airtable for client management and token storage.
- Clean, modular, and ready for your SaaS or internal API automations.
- Extendable for token expiry, refresh, and rotation handling.

Enjoy building secure token-based APIs using n8n + Airtable! 🚀

Built by: Nazmy
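The template's own Code nodes are not reproduced here, but as a rough, standalone sketch of the validation and token-generation steps described above (the field names and the 128-character token length come from the description; the 3600-second expiry and Bearer token type are assumptions):

```python
import json
import secrets

def validate_body(body: dict):
    """Reject requests whose body is missing fields or carries extras."""
    allowed = {"client_id", "client_secret"}
    if set(body.keys()) != allowed:
        return {"error": "body must contain exactly client_id and client_secret"}
    return None

def generate_token() -> str:
    """Generate a 128-character hex token (64 random bytes)."""
    return secrets.token_hex(64)

# Example usage with a hypothetical request body
body = {"client_id": "acme-app", "client_secret": "s3cr3t"}
error = validate_body(body)
if error:
    print(json.dumps(error))
else:
    token = generate_token()
    # expires_in and token_type are illustrative values
    print(json.dumps({"access_token": token, "expires_in": 3600, "token_type": "Bearer"}))
```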
by Edoardo Guzzi
This template integrates OpenAI's image generation and editing endpoints via the GPT-Image-1 model to visually create and manipulate images based on prompts. It features base64 conversion, binary handling, and prompt chaining. Perfect for marketing, design, product visuals, and creative workflows.

🛠️ Requirements
- OpenAI account with access to gpt-image-1 (you will likely need organization verification to access this model)
- OpenAI API credentials configured in n8n
- A self-hosted or cloud n8n instance
- Basic familiarity with the n8n UI (no programming required)

🔧 Step-by-step Instructions
Step 1: Manual Trigger. Starts the workflow on click. Ideal for testing the generation and edit logic.
Step 2: Generate Image. The Create image call node sends a prompt to OpenAI and returns a base64 image. Example prompt: A cyberpunk city at night with flying cars and neon lights.
Step 3: Convert to Binary. The base64 image is converted into a usable binary PNG file with the Convert json binary to File node.
Step 4: Edit the Image. The binary file is passed to OpenAI's /images/edits endpoint. A new prompt applies changes to the image. Example: Add a glowing robot in the foreground with a neon sword.
✅ Supports model: gpt-image-1
⚠️ Requires a binary file (not base64)
Step 5: Final Conversion. Converts the final edited image from base64 to a file so it can be downloaded or used in other nodes.

🎯 Real-World Use Cases
- 🎨 Artists & Creators: concept art and illustration variations
- 🛍️ E-commerce: auto-generate product mockups
- 📰 Marketing: create eye-catching blog or social visuals

💡 Bonus Ideas
- Add a Telegram or Slack node to generate or edit images via chat
- Use a Webhook to feed prompts from a form or frontend
- Add a mask to restrict edits to specific areas (e.g., background only)
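Outside n8n, the same generate → convert → edit chain looks roughly like the Python sketch below. The endpoints and response fields follow OpenAI's public images API as commonly documented, so treat this as an approximation and verify the field names against the current API reference:

```python
import base64
import requests

API_KEY = "sk-..."  # your OpenAI API key
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 2: generate an image (gpt-image-1 returns base64 data)
gen = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers=HEADERS,
    json={"model": "gpt-image-1",
          "prompt": "A cyberpunk city at night with flying cars and neon lights"},
).json()

# Step 3: convert the base64 payload into a binary PNG file
with open("city.png", "wb") as f:
    f.write(base64.b64decode(gen["data"][0]["b64_json"]))

# Step 4: edit the image; the edits endpoint expects a binary file upload
edit = requests.post(
    "https://api.openai.com/v1/images/edits",
    headers=HEADERS,
    files={"image": open("city.png", "rb")},
    data={"model": "gpt-image-1",
          "prompt": "Add a glowing robot in the foreground with a neon sword"},
).json()

# Step 5: final conversion back from base64 to a file
with open("city_edited.png", "wb") as f:
    f.write(base64.b64decode(edit["data"][0]["b64_json"]))
```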
by Hamed Nickmehr
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Title: n8n Credentials and Workflows Backup on Change Detection

Purpose:
Never lose track of your n8n changes again! This workflow smartly backs up all your workflows and credentials, automatically detects any changes using hash comparison, and pushes updates to GitHub, but only when something has actually changed. Set your own interval and stop cluttering your repo with redundant commits.

Walkthrough Video on YouTube

Trigger:
- Schedule Trigger: executes the entire process at a user-defined interval. No need to worry about traceability or managing countless backups, as the workflow only commits changes when a difference is detected.

Workflow Backup Process:
- Set Workflow Path: defines the local backup file path for workflows.
- Get Old Workflow Hash: executes a helper workflow to retrieve the previous hash.
- Execute Workflow Backup: runs n8n export:workflow to export all workflows to the defined file path.
- Get New Workflow Hash: executes a helper workflow to generate the new hash from the exported file.
- Compare Hashes (If Workflow Updated): checks if the new hash differs from the old one.
- If updated: Read Workflow Data → Extract Text → Push to GitHub: reads, extracts, and commits the updated workflow JSON to GitHub under a timestamped filename.

Credential Backup Process:
- Set Credential Path: defines the local backup file path for credentials.
- Get Old Credential Hash: executes a helper workflow to retrieve the previous hash.
- Execute Credential Backup: runs n8n export:credentials to export all credentials.
- Get New Credential Hash: executes a helper workflow to generate the new hash from the exported file.
- Compare Hashes (If Credential Updated): checks for changes.
- If updated: Read Credential Data → Extract Text → Push to GitHub: commits the new credentials JSON to GitHub if changes are found.

Hash Generator (Helper Flow):
Used in both the workflow and credential backup paths:
- Read File → Extract Text → Hash Data
- Outputs the SHA-256 hash used for comparison (see the sketch below).

GitHub Integration:
- Commits are created with an ISO timestamp in the filename and message.
- Repository: https://github.com/your-github-name/n8n-onchange-bachup
- File paths: backups/WorkFlow Backup -timestamp-.json and backups/Credential Backup -timestamp-.json

Change Detection Logic:
- Only commits files when hash changes are detected (i.e., actual content change).
- Avoids unnecessary GitHub commits and storage use.

Error Handling:
- GitHub nodes are set to continue workflow execution on error, avoiding full process interruption.
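The helper flow's hashing step boils down to computing a SHA-256 digest of the exported file and comparing it with the previously stored one. A minimal Python sketch of that change-detection logic (file path and stored hash are placeholders):

```python
import hashlib

def file_hash(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# The export file is produced by the Execute Command node running
# `n8n export:workflow`; the path below is a placeholder.
new_hash = file_hash("/backups/workflows.json")
old_hash = "..."  # the hash stored from the previous run

if new_hash != old_hash:
    print("Change detected: commit the new backup to GitHub")
else:
    print("No change: skip the commit")
```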
by Danielle Gomes
This n8n workflow collects and summarizes news from multiple RSS feeds, using OpenAI to generate a concise summary that can be sent to WhatsApp or other destinations. Perfect for automating your daily news digest.

🔁 Workflow Breakdown:
1. Schedule Trigger: starts the workflow on your desired schedule (daily, hourly, etc.). 🟨 Note: Set the trigger however you wish.
2. RSS Feeds (My RSS 01–04): fetches articles from four different RSS sources. 🟨 Note: You can add as many RSS feeds as you want.
3. Edit Fields (Edit Fields1–3): normalizes RSS fields (title, link, etc.) to ensure consistency across different sources.
4. Merge (append mode): combines the RSS items into a single unified list.
5. Filter: optionally filters articles by keywords, date, or categories.
6. Limit: limits the analysis to the 10 most recent articles. 🟨 Note: This keeps the result concise and avoids overloading the summary.
7. Aggregate: prepares the selected news for summarization by combining the items into a single content block (see the sketch below).
8. OpenAI (Message Assistant): summarizes the aggregated news items in a clean and readable format using AI.
9. Send Summary to WhatsApp: sends the AI-generated summary to a WhatsApp endpoint via webhook (yoururlapi.com). You can replace this with an email service, Google Drive, or any other destination. 🟨 Note: You can send it to your WhatsApp API, email, drive, etc.
10. No Operation (End): final placeholder to safely close the workflow. You may expand from here if needed.
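For context, the fetch → normalize → limit → aggregate part of the pipeline can be approximated in a few lines of Python (the feed URLs are placeholders; the real workflow does this with the RSS, Edit Fields, Merge, Limit, and Aggregate nodes):

```python
import feedparser  # pip install feedparser

# Placeholder feed URLs standing in for "My RSS 01-04"
feeds = [
    "https://example.com/feed1.xml",
    "https://example.com/feed2.xml",
]

# Fetch and normalize: keep only title and link, as the Edit Fields nodes do
items = []
for url in feeds:
    for entry in feedparser.parse(url).entries:
        items.append({"title": entry.get("title", ""), "link": entry.get("link", "")})

# Limit to the 10 most recent items and aggregate into one content block
items = items[:10]
content_block = "\n".join(f"- {i['title']} ({i['link']})" for i in items)

# content_block is what gets handed to the OpenAI node for summarization
print(content_block)
```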
by Alex Halfborg
BACKGROUND
Malaysia's Inland Revenue Board (LHDN) provides an API that returns the Tax Identification Number (TIN) for a business entity, based on a given Business Registration Number (BRN or SSM) or NRIC (MyKad).

PROBLEM
However, the API only allows one search at a time.

SOLUTION
This free workflow lets you run a batch search to get the TIN for multiple SSM or NRIC numbers. This is useful if you need to prepare your internal database for e-invoicing.

PRE-REQUISITES
1) Get your connection client ID and client secret from the myhasil.gov.my website
2) Prepare a Google Sheet containing the list of SSM and NRIC numbers you want to look up
3) Create an n8n credential to connect to the Google Sheet above

SUPPORT
Questions? Ask alex at halfborg dot com
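The description above does not include the API call itself. Purely as an illustration of what the batch loop does, here is a Python sketch in which the endpoint URL, query parameters, and column names are hypothetical placeholders, not the real LHDN/MyInvois API, so replace them with the values from the official documentation:

```python
import csv
import requests

ACCESS_TOKEN = "..."  # obtained with your client ID and client secret
# Placeholder endpoint and parameters; substitute the real TIN-search endpoint
SEARCH_URL = "https://api.example.gov.my/taxpayer/search/tin"

results = []
with open("entities.csv", newline="") as f:      # stands in for the Google Sheet
    for row in csv.DictReader(f):                # assumed columns: id_type, id_value
        resp = requests.get(
            SEARCH_URL,
            params={"idType": row["id_type"], "idValue": row["id_value"]},
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        )
        results.append({**row, "tin": resp.json().get("tin") if resp.ok else None})

for r in results:
    print(r)
```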
by Sarfaraz Muhammad Sajib
This n8n workflow sends SMS messages through the Textbelt API by accepting a phone number, message, and API key as inputs. It uses a manual trigger to start the process, sets the necessary data, and executes an HTTP POST request to deliver the SMS.

Step-by-Step Explanation:
1. Manual Trigger: starts the workflow manually by clicking "Execute workflow".
2. Set Data Node: defines the required input parameters (phone, message, and key) that will be sent to the SMS API. You can populate these fields with your target phone number, the text message, and your Textbelt API key.
3. HTTP Request Node: sends a POST request to https://textbelt.com/text with the phone number, message, and API key in the request body to send the SMS. The response from the API confirms whether the message was successfully sent.
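Outside n8n, the same call can be reproduced with a few lines of Python, which is handy for testing your key before wiring up the workflow (the phone number below is an example):

```python
import requests

resp = requests.post("https://textbelt.com/text", data={
    "phone": "+15551234567",          # example target number
    "message": "Hello from n8n!",
    "key": "YOUR_TEXTBELT_API_KEY",
})
# Textbelt replies with JSON indicating whether the message was accepted,
# e.g. a success flag and the remaining quota.
print(resp.json())
```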
by Dataki
This workflow helps you generate an llms.txt file (if you're unfamiliar with it, check out this article) using a Screaming Frog export. Screaming Frog is a well-known website crawler. You can easily crawl a website, then export the "internal_html" section in CSV format.

How It Works:
A form allows you to enter:
- The name of the website
- A short description
- The internal_html.csv file from your Screaming Frog export
Once the form is submitted, the workflow is triggered automatically, and you can download the llms.txt file directly from n8n.

Downloading the File
Since the last node in this workflow is "Convert to File", you will need to download the file directly from the n8n UI. However, you can easily add a node (e.g., Google Drive, OneDrive) to automatically upload the file wherever you want.

AI-Powered Filtering (Optional):
This workflow includes a text classifier node, which is deactivated by default. You can activate it to apply a more intelligent filter when selecting URLs for the llms.txt file. Consider modifying the description in the classifier node to specify the type of URLs you want to include.

How to Use This Workflow
1. Crawl the website you want to generate an llms.txt file for using Screaming Frog.
2. Export the "internal_html" section in CSV format.
3. In n8n, click "Test Workflow", fill in the form, and upload the internal_html.csv file.
4. Once the workflow is complete, go to the "Export to File" node and download the output.
That's it! You now have your llms.txt file!

Recommended Usage:
Use this workflow directly in the n8n UI by clicking "Test Workflow" and uploading the file in the form.
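If you'd rather see the transformation in plain code, here is an approximate Python equivalent of what the workflow builds from the export. The llms.txt layout (H1 title, block-quoted description, link list) follows the public llms.txt convention, and the Screaming Frog column names ("Address", "Title 1", "Status Code") may differ between versions, so adjust as needed:

```python
import csv

site_name = "Example Site"            # from the form
description = "A short description"   # from the form

lines = [f"# {site_name}", "", f"> {description}", ""]

# Keep only successfully crawled HTML pages from the export
with open("internal_html.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        if row.get("Status Code") == "200":
            title = row.get("Title 1") or row["Address"]
            lines.append(f"- [{title}]({row['Address']})")

with open("llms.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(lines) + "\n")
```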
by Vincent
Automate Actions After PDF Generation with PDFMonkey in n8n

Overview
This n8n workflow template allows you to automatically react to PDF generation events from PDFMonkey. When a new PDF is successfully created, this workflow retrieves the file and processes it based on your needs, whether that is sending it via email, saving it to cloud storage, or integrating it with other apps.

How It Works
1. Trigger: the workflow listens for a PDFMonkey webhook event when a new PDF is generated.
2. Retrieve PDF: it fetches the newly generated PDF file from PDFMonkey.
3. Process & Action, depending on the outcome:
   - ✅ On success: the workflow downloads the PDF and can distribute or store it.
   - ❌ On failure: it handles errors accordingly (e.g., sending alerts, retrying, or logging the issue).

Configuration
To set up this workflow, follow these steps:
1. Copy the Webhook URL generated by n8n.
2. Go to your PDFMonkey Webhooks dashboard and paste the URL in the appropriate field to define the callback URL.
3. Save your settings and trigger a test to ensure proper integration.
📖 For detailed setup instructions, visit: PDFMonkey Webhooks Documentation

Use Cases
This workflow is ideal for:
- Automating invoice processing (e.g., sending PDFs to customers via email).
- Archiving reports in cloud storage (e.g., Google Drive, Dropbox, or AWS S3).
- Sending notifications via Slack, Microsoft Teams, or WhatsApp when a new PDF is available.
- Logging generated PDFs in Airtable, Notion, or a database for tracking.

Customization
You can customize this workflow to:
- Add conditional logic (e.g., different actions based on the document type).
- Enhance security (e.g., encrypting PDFs before sharing).
- Extend integrations by connecting with CRM tools, task managers, or analytics platforms.

Need Help?
If you need assistance setting up or customizing this workflow, feel free to reach out to us via chat on pdfmonkey.io. We'll be happy to help! 🚀
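As an illustration of steps 2 and 3, here is a small Python sketch that consumes a hypothetical webhook payload and downloads the finished PDF. The payload field names (document, status, download_url) are assumptions for the sake of the example, so check the JSON your n8n Webhook node actually receives from PDFMonkey:

```python
import requests

# Hypothetical PDFMonkey webhook payload; field names are assumptions
payload = {
    "document": {
        "id": "doc_123",
        "status": "success",
        "download_url": "https://example.com/generated.pdf",
    }
}

doc = payload["document"]
if doc["status"] == "success":
    pdf = requests.get(doc["download_url"], timeout=30)
    with open(f"{doc['id']}.pdf", "wb") as f:
        f.write(pdf.content)   # store, email, or upload the file from here
else:
    print("Generation failed: alert, retry, or log the issue")
```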
by Alex Kim
Overview
The n8n Workflow Cloner is a powerful automation tool designed to copy, sync, and migrate workflows across different n8n instances or projects. Whether you're managing multiple environments (development, staging, production) or organizing workflows within a team, this workflow automates the transfer process, ensuring seamless workflow deployment with minimal manual effort.

By automatically detecting and copying only the missing workflows, this tool helps maintain consistency, improve collaboration, and streamline workflow migration between projects or instances.

How to Use
1️⃣ Set Up API Credentials
- Configure API credentials for both the source and destination n8n instances.
- Ensure the credentials have read and write access to manage workflows.
2️⃣ Select Source & Destination
- Update the "GET - Workflows" node to define the source instance.
- Set the "CREATE - Workflow" node to specify the destination instance.
3️⃣ Run the Workflow
- Click "Test Workflow" to start the transfer.
- The system will fetch all workflows from the source, compare them with the destination, and copy any missing workflows (see the sketch below for the underlying API calls).
4️⃣ Change the Destination Project (Optional)
- By default, workflows are moved to the "KBB Workflows" project.
- Modify the "Filter" node to transfer workflows to a different project.
5️⃣ Monitor & Verify
- The Loop Over Items node ensures batch processing for multiple workflows.
- Log outputs provide details on transferred workflows and statuses.

Key Benefits
✅ Automate Workflow Transfers – No more manual exports/imports.
✅ Sync Workflows Across Environments – Keep workflows up to date in dev, staging, and production.
✅ Effortless Team Collaboration – Share workflows across projects seamlessly.
✅ Backup & Migration Ready – Easily move workflows between n8n instances.

Use Cases
🔹 CI/CD for Workflows – Deploy workflows between development and production environments.
🔹 Team Workflow Sharing – Share workflows across multiple n8n projects.
🔹 Workflow Backup Solution – Store copies of workflows in a dedicated backup project.

Tags
🚀 Workflow Migration 🚀 n8n Automation 🚀 Sync Workflows 🚀 Backup & Deployment
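For reference, the two API calls behind the "GET - Workflows" and "CREATE - Workflow" nodes can be sketched in Python roughly as follows. This assumes the n8n public REST API (`/api/v1/workflows` with an `X-N8N-API-KEY` header) and compares workflows by name; pagination, project assignment, and error handling are omitted, so verify the details against your instance's API documentation:

```python
import requests

SOURCE = {"url": "https://source-n8n.example.com", "key": "SOURCE_API_KEY"}
DEST = {"url": "https://dest-n8n.example.com", "key": "DEST_API_KEY"}

def list_workflows(instance):
    """Fetch workflows from an n8n instance via its public API."""
    resp = requests.get(
        f"{instance['url']}/api/v1/workflows",
        headers={"X-N8N-API-KEY": instance["key"]},
    )
    resp.raise_for_status()
    return resp.json()["data"]

source_flows = list_workflows(SOURCE)
existing_names = {wf["name"] for wf in list_workflows(DEST)}

# Copy only the workflows that are missing on the destination
for wf in source_flows:
    if wf["name"] in existing_names:
        continue
    payload = {k: wf[k] for k in ("name", "nodes", "connections", "settings")}
    requests.post(
        f"{DEST['url']}/api/v1/workflows",
        headers={"X-N8N-API-KEY": DEST["key"]},
        json=payload,
    ).raise_for_status()
```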
by Nskha
Overview
The [n8n] YouTube Channel Advanced RSS Feeds Generator workflow facilitates the generation of various RSS feed formats for YouTube channels without requiring API access or administrative permissions. It utilizes third-party services to extract data, making it extremely user-friendly and accessible.

Key Use Cases and Benefits
- Content Aggregation: easily gather and syndicate content from any public YouTube channel.
- No API Key Required: avoid the complexities and limitations of Google's API.
- Multiple Formats: supports ATOM, JSON, MRSS, Plaintext, Sfeed, and direct YouTube XML feeds.
- Flexibility: input can be a YouTube channel or video URL, ID, or username.

Services/APIs Utilized
This workflow integrates with:
- commentpicker.com: for retrieving YouTube channel IDs.
- rss-bridge.org: to generate various RSS formats.

Configuration Instructions
1. Start the Workflow: activate the workflow in your n8n instance.
2. Input Details: enter the YouTube channel or video URL, ID, or username via the provided form trigger.
3. Run the Workflow: execute the workflow to receive links to 13 different RSS feeds, including community and video content feeds.

Screenshots

Additional Notes
- Customization: you can modify the RSS feed formats or integrate additional services as needed.

Support and Contributions
For support, questions, or contributions, please visit the n8n community forum or the GitHub repository. We welcome contributions from the community!
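One of the generated links, the direct YouTube XML feed, can be built from the channel ID alone; the other formats come from rss-bridge.org, whose exact query strings the workflow assembles for you. A tiny sketch (the channel ID below is just an example value):

```python
# Given a channel ID (e.g. resolved via commentpicker.com, as the workflow does),
# YouTube itself exposes an XML feed at a well-known URL:
channel_id = "UC_x5XG1OV2P6uZZ5FSM9Ttw"  # example channel ID

youtube_xml = f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"
print(youtube_xml)

# The ATOM, JSON, MRSS, Plaintext, and Sfeed variants are produced by
# rss-bridge.org; their query parameters are assembled by the workflow itself.
```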
by Omar Akoudad
This n8n workflow helps eCommerce businesses (especially in the Cash on Delivery space) send real-time order events to the Meta (Facebook) Conversions API, ensuring accurate event tracking and better ad attribution.

Features
- Webhook Listener: accepts incoming order data (name, phone, IP, user-agent, etc.) via HTTP POST/GET.
- Data Normalization: cleans and formats first_name, last_name, phone, and event_time according to Facebook's strict specs.
- Data Hashing: securely hashes sensitive user data (SHA256), as required by Meta.
- Full Custom Data Support: pass order value, currency, and more.

Ideal For:
- Shopify, WooCommerce, and custom stores (Laravel, Node, etc.)
- Businesses using Meta Ads that need high-quality server-side tracking
- Teams without access to full dev resources, but using n8n for automation

How It Works:
1. Receive Order from your store via Webhook or API.
2. Format & Normalize fields to match Facebook's expected structure.
3. Hash Sensitive Fields using SHA256 (name, phone, email).
4. Send to Facebook via the Conversions API endpoint (see the sketch below).

Requirements:
- A Meta Business Manager account with Conversions API access
- Your Access Token and Pixel ID set up in n8n credentials
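A minimal Python sketch of steps 2–4: normalizing and SHA256-hashing the user fields, then posting a Purchase event to the Conversions API. The Graph API version, field values, and event details below are illustrative, so check Meta's Conversions API reference for the exact parameter list:

```python
import hashlib
import time
import requests

PIXEL_ID = "YOUR_PIXEL_ID"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

def norm_hash(value: str) -> str:
    """Meta expects trimmed, lowercased values hashed with SHA256."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

event = {
    "data": [{
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "action_source": "website",
        "user_data": {
            "fn": norm_hash("Jane"),                 # first name, hashed
            "ln": norm_hash("Doe"),                  # last name, hashed
            "ph": norm_hash("212600000000"),         # digits only, with country code, hashed
            "client_ip_address": "203.0.113.7",      # not hashed
            "client_user_agent": "Mozilla/5.0 ...",  # not hashed
        },
        "custom_data": {"currency": "USD", "value": 49.0},
    }]
}

resp = requests.post(
    f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events",  # API version is illustrative
    params={"access_token": ACCESS_TOKEN},
    json=event,
)
print(resp.json())
```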
by bswlife
Disclaimer
The Execute Command node is only supported on self-hosted (local) instances of n8n.

Introduction
KOKORO TTS: Kokoro TTS is a compact yet powerful text-to-speech model, currently available on Hugging Face and GitHub. Despite its modest size (trained on less than 100 hours of audio), it delivers impressive results, consistently topping the TTS leaderboard on Hugging Face. Unlike larger systems, Kokoro TTS offers the advantage of running locally, even on devices without GPUs, making it accessible to a wide range of users.

Who will benefit from this integration?
This will be useful for video bloggers and TikTokers, and it will also enable the creation of a free voice chat bot. Currently, TTS models are mostly paid, but this integration allows for fully free voice generation. The possibilities are limited only by your imagination.

Note
Unfortunately, we can't interact with the KOKORO API via a browser URL (GET/POST), but we can run a Python script through n8n and pass any variables to it. The tutorial uses the D drive, but you can adapt the paths to any location, including the C drive.

Step 1
You need to have Python installed. link
Also, download and extract the portable version of KOKORO from GitHub.
Create a file named voicegen.py with the following code in the KOKORO folder (C:\KOKORO). As you can see, the output path is D:\output.mp3.

import sys
import shutil
from gradio_client import Client

# Set UTF-8 encoding for stdout
sys.stdout.reconfigure(encoding='utf-8')

# Get arguments from the command line
text = sys.argv[1]          # First argument: input text
voice = sys.argv[2]         # Second argument: voice
speed = float(sys.argv[3])  # Third argument: speed (converted to float)

print(f"Received text: {text}")
print(f"Voice: {voice}")
print(f"Speed: {speed}")

# Connect to the local Gradio server
client = Client("http://localhost:7860/")

# Generate speech using the API
result = client.predict(
    text=text,
    voice=voice,
    speed=speed,
    api_name="/generate_speech"
)

# Define the output path
output_path = r"D:\output.mp3"

# Move the generated file
shutil.move(result[1], output_path)

# Print the output path
print(output_path)

Step 2
Go to n8n and create the following workflow.

Step 3
Edit Fields node:
{
  "voice": "af_sarah",
  "text": "Hello world!"
}

Step 4
You'll need an Execute Command node with the command:
python C:\KOKORO\voicegen.py "{{ $json.text }}" "{{ $json.voice }}" 1

Step 5
The script is already working, but to listen to the result you can connect a Binary node pointing to the generated MP3 file D:/output.mp3.

Step 6
Click "Test workflow" and enjoy the result. There are more voices and accents than in ChatGPT, plus it's free.

P.S. If you want, there is a detailed tutorial on my blog.