by Hamed Nickmehr
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

**Title:** n8n Credentials and Workflows Backup on Change Detection

**Purpose:** Never lose track of your n8n changes again! This workflow backs up all your workflows and credentials, automatically detects any changes using hash comparison, and pushes updates to GitHub—but only when something has actually changed. Set your own interval and stop cluttering your repo with redundant commits.

Walkthrough Video on YouTube

**Trigger:**
- **Schedule Trigger**: Executes the entire process at a user-defined interval. No need to worry about traceability or managing countless backups, as the workflow only commits changes when a difference is detected.

**Workflow Backup Process:**
- **Set Workflow Path**: Defines the local backup file path for workflows.
- **Get Old Workflow Hash**: Executes a helper workflow to retrieve the previous hash.
- **Execute Workflow Backup**: Runs `n8n export:workflow` to export all workflows to the defined file path.
- **Get New Workflow Hash**: Executes a helper workflow to generate the new hash from the exported file.
- **Compare Hashes (If Workflow Updated)**: Checks whether the new hash differs from the old one.
- **If Updated**: Read Workflow Data → Extract Text → Push to GitHub: Reads, extracts, and commits the updated workflow JSON to GitHub under a timestamped filename.

**Credential Backup Process:**
- **Set Credential Path**: Defines the local backup file path for credentials.
- **Get Old Credential Hash**: Executes a helper workflow to retrieve the previous hash.
- **Execute Credential Backup**: Runs `n8n export:credentials` to export all credentials.
- **Get New Credential Hash**: Executes a helper workflow to generate the new hash from the exported file.
- **Compare Hashes (If Credential Updated)**: Checks for changes.
- **If Updated**: Read Credential Data → Extract Text → Push to GitHub: Commits the new credentials JSON to GitHub if changes are found.

**Hash Generator (Helper Flow):** Used in both the workflow and credential backup paths: *Read File* → *Extract Text* → *Hash Data*. Outputs the SHA-256 hash used for comparison.

**GitHub Integration:**
- Commits are created with an ISO timestamp in the filename and message.
- Repository: https://github.com/your-github-name/n8n-onchange-bachup
- File paths: `backups/WorkFlow Backup -timestamp-.json` and `backups/Credential Backup -timestamp-.json`

**Change Detection Logic:**
- Only commits files when a hash change is detected (i.e., an actual content change).
- Avoids unnecessary GitHub commits and storage use.

**Error Handling:** GitHub nodes are set to continue workflow execution on error, avoiding full process interruption.
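For reference, the change-detection step boils down to hashing the exported file and comparing the result with the previously stored hash. The sketch below is a minimal stand-alone illustration of that logic; the file paths and the hash-store location are placeholder assumptions, not values taken from the template:

```python
import hashlib
from pathlib import Path

EXPORT_FILE = Path("/tmp/workflows.json")   # hypothetical export path
HASH_FILE = Path("/tmp/workflows.sha256")   # hypothetical stored-hash path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

new_hash = sha256_of(EXPORT_FILE)
old_hash = HASH_FILE.read_text().strip() if HASH_FILE.exists() else ""

if new_hash != old_hash:
    # Content actually changed: this is the case where the workflow commits to GitHub.
    HASH_FILE.write_text(new_hash)
    print("Change detected, push backup to GitHub")
else:
    print("No change, skip commit")
```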
by ObisDev
**Get Started**

Creator: @obisdev

This workflow powers a fully automated WhatsApp chatbot using a self-hosted Venom Bot instead of the official WhatsApp Business API. It integrates Google Gemini AI to generate intelligent, conversational responses and optionally pulls factual information from a Google Docs-based knowledge base. Designed for small businesses and creators, the bot can maintain contextual memory across messages and act as a smart virtual assistant for sales, support, and lead generation.

**Overview**

This n8n workflow connects with a custom-hosted Venom Bot that simulates WhatsApp Web to send and receive messages. It uses a Webhook trigger to receive incoming messages, processes them with an AI Agent powered by Gemini, optionally pulls extra data from a Google Doc or Google Sheet, and sends a smart reply back through the Venom Bot. The workflow also includes a memory system to retain user context, making it capable of handling follow-up questions and dynamic conversations.

**Who this workflow is for**
- Small Business Owners: Offer 24/7 customer service on WhatsApp without paying for Meta's Business API.
- Freelancers & Developers: Build, test, and monetize intelligent bots without the approval process of WhatsApp's API.
- Online Sellers & Creators: Handle FAQs, orders, and customer inquiries via WhatsApp on autopilot.
- Marketers: Deploy campaign bots that respond to DMs with personalized product suggestions or lead captures.
- Hackers & Builders: Experiment with unofficial APIs to control WhatsApp reliably without breaking TOS for small-scale use.

**Tools Used**
- n8n: The automation platform managing flow, context, and decision logic.
- Venom Bot: A Node.js-based, self-hosted WhatsApp Web bot used to send/receive messages.
- Google Gemini: AI engine for generating context-aware replies.
- Google Docs (Optional): Acts as a structured knowledge base for business info or FAQs.
- Google Sheets (Optional): Feeds real-time or structured data into your AI responses.

**How to Install**
1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Set Up Venom Bot: Deploy Venom Bot (on a VPS or locally) and configure it to send messages to your Webhook URL.
3. Webhook Configuration: Update the Webhook node in n8n and set 'Respond' to "Using Respond to Webhook Node".
4. Connect Google Gemini: Add your Gemini API key in n8n credentials.
5. Set Up Google Docs (Optional): Link the document containing your knowledge base.
6. Enable Conversational Memory: Use ={{ $("Process Message").first().json.from }} as the session ID.
7. Check API Key Matching: Ensure the API_SECRET_KEY in the Venom .env matches the authorization header in n8n (see the sketch after this section).
8. Customize Persona & Prompts: Update the AI Agent system message to fit your brand tone.

**Use Cases**
- Customer service without the WhatsApp Business API
- Smart lead generation bots
- E-commerce order responders
- AI-powered chatbot for DMs
- FAQ responder with knowledge base support

**Connect with Me**
- Email: obisdev@gmail.com
- Twitter/X: @obisdev
- GitHub: github.com/obisdev
- Visit: obisdev.vercel.app

#n8n #whatsappautomation #venombot #chatbots #noapi #geminiapi #googleworkspace #aiassistant #nocode #vpsautomation #chatbotwithoutapi #automationtools #customerbot #salesautomation #googleintegration
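To test the n8n side independently of WhatsApp, the hedged sketch below simulates what the Venom Bot side might send to the Webhook node, including the shared-secret authorization header from step 7. The webhook URL, header name, and payload fields are illustrative assumptions rather than values taken from the template:

```python
import requests

# Hypothetical values: replace with your own n8n webhook URL and the
# API_SECRET_KEY you configured in the Venom Bot .env file.
N8N_WEBHOOK_URL = "https://n8n.example.com/webhook/whatsapp-incoming"
API_SECRET_KEY = "change-me"

# An illustrative incoming-message payload; the real Venom Bot payload
# may use different field names.
payload = {
    "from": "15551234567@c.us",
    "body": "Hi! Do you ship internationally?",
    "timestamp": 1735689600,
}

response = requests.post(
    N8N_WEBHOOK_URL,
    json=payload,
    headers={"Authorization": API_SECRET_KEY},
    timeout=30,
)
print(response.status_code, response.text)  # the reply produced by the AI Agent
```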
by David Roberts
This AI agent can access data provided by another n8n workflow. Since that workflow can be used to retrieve any data from any service, this template can be used to give an agent access to any data. Note that to use this template, you need to be on n8n version 1.19.4 or later.
by Alberto
PersonalNotesAssistant – Organize and Understand Your Thoughts with Local AI

PersonalNotesAssistant is an offline-capable, AI-powered agent that helps you store, summarize, retrieve, and reflect on your personal notes and voice memos — all processed locally and sent via Telegram. Built to run efficiently on a Raspberry Pi 5, this agent supports a variety of note-taking styles and acts as your private memory extension.

**🧠 What It Can Do**
- Accept voice or text notes via Telegram
- Transcribe audio messages into clean, structured text (using Whisper)
- Automatically summarize or categorize notes with a local LLM
- Answer questions based on your past notes
- Retrieve relevant entries by topic, date, or keyword
- Help you journal or reflect by asking follow-up questions
- Work completely offline — no cloud or external APIs

**🔧 How It Works**
1. Capture Notes via Telegram: You send a voice message or text to your Telegram bot. The assistant supports both quick thoughts and long-form content.
2. Transcription with Whisper (Local): If the input is a voice message, it is transcribed into text using Whisper running locally on your Raspberry Pi.
3. AI Summarization & Tagging: The transcribed or typed note is sent to LLaMA 3.2 via Ollama, which summarizes it, suggests tags, and stores it with metadata (e.g., timestamp, mood, theme).
4. Storage & Retrieval: Notes are stored in a local database (e.g., SQLite or JSON). You can later query the assistant with prompts like "What did I say about stress last week?", "Summarize my ideas from this month.", or "Show notes tagged with 'travel'." A minimal storage sketch follows this section.
5. Follow-Up & Reflection: The agent can optionally engage with reflective prompts to help you deepen your thoughts or gain insight from what you've recorded.

**💡 Use Cases**
- Track personal growth, habits, or therapy progress
- Create voice memos while walking or commuting
- Maintain a structured journal without typing
- Use as a second brain to help you remember and revisit important thoughts

**🔐 Privacy by Default**
Everything runs locally:
- No notes are uploaded to cloud platforms
- No audio is sent to third-party transcription services
- No LLM processing happens outside your device

Ideal for privacy-minded users, psychologists, researchers, or digital minimalists who want AI assistance without surveillance.

**⚙️ Technical Stack**
- Raspberry Pi 5: Low-power edge device
- Whisper (local): For voice-to-text conversion
- Ollama + LLaMA 3.2: For summarization, classification, and retrieval
- Telegram Bot API: For input/output
- Custom Database (e.g., JSON/SQLite): For storing and querying notes

**🧪 Real-Life Use**
This agent is actively used daily by the developer to log ideas, emotions, and plans. It has proven effective for lightweight journaling and context-aware memory assistance, even when offline.
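As a rough illustration of the "Storage & Retrieval" step, the sketch below stores a summarized note with tags in SQLite and retrieves entries by keyword. The schema and field names are assumptions made for illustration; the agent itself leaves the exact storage format (JSON or SQLite) up to you:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical local database file; the stack description leaves this choice open.
conn = sqlite3.connect("notes.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS notes (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           created_at TEXT NOT NULL,
           summary TEXT NOT NULL,
           tags TEXT NOT NULL
       )"""
)

def add_note(summary: str, tags: list[str]) -> None:
    """Store a summarized note with its tags and a UTC timestamp."""
    conn.execute(
        "INSERT INTO notes (created_at, summary, tags) VALUES (?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), summary, ",".join(tags)),
    )
    conn.commit()

def find_notes(keyword: str) -> list[tuple]:
    """Retrieve notes whose summary or tags mention the keyword."""
    like = f"%{keyword}%"
    return conn.execute(
        "SELECT created_at, summary, tags FROM notes "
        "WHERE summary LIKE ? OR tags LIKE ? ORDER BY created_at DESC",
        (like, like),
    ).fetchall()

add_note("Felt stressed about the launch; plan to delegate more.", ["stress", "work"])
for row in find_notes("stress"):
    print(row)
```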
by Alex Huang
**Use case**

This workflow is designed for e-commerce brands and content teams who:
- Need to scale SEO content production without sacrificing quality
- Want to eliminate manual keyword filtering (saves 10+ hours/week)
- Aim to dominate niche search terms (e.g., "vegan leather crossbody bags")

**What this workflow does**

Automates the end-to-end process from keyword discovery to publish-ready articles:
- Keyword Harvesting: Pulls 1,000+ keywords/day from SEMrush/Ahrefs
- Smart Filtering: Blocks competitor brands (e.g., "Zara alternatives") and detects irrelevant demographics ("kids", "petite") — a simplified filtering sketch follows this section
- AI Content Generation: Flags non-compliant colors (non-black/white terms)
- Multi-Channel Output: Formats content for blogs, product descriptions, and email campaigns

**Setup**
1. Add Google, SEMrush, and OpenAI credentials
2. Set up the rules Excel file in Google Drive
3. Run the workflow using "Test workflow"
4. Review the generated opportunity report in Google Sheets

**How to adjust this template**
- Change the scenario: Replace the rules and define a different target
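For illustration, here is a minimal stand-alone sketch of the kind of keyword filtering described above. The blocklists below are example assumptions, not the rules file used by the template (which lives in Google Drive):

```python
# Example blocklists -- in the actual template these rules come from an
# Excel file in Google Drive, so treat these values as placeholders.
COMPETITOR_BRANDS = {"zara", "h&m", "shein"}
EXCLUDED_DEMOGRAPHICS = {"kids", "petite", "toddler"}
NON_COMPLIANT_COLORS = {"red", "blue", "green", "pink"}  # brand only sells black/white

def keyword_passes(keyword: str) -> bool:
    """Return True if a keyword survives the brand, demographic, and color rules."""
    words = set(keyword.lower().split())
    if words & COMPETITOR_BRANDS:
        return False
    if words & EXCLUDED_DEMOGRAPHICS:
        return False
    if words & NON_COMPLIANT_COLORS:
        return False
    return True

keywords = [
    "vegan leather crossbody bags",
    "zara alternatives crossbody",
    "kids mini backpack",
    "red tote bag",
]
approved = [kw for kw in keywords if keyword_passes(kw)]
print(approved)  # ['vegan leather crossbody bags']
```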
by Yaron Been
This workflow automatically identifies trending topics and hashtags across social media platforms to keep you informed of current trends and viral content. It saves you time by eliminating the need to manually research trending topics and provides data-driven insights for content strategy and social media planning.

**Overview**

This workflow automatically scrapes trending hashtag platforms and social media sites to extract currently trending topics, hashtags, and viral content themes. It uses Bright Data to access trend data sources without restrictions and AI to intelligently analyze trending content and provide actionable insights for content creators and marketers. A small sketch of the kind of hashtag analysis involved follows this section.

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping trend platforms and social media without being blocked
- **OpenAI**: AI agent for intelligent trend analysis and content insights
- **Google Sheets**: For storing trending topics data and analysis results

**How to Install**
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your trending topics tracking spreadsheet
5. Customize: Define target trend platforms and topics of interest

**Use Cases**
- **Content Marketing**: Discover trending topics for timely and relevant content creation
- **Social Media Strategy**: Plan posts around viral hashtags and trending themes
- **Brand Monitoring**: Track whether your brand or industry topics are trending
- **Influencer Marketing**: Identify trending content opportunities for collaborations

**Connect with Me**
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #trendingtopics #hashtags #brightdata #webscraping #contentmarketing #n8nworkflow #workflow #nocode #socialmediatrends #trendanalysis #viralcontent #contentresearch #socialmediamonitoring #trendtracking #contentdiscovery #hashtagresearch #socialmediamarketing #contentautomation #trendmonitoring #socialmediainsights #contentplanning #trendalerts #viralmarketing #socialtrends #contentoptimization #trendingcontent #socialmediadata #contentintelligence
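To give a concrete (if greatly simplified) sense of the trend extraction this workflow automates, the sketch below counts hashtag frequency across a batch of scraped post texts. It is purely illustrative and does not use the Bright Data or OpenAI APIs; the sample posts are made up:

```python
import re
from collections import Counter

# Example scraped post texts; in the workflow these would come from
# Bright Data scraping the trend platforms you configure.
posts = [
    "Loving the new #AI tools this week #automation #n8n",
    "Is #AI going to change content marketing? #contentmarketing",
    "Weekend build with #n8n and #automation",
]

hashtag_counts = Counter(
    tag.lower() for post in posts for tag in re.findall(r"#\w+", post)
)

# Top trending hashtags in this batch, most frequent first.
for tag, count in hashtag_counts.most_common(5):
    print(f"{tag}: {count}")
```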
by Liam
Ever wanted to just tell your 3D printer what to do remotely? This game changer lets you converse with OpenAI agents to manage OctoPrint-connected 3D printers. Great for remote management and monitoring, with easy-to-read responses. Query which jobs are available, load a print job (upload workflow in development, to be released soon for community use), start the print job, and request detailed reports on your printer's performance as it runs.

Chat > AI > OctoPrint API > Your 3D Printer

**Example 1**
Human Input: "What print job is running?"
AI Response:
- Job Name: Stand v1.gcode
- File Size: 2,533,233 bytes
- Estimated Print Time: approximately 6 hours 15 minutes (22,536 seconds)
- Completion: 1.97%
- Current Position in File: 49,940 bytes
- Time Spent Printing: 2,611 seconds (~43.5 minutes)
- Time Remaining: approximately 5 hours 34 minutes (20,123 seconds)
- Current State: Printing

**Example 2**
Human Input: "Is my 3D printer ok?"
AI Response:
- Status: Printing, Operational: Yes
- Bed Temperature: Actual 50°C, Target 50°C
- Nozzle (Tool 0) Temperature: Actual 200°C, Target 200°C
- Connection State: Currently connected to the printer; Port: /dev/ttyUSB0; Baudrate: 115200
- Summary: Your 3D printer is currently operational and printing without errors. The bed and nozzle temperatures are both at their target values.
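Under the hood, answers like the examples above come from OctoPrint's REST API. As a rough, stand-alone illustration of the calls involved (in the n8n flow these are wired through HTTP Request tools rather than Python), here is a hedged sketch; the host and API key are placeholders:

```python
import requests

# Placeholder values: point these at your own OctoPrint instance.
OCTOPRINT_URL = "http://octopi.local"
API_KEY = "YOUR_OCTOPRINT_API_KEY"
HEADERS = {"X-Api-Key": API_KEY}

# Current print job: file name and progress information.
job = requests.get(f"{OCTOPRINT_URL}/api/job", headers=HEADERS, timeout=10).json()
print(job.get("job", {}).get("file", {}).get("name"))
print(job.get("progress", {}).get("completion"))

# Printer state and temperatures (bed and tool0).
printer = requests.get(f"{OCTOPRINT_URL}/api/printer", headers=HEADERS, timeout=10).json()
print(printer.get("temperature", {}).get("bed"))
print(printer.get("temperature", {}).get("tool0"))
```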
by Alex Kim
**Overview**

The n8n Workflow Cloner is a powerful automation tool designed to copy, sync, and migrate workflows across different n8n instances or projects. Whether you're managing multiple environments (development, staging, production) or organizing workflows within a team, this workflow automates the transfer process, ensuring seamless workflow deployment with minimal manual effort. By automatically detecting and copying only the missing workflows, this tool helps maintain consistency, improve collaboration, and streamline workflow migration between projects or instances.

**How to Use**

1️⃣ Set Up API Credentials
- Configure API credentials for both the source and destination n8n instances.
- Ensure the credentials have read and write access to manage workflows.

2️⃣ Select Source & Destination
- Update the "GET - Workflows" node to define the source instance.
- Set the "CREATE - Workflow" node to specify the destination instance.

3️⃣ Run the Workflow
- Click "Test Workflow" to start the transfer.
- The system will fetch all workflows from the source, compare them with the destination, and copy any missing workflows. A rough sketch of this fetch-compare-create logic follows this section.

4️⃣ Change the Destination Project (Optional)
- By default, workflows are moved to the "KBB Workflows" project.
- Modify the "Filter" node to transfer workflows to a different project.

5️⃣ Monitor & Verify
- The Loop Over Items node ensures batch processing for multiple workflows.
- Log outputs provide details on transferred workflows and statuses.

**Key Benefits**
✅ Automate Workflow Transfers – No more manual exports/imports.
✅ Sync Workflows Across Environments – Keep workflows up to date in dev, staging, and production.
✅ Effortless Team Collaboration – Share workflows across projects seamlessly.
✅ Backup & Migration Ready – Easily move workflows between n8n instances.

**Use Cases**
🔹 CI/CD for Workflows – Deploy workflows between development and production environments.
🔹 Team Workflow Sharing – Share workflows across multiple n8n projects.
🔹 Workflow Backup Solution – Store copies of workflows in a dedicated backup project.

**Tags**
🚀 Workflow Migration 🚀 n8n Automation 🚀 Sync Workflows 🚀 Backup & Deployment
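For readers who want to see the underlying idea outside of n8n, here is a hedged sketch of the fetch-compare-create pattern against the n8n public REST API. The base URLs and API keys are placeholders, and matching workflows by name is a simplifying assumption; the template's own nodes may compare on different fields:

```python
import requests

# Placeholder instances and API keys.
SOURCE = {"url": "https://source-n8n.example.com", "key": "SOURCE_API_KEY"}
DEST = {"url": "https://dest-n8n.example.com", "key": "DEST_API_KEY"}

def list_workflows(instance: dict) -> list[dict]:
    """Fetch workflows via the n8n public API (first page only, for brevity)."""
    resp = requests.get(
        f"{instance['url']}/api/v1/workflows",
        headers={"X-N8N-API-KEY": instance["key"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["data"]

source_flows = list_workflows(SOURCE)
dest_names = {wf["name"] for wf in list_workflows(DEST)}

for wf in source_flows:
    if wf["name"] in dest_names:
        continue  # already present at the destination, skip it
    payload = {  # keep only the fields needed to recreate the workflow
        "name": wf["name"],
        "nodes": wf["nodes"],
        "connections": wf["connections"],
        "settings": wf.get("settings", {}),
    }
    requests.post(
        f"{DEST['url']}/api/v1/workflows",
        headers={"X-N8N-API-KEY": DEST["key"]},
        json=payload,
        timeout=30,
    ).raise_for_status()
    print(f"Copied: {wf['name']}")
```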
by Hans Blaauw
This flow is supported by a Chrome plugin created with Cursor AI. The idea was to create a Chrome plugin and a backend service in N8N to do chart analytics with OpenAI. It's a good sample of how to submit a screenshot from the browser to N8N.

**Who is it for?**
N8N developers who want to learn about using a Chrome plugin, an N8N webhook, and OpenAI.

**What opportunity does it present?**
This sample opens up a whole range of N8N-connected Chrome extensions that can analyze screenshots by using OpenAI.

**What this workflow does**
The workflow contains:
- a webhook trigger
- an OpenAI node with GPT-4O-MINI and "Analyze Image" selected
- a response node to send back the text that was created after analyzing the screenshot.

All this is needed to talk to the Chrome extension, which is created with Cursor AI. The idea is to visit the tradingview.com crypto charts, click the Chrome plugin, and get back analytics about the shown chart in understandable language. This is driven by the N8N flow. With the new image-analytics capabilities of OpenAI this opens up a world of opportunities. (A sketch of the request the extension sends is shown after this section.)

**Requirements/setup**
- OpenAI API key
- Cursor AI installed
- The Chrome extension. Download
- The N8N JSON code. Download

**How to customize it to your needs?**
Both the Chrome extension and the N8N flow can be adapted for use on other websites. You can consider:
- analyzing a financial screen and asking questions about the data shown
- analyzing other charts
- extending the N8N workflow with other AI nodes

With AI and image analytics the sky is the limit, and in some cases it saves you from creating complex API integrations.

Download Chrome extension
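The extension is the piece that actually captures the screenshot; to test the N8N side on its own, the hedged Python sketch below simulates what such an extension might POST to the webhook (a base64-encoded PNG). The webhook URL and payload field names are assumptions, not values taken from the template:

```python
import base64
import requests

# Placeholder: your n8n webhook (test or production URL).
WEBHOOK_URL = "https://n8n.example.com/webhook/chart-analysis"

# Load any chart screenshot from disk and base64-encode it, the way a
# browser extension might encode a captured tab image.
with open("chart.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "image": image_b64,                        # hypothetical field name
    "question": "What does this chart show?",  # hypothetical field name
}

resp = requests.post(WEBHOOK_URL, json=payload, timeout=60)
print(resp.status_code)
print(resp.text)  # the analysis text returned by the OpenAI node
```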
by Cheney Zhang
Paul Graham Essay Search & Chat with Milvus Vector Database

**How It Works**
This workflow creates a RAG (Retrieval-Augmented Generation) system using the Milvus vector database to search Paul Graham essays:
- Scrape & Load: Fetches Paul Graham essays, extracts the text, and stores it as vector embeddings in Milvus
- Chat Interface: Enables semantic search and AI-powered conversations about the essays

**Set Up Steps**
1. Set up a Milvus server following the official installation guide, then create a collection (see the sketch after these steps)
2. Execute the workflow to scrape essays and load them into your Milvus collection
3. Chat with the AI agent using the Milvus tool to query and discuss essay content
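As a hedged example of the "create a collection" step, here is a minimal pymilvus sketch. The collection name and embedding dimension are assumptions; match the dimension to whatever embedding model your n8n Milvus nodes are configured to use:

```python
from pymilvus import MilvusClient

# Assumes a local Milvus server, as in the official installation guide.
client = MilvusClient(uri="http://localhost:19530")

COLLECTION = "paul_graham_essays"   # hypothetical collection name
EMBEDDING_DIM = 1536                # must match your embedding model's output size

if not client.has_collection(COLLECTION):
    # A quick-start collection: auto-generated ids plus a vector field.
    client.create_collection(
        collection_name=COLLECTION,
        dimension=EMBEDDING_DIM,
    )

print(client.list_collections())
```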
by Omar Akoudad
This n8n workflow helps eCommerce businesses (especially in the Cash on Delivery space) send real-time order events to the Meta (Facebook) Conversions API, ensuring accurate event tracking and better ad attribution.

**Features**
- **Webhook Listener**: Accepts incoming order data (name, phone, IP, user-agent, etc.) via HTTP POST/GET.
- **Data Normalization**: Cleans and formats first_name, last_name, phone, and event_time according to Facebook's strict specs.
- **Data Hashing**: Securely hashes sensitive user data (SHA256), as required by Meta.
- **Full Custom Data Support**: Pass order value, currency, and more.

**Ideal For**
- Shopify, WooCommerce, and custom stores (Laravel, Node, etc.)
- Businesses using Meta Ads that need high-quality server-side tracking
- Teams without access to full dev resources, but using n8n for automation

**How It Works**
1. Receive the order from your store via Webhook or API.
2. Format & normalize fields to match Facebook's expected structure.
3. Hash sensitive fields using SHA256 (name, phone, email); a sketch of this step follows below.
4. Send to Facebook via the Conversions API endpoint.

**Requirements**
- A Meta Business Manager account with Conversions API access
- Your Access Token and Pixel ID set up in n8n credentials
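Meta expects customer fields to be normalized (lowercased, trimmed) and then SHA-256 hashed before they are sent. The sketch below illustrates that normalize-and-hash step in plain Python; the order values are made up, and the exact normalization rules for each field should be checked against Meta's Conversions API documentation:

```python
import hashlib

def normalize_and_hash(value: str) -> str:
    """Lowercase, trim, and SHA-256 hash a user data field."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Made-up order data received from the store's webhook.
order = {
    "first_name": " Jane ",
    "last_name": "Doe",
    "email": "Customer@Example.com",
    "phone": "+212 600-000-000",
}

# Phone numbers are reduced to digits only (country code included) before hashing.
phone_digits = "".join(ch for ch in order["phone"] if ch.isdigit())

user_data = {
    "fn": normalize_and_hash(order["first_name"]),
    "ln": normalize_and_hash(order["last_name"]),
    "em": normalize_and_hash(order["email"]),
    "ph": normalize_and_hash(phone_digits),
}
print(user_data)
```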
by bswlife
**Disclaimer**
The Execute Command node is only supported on self-hosted (local) instances of n8n.

**Introduction**
KOKORO TTS: Kokoro TTS is a compact yet powerful text-to-speech model, currently available on Hugging Face and GitHub. Despite its modest size—trained on less than 100 hours of audio—it delivers impressive results, consistently topping the TTS leaderboard on Hugging Face. Unlike larger systems, Kokoro TTS offers the advantage of running locally, even on devices without GPUs, making it accessible for a wide range of users.

**Who will benefit from this integration?**
This will be useful for video bloggers and TikTokers, and it will also enable the creation of a free voice chat bot. Currently, TTS models are mostly paid, but this integration allows fully free voice generation. The possibilities are limited only by your imagination.

**Note**
Unfortunately, we can't interact with the KOKORO API via a browser URL (GET/POST), but we can run a Python script through n8n and pass any variables to it. The tutorial uses the D drive, but you can rewrite this for any paths, including the C drive.

**Step 1**
You need to have Python installed. link
Also, download and extract the portable version of KOKORO from GitHub. In the KOKORO folder (C:\KOKORO), create a file named voicegen.py with the following code. As you can see, the output path is D:\output.mp3.

```python
import sys
import shutil
from gradio_client import Client

# Set UTF-8 encoding for stdout
sys.stdout.reconfigure(encoding='utf-8')

# Get arguments from the command line
text = sys.argv[1]           # First argument: input text
voice = sys.argv[2]          # Second argument: voice
speed = float(sys.argv[3])   # Third argument: speed (converted to float)

print(f"Received text: {text}")
print(f"Voice: {voice}")
print(f"Speed: {speed}")

# Connect to the local Gradio server
client = Client("http://localhost:7860/")

# Generate speech using the API
result = client.predict(
    text=text,
    voice=voice,
    speed=speed,
    api_name="/generate_speech"
)

# Define the output path
output_path = r"D:\output.mp3"

# Move the generated file
shutil.move(result[1], output_path)

# Print the output path
print(output_path)
```

**Step 2**
Go to n8n and create the following workflow.

**Step 3**
Edit Fields module:

{ "voice": "af_sarah", "text": "Hello world!" }

**Step 4**
We'll need an Execute Command module with the command:

python C:\KOKORO\voicegen.py "{{ $json.text }}" "{{ $json.voice }}" 1

**Step 5**
The script is already working, but to listen to the result you can connect a module that reads the generated MP3 file from D:/output.mp3 as binary data.

**Step 6**
Click "Test workflow" and enjoy the result. There are more voices and accents than in ChatGPT, plus it's free.

P.S. If you want, there is a detailed tutorial on my blog.