by Usman Liaqat
This workflow enables seamless, bidirectional communication between WhatsApp and Slack using n8n. It automates receiving, processing, and forwarding messages (text, media, and documents) between WhatsApp users and private Slack channels.

**Key Features & Flow**

**1. WhatsApp to Slack**
- **Trigger**: A WhatsApp Trigger node listens for new incoming messages via a webhook.
- **Channel Handling**: Checks whether a Slack channel named after the WhatsApp sender's number exists. If not, it creates a private Slack channel with the sender's number as the name.
- **Message Type Routing**: A Switch node (Message Type) inspects the message type (text, image, audio, document). Based on type:
  - **Text**: Sends the message directly to Slack.
  - **Image/Audio/Document**: Retrieves the media URL via the WhatsApp API → downloads the media → uploads it to the appropriate Slack channel.

**2. Slack to WhatsApp**
- **Trigger**: A Slack Trigger listens for new messages or file uploads in Slack.
- **Message Type Routing**: A second Switch node (Checking Message Type) checks whether the message is text or media:
  - **Text Message**: Extracts the text and forwards it to the WhatsApp contact (identified by the Slack channel name).
  - **Media/File Message**: Retrieves the media file URL from Slack → downloads it → sends it as a document via the WhatsApp API.

**Key Integrations**
- **WhatsApp Cloud API**: For receiving messages, downloading media, and sending messages.
- **Slack API**: For creating/getting channels, posting messages, and uploading files.
- **HTTP Request node**: Used to securely download media from Slack and WhatsApp servers with proper authentication.

**Automation Use Case**
This workflow is ideal for businesses that handle customer support or conversations over WhatsApp and want to log, respond, and collaborate using Slack as their internal communication tool.
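The channel-handling step maps a WhatsApp sender's number to a Slack channel name. As a rough sketch (the helper names and the exact naming rule are assumptions, not taken from the workflow), a Code node could normalize the number to satisfy Slack's channel-name constraints (lowercase letters, digits, hyphens, up to 80 characters):

```javascript
// Hypothetical helper: normalize a WhatsApp sender number into a
// Slack-safe channel name (digits only, truncated to 80 chars).
function toSlackChannelName(waNumber) {
  const digits = String(waNumber).replace(/\D/g, ""); // strip "+", spaces, dashes
  return digits.slice(0, 80);
}

// Decide whether to create the channel or reuse an existing one.
function resolveChannel(waNumber, existingChannels) {
  const name = toSlackChannelName(waNumber);
  return { name, exists: existingChannels.includes(name) };
}
```

In the actual workflow this check is done with Slack nodes; the sketch only shows the naming convention that makes the round trip (Slack channel name → WhatsApp contact) possible.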
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically monitors customer churn indicators and early warning signals to help reduce customer attrition and improve retention rates. It saves you time by eliminating the need to manually track customer behavior and provides proactive insights for preventing customer churn.

**Overview**
This workflow automatically scrapes customer data sources, support tickets, usage analytics, and engagement metrics to identify patterns that indicate potential customer churn. It uses Bright Data to access customer data and AI to intelligently analyze behavior patterns and predict churn risk.

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping customer data and analytics platforms without being blocked
- **OpenAI**: AI agent for intelligent churn prediction and pattern analysis
- **Google Sheets**: For storing churn indicators and customer retention data

**How to Install**
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI**: Configure your OpenAI API credentials
4. **Configure Google Sheets**: Connect your Google Sheets account and set up your churn monitoring spreadsheet
5. **Customize**: Define customer data sources and churn indicator parameters

**Use Cases**
- **Customer Success**: Proactively identify at-risk customers for retention efforts
- **Account Management**: Prioritize customer outreach based on churn probability
- **Product Teams**: Identify product issues that contribute to customer churn
- **Revenue Operations**: Reduce churn rates and improve customer lifetime value

**Connect with Me**
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #churnprediction #customerretention #brightdata #webscraping #customeranalytics #n8nworkflow #workflow #nocode #churnindicators #customersuccess #retentionanalysis #customerchurn #customerinsights #churnprevention #retentionmarketing #customerdata #churnmonitoring #customerlifecycle #retentionmetrics #churnanalysis #customerbehavior #retentionoptimization #churnreduction #customerengagement #retentionstrategy #churnmanagement #customerhealth #retentiontracking
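The AI agent does the actual churn prediction, but the kind of indicator scoring it reasons over can be illustrated with a plain Code-node sketch. The field names, weights, and thresholds below are illustrative assumptions, not part of the workflow:

```javascript
// Illustrative churn-risk score from a few early-warning indicators.
// Weights and thresholds are made up for demonstration only.
function churnRiskScore(c) {
  let score = 0;
  if (c.loginsLast30Days < 3) score += 40;       // engagement dropped off
  if (c.openSupportTickets >= 2) score += 30;    // unresolved friction
  if (c.daysSinceLastPurchase > 90) score += 30; // revenue signal
  return Math.min(score, 100);
}

function riskBand(score) {
  return score >= 70 ? "high" : score >= 40 ? "medium" : "low";
}
```

In practice the AI agent weighs many more signals; a rule-based score like this is mainly useful as a sanity check or fallback column in the retention spreadsheet.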
by Yaron Been
This workflow automatically scrapes customer reviews from Trustpilot and performs sentiment analysis to extract valuable customer insights. It saves you time by eliminating the need to manually read through reviews and provides structured data on customer feedback, sentiment, and pain points.

**Overview**
This workflow automatically scrapes the latest customer reviews from any Trustpilot company page and uses AI to analyze each review for sentiment, extract key complaints or praise, and identify recurring customer pain points. It stores all structured review data in Google Sheets for easy analysis and reporting.

**Tools Used**
- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping Trustpilot review pages without being blocked
- **OpenAI**: AI agent for intelligent review analysis and sentiment extraction
- **Google Sheets**: For storing structured review data and analysis results

**How to Install**
1. **Import the Workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set Up OpenAI**: Configure your OpenAI API credentials
4. **Configure Google Sheets**: Connect your Google Sheets account and set up your review tracking spreadsheet
5. **Customize**: Enter target Trustpilot company URLs and adjust review analysis parameters

**Use Cases**
- **Product Teams**: Identify customer pain points and feature requests from reviews
- **Customer Support**: Monitor customer satisfaction and recurring issues
- **Marketing Teams**: Extract positive testimonials and understand customer sentiment
- **Business Intelligence**: Track brand reputation and customer feedback trends

**Connect with Me**
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #trustpilot #reviewscraping #sentimentanalysis #brightdata #webscraping #customerreviews #n8nworkflow #workflow #nocode #reviewautomation #customerinsights #brandmonitoring #reviewanalysis #customerfeedback #reputationmanagement #reviewmonitoring #customersentiment #productfeedback #trustpilotscraping #reviewdata #customerexperience #businessintelligence #feedbackanalysis #reviewtracking #customervoice #aianalysis #reviewmining
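Once the AI has labeled each review, the rows written to Google Sheets can be rolled up into a summary. A minimal sketch (the row field names, `sentiment` and `painPoints`, are assumptions about how you might structure the AI output, not the workflow's actual schema):

```javascript
// Aggregate AI-labeled review rows into sentiment counts and a
// ranked list of recurring pain points.
function summarizeReviews(reviews) {
  const sentiment = { positive: 0, neutral: 0, negative: 0 };
  const painCounts = {};
  for (const r of reviews) {
    sentiment[r.sentiment] = (sentiment[r.sentiment] || 0) + 1;
    for (const p of r.painPoints || []) {
      painCounts[p] = (painCounts[p] || 0) + 1;
    }
  }
  const topPainPoints = Object.entries(painCounts)
    .sort((a, b) => b[1] - a[1])
    .map(([name]) => name);
  return { sentiment, topPainPoints };
}
```

A summary like this could feed a second sheet tab or a periodic Slack digest on top of the raw review rows.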
by Harshil Agrawal
This workflow allows you to send position updates of the ISS every minute to a table in Google BigQuery.

- **Cron node**: Triggers the workflow every minute.
- **HTTP Request node**: Makes a GET request to the API https://api.wheretheiss.at/v1/satellites/25544/positions to fetch the position of the ISS. This information gets passed on to the next node in the workflow.
- **Set node**: Ensures that only the data we set in this node gets passed on to the following nodes in the workflow.
- **Google BigQuery node**: Sends the data from the previous node to the `position` table in Google BigQuery. If you have created a table with a different name, use that table instead.
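The Set node's job, keeping only the fields you want to insert into BigQuery, can be sketched as a plain function. The sample object reflects the general shape of api.wheretheiss.at responses; which fields you keep is up to you:

```javascript
// Keep only the fields destined for the BigQuery `position` table,
// dropping everything else the API returns (altitude, velocity, ...).
function pickPositionFields(item) {
  return {
    name: item.name,
    latitude: item.latitude,
    longitude: item.longitude,
    timestamp: item.timestamp,
  };
}

// Illustrative API response (values are examples, not live data).
const sample = {
  name: "iss",
  id: 25544,
  latitude: 50.11,
  longitude: 118.07,
  altitude: 420.3,
  velocity: 27571.1,
  timestamp: 1364069476,
};
const row = pickPositionFields(sample);
```

This mirrors what the Set node does declaratively: the BigQuery node then inserts `row` as-is, so the table schema only needs those four columns.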
by Harshil Agrawal
This workflow allows you to compress binary files to zip format.

- **HTTP Request node**: Fetches files from the internet. If you want to fetch files from your local machine, replace it with the Read Binary File or Read Binary Files node.
- **Compression node**: Compresses the file into a zip archive. If you want to compress the files to gzip, select the gzip format instead.

Based on your use case, you may want to write the files to disk or upload them to Google Drive or Box. If you want to write the compressed file to your disk, replace the Dropbox node with the Write Binary File node; if you want to upload the file to a different service, use the respective node.
by Zacharia Kimotho
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**What does this workflow do?**
This workflow speeds up the analysis of the top-ranking titles and meta descriptions, identifying patterns and styles that will help us rank on Google for a given keyword.

**How does it work?**
We provide a keyword we are interested in on our Google Sheet. When executed, we scrape the top 10 pages using the Bright Data SERP API, analyse the style and patterns of the top-ranking pages, and generate a new title and meta description.

**Technical setup**
1. Make a copy of this Google Sheet
2. Update your desired keywords on the cell/row
3. Set your Bright Data credentials
4. Update the zone to your preset zone
5. We get the results as JSON. You can change this setting on the URL https://www.google.com/search?q={{ $json.search_term.replaceAll(" ", "+")}}&start=0&brd_json=1 by removing the brd_json=1 query parameter
6. Store the generated results on the duplicated sheet
7. Run the workflow

**Setting up the SERP scraper in Bright Data**
1. On Bright Data, go to the Proxies & Scraping tab
2. Under SERP API, create a new zone
3. Give it a suitable name and description (the default is serp_api)
4. Add this to your account
5. Add your credentials as a header credential and rename it to "Bright data API"
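The expression in the request URL builds the SERP query from the keyword. As a standalone sketch of what that expression evaluates to (the function name is ours; the URL pattern is the one from the workflow):

```javascript
// Build the Bright Data SERP request URL from a search term,
// mirroring the expression used in the HTTP Request node.
// Passing asJson = false drops brd_json=1, returning raw HTML instead.
function buildSerpUrl(searchTerm, asJson = true) {
  const q = searchTerm.replaceAll(" ", "+");
  const base = `https://www.google.com/search?q=${q}&start=0`;
  return asJson ? `${base}&brd_json=1` : base;
}
```

The `start=0` parameter requests the first results page; incrementing it in steps of 10 would page through further results if you want more than the top 10.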
by Rajeet Nair
**Overview**
This workflow automatically converts CSV or Excel files into a production-ready database schema using AI and rule-based validation. It analyzes uploaded data; detects column types, relationships, and data quality; then generates a normalized schema. The output includes SQL DDL scripts, ERD diagrams, a data dictionary, and a load plan. This eliminates manual schema design and accelerates database setup from raw data.

**How It Works**
1. **File Upload (Webhook)**: Accepts CSV or XLSX files via a webhook endpoint and initializes workflow configuration (thresholds, retry limits)
2. **File Extraction**: Detects the file format (CSV or Excel), extracts rows into structured JSON, and merges the extracted datasets
3. **Data Cleaning & Profiling**: Removes duplicates and normalizes values; detects data types (integer, float, date, boolean, string); computes column statistics (nulls, uniqueness, distributions); generates a file hash and sample dataset
4. **Column Profiling Engine**: Identifies potential primary keys, detects cardinality and uniqueness levels, and suggests foreign key relationships based on value overlap
5. **AI Schema Generation**: Uses an AI agent to design normalized tables, assigns SQL data types based on real data, and defines primary keys, foreign keys, constraints, and indexes
6. **Validation Layer**: Ensures the schema matches the actual data; validates data types, primary key uniqueness, foreign key overlap (>70%), and constraint consistency; detects circular dependencies
7. **Revision Loop**: If validation fails, sends feedback to the AI agent, regenerates the schema, and retries up to the configured limit
8. **Schema Output Generation**: Generates SQL DDL scripts, an ERD (Mermaid format), a data dictionary, and a load plan with a dependency graph
9. **Load Plan Engine**: Computes the optimal table insertion order, detects circular dependencies, and suggests a batching strategy
10. **Combine & Explain**: Merges all outputs, with an optional AI explanation of schema decisions
11. **Response Output**: Returns structured JSON via webhook: SQL schema, ERD summary, data dictionary, load plan, and optional explanation

**Setup Instructions**
1. Activate the workflow and copy the webhook URL
2. Send a POST request with a CSV or XLSX file
3. Configure OpenAI credentials (used by the AI agent)
4. Adjust thresholds if needed (FK overlap, retries, confidence)
5. Execute the workflow and review the generated outputs

**Use Cases**
- Auto-generate a database schema from CSV/Excel files
- Data migration and onboarding pipelines
- Rapid database prototyping
- Reverse engineering datasets
- AI-assisted data modeling

**Requirements**
- n8n (latest version recommended)
- OpenAI API credentials
- LangChain nodes enabled
- CSV or XLSX input file
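The Load Plan Engine's two jobs, ordering table inserts and flagging circular dependencies, amount to a topological sort of the foreign-key graph. A minimal sketch using Kahn-style elimination (the dependency-map shape is an assumption, not the workflow's internal format):

```javascript
// Compute table insertion order from FK dependencies.
// `deps` maps each table to the tables it references via foreign keys.
// Returns { order, circular }: `circular` lists tables stuck in a cycle.
function loadPlan(deps) {
  const remaining = new Set(Object.keys(deps));
  const order = [];
  let progress = true;
  while (progress) {
    progress = false;
    for (const t of [...remaining]) {
      // A table is insertable once every table it references is done.
      if (deps[t].every((d) => !remaining.has(d))) {
        order.push(t);
        remaining.delete(t);
        progress = true;
      }
    }
  }
  return { order, circular: [...remaining] };
}
```

Any tables left in `circular` reference each other (directly or transitively), which is exactly the condition the validation layer flags before emitting a load plan.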
by Joachim Hummel
This workflow connects a USB scanner to Nextcloud via ScanservJS and its integrated API. It checks for new scans on a schedule (e.g., every 5 minutes). If there are any, they are automatically retrieved via HTTP request and then saved to the desired Nextcloud folder. Ideal for home offices, offices, or maker projects with a Raspberry Pi and network scanners.

**Nodes used**
- **Schedule Trigger** – starts the flow cyclically
- **HTTP Request** – retrieves document data from ScanservJS
- **Nextcloud node** – uploads the file directly to your Nextcloud account

**Requirements**
- Local installation of ScanservJS (e.g., on a Raspberry Pi)
- Configured USB scanner
- Nextcloud access with write permissions in the target folder
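Between the Schedule Trigger and the Nextcloud upload, the workflow has to decide which scans are new since the last run. A sketch of that filtering step (the file-list shape with `name` and `lastModified` epoch-millisecond fields is an assumption about what your ScanservJS instance returns, not a documented format):

```javascript
// Keep only scans created after the last successful run, oldest first,
// so uploads to Nextcloud happen in scan order.
function newScansSince(files, lastRunMillis) {
  return files
    .filter((f) => f.lastModified > lastRunMillis)
    .sort((a, b) => a.lastModified - b.lastModified);
}
```

In n8n you would persist `lastRunMillis` between runs (e.g., with static workflow data) and pass each surviving file to the HTTP Request download step.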
by Nazmy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**OAuth Token Generator and Validator**

This n8n template helps you generate, validate, and store tokens for your customers securely using:
- **n8n** as your backend automation engine
- **Airtable** as your lightweight client and token store

**🚀 What It Does**
- Accepts client_id and client_secret via POST webhook.
- Validates client credentials against Airtable.
- Generates a long token on success.
- Stores the generated token in Airtable with metadata.
- Responds with a JSON containing the token, expiry, and type.
- Returns clear error messages if validation fails.

**How It Works**
1. **Webhook** node receives client_id and client_secret.
2. **Validator** (Code node) checks that the body contains only client_id and client_secret, rejecting missing or extra fields.
3. **Airtable search**: Looks up the client_id; rejects if not found.
4. **Secret validation** (If node): Compares the provided client_secret with the stored value; rejects if incorrect.
5. **Token generation** (Code node): Generates a 128-character secure token.
6. **Airtable create**: Stores the token, client ID, creation date, and type.
7. **Webhook response**: Returns JSON { access_token, expires_in, token_type } on success, and appropriate JSON error messages on failure.

**Related Workflow**
You can also use it with the published Bearer Token Validation workflow: 👉 Validate API Requests with Bearer Token Authentication and Airtable, to securely validate tokens you generate with this workflow across your protected endpoints.

**Why Use This**
- Provides OAuth-like flows without a complex backend.
- Uses n8n + Airtable for client management and token storage.
- Clean, modular, and ready for your SaaS or internal API automations.
- Extendable for token expiry, refresh, and rotation handling.

Enjoy building secure token-based APIs using n8n + Airtable! 🚀

Built by: Nazmy
by Hamed Nickmehr
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

**Title**: n8n Credentials and Workflows Backup on Change Detection

**Purpose**: Never lose track of your n8n changes again! This workflow smartly backs up all your workflows and credentials, automatically detects any changes using hash comparison, and pushes updates to GitHub, but only when something has actually changed. Set your own interval and stop cluttering your repo with redundant commits.

Walkthrough Video on YouTube

**Trigger**
- **Schedule Trigger**: Executes the entire process at a user-defined interval. No need to worry about traceability or managing countless backups, as the workflow only commits changes when a difference is detected.

**Workflow Backup Process**
1. **Set Workflow Path**: Defines the local backup file path for workflows.
2. **Get Old Workflow Hash**: Executes a helper workflow to retrieve the previous hash.
3. **Execute Workflow Backup**: Runs `n8n export:workflow` to export all workflows to the defined file path.
4. **Get New Workflow Hash**: Executes a helper workflow to generate the new hash from the exported file.
5. **Compare Hashes (If Workflow Updated)**: Checks if the new hash differs from the old one.
6. **If Updated**: Read Workflow Data → Extract Text → Push to GitHub: reads, extracts, and commits the updated workflow JSON to GitHub under a timestamped filename.

**Credential Backup Process**
1. **Set Credential Path**: Defines the local backup file path for credentials.
2. **Get Old Credential Hash**: Executes a helper workflow to retrieve the previous hash.
3. **Execute Credential Backup**: Runs `n8n export:credentials` to export all credentials.
4. **Get New Credential Hash**: Executes a helper workflow to generate the new hash from the exported file.
5. **Compare Hashes (If Credential Updated)**: Checks for changes.
6. **If Updated**: Read Credential Data → Extract Text → Push to GitHub: commits the new credentials JSON to GitHub if changes are found.

**Hash Generator (Helper Flow)**
Used in both the workflow and credential backup paths: **Read File** → **Extract Text** → **Hash Data**. Outputs the SHA-256 hash used for comparison.

**GitHub Integration**
- Commits are created with an ISO timestamp in the filename and message.
- Repository: https://github.com/your-github-name/n8n-onchange-bachup
- File paths: `backups/WorkFlow Backup -timestamp-.json` and `backups/Credential Backup -timestamp-.json`

**Change Detection Logic**
- Only commits files when hash changes are detected (i.e., actual content change).
- Avoids unnecessary GitHub commits and storage use.

**Error Handling**
- GitHub nodes are set to continue workflow execution on error, avoiding full process interruption.
by Alex Kim
**Overview**
The n8n Workflow Cloner is a powerful automation tool designed to copy, sync, and migrate workflows across different n8n instances or projects. Whether you're managing multiple environments (development, staging, production) or organizing workflows within a team, this workflow automates the transfer process, ensuring seamless workflow deployment with minimal manual effort.

By automatically detecting and copying only the missing workflows, this tool helps maintain consistency, improve collaboration, and streamline workflow migration between projects or instances.

**How to Use**

1️⃣ **Set Up API Credentials**
- Configure API credentials for both source and destination n8n instances.
- Ensure the credentials have read and write access to manage workflows.

2️⃣ **Select Source & Destination**
- Update the "GET - Workflows" node to define the source instance.
- Set the "CREATE - Workflow" node to specify the destination instance.

3️⃣ **Run the Workflow**
- Click "Test Workflow" to start the transfer.
- The system will fetch all workflows from the source, compare them with the destination, and copy any missing workflows.

4️⃣ **Change the Destination Project (Optional)**
- By default, workflows are moved to the "KBB Workflows" project.
- Modify the "Filter" node to transfer workflows to a different project.

5️⃣ **Monitor & Verify**
- The Loop Over Items node ensures batch processing for multiple workflows.
- Log outputs provide details on transferred workflows and statuses.

**Key Benefits**
✅ Automate Workflow Transfers – No more manual exports/imports.
✅ Sync Workflows Across Environments – Keep workflows up to date in dev, staging, and production.
✅ Effortless Team Collaboration – Share workflows across projects seamlessly.
✅ Backup & Migration Ready – Easily move workflows between n8n instances.

**Use Cases**
🔹 CI/CD for Workflows – Deploy workflows between development and production environments.
🔹 Team Workflow Sharing – Share workflows across multiple n8n projects.
🔹 Workflow Backup Solution – Store copies of workflows in a dedicated backup project.

**Tags**
🚀 Workflow Migration 🚀 n8n Automation 🚀 Sync Workflows 🚀 Backup & Deployment
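The compare-and-copy step boils down to a set difference over the two workflow lists. A sketch (matching by `name` is an assumption; workflow IDs generally differ across instances, so name is the practical key):

```javascript
// Return source workflows that don't yet exist in the destination,
// matched by name. Only these get passed to the CREATE - Workflow node.
function missingWorkflows(sourceList, destList) {
  const destNames = new Set(destList.map((w) => w.name));
  return sourceList.filter((w) => !destNames.has(w.name));
}
```

Running the cloner repeatedly is therefore idempotent: once a workflow exists in the destination, it falls out of the diff and is never duplicated.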
by Cheney Zhang
**Paul Graham Essay Search & Chat with Milvus Vector Database**

**How It Works**
This workflow creates a RAG (Retrieval-Augmented Generation) system using the Milvus vector database to search Paul Graham essays:
1. **Scrape & Load**: Fetches Paul Graham essays, extracts text, and stores them as vector embeddings in Milvus
2. **Chat Interface**: Enables semantic search and AI-powered conversations about the essays

**Set Up Steps**
1. Set up a Milvus server following the official installation guide, then create a collection
2. Execute the workflow to scrape essays and load them into your Milvus collection
3. Chat with the AI agent using the Milvus tool to query and discuss essay content
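Milvus performs the actual vector search here; the ranking it does can be illustrated with a tiny in-memory version using cosine similarity (the embeddings below are toy 2-dimensional vectors, not real model output):

```javascript
// Toy semantic search: rank stored essay chunks by cosine similarity
// to a query embedding, the same metric a Milvus collection can use.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryVector, chunks, k = 2) {
  return chunks
    .map((c) => ({ text: c.text, score: cosine(queryVector, c.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```

In the real workflow, an embedding model turns both the essays and the user's chat message into high-dimensional vectors, and Milvus does this nearest-neighbor ranking at scale with an index instead of a linear scan.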