by Ranjan Dailata
**Who this is for**

The LinkedIn Company Story Generator is an automated workflow that extracts company profile data from LinkedIn using Bright Data's web scraping infrastructure, then transforms that data into a professionally written narrative or story using a language model (e.g., OpenAI, Gemini). The final output is sent via webhook notification, making it easy to publish, review, or automate further.

This workflow is tailored for:
- **Marketing Professionals**: Seeking to generate compelling company narratives for campaigns.
- **Sales Teams**: Aiming to understand potential clients through summarized company insights.
- **Content Creators**: Looking to craft stories or articles based on company data.
- **Recruiters**: Interested in concise company overviews for talent acquisition strategies.

**What problem is this workflow solving?**

Manually gathering and summarizing company information from LinkedIn is time-consuming and inconsistent. This workflow automates the process, ensuring:
- **Efficiency**: Quick extraction and summarization of company data.
- **Consistency**: Standardized summaries for uniformity across use cases.
- **Scalability**: Ability to process multiple companies without additional manual effort.

**What this workflow does**

The workflow performs the following steps:
1. **Input Acquisition**: Receives a company's name or LinkedIn URL as input.
2. **Data Extraction**: Uses Bright Data to scrape the company's LinkedIn profile.
3. **Information Parsing**: Processes the extracted HTML content to retrieve relevant company details (see the sketch at the end of this section).
4. **Summarization**: Employs Google Gemini to generate a concise company story.
5. **Output Delivery**: Sends the summarized content to a specified webhook or email address.

**Setup**

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access it through Vertex AI or a proxy).
5. Update the LinkedIn URL in the Set LinkedIn URL node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

**How to customize this workflow to your needs**

- **Input Variations**: Modify the Set LinkedIn URL node to accept a different company LinkedIn URL.
- **Data Points**: Adjust the HTML Data Extractor node to retrieve additional details such as employee count, industry, or headquarters location.
- **Summarization Style**: Customize the AI prompt to generate summaries in different tones or formats (e.g., formal, casual, bullet points).
- **Output Destinations**: Configure the output node to send summaries to platforms such as Slack, CRM systems, or databases.
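As a rough illustration of the Information Parsing step, a Code node placed after the HTML Data Extractor could normalize the extracted fields before they reach the Gemini prompt. This is a minimal sketch: the field names (`companyName`, `about`, `industry`) are assumptions for illustration and should be matched to the selectors you configure in the extractor node.

```javascript
// Minimal sketch of a Code node that tidies the extracted LinkedIn fields.
// Field names are assumed for illustration; match them to your HTML Data Extractor output.
const items = $input.all();

return items.map((item) => {
  const data = item.json;
  const clean = (value) => (value || '').toString().replace(/\s+/g, ' ').trim();

  return {
    json: {
      companyName: clean(data.companyName),
      about: clean(data.about),
      industry: clean(data.industry),
      // Single text blob handed to the summarization prompt
      promptInput: `Company: ${clean(data.companyName)}\nIndustry: ${clean(data.industry)}\nAbout: ${clean(data.about)}`,
    },
  };
});
```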
by Davide
This workflow analyzes YouTube videos by extracting their transcripts, summarizing the content with AI models, and sending the analysis via email. It is ideal for content creators, marketers, or anyone who needs to quickly analyze and summarize YouTube videos for research, content planning, or educational purposes.

**How It Works**

1. **Trigger**: The workflow starts with a manual trigger, allowing you to test it by clicking "Test workflow." You can also set a YouTube video URL manually or dynamically.
2. **YouTube Video ID Extraction**: The workflow extracts the YouTube video ID from the provided URL using a custom JavaScript function (a sketch appears below). This ID is required to fetch the transcript.
3. **Transcript Generation**: The video ID is sent via an HTTP request to generate the transcript. Replace APIKEY with a free API key from the service.
4. **Transcript Validation**: The workflow checks whether a transcript exists for the video. If one is available, it proceeds; otherwise, it stops.
5. **Full Text Extraction**: If a transcript exists, the workflow combines all transcript segments into a single text variable for further analysis.
6. **AI-Powered Analysis**: The full transcript is passed to an AI model (DeepSeek, OpenAI, or OpenRouter) for analysis. The AI generates a structured summary, including a title and key points, formatted in markdown.
7. **Email Notification**: The analysis results (title and summary) are sent via email using SMTP credentials. The email contains the structured summary of the video.

**Set Up Steps**

1. **YouTube Transcript API**: Obtain a free API key from youtube-transcript.io and replace APIKEY in the "Generate transcript" node with your key.
2. **AI Model Configuration**: Configure the AI model nodes (DeepSeek, OpenAI, or OpenRouter) with the appropriate API credentials. You can choose one or multiple models depending on your preference.
3. **Email Setup**: Configure the "Send Email" node with your SMTP credentials (e.g., Gmail, Outlook, or any SMTP service) and verify the settings so the analysis results can be delivered.

**Key Features**

- **Free Tools**: Uses youtube-transcript.io for free transcript generation.
- **AI Models**: Supports multiple AI models (DeepSeek, OpenAI, OpenRouter) for flexible analysis.
- **Email Notifications**: Sends the analysis results directly to your inbox.
- **Customizable**: Easily adapt the workflow to analyze different videos or use different AI models.
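The video ID extraction step mentioned above can be sketched roughly as follows in an n8n Code node. The regex covers the common watch, short-link, and Shorts URL formats; the actual function shipped with the template may differ, and the `youtubeUrl` field name is an assumption.

```javascript
// Minimal sketch of a Code node that pulls the video ID out of a YouTube URL.
// Covers youtube.com/watch?v=..., youtu.be/..., and youtube.com/shorts/... formats.
const url = $input.first().json.youtubeUrl || '';

const match = url.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
);

if (!match) {
  throw new Error(`Could not extract a video ID from: ${url}`);
}

return [{ json: { videoId: match[1] } }];
```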
by Solido AI
**How it works:** This bot operates in a continuous WhatsApp monitoring loop. It analyzes messages to detect keywords in common questions (like hours, prices, and location) and sends automatic replies with predefined information. For unrecognized questions, it directs the user to manual assistance (a sketch of this routing appears below).

**Set up steps:** The initial setup involves integrating with the WhatsApp API, registering keywords and their respective responses, and defining the fallback flow. It takes only a few minutes to have the bot running with essential information.
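A keyword router of this kind could be sketched in an n8n Code node as below. The keyword map, reply texts, and incoming message field are illustrative placeholders, since the template stores its own set of questions and answers.

```javascript
// Rough sketch of the keyword-matching logic; keywords and replies are illustrative placeholders.
const text = ($json.message?.text || '').toLowerCase();

const replies = {
  hours: 'We are open Monday to Friday, 9am-6pm.',
  price: 'You can find our current price list at example.com/prices.',
  location: 'We are located at 123 Example Street.',
};

const matched = Object.keys(replies).find((keyword) => text.includes(keyword));

return [{
  json: {
    reply: matched
      ? replies[matched]
      : 'Thanks for your message! A member of our team will get back to you shortly.',
    handledAutomatically: Boolean(matched),
  },
}];
```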
by Oneclick AI Squad
This automated n8n workflow tracks booked flight fares post-purchase using the Amadeus and Skyscanner APIs to detect price drops that create refund or credit opportunities. It streamlines fare monitoring, updates booking statuses, and notifies users via SMS or email.

**Fundamental Aspects**

- **Fare Check Trigger** - Initiates the workflow
- **Get Tracked Bookings** - Retrieves existing booking data
- **Prepare Fare Query** - Prepares query parameters
- **Search Current Fares** - Queries Skyscanner for current fares
- **Analyze Fare Drops** - Identifies significant fare reductions (a sketch appears at the end of this section)
- **Update Fare Tracking** - Updates fare tracking records
- **Update Booking Status** - Updates status based on fare changes
- **Check if Notification Needed** - Determines if alerts are required
- **Send Fare Drop Email** - Notifies users via email
- **Notify Slack Team** - Alerts the team via Slack
- **Check Refund Eligible** - Assesses refund eligibility
- **Initiate Refund Process** - Starts the refund procedure if eligible
- **Check if SMS Needed** - Decides if an SMS alert is necessary
- **Send SMS Alert** - Sends the SMS notification

**Setup Instructions**

1. Import the workflow into n8n
2. Configure API credentials for Amadeus and Skyscanner
3. Run the workflow
4. Verify notifications and refund processes

**Features**

- **Fare Monitoring** - Tracks and compares fares using Amadeus and Skyscanner
- **Alert System** - Sends email and SMS notifications for fare drops
- **Refund Management** - Checks and initiates refund processes
- **Trend Analysis** - Analyzes fare trends for strategic decisions

**DB Queries**

- **Get Tracked Bookings columns**: booking_id, passenger_name, email, phone, flight_number, departure_date, origin, destination, airline, booking_class, original_fare, booking_date, confirmation_code, tracking_enabled, last_checked, current_lowest_fare, trend
- **Update Fare Tracking columns**: booking_id, check_date, lowest_fare, fare_source, savings_amount, savings_percentage, fare_trend, priority_level, action_recommended, refund_eligible, available_fares_json, updated_at
- **Update Booking Status columns**: last_checked, current_lowest_fare, booking_id

**DB Setup**: Create the 'bookings' and 'fare_tracking' tables with the columns above, set 'booking_id' as the primary key, and add appropriate indexes for performance. Run the queries after configuring the DB connection in n8n with the appropriate credentials.

**Parameters to Configure**

- **amadeus_api_key**: Amadeus API key
- **skyscanner_api_key**: Skyscanner API key
- **email_recipients**: List of email addresses for alerts
- **sms_recipients**: List of phone numbers for SMS alerts
- **slack_channel**: Slack channel for team notifications
- **refund_threshold**: Minimum fare drop for refund eligibility
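A rough sketch of the Analyze Fare Drops logic is shown below as an n8n Code node. It reuses the column names listed above (original_fare, current_lowest_fare) and the refund_threshold parameter; the exact calculation and thresholds in the template may differ.

```javascript
// Minimal sketch: flag bookings whose current fare dropped below the booked fare.
// Uses the column names from the tables above; refundThreshold mirrors the refund_threshold parameter.
const refundThreshold = 50; // minimum fare drop (in booking currency) for refund eligibility

return $input.all()
  .map((item) => {
    const booking = item.json;
    const savings = booking.original_fare - booking.current_lowest_fare;
    const savingsPercentage = (savings / booking.original_fare) * 100;

    return {
      json: {
        ...booking,
        savings_amount: Number(savings.toFixed(2)),
        savings_percentage: Number(savingsPercentage.toFixed(1)),
        refund_eligible: savings >= refundThreshold,
        fare_trend: savings > 0 ? 'dropping' : 'stable_or_rising',
      },
    };
  })
  .filter((item) => item.json.savings_amount > 0); // keep only actual fare drops
```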
by Onur
**Turn BBC News Articles into Podcasts using Hugging Face and Google Gemini**

Effortlessly transform BBC news articles into engaging podcasts with this automated n8n workflow.

**Who is this for?**

This template is perfect for:
- **Content creators** who want to quickly produce podcasts from current events.
- **Students** looking for an efficient way to create audio content for projects or assignments.
- **Individuals** interested in generating their own podcasts without technical expertise.

**Setup Information**

1. **Install n8n**: If you haven't already, download and install n8n from n8n.io.
2. **Import the Workflow**: Copy the JSON code for this workflow and import it into your n8n instance.
3. **Configure Credentials**:
   - **Gemini API**: Set up your Gemini API credentials in the workflow's LLM nodes.
   - **Hugging Face Token**: Obtain an access token from Hugging Face and add it to the HTTP Request node for the text-to-speech model.
4. **Customize (Optional)**:
   - **Filtering Criteria**: Adjust the News Classifier node to fine-tune the selection of news articles based on your preferences.
   - **Output Options**: Modify the workflow to save the generated audio file to a cloud storage service or publish it to a podcast hosting platform.

**Prerequisites**

- An active n8n instance.
- Basic understanding of n8n workflows (no coding required).
- API credentials for Gemini and a Hugging Face account with an access token.

**What problem does it solve?**

This workflow eliminates the manual effort involved in creating podcasts from news articles. It automates the entire process, from fetching and filtering news to generating the final audio file.

**What are the benefits?**

- **Time-saving**: Create podcasts in minutes, not hours.
- **Easy to use**: No coding or technical skills required.
- **Customizable**: Adapt the workflow to your specific needs and preferences.
- **Cost-effective**: Leverage free or low-cost services like Gemini and Hugging Face.

**How does it work?**

1. The workflow fetches news articles from the BBC website.
2. It filters articles based on their suitability for a podcast.
3. It extracts the full content of the selected articles.
4. It uses the Gemini LLM to create a podcast script.
5. It converts the script to speech using Hugging Face's text-to-speech model (a request sketch appears below).
6. The final podcast audio is ready for use.

**Nodes in the Workflow**

- **Fetch BBC News Page**: Retrieves the main BBC News page.
- **News Classifier**: Categorizes news articles using the Gemini LLM.
- **Fetch BBC News Detail**: Extracts detailed content from suitable articles.
- **Basic Podcast LLM Chain**: Generates a podcast script using the Gemini LLM.
- **HTTP Request**: Converts the script to speech using Hugging Face.

**Add Story**

I'm excited to share this workflow with the n8n community and help content creators and students easily produce engaging podcasts!

**Additional Tips**

- Explore the n8n documentation and community resources for more advanced customization options.
- Experiment with different filtering criteria and LLM prompts to achieve your desired podcast style.
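The Hugging Face text-to-speech call can be approximated with the sketch below, written as plain JavaScript rather than the HTTP Request node's UI. The model ID is a placeholder and not necessarily the one used in the template, and the token must be replaced with your own Hugging Face access token.

```javascript
// Rough sketch of the Hugging Face Inference API call made by the HTTP Request node.
// The model ID is a placeholder; swap in the text-to-speech model you prefer.
async function synthesizeSpeech(script) {
  const response = await fetch(
    'https://api-inference.huggingface.co/models/facebook/mms-tts-eng',
    {
      method: 'POST',
      headers: {
        Authorization: 'Bearer hf_XXXXXXXXXXXX', // your Hugging Face access token
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ inputs: script }),
    }
  );

  // TTS models return raw audio bytes (e.g., FLAC/WAV), not JSON.
  return Buffer.from(await response.arrayBuffer());
}
```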
by Cyril Nicko Gaspar
**📌 AI Agent Template with Bright Data MCP Tool Integration**

This template retrieves all available tools from Bright Data MCP, exposes them through a chatbot, and runs whichever tool matches the user's query.

**❓ Problem It Solves**

MCP addresses the complexity of traditional automation, where users need specific knowledge of APIs or interfaces to trigger backend processes. By allowing interaction through natural language, automatically classifying and routing queries, and managing context and memory effectively, MCP simplifies complex data operations, customer support, and workflow orchestration scenarios where inputs and responses change dynamically.

**🧰 Pre-requisites**

Before deploying this template, ensure you have:
- An active n8n instance (self-hosted or cloud).
- A valid OpenAI API key (or a key for any other AI model).
- Access to the Bright Data MCP API with credentials.
- Basic familiarity with n8n workflows and nodes.

**⚙️ Setup Instructions**

1. **Install the MCP community node in n8n**: In your self-hosted n8n instance, go to Settings → Community Nodes, then search for and install n8n-nodes-mcp.
2. **Configure credentials**: Add your OpenAI API key (or another AI model's key) to the relevant nodes. If you want to use a different AI model, replace all OpenAI nodes in the workflow with the equivalent nodes for that model.
3. **Set up the Bright Data MCP client credentials** in the installed community node (STDIO). Obtain your API key from Bright Data and enter it in the Environment field of the credentials window, written as API_Key=<your api key from Bright Data>.

**🔄 Workflow Functionality (Summary)**

- A **user message** triggers the workflow.
- An **AI classifier** (OpenAI) interprets the intent and maps it to a tool from Bright Data MCP (a matching sketch appears below).
- If no match is found, the user is notified. If more information is needed, the AI requests it.
- **Memory** preserves context for follow-up actions.
- The tool is executed, and results are returned contextually to the user.

> 🧠 Optional memory buffer and chat memory manager nodes keep conversations context-aware across multiple messages.

**🧩 Use Cases**

- **Data Scraping Automation**: Trigger scraping tasks via chat.
- **Lead Generation Bots**: Use MCP tools to fetch, enrich, or validate data.
- **Customer Support Agents**: Automatically classify and respond to queries with tool-backed answers.
- **Internal Workflow Agents**: Let team members trigger backend jobs (e.g., reports, lookups) by chatting naturally.

**🛠️ Customization**

- **Tool Matching Logic**: Modify the AI classifier prompt and schema to suit different APIs or services.
- **Memory Size and Retention**: Adjust the memory buffer size and filtering to fit your app's complexity.
- **Tool Execution**: Extend the "Execute the tool" sub-workflow to handle additional actions, fallback strategies, or logging.
- **Frontend Integration**: Connect this with various platforms (e.g., WhatsApp, Slack, web chatbots) using the webhook.

**✅ Summary**

This template delivers a powerful no-code/low-code agent that turns chat into automation, combining AI intelligence with real-world tool execution. With minimal setup, you can build contextual, dynamic assistants that drive backend operations using natural language.
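As an illustration of the tool-matching logic, a Code node placed between the classifier and the tool execution step might look roughly like this. The field names (`toolName`, `arguments`) and the node name `List MCP Tools` are assumptions for illustration, since the actual classifier schema and node names are defined inside the template.

```javascript
// Rough sketch: check that the tool chosen by the classifier actually exists
// in the list fetched from Bright Data MCP. Field and node names are illustrative.
const requestedTool = $json.toolName;             // output of the AI classifier
const availableTools = $('List MCP Tools').all()  // node name is an assumption
  .map((item) => item.json.name);

if (!availableTools.includes(requestedTool)) {
  return [{
    json: {
      matched: false,
      message: `Sorry, I couldn't find a tool for that request. Available tools: ${availableTools.join(', ')}`,
    },
  }];
}

return [{ json: { matched: true, toolName: requestedTool, arguments: $json.arguments || {} } }];
```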
by Stefan
**Track n8n Node Definitions from GitHub and Export to Google Sheets**

**Overview**

This workflow automatically retrieves and processes metadata from the official n8n GitHub repository, filters all available .node.json files, parses their structure, and appends structured information to a Google Sheet. Perfect for developers, community managers, and technical writers who need to maintain up-to-date information about n8n's evolving node ecosystem.

**Setup Instructions**

*Prerequisites*

Before setting up this workflow, ensure you have:
- A GitHub account with API access
- A Google account with Google Sheets access
- An active n8n instance (cloud or self-hosted)

*Step 1: GitHub API Configuration*

1. Navigate to GitHub Settings → Developer Settings → Personal Access Tokens
2. Generate a new token with public_repo permissions
3. Copy the generated token and store it securely
4. In n8n, create a new "GitHub API" credential
5. Paste your token in the credential configuration and save

*Step 2: Google Sheets Setup*

1. Create a new Google Sheets document
2. Set up the following column headers in the first row:
   - node (Column A) - Node identifier/name
   - nodeVersion (Column B) - Version of the node
   - codexVersion (Column C) - Codex version number
   - categories (Column D) - Node categories
   - credentialDocumentation (Column E) - Credential documentation URL
   - primaryDocumentation (Column F) - Primary documentation URL
3. Note down the Google Sheets document ID from the URL
4. Configure Google Sheets OAuth2 credentials in n8n

*Step 3: Workflow Configuration*

1. Import the workflow into your n8n instance
2. Update the following placeholder values:
   - Replace YOUR_GOOGLE_SHEETS_DOCUMENT_ID with your actual document ID
   - Replace YOUR_WEBHOOK_ID if using webhook functionality
3. Configure the GitHub API credentials in the HTTP Request nodes
4. Set up Google Sheets credentials in the Google Sheets nodes
5. Share your Google Sheets document with the email address associated with your Google OAuth2 credentials
6. Grant "Editor" permissions to allow the workflow to write data

**Google Sheets Template Details**

The workflow creates a structured dataset with these columns:
- **node**: Node identifier (e.g., n8n-nodes-base.slack)
- **nodeVersion**: Version of the node (e.g., 1.0.0)
- **codexVersion**: Codex version number (e.g., 1.0.0)
- **categories**: Node categories (e.g., Communication, Productivity)
- **credentialDocumentation**: URL to credential documentation
- **primaryDocumentation**: URL to primary node documentation

**Customization Options**

*Modifying Data Extraction*

You can customize the "Format Data" node to extract additional fields:
- Add new assignments in the Set node
- Modify the column mapping in the Google Sheets node
- Update your spreadsheet headers accordingly

*Changing Update Frequency*

To run this workflow on a schedule:
- Replace the Manual Trigger with a Cron node
- Set your desired schedule (e.g., daily, weekly)
- Configure appropriate timing to avoid API rate limits

*Adding Filters*

Customize the "Filter Node Files" code node (a sketch appears at the end of this section) to:
- Filter specific node types
- Include/exclude certain categories
- Process only recently updated nodes

**Features**

- Fetches all node definitions from the n8n-io/n8n repository
- Filters for .node.json files only
- Downloads and parses metadata automatically
- Extracts key fields like node names, versions, categories, and documentation URLs
- Appends structured data to Google Sheets with batch processing
- Includes error handling and retry mechanisms
- Clears existing data before appending new information for fresh results

**Use Cases**

This workflow is ideal for:
- Track changes in official n8n node definitions over time
- Audit node categories and documentation links for completeness
- Build custom dashboards from node metadata
- Community management and documentation maintenance
- Integration planning and compatibility analysis
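The "Filter Node Files" step can be sketched as below, assuming the preceding HTTP Request returns a GitHub git-tree listing for the repository. The property names follow GitHub's tree API response, and the branch name in the raw download URL is an assumption; adjust both if the template fetches the listing differently.

```javascript
// Rough sketch of the "Filter Node Files" Code node: keep only *.node.json entries
// from a GitHub git-tree listing (response shape: { tree: [{ path, type, url }, ...] }).
const tree = $input.first().json.tree || [];

const nodeFiles = tree.filter(
  (entry) => entry.type === 'blob' && entry.path.endsWith('.node.json')
);

return nodeFiles.map((entry) => ({
  json: {
    path: entry.path,
    // Raw download URL for the follow-up HTTP Request node (branch name is an assumption)
    downloadUrl: `https://raw.githubusercontent.com/n8n-io/n8n/master/${entry.path}`,
  },
}));
```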
by InfraNodus
This template can be used to sync the files in your Google Drive to a new or existing InfraNodus knowledge graph. The InfraNodus graph will then reveal the main topics and ideas in your collection of documents and show the content gaps in them. You can also use the built-in AI to converse with the documents, and you can access the InfraNodus graphs via the GraphRAG API to reuse them in your other n8n workflows for high-quality content retrieval and knowledge base optimization.

The template showcases the use of multiple n8n nodes and processes:
- Syncing documents from a Google Drive folder / extracting them
- Text extraction from files
- Optional: high-quality PDF conversion using ConvertAPI
- InfraNodus knowledge graph generation

*Note: If you want to upload files from your Google Drive to an InfraNodus graph, check out our other workflow.*

**How it works**

Here's a description of this workflow step by step:
1. Wait for new file(s) to appear in the Google Drive folder.
2. Iterate through each file.
3. Retrieve the new file from Google Drive.
4. For each file found, identify its type (TXT, PDF, Markdown).
5. For TXT and Markdown files, extract the text data.
6. For PDF files, use a PDF-to-text converter to extract the text data (optionally using ConvertAPI for better-quality PDF conversion).
7. Forward everything to the InfraNodus graphAndStatements API endpoint with the name of the graph, the text field containing the text data, the text settings, and doNotSave=false to create a new graph (a request sketch appears below).
8. Continue with the next file.

**How to use**

You need an InfraNodus GraphRAG API account and key to use this workflow.
1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.
3. Use that API key to set up authorization for the InfraNodus tool in the workflow.
4. If you want to upload the files to an existing graph, copy its name from InfraNodus. Otherwise, specify any name you want.

**Requirements**

- An InfraNodus account and API key
- A Google Drive account and authorization (set it up via Google Cloud using the n8n instructions provided in the Google Drive node)

**Customizing this workflow**

You can use Dropbox instead of Google Drive. You can also modify this workflow slightly to upload files from a Google Drive folder as soon as new files appear in it.

Check out the complete guide at https://support.noduslabs.com/hc/en-us/articles/20267019838108-Upload-Sync-Your-Google-Drive-Folder-with-InfraNodus-using-n8n
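The payload sent to InfraNodus in step 7 looks roughly like the sketch below. Only the graph name, the text field, the Bearer authorization, and doNotSave=false are described above; the base URL and any extra text-settings fields are assumptions and should be taken from the InfraNodus API documentation.

```javascript
// Rough sketch of the request sent to the InfraNodus graphAndStatements endpoint.
// The base URL and any extra text-settings fields are assumptions; check the InfraNodus API docs.
async function pushToInfraNodus(graphName, extractedText, apiKey) {
  const payload = {
    name: graphName,      // existing graph name, or any new name
    text: extractedText,  // text extracted from the TXT/Markdown/PDF file
    doNotSave: false,     // false = persist the statements into the graph
  };

  const response = await fetch('https://infranodus.com/api/v1/graphAndStatements', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(payload),
  });

  return response.json();
}
```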
by Khairul Muhtadin
The Page Speed Insight workflow automates website performance analysis by integrating the Google PageSpeed Insights API with Discord messaging and Gemini. This n8n workflow provides expert-level performance audits and comparisons, delivering actionable insights for website owners, SEO professionals, and developers.

Disclaimer: this workflow uses a community node (Google PageSpeed Insights Community Node).

**💡 Why Use Page Speed Insight?**

- **Save Time**: Instantly analyze and compare website speeds without manual tool usage
- **Eliminate Guesswork**: Receive expert audit reports that translate technical data into clear, actionable insights
- **Improve Website Outcomes**: Identify critical bottlenecks and enhancements prioritized by AI-driven analysis
- **Seamless Integration**: Pull URLs and deliver reports directly via Discord for team collaboration and immediate response

**⚡ Who Is This For?**

- Webmasters and website owners seeking fast, automated performance checks
- SEO analysts who need consistent, data-backed website comparisons
- Developers requiring clear, prioritized action points from performance audits
- Digital agencies managing multiple client sites with ongoing monitoring needs

**🔧 What This Workflow Does**

- ⏱ **Trigger**: Discord message containing URLs or scheduled execution
- 📎 **Parse**: Extracts URLs and determines analysis type (single/comparison)
- 🔍 **Analyze**: Calls the Google PageSpeed API for performance data
- 🤖 **Process**: AI generates user-friendly reports from raw Lighthouse JSON
- 💌 **Deliver**: Sends chunked reports to Discord channels
- 🗂 **Log**: Stores execution data for review and improvement

A sketch of the URL parsing and message chunking logic appears at the end of this section.

**🔐 Setup Instructions**

1. Import the provided JSON workflow into your n8n instance
2. Set up credentials for:
   - Google PageSpeed API (ensure you have a valid API key — get yours here)
   - Discord Bot API with permissions to read and send messages in your chosen guild/channel
3. Customize the workflow by adjusting:
   - Discord guild and channel IDs where messages are monitored and results posted
   - Scheduled trigger interval if needed
   - Any prompt text or AI model parameters to tailor report tone and detail level
4. Test thoroughly with real URLs and Discord interaction to confirm smooth data flow and output quality

**🧩 Pre-Requirements**

- Active n8n instance (Cloud or self-hosted)
- n8n Google PageSpeed community node
- Google PageSpeed Insights API key
- Discord Bot credentials with channel access
- Google Gemini AI credentials (recommended)

**🛠️ Customize It Further**

- Extend the PageSpeed API call to analyze desktop performance or other device types
- Integrate with Slack, email, or other team tools alongside Discord for broader notification
- Enhance report depth with more AI-driven insights such as competitor site recommendations or historical trend tracking

**🧠 Nodes Used**

- Google PageSpeed Insights Community Node
- Discord (getAllMessages, sendMessage)
- Code (URL parsing, message chunking)
- AI Language Model (Google Gemini)
- Schedule Trigger
- Switch (message type handling)
- Sticky Notes (workflow guidance)

**📞 Support**

Made by: khaisa Studio
Tag: automation, performance, SEO, google-pagespeed, discord
Category: Monitoring & Reporting
Need a custom solution? Contact Me
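The two Code nodes mentioned above (URL parsing and message chunking) could look roughly like this. Discord's 2,000-character message limit is the reason reports are chunked; the incoming `content` and `report` field names are illustrative.

```javascript
// Rough sketch of the URL-parsing and message-chunking logic. Field names are illustrative.

// 1) Pull URLs out of the incoming Discord message and decide the analysis type.
const content = $json.content || '';
const urls = content.match(/https?:\/\/[^\s>]+/g) || [];
const analysisType = urls.length >= 2 ? 'comparison' : 'single';

// 2) Split a long AI-generated report into chunks under Discord's 2,000-character limit.
function chunkMessage(text, limit = 1900) {
  const chunks = [];
  let current = '';
  for (const line of text.split('\n')) {
    if ((current + '\n' + line).length > limit) {
      chunks.push(current);
      current = line;
    } else {
      current = current ? current + '\n' + line : line;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

return [{ json: { urls, analysisType, chunks: chunkMessage($json.report || '') } }];
```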
by Trung Tran
**🎧 IT Voice Support Automation Bot – Telegram Voice Message to JIRA Ticket with OpenAI Whisper**

> Automatically process IT support requests submitted via Telegram voice messages by transcribing them, extracting structured data, creating a JIRA ticket, and notifying the relevant parties.

**🧑‍💼 Who's it for**

- Internal teams that handle IT support and want to streamline voice-based requests.
- Employees who prefer using mobile/voice to report incidents or ask for support.
- Organizations aiming to integrate conversational AI into existing support workflows.

**⚙️ How it works / What it does**

1. A user sends a voice message to a Telegram bot.
2. The system checks whether it is an audio message.
3. If valid, the audio is downloaded, transcribed via OpenAI Whisper, and backed up to Google Drive.
4. The transcription and file metadata are merged.
5. The merged content is processed through an AI Agent (GPT) to extract structured request info.
6. A JIRA ticket is created using the extracted data.
7. The IT team is notified via Slack (or other channels).
8. The requester receives a Telegram confirmation message with the JIRA ticket link.
9. If the input is not audio, a polite rejection message is sent.

**📌 Key Features**

- Supports voice-based ticket creation
- Accurate transcription using Whisper
- Context-aware request parsing using GPT-4.1 mini
- Fully automated ticket creation in JIRA
- Notifies both IT and the original requester
- Cloud backup of original voice messages (Google Drive)

**🛠️ Setup Instructions**

Prerequisites

| Component | Required |
|----------|----------|
| Telegram Bot & API Key | ✅ |
| OpenAI Whisper / Transcription Model | ✅ |
| Google Drive Credentials (OAuth2) | ✅ |
| Google Sheets or other storage (optional) | ⬜ |
| JIRA Cloud API Access | ✅ |
| Slack Bot or Webhook | ✅ |

Workflow Steps

1. **Telegram Voice Message Trigger**: Starts the flow when a user sends a voice message.
2. **Is Audio Message?**: If false → reply "only voice is supported".
3. **Download Audio**: Download the .oga file from Telegram.
4. **Transcribe Audio**: Use OpenAI Whisper to get the text transcript.
5. **Backup to Google Drive**: Upload the original voice file with metadata.
6. **Merge Results**: Combine the transcript and metadata.
7. **Pre-process Output**: Clean formatting before AI extraction.
8. **Transcript Processing Agent**: A GPT-based agent extracts the requester name and department, request title and description, and priority and request type (an example of the expected output appears below).
9. **Submit JIRA Request Ticket**: Create a ticket from the AI-extracted data.
10. **Setup Slack / Email / Manual Steps**: Optional internal routing or approvals.
11. **Inform Reporter via Telegram**: Sends a confirmation message with the JIRA ticket link.

**🔧 How to Customize**

- Replace JIRA with Zendesk, GitHub Issues, or other ticketing tools.
- Change Slack to Microsoft Teams or Email.
- Add Notion/Airtable logging.
- Enhance the agent to extract the department from the user ID or metadata.

**📦 Requirements**

| Integration | Notes |
|-------------|-------|
| Telegram Bot | Used for input/output |
| Google Drive | Audio backup |
| OpenAI GPT + Whisper | Transcript & Extraction |
| JIRA | Ticketing platform |
| Slack | Team notification |

Built with ❤️ using n8n
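The structured data extracted in step 8 might be shaped like the object below before it reaches the JIRA node. The exact field names and the priority/type values are assumptions, since they depend on the agent's output schema and your JIRA project configuration.

```javascript
// Illustrative shape of the AI-extracted request before ticket creation.
// Field names and the priority mapping are assumptions; align them with your JIRA project.
const extracted = {
  requesterName: 'Jane Doe',
  department: 'Finance',
  requestTitle: 'VPN keeps disconnecting on laptop',
  requestDescription: 'User reports the corporate VPN drops every few minutes since Monday.',
  priority: 'High',        // e.g., Low | Medium | High
  requestType: 'Incident', // e.g., Incident | Service Request
};

// Minimal mapping to the fields a JIRA "create issue" operation typically needs.
return [{
  json: {
    summary: extracted.requestTitle,
    description: `${extracted.requestDescription}\n\nReported by: ${extracted.requesterName} (${extracted.department})`,
    priority: extracted.priority,
    issueType: extracted.requestType,
  },
}];
```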
by Nima Salimi
**🔍 Description**

This n8n workflow is a complete marketing automation system that connects to your CDP (Customer Data Platform), selects which flows to send, and delivers personalized emails using Brevo. It's modular and extensible — you can also add SMS, push notifications, Telegram messages, or other channels.

To build a full marketing automation system, you need four key components:
1. **Workflow Automation** – using n8n (this workflow)
2. **CDP** – store and manage user data (e.g., NocoDB, Metabase, Power BI, etc.)
3. **Database** – track transactions, templates, and send statuses (e.g., NocoDB)
4. **BI / Analytics** – monitor performance by flows, journeys, and sent events

This workflow represents the Workflow Automation layer. You can connect it to your own data stack or use the included example databases (cdp-ecrm, n8n-templates-ecrm, and n8n-transaction-ecrm) to get started quickly.

**👤 Who's it for?**

- Growth & CRM teams managing user engagement flows
- Ecommerce marketers running time-sensitive email journeys
- Marketing automation pros using low-code CRM stacks
- Data teams building custom campaign triggers from CDPs

**✅ Features**

- 🔁 Two modular flows: "Insert user_id" and "Sending Email"
- 🧠 Select a flow using flow_id from templates in NocoDB
- ✏️ Insert user data into n8n-transaction-ecrm with processing status
- 🔍 Filter duplicate users by user_id to avoid over-sending
- 📧 Validate email fields and flag disposables (a sketch of this check appears below)
- 📨 Send personalized emails using Brevo template parameters
- 📊 Track delivery with sent_result, sent_at, and status updates
- 🕒 Runs every 30 minutes via schedule trigger

**🛠 How to Use**

1. **Set your flow**: In the Setup Flow node, change the flow_id to match a row in your n8n-templates-ecrm table.
2. **Prepare your tables in NocoDB**:
   - cdp-ecrm: contains users (user_id, email, first_name, phone_number)
   - n8n-templates-ecrm: contains flows with metadata
   - n8n-transaction-ecrm: stores and updates user send status
3. **Configure credentials**: NocoDB API Token and Brevo (Sendinblue) API Key.
4. **Trigger the flows**: Run "Insert user_id" manually or on a schedule to prepare users; "Sending Email" runs automatically every 30 minutes.

**📌 Notes**

- Disposable email domains are filtered using regex
- Status values:
  - 0-processing → just inserted
  - 1-sending → ready to send
  - 2-sent → email sent successfully
  - 3-no-email → missing email address
  - 4-disposal-email → disposable or banned email
- Easily duplicate the "Insert user_id" flow to add more campaigns
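The email validation and disposable-domain check could be sketched as an n8n Code node like this. The status strings match the list above, while the specific disposable domains in the regex are placeholders for whatever list the template actually uses.

```javascript
// Rough sketch of the email validation step. Status strings follow the list above;
// the disposable-domain regex only shows a few example domains.
const email = ($json.email || '').trim().toLowerCase();

const looksLikeEmail = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
const disposableDomain = /@(mailinator\.com|10minutemail\.com|guerrillamail\.com|tempmail\.\w+)$/i.test(email);

let status;
if (!email) {
  status = '3-no-email';
} else if (!looksLikeEmail || disposableDomain) {
  status = '4-disposal-email';
} else {
  status = '1-sending';
}

return [{ json: { ...$json, email, status } }];
```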
by Akhil Varma Gadiraju
**n8n Workflow: Sync Workflows with GitLab**

**How It Works**

This workflow ensures that your self-hosted n8n workflows are version-controlled in a GitLab repository. It compares each current workflow from n8n with its stored counterpart in GitLab. If any differences are detected, the GitLab file is updated with the latest version.

Core logic:
1. **Retrieve Workflows** – Fetch all workflows from the n8n REST API.
2. **Compare with GitLab** – For each workflow, fetch the corresponding file from GitLab and compare the JSON (a comparison sketch appears below).
3. **Update if Changed** – If differences exist, commit the updated workflow to GitLab using its API.

**Setup**

Before using the workflow, ensure the following:

Prerequisites:
- **n8n**: Self-hosted instance with access to the /rest/workflows API.
- **GitLab**: A repository where workflows will be stored, and a Personal Access Token (PAT) with api and write_repository permissions.
- **n8n nodes required**:
  - HTTP Request (to call the n8n and GitLab APIs)
  - Code or Function nodes (for diffing and formatting)
  - Looping (SplitInBatches or similar)

Configuration: set environment variables or workflow credentials for:
- GITLAB_TOKEN
- GITLAB_REPO
- GITLAB_BRANCH (e.g., main)
- GITLAB_FILE_PATH_PREFIX (e.g., n8n-workflows/)

**How to Use**

1. Import the workflow into your n8n instance.
2. Configure GitLab API credentials: set the GitLab PAT as a header in the HTTP Request node: Private-Token: {{ $env.GITLAB_TOKEN }}
3. Map workflows to GitLab paths: use the workflow name or ID to create the file path, e.g., n8n-workflows/workflow-name.json
4. Trigger the workflow: it can be triggered manually or scheduled to run at intervals (e.g., daily).
5. Review commits in GitLab: each updated workflow is committed with a message like "Update workflow: Sample Workflow".

**Disclaimer**

- This workflow does not handle merge conflicts or manual edits made directly in GitLab. Always ensure proper coordination if multiple sources are modifying workflows.
- Only structural changes are tracked. Non-functional metadata (like timestamps or IDs) may trigger false positives unless filtered.
- Use at your own risk. Test in a safe environment before applying to production workflows.
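The JSON comparison in step 2 can be sketched as below; stripping volatile metadata before comparing is one way to avoid the false positives mentioned in the disclaimer. The list of ignored fields and the input field names are assumptions; adjust them to the metadata your n8n version emits and to how the preceding nodes name their outputs.

```javascript
// Rough sketch of the diff check: drop volatile metadata, then compare serialized JSON.
// The ignored fields and input field names are assumptions; extend them to match your setup.
const IGNORED_KEYS = ['id', 'updatedAt', 'createdAt', 'versionId'];

function stripMetadata(workflow) {
  const copy = { ...workflow };
  for (const key of IGNORED_KEYS) delete copy[key];
  return copy;
}

const current = stripMetadata($json.n8nWorkflow);    // fetched from the n8n REST API
const stored = stripMetadata($json.gitlabWorkflow);  // decoded from the GitLab file

// Note: simple string comparison is key-order sensitive; fine as a first approximation.
const hasChanged = JSON.stringify(current) !== JSON.stringify(stored);

return [{ json: { name: $json.n8nWorkflow.name, hasChanged } }];
```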