by Geoffrey Saxena
## 👤 Who is this for?

This workflow is for n8n users who want to prevent duplicate or overlapping workflow runs. If you're a developer, DevOps engineer, or automation enthusiast managing tasks like database updates, syncing tools, or hitting rate-limited APIs, this one's for you.

## 🧩 What problem does this solve?

In the real world, automations can get triggered at the same time, whether because of multiple webhook calls, overlapping schedules, or retries. When two workflows try to do the same thing at once (like updating a record or syncing data), you can end up with conflicts, data corruption, or wasted API calls. This workflow avoids that by using Redis as a lock, so only one instance runs at a time. Think of it as putting up a "🚧 Workflow in Progress" sign while your logic is running.

## ⚙️ What this workflow does

When the workflow starts, it tries to set a Redis key as a lock with a short expiry.

- If the lock is free: your main business logic runs, and the lock is cleared once it's done.
- If the lock is already taken (another run is in progress): the workflow waits and retries a few times.
- If a duplicate request shows up while one is already being processed: it is skipped to avoid unnecessary work.

You can customize both the timeout and the retry logic to match your needs (see the sketch at the end of this description).

## 🛠️ Setup guide

To use this template:

- You'll need access to a Redis instance (self-hosted or managed, e.g. Upstash or Redis Cloud).
- Set up your Redis credentials in the n8n Redis node.
- Swap out the webhook node with your actual trigger or logic.
- Adjust the lock timeout to match how long your task typically takes.

> 💡 Bonus Tip: Use this pattern wherever you need idempotency or want to avoid duplicate processing.

## 🧪 Example use case

Let's say you have a workflow that syncs ClickUp tickets to Google Sheets. It runs daily at 9 AM, updates tickets, adds notes, and makes sure nothing is missed. But what if two runs start at the same time? Or someone triggers a manual sync while the scheduled one is still working? By wrapping the whole sync inside this Redis locking template, you make sure only one copy runs at a time, saving your APIs (and your sanity).
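For reference, here is a minimal Python sketch of the same acquire/retry/release pattern outside of n8n. The key name, TTL, and retry settings are illustrative assumptions; in the template they are configured in the Redis, Wait, and If nodes instead.

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

LOCK_KEY = "lock:clickup-sheet-sync"   # hypothetical lock key for the example use case
LOCK_TTL = 120                         # seconds; roughly how long the task takes
RETRIES = 5                            # how many times to wait for a running instance
RETRY_DELAY = 10                       # seconds between attempts

def run_once(business_logic) -> bool:
    for _ in range(RETRIES):
        # SET ... NX EX: succeeds only if the key does not already exist
        if r.set(LOCK_KEY, "in-progress", nx=True, ex=LOCK_TTL):
            try:
                business_logic()
            finally:
                r.delete(LOCK_KEY)     # release the lock even if the logic fails
            return True
        time.sleep(RETRY_DELAY)        # another run holds the lock; wait and retry
    return False                       # still locked after all retries: skip as a duplicate
```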
by Alex Huy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

## Description

This n8n workflow automatically scrapes Airbnb listings from a specified location and saves the data to a Google Sheet. It performs pagination to collect listings across multiple pages, extracts detailed information for each property, and organizes the data in a structured format for easy analysis.

## How it Works

The workflow operates through these high-level steps:

- **Search Initialization:** Starts with an Airbnb search for a specific location (London) with defined check-in/check-out dates and guest count
- **Pagination Loop:** Automatically processes multiple pages of search results using cursor-based pagination (see the sketch after this description)
- **Data Extraction:** Parses listing information including names, prices, ratings, reviews, and URLs
- **Detail Enhancement:** Fetches additional details for each listing (house rules, highlights, descriptions, amenities)
- **Data Storage:** Saves all collected data to a Google Sheet with proper formatting
- **Loop Control:** Continues until reaching the page limit (2 pages) or no more results are available

## Setup Steps

### Prerequisites

- n8n instance with MCP (Model Context Protocol) support
- Google Sheets API credentials configured
- Airbnb MCP client properly set up

### Configuration Steps

1. **Configure MCP Client**
   - Set up the Airbnb MCP client with your credential ID
   - Ensure the client has access to the airbnb_search and airbnb_listing_details tools
2. **Google Sheets Setup**
   - Create a Google Sheet and update the workflow with its document ID (the template references ID 15IOJquaQ8CBtFilmFTuW8UFijux10NwSVzStyNJ1MsA)
   - Configure Google Sheets OAuth2 credentials (the template references credential ID 6YhBlgb8cXMN3Ra2)
   - Ensure the sheet has these column headers: id, name, url, price_per_night, total_price, price_details, beds_rooms, rating, reviews, badge, location, houseRules, highlights, description, amenities
3. **Search Parameters**
   - Location: "London" (can be modified in the "Airbnb Search" node)
   - Adults: 7
   - Children: 1
   - Check-in: "2025-08-14"
   - Check-out: "2025-08-17"
   - Page limit: 2 (can be adjusted in the "If1" condition node)
4. **Execution**
   - Use the manual trigger "When clicking 'Execute workflow'" to start the process
   - Monitor the workflow execution through the n8n interface
   - Check the Google Sheet for populated data after completion

## Key Features

- **Automatic Pagination:** Processes multiple pages without manual intervention
- **Comprehensive Data:** Extracts both basic listing info and detailed property information
- **Error Handling:** Includes JSON parsing error handling and data validation
- **Batch Processing:** Uses split batches for efficient processing of individual listings
- **Real-time Updates:** Appends new data to existing Google Sheet records

## Output Data Structure

Each listing contains:

- **Basic info:** ID, name, URL, pricing details, room/bed count
- **Ratings:** Average rating and review count
- **Location:** Latitude and longitude coordinates
- **Enhanced details:** House rules, highlights, descriptions, amenities
- **Metadata:** Page number, check-in/out dates, badges
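The cursor-based pagination loop is the heart of the workflow. The Python sketch below illustrates its logic under stated assumptions: airbnb_search() is a placeholder for the MCP airbnb_search tool call, and the response fields (searchResults, nextCursor) are illustrative rather than the tool's guaranteed schema.

```python
PAGE_LIMIT = 2          # mirrors the page-limit check in the "If1" node

def airbnb_search(**params) -> dict:
    """Placeholder for the MCP 'airbnb_search' tool call; wire this to your MCP client."""
    raise NotImplementedError

all_listings = []
cursor = None
for page in range(1, PAGE_LIMIT + 1):
    result = airbnb_search(
        location="London", adults=7, children=1,
        checkin="2025-08-14", checkout="2025-08-17",
        cursor=cursor,                              # None on the first page
    )
    listings = result.get("searchResults", [])      # assumed response field
    if not listings:
        break                                       # no more results: stop early
    for item in listings:
        all_listings.append({
            "id": item.get("id"),
            "name": item.get("name"),
            "url": item.get("url"),
            "rating": item.get("rating"),
            "page": page,                           # kept as metadata, as the workflow does
        })
    cursor = result.get("nextCursor")               # assumed cursor field for the next page
    if not cursor:
        break
```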
by Ventsislav Minev
## Automated Email Attachment Organizer

Automatically process labeled emails with attachments into organized Google Drive folders.

### Who Is This For?

Teams or individuals needing to:

- Automatically sort invoices, receipts, and files.
- Organize client documents by date.
- Verify sender emails against a whitelist.
- Timestamp files to avoid duplicate names.

### What Does This Workflow Solve?

- 🕒 **Manual email sorting:** Saves time by automating the organization of email attachments.
- 📂 **Disorganized cloud storage:** Ensures attachments are neatly stored in Google Drive folders.
- 📧 **Unverified sender issues:** Filters and validates emails using a whitelist.
- 🔄 **Duplicate filenames:** Uses timestamps to ensure every file name is unique (see the sketch after this description).

### Setup Guide

**1. Pre-Requisites**

- **Whitelist sheet:** Make a copy of the Example Whitelist Sheet.
- **Gmail filter:** Create a filter in Gmail to label emails with attachments.

To create a Gmail filter:

1. Open your Gmail inbox.
2. Click the search bar and select "Show search options".
3. Enter your criteria (e.g., type has:attachment).
4. Click "Create filter".
5. Choose "Apply the label: Custom_Label" and save.

**2. Credentials Setup**

Make sure your n8n instance is connected with:

- **Gmail account** (via OAuth2)
- **Google Drive account** (via OAuth2)
- **Google Sheets** (via OAuth2)

**3. Configure Your n8n Workflow Nodes**

1. **Trigger and email retrieval** (Gmail Trigger): set the check interval and filters (i.e. emails labeled with Custom_Label).
2. **Whitelist settings** (Lookup in Sheets): searches for a row matching the sender email. Configure this node to point to your whitelist spreadsheet, which contains two columns: | email | company |
3. **File storage location** (Create Company Folder): configure the parent folder under which sub-folders are created and files are stored, via the Parent Folder dropdown.

### Final Steps

Test your workflow: run it to verify that emails are processed and files are uploaded correctly.

Happy Automating!
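For illustration, here is a minimal Python sketch of the two small pieces of logic the workflow relies on: checking the sender against the whitelist and timestamping the stored filename. The whitelist entries and filename pattern are assumptions; in the workflow they live in the Google Sheets lookup and the upload node's filename expression.

```python
from datetime import datetime, timezone

# Whitelist entries mirror the two columns of the whitelist sheet: email -> company
whitelist = {
    "billing@acme.com": "Acme Corp",
    "invoices@example.org": "Example Org",
}

def is_whitelisted(sender_email: str) -> bool:
    return sender_email.lower() in whitelist

def timestamped_name(original_name: str) -> str:
    # Prefix the attachment name with a UTC timestamp so repeated sends never collide
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{stamp}_{original_name}"

print(is_whitelisted("billing@acme.com"))   # True
print(timestamped_name("invoice.pdf"))      # e.g. 20250101-093000_invoice.pdf
```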
by Romain Jouhannet
## Linear Project/Issue Status and End Date to Productboard Feature Sync

Sync project and issue data between Linear and Productboard to keep teams aligned. This workflow updates Productboard features with the status and end date from Linear projects, or the due date from Linear issues. It ensures consistent data and sends a Slack notification whenever changes are made.

### Features

- Listens for updates in Linear projects/issues.
- Maps Linear statuses to Productboard feature statuses (a sketch of this mapping follows the description below).
- Updates Productboard feature details, including the timeframe.
- Sends a Slack notification summarizing the updates.

### Setup

1. **Linear credentials:** Add your Linear API credentials in n8n.
2. **Productboard credentials:** Configure the Productboard API credentials in n8n.
3. **Linear projects or issues:** Select the Linear project(s) or issue(s) you want to monitor for updates.
4. **Productboard custom field:** Create a custom field in Productboard named "Linear". This field should store the URL of the Linear project or issue you want to sync. Retrieve the UUID of the custom field in Productboard and set it in the "Get Productboard Feature ID" node.
5. **Slack notification:** Update the Slack node with the desired Slack channel ID.
6. **Activate the workflow:** Enable the workflow to automatically sync data when triggered by updates in Linear.
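As an illustration, the status-mapping step amounts to a small lookup like the Python sketch below. The status names on both sides are assumptions; adjust them to your Linear team and Productboard workspace.

```python
# Status names on both sides depend on your Linear team and Productboard workspace setup
LINEAR_TO_PRODUCTBOARD = {
    "planned": "Planned",
    "started": "In progress",
    "completed": "Done",
    "canceled": "Won't do",
}

def map_status(linear_status: str) -> str:
    # Fall back to a safe default when a Linear status has no explicit mapping
    return LINEAR_TO_PRODUCTBOARD.get(linear_status.lower(), "Candidate")

print(map_status("Completed"))   # Done
print(map_status("Backlog"))     # Candidate (unmapped status)
```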
by Nazmy
Based on Jonathan & Solomon's work.

> The only addition I've made is a Set node. This node organizes workflows into subfolders within the GitHub repository based on their respective tags (see the sketch below).

### How it works

This workflow backs up your workflows to GitHub. It uses the n8n API node to export all workflows, then loops over the data and checks GitHub to see whether a file already exists for that workflow's ID. Once checked it will:

- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- ignore it if the content is the same.

### Who is this for?

People wanting to back up their workflows outside the server for safety purposes, or to migrate to another server.
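A minimal sketch of what that Set node computes, assuming the exported workflow objects carry a tags array of objects with a name field; the first-tag rule, the fallback folder, and the path layout are illustrative choices.

```python
def repo_path_for(workflow: dict) -> str:
    # The n8n API returns tags as a list of objects; use the first tag name as the subfolder
    tags = [tag["name"] for tag in workflow.get("tags", [])]
    folder = tags[0] if tags else "untagged"          # fallback folder for untagged workflows
    safe_name = workflow["name"].replace("/", "-")    # keep the file path Git friendly
    return f"workflows/{folder}/{safe_name}.json"

print(repo_path_for({"name": "Daily CRM Sync", "tags": [{"name": "crm"}]}))
# workflows/crm/Daily CRM Sync.json
print(repo_path_for({"name": "Ad-hoc Report", "tags": []}))
# workflows/untagged/Ad-hoc Report.json
```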
by Ferenc Erb
### Use Case

Extend Bitrix24 tasks with custom widgets that display relevant task information and enable seamless interaction through a custom tab interface.

### What This Workflow Does

- Processes incoming webhook requests from Bitrix24 task interfaces
- Handles authentication and secure token validation (a token-check sketch follows the setup steps below)
- Manages application installation and placement registration
- Displays task data in a custom formatted view
- Stores and retrieves configuration settings persistently
- Provides user-friendly HTML interfaces for task information

### Setup Instructions

1. Configure Bitrix24 webhook endpoints for the task widget
2. Set up authentication credentials in your Bitrix24 account
3. Install the application and register the task view tab placement
4. Customize the task data display format as needed
5. Deploy and test the application functionality within Bitrix24 tasks
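As an illustration of the token-validation step, the sketch below compares a token received in the webhook payload against the one stored at installation time. The payload field names are assumptions; check the actual Bitrix24 placement/event payload for your setup.

```python
import hmac

STORED_APP_TOKEN = "token-saved-during-install"   # persisted when the app was installed

def is_valid_request(payload: dict) -> bool:
    received = payload.get("auth", {}).get("application_token", "")
    # Constant-time comparison avoids leaking token contents through timing differences
    return hmac.compare_digest(received, STORED_APP_TOKEN)

print(is_valid_request({"auth": {"application_token": "token-saved-during-install"}}))  # True
print(is_valid_request({"auth": {"application_token": "forged-token"}}))                # False
```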
by Artem Boiko
## Revit to HTML Quantity Takeoff Generator

Automates extraction of wall quantities from Revit models and creates a professional interactive HTML report.

### Key Features

- Automated wall quantity analysis
- Calculates volumes by wall type ("Type Name")
- Generates an interactive HTML QTO report
- Includes summary statistics: total elements, total and average volumes
- Provides a detailed breakdown by element type

### How it works

1. Upload a Revit file as input
2. The workflow extracts wall quantities and types
3. It creates and saves a ready-to-share HTML dashboard with the QTO data (a sketch of the aggregation step follows below)

No API keys required. Runs offline. Output is a professional, ready-to-use HTML report.
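A minimal Python sketch of the aggregation behind the report, with made-up wall rows standing in for the data extracted from the uploaded Revit file.

```python
from collections import defaultdict

# Illustrative rows; in the workflow these come from the uploaded Revit file
walls = [
    {"type_name": "Basic Wall - Generic 200mm", "volume_m3": 4.2},
    {"type_name": "Basic Wall - Generic 200mm", "volume_m3": 3.8},
    {"type_name": "Curtain Wall", "volume_m3": 1.1},
]

# Group volumes by the wall "Type Name" and compute summary statistics
by_type = defaultdict(float)
for wall in walls:
    by_type[wall["type_name"]] += wall["volume_m3"]

total = sum(by_type.values())
rows = "".join(f"<tr><td>{t}</td><td>{v:.2f}</td></tr>" for t, v in by_type.items())
html = (
    f"<h2>Wall QTO</h2>"
    f"<p>{len(walls)} elements, {total:.2f} m³ total, {total / len(walls):.2f} m³ average</p>"
    f"<table><tr><th>Type Name</th><th>Volume (m³)</th></tr>{rows}</table>"
)
print(html)
```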
by Dvir Sharon
## 💼 LinkedIn Job Finder Automation using Bright Data API & Google Sheets

A comprehensive n8n automation that searches LinkedIn job postings using Bright Data's API and automatically organizes results in Google Sheets for efficient job hunting and recruitment workflows.

### 📋 Overview

This workflow provides an automated LinkedIn job search solution that collects job postings based on your search criteria and organizes them in Google Sheets. Perfect for job seekers, recruiters, HR professionals, and talent acquisition teams.

### ✨ Key Features

- 🔍 **Smart Job Search:** Form-based input for city, job title, country, and job type
- 🛍 **LinkedIn Integration:** Uses Bright Data's LinkedIn dataset for accurate job posting data
- 📊 **Automated Organization:** Populates Google Sheets with structured job data
- 📧 **Real-time Processing:** Processes job search requests in real time
- 📈 **Data Storage:** Stores job details including company info, locations, and apply links
- 🔄 **Batch Processing:** Handles multiple job postings efficiently
- ⚡ **Fast & Reliable:** Built-in error handling for scraping
- 🎯 **Customizable Filters:** Advanced job filtering based on criteria

### 🎯 What This Workflow Does

**Input**

- **Job search criteria:** City, job title, country, and optional job type
- **Search parameters:** Configurable filters and limits
- **Output preferences:** Google Sheets destination

**Processing steps**

1. Form Submission
2. Data Request to Bright Data API
3. Status Monitoring
4. Data Extraction
5. Data Filtering (a sketch of the row mapping follows this description)
6. Sheet Update
7. Error Handling

**Output data points**

| Field | Description | Example |
|---|---|---|
| Job Title | Position title from posting | Senior Software Engineer |
| Company Name | Employer company name | Tech Solutions Inc. |
| Job Detail | Job summary/description | Remote position requiring 5+ years… |
| Location | Job location | San Francisco, CA |
| Company URL | Company profile link | View Profile |
| Apply Link | Direct application link | Apply Now |

### 🚀 Setup Instructions

**Prerequisites**

- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with LinkedIn dataset access

**Steps**

1. **Import the workflow:** Use JSON import in n8n
2. **Configure Bright Data:** Add API credentials and dataset ID
3. **Configure Google Sheets:** Create a sheet, set credentials, map columns
4. **Update workflow settings:** Replace placeholders with your actual data
5. **Test & activate:** Submit a test form and verify data in Google Sheets

### 📖 Usage Guide

**Submitting job searches**

Go to your webhook URL and fill in the form with:

- **City:** e.g., New York
- **Job Title:** e.g., Software Engineer
- **Country:** e.g., US
- **Job Type:** Optional (Full-Time, Remote, etc.)

**Understanding results**

- Comprehensive job data
- Company info and profile links
- Direct application links
- Location and job descriptions

**Customizing search parameters**

Edit the Create Snapshot ID node to change:

- Time range (e.g., "Past month")
- Result limits
- Company filters

### 🔧 Customization Options

- **More data points:** Add salary, seniority, applicants, etc.
- **Custom form fields:** Add filters for salary, experience, industry
- **Multiple sheets:** Route results by job type or location

### 🚨 Troubleshooting

- **Bright Data connection failed:** Check API credentials and dataset access
- **No job data extracted:** Verify search parameters and API limits
- **Google Sheets permission denied:** Re-authenticate and check sharing
- **Form not working:** Check webhook URL and field mappings
- **Filter issues:** Review logic and data types
- **Execution failed:** Check logs, retry logic, and network status

### 📊 Use Cases & Examples

- **Job Seeker Dashboard:** Automate job search and track applications
- **Recruitment Pipeline:** Source candidates and monitor hiring trends
- **Market Research:** Analyze job trends and salary benchmarks
- **HR Analytics:** Support workforce planning and competitive insights

### ⚙️ Advanced Configuration

- **Batch processing:** Queue multiple searches with delays
- **Search history:** Track and analyze past searches
- **Tool integration:** Connect to CRM, Slack, databases, BI tools

### 📈 Performance & Limits

- **Processing time:** 30–60 seconds per search
- **Concurrent requests:** 2–3 (depends on Bright Data plan)
- **Data accuracy:** 95%+
- **Success rate:** 90%+
- **Daily capacity:** 50–200 searches
- **Memory:** ~50 MB per execution
- **API calls:** 3–4 Bright Data + 1 Google Sheets per search

### 🤝 Support & Community

- **n8n Community:** community.n8n.io
- **Documentation:** docs.n8n.io
- **Bright Data Support:** Via your Bright Data dashboard
- **GitHub Issues:** Report bugs and request features

### 🎯 Ready to Use!

Your workflow is ready for automated LinkedIn job searching. Customize it to your recruiting or job search needs.

Webhook URL: https://your-n8n-instance.com/webhook/linkedin-job-finder

**What gets extracted:**

- ✅ Job Title
- ✅ Company Information
- ✅ Location Data
- ✅ Job Details
- ✅ Application Links
- ✅ Processing Timestamps

### Use Cases:

- 🔍 Job Search Automation
- 📊 Recruitment Intelligence
- 📝 Market Research
- 🎯 HR Analytics
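As a rough illustration of the data-filtering and sheet-update steps, the sketch below reshapes one raw job record into the row written to Google Sheets. The raw field names are assumptions; check the actual schema of your Bright Data LinkedIn dataset.

```python
def to_sheet_row(job: dict) -> list:
    # Column order matches the sheet: Job Title, Company Name, Job Detail,
    # Location, Company URL, Apply Link
    return [
        job.get("job_title", ""),
        job.get("company_name", ""),
        (job.get("job_summary") or "")[:500],   # trim long descriptions to keep the sheet readable
        job.get("job_location", ""),
        job.get("company_url", ""),
        job.get("apply_link", ""),
    ]

sample = {
    "job_title": "Senior Software Engineer",
    "company_name": "Tech Solutions Inc.",
    "job_summary": "Remote position requiring 5+ years of experience...",
    "job_location": "San Francisco, CA",
    "company_url": "https://www.linkedin.com/company/tech-solutions",
    "apply_link": "https://www.linkedin.com/jobs/view/123456",
}
print(to_sheet_row(sample))
```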
by Ranjan Dailata
## Who this is for

The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive intelligence teams, and automation developers who need to programmatically collect and structure high-volume data from the web using Bright Data's dataset and snapshot capabilities.

This workflow is built for:

- **Data Engineers:** Building large-scale ETL pipelines from web sources
- **Market Researchers:** Collecting bulk data for analysis across competitors or products
- **Growth Hackers & Analysts:** Mining structured datasets for insights
- **Automation Developers:** Needing reliable snapshot-triggered scrapers
- **Product Managers:** Overseeing data-backed decision-making using live web information

## What problem is this workflow solving?

Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Handling this manually can lead to timeouts, errors, or inconsistent results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:

- Asynchronous snapshot completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability or historical analysis

## What this workflow does

1. **Set Bright Data Dataset ID & Request URL:** Takes in the dataset ID and the Bright Data API endpoint used to trigger the scrape job
2. **HTTP Request:** Sends an authenticated request to the Bright Data API to start a scraping snapshot job
3. **Wait Until Snapshot is Ready:** Polls the snapshot status (e.g., every 30 seconds) until it reaches the ready state (a sketch of this loop follows below)
4. **Download Snapshot:** Downloads the structured dataset snapshot once ready
5. **Persist Response to Disk:** Saves the dataset to disk for archival, review, or local processing
6. **Webhook Notification:** Sends the final result, or a summary of it, to an external webhook

## Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Update the Set Dataset Id and Request URL nodes to set the brand content URL.
5. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs

- **Polling strategy:** Adjust the polling interval (e.g., every 15–60 seconds) based on snapshot complexity
- **Input flexibility:** Accept the datasetId and request URL dynamically from a webhook trigger or input form
- **Webhook output:** Send notifications to internal APIs (for use in dashboards) or Zapier/Make (for multi-step automation)
- **Persistence:** Save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
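For orientation, here is a minimal Python sketch of the trigger, poll, download, persist, and notify sequence. The endpoint paths and response fields are assumptions based on Bright Data's datasets v3 API and should be verified against your Bright Data dashboard; the webhook URL is a placeholder.

```python
import json
import time
import requests

API = "https://api.brightdata.com/datasets/v3"
HEADERS = {"Authorization": "Bearer XXXXXXXXXXXXXX"}   # your Bright Data token
DATASET_ID = "gd_xxxxxxxxxxxx"                         # your dataset ID

# 1. Trigger the scrape job for one or more URLs
trigger = requests.post(
    f"{API}/trigger",
    params={"dataset_id": DATASET_ID},
    headers=HEADERS,
    json=[{"url": "https://example.com/page-to-scrape"}],
)
snapshot_id = trigger.json()["snapshot_id"]            # assumed response field

# 2. Poll until the snapshot reaches the ready state
while True:
    status = requests.get(f"{API}/progress/{snapshot_id}", headers=HEADERS).json()
    if status.get("status") == "ready":
        break
    time.sleep(30)                                     # adjust the polling interval as needed

# 3. Download the snapshot and persist it to disk
data = requests.get(f"{API}/snapshot/{snapshot_id}",
                    headers=HEADERS, params={"format": "json"}).json()
with open(f"{snapshot_id}.json", "w") as f:
    json.dump(data, f)

# 4. Notify a downstream webhook with a short summary
requests.post("https://example.com/your-webhook",
              json={"snapshot_id": snapshot_id, "records": len(data)})
```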
by Stefan
## Track n8n Node Definitions from GitHub and Export to Google Sheets

### Overview

This workflow automatically retrieves and processes metadata from the official n8n GitHub repository, filters all available .node.json files, parses their structure, and appends structured information into a Google Sheet. Perfect for developers, community managers, and technical writers who need to maintain up-to-date information about n8n's evolving node ecosystem.

### Setup Instructions

**Prerequisites**

Before setting up this workflow, ensure you have:

- A GitHub account with API access
- A Google account with Google Sheets access
- An active n8n instance (cloud or self-hosted)

**Step 1: GitHub API Configuration**

1. Navigate to GitHub Settings → Developer Settings → Personal Access Tokens
2. Generate a new token with public_repo permissions
3. Copy the generated token and store it securely
4. In n8n, create a new "GitHub API" credential
5. Paste your token in the credential configuration and save

**Step 2: Google Sheets Setup**

1. Create a new Google Sheets document
2. Set up the following column headers in the first row:
   - node (Column A): Node identifier/name
   - nodeVersion (Column B): Version of the node
   - codexVersion (Column C): Codex version number
   - categories (Column D): Node categories
   - credentialDocumentation (Column E): Credential documentation URL
   - primaryDocumentation (Column F): Primary documentation URL
3. Note down the Google Sheets document ID from the URL
4. Configure Google Sheets OAuth2 credentials in n8n

**Step 3: Workflow Configuration**

1. Import the workflow into your n8n instance
2. Update the following placeholder values:
   - Replace YOUR_GOOGLE_SHEETS_DOCUMENT_ID with your actual document ID
   - Replace YOUR_WEBHOOK_ID if using webhook functionality
3. Configure the GitHub API credentials in the HTTP Request nodes
4. Set up Google Sheets credentials in the Google Sheets nodes
5. Share your Google Sheets document with the email address associated with your Google OAuth2 credentials
6. Grant "Editor" permissions to allow the workflow to write data

### Google Sheets Template Details

The workflow creates a structured dataset with these columns:

- **node**: Node identifier (e.g., n8n-nodes-base.slack)
- **nodeVersion**: Version of the node (e.g., 1.0.0)
- **codexVersion**: Codex version number (e.g., 1.0.0)
- **categories**: Node categories (e.g., Communication, Productivity)
- **credentialDocumentation**: URL to credential documentation
- **primaryDocumentation**: URL to primary node documentation

### Customization Options

**Modifying data extraction**

You can customize the "Format Data" node to extract additional fields:

1. Add new assignments in the Set node
2. Modify the column mapping in the Google Sheets node
3. Update your spreadsheet headers accordingly

**Changing update frequency**

To run this workflow on a schedule:

1. Replace the Manual Trigger with a Cron node
2. Set your desired schedule (e.g., daily, weekly)
3. Configure appropriate timing to avoid API rate limits

**Adding filters**

Customize the "Filter Node Files" code node to:

- Filter specific node types
- Include/exclude certain categories
- Process only recently updated nodes

### Features

- Fetches all node definitions from the n8n-io/n8n repository (a fetch-and-filter sketch follows below)
- Filters for .node.json files only
- Downloads and parses metadata automatically
- Extracts key fields like node names, versions, categories, and documentation URLs
- Appends structured data to Google Sheets with batch processing
- Includes error handling and retry mechanisms
- Clears existing data before appending new information for fresh results

### Use Cases

This workflow is ideal for:

- Tracking changes in official n8n node definitions over time
- Auditing node categories and documentation links for completeness
- Building custom dashboards from node metadata
- Community management and documentation maintenance
- Integration planning and compatibility analysis
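A minimal Python sketch of the fetch-and-filter part, using the public GitHub REST API to list the repository tree and the raw content host to read one definition file. The branch name (master) is an assumption; the workflow itself does this with HTTP Request nodes and the GitHub API credential.

```python
import requests

TOKEN = "ghp_yourPersonalAccessToken"        # GitHub token with public_repo scope
headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

# List the full repository tree and keep only the *.node.json definition files
tree = requests.get(
    "https://api.github.com/repos/n8n-io/n8n/git/trees/master",
    params={"recursive": "1"},
    headers=headers,
).json()
node_files = [item["path"] for item in tree.get("tree", [])
              if item["path"].endswith(".node.json")]
print(len(node_files), "node definition files found")

# Fetch and parse one file; these fields map onto the sheet columns described above
if node_files:
    raw = requests.get(
        f"https://raw.githubusercontent.com/n8n-io/n8n/master/{node_files[0]}"
    ).json()
    print(raw.get("node"), raw.get("nodeVersion"), raw.get("codexVersion"),
          raw.get("categories"))
```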
by InfraNodus
## Optimize Your Top Performing Website Content with Google Analytics, Firecrawl, and InfraNodus

This template helps you:

- **extract** the top performing pages from your website using Google Analytics
- **scrape** the content of the pages using the Firecrawl API (HTTP node provided)
- **build a knowledge graph** for all these pages, with the **topics** and **gaps** identified using InfraNodus
- understand the main concepts and topical clusters in your top-performing content, so you can create more of it, while also identifying the content gaps (structural holes between the topics) that you can use to generate new content ideas
- have access to a knowledge graph visualization of your top performing content to explore it using the interactive network interface

### How it works

This template uses InfraNodus to visualize and analyze your top performing content. It extracts the top pages from the Google Analytics data for the website you choose and scrapes their text content using the high-quality Firecrawl API. It then ingests every page into an InfraNodus graph you specify. The graph can be used to explore the content visually. The insights from the graph, such as the main topics and the gaps between them, are shown to you at the end of the workflow.

You can use these insights to understand what kind of content you should focus on creating to:

- get the highest number of views and establish **topical authority** in your area, which is good for **SEO** and **LLM optimization**, by focusing on the topics identified in the top content
- discover the content gaps: topics that are not connected yet, which you could link with new content ideas and publish. This caters to your audience's interests but connects your existing ideas in a new way, so you deliver content that is relevant but also novel.

Here's a step-by-step description:

> Note: you can replace the PDF to Text convertor node with a better quality PDF convertor from ConvertAPI, which respects the original file layout and doesn't split text into small chunks.

1. Trigger the workflow
2. Extract a list of top (25, 50) pages from your Google Analytics account (you'll need to connect it via the Google Cloud API)
3. Fix the extracted data and add a correct URL prefix to each page (if your Analytics has relative paths only)
4. Loop through each page extracted
5. Extract the text content of every page using the high-quality Firecrawl API
6. Ingest the text content into the InfraNodus graph that you specify (a sketch of steps 5 and 6 follows below)
7. Once all the pages are ingested into the InfraNodus graph, access the AI insights endpoint in InfraNodus and get information about the main topics and gaps
8. Display this information to the user

### How to use

You need an InfraNodus API account and key to use this workflow.

1. Create an InfraNodus account.
2. Get the API key at https://infranodus.com/api-access and create a Bearer authorization key for the InfraNodus HTTP nodes.

### Requirements

- An InfraNodus account and API key
- Optional: A Google Analytics account for your property (alternatively, you can modify this workflow to provide a list of the most popular pages)
- Optional: Google Cloud API access (to access the data from your Google Analytics account; follow the n8n instructions)
- Optional: A Firecrawl API key for better quality web page scraping (otherwise, use the standard HTTP to Text node from n8n)

### Customizing this workflow

You can customize this workflow by using a list of the URL pages you want to analyze from a Google Sheet. Alternatively, you can use the Google SERP node to extract top search results for a query and get the main topics for them.

For support and feedback, please contact us at https://support.noduslabs.com

To learn more about InfraNodus: https://infranodus.com
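A minimal Python sketch of steps 5 and 6. The Firecrawl call follows its public v1 scrape API; the InfraNodus endpoint and parameters are assumptions, so confirm them against https://infranodus.com/api-access before relying on them.

```python
import requests

FIRECRAWL_KEY = "fc-your-api-key"
INFRANODUS_KEY = "your-infranodus-bearer-token"
GRAPH_NAME = "top_performing_content"        # the InfraNodus graph to ingest into

def scrape(url: str) -> str:
    # Firecrawl v1 scrape: returns the page content as markdown
    resp = requests.post(
        "https://api.firecrawl.dev/v1/scrape",
        headers={"Authorization": f"Bearer {FIRECRAWL_KEY}"},
        json={"url": url, "formats": ["markdown"]},
    )
    return resp.json()["data"]["markdown"]

def ingest(text: str) -> None:
    # Assumed InfraNodus endpoint and parameters; check the API docs for the exact call
    requests.post(
        "https://infranodus.com/api/v1/graphAndStatements",
        headers={"Authorization": f"Bearer {INFRANODUS_KEY}"},
        params={"name": GRAPH_NAME, "doNotSave": "false"},
        json={"text": text},
    )

for url in ["https://example.com/top-page-1", "https://example.com/top-page-2"]:
    ingest(scrape(url))
```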
by Solido AI
### How it works

This bot operates in a continuous WhatsApp monitoring loop. It analyzes messages to detect keywords in common questions (like hours, prices, and location) and sends automatic replies with predefined information. For unrecognized questions, it directs the user to manual assistance (see the sketch below).

### Setup steps

The initial setup involves integrating with the WhatsApp API, registering keywords and their respective responses, and defining the fallback flow. It takes only a few minutes to have the bot running with essential information.
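A minimal Python sketch of the keyword-matching logic with a fallback; the keywords and canned replies are illustrative assumptions.

```python
# Keywords and canned replies are illustrative; in the bot they live in its configuration
REPLIES = {
    "hours": "We're open Monday to Friday, 9am to 6pm.",
    "price": "Our price list is available at example.com/prices.",
    "location": "You can find us at 123 Main Street.",
}
FALLBACK = "Thanks for your message! A member of our team will get back to you shortly."

def reply_for(message: str) -> str:
    text = message.lower()
    for keyword, reply in REPLIES.items():
        if keyword in text:
            return reply
    return FALLBACK          # unrecognized question: hand off to manual assistance

print(reply_for("What are your opening hours?"))   # hours reply
print(reply_for("Do you ship overseas?"))          # fallback
```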