by Mario
Purpose

This workflow allows you to import any workflow from a file or another n8n instance and map the credentials easily.

How it works

A multi-form setup guides you through the entire process. At the beginning you have two options:

- Upload a workflow file (JSON)
- Copy a workflow from a remote n8n instance

If you choose the second option, you first pick one of your predefined remote instances (defined in the Settings node); the workflow then retrieves a list of all workflows from that instance using the n8n API, from which you can choose one. At this point both initial options converge: the workflow file is processed. In parallel, all credentials of the current instance are retrieved using the Execute Command node.

The next form page enables a mapping of all the credentials used in the workflow. The matching happens between the names of the original credentials and the ones available on the current instance (names are used because one workflow can contain different credentials of the same type). Every option then shows all available credentials of the same type. In addition, the user always has the choice to create a new credential on the fly.

For every option set to create a new credential, an empty credential is created on the current instance using the n8n API. An emoji is appended to its name to indicate that it still needs to be populated. Finally, the workflow is updated with the new credential IDs and created on the current instance using the n8n API. The user then gets a message indicating whether the process succeeded.

Setup

- Select your credentials in the nodes which require them.
- Configure your remote instance(s) in the Settings node. (You can skip this step if you only want to use the file upload feature.) Every instance is defined as an object with the keys name, apiKey, and baseUrl; those instances are wrapped inside an array (see the sketch at the end of this description). You can find an example described within a note on the workflow canvas.

How to use

- Grab the (production) URL of the form from the first node.
- Open the URL and follow the instructions given in the multi-form.

Disclaimer

Security: Beware that all credentials are decrypted and processed within the workflow. The API keys to other n8n instances are also stored within the workflow. This solution is primarily meant for transferring data between testing environments. For production use, consider the n8n Enterprise edition, which provides a reliable way to deploy workflows between different environments without the need for manual credential mapping.
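Picking up the Settings node step above: a minimal sketch of what the instances array might look like. The instance names, API keys, and URLs below are placeholders, not values shipped with the template:

```json
[
  {
    "name": "staging",
    "apiKey": "n8n_api_xxxxxxxx",
    "baseUrl": "https://staging.example.com/api/v1"
  },
  {
    "name": "production",
    "apiKey": "n8n_api_yyyyyyyy",
    "baseUrl": "https://n8n.example.com/api/v1"
  }
]
```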
by Yaron Been
Automatically monitor and track funding rounds in the US Fintech and Healthtech sectors using the Crunchbase API, with daily updates pushed to Google Sheets for easy analysis and monitoring.

🚀 What It Does

- **Daily Monitoring**: Automatically checks for new funding rounds every day at 8 AM
- **Smart Filtering**: Focuses on US-based Fintech and Healthtech companies
- **Data Enrichment**: Extracts and formats key funding information
- **Automated Storage**: Pushes data to Google Sheets for easy access and analysis

🎯 Perfect For

- VC firms tracking investment opportunities
- Startup founders monitoring market activity
- Market researchers analyzing funding trends
- Business analysts tracking competitor funding

⚙️ Key Benefits

- ✅ Real-time funding round monitoring
- ✅ Focused industry tracking (Fintech & Healthtech)
- ✅ Automated data collection and organization
- ✅ Structured data output in Google Sheets
- ✅ Complete funding details including investors and amounts

🔧 What You Need

- Crunchbase API key
- Google Sheets account
- n8n instance
- Basic spreadsheet setup

📊 Data Collected

- Company Name
- Industry
- Funding Round Type
- Announced Date
- Money Raised (USD)
- Investors
- Crunchbase URL

🛠️ Setup & Support

Quick Setup: Deploy in 30 minutes with our step-by-step configuration guide.

📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Stay ahead of market movements with automated funding round tracking. Transform manual research into an efficient, automated process.
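For orientation, the "Data Enrichment" step amounts to flattening one funding-round record into the columns listed above. This is a hedged sketch of a Code node doing that; the `properties.*` field paths are illustrative assumptions, not the guaranteed Crunchbase response shape:

```javascript
// Sketch: map one funding-round item to the sheet columns listed above.
// The input paths (properties.*) are illustrative, not Crunchbase's
// documented response shape — adjust to the endpoint you actually call.
return items.map((item) => {
  const r = item.json.properties ?? {};
  return {
    json: {
      company_name: r.identifier?.value ?? '',
      industry: (r.categories ?? []).join(', '),
      funding_round_type: r.investment_type ?? '',
      announced_date: r.announced_on ?? '',
      money_raised_usd: r.money_raised?.value_usd ?? null,
      investors: (r.investor_identifiers ?? []).map((i) => i.value).join(', '),
      crunchbase_url: r.identifier?.permalink
        ? `https://www.crunchbase.com/funding_round/${r.identifier.permalink}`
        : '',
    },
  };
});
```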
by Aleksandr
This template processes webhooks received from amoCRM in URL-encoded format and transforms the data into a structured array that n8n can easily interpret. By default, n8n does not automatically parse URL-encoded webhook payloads into usable JSON. This template bridges that gap, enabling seamless data manipulation and integration with subsequent processing nodes.

Key Features:

- Input Handling: Processes URL-encoded data received from amoCRM webhooks.
- Data Transformation: Converts complex, nested keys into a structured JSON array.
- Ease of Use: Simplifies access to specific fields for further workflow automation.

Setup Guide:

1. Webhook Trigger Node: Configure the Webhook Trigger node to receive data from amoCRM.
2. URL-Encoding Parsing: Use the provided nodes to transform the input URL-encoded data into a structured array.
3. Access Transformed Data: Use the resulting JSON structure for subsequent nodes in your workflow, such as filtering, updating records, or triggering external systems.

Example Data Transformation:

Sample input (URL-encoded): amoCRM typically sends flattened, bracketed keys, accessible only as literal strings, e.g.:
$json.body['leads[update][0][custom_fields][0][id]']

Output (structured JSON): after processing, the same value is addressable as clean nested JSON:
{{ $json.leads.update['0'].id }}

This output allows you to work with clean, structured JSON, simplifying field extraction and workflow continuation.

Code Explanation:

This workflow parses URL-encoded key-value pairs using n8n nodes to restructure the data into a nested JSON object. By doing so, the template improves transparency, ensures data integrity, and makes further automation tasks straightforward.
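The restructuring step is conceptually equivalent to the following Code node sketch. This is not necessarily the template's exact code, but it shows how flat bracketed keys fold into nested JSON:

```javascript
// Sketch: fold flat, bracketed amoCRM keys into a nested object.
// Input:  { "leads[update][0][id]": "123", "leads[update][0][custom_fields][0][id]": "7" }
// Output: { leads: { update: { "0": { id: "123", custom_fields: { "0": { id: "7" } } } } } }
const flat = $json.body; // URL-encoded body as parsed by the Webhook node
const nested = {};

for (const [key, value] of Object.entries(flat)) {
  // "leads[update][0][id]" -> ["leads", "update", "0", "id"]
  const path = key.replace(/\]/g, '').split('[');
  let cursor = nested;
  path.forEach((part, i) => {
    if (i === path.length - 1) {
      cursor[part] = value;          // leaf: assign the value
    } else {
      cursor[part] = cursor[part] ?? {}; // branch: create missing level
      cursor = cursor[part];
    }
  });
}

return [{ json: nested }];
```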
by Geoffrey Saxena
👤 Who is this for?

This workflow is great for n8n users who want to prevent duplicate or overlapping workflow runs. If you're a developer, DevOps engineer, or automation enthusiast managing tasks like database updates, syncing tools, or hitting rate-limited APIs, this one's for you.

🧩 What problem does this solve?

In the real world, automations can get triggered at the same time, whether that's because of multiple webhook calls, overlapping schedules, or retries. And when two workflows try to do the same thing at once (like updating a record or syncing data), it can cause conflicts, data corruption, or wasted API calls. This workflow helps avoid that problem by using Redis as a lock system, so only one instance runs at a time. Think of it like putting up a "🚧 Workflow in Progress" sign while your logic is running.

⚙️ What this workflow does

- When the workflow starts, it tries to set a Redis key as a lock with a short expiry.
- If the lock is free: your main business logic runs. Once it's done, the lock is cleared.
- If the lock is already taken (i.e., another run is in progress): the workflow will wait and retry a few times.
- If a duplicate request shows up while one is already being processed: it skips that duplicate to avoid unnecessary work.

You can customize both the timeout and retry logic to match your needs. A sketch of the lock pattern follows at the end of this description.

🛠️ Setup guide

To use this template:

1. You'll need access to a Redis instance (either self-hosted or managed, like Upstash or Redis Cloud).
2. Set up your Redis credentials in the n8n Redis node.
3. Swap out the webhook node with your actual trigger or logic.
4. Adjust the lock timeout to match how long your task typically takes.

> 💡 Bonus Tip: Use this pattern wherever you need idempotency or want to avoid duplicate processing.

🧪 Example use case

Let's say you have a workflow that syncs ClickUp tickets to Google Sheets. It runs daily at 9 AM and updates tickets, adds notes, and makes sure nothing is missed. But what if two runs start at the same time? Or someone triggers a manual sync while the scheduled one is still working? By wrapping that whole sync inside this Redis locking template, you can make sure it only runs one at a time, saving your APIs (and your sanity).
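Conceptually, the lock is an atomic "set if not exists, with expiry". Here is a minimal sketch of the pattern as a plain function around a generic Node.js Redis client; the key name, TTL, and retry counts are placeholders, and the template itself implements this with n8n Redis nodes rather than raw code:

```javascript
// Sketch of the Redis locking pattern (illustrative only).
async function runWithLock(redis, doWork) {
  const LOCK_KEY = 'lock:clickup-sheet-sync'; // one key per protected workflow
  const LOCK_TTL_SEC = 120;                   // should exceed the task's typical runtime
  const MAX_RETRIES = 5;

  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    // SET key value NX EX ttl — atomic "acquire lock only if free";
    // the expiry guarantees the lock self-clears if a run crashes.
    const acquired = await redis.set(LOCK_KEY, 'locked', { NX: true, EX: LOCK_TTL_SEC });
    if (acquired) {
      try {
        return await doWork();      // the main business logic
      } finally {
        await redis.del(LOCK_KEY);  // release the lock when done
      }
    }
    // Lock held by another run: wait, then retry.
    await new Promise((r) => setTimeout(r, 10_000));
  }
  return null; // still locked after all retries — skip this duplicate run
}
```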
by Alex Huy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description

This n8n workflow automatically scrapes Airbnb listings from a specified location and saves the data to a Google Sheet. It performs pagination to collect listings across multiple pages, extracts detailed information for each property, and organizes the data in a structured format for easy analysis.

How it Works

The workflow operates through these high-level steps:

1. Search Initialization: Starts with an Airbnb search for a specific location (London) with defined check-in/check-out dates and guest count
2. Pagination Loop: Automatically processes multiple pages of search results using cursor-based pagination (see the sketch after this description)
3. Data Extraction: Parses listing information including names, prices, ratings, reviews, and URLs
4. Detail Enhancement: Fetches additional details for each listing (house rules, highlights, descriptions, amenities)
5. Data Storage: Saves all collected data to a Google Sheet with proper formatting
6. Loop Control: Continues until reaching the page limit (2 pages) or no more results are available

Setup Steps

Prerequisites

- n8n instance with MCP (Model Context Protocol) support
- Google Sheets API credentials configured
- Airbnb MCP client properly set up

Configuration Steps

1. Configure MCP Client
   - Set up the Airbnb MCP client with your credential ID
   - Ensure the client has access to the airbnb_search and airbnb_listing_details tools
2. Google Sheets Setup
   - Create a Google Sheet with ID: 15IOJquaQ8CBtFilmFTuW8UFijux10NwSVzStyNJ1MsA
   - Configure Google Sheets OAuth2 credentials (ID: 6YhBlgb8cXMN3Ra2)
   - Ensure the sheet has these column headers: id, name, url, price_per_night, total_price, price_details, beds_rooms, rating, reviews, badge, location, houseRules, highlights, description, amenities
3. Search Parameters
   - Location: "London" (can be modified in the "Airbnb Search" node)
   - Adults: 7
   - Children: 1
   - Check-in: "2025-08-14"
   - Check-out: "2025-08-17"
   - Page limit: 2 (can be adjusted in the "If1" condition node)
4. Execution
   - Use the manual trigger "When clicking 'Execute workflow'" to start the process
   - Monitor the workflow execution through the n8n interface
   - Check the Google Sheet for populated data after completion

Key Features

- Automatic Pagination: Processes multiple pages without manual intervention
- Comprehensive Data: Extracts both basic listing info and detailed property information
- Error Handling: Includes JSON parsing error handling and data validation
- Batch Processing: Uses split batches for efficient processing of individual listings
- Real-time Updates: Appends new data to existing Google Sheet records

Output Data Structure

Each listing contains:

- Basic info: ID, name, URL, pricing details, room/bed count
- Ratings: Average rating and review count
- Location: Latitude and longitude coordinates
- Enhanced details: House rules, highlights, descriptions, amenities
- Metadata: Page number, check-in/out dates, badges
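The pagination loop mentioned above is conceptually equivalent to this sketch. Here `fetchPage` is a stand-in for one MCP airbnb_search tool call, and the `nextCursor`/`listings` field names are assumptions about the tool's response shape; the template drives the same loop with n8n nodes and the "If1" page-limit check:

```javascript
// Sketch: cursor-based pagination over Airbnb search results.
async function collectListings(fetchPage, pageLimit = 2) {
  const listings = [];
  let cursor = null;

  for (let page = 0; page < pageLimit; page++) {
    const result = await fetchPage({
      location: 'London',
      checkin: '2025-08-14',
      checkout: '2025-08-17',
      adults: 7,
      children: 1,
      cursor, // null on the first page
    });

    listings.push(...result.listings);

    if (!result.nextCursor) break; // no more results available
    cursor = result.nextCursor;    // resume where the last page ended
  }

  return listings;
}
```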
by Ventsislav Minev
Automated Email Attachment Organizer

Automatically process labeled emails with attachments into organized Google Drive folders.

Who Is This For?

Teams or individuals needing to:

- Automatically sort invoices, receipts, and files.
- Organize client documents by date.
- Verify sender emails against a whitelist.
- Timestamp files to avoid duplicate names.

What Does This Workflow Solve?

- 🕒 Manual Email Sorting: Saves time by automating the organization of email attachments.
- 📂 Disorganized Cloud Storage: Ensures attachments are neatly stored in Google Drive folders.
- 📧 Unverified Sender Issues: Filters and validates emails using a whitelist.
- 🔄 Duplicate Filenames: Uses timestamps to ensure every file name is unique (see the sketch after this description).

Setup Guide

1. Pre-Requisites

- Whitelist Sheet: Make a copy of the Example Whitelist Sheet.
- Gmail Filter: Create a filter in Gmail to label emails with attachments.

To create a Gmail filter:

1. Open your Gmail inbox.
2. Click the search bar and select "Show search options".
3. Enter your criteria (e.g., type has:attachment).
4. Click "Create filter".
5. Choose "Apply the label: Custom_Label" and save.

2. Credentials Setup

Make sure your n8n instance is connected with:

- Gmail account (via OAuth2)
- Google Drive account (via OAuth2)
- Google Sheets (via OAuth2)

3. Configure Your n8n Workflow Nodes

1. Trigger and email retrieval
   - Gmail Trigger: Set up the check interval and filters for emails (i.e., emails labeled with Custom_Label).
2. Whitelist settings
   - Lookup in Sheets: Searches for a row with the sender email. Configure this node to point to your whitelist spreadsheet containing two columns: |email|company|
3. File storage location
   - Configure the parent folder in which sub-folders are created and files are stored, via the Create Company Folder node's Parent Folder dropdown.

Final Steps

Test your workflow: Run the workflow to verify emails are processed and files are uploaded correctly.

Happy Automating!
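On the duplicate-filename point above: the fix boils down to stamping each file name before upload. A minimal sketch, assuming the attachment arrives as the binary property `attachment_0` (that property name is an assumption, not the template's guaranteed layout):

```javascript
// Sketch: build a unique, timestamped file name for an attachment.
const original = items[0].binary?.attachment_0?.fileName ?? 'attachment';
const dot = original.lastIndexOf('.');
const base = dot > 0 ? original.slice(0, dot) : original;
const ext = dot > 0 ? original.slice(dot) : '';

// "2024-05-01T09-30-12"-style stamp keeps names sortable and collision-free
const stamp = new Date().toISOString().replace(/:/g, '-').slice(0, 19);

return [{ json: { uniqueFileName: `${base}_${stamp}${ext}` } }];
```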
by Romain Jouhannet
Linear Project/Issue Status and End Date to Productboard Feature Sync

Sync project and issue data between Linear and Productboard to keep teams aligned. This workflow updates Productboard features with the status and end date from Linear projects, or the due date from Linear issues. It ensures consistent data and sends a Slack notification whenever changes are made.

Features

- Listens for updates in Linear projects/issues.
- Maps Linear statuses to Productboard feature statuses (see the sketch after the setup steps).
- Updates Productboard feature details, including timeframe.
- Sends a Slack notification summarizing the updates.

Setup

1. Linear credentials: Add your Linear API credentials in n8n.
2. Productboard credentials: Configure the Productboard API credentials in n8n.
3. Linear projects or issues: Select the Linear project(s) or issue(s) you want to monitor for updates.
4. Productboard custom field: Create a custom field in Productboard named "Linear". This field should store the URL of the Linear project or issue you want to sync. Retrieve the UUID of the custom field in Productboard and set it in the "Get Productboard Feature ID" node.
5. Slack notification: Update the Slack node with the desired Slack channel ID.
6. Activate the workflow: Enable the workflow to automatically sync data when triggered by updates in Linear.
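The status mapping could look roughly like the sketch below. The Productboard status names shown are placeholders; replace them with the feature statuses configured in your own Productboard workspace, and check the Linear payload field against what your trigger actually delivers:

```javascript
// Sketch: one plausible Linear -> Productboard status mapping
// (both the map and the input path are illustrative assumptions).
const STATUS_MAP = {
  backlog: 'Candidate',
  planned: 'Planned',
  started: 'In progress',
  completed: 'Done',
  canceled: 'Archived',
};

const linearState = $json.data?.state ?? 'planned';
return [{ json: { productboardStatus: STATUS_MAP[linearState] ?? 'Candidate' } }];
```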
by Dvir Sharon
💼 LinkedIn Job Finder Automation using Bright Data API & Google Sheets

A comprehensive n8n automation that searches LinkedIn job postings using Bright Data's API and automatically organizes results in Google Sheets for efficient job hunting and recruitment workflows.

📋 Overview

This workflow provides an automated LinkedIn job search solution that collects job postings based on your search criteria and organizes them in Google Sheets. Perfect for job seekers, recruiters, HR professionals, and talent acquisition teams.

✨ Key Features

- 🔍 Smart Job Search: Form-based input for city, job title, country, and job type
- 🛍 LinkedIn Integration: Uses Bright Data's LinkedIn dataset for accurate job posting data
- 📊 Automated Organization: Populates Google Sheets with structured job data
- 📧 Real-time Processing: Processes job search requests in real time
- 📈 Data Storage: Stores job details including company info, locations, and apply links
- 🔄 Batch Processing: Handles multiple job postings efficiently
- ⚡ Fast & Reliable: Built-in error handling for scraping
- 🎯 Customizable Filters: Advanced job filtering based on criteria

🎯 What This Workflow Does

Input

- Job Search Criteria: City, job title, country, and optional job type
- Search Parameters: Configurable filters and limits
- Output Preferences: Google Sheets destination

Processing Steps

1. Form Submission
2. Data Request to Bright Data API
3. Status Monitoring
4. Data Extraction
5. Data Filtering
6. Sheet Update
7. Error Handling

Output Data Points

| Field | Description | Example |
|---|---|---|
| Job Title | Position title from posting | Senior Software Engineer |
| Company Name | Employer company name | Tech Solutions Inc. |
| Job Detail | Job summary/description | Remote position requiring 5+ years… |
| Location | Job location | San Francisco, CA |
| Company URL | Company profile link | View Profile |
| Apply Link | Direct application link | Apply Now |

🚀 Setup Instructions

Prerequisites

- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with LinkedIn dataset access

Steps

1. Import the Workflow: Use JSON import in n8n
2. Configure Bright Data: Add API credentials and dataset ID
3. Configure Google Sheets: Create sheet, set credentials, map columns
4. Update Workflow Settings: Replace placeholders with your actual data
5. Test & Activate: Submit a test form and verify data in Google Sheets

📖 Usage Guide

Submitting Job Searches

Go to your webhook URL and fill in the form with:

- City: e.g., New York
- Job Title: e.g., Software Engineer
- Country: e.g., US
- Job Type: Optional (Full-Time, Remote, etc.)

Understanding Results

- Comprehensive job data
- Company info and profile links
- Direct application links
- Location and job descriptions

Customizing Search Parameters

Edit the Create Snapshot ID node to change:

- Time range (e.g., "Past month")
- Result limits
- Company filters

A sketch of what that request can look like follows below.
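This hedged sketch shows the general shape of a Bright Data dataset trigger call from an n8n Code node. The dataset ID, token, and input field names are placeholders; verify the endpoint and payload against your own Bright Data dashboard and the node's actual configuration:

```javascript
// Sketch: trigger a Bright Data dataset snapshot (illustrative values).
const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.brightdata.com/datasets/v3/trigger',
  qs: { dataset_id: 'gd_XXXXXXXX' }, // placeholder dataset ID
  headers: { Authorization: 'Bearer YOUR_BRIGHT_DATA_TOKEN' },
  body: [
    {
      location: 'New York',
      keyword: 'Software Engineer',
      country: 'US',
      time_range: 'Past month',
      job_type: 'Full-Time',
    },
  ],
  json: true,
});

// The response typically carries a snapshot ID used for status monitoring.
return [{ json: response }];
```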
🔧 Customization Options

- More Data Points: Add salary, seniority, applicants, etc.
- Custom Form Fields: Add filters for salary, experience, industry
- Multiple Sheets: Route results by job type or location

🚨 Troubleshooting

- Bright Data connection failed: Check API credentials and dataset access
- No job data extracted: Verify search parameters and API limits
- Google Sheets permission denied: Re-authenticate and check sharing
- Form not working: Check webhook URL and field mappings
- Filter issues: Review logic and data types
- Execution failed: Check logs, retry logic, and network status

📊 Use Cases & Examples

- Job Seeker Dashboard: Automate job search and track applications
- Recruitment Pipeline: Source candidates and monitor hiring trends
- Market Research: Analyze job trends and salary benchmarks
- HR Analytics: Support workforce planning and competitive insights

⚙️ Advanced Configuration

- Batch Processing: Queue multiple searches with delays
- Search History: Track and analyze past searches
- Tool Integration: Connect to CRM, Slack, databases, BI tools

📈 Performance & Limits

- Processing Time: 30–60 seconds per search
- Concurrent Requests: 2–3 (depends on Bright Data plan)
- Data Accuracy: 95%+
- Success Rate: 90%+
- Daily Capacity: 50–200 searches
- Memory: ~50MB per execution
- API Calls: 3–4 Bright Data + 1 Google Sheets per search

🤝 Support & Community

- n8n Community: community.n8n.io
- Documentation: docs.n8n.io
- Bright Data Support: Via your Bright Data dashboard
- GitHub Issues: Report bugs and request features

🎯 Ready to Use!

Your workflow is ready for automated LinkedIn job searching. Customize it to your recruiting or job search needs.

Webhook URL: https://your-n8n-instance.com/webhook/linkedin-job-finder

What Gets Extracted:

- ✅ Job Title
- ✅ Company Information
- ✅ Location Data
- ✅ Job Details
- ✅ Application Links
- ✅ Processing Timestamps

Use Cases:

- 🔍 Job Search Automation
- 📊 Recruitment Intelligence
- 📝 Market Research
- 🎯 HR Analytics
by Ranjan Dailata
Who this is for

The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive intelligence teams, and automation developers who need to programmatically collect and structure high-volume data from the web using Bright Data's dataset and snapshot capabilities.

This workflow is built for:

- Data Engineers - Building large-scale ETL pipelines from web sources
- Market Researchers - Collecting bulk data for analysis across competitors or products
- Growth Hackers & Analysts - Mining structured datasets for insights
- Automation Developers - Needing reliable snapshot-triggered scrapers
- Product Managers - Overseeing data-backed decision-making using live web information

What problem is this workflow solving?

Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Manual handling of this process can lead to timeouts, errors, or inconsistencies in results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:

- Asynchronous snapshot completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability or historical analysis

What this workflow does

1. Set Bright Data Dataset ID & Request URL: Takes in the dataset ID and Bright Data API endpoint used to trigger the scrape job
2. HTTP Request: Sends an authenticated request to the Bright Data API to start a scraping snapshot job
3. Wait Until Snapshot is Ready: Implements a loop or wait mechanism that checks the snapshot status (e.g., polling every 30 seconds) until it reaches the ready state
4. Download Snapshot: Downloads the structured dataset snapshot once ready
5. Persist Response to Disk: Saves the dataset to disk for archival, review, or local processing
6. Webhook Notification: Sends the final result, or a summary of it, to an external webhook

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Update the Set Dataset Id and Request URL for setting the brand content URL.
5. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- Polling Strategy: Adjust the polling interval (e.g., every 15–60 seconds) based on snapshot complexity. A sketch of the polling step follows below.
- Input Flexibility: Accept the datasetId and request URL dynamically from a webhook trigger or input form.
- Webhook Output: Send notifications to internal APIs (for use in dashboards) or Zapier/Make (for multi-step automation).
- Persistence: Save output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
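For reference, the wait-and-download loop is conceptually equivalent to this sketch. The template implements it with Wait/If nodes rather than code, and the progress/snapshot endpoint paths and status values shown here should be verified against Bright Data's current API docs:

```javascript
// Sketch: poll snapshot status until ready, then download (illustrative).
const SNAPSHOT_ID = $json.snapshot_id;
const POLL_INTERVAL_MS = 30_000;
const MAX_ATTEMPTS = 40; // ~20 minutes at 30s per attempt

for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
  const progress = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://api.brightdata.com/datasets/v3/progress/${SNAPSHOT_ID}`,
    headers: { Authorization: 'Bearer YOUR_TOKEN' },
    json: true,
  });

  if (progress.status === 'ready') {
    // Snapshot prepared: fetch the structured dataset.
    const data = await this.helpers.httpRequest({
      method: 'GET',
      url: `https://api.brightdata.com/datasets/v3/snapshot/${SNAPSHOT_ID}`,
      headers: { Authorization: 'Bearer YOUR_TOKEN' },
      json: true,
    });
    return [{ json: { snapshotId: SNAPSHOT_ID, records: data } }];
  }

  // Not ready yet: wait before the next poll.
  await new Promise((r) => setTimeout(r, POLL_INTERVAL_MS));
}

throw new Error(`Snapshot ${SNAPSHOT_ID} not ready after ${MAX_ATTEMPTS} attempts`);
```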
by Solido AI
How it works:

This bot operates in a continuous WhatsApp monitoring loop. It analyzes messages to detect keywords in common questions (like hours, prices, and location) and sends automatic replies with predefined information. For unrecognized questions, it directs the user to manual assistance.

Set up steps:

The initial setup involves integrating with the WhatsApp API, registering keywords and their respective responses, and defining the fallback flow. It takes only a few minutes to have the bot running with essential information.
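The keyword-detection step can be pictured as a small lookup like the sketch below; the keywords, reply texts, and message field path are placeholders for whatever you register during setup:

```javascript
// Sketch: keyword-based reply selection with a manual-assistance fallback.
const REPLIES = [
  { keywords: ['hours', 'open', 'opening'], reply: 'We are open Mon-Fri, 9am-6pm.' },
  { keywords: ['price', 'cost', 'how much'], reply: 'Our price list: https://example.com/prices' },
  { keywords: ['location', 'address', 'where'], reply: 'Find us at 123 Example St.' },
];

const text = ($json.message?.text ?? '').toLowerCase();
const match = REPLIES.find((r) => r.keywords.some((k) => text.includes(k)));

return [{
  json: {
    reply: match
      ? match.reply
      : 'I did not catch that. A human will get back to you shortly!', // fallback
  },
}];
```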
by Solido AI
How it works:

This system functions by receiving expenses via webhook POST. It validates the data, stores it in Google Sheets, and, daily at 8 PM, generates and sends financial summaries. Automatic categorization simplifies the organization of expenses.

Set up steps:

Setup involves creating the Google Sheet, configuring the webhook, and defining the categorization rules. The process is quick and intuitive, taking about 10-15 minutes for the system to be ready to receive your expenses.
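The validation and automatic categorization described above could be sketched as follows; the categories, keywords, and webhook body fields are illustrative assumptions to be replaced by your own rules:

```javascript
// Sketch: validate an incoming expense and assign a category by keyword.
const RULES = [
  { category: 'Food', keywords: ['restaurant', 'grocery', 'coffee'] },
  { category: 'Transport', keywords: ['uber', 'fuel', 'parking'] },
  { category: 'Utilities', keywords: ['electricity', 'water', 'internet'] },
];

const description = ($json.body?.description ?? '').toLowerCase();
const amount = Number($json.body?.amount);

if (!description || Number.isNaN(amount)) {
  throw new Error('Invalid expense: description and numeric amount are required');
}

const rule = RULES.find((r) => r.keywords.some((k) => description.includes(k)));

return [{
  json: {
    description,
    amount,
    category: rule ? rule.category : 'Other', // default bucket
    receivedAt: new Date().toISOString(),
  },
}];
```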
by Kirill Khatkevich
This workflow is a comprehensive solution for digital marketers, performance agencies, and e-commerce brands looking to scale their creative testing process on Meta Ads efficiently. It eliminates the tedious manual work of uploading assets, creating campaigns, and setting up ads one by one.

Use Case

Manually launching weekly creative tests is time-consuming and prone to errors. This workflow solves that problem by creating a fully automated pipeline: from a creative asset in a folder to a complete, ready-to-launch (but paused) ad structure in your Meta Ads account. It's perfect for teams that want to:

- Save hours of manual work every week.
- Systematically test a high volume of creatives.
- Maintain a structured and consistent campaign naming convention.
- Keep a detailed log of all created assets for data-driven performance analysis.

How it Works

The workflow is structured into four logical blocks:

1. Configuration & Scheduling: The workflow runs on a weekly schedule. A central "Configuration" Set node at the beginning holds all key variables (Ad Account ID, Page ID, Pixel ID), making it incredibly easy to adapt the template for different projects.

2. Creative Ingestion & Processing: It scans a specific Google Drive folder for new image and video files. Using an IF node, it branches the logic based on the file type. Each file is uploaded to the Meta Ads library, and a corresponding Ad Creative is built with a pre-defined destination URL.

3. Campaign & Ad Set Assembly: The workflow creates a single new Campaign with an OUTCOME_SALES objective. It then creates a single Ad Set optimized for OFFSITE_CONVERSIONS (e.g., "Add to Cart"), using the Pixel ID from the configuration. A Merge node intelligently combines the single Ad Set ID with every creative processed in the previous block, preparing the data for the final step.

4. Ad Creation & Data Logging: The workflow iterates through the prepared data, creating a unique Ad for each creative. Upon the successful creation of each ad, a new row is appended to a Google Sheet, logging all relevant IDs (CampaignID, AdSetID, AdID, CreativeID) and metadata for a complete audit trail.

Setup Instructions

To use this template, you need to configure a few key nodes.

1. Credentials:
   - Connect your Meta Ads account.
   - Connect your Google account (for both Drive and Sheets).

2. The ⚙️ Configuration Node (Set node): This is the most important step. Open the first Set node and fill in your specific values:
   - adAccountId: Your Meta Ad Account ID.
   - pageId: The ID of the Facebook Page you're advertising for.
   - pixelId: Your Meta Pixel ID for conversion tracking.

3. Google Sheets Node (Save Full Report to Sheet):
   - Select your spreadsheet and the specific sheet where you want to save the reports.
   - Make sure your sheet has columns with the following headers: CampaignID, AdSetID, AdID, CreativeID, FileName, MimeType, Timestamp.

4. Check URLs and IDs in HTTP Request Nodes:
   - The template is configured to use the variables from the ⚙️ Configuration node. Double-check that the URLs in the Create Campaign, Create Ad Set, and Create ... Creative nodes correctly reference these variables (e.g., .../act_{{ $('⚙️ Configuration Meta Ads').item.json.adAccountId }}/campaigns). A hedged sketch of such a request follows below.
   - Verify the link in the Create Video Creative and Create Image Creative nodes points to your desired landing page.

5. Activate the Workflow:
   - Set your desired schedule in the Schedule Trigger node.
   - Save and activate the workflow.
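For orientation, the Create Campaign call is roughly equivalent to this sketch. The Graph API version, campaign naming scheme, and token handling are illustrative assumptions, not the node's exact configuration:

```javascript
// Sketch: what the Create Campaign HTTP request roughly does (illustrative).
const adAccountId = $('⚙️ Configuration Meta Ads').item.json.adAccountId;

const campaign = await this.helpers.httpRequest({
  method: 'POST',
  url: `https://graph.facebook.com/v19.0/act_${adAccountId}/campaigns`,
  body: {
    name: `Creative Test | ${new Date().toISOString().slice(0, 10)}`,
    objective: 'OUTCOME_SALES',
    status: 'PAUSED',            // created paused, ready for manual launch
    special_ad_categories: [],
    access_token: 'YOUR_META_ACCESS_TOKEN', // placeholder
  },
  json: true,
});

return [{ json: { campaignId: campaign.id } }];
```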
Further Ideas & Customization

This workflow is a powerful foundation. You can easily extend it to:

- Create a second workflow that runs a week later, reads the Google Sheet, and pulls performance data for all the ads created.
- A/B test ad copy by adding different text variations from a spreadsheet.
- Add a Slack or Email notification at the end to confirm that the weekly campaign launch was successful.