by Aleksandr
This template processes webhooks received from amoCRM in URL-encoded format and transforms the data into a structured array that n8n can easily interpret. By default, n8n does not automatically parse URL-encoded webhook payloads into usable JSON. This template bridges that gap, enabling seamless data manipulation and integration with subsequent processing nodes.

Key Features:
- Input Handling: Processes URL-encoded data received from amoCRM webhooks.
- Data Transformation: Converts complex, nested keys into a structured JSON array.
- Ease of Use: Simplifies access to specific fields for further workflow automation.

Setup Guide:
- Webhook Trigger Node: Configure the Webhook Trigger node to receive data from amoCRM.
- URL-Encoding Parsing: Use the provided nodes to transform the incoming URL-encoded data into a structured array.
- Access Transformed Data: Use the resulting JSON structure in subsequent nodes of your workflow, such as filtering, updating records, or triggering external systems.

Example Data Transformation:
Sample Input (URL-Encoded): amoCRM typically sends flattened bracket-notation keys, which have to be accessed like this before transformation:
$json.body['leads[update][0][custom_fields][0][id]']
Output (Structured JSON): After processing, the data is transformed into an easily accessible JSON structure:
{{ $json.leads.update['0'].id }}
This output allows you to work with clean, structured JSON, simplifying field extraction and workflow continuation.

Code Explanation:
This workflow parses URL-encoded key-value pairs using n8n nodes and restructures the data into a nested JSON object. By doing so, the template improves transparency, ensures data integrity, and makes further automation tasks straightforward.
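For reference, here is a minimal sketch of how the transformation can be done in a single n8n Code node, assuming the webhook delivers the flattened key/value pairs on `$json.body` (the amoCRM key shape shown in the comment is illustrative):

```javascript
// Minimal sketch: expand amoCRM's flattened bracket-notation keys
// (e.g. "leads[update][0][id]") into a nested JSON object.
function setDeep(target, path, value) {
  let node = target;
  path.forEach((key, i) => {
    if (i === path.length - 1) {
      node[key] = value;
    } else {
      if (node[key] === undefined) node[key] = {};
      node = node[key];
    }
  });
}

const flat = $input.first().json.body ?? {};
const result = {};

for (const [rawKey, value] of Object.entries(flat)) {
  // "leads[update][0][id]" -> ["leads", "update", "0", "id"]
  const path = rawKey.replace(/\]/g, '').split('[');
  setDeep(result, path, value);
}

return [{ json: result }];
```

After this node, downstream expressions can read the nested path shown in the output example above instead of the raw bracketed key.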
by Agent Studio
Restore backed-up workflows from GitHub to your n8n workspace. This workflow was inspired by this one that lets you back up your n8n workflows to GitHub. It restores your backed-up workflows into your workspace without creating duplicates. If something goes wrong with your instance, it will save you a lot of time restoring them.

How it works
- It retrieves the workflows saved in a GitHub repository.
- It then compares these saved workflows with the ones in your n8n workspace, based on the name (see the sketch below).
- It only creates the workflows that don't already exist.

Set up steps
- Open the "Global" node and set your own information (see Configuration below).
- Click on "Test workflow".
- It will run through all the workflows in the GitHub repository, check whether the name already exists in your workspace and, if it doesn't, create it.

Configuration
- repo.owner: your GitHub owner name
- repo.name: your GitHub repository name
- repo.path: the path within the GitHub repository
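A minimal sketch of the name-based duplicate check as an n8n Code node, assuming one earlier node returns the workflows read from GitHub and another returns the workflows already in the workspace (the node names used in `$()` are illustrative, not the template's exact names):

```javascript
// Keep only the backed-up workflows whose name is not in the workspace yet.
const fromGitHub = $("Get GitHub Workflows").all().map((i) => i.json);
const existingNames = new Set(
  $("Get Workspace Workflows").all().map((i) => i.json.name)
);

const toCreate = fromGitHub.filter((wf) => !existingNames.has(wf.name));

// One item per workflow that still needs to be created via the n8n API node.
return toCreate.map((wf) => ({ json: wf }));
```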
by Geoffrey Saxena
👤 Who is this for?
This workflow is great for n8n users who want to prevent duplicate or overlapping workflow runs. If you're a developer, DevOps engineer, or automation enthusiast managing tasks like database updates, syncing tools, or hitting rate-limited APIs, this one’s for you.

🧩 What problem does this solve?
In the real world, automations can get triggered at the same time—whether that’s because of multiple webhook calls, overlapping schedules, or retries. And when two workflows try to do the same thing at once (like updating a record or syncing data), it can cause conflicts, data corruption, or wasted API calls. This workflow helps avoid that problem by using Redis as a lock system, so only one instance runs at a time. Think of it like putting up a “🚧 Workflow in Progress” sign while your logic is running.

⚙️ What this workflow does
- When the workflow starts, it tries to set a Redis key as a lock with a short expiry (see the sketch below).
- If the lock is free: your main business logic runs, and once it's done, the lock is cleared.
- If the lock is already taken (i.e., another run is in progress): the workflow will wait and retry a few times.
- If a duplicate request shows up while one is already being processed: it skips that duplicate to avoid unnecessary work.
- You can customize both the timeout and retry logic to match your needs.

🛠️ Setup guide
To use this template:
- You’ll need access to a Redis instance (either self-hosted or managed, like Upstash, Redis Cloud, etc.).
- Set up your Redis credentials in the n8n Redis node.
- Swap out the webhook node with your actual trigger or logic.
- Adjust the lock timeout to match how long your task typically takes.

> 💡 Bonus Tip: Use this pattern wherever you need idempotency or want to avoid duplicate processing.

🧪 Example use case
Let’s say you have a workflow that syncs ClickUp tickets to Google Sheets. It runs daily at 9 AM and updates tickets, adds notes, and makes sure nothing is missed. But what if two runs start at the same time? Or someone triggers a manual sync while the scheduled one is still working? By wrapping that whole sync inside this Redis locking template, you can make sure it only runs one at a time, saving your APIs (and your sanity).
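For reference, this is roughly the locking pattern the workflow implements, shown here as a standalone Node.js sketch with the `ioredis` client. The key name, TTL, and retry values are illustrative; the template itself performs the equivalent steps with n8n nodes.

```javascript
// Redis lock pattern: SET ... NX EX acquires the lock only if no one holds it.
const Redis = require("ioredis");
const redis = new Redis(); // defaults to localhost:6379

const LOCK_KEY = "lock:clickup-sync";   // illustrative key name
const LOCK_TTL_SECONDS = 120;            // should exceed the longest expected run
const MAX_RETRIES = 5;
const RETRY_DELAY_MS = 10_000;

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runWithLock(task) {
  for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
    // Returns "OK" only if the key did not already exist.
    const acquired = await redis.set(LOCK_KEY, "locked", "EX", LOCK_TTL_SECONDS, "NX");
    if (acquired === "OK") {
      try {
        return await task();
      } finally {
        await redis.del(LOCK_KEY); // release the lock when done
      }
    }
    await sleep(RETRY_DELAY_MS); // another run holds the lock; wait and retry
  }
  console.log("Lock still held after retries; skipping this duplicate run.");
}

runWithLock(async () => {
  // ... your main business logic (e.g. the ClickUp -> Google Sheets sync) ...
});
```

The expiry is what keeps the lock safe: even if a run crashes before releasing it, the key disappears after the TTL and the next run can proceed.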
by Alex Huy
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description
This n8n workflow automatically scrapes Airbnb listings from a specified location and saves the data to a Google Sheet. It performs pagination to collect listings across multiple pages, extracts detailed information for each property, and organizes the data in a structured format for easy analysis.

How it Works
The workflow operates through these high-level steps:
1. Search Initialization: Starts with an Airbnb search for a specific location (London) with defined check-in/check-out dates and guest count.
2. Pagination Loop: Automatically processes multiple pages of search results using cursor-based pagination.
3. Data Extraction: Parses listing information including names, prices, ratings, reviews, and URLs (see the sketch below).
4. Detail Enhancement: Fetches additional details for each listing (house rules, highlights, descriptions, amenities).
5. Data Storage: Saves all collected data to a Google Sheet with proper formatting.
6. Loop Control: Continues until reaching the page limit (2 pages) or no more results are available.

Setup Steps
Prerequisites
- n8n instance with MCP (Model Context Protocol) support
- Google Sheets API credentials configured
- Airbnb MCP client properly set up

Configuration Steps
1. Configure MCP Client
   - Set up the Airbnb MCP client with credential ID:
   - Ensure the client has access to the airbnb_search and airbnb_listing_details tools.
2. Google Sheets Setup
   - Create a Google Sheet with ID: 15IOJquaQ8CBtFilmFTuW8UFijux10NwSVzStyNJ1MsA
   - Configure Google Sheets OAuth2 credentials (ID: 6YhBlgb8cXMN3Ra2)
   - Ensure the sheet has these column headers: id, name, url, price_per_night, total_price, price_details, beds_rooms, rating, reviews, badge, location, houseRules, highlights, description, amenities
3. Search Parameters
   - Location: "London" (can be modified in the "Airbnb Search" node)
   - Adults: 7
   - Children: 1
   - Check-in: "2025-08-14"
   - Check-out: "2025-08-17"
   - Page limit: 2 (can be adjusted in the "If1" condition node)
4. Execution
   - Use the manual trigger "When clicking 'Execute workflow'" to start the process.
   - Monitor the workflow execution through the n8n interface.
   - Check the Google Sheet for populated data after completion.

Key Features
- Automatic Pagination: Processes multiple pages without manual intervention
- Comprehensive Data: Extracts both basic listing info and detailed property information
- Error Handling: Includes JSON parsing error handling and data validation
- Batch Processing: Uses split batches for efficient processing of individual listings
- Real-time Updates: Appends new data to existing Google Sheet records

Output Data Structure
Each listing contains:
- Basic info: ID, name, URL, pricing details, room/bed count
- Ratings: Average rating and review count
- Location: Latitude and longitude coordinates
- Enhanced details: House rules, highlights, descriptions, amenities
- Metadata: Page number, check-in/out dates, badges
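As an illustration of the data-extraction and error-handling steps, here is a minimal Code-node sketch that parses one page of search results. The `text`, `searchResults`, `price.rate`, and `paginationInfo` field names are assumptions for the sketch, not the exact MCP tool output:

```javascript
// Parse one page of Airbnb search results with basic JSON error handling.
const raw = $input.first().json.text ?? "{}";

let parsed;
try {
  parsed = JSON.parse(raw);
} catch (error) {
  // Surface parsing problems instead of silently dropping the page.
  return [{ json: { error: true, message: `Invalid JSON from search: ${error.message}` } }];
}

// Map each listing into the flat shape expected by the Google Sheet columns.
return (parsed.searchResults ?? []).map((l) => ({
  json: {
    id: l.id,
    name: l.name,
    url: l.url,
    price_per_night: l.price?.rate,
    rating: l.rating,
    reviews: l.reviewsCount,
    // Carry the cursor on every item so a later If node can decide whether
    // to fetch the next page or stop at the page limit.
    nextCursor: parsed.paginationInfo?.nextCursor ?? null,
  },
}));
```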
by Ventsislav Minev
Automated Email Attachment Organizer
Automatically process labeled emails with attachments into organized Google Drive folders.

Who Is This For?
**Teams or Individuals** needing to:
- Automatically sort invoices, receipts, and files.
- Organize client documents by date.
- Verify sender emails against a whitelist.
- Timestamp files to avoid duplicate names.

What Does This Workflow Solve?
- 🕒 Manual Email Sorting: Saves time by automating the organization of email attachments.
- 📂 Disorganized Cloud Storage: Ensures attachments are neatly stored in Google Drive folders.
- 📧 Unverified Sender Issues: Filters and validates emails using a whitelist.
- 🔄 Duplicate Filenames: Uses timestamps to ensure every file name is unique (see the sketch below).

Setup Guide
1. Pre-Requisites
- **Whitelist Sheet**: Make a copy of the Example Whitelist Sheet.
- **Gmail Filter**: Create a filter in Gmail to label emails with attachments.
  To create a Gmail filter:
  1. Open your Gmail Inbox.
  2. Click the search bar and select "Show search options".
  3. Enter your criteria (e.g., type has:attachment).
  4. Click "Create filter".
  5. Choose "Apply the label: Custom_Label" and save.
2. Credentials Setup
Make sure your n8n instance is connected with:
- **Gmail Account** (via OAuth2)
- **Google Drive Account** (via OAuth2)
- **Google Sheets** (via OAuth2)
3. Configure Your n8n Workflow Nodes
   1. Trigger and Email Retrieval
      - **Gmail Trigger**: Set the check interval and filters for emails (i.e. emails labeled with Custom_Label).
   2. Whitelist Settings
      - **Lookup in Sheets**: Searches for a row with the sender email. Configure this node to point to your whitelist spreadsheet containing two columns: |email|company|
   3. File Storage Location
      - Configure the parent folder in which to create sub-folders and store files, in the "Create Company Folder" node's Parent Folder dropdown.

Final Steps
Test your workflow: run the workflow to verify emails are processed and files are uploaded correctly. Happy Automating!
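The duplicate-filename protection boils down to prefixing a timestamp before upload. A minimal Code-node sketch, assuming the attachment arrives as binary data named `attachment_0` (the binary property name and output field are illustrative):

```javascript
// Build a unique, timestamped file name for each incoming attachment.
const items = $input.all();

return items.map((item) => {
  const original = item.binary?.attachment_0?.fileName ?? "attachment";
  const dot = original.lastIndexOf(".");
  const base = dot > 0 ? original.slice(0, dot) : original;
  const ext = dot > 0 ? original.slice(dot) : "";

  // Prefix an ISO timestamp so two invoices named "invoice.pdf" never collide.
  const stamp = new Date().toISOString().replace(/[:.]/g, "-");
  item.json.uniqueFileName = `${stamp}_${base}${ext}`;
  return item;
});
```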
by Romain Jouhannet
Linear Project/Issue Status and End Date to Productboard Feature Sync

Sync project and issue data between Linear and Productboard to keep teams aligned. This workflow updates Productboard features with the status and end date from Linear projects, or the due date from Linear issues. It ensures consistent data and sends a Slack notification whenever changes are made.

Features
- Listens for updates in Linear projects/issues.
- Maps Linear statuses to Productboard feature statuses (see the sketch below).
- Updates Productboard feature details, including the timeframe.
- Sends a Slack notification summarizing the updates.

Setup
1. Linear Credentials: Add your Linear API credentials in n8n.
2. Productboard Credentials: Configure the Productboard API credentials in n8n.
3. Linear Projects or Issues: Select the Linear project(s) or issue(s) you want to monitor for updates.
4. Productboard Custom Field: Create a custom field in Productboard named "Linear". This field should store the URL of the Linear project or issue you want to sync. Retrieve the UUID of the custom field in Productboard and set it up in the "Get Productboard Feature ID" node.
5. Slack Notification: Update the Slack node with the desired Slack channel ID.
6. Activate the Workflow: Enable the workflow to automatically sync data when triggered by updates in Linear.
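The status mapping can be pictured as a simple lookup from Linear state names to Productboard feature statuses. A minimal Code-node sketch, where the mapping values and the `state`/`targetDate` field names are assumptions for illustration (your Productboard workspace defines the actual status names):

```javascript
// Map a Linear state to a Productboard feature status and carry the end date.
const STATUS_MAP = {
  backlog: "Candidate",
  planned: "Planned",
  started: "In progress",
  completed: "Released",
  canceled: "Won't do",
};

const { state, targetDate } = $input.first().json; // assumed Linear payload fields

return [{
  json: {
    productboardStatus: STATUS_MAP[state] ?? "Candidate",
    // Productboard timeframe end date, taken from the Linear project end date
    // or the issue due date.
    timeframeEnd: targetDate ?? null,
  },
}];
```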
by Risper
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How It Works
This n8n workflow automatically discovers high-quality business leads from Reddit posts by analysing posts across targeted subreddits.
- Loads your business profile from a connected Google Sheet.
- Uses AI to identify relevant subreddits where your potential customers engage.
- Generates intent-based Reddit search queries based on your services, keywords, and client pain points.
- Searches Reddit in real time using the generated queries.
- Classifies posts based on whether they show lead potential.
- Analyses high-potential posts for service fit, urgency, and estimated value.
- Filters and scores leads to prioritize high-conversion opportunities (see the sketch below).
- Saves the most promising leads to a dedicated Google Sheet.
- Sends Slack alerts to notify your sales team for immediate follow-up.

Requirements
Before using this workflow, ensure the following services are connected and configured:
- Google Sheets (OAuth2): Reads your business profile and writes qualified leads.
- Reddit (OAuth2): Performs Reddit post searches based on generated queries.
- Google Gemini API: Analyses posts, generates queries, and extracts insights.
- Slack API: Notifies your team with qualified lead summaries.

Google Sheets Setup
You will need two Google Sheets:

1. Business Profile Sheet (Input)
This sheet contains a single row describing your service business. The workflow reads this to generate relevant subreddit selections and search queries.
Required fields (as headers in row 1): profession, industry, primary_services, service_keywords, target_client_profile, pain_points, intent_signals, urgency_indicators, price_range

2. Reddit Leads Sheet (Output)
This sheet stores high-quality Reddit posts identified as potential leads. The workflow appends or updates rows based on post_id to avoid duplication.
Expected columns: post_id, post_url, post_title, post_post, post_subreddit, post_date
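A minimal sketch of the filter-and-score step referenced above, assuming the AI analysis has already attached fields such as `lead_score` and `service_fit` to each post (the field names and the threshold are illustrative):

```javascript
// Keep only high-potential leads and sort them so the strongest come first.
const MIN_SCORE = 7; // illustrative cut-off for "high-conversion opportunities"

const qualified = $input.all().filter((item) => {
  const { lead_score = 0, service_fit = false } = item.json;
  return service_fit && lead_score >= MIN_SCORE;
});

// Strongest leads land at the top of the Google Sheet and the Slack summary.
qualified.sort((a, b) => (b.json.lead_score ?? 0) - (a.json.lead_score ?? 0));

return qualified;
```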
by Blockia Labs
Time Logging on Clockify Using Slack

How it works
This workflow simplifies time tracking for teams and agencies by integrating Slack with Clockify. It enables users to log, update, or delete time entries directly within Slack, leveraging an AI-powered assistant for seamless and conversational interactions.
Key features include:
- **Effortless Time Logging**: Create and manage time entries in Clockify without leaving Slack.
- **AI-Powered Assistant**: Get step-by-step guidance to ensure accurate and efficient time logging.
- **Project and Client Management**: Retrieve project and client information from Clockify effortlessly.
- **Overlap Prevention**: Avoid overlapping entries with built-in time validation (see the sketch below).
- **Automated Descriptions**: Generate ethical, grammatically correct descriptions for time logs.

Set up steps
1. Prepare your integrations
   - Ensure you have active accounts for both Slack and Clockify.
   - Generate your Clockify API credentials for integration.
2. Import the workflow
   - Download and import the workflow template into your n8n instance.
   - Configure the workflow to connect with your Slack and Clockify accounts.
3. Configure the workflow
   - Add your Clockify API credentials in the workflow settings.
   - Set up the Slack Trigger to listen for app mentions or specific commands.
4. Test the workflow
   - Use Slack to create a time entry and verify it in Clockify.
   - Test updating and deleting existing entries to ensure smooth functionality.
   - Check for any overlapping time logs or incorrect data entries.

Why use this workflow?
- **Efficiency**: Eliminate the need to switch between tools for time tracking.
- **Accuracy**: AI-driven validation ensures error-free entries.
- **Automation**: Simplify repetitive tasks like updating or deleting time logs.
- **Proactive Guidance**: Conversational assistant ensures smooth operations.
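The overlap prevention amounts to a standard interval check against the entries already logged in Clockify. A minimal Code-node sketch, where the node name `Get Time Entries` and the `newStart`/`newEnd` fields are assumptions for illustration:

```javascript
// Reject a new time entry if it overlaps any existing entry for the user.
const { newStart, newEnd } = $input.first().json;
const existing = $("Get Time Entries").all().map((i) => i.json.timeInterval);

const start = new Date(newStart);
const end = new Date(newEnd);

// Two intervals overlap when each one starts before the other one ends.
const overlaps = existing.some(
  (t) => new Date(t.start) < end && start < new Date(t.end)
);

if (overlaps) {
  return [{ json: { ok: false, reason: "This entry overlaps an existing time log." } }];
}
return [{ json: { ok: true, start: newStart, end: newEnd } }];
```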
by Nazmy
Based on Jonathan & Solomon's work.

> The only addition I've made is a Set node. This node organizes workflows into subfolders within the GitHub repository based on their respective tags (see the sketch below).

How it works
This workflow will back up your workflows to GitHub.
- It uses the n8n API node to export all workflows.
- It then loops over the data and checks in GitHub whether a file already exists that uses the workflow's ID.
- Once checked, it will:
  - update the file on GitHub if it exists;
  - create a new file if it doesn't exist;
  - ignore it if it is unchanged.

Who is this for?
People wanting to back up their workflows outside the server for safety purposes or to migrate to another server.
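The added Set step can be thought of as deriving a file path for each exported workflow from its first tag. A minimal Code-node equivalent, where the `backups/` prefix and the `untagged` fallback folder name are illustrative choices:

```javascript
// Derive a tag-based subfolder path for each workflow exported via the n8n API.
return $input.all().map((item) => {
  const wf = item.json;
  const folder = wf.tags?.[0]?.name ?? "untagged";

  // e.g. "backups/marketing/My Workflow.json"
  item.json.filePath = `backups/${folder}/${wf.name}.json`;
  return item;
});
```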
by Yang
📽️ What this workflow does
This workflow turns a user-submitted form with country or animal names into a cinematic video with animated scenes and immersive ambient audio. Using GPT-4 for prompt generation, Dumpling AI for visual creation, Replicate for motion animation, ElevenLabs for sound generation, and Creatomate for video stitching, it fully automates video production — from raw idea to rendered file.

🎯 What problem is this solving?
Creating engaging multimedia content can take hours. This workflow automates the entire process of ideation, design, and rendering of high-quality cinematic clips, eliminating the need for manual video editing or audio production.

👥 Who is this for?
- Content creators and educators
- Digital artists and storytellers
- Marketers or YouTubers creating short-form visual content
- No-code/AI automation enthusiasts

⚙️ Setup Instructions
✅ Step 1: Google Sheet
- Create a Google Sheet with two columns: Title, Generated videos.
- Update the Sheet ID and tab name in the final node.
✅ Step 2: Google Drive
- Create two folders: one for ambient audio tracks, one for final generated videos.
- Update the folder IDs in both Google Drive nodes.
✅ Step 3: Credentials Setup
Make sure all your API tokens are saved as credentials in n8n. This workflow uses the following integrations: OpenAI (GPT-4), Dumpling AI (via HTTP header), Replicate.com, ElevenLabs, Google Drive, Google Sheets, Creatomate.
✅ Step 4: Form Fields
Ensure your trigger form includes these fields: Title; Country 1, Country 2, Country 3, Country 4; Style (e.g., cinematic, epic, fantasy, noir, etc.).

🧩 How it works
1. **User Form Submission**: Kicks off the workflow with the required inputs.
2. **Format Inputs**: Combines all 4 countries/animals into a single array (see the sketch below).
3. **GPT-4: Generate Visual Prompts**: Uses GPT-4 to create rich cinematic descriptions per animal/country.
4. **Dumpling AI: Create Images**: Each description becomes a high-quality visual.
5. **GPT-4: Create Motion Prompts**: Each image prompt is rewritten into a motion-based video prompt.
6. **Replicate: Animate**: Prompts and images are sent to Replicate’s model for animation.
7. **GPT-4: Generate Sound Prompt**: Based on the style, GPT-4 creates an ambient sound idea.
8. **ElevenLabs: Create Ambient Audio**: Audio is generated and uploaded to Google Drive.
9. **Creatomate: Stitch All Media**: All 4 motion videos and the audio track are stitched into one cinematic output.
10. **Upload to Google Drive + Log to Sheet**: The final video is saved in Drive and logged in Sheets with its title and link.

🛠️ How to Customize
- 🎨 Modify GPT prompts for different themes (e.g., horror, fantasy, sci-fi).
- 🧠 Swap animals for characters, objects, or locations.
- 🎧 Replace ambient sound with ElevenLabs voiceovers or music.
- 📂 Add metadata logging (generation time, duration, tags).
- 🧪 Try using alternative video tools like Pika Labs or Runway ML.

✅ Requirements
- n8n self-hosted or cloud instance
- Active accounts for: OpenAI, Dumpling AI, Replicate, ElevenLabs, Creatomate
- Google credentials set up for Drive + Sheets

This is a perfect end-to-end automation that showcases the power of AI + automation for video storytelling.
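A minimal sketch of the "Format Inputs" step, assuming the form trigger exposes fields named Title, Style, and Country 1 through Country 4 (the exact field names depend on your form configuration):

```javascript
// Combine the four country/animal form fields into a single array.
const form = $input.first().json;

const subjects = [form["Country 1"], form["Country 2"], form["Country 3"], form["Country 4"]]
  .filter((value) => value && value.trim() !== "");

return [{
  json: {
    title: form.Title,
    style: form.Style,
    subjects, // e.g. ["Japan", "Iceland", "Kenya", "Peru"]
  },
}];
```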
by Dele Tosh
🚀 Automate your social media presence! This workflow duo automatically curates content from your Wallabag RSS feeds, generates platform-specific posts using AI, and publishes them—complete with AI-generated images.

⚙️ Setup & Configuration

Required Credentials & Services:
To run this workflow, you will need to set up the following credentials in n8n:
- **Wallabag RSS Feeds:** This workflow uses Wallabag as your content curation service. Wallabag is the most cost-effective option—easy to self-host or use as a paid service. You'll need to generate access tokens for RSS feed access.
- **Airtable API Key:** To create and update records in your "Content Store" database.
- **LLM Provider API Key:** To power the social media content generation. The demo uses **Groq (llama-3.3-70b-versatile)**, but this can be replaced with any preferred LLM (OpenAI, Anthropic, etc.).
- **GetLate API Key:** To authenticate and post to your social media platforms.
- **Imgbb API Key:** To host the AI-generated images.
- **An Image Generation Service API Key:** For creating images from prompts. The demo uses **Hugging Face (stable-diffusion-xl-base-1.0)**, but this can be swapped for any other service (Fal.ai, Stability AI, etc.).

Key Setup Requirement: Define Your Tagging Convention
Before running this workflow, establish a consistent tagging system in Wallabag where each tag corresponds to a specific social media platform. For example:
- #to-share-linkedin for LinkedIn content
- #to-share-bluesky for Bluesky content
- #to-share-instagram for Instagram content

Adding More Feeds & Platforms:
- **New Feeds:** Simply duplicate the example sub-workflows and update the RSS feed URL and target platform information for each new tag/platform combination.
- **New Platforms:** To add support for additional social platforms, duplicate one of the existing platform sub-workflows (LinkedIn or Bluesky) and update the platform-specific parameters, prompts, and GetLate API settings for your new platform.

Airtable Database Schema: "RSS Feed - Content Store"
The workflow uses an Airtable base with the following fields to track content from ingestion to publication:
- id (Primary Field): Formula, for unique record ID (RECORD_ID())
- audience_targeted: Long text
- author: Long text
- character_count: Number
- content_markdown: Long text
- cta_used: Long text
- feed_id: Single line text
- goal_applied: Long text
- image_filename: Single line text
- image_id: Single line text
- image_link: URL
- image_prompt: Long text
- is_posted: Number, default is 0
- platform: Single line text
- post_text: Long text
- suggested_hashtags: Long text
- title: Long text
- tone_applied: Long text
- article_url: URL

Customizable User Inputs:
This workflow is built for flexibility. Key inputs you can customize include:
- **Wallabag RSS Feed URL & Platform Tag:** The specific feed and platform-specific tag (e.g., #to-share-linkedin) to monitor in Workflow #1.
- **Target Social Platform:** Defined per feed in the "Edit Field" node.
- **Content Generation Schedule:** The frequency for auto-posting in the Schedule Trigger.
- **Brand Voice & LLM Parameters:** The tone, style, and specific instructions for the AI in the "Set Custom SMCG Prompt" node.
- **Platform-Specific Prompts:** The template used to generate posts for each social network (Instagram, LinkedIn, etc.).
- **Posting Behavior & Image Generation:** Configured within the **SMCG (Social Media Content Generation) node**. This is where you set the posting mode (immediate vs. draft) and define a boolean for each platform to enable or disable AI-generated images for its posts.
📥 Workflow 1: RSS Aggregator & Content Store
- **RSS Trigger** → Pulls tagged articles from Wallabag feeds using platform-specific tags
- **Platform Assignment** → Sets target social platform based on tag
- **Content Conversion** → HTML to Markdown formatting
- **Airtable Storage** → Saves articles to content database

🌟 Adding New RSS Feeds: To monitor additional Wallabag feeds for different content sources, simply duplicate the existing RSS feed sub-workflow and update the RSS URL with your new Wallabag access token and platform-specific tag. Each feed can target a different social platform or content category.

🔄 Workflow 2: AI Content Generator & Publisher
- **Schedule Trigger** → Runs on your preferred frequency
- **Content Selection** → Pulls unpublished articles from Airtable (see the sketch below)
- **AI Configuration** → Sets brand voice, posting behavior, and image generation preferences
- **Platform Routing** → Directs to appropriate social platform workflow
- **AI Content Generation** → Creates posts and image prompts using LLM
- **Image Generation** → Creates & hosts images when enabled
- **Social Publishing** → Posts to platforms via GetLate API
- **Database Update** → Marks content as published in Airtable

🌟 Adding New Platform Support: To extend this workflow to additional social platforms, simply duplicate one of the existing platform sub-workflows and update the platform-specific parameters, LLM prompts, and GetLate API configuration for your target platform.
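For the "Content Selection" step, the unpublished-articles query can be expressed as an Airtable filter formula built from the schema above. A minimal sketch, where the platform value is illustrative and the formula is passed to the Airtable search via an expression:

```javascript
// Build an Airtable filterByFormula that selects unpublished records
// for one platform; after a successful post, the same record is updated
// with is_posted = 1.
const platform = "linkedin"; // illustrative platform value
const filterByFormula = `AND({is_posted} = 0, {platform} = "${platform}")`;

return [{ json: { filterByFormula } }];
```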
by InfyOm Technologies
✅ What problem does this workflow solve?
Many websites lack a smart, searchable interface, and visitors often leave due to unanswered questions. This workflow transforms any website into a Retrieval-Augmented Generation (RAG) chatbot—automatically extracting content, creating embeddings, and enabling real-time, context-aware chat on your own site.

⚙️ What does this workflow do?
- Accepts a website URL through a form trigger.
- Fetches and cleans website content.
- Parses content into smaller sections.
- Generates vector embeddings using OpenAI (or your embedding model).
- Stores embeddings and metadata in Supabase’s vector database.
- When a user asks a question:
  - Searches Supabase for relevant chunks via similarity search.
  - Retrieves matching content as context.
  - Sends context + question to OpenAI to generate an accurate answer.
  - Returns the AI-generated response to the user in the chat interface.

🔧 Setup Instructions
🖥️ Website Form Trigger
- Use a Form / HTTP Trigger to submit website URLs for indexing.
📥 Content Extraction & Chunking
- Use HTTP nodes to fetch HTML.
- Clean and parse it (e.g., remove scripts, ads).
- Use a Function node to split it into manageable text chunks (see the sketch below).
🧠 Embedding Generation
- Call OpenAI (or Cohere) to generate embeddings for each chunk.
- Insert vectors and metadata into Supabase via its API or the n8n Supabase node.
💬 User Query Handling
- Use a Chat Trigger (webhook/UI) to receive user questions.
- Convert the question into an embedding.
- Query Supabase with similarity search (e.g., a match_documents RPC).
- Retrieve the top-matching chunks and feed them into OpenAI with the user question.
- Return the reply to the user.
🛠 AI & Database Setup
- **OpenAI API key** for embedding and chat.
- A Supabase project with:
  - the vector extension enabled
  - tables for document chunks and embeddings
  - a similarity search function like match_documents

💬 How to Embed the Chat Widget on Your Website
You can add the chatbot interface to your website with a simple JavaScript snippet.
Steps:
1. Open the "When chat message received" node.
2. Copy the Chat URL.
3. Make sure the "Make Chat Publicly Available" toggle is enabled.
4. Make sure the mode is "Embedded Chat".
5. Follow the instructions given on this package here.

🧠 How it Works
1. Submit URL → Form Trigger
2. Fetch Website Content → HTTP Request
3. Clean & Chunk Content → Function Node
4. Make Embeddings (OpenAI/Cohere)
5. Store in Supabase → embeddings + metadata
6. User Chat → Chat Trigger
7. Search for Similar Content → Supabase similarity match
8. Generate Answer → OpenAI completion with context
9. Send Reply → Chat interface returns answer

🗂 Why Supabase?
Supabase offers a scalable Postgres-based vector database with extensions like pgvector, making it easy to:
- Store vector data alongside metadata
- Run ANN (Approximate Nearest Neighbor) similarity searches
- Integrate seamlessly with n8n and your chatbot UI

👤 Who can use this?
- 📝 Documentation websites
- 👩💼 Support portals
- 🏢 Product/Landing pages
- 🛠 Internal knowledge bases
Perfect for anyone who wants a smart, website-specific chatbot without building an entire AI stack from scratch.

🚀 Ready to Deploy?
Plug in your:
- ✅ OpenAI API Key
- ✅ Supabase project credentials
- ✅ Chat UI or webhook endpoint
… and launch your AI-powered, website-specific RAG chatbot in minutes!
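A minimal sketch of the "Clean & Chunk Content" step as an n8n Code node, assuming the HTTP Request node returns the page HTML in a `data` field; the chunk size and overlap values are illustrative:

```javascript
// Strip markup from the fetched page and split the text into overlapping chunks.
const html = $input.first().json.data ?? "";

// Remove scripts, styles, and tags, then collapse whitespace.
const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, " ")
  .replace(/<style[\s\S]*?<\/style>/gi, " ")
  .replace(/<[^>]+>/g, " ")
  .replace(/\s+/g, " ")
  .trim();

const CHUNK_SIZE = 1000; // characters per chunk
const OVERLAP = 100;     // keep some context between neighbouring chunks

const chunks = [];
for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(text.slice(start, start + CHUNK_SIZE));
}

// One item per chunk, ready for the embedding node and the Supabase insert.
return chunks.map((chunk, index) => ({ json: { index, chunk } }));
```

The overlap means a sentence cut at a chunk boundary still appears intact in the next chunk, which keeps similarity search results coherent.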