by Dave Bernier
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n. It aims to ease the process of deploying workflows from GitHub, and it has a companion repository that developers might find useful. See below for more details.

**How it works**

Automatically import and deploy n8n workflows from your GitHub repository to your production n8n instance using a secured, webhook-based approach. This template enables teams to keep their workflows under version control while ensuring seamless deployment through a CI/CD pipeline. The workflow:

- Receives webhook notifications from GitHub when changes are pushed to your repository
- Lists all files in the repository and filters for .json workflow files
- Downloads each workflow file and saves it locally
- Imports all workflows into n8n using the CLI import command
- Cleans up temporary files after a successful import

To trigger the deployment, send a POST request to your webhook, authenticated with the basic auth credentials you set up, with the following body (a minimal example call is shown at the end of this section):

{ "owner": "GITHUB_REPO_OWNER_NAME", "repository": "GITHUB_REPOSITORY_NAME" }

**Set up steps**

Once you have imported this template into n8n:

- Set up the webhook basic auth credentials
- Set up the GitHub credentials
- Activate the workflow!

**Companion repository**

There is a companion repository located at https://github.com/dynamicNerdsSolutions/n8n-git-flow-template that has a GitHub Action already set up to work with this workflow. It provides a complete development environment with:

- A local n8n instance via Docker
- Automated workflow export and commit scripts
- Version control integration
- A CI/CD pipeline setup

This setup allows teams to maintain a clean separation between development and production environments while ensuring reliable workflow deployment.
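For reference, here is a minimal sketch of how a CI step (for example, one run by the companion repository's GitHub Action) might call that webhook. The URL and credentials are placeholders, not values from this template; use the webhook URL from the Webhook node and the basic auth credentials you configured in n8n.

```javascript
// Minimal sketch (Node.js 18+, run as an ES module): trigger the deployment webhook.
const user = 'deploy-user';          // placeholder: your basic auth user
const pass = 'deploy-password';      // placeholder: your basic auth password
const webhookUrl = 'https://your-n8n-instance.example.com/webhook/deploy'; // placeholder URL

const res = await fetch(webhookUrl, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Basic ' + Buffer.from(`${user}:${pass}`).toString('base64'),
  },
  body: JSON.stringify({
    owner: 'GITHUB_REPO_OWNER_NAME',
    repository: 'GITHUB_REPOSITORY_NAME',
  }),
});
console.log(res.status); // expect a success status once the workflow is active
```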
by Yang
**Who is this for?**

This workflow is perfect for operations teams, accountants, e-commerce businesses, or finance managers who regularly process digital invoices and need to automate data extraction and record-keeping.

**What problem is this workflow solving?**

Manually reading invoice PDFs, extracting relevant data, and entering it into spreadsheets is time-consuming and error-prone. This workflow automates that process: watching a Google Drive folder, extracting structured invoice data using Dumpling AI, and saving the results into Google Sheets.

**What this workflow does**

- Watches a specific Google Drive folder for new invoices.
- Downloads the uploaded invoice file.
- Converts the file into Base64 format.
- Sends the file to Dumpling AI's extract-document endpoint with a detailed parsing prompt.
- Parses Dumpling AI's JSON response using a Code node (see the sketch at the end of this section).
- Splits the items array into individual rows using the Split Out node.
- Appends each invoice item to a preformatted Google Sheet along with the full header metadata (order number, PO, addresses, etc.).

**Setup**

Google Drive
- Create or select a folder in Google Drive and place the folder ID in the trigger node.
- Make sure your n8n Google Drive credentials are authorized for access.

Google Sheets
- Create a Google Sheet with the following headers: Order number, Document Date, Po_number, Sold to name, Sold to address, Ship to name, Ship to address, Model, Description, Quantity, Unity price, Total price
- Paste the Sheet ID and sheet name (Sheet1) into the Google Sheets node.

Dumpling AI
- Sign up at Dumpling AI.
- Go to your account settings and generate your API key.
- Paste this key into the HTTP header of the Dumpling AI request node.
- The endpoint used is: https://app.dumplingai.com/api/v1/extract-document

Prompt (already included)
- This prompt extracts: order number, document date, PO number, shipping/billing details, and detailed line items (model, quantity, unit price, total).

**How to customize this workflow to your needs**

- Adjust the Google Sheet fields to fit your invoice structure.
- Modify the Dumpling AI prompt if your invoices have additional or different data points.
- Add filtering logic if you want to handle different invoice types differently.
- Replace Google Sheets with Airtable or a database if preferred.
- Use a different trigger, such as an email attachment, if invoices come via email.
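As referenced above, here is a minimal sketch of what the parsing Code node could look like. The field names ("results", "items", "order_number", and so on) are assumptions, since the exact shape of the extract-document response is not shown in this template; inspect a real response and rename accordingly.

```javascript
// n8n Code node sketch: normalize the Dumpling AI response so the Split Out node
// can fan the invoice line items out into individual rows.
const raw = $input.first().json;

// Some APIs return the extraction as a JSON string; parse defensively either way.
const extracted =
  typeof raw.results === 'string' ? JSON.parse(raw.results) : (raw.results ?? raw);

return [
  {
    json: {
      orderNumber: extracted.order_number,
      documentDate: extracted.document_date,
      poNumber: extracted.po_number,
      soldToName: extracted.sold_to_name,
      soldToAddress: extracted.sold_to_address,
      shipToName: extracted.ship_to_name,
      shipToAddress: extracted.ship_to_address,
      items: extracted.items ?? [], // the Split Out node splits this array into rows
    },
  },
];
```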
by n8n Team
This workflow creates a Slack thread when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as replies to the thread in Slack.

**Prerequisites**

- Zendesk account and Zendesk credentials.
- Slack account and Slack credentials.
- A Slack channel to create threads in.

**How it works**

The workflow listens for new tickets in Zendesk. When a new ticket is created, the workflow creates a new thread/message in Slack. The Slack thread ID is then saved in one of the ticket's fields, called "Slack thread ID". The next time a comment is added to the ticket, the workflow retrieves the Slack thread ID from the ticket's field and adds the comment to the thread/message in Slack as a reply (the threading mechanism is illustrated at the end of this section).

**Setup**

This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:

1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL goes into the "Endpoint URL" field, and the "Request method" should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as "New tickets".
7. Under "Conditions", in "Meet ALL of the following conditions", add "Status is New".
8. Under "Actions", select "Notify active webhook" and select the webhook you created previously.
9. In the JSON body, add the following: { "id": "{{ticket.id}}", "comment": "{{ticket.latest_comment_html}}" }
10. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Slack thread ID. To do so, follow the steps below:

1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as "Slack thread ID".
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
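For context, here is a minimal sketch of the Slack threading mechanism the workflow's Slack nodes rely on, written against the Slack Web API directly (Node.js 18+). The token, channel, and ticket number are placeholders.

```javascript
// The first message's `ts` value is what gets stored in the "Slack thread ID" field;
// later comments are posted with that value as `thread_ts` so they land in the thread.
const token = 'xoxb-your-bot-token'; // placeholder: bot token with chat:write scope
const channel = 'C0123456789';       // placeholder: channel the workflow posts to

async function post(body) {
  const res = await fetch('https://slack.com/api/chat.postMessage', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
    body: JSON.stringify(body),
  });
  return res.json();
}

// New ticket: create the thread and keep its `ts` as the thread ID.
const first = await post({ channel, text: 'New Zendesk ticket #1234' });
const slackThreadId = first.ts;

// Later comment: reply inside the same thread.
await post({ channel, text: 'New comment on ticket #1234', thread_ts: slackThreadId });
```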
by Damian Karzon
This workflow randomly selects recipes from a Mealie instance (optionally from a specific category) and then creates a meal plan in Mealie with those recipes.

**How it works**

- The workflow has a scheduled trigger (set to run weekly on a Friday).
- A Config node sets a few properties to configure the workflow.
- A call to the Mealie API gets the list of recipes.
- The Code node holds most of the logic: it loops through the number of recipes defined in the Config node and randomly selects a recipe from the list, making sure not to double up any recipes (see the sketch below).
- Once all the recipes are selected, it calls the Mealie API to set up the meal plan on the days.

**Setup**

- Add your Mealie API token as a credential and set it on the HTTP Request nodes.
- Set the schedule trigger to run when you like.
- Update the Config node with the config you want:
  - numberOfRecipes - Number of recipes to populate for the meal plan
  - offsetPlanDays - Number of days in the future to start the plan (0 will start it today, 1 tomorrow, etc.)
  - mealieCategoryId - The id of the category you want to pull recipes from (defaults to selecting from all recipes)
  - mealieBaseUrl - The base URL of your Mealie instance
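A minimal sketch of that Code node logic, assuming the Config node exposes numberOfRecipes and offsetPlanDays and that each recipe from the Mealie API arrives as its own item. Names and the output shape are illustrative; align them with your own nodes and with the body the Mealie meal plan endpoint expects.

```javascript
// Pick `numberOfRecipes` distinct recipes at random from the fetched recipe list.
const { numberOfRecipes, offsetPlanDays } = $('Config').first().json;
const recipes = $input.all().map((item) => item.json);

const picked = [];
const pool = [...recipes];
for (let i = 0; i < numberOfRecipes && pool.length > 0; i++) {
  const index = Math.floor(Math.random() * pool.length);
  picked.push(pool.splice(index, 1)[0]); // splice removes the pick, preventing duplicates
}

// One item per planned day, ready for the Mealie "create meal plan" HTTP Request.
return picked.map((recipe, i) => {
  const date = new Date();
  date.setDate(date.getDate() + offsetPlanDays + i);
  return { json: { date: date.toISOString().slice(0, 10), recipeId: recipe.id } };
});
```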
by Jimleuk
This n8n template demonstrates how to get started with Gemini 2.0's new bounding box detection capabilities in your workflows.

The key difference is that this enables prompt-based object detection for images, which is pretty powerful for things like contextual search over an image, e.g. "Put a bounding box around all adults with children in this image" or "Put a bounding box around cars parked out of bounds of a parking space".

**How it works**

- An image is downloaded via the HTTP node, and an "Edit Image" node is used to extract the file's width and height.
- The image is then given to the Gemini 2.0 API to parse, which returns the coordinates of the bounding boxes of the requested subjects. In this demo, we've asked the AI to identify all bunnies.
- The coordinates are then rescaled using the original image's width and height to correctly align them (see the sketch below).
- Finally, to measure the accuracy of the object detection, we use the "Edit Image" node to draw the bounding boxes onto the original image.

**How to use**

Really up to the imagination! Perhaps a form of grounding for evidence-based workflows, or a higher form of image search, can be built.

**Requirements**

- Google Gemini for LLM

**Customising the workflow**

This template is just a demonstration of an experimental version of Gemini 2.0. It is recommended to wait for Gemini 2.0 to come out of this stage before using it in production.
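A minimal sketch of the rescaling step as an n8n Code node. It assumes Gemini returns each box as a [ymin, xmin, ymax, xmax] array normalized to a 0-1000 range (a common Gemini 2.0 output format), and that the Edit Image node exposes the original width and height; verify both against your actual data before relying on this.

```javascript
// Rescale Gemini's normalized bounding boxes to pixel coordinates on the original image.
const { width, height } = $('Edit Image').first().json; // adjust to your node's output fields
const boxes = $input.first().json.boxes ?? [];           // parsed from the Gemini response

return boxes.map((box) => {
  const [ymin, xmin, ymax, xmax] = box.box_2d; // assumed field name and ordering
  return {
    json: {
      label: box.label,
      x: Math.round((xmin / 1000) * width),
      y: Math.round((ymin / 1000) * height),
      w: Math.round(((xmax - xmin) / 1000) * width),
      h: Math.round(((ymax - ymin) / 1000) * height),
    },
  };
});
```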
by Yaron Been
Automated pipeline that extracts job listings from Upwork and exports them to Google Sheets for better organization, analysis, and team collaboration.

🚀 What It Does
- Fetches job postings based on saved searches
- Extracts key job details (title, budget, description)
- Organizes data in Google Sheets
- Updates in real-time
- Supports multiple search criteria

🎯 Perfect For
- Freelancers tracking opportunities
- Teams managing multiple projects
- Agencies monitoring client needs
- Market researchers
- Business analysts

⚙️ Key Benefits
✅ Centralized job board
✅ Easy sharing with team members
✅ Advanced filtering and sorting
✅ Historical data tracking
✅ Customizable data points

🔧 What You Need
- Upwork account
- Google account
- n8n instance
- Google Sheets setup

📊 Data Exported
- Job title and description
- Budget and hourly rate
- Client information
- Posted date
- Required skills
- Job URL

🛠️ Setup & Support
- Quick Setup: Get started in 15 minutes with our step-by-step guide
- 📺 Watch Tutorial
- 💼 Get Expert Support
- 📧 Direct Help

Streamline your job search and opportunity tracking with automated data collection and organization.
by PollupAI
LinkedIn Profile Enrichment Workflow

**Who is this for?**

This workflow is ideal for recruiters, sales professionals, and marketing teams who need to enrich LinkedIn profiles with additional data for lead generation, talent sourcing, or market research.

**What problem is this workflow solving?**

Manually gathering detailed LinkedIn profile information can be time-consuming and prone to errors. This workflow automates the process of enriching profile data from LinkedIn, saving time and ensuring accuracy.

**What this workflow does**

- Input: Reads LinkedIn profile URLs from a Google Sheet.
- Validation: Filters out already enriched profiles to avoid redundant processing.
- Data Enrichment: Uses RapidAPI's Fresh LinkedIn Profile Data API to retrieve detailed profile information.
- Output: Updates the Google Sheet with enriched profile data, appending new information efficiently.

**Setup**

- Google Sheet: Create a sheet with a column named linkedin_url and populate it with the profile URLs to enrich.
- RapidAPI Account: Sign up at RapidAPI and subscribe to the Fresh LinkedIn Profile Data API.
- API Integration: Replace the x-rapidapi-key and x-rapidapi-host values with your credentials from RapidAPI (a request sketch is shown at the end of this section).
- Run the Workflow: Trigger the workflow and monitor the updates to your Google Sheet.

**How to customize this workflow**

- **Filter Criteria**: Modify the filter step to include additional conditions for processing profiles.
- **API Configuration**: Adjust API parameters to retrieve specific fields or extend usage.
- **Output Format**: Customize how the enriched data is appended to the Google Sheet (e.g., format, column mappings).
- **Error Handling**: Add steps to handle API rate limits or missing data for smoother automation.

This workflow streamlines LinkedIn profile enrichment, making it faster and more effective for data-driven decision-making.
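A minimal sketch of the enrichment call (Node.js 18+). The host and path used here are assumptions for the Fresh LinkedIn Profile Data API; copy the exact endpoint, host header, and query parameters from the API's playground on RapidAPI before using this.

```javascript
const RAPIDAPI_HOST = 'fresh-linkedin-profile-data.p.rapidapi.com'; // assumed host, verify on RapidAPI
const linkedinUrl = 'https://www.linkedin.com/in/example/';          // from the Google Sheet row

const res = await fetch(
  `https://${RAPIDAPI_HOST}/get-linkedin-profile?linkedin_url=${encodeURIComponent(linkedinUrl)}`,
  {
    headers: {
      'x-rapidapi-key': 'YOUR_RAPIDAPI_KEY', // your RapidAPI key
      'x-rapidapi-host': RAPIDAPI_HOST,
    },
  }
);
const profile = await res.json();
console.log(profile); // map the fields you need back into the Google Sheet
```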
by phil
This workflow automates web scraping of Amazon search result pages by retrieving the raw HTML, cleaning it to retain only the relevant product elements, and then using an LLM to extract structured product data (name, description, rating, reviews, and price), before saving the results back to Google Sheets.

It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

**Who Needs Amazon Search Result Scraping?**

This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:

- **E-commerce Analysts** – Track competitor pricing, ratings, and inventory trends.
- **Market Researchers** – Collect data on product popularity and reviews for market analysis.
- **Data Teams** – Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate Marketers** – Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

**Why Use This Workflow?**

- **End-to-End Automation** – From URL list to clean JSON output in Sheets.
- **Robust HTML Cleaning** – Strips scripts, styles, unwanted tags, and noise.
- **Accurate Structured Parsing** – Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & Repeatable** – Processes thousands of URLs in batches.

**Step-by-Step: How This Workflow Scrapes Amazon**

1. Get URLs from Google Sheets – Reads a list of search result URLs.
2. Loop Over Items – Iterates through each URL in controlled batches.
3. Fetch Raw HTML – Uses BrightData's Web Unlocker proxy to retrieve the page.
4. Clean HTML – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (a minimal sketch of this step is shown at the end of this section).
5. Extract with LLM – Passes the cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results – Appends the JSON fields as columns back into a "results" sheet in Google Sheets.

**Customization: Tailor to Your Needs**

- **Adaptable Sites** – This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- **Whitelist Tags** – Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema Changes** – Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate Data Sink** – Instead of Sheets, route the output to a database, CSV file, or webhook.

🔑 Prerequisites

- **Google Sheets Credentials** – OAuth credentials configured in n8n.
- **BrightData API token** – Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API Key** – Configured for the LangChain node to call GPT-4.
- **n8n Instance** – Self-hosted or cloud, with sufficient quota for HTTP requests and LLM calls.

🚀 Installation & Setup

1. Configure Credentials
   - In n8n, set up Google Sheets OAuth under "Credentials."
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. Import the Workflow
   - Copy the JSON workflow into n8n's "Import" dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. Test & Run
   - Add a few Amazon search URLs to your "track" sheet.
   - Execute the workflow and verify that product data appears in your "results" sheet.
   - Tweak the batch size or parser schema as needed.

⚠ Important

- **API Rate Limits** – Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon's Terms** – Ensure your scraping complies with Amazon's policies and legal requirements.

**Summary**

This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets, streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole
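As referenced in the Clean HTML step, here is a minimal sketch of that sanitization logic as an n8n Code node. It is a simplified approximation of the workflow's Function node, not a copy of it; the allowedTags whitelist and the input property name (data) are assumptions to adjust.

```javascript
// Strip noise from the fetched page so only whitelisted markup reaches the LLM.
const html = $input.first().json.data ?? '';

const allowedTags = ['div', 'span', 'p', 'a', 'h1', 'h2', 'h3', 'ul', 'li', 'img'];

let cleaned = html
  .replace(/<!DOCTYPE[^>]*>/gi, '')
  .replace(/<script[\s\S]*?<\/script>/gi, '')
  .replace(/<style[\s\S]*?<\/style>/gi, '')
  .replace(/<head[\s\S]*?<\/head>/gi, '')
  .replace(/<!--[\s\S]*?-->/g, '')
  .replace(/\sclass="[^"]*"/gi, '');

// Remove any tag not on the whitelist, keeping its inner text.
cleaned = cleaned.replace(/<\/?([a-z][a-z0-9]*)\b[^>]*>/gi, (match, tag) =>
  allowedTags.includes(tag.toLowerCase()) ? match : ''
);

// Collapse extra whitespace so the LLM gets a compact payload.
cleaned = cleaned.replace(/\s+/g, ' ').trim();

return [{ json: { cleanedHtml: cleaned } }];
```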
by Emmanuel Bernard
🎥 AI Video Generator with HeyGen

🚀 Create AI-Powered Videos in n8n with HeyGen

This workflow enables you to generate realistic AI videos using HeyGen, an advanced AI platform for video automation. Simply input your text, choose an AI avatar and voice, and let HeyGen generate a high-quality video for you – all within n8n!

✅ Ideal for:
- Content creators & marketers 🏆
- Automating personalized video messages 📩
- AI-powered video tutorials & training materials 🎓

🔧 How It Works
1️⃣ Provide a text script – this will be spoken in the AI-generated video.
2️⃣ Select an avatar & voice – choose from a variety of AI-generated avatars and voices.
3️⃣ Run the workflow – HeyGen processes your request and generates a video.
4️⃣ Download your video – get the direct link to your AI-powered video!

⚡ Setup Instructions

1️⃣ Get Your HeyGen API Key
- Sign up for a HeyGen account.
- Go to your account settings and retrieve your API key.

2️⃣ Configure n8n Credentials
- In n8n, create new credentials and select "Custom Auth" as the authentication type.
- Use X-Api-Key as the name and paste your HeyGen API key as the value.
- Update the two HTTP Request nodes to use these credentials.

3️⃣ Select an AI Avatar & Voice
- Browse the available avatars & voices in your HeyGen account.
- Copy the Avatar ID and Voice ID for your video.

4️⃣ Run the Workflow
- Enter your text, avatar ID, and voice ID.
- Execute the workflow – your video will be generated automatically!

🎯 Why Use This Workflow?
✔️ Fully Automated – No manual editing required!
✔️ Realistic AI Avatars – Choose from a variety of digital avatars.
✔️ Seamless Integration – Works directly within your n8n workflow.
✔️ Scalable & Fast – Generate multiple videos in minutes.

🔗 Start automating AI-powered video creation today with n8n & HeyGen!
by Baptiste Fort
**Who is it for?**

This workflow is for marketers, sales teams, and local businesses who want to quickly collect leads (business name, phone, website, and email) from Google Maps and store them in Airtable. You can use it for real estate agents, restaurants, therapists, or any local niche.

**How it works**

1. Scrape Google Maps with the Apify Google Maps Extractor.
2. Clean and structure the data (name, address, phone, website).
3. Visit each website and retrieve the raw HTML.
4. Use GPT to extract the most relevant email from the site content.
5. Save everything to Airtable for easy filtering and future outreach.

It works for any location or keyword – just adapt the input in Apify.

**Requirements**

Before running this workflow, you'll need:
✅ Apify account (to use the Google Maps Extractor)
✅ OpenAI API key (for GPT email extraction)
✅ Airtable account & base with the following fields: Business Name, Address, Website, Phone Number, Email, Google Maps URL

**Airtable Structure**

Your Airtable base should contain these columns:

| Title | Street | Website | Phone Number | Email | URL |
|-------------------------|-------------------------|--------------------|-----------------|------------------------|----------------------|
| Paris Real Estate Agency| 10 Rue de Rivoli, Paris | https://agency.fr | +33 1 23 45 67 | contact@agency.fr | maps.google.com/... |
| Example Business 2 | 25 Avenue de l'Opéra | https://example.fr | +33 1 98 76 54 | info@example.fr | maps.google.com/... |
| Example Business 3 | 8 Boulevard Haussmann | https://demo.fr | +33 1 11 22 33 | contact@demo.fr | maps.google.com/... |

**Error Handling**

- **Missing websites:** If a business has no website, the flow skips the scraping step.
- **No email found:** GPT returns Null if no email is detected.
- **API rate limits:** Add a Wait node between requests to avoid Apify/OpenAI throttling.

Now let's take a detailed look at how to set up this automation, using real estate agencies in Paris as an example.

**Step 1 – Launch the Google Maps Scraper**

Start with a When clicking Execute workflow trigger to launch the flow manually. Then, add an HTTP Request node with the method set to POST.

👉 Head over to Apify: Google Maps Extractor (https://apify.com/compass/google-maps-extractor). On the page:

- Enter your business keyword (e.g., real estate agency, hairdresser, restaurant)
- Set the location you want to target (e.g., Paris, France)
- Choose how many results to fetch (e.g., 50)
- Optionally, use filters (only places with a website, by category, etc.)

⚠️ No matter your industry, this works – just adapt the keyword and location.

Once everything is filled in:

1. Click Run to test.
2. Go to the top right → click on API.
3. Select the API endpoints tab.
4. Choose Run Actor synchronously and get dataset items.
5. Copy the URL and paste it into your HTTP Request (in the URL field).

Then enable:
✅ Body Content Type → JSON
✅ Specify Body Using JSON

Go back to Apify, click on the JSON tab, copy the entire code, and paste it into the JSON body field of your HTTP Request.

At this point, if you run your workflow, you should see a structured output with fields such as: title, subTitle, price, categoryName, address, neighborhood, street, city, postalCode, and more.

**Step 2 – Clean and structure the data**

Once the raw data is fetched from Apify, we clean it up using the Edit Fields node.
In this step, we manually select and rename the fields we want to keep:

- Title → {{ $json.title }}
- Address → {{ $json.address }}
- Website → {{ $json.website }}
- Phone → {{ $json.phone }}
- URL → {{ $json.url }}

This node lets us keep only the essentials in a clean format, ready for the next steps. On the right: a clear and usable table, easy to work with.

**Step 3 – Loop Over Items**

Now that our data is clean (see Step 2), we'll go through it item by item to handle each contact individually. The Loop Over Items node does exactly that: it takes each row from the table (each contact pulled from Apify) and runs the next steps on them, one by one.

👉 Just set a Batch Size of 20 (or more, depending on your needs). Nothing tricky here, but this step is essential to keep the flow dynamic and scalable.

**Step 4 – Edit Fields (again)**

After looping through each contact one by one (thanks to Loop Over Items), we refine the data a bit more. This time, we only want to keep the website. We use the Edit Fields node again, in Manual Mapping mode, with just:

- Website → {{ $json.website }}

The result on the right? A clean list with only the URLs extracted from Google Maps.

🔧 This simple step helps isolate the websites so we can scrape them one by one in the next part of the flow.

**Step 5 – Scrape Each Website with an HTTP Request**

Let's continue the flow: in the previous step, we isolated the websites into a clean list. Now, we're going to send a request to each URL to fetch the content of the site.

➡️ To do this, we add an HTTP Request node, using the GET method, and set the URL to {{ $json.website }} (this value comes from the previous Edit Fields node).

This node will simply "visit" each website automatically and return the raw HTML code (as shown on the right).

📄 That's the material we'll use in the next step to extract email addresses (and any other useful info). We're not reading this code manually; the workflow scans through it to detect the patterns that matter to us. This is a technical but crucial step: it's how we turn a URL into real, usable data.

**Step 6 – Extract the Email with GPT**

Now that we've retrieved all the raw HTML from the websites using the HTTP Request node, it's time to analyze it.

💡 Goal: detect the most relevant email address on each site (ideally the main contact or owner).

👉 To do that, we'll use an OpenAI node (Message a Model). Here's how to configure it:

⚙️ Key Parameters:
- Model: GPT-4.1-mini (or any comparable GPT-4-class model available)
- Operation: Message a Model
- Resource: Text
- Simplify Output: ON

Prompt (message you provide):

Look at this website content and extract only the email I can contact this business. In your output, provide only the email and nothing else. Ideally, this email should be of the business owner, so if you have 2 or more options, try for most authoritative one. If you don't find any email, output 'Null'. Exemplary output of yours: name@examplewebsite.com

{{ $json.data }}

**Step 7 – Save the Data in Airtable**

Once we've collected everything – the business name, address, phone number, website… and most importantly the email extracted via ChatGPT – we need to store all of this somewhere clean and organized.

👉 The best place in this workflow is Airtable.

📦 Why Airtable? Because it allows you to:
- Easily view and sort the leads you've scraped
- Filter, tag, or enrich them later
- And most importantly… reuse them in future automations

⚙️ What we're doing here: we add an Airtable → Create Record node to insert each lead into our database.
Inside this node, we manually map each field with the data collected in the previous steps:

| Airtable Field | Description | Value from n8n |
| -------------- | ------------------------ | ------------------------------------------ |
| Title | Business name | {{ $('Edit Fields').item.json.Title }} |
| Street | Full address | {{ $('Edit Fields').item.json.Address }} |
| Website | Website URL | {{ $('Edit Fields').item.json.Website }} |
| Phone Number | Business phone number | {{ $('Edit Fields').item.json.Phone }} |
| Email | Email found by ChatGPT | {{ $json.message.content }} |
| URL | Google Maps listing link | {{ $('Edit Fields').item.json.URL }} |

🧠 Reminder: we're keeping only clean, usable data – ready to be exported, analyzed, or used in cold outreach campaigns (email, CRM, enrichment, etc.).

➡️ And the best part? You can rerun this workflow automatically every week or month to keep collecting fresh leads 🔁.
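Optional extra, not part of the original template: if you want a cheap fallback for the "No email found" case, or want to skip the GPT call on simple sites, a regex pass in an n8n Code node can pull plainly listed addresses from the raw HTML. The input property name (data) is an assumption; GPT remains better at picking the most authoritative address when several appear.

```javascript
// Regex fallback: extract email-like strings from the scraped HTML.
const html = $input.first().json.data ?? '';

const matches = html.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) || [];

// De-duplicate and drop obvious non-contact noise such as image filenames.
const emails = [...new Set(matches)].filter(
  (e) => !/\.(png|jpg|jpeg|gif|webp)$/i.test(e)
);

return [{ json: { email: emails[0] ?? 'Null' } }];
```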
by Zacharia Kimotho
This workflow takes the task of regularly backing up your workflows off GitHub and uses Google Drive as the main tool to host these backups. It is a good way to keep track of your workflows so that you never lose any in case your n8n instance goes down.

**How does it work**

- Creates a new folder, named with the backup time, within a specified parent folder
- Loops over all workflows, converts each to a JSON file, and uploads them to the created folder (see the sketch below)
- Gets the previous backups and deletes them

This keeps the backup clean and simple, without keeping a cache of old workflow files on your drive.

**Setup**

- Create a new folder in Google Drive
- Create new service account credentials
- Share the folder with the service account email
- Upload this workflow to your canvas and map the credentials
- Set the schedule that you need your workflows to run on and manage your backups
- Activate the workflow

Happy Productivity!

@Imperol
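A minimal sketch of the "convert to JSON file" step as an n8n Code node (run once for all items). It assumes each incoming item is one workflow object from the n8n node; the output attaches it as a binary .json file that a Google Drive upload node can consume.

```javascript
// Turn each workflow object into a downloadable .json file attached as binary data.
return $input.all().map((item) => {
  const workflow = item.json;
  const content = JSON.stringify(workflow, null, 2);
  return {
    json: { name: workflow.name },
    binary: {
      data: {
        data: Buffer.from(content).toString('base64'),
        mimeType: 'application/json',
        fileName: `${workflow.name || workflow.id}.json`,
      },
    },
  };
});
```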
by bangank36
This workflow retrieves all Squarespace orders and saves them into a Google Sheets spreadsheet using the Squarespace Commerce API. It uses pagination to ensure all orders are collected efficiently.

**How It Works**

- The workflow queries your Squarespace Orders API.
- It fetches data in paginated batches and inserts them into Google Sheets.
- A Global node is used to configure API parameters dynamically, allowing users to set date filters, pagination, and fulfillment status.
- The workflow runs on demand or on a schedule, ensuring your data stays up to date.

**Parameters**

This workflow allows you to customize the API request using the Global node settings:

- **api-version** (string, required) – The current API version (see the Squarespace Orders API documentation).
- **modifiedAfter**={a-datetime} (string, conditional) – Fetch orders modified after a specific date (ISO 8601 format).
- **modifiedBefore**={b-datetime} (string, conditional) – Fetch orders modified before a specific date (ISO 8601 format).
- **cursor**={c} (string, conditional) – Used for pagination; cannot be combined with other filters.
- **fulfillmentStatus**={status} (optional, enum) – Filter by fulfillment status: PENDING, FULFILLED, or CANCELED.
- **maxPage** – Set to -1 to enable infinite pagination and fetch all available orders (a pagination sketch is included at the end of this template description).

**Requirements**

Credentials. To use this workflow, you need:

- Squarespace API Key – Retrieve it from your Squarespace settings.
- Google Sheets API credentials – Required to insert data into a spreadsheet.

Google Sheets Setup
- Use the Squarespace order export feature to create a reference sheet.
- A Google Sheets template is available.

**Who Is This For?**

This workflow is designed for:

- Squarespace store owners exporting orders for tax reports, analytics, or sales tracking.
- Businesses automating order data retrieval for external reporting.
- Anyone needing an efficient way to extract Squarespace order data without manual effort.

**Explore More Templates**

- Get all orders in Shopify to Google Sheets
- Sync Shopify customers to Google Sheets + Squarespace compatible csv

👉 Check out my other n8n templates
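For reference, a minimal sketch of the cursor-based pagination this workflow performs (Node.js 18+). The endpoint path and response field names follow the Squarespace Orders API documentation as I understand it, but treat them as assumptions and verify against the API version you configure; note that the date filters are only sent on the first request, since the cursor cannot be combined with other filters.

```javascript
const apiKey = 'YOUR_SQUARESPACE_API_KEY';
const baseUrl = 'https://api.squarespace.com/1.0/commerce/orders'; // assumed endpoint, check the docs

let cursor = null;
const orders = [];

do {
  const url = cursor
    ? `${baseUrl}?cursor=${encodeURIComponent(cursor)}`
    : `${baseUrl}?modifiedAfter=2024-01-01T00:00:00Z&modifiedBefore=2024-12-31T23:59:59Z`;

  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}`, 'User-Agent': 'n8n-order-export' },
  });
  const page = await res.json();

  orders.push(...(page.result ?? []));
  // Assumed pagination shape: { pagination: { hasNextPage, nextPageCursor } }
  cursor = page.pagination?.hasNextPage ? page.pagination.nextPageCursor : null;
} while (cursor); // keep requesting pages until no cursor remains (maxPage = -1 behaviour)

console.log(`Fetched ${orders.length} orders`);
```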