by n8n Team
This workflow creates a Slack thread when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as replies to the thread in Slack.

**Prerequisites**
- Zendesk account and Zendesk credentials.
- Slack account and Slack credentials.
- Slack channel to create threads in.

**How it works**
- The workflow listens for new tickets in Zendesk.
- When a new ticket is created, the workflow creates a new thread/message in Slack. The Slack thread ID is then saved in one of the ticket's fields, called "Slack thread ID".
- The next time a comment is added to the ticket, the workflow retrieves the Slack thread ID from the ticket's field and adds the comment to the thread/message in Slack as a reply.

**Setup**
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the “Endpoint URL” field, and the “Request method” should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as “New tickets”.
7. Under “Conditions”, in “Meet ALL of the following conditions”, add “Status is New”.
8. Under “Actions”, select “Notify active webhook” and select the webhook you created previously. In the JSON body, add the following:

{ "id": "{{ticket.id}}", "comment": "{{ticket.latest_comment_html}}" }

9. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the Slack thread ID. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the text field option and give the field a name such as “Slack thread ID”.
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
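The branching idea at the heart of this template can be summed up in a few lines. The sketch below is only an illustration, not the template's actual node configuration: the Zendesk payload keys, the custom field name, and the channel are assumptions.

```javascript
// Hypothetical sketch of the create-vs-reply decision (an IF node in the real
// workflow). Field and property names are illustrative assumptions.
const ticket = $json;                     // payload sent by the Zendesk trigger
const threadId = ticket.slack_thread_id;  // custom ticket field populated on first run

if (threadId) {
  // Existing thread: the Slack node would post a reply with thread_ts = threadId
  return [{ json: { action: 'reply', channel: '#support', thread_ts: threadId, text: ticket.comment } }];
}

// New ticket: the Slack node posts a fresh message; its ts is then written back
// to the Zendesk ticket field so later comments land in the same thread.
return [{ json: { action: 'create', channel: '#support', text: `New ticket #${ticket.id}: ${ticket.comment}` } }];
```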
by Angel Menendez
**Who is this for?**
This subworkflow is ideal for developers and automation builders working with UniPile and n8n to automate message enrichment and LinkedIn lead routing.

**What problem is this workflow solving?**
UniPile separates personal and organization accounts into two different API endpoints. This flow handles both intelligently, so you're not missing sender context due to API quirks or bad assumptions.

**What this workflow does**
This subworkflow is used by:
- **LinkedIn Auto Message Router with Request Detection**
- **LinkedIn AI Response Generator with Slack Approval**

It receives a message sender ID and tries to enrich it using UniPile's /people and /organizations endpoints. It returns a clean, consistent profile object regardless of which source was used.

**Setup**
- Generate a UniPile API token and save it in your n8n credentials.
- Make sure this subworkflow is triggered correctly by your parent flows.
- Test both people and organization lookups to verify responses are normalized.

**How to customize this workflow to your needs**
- Add a secondary enrichment layer using tools like Clearbit or FullContact.
- Customize the fallback logic or error handling.
- Expand the returned data for more AI context or user routing (e.g., job title, region).
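Conceptually, the fallback-and-normalize logic looks like the sketch below. This is not UniPile's actual response schema: `lookupPerson`/`lookupOrganization` stand in for the HTTP Request nodes calling the /people and /organizations endpoints, and the profile field names are assumptions.

```javascript
// Conceptual sketch of the normalization idea behind this subworkflow.
async function enrichSender(senderId, lookupPerson, lookupOrganization) {
  let source = 'people';
  let raw = await lookupPerson(senderId);        // try the personal-account endpoint first

  if (!raw) {
    source = 'organizations';
    raw = await lookupOrganization(senderId);    // fall back to the organization endpoint
  }

  if (!raw) {
    return { found: false, senderId };           // customize this fallback/error path as needed
  }

  // One consistent shape regardless of which endpoint answered
  return {
    found: true,
    senderId,
    source,
    name: raw.name ?? raw.display_name ?? null,        // illustrative field names
    headline: raw.headline ?? raw.tagline ?? null,
    profileUrl: raw.profile_url ?? null,
  };
}
```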
by Alex Kim
Automate Video Creation with Luma AI Dream Machine and Airtable (Part 2)

**Description**
This is the second part of the Luma AI Dream Machine automation. It captures the webhook response from Luma AI after video generation is complete, processes the data, and automatically updates Airtable with the video and thumbnail URLs. This completes the end-to-end automation for video creation and tracking.

👉 Airtable Base Template
👉 Tutorial Video

**Setup**

1. Luma AI Setup
- Ensure you’ve created an account with Luma AI and generated an API key.
- Confirm that the API key has permission to manage video requests.

2. Airtable Setup
Make sure your Airtable base includes the following fields (set up in Part 1):
- **Generation ID** – To match incoming webhook data.
- **Status** – Workflow status (e.g., "Done").
- **Video URL** – Stores the generated video URL.
- **Thumbnail URL** – Stores the thumbnail URL.
Use the Airtable Base Template linked above to simplify setup.

3. n8n Setup
- Ensure that the n8n workflow from Part 1 is set up and configured.
- Import this workflow and connect it to the webhook callback from Luma AI.

**How It Works**

1. Webhook Trigger
The Webhook node listens for a POST response from Luma AI once video generation is finished. The response includes:
- Video URL – Direct link to the video.
- Thumbnail URL – Link to the video thumbnail.
- Generation ID – Used to match the record in Airtable.

2. Process Webhook Data
- The Set node extracts the video data from the webhook response.
- The If node checks if the video URL is valid before proceeding.

3. Store in Airtable
The Airtable node updates the record with:
- Video URL – Direct link to the video.
- Thumbnail URL – Link to the video thumbnail.
- Status – Marked as "Done."
It uses the Generation ID to match and update the correct record.

**Why This Workflow is Useful**
✅ Automates the completion step for video creation
✅ Ensures accurate record-keeping by matching generation IDs
✅ Simplifies the process of managing and organizing video content
✅ Reduces manual effort by automating the update process

**Next Steps**
- **Future Enhancements** – Adding more complex post-processing, video trimming, and multi-platform publishing.
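To make the "Process Webhook Data" step concrete, here is a minimal sketch written as a single n8n Code node. The callback field names (generation id, asset URLs) are assumptions for illustration only; check the actual Luma AI webhook payload and adjust the paths.

```javascript
// Illustrative sketch of what the Set/If steps accomplish.
const body = $json.body ?? $json;             // webhook payload from Luma AI

const generationId = body.id ?? null;         // used to find the matching Airtable record
const videoUrl = body.assets?.video ?? null;  // assumed location of the finished video URL
const thumbnailUrl = body.assets?.image ?? null;

if (!videoUrl) {
  // Mirrors the If node: skip the Airtable update when no valid video URL came back
  return [];
}

return [{
  json: {
    'Generation ID': generationId,
    'Video URL': videoUrl,
    'Thumbnail URL': thumbnailUrl,
    'Status': 'Done',
  },
}];
```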
by Ahmed Alnaqa
**Who is this template for?**
This workflow template is designed for content creators, researchers, educators, and professionals who need quick, accurate summaries of YouTube videos. It’s ideal for those looking to save time, extract key insights, or repurpose video content into concise formats for reports, studies, or social media.

**What does it do?**
The workflow automates the process of summarizing YouTube videos by extracting the transcript, analyzing the content, and generating a concise summary. It leverages AI tools to ensure accuracy and relevance, making it easier to digest lengthy videos in seconds.

**Why is it useful?**
This template saves hours of manual effort by automating video summarization, enabling users to focus on analyzing or sharing insights rather than watching entire videos. It’s particularly useful for staying updated with trends, conducting research, or creating content efficiently.

**How does it work?**
The workflow uses a YouTube transcript Actor on Apify to fetch video transcripts, processes the text using AI-powered summarization tools, and delivers a clear, concise summary.

**Setup Instructions**
You need an Apify account and an API key to connect with the Actor. Follow the steps below:
1. Create a free account.
2. Choose the appropriate Actor from the Apify search.
3. Under the Integration tab, click on “Use API endpoints.”
4. Select the API that best suits your needs.
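As a rough illustration of step 4, a call to Apify's synchronous "run Actor and get dataset items" endpoint might look like the sketch below. The actor ID and input fields are placeholders; copy the real ones from the Actor's API tab in the Apify console.

```javascript
// Minimal sketch, assuming the "run-sync-get-dataset-items" endpoint was selected.
const APIFY_TOKEN = 'YOUR_APIFY_TOKEN';                    // better stored in n8n credentials
const actorId = 'someAuthor~youtube-transcript-scraper';   // hypothetical actor ID

const response = await fetch(
  `https://api.apify.com/v2/acts/${actorId}/run-sync-get-dataset-items?token=${APIFY_TOKEN}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // Input schema is a placeholder; each actor defines its own
    body: JSON.stringify({ videoUrl: 'https://www.youtube.com/watch?v=dQw4w9WgXcQ' }),
  }
);

const items = await response.json();  // dataset items, typically containing the transcript text
console.log(items);
```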
by Yaron Been
Automated pipeline that extracts job listings from Upwork and exports them to Google Sheets for better organization, analysis, and team collaboration.

🚀 What It Does
- Fetches job postings based on saved searches
- Extracts key job details (title, budget, description)
- Organizes data in Google Sheets
- Updates in real time
- Supports multiple search criteria

🎯 Perfect For
- Freelancers tracking opportunities
- Teams managing multiple projects
- Agencies monitoring client needs
- Market researchers
- Business analysts

⚙️ Key Benefits
✅ Centralized job board
✅ Easy sharing with team members
✅ Advanced filtering and sorting
✅ Historical data tracking
✅ Customizable data points

🔧 What You Need
- Upwork account
- Google account
- n8n instance
- Google Sheets setup

📊 Data Exported
- Job title and description
- Budget and hourly rate
- Client information
- Posted date
- Required skills
- Job URL

🛠️ Setup & Support
- Quick Setup: Get started in 15 minutes with our step-by-step guide
- 📺 Watch Tutorial
- 💼 Get Expert Support
- 📧 Direct Help

Streamline your job search and opportunity tracking with automated data collection and organization.
by PollupAI
LinkedIn Profile Enrichment Workflow

**Who is this for?**
This workflow is ideal for recruiters, sales professionals, and marketing teams who need to enrich LinkedIn profiles with additional data for lead generation, talent sourcing, or market research.

**What problem is this workflow solving?**
Manually gathering detailed LinkedIn profile information can be time-consuming and prone to errors. This workflow automates the process of enriching profile data from LinkedIn, saving time and ensuring accuracy.

**What this workflow does**
- Input: Reads LinkedIn profile URLs from a Google Sheet.
- Validation: Filters out already enriched profiles to avoid redundant processing.
- Data Enrichment: Uses RapidAPI's Fresh LinkedIn Profile Data API to retrieve detailed profile information.
- Output: Updates the Google Sheet with enriched profile data, appending new information efficiently.

**Setup**
1. Google Sheet: Create a sheet with a column named linkedin_url and populate it with the profile URLs to enrich.
2. RapidAPI Account: Sign up at RapidAPI and subscribe to the Fresh LinkedIn Profile Data API.
3. API Integration: Replace the x-rapidapi-key and x-rapidapi-host values with your credentials from RapidAPI.
4. Run the Workflow: Trigger the workflow and monitor the updates to your Google Sheet.

**How to customize this workflow**
- **Filter Criteria**: Modify the filter step to include additional conditions for processing profiles.
- **API Configuration**: Adjust API parameters to retrieve specific fields or extend usage.
- **Output Format**: Customize how the enriched data is appended to the Google Sheet (e.g., format, column mappings).
- **Error Handling**: Add steps to handle API rate limits or missing data for smoother automation.

This workflow streamlines LinkedIn profile enrichment, making it faster and more effective for data-driven decision-making.
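For orientation, the enrichment call in step 3 of the setup boils down to a RapidAPI request like the sketch below. The host and query parameter names are illustrative assumptions; copy the exact values from the Fresh LinkedIn Profile Data API page on RapidAPI.

```javascript
// Sketch of the HTTP Request node's call, with assumed host/path/parameter names.
const profileUrl = $json.linkedin_url;   // column read from the Google Sheet

const response = await fetch(
  `https://fresh-linkedin-profile-data.p.rapidapi.com/get-linkedin-profile?linkedin_url=${encodeURIComponent(profileUrl)}`,
  {
    method: 'GET',
    headers: {
      'x-rapidapi-key': 'YOUR_RAPIDAPI_KEY',                           // better stored in n8n credentials
      'x-rapidapi-host': 'fresh-linkedin-profile-data.p.rapidapi.com',
    },
  }
);

return [{ json: await response.json() }];   // enriched profile data to append to the sheet
```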
by phil
This workflow automates web scraping of Amazon search result pages by retrieving raw HTML, cleaning it to retain only the relevant product elements, and then using an LLM to extract structured product data (name, description, rating, reviews, and price), before saving the results back to Google Sheets. It integrates Google Sheets to supply and collect URLs, BrightData to fetch page HTML, a custom n8n Function node to sanitize the HTML, LangChain (OpenRouter GPT-4) to parse product details, and Google Sheets again to store the output.

**Who Needs Amazon Search Result Scraping?**
This scraping workflow is ideal for teams and businesses that need to monitor Amazon product listings at scale:
- **E-commerce Analysts** – Track competitor pricing, ratings, and inventory trends.
- **Market Researchers** – Collect data on product popularity and reviews for market analysis.
- **Data Teams** – Automate ingestion of product metadata into BI pipelines or data lakes.
- **Affiliate Marketers** – Keep affiliate catalogs up to date with the latest product details and prices.

If you need reliable, structured data from Amazon search results delivered directly into your spreadsheets, this workflow saves you hours of manual copy-and-paste.

**Why Use This Workflow?**
- **End-to-End Automation** – From URL list to clean JSON output in Sheets.
- **Robust HTML Cleaning** – Strips scripts, styles, unwanted tags, and noise.
- **Accurate Structured Parsing** – Leverages GPT-4 via LangChain for reliable extraction.
- **Scalable & Repeatable** – Processes thousands of URLs in batches.

**Step-by-Step: How This Workflow Scrapes Amazon**
1. Get URLs from Google Sheets – Reads a list of search result URLs.
2. Loop Over Items – Iterates through each URL in controlled batches.
3. Fetch Raw HTML – Uses BrightData’s Web Unlocker proxy to retrieve the page.
4. Clean HTML – A Function node removes the doctype, scripts, styles, head, comments, classes, and non-whitelisted tags, collapsing extra whitespace (see the sketch at the end of this section).
5. Extract with LLM – Passes the cleaned HTML into LangChain → GPT-4 to output JSON for each product: name, description, rating, reviews, price.
6. Save Results – Appends the JSON fields as columns back into a “results” sheet in Google Sheets.

**Customization: Tailor to Your Needs**
- **Adaptable Sites** – This workflow can be adapted to any e-commerce or other website, for example Walmart or eBay.
- **Whitelist Tags** – Modify the allowedTags array in the Code node to keep additional HTML elements.
- **Schema Changes** – Update the Structured Output Parser schema to include more fields (e.g., availability, SKU).
- **Alternate Data Sink** – Instead of Sheets, route output to a database, CSV file, or webhook.

🔑 **Prerequisites**
- **Google Sheets Credentials** – OAuth credentials configured in n8n.
- **BrightData API token** – Stored in n8n credentials as BRIGHTDATA_TOKEN.
- **OpenRouter API Key** – Configured for the LangChain node to call GPT-4.
- **n8n Instance** – Self-hosted or cloud, with sufficient quota for HTTP requests and LLM calls.

🚀 **Installation & Setup**
1. **Configure Credentials**
   - In n8n, set up Google Sheets OAuth under “Credentials.”
   - Add the BrightData token as a new HTTP Request credential.
   - Create an OpenRouter API key credential for the LangChain node.
2. **Import the Workflow**
   - Copy the JSON workflow into n8n’s “Import” dialog.
   - Map your Google Sheet IDs and GIDs to the {{WEB_SHEET_ID}}, {{TRACK_SHEET_GID}}, and {{RESULTS_SHEET_GID}} placeholders.
   - Ensure the BRIGHTDATA_TOKEN credential is selected on the HTTP Request node.
3. **Test & Run**
   - Add a few Amazon search URLs to your “track” sheet.
   - Execute the workflow and verify product data appears in your “results” sheet.
   - Tweak the batch size or parser schema as needed.

⚠ **Important**
- **API Rate Limits** – Monitor your BrightData and OpenRouter usage to avoid throttling.
- **Amazon’s Terms** – Ensure your scraping complies with Amazon’s policies and legal requirements.

**Summary**
This workflow delivers a fully automated, scalable solution to extract structured product data from Amazon search pages directly into Google Sheets, streamlining your competitive analysis and data collection. 🚀

Phil | Inforeole
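For reference, here is a minimal sketch of the "Clean HTML" step described in step 4 above, written for an n8n Code node. The allowedTags list and the regex-based approach are illustrative assumptions; the template's own Function node may differ in detail.

```javascript
// Sketch of the HTML-cleaning step (one item at a time).
const allowedTags = ['div', 'span', 'a', 'p', 'h1', 'h2', 'h3', 'ul', 'li', 'img'];

let html = $json.data || '';               // assumed property holding the fetched page HTML

html = html
  .replace(/<!DOCTYPE[^>]*>/gi, '')        // doctype
  .replace(/<script[\s\S]*?<\/script>/gi, '')   // scripts
  .replace(/<style[\s\S]*?<\/style>/gi, '')     // styles
  .replace(/<head[\s\S]*?<\/head>/gi, '')       // head
  .replace(/<!--[\s\S]*?-->/g, '')              // comments
  .replace(/\sclass="[^"]*"/gi, '');            // class attributes

// Drop any tag that is not on the whitelist, keeping its inner text
html = html.replace(/<\/?([a-z][a-z0-9]*)\b[^>]*>/gi, (match, tag) =>
  allowedTags.includes(tag.toLowerCase()) ? match : ''
);

// Collapse leftover whitespace
html = html.replace(/\s+/g, ' ').trim();

return [{ json: { cleanedHtml: html } }];
```

Extending the `allowedTags` array (as mentioned under Customization) is the main lever for keeping more of the page structure when the LLM needs additional context.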
by Emmanuel Bernard
🎥 AI Video Generator with HeyGen

🚀 **Create AI-Powered Videos in n8n with HeyGen**
This workflow enables you to generate realistic AI videos using HeyGen, an advanced AI platform for video automation. Simply input your text, choose an AI avatar and voice, and let HeyGen generate a high-quality video for you, all within n8n!

✅ Ideal for:
- Content creators & marketers 🏆
- Automating personalized video messages 📩
- AI-powered video tutorials & training materials 🎓

🔧 **How It Works**
1️⃣ Provide a text script – this will be spoken in the AI-generated video.
2️⃣ Select an avatar & voice – choose from a variety of AI-generated avatars and voices.
3️⃣ Run the workflow – HeyGen processes your request and generates a video.
4️⃣ Download your video – get the direct link to your AI-powered video!

⚡ **Setup Instructions**
1️⃣ Get Your HeyGen API Key
- Sign up for a HeyGen account.
- Go to your account settings and retrieve your API key.

2️⃣ Configure n8n Credentials
- In n8n, create new credentials and select "Custom Auth" as the authentication type.
- In the name, provide X-Api-Key, and in the value, paste your API key from HeyGen.
- Update the two HTTP nodes with the right credentials.

3️⃣ Select an AI Avatar & Voice
- Browse available avatars & voices in your HeyGen account.
- Copy the Avatar ID and Voice ID for your video.

4️⃣ Run the Workflow
- Enter your text, avatar ID, and voice ID.
- Execute the workflow – your video will be generated automatically!

🎯 **Why Use This Workflow?**
✔️ Fully Automated – no manual editing required!
✔️ Realistic AI Avatars – choose from a variety of digital avatars.
✔️ Seamless Integration – works directly within your n8n workflow.
✔️ Scalable & Fast – generate multiple videos in minutes.

🔗 Start automating AI-powered video creation today with n8n & HeyGen!
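To give a sense of what the first HTTP node sends, here is an illustrative sketch of a HeyGen video-generation request. The endpoint path and body fields follow HeyGen's public v2 API but should be treated as assumptions; check HeyGen's API reference and the node configuration in the workflow for the exact values.

```javascript
// Illustrative request sketch; the X-Api-Key header matches the credential set up in step 2.
const response = await fetch('https://api.heygen.com/v2/video/generate', {
  method: 'POST',
  headers: {
    'X-Api-Key': 'YOUR_HEYGEN_API_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    video_inputs: [
      {
        character: { type: 'avatar', avatar_id: 'YOUR_AVATAR_ID' },
        voice: { type: 'text', voice_id: 'YOUR_VOICE_ID', input_text: 'Hello from n8n and HeyGen!' },
      },
    ],
    dimension: { width: 1280, height: 720 },
  }),
});

const data = await response.json();
console.log(data);   // typically contains a video_id that the second HTTP node polls until the video is ready
```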
by Baptiste Fort
**Who is it for?**
This workflow is for marketers, sales teams, and local businesses who want to quickly collect leads (business name, phone, website, and email) from Google Maps and store them in Airtable. You can use it for real estate agents, restaurants, therapists, or any local niche.

**How it works**
1. Scrape Google Maps with the Apify Google Maps Extractor.
2. Clean and structure the data (name, address, phone, website).
3. Visit each website and retrieve the raw HTML.
4. Use GPT to extract the most relevant email from the site content.
5. Save everything to Airtable for easy filtering and future outreach.

It works for any location or keyword – just adapt the input in Apify.

**Requirements**
Before running this workflow, you’ll need:
✅ An Apify account (to use the Google Maps Extractor)
✅ An OpenAI API key (for GPT email extraction)
✅ An Airtable account & base with the following fields: Business Name, Address, Website, Phone Number, Email, Google Maps URL

**Airtable Structure**
Your Airtable base should contain these columns:

| Title | Street | Website | Phone Number | Email | URL |
|---|---|---|---|---|---|
| Paris Real Estate Agency | 10 Rue de Rivoli, Paris | https://agency.fr | +33 1 23 45 67 | contact@agency.fr | maps.google.com/... |
| Example Business 2 | 25 Avenue de l’Opéra | https://example.fr | +33 1 98 76 54 | info@example.fr | maps.google.com/... |
| Example Business 3 | 8 Boulevard Haussmann | https://demo.fr | +33 1 11 22 33 | contact@demo.fr | maps.google.com/... |

**Error Handling**
- **Missing websites:** If a business has no website, the flow skips the scraping step.
- **No email found:** GPT returns Null if no email is detected.
- **API rate limits:** Add a Wait node between requests to avoid Apify/OpenAI throttling.

Now let’s take a detailed look at how to set up this automation, using real estate agencies in Paris as an example.

**Step 1 – Launch the Google Maps Scraper**
Start with a When clicking Execute workflow trigger to launch the flow manually. Then, add an HTTP Request node with the method set to POST.

👉 Head over to Apify: Google Maps Extractor
On the page https://apify.com/compass/google-maps-extractor:
- Enter your business keyword (e.g., real estate agency, hairdresser, restaurant).
- Set the location you want to target (e.g., Paris, France).
- Choose how many results to fetch (e.g., 50).
- Optionally, use filters (only places with a website, by category, etc.).

⚠️ No matter your industry, this works – just adapt the keyword and location.

Once everything is filled in:
1. Click Run to test.
2. Go to the top right → click on API.
3. Select the API endpoints tab.
4. Choose Run Actor synchronously and get dataset items.
5. Copy the URL and paste it into your HTTP Request (in the URL field).
6. Enable ✅ Body Content Type → JSON and ✅ Specify Body Using JSON.
7. Go back to Apify, click on the JSON tab, copy the entire code, and paste it into the JSON body field of your HTTP Request.

At this point, if you run your workflow, you should see a structured output with fields such as title, subTitle, price, categoryName, address, neighborhood, street, city, postalCode, and more.

**Step 2 – Clean and structure the data**
Once the raw data is fetched from Apify, we clean it up using the Edit Fields node.
In this step, we manually select and rename the fields we want to keep:
- Title → {{ $json.title }}
- Address → {{ $json.address }}
- Website → {{ $json.website }}
- Phone → {{ $json.phone }}
- URL → {{ $json.url }}

This node lets us keep only the essentials in a clean format, ready for the next steps. On the right: a clear and usable table, easy to work with.

**Step 3 – Loop Over Items**
Now that our data is clean (see step 2), we’ll go through it item by item to handle each contact individually. The Loop Over Items node does exactly that: it takes each row from the table (each contact pulled from Apify) and runs the next steps on them, one by one.
👉 Just set a Batch Size of 20 (or more, depending on your needs). Nothing tricky here, but this step is essential to keep the flow dynamic and scalable.

**Step 4 – Edit Fields (again)**
After looping through each contact one by one (thanks to Loop Over Items), we refine the data a bit more. This time, we only want to keep the website. We use the Edit Fields node again, in Manual Mapping mode, with just:
- Website → {{ $json.website }}

The result on the right? A clean list with only the URLs extracted from Google Maps.
🔧 This simple step helps isolate the websites so we can scrape them one by one in the next part of the flow.

**Step 5 – Scrape Each Website with an HTTP Request**
Let’s continue the flow: in the previous step, we isolated the websites into a clean list. Now, we’re going to send a request to each URL to fetch the content of the site.
➡️ To do this, we add an HTTP Request node, using the GET method, and set the URL to {{ $json.website }} (this value comes from the previous Edit Fields node).
This node simply “visits” each website automatically and returns the raw HTML code (as shown on the right).
📄 That’s the material we’ll use in the next step to extract email addresses (and any other useful info). We’re not reading this code manually – we scan through it to detect the patterns that matter to us. This is a technical but crucial step: it’s how we turn a URL into real, usable data.

**Step 6 – Extract the Email with GPT**
Now that we’ve retrieved all the raw HTML from the websites using the HTTP Request node, it’s time to analyze it.
💡 Goal: detect the most relevant email address on each site (ideally the main contact or owner).
👉 To do that, we use an OpenAI node (Message a Model). Here’s how to configure it:

⚙️ Key parameters:
- Model: GPT-4.1-mini (or any GPT-4+ model available)
- Operation: Message a Model
- Resource: Text
- Simplify Output: ON
- Prompt (message you provide):

Look at this website content and extract only the email I can contact this business. In your output, provide only the email and nothing else. Ideally, this email should be of the business owner, so if you have 2 or more options, try for most authoritative one. If you don't find any email, output 'Null'. Exemplary output of yours: name@examplewebsite.com
{{ $json.data }}

**Step 7 – Save the Data in Airtable**
Once we’ve collected everything – the business name, address, phone number, website, and most importantly the email extracted via ChatGPT – we need to store it all somewhere clean and organized.
👉 The best place in this workflow is Airtable.

📦 Why Airtable? Because it allows you to:
- Easily view and sort the leads you’ve scraped
- Filter, tag, or enrich them later
- And most importantly, reuse them in future automations

⚙️ What we’re doing here: we add an Airtable → Create Record node to insert each lead into our database.
Inside this node, we manually map each field with the data collected in the previous steps:

| Airtable Field | Description | Value from n8n |
|---|---|---|
| Title | Business name | {{ $('Edit Fields').item.json.Title }} |
| Street | Full address | {{ $('Edit Fields').item.json.Address }} |
| Website | Website URL | {{ $('Edit Fields').item.json.Website }} |
| Phone Number | Business phone number | {{ $('Edit Fields').item.json.Phone }} |
| Email | Email found by ChatGPT | {{ $json.message.content }} |
| URL | Google Maps listing link | {{ $('Edit Fields').item.json.URL }} |

🧠 Reminder: we’re keeping only clean, usable data – ready to be exported, analyzed, or used in cold outreach campaigns (email, CRM, enrichment, etc.).
➡️ And the best part? You can rerun this workflow automatically every week or month to keep collecting fresh leads 🔁.
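One optional refinement to step 6, not part of the original flow: if the raw HTML from step 5 is very large, a small Code node placed between the HTTP Request and the OpenAI node can strip tags and truncate the text before it is passed as {{ $json.data }}, which keeps token usage and cost down. A hedged sketch:

```javascript
// Optional pre-processing sketch (an assumption, not part of the original template):
// reduce the fetched HTML to plain text and cap its length before the OpenAI node.
const rawHtml = $json.data || '';

const text = rawHtml
  .replace(/<script[\s\S]*?<\/script>/gi, ' ')   // drop scripts
  .replace(/<style[\s\S]*?<\/style>/gi, ' ')     // drop styles
  .replace(/<[^>]+>/g, ' ')                      // strip remaining tags
  .replace(/\s+/g, ' ')                          // collapse whitespace
  .trim()
  .slice(0, 20000);                              // rough cap; adjust to your model's context window

return [{ json: { data: text } }];
```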
by kapio
**How it Works**
- **Capture Contact Requests:** This template efficiently handles contact requests coming through a WordPress website using the Contact Form 7 (CF7) plugin with a webhook extension.
- **Contact Management:** It automatically creates or updates contacts in Pipedrive upon receiving a new request.
- **Lead Management:** Each contact request is securely stored in the lead inbox of Pipedrive, ensuring no opportunity is missed.
- **Task Creation:** For each new contact or update, the workflow triggers the creation of a related task, streamlining follow-up actions.
- **Note Attachment:** A comprehensive note containing all details from the contact request is attached to the corresponding lead, ensuring that all information is readily accessible.

**Step-by-Step Guide**
Estimated setup time: the setup process is straightforward and can be completed quickly. The exact time may vary depending on your familiarity with n8n and the systems involved.
Detailed setup instructions are provided within the workflow via sticky notes. These notes offer in-depth guidance for configuring each component of the template to suit your specific needs.
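For context, a CF7 "send to webhook" extension typically POSTs the form's field values to the n8n Webhook trigger. The example below is purely hypothetical: field names follow CF7's default form tags (your-name, your-email, ...), but the actual payload depends entirely on how your form and webhook extension are configured.

```javascript
// Hypothetical example of an incoming CF7 webhook payload, for illustration only.
const exampleCf7Payload = {
  'your-name': 'Jane Doe',
  'your-email': 'jane.doe@example.com',
  'your-subject': 'Request for a quote',
  'your-message': 'Hi, I would like more information about your services.',
};

// In the workflow, these values feed the Pipedrive contact, the lead in the
// lead inbox, the follow-up task, and the note attached to the lead.
console.log(exampleCf7Payload);
```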
by n8n Team
This workflow creates a GitHub issue when a new ticket is created in Zendesk. Subsequent comments on the ticket in Zendesk are added as comments to the issue in GitHub.

**Prerequisites**
- Zendesk account and Zendesk credentials.
- GitHub account and GitHub credentials.
- GitHub repository to create issues in.

**How it works**
- The workflow listens for new tickets in Zendesk.
- When a new ticket is created, the workflow creates a new issue in GitHub. The GitHub issue number is then saved in one of the ticket's fields (in setup we call this "GitHub Issue Number").
- The next time a comment is added to the ticket, the workflow retrieves the GitHub issue number from the ticket's field and adds the comment to the issue in GitHub.

**Setup**
This workflow requires that you set up a webhook in Zendesk. To do so, follow the steps below:
1. In the workflow, open the On new Zendesk ticket node and copy the webhook URL.
2. In Zendesk, navigate to Admin Center > Apps and integrations > Webhooks > Actions > Create Webhook.
3. Add all the required details, which can be retrieved from the On new Zendesk ticket node. The webhook URL gets added to the “Endpoint URL” field, and the “Request method” should match what is shown in n8n.
4. Save the webhook.
5. In Zendesk, navigate to Admin Center > Objects and rules > Business rules > Triggers > Add trigger.
6. Give the trigger a name such as “New tickets”.
7. Under “Conditions”, in “Meet ALL of the following conditions”, add “Status is New”.
8. Under “Actions”, select “Notify active webhook” and select the webhook you created previously. In the JSON body, add the following:

{ "id": "{{ticket.id}}", "comment": "{{ticket.latest_comment_html}}" }

9. Save the Zendesk trigger.

You will also need to set up a field in Zendesk to store the GitHub issue number. To do so, follow the steps below:
1. In Zendesk, navigate to Admin Center > Objects and rules > Tickets > Fields > Add field.
2. Use the number field option and give the field a name such as “GitHub Issue Number”.
3. Save the field.
4. In n8n, open the Update ticket node and select the field you created in Zendesk.
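Conceptually, the "add comment" step boils down to a call to GitHub's issue-comments REST endpoint using the issue number stored on the Zendesk ticket. The sketch below only illustrates that idea; owner, repo, token, and the field name holding the issue number are placeholders, and in the template this is handled by the GitHub node rather than raw HTTP.

```javascript
// Illustrative sketch of posting a ticket comment to the matching GitHub issue.
const issueNumber = $json.github_issue_number;   // assumed custom Zendesk field saved on ticket creation
const comment = $json.comment;                   // latest_comment_html from the Zendesk trigger payload

const response = await fetch(
  `https://api.github.com/repos/OWNER/REPO/issues/${issueNumber}/comments`,
  {
    method: 'POST',
    headers: {
      Authorization: 'Bearer YOUR_GITHUB_TOKEN',
      Accept: 'application/vnd.github+json',
      'User-Agent': 'n8n-workflow',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ body: comment }),
  }
);

return [{ json: await response.json() }];
```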
by Zacharia Kimotho
This workflow takes the task of regularly backing up workflows off GitHub and uses Google Drive as the main tool to host them. This can be a good way to keep track of your workflows so that you never lose any in case your n8n instance goes down.

**How does it work**
- Creates a new folder, within a specified parent folder, named with the time of the backup.
- Loops over all workflows, converts each to a JSON file, and uploads them to the created folder.
- Gets the previous backups and deletes them.

This keeps the backup clean and simple, without keeping a cache of old workflows on your drive.

**Setup**
1. Create a new folder.
2. Create new service account credentials.
3. Share the folder with the service account email.
4. Upload this workflow to your canvas and map the credentials.
5. Set the schedule that you need your workflows to run and manage your backups.
6. Activate the workflow.

Happy Productivity!
@Imperol
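The "convert each workflow to a JSON file" step can be pictured as the Code-node sketch below. It assumes the incoming item is one workflow object from the node that lists workflows; the template itself may use a different node (for example Move Binary Data) to achieve the same result.

```javascript
// Illustrative sketch: turn one workflow object into a binary JSON file
// that the Google Drive upload node can consume.
const workflow = $json;
const fileName = `${workflow.name || 'workflow'}-${workflow.id || 'unknown'}.json`;

// Pretty-print the workflow and expose it as base64-encoded binary data
const content = JSON.stringify(workflow, null, 2);

return [{
  json: { fileName },
  binary: {
    data: {
      data: Buffer.from(content, 'utf8').toString('base64'),
      mimeType: 'application/json',
      fileName,
    },
  },
}];
```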