by Jaruphat J.
## Who's it for

This template is perfect for content creators, AI enthusiasts, marketers, and developers who want to automate the generation of cinematic videos using Google Vertex AI's Veo 3 model. It's also ideal for anyone experimenting with generative AI for video using n8n.

## What it does

This workflow:

1. Accepts a text prompt and a GCP access token via form.
2. Sends the prompt to the Veo 3 preview model using Vertex AI's `predictLongRunning` endpoint.
3. Waits for the video rendering to complete.
4. Fetches the final result and converts the base64-encoded video to a file.
5. Uploads the resulting `.mp4` to your Google Drive.

## How to set up

1. Enable the Vertex AI API in your GCP project: https://console.cloud.google.com/marketplace/product/google/aiplatform.googleapis.com
2. Authenticate with GCP using Cloud Shell or a local terminal:

   ```bash
   gcloud auth login
   gcloud config set project [YOUR_PROJECT_ID]
   gcloud auth application-default set-quota-project [YOUR_PROJECT_ID]
   gcloud auth print-access-token
   ```

3. Copy the token and paste it into the form when running the workflow. ⚠️ This token lasts ~1 hour; regenerate it as needed.
4. Connect your Google Drive OAuth2 credentials to allow file upload.
5. Import this workflow into n8n and execute it via the form trigger.

## Requirements

- n8n (v1.94.1+)
- A Google Cloud project with:
  - Vertex AI API enabled
  - Billing enabled
  - A way to get an access token
- A Google Drive OAuth2 credential connected to n8n

## How to customize the workflow

- Modify the prompt in the HTTP node to match your use case (a request sketch is included at the end of these notes).
- Replace the Google Drive upload node with alternatives like Dropbox, S3, or YouTube upload.
- Extend the workflow to add subtitles, audio dubbing, or LINE/Slack alerts.

Step-by-step for each major node: Prompt Input → Vertex Predict → Wait → Fetch Result → Convert to File → Upload

## Best practices followed

- No hardcoded API tokens. Secure: the GCP token is input via form, not stored in the workflow.
- All nodes are renamed with a clear purpose.
- All editable config is grouped in a Set node.

## External references

GCP Veo API docs: https://cloud.google.com/vertex-ai/docs/generative-ai/video/overview

## Disclaimer

This workflow uses official Google Cloud APIs and requires a valid GCP project. The access token should be generated securely using the gcloud CLI. Do not embed tokens in the workflow itself.

## Notes on the GCP access token

To use the Vertex AI API in n8n securely:

1. Run the following on your local machine or in GCP Cloud Shell:

   ```bash
   gcloud auth login
   gcloud config set project your-project-id
   gcloud auth print-access-token
   ```

2. Paste the token into the workflow form field when submitting.
3. Do not hardcode the token into HTTP nodes or Set nodes; input it each time or use a secure credential vault.
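For reference, here is a hedged sketch of the `predictLongRunning` call the HTTP node makes. The region, model ID, and request shape are assumptions based on Vertex AI conventions; confirm them against the Veo docs linked above.

```bash
# Sketch of the predictLongRunning request (region and model ID are assumptions).
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/[YOUR_PROJECT_ID]/locations/us-central1/publishers/google/models/veo-3.0-generate-preview:predictLongRunning" \
  -d '{"instances": [{"prompt": "A cinematic drone shot over a misty coastline at sunrise"}]}'
```

The response contains a long-running operation name, which the Wait and Fetch Result nodes poll until the rendered video is available.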
by Angel Menendez
Temporary solution that uses the undocumented REST API to back up workflows to Google Drive. Please note that this workflow has a limitation: it does not support versioning, so it creates a new copy of each workflow on every run. If you run it daily, the folder will grow quickly. Once I figure out how to version in Google Drive, I'll update it here.
by Harshil Agrawal
This workflow demonstrates how you can use Redis to implement rate limiting for your API. The workflow uses the incoming API key to uniquely identify the user and uses it as a key in Redis. Every time a request is made, the value is incremented by one, and the threshold is checked using the IF node. Duplicate the following Airtable base to try out the workflow: https://airtable.com/shraudfG9XAvqkBpF
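The same fixed-window pattern, sketched outside n8n with node-redis; the key prefix, limit, and window size are illustrative assumptions:

```javascript
// Fixed-window rate limit with Redis INCR (node-redis v4).
// Key prefix, limit, and window size are illustrative assumptions.
import { createClient } from 'redis';

const client = createClient();
await client.connect();

async function isAllowed(apiKey, limit = 100, windowSeconds = 60) {
  const key = `rate:${apiKey}`;
  const count = await client.incr(key);                     // one INCR per request
  if (count === 1) await client.expire(key, windowSeconds); // start the window on first hit
  return count <= limit;                                    // mirrors the IF node's threshold check
}
```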
by rpshu
-- Disclaimer: This template is mainly made for self-hosted users who can reach CSV files in their file system. For Cloud users, just replace the first few nodes with your file system of choice, like Google Drive or Dropbox --

## How to automatically import CSV files into PostgreSQL

### 1. Project description

This workflow demonstrates how a CSV file can be automatically imported into an existing PostgreSQL database.

Before running the workflow, please make sure you have a file on the server: `/tmp/t1.csv`

The name of the test database is `db01`; you can replace it. Then create table `t1`:

```sql
create table t1(id int, name varchar(10));
```

The content of the file is the following:

| id | name |
|----|------|
| 1  | a    |
| 2  | b    |
| 3  | c    |

### 2. Other

If you want to import a custom CSV file, please refer to the following methods.

#### 2.1 Create a table in the database

SQL commands: https://www.postgresql.org/docs/current/sql-createtable.html

#### 2.2 Upload the CSV file

Upload the CSV file to the n8n server and make sure it can be read. A manual equivalent of the import is sketched below.
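For comparison, the same import can be done by hand with psql's `\copy`, a minimal sketch assuming the `db01` database and `t1` table above:

```sql
-- Manual equivalent of the workflow's import step (run inside psql, connected to db01).
-- Add HEADER to the options if /tmp/t1.csv starts with an "id,name" line.
\copy t1(id, name) FROM '/tmp/t1.csv' WITH (FORMAT csv)
```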
by Lucas Walter
## AI News Scraping System

This n8n workflow automates the process of pulling in breaking AI-related headlines from curated RSS feeds, scraping their full content, and saving readable Markdown versions directly to Google Drive.

Use cases include:

- Creating a personal newsletter curation system
- Automating blog post research workflows
- Archiving news content for later summarization or AI use

## How it Works

1. **Scheduled Triggers** – The workflow runs every 3–4 hours using multiple Schedule Trigger nodes. Each trigger targets a different news source (e.g., Google News, OpenAI Blog, Hugging Face, etc.).
2. **Fetch and Parse Feeds** – RSS feeds are fetched via the HTTP Request node. Items from the feed are split into individual entries using the Split Out node.
3. **Scrape Article Content** – Each article URL is sent to the Firecrawl API with a prompt to extract only the main content in Markdown. The scraping skips navigation, headers, footers, and ads (see the request sketch below).
4. **Convert and Save** – The extracted Markdown is converted into a `.md` file using the Convert to File node. The file is then uploaded to a Google Drive folder.

## Good to Know

- This workflow uses the Firecrawl API for web scraping. Be sure to configure a Generic HTTP Header credential with your API key.
- Output files are saved in Markdown format.
- You can add more Schedule Trigger + HTTP Request pairs to extend this workflow to additional feeds.

## Requirements

- Firecrawl API account for scraping
- Google Drive account (OAuth2 credentials must be configured in n8n)
- n8n instance (self-hosted or cloud)

## Customization Ideas

- Replace or extend RSS feeds with sources relevant to your niche
- Load scraped news stories into a prompt to create new content like TikToks and Reels
- Add a summarization step using an LLM like GPT or Claude
- Send the Markdown files to Notion, Slack, or a blog CMS

## Example Feeds

| Feed Name        | URL                                               |
|------------------|---------------------------------------------------|
| Google News (AI) | https://rss.app/feeds/v1.1/AkOariu1C7YyUUMv.json  |
| OpenAI Blog      | https://rss.app/feeds/v1.1/xNVg2hbY14Z7Gpva.json  |
| Hugging Face     | https://rss.app/feeds/v1.1/sgHcE2ehHQMTWhrL.json  |
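A hedged sketch of the per-article Firecrawl call behind step 3; the endpoint and field names follow Firecrawl's v1 API and may differ in your account's version, so verify them against the Firecrawl docs:

```bash
# Sketch of the per-article scrape request (v1 API shape is an assumption).
curl -X POST "https://api.firecrawl.dev/v1/scrape" \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/article", "formats": ["markdown"], "onlyMainContent": true}'
```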
by bangank36
This workflow automates the Mark as Fulfilled action in Squarespace for each order, ensuring a seamless fulfillment process without manual intervention.

## How It Works

This workflow retrieves all pending Squarespace orders and processes their fulfillment automatically. The workflow follows these steps:

1️⃣ Get all pending orders using the HTTP Request node, since Squarespace does not have an n8n node (see the request sketch below)
2️⃣ Create a fulfillment request using the Fulfill Order node

The Filter Orders node can be used to filter the valid pending orders to process.

## Step-by-step

The workflow can be run on demand or on a schedule. You can adjust these parameters within the Global and filter nodes:

### Global node (API settings)

- **api-version** (string, required) – The current API version (see the Squarespace Orders API documentation).
- **modifiedAfter**={a-datetime} (string, conditional) – Fetch orders modified after a specific date (ISO 8601 format).
- **modifiedBefore**={b-datetime} (string, conditional) – Fetch orders modified before a specific date (ISO 8601 format).
- **cursor**={c} (string, conditional) – Used for pagination; cannot be combined with other filters.
- **fulfillmentStatus**={status} (optional, enum) – Filter by fulfillment status: PENDING, FULFILLED, or CANCELED.
- **maxPage** – Set to -1 to enable infinite pagination and fetch all available orders.

### Filter Orders node

Order filtering ensures only valid orders are fulfilled, which is particularly useful if:

- You sell digital downloads or gift cards exclusively.
- You use third-party fulfillment services for all products.

## Requirements

### Credentials

To use this workflow, you need:

- **Squarespace API key** – retrieve it from your Squarespace settings.

## Who Is This For?

- Squarespace store owners looking to automate their fulfillment process.
- Merchants selling digital or personalized products who need instant fulfillment.

## Explore More Templates

- Get all orders in Squarespace to Google Sheets
- Convert Squarespace Profiles to Shopify Customers in Google Sheets
- Fetch Squarespace Blog & Event Collections to Google Sheets

👉 Check out my other n8n templates
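For reference, a hedged sketch of the pending-orders request from step 1; the `1.0` version segment and header shape follow the public Squarespace Orders API docs, so confirm the current api-version before use:

```bash
# Sketch of the pending-orders lookup (verify the API version in Squarespace's docs).
curl -H "Authorization: Bearer $SQUARESPACE_API_KEY" \
  "https://api.squarespace.com/1.0/commerce/orders?fulfillmentStatus=PENDING"
```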
by Eric
This is a specific use case. The ElevenLabs guide for Cal.com bookings is comprehensive, but I was having trouble with the booking API request, so I built a simple workflow to validate the request and handle the booking creation.

## Who's this for?

You have an ElevenLabs voice agent (or other external service) booking meetings in your Cal.com account, and you want more control over the book_meeting tool called by the voice agent.

## How's it work?

1. The request is received by the webhook trigger node.
   - The request is sent from an ElevenLabs voice agent, or another source.
   - The request body contains contact info for the user with whom a meeting will be booked in Cal.com.
2. The workflow validates the input data for the fields required by Cal.com (see the validation sketch below).
   - If validation fails, a 400 Bad Request response is returned.
3. If valid, the meeting is booked via the Cal.com API.

## How do I use this?

Create a custom tool in the ElevenLabs agent setup, and connect it to the webhook trigger in this workflow. Add authorization for security. Instruct your voice agent to call this tool after it has collected the required information from the user.

## Expected input structure

Note: Modify this according to your needs, but be sure to reflect your changes in all following nodes. Requirements here depend on the required fields in your Cal.com event type. If you have multiple event types in Cal.com with varying required fields, you'll need to handle this in this workflow and provide appropriate instructions in your *voice agent prompt*.

```
"body": {
  "attendee_name": "Some Guy",
  "start": "2025-07-07T13:30:00Z",
  "attendee_phone": "+12125551234",
  "attendee_timezone": "America/New_York",
  "eventTypeId": 123456,
  "attendee_email": "someguy@example.com",
  "attendee_company": "Example Inc",
  "notes": "Discovery call to find synergies."
}
```

## Modifications

Note: ElevenLabs doesn't handle webhook response headers or body, and only recognizes the response code. In other words, if the workflow responds with 400 Bad Request, that's the only info the voice agent gets back; it doesn't get any details, e.g. "User email still needed".

You can modify the structure of the expected webhook request body, and then you should reflect that structure change in all following nodes in the workflow. I.e., if you change attendee_name to attendeeFirstName and attendeeLastName, then you need to make this change in the following nodes that use these properties.

You can also require, or make optional, other user data for the Cal.com event type, which would reduce or increase the data the voice agent must collect from the user.

You can modify the authorization of this webhook to meet your security needs. ElevenLabs has some limitations and you should be mindful of those, but it also offers a secret feature which proves useful.

An improvement to this workflow could include a GET request to a CRM or other db to get info on the user interacting with the voice agent. This could reduce some of the data collection needed from the voice agent, like if you already have the user's email address, for example. I believe you can also get the user's phone number if the voice agent is set up on a dial-in interface, so then the agent wouldn't need to ask for it. This all depends on your use case. A savvy step might be prompting the voice agent to get an email, and using the email in this workflow to pull enrichment data from Apollo.io or similar ;-)
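A minimal sketch of the validation step, e.g. as an n8n Code node; the required-field list mirrors the example body above and should be adjusted to your event type:

```javascript
// Validate the webhook body before booking (field names mirror the example above).
const required = ['attendee_name', 'start', 'attendee_timezone', 'eventTypeId', 'attendee_email'];
const body = items[0].json.body ?? {};

const missing = required.filter((field) => body[field] === undefined || body[field] === '');

if (missing.length > 0) {
  // Route this branch to a Respond to Webhook node returning 400; remember that
  // ElevenLabs only sees the status code, not the missing-field details.
  return [{ json: { valid: false, missing } }];
}
return [{ json: { valid: true, ...body } }];
```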
by David w/ SimpleGrow
This n8n workflow tracks user engagement in a specific WhatsApp group by capturing incoming messages via a Whapi webhook. It first filters messages to ensure they come from the correct group, then identifies the message type—text, emoji reaction, voice, or image. The workflow searches for the user in an Airtable database using their WhatsApp ID and increments their message count by one. It updates the Airtable record with the new count and the date of the last interaction. This automated process helps measure user activity and supports engagement initiatives like weekly raffles or rewards. The system is flexible and can be expanded to include more message types or additional actions. Overall, it provides a seamless way to encourage and track user participation in your WhatsApp community.
by Baptiste Fort
## Who is it for?

This workflow is perfect for marketers, sales teams, agencies, and local businesses who want to save time by automating lead generation from Google Maps. It's ideal for real estate agencies, restaurants, service providers, and any local niche that needs a clean database of fresh contacts, including emails, websites, and phone numbers.

## ✅ Prerequisites

Before starting, make sure you have:

- **Apify account** → to scrape Google Maps data
- **OpenAI API key** → for GPT-4 email extraction
- **Airtable account & base** → for structured lead storage
- **Gmail account with OAuth** → to send personalized outreach emails

Your Airtable base should have these columns:

| Title                    | Street                  | Website           | Phone Number   | Email             | URL                 |
|--------------------------|-------------------------|-------------------|----------------|-------------------|---------------------|
| Paris Real Estate Agency | 10 Rue de Rivoli, Paris | https://agency.fr | +33 1 23 45 67 | contact@agency.fr | maps.google.com/... |

## 🏡 Example Use Case

To keep things clear, we'll use real estate agencies in Paris as an example. But you can replace this with restaurants, plumbers, lawyers, or even hamster trainers (you never know).

## 🔄 How the workflow works

1. Scrape Google Maps leads with Apify
2. Clean & structure the data (name, phone, website)
3. Visit each website & extract emails with GPT-4
4. Save all leads into Airtable
5. Automatically send a personalized email via Gmail

This works for any industry, keyword, or location.

## Step 1 – Scraping Google Maps with Apify

Start simply:

1. Open your n8n workflow and choose the trigger: "Execute Workflow" (manual trigger).
2. Add an HTTP Request node (POST method).
3. Head over to the Apify Google Maps Extractor and fill in the fields according to your needs:
   - Keyword: e.g., "real estate agency" (or restaurant, plumber...)
   - Location: "Paris, France"
   - Number of results: 50 (or more)
   - Optional: filters (with/without a website, by categories…)
4. Click Run to test the scraper.
5. Click API → select the API endpoints tab.
6. Choose "Run Actor synchronously and get dataset items".
7. Copy the URL, go back to n8n, and paste it into your HTTP Request node (URL field). Then enable:
   - Body Content Type → JSON
   - Specify Body Using JSON
8. Go back to Apify, click the JSON tab, copy everything, and paste it into the JSON field of your HTTP Request.

If you now run your workflow, you'll get a nicely structured table filled with Google Maps data. Pretty magical already—but we're just getting started!

## Step 2 – Cleaning Things Up (Edit Fields)

Raw data is cool, but messy. Add an Edit Fields node next, using Manual Mapping mode. Here's what you keep (copy-paste friendly):

- Title → {{ $json.title }}
- Address → {{ $json.address }}
- Website → {{ $json.website }}
- Phone → {{ $json.phone }}
- URL → {{ $json.url }}

Now you have a clean, readable table ready to use.

## Step 3 – Handling Each Contact Individually (Loop Over Items)

Next, we process each contact one by one. Add the Loop Over Items node and set Batch Size to 20 or more, depending on your needs. This node is simple but crucial to avoid traffic jams in the automation.

## Step 4 – Isolating Websites (Edit Fields again)

Add another Edit Fields node (Manual Mapping). This time, keep just:

- Website → {{ $json.website }}

We've isolated the websites for the next step: scraping them one by one.

## Step 5 – Scraping Each Website (HTTP Request)

Now we send our little robot to visit each website automatically.
Add another HTTP Request node:

- Method: GET
- URL: {{ $json.website }} (from the previous node)

This returns the raw HTML content of each site. Yes, it's ugly, but we won't handle it manually. We'll leave the next step to AI!

## Step 6 – Extracting Emails with ChatGPT

We now use OpenAI (Message a Model) to politely ask GPT to extract only the relevant emails. Configure as follows:

- Model: GPT-4.1-mini or higher
- Operation: Message a Model
- Simplify Output: ON

Prompt to copy-paste:

> Look at this website content and extract only the email I can contact this business. In your output, provide only the email and nothing else. Ideally, this email should be of the business owner, so if you have 2 or more options, try for the most authoritative one. If you don't find any email, output 'Null'. Exemplary output of yours: name@examplewebsite.com
> {{ $json.data }}

ChatGPT will kindly return the perfect email address (or 'Null' if none is found).

## Step 7 – Neatly Store Everything in Airtable

Almost done! Add an Airtable → Create Record node. Fill your Airtable fields like this:

| Airtable Field | Content                         | n8n Variable                             |
|----------------|---------------------------------|------------------------------------------|
| Title          | Business name                   | {{ $('Edit Fields').item.json.Title }}   |
| Street         | Full address                    | {{ $('Edit Fields').item.json.Address }} |
| Website        | Website URL                     | {{ $('Edit Fields').item.json.Website }} |
| Phone Number   | Phone number                    | {{ $('Edit Fields').item.json.Phone }}   |
| Email          | Email retrieved by the AI agent | {{ $json.message.content }}              |
| URL            | Google Maps link                | {{ $('Edit Fields').item.json.URL }}     |

Now you have a tidy Airtable database filled with fresh leads, ready for action.

## Step 8 – Automated Email via Gmail (The Final Touch)

To finalize the workflow, add a Gmail → Send Email node after your Airtable node. Here's how to configure this node using the data pulled directly from your Airtable base (from the previous step):

- Recipient (To): the email stored in Airtable ({{ $json.fields.Email }})
- Subject: use the company name stored in Airtable ({{ $json.fields.Title }}) to personalize the subject line
- Body: you can include several fields directly from Airtable, such as:
  - Company name: {{ $json.fields.Title }}
  - Website URL: {{ $json.fields.Website }}
  - Phone number: {{ $json.fields["Phone Number"] }}
  - Link to the Google Maps listing: {{ $json.fields.URL }}

All of this data is available in Airtable because it was automatically inserted in the previous step (Step 7). This ensures that each email sent is fully personalized and based on clear, reliable, and structured information. A possible body template is sketched below.
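By way of illustration, a possible Gmail body template using those Airtable fields; the copy itself is a made-up example, only the expressions come from the mapping above:

```
Hi,

I just came across {{ $json.fields.Title }} ({{ $json.fields.Website }}) on Google Maps
({{ $json.fields.URL }}) and wanted to reach out.

[Your pitch here]

Best regards,
[Your name]
```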
by David Olusola
## Overview

A comprehensive educational workflow that demonstrates practical JavaScript usage in n8n's Code node through real-world business scenarios. Perfect for learning data manipulation, transformation, and automation patterns that you can immediately apply to client projects.

## What This Template Teaches

- **Data Filtering & Transformation** - Filter employees by age, calculate bonuses, format contact information
- **Statistical Analysis** - Generate team statistics, averages, role distributions, and KPIs
- **Multi-Format Export** - Create CSV files, email lists, and API-ready payloads from raw data
- **n8n Best Practices** - Proper JSON handling, return formats, and data flow patterns

## How It Works

1. A Manual Trigger starts the workflow with sample employee data.
2. Set Sample Data provides realistic business data (employees with roles, salaries, ages).
3. Three Code node examples process the same data differently:
   - **Filter & Transform**: Creates an adult employee list with calculated bonuses
   - **Calculate Stats**: Generates comprehensive team analytics and reports
   - **Format for Export**: Prepares data for external systems (APIs, emails, CSV)

## Key Learning Points

- Access input data using items[0].json.propertyName
- Return the proper n8n format with the [{ json: data }] structure
- Use JSON.parse() for string-to-object conversion
- Apply JavaScript array methods (filter, map, reduce) for data processing
- Handle multiple output scenarios and data aggregation

A sketch of the first pattern follows this description.

## Perfect For

- n8n beginners learning Code node fundamentals
- Developers transitioning to n8n automation
- Client demos showing data processing capabilities
- Team training and onboarding sessions
- Foundation for building custom business automation workflows

## Business Use Cases

Transform this template for lead qualification, customer segmentation, report generation, data enrichment, and API integrations. Each Code node pattern can be adapted for different industries and automation needs.
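For instance, a hedged sketch of the Filter & Transform pattern; the field names (age, salary, name, email) are assumptions about the sample data:

```javascript
// n8n Code node: filter employees by age and add a calculated bonus.
// Field names (age, salary, name, email) are assumptions about the sample data.
const employees = items.map((item) => item.json);

const adults = employees
  .filter((emp) => emp.age >= 18)                    // keep adult employees only
  .map((emp) => ({
    ...emp,
    bonus: Math.round(emp.salary * 0.1),             // 10% bonus, an illustrative rule
    contact: `${emp.name} <${emp.email ?? 'n/a'}>`,  // formatted contact string
  }));

// Return in n8n's expected shape: an array of { json } items.
return adults.map((emp) => ({ json: emp }));
```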
by Jonathan
This workflow will take a timer entry from Clockify and submit it to a matching ticket in Syncro. It saves the time entry ID from Clockify and the time entry ID from Syncro into a Google Sheet. Then it checks whether a match already exists from a previous update and updates the same time entry if the description or time is changed in Clockify.

There is a Set node with the names and Syncro IDs of technicians (sketched below). If you have multiple technicians with the same name, this won't work for you. Likewise, if the name in Clockify doesn't exactly match what you put in the Set node, it won't work.

You also need to set up a webhook in Clockify set to trigger on "Time entry updated (anyone)" and pointed at your workflow. Configured this way, you can start and stop time entries at will and it won't do anything until you change the description.

> This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
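A hypothetical shape for that Set node's name-to-ID mapping (the names and IDs here are made up; use your technicians' exact Clockify names and real Syncro IDs):

```json
{
  "Jane Doe": 101,
  "John Smith": 102
}
```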
by n8n Team
## Who this template is for

This template is for everyone who needs to work with XML data a lot and wants to convert it to JSON instead.

## Use case

Many products still use XML files as their main format. Unfortunately, not every piece of software still supports XML, as many switched to more modern storage formats such as JSON. This workflow is designed to handle the conversion of XML data to JSON format via a webhook call, with error handling and Slack notifications integrated into the process.

## How this workflow works

1. **Triggering the workflow**: This workflow initiates upon receiving an HTTP POST request at the webhook endpoint specified in the "POST" node. The endpoint can be accessed externally by sending a POST request to that URL.
2. **Data routing and processing**: Upon receiving the POST request, the Switch node routes the workflow's path based on conditions determined by the content type of the incoming data or any encountered errors. The Extract From File and Edit Fields (Set) nodes manage XML input processing, adapting their actions according to the data's content type.
3. **XML to JSON conversion**: The XML data extracted from the input is passed through the "XML" node, which performs the conversion, transforming it into JSON format (see the sketch below).
4. **Response handling**: If the XML-to-JSON conversion is successful, a success response is sent back with a status of "OK" and the converted JSON data. If there are any errors during the conversion, an error response is sent back with a status of "error" and an error message.
5. **Error handling**: In case of an error during processing, the workflow sends a notification to a Slack channel designated for error reporting.

## Set up steps

1. Set up your own webhook path in the Webhook node. While building or testing a workflow, use a test webhook URL. When your workflow is ready, switch to using the production webhook URL.
2. Set credentials for Slack.
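A small sketch of the kind of conversion the XML node performs, using xml2js (which, to my knowledge, is the library behind n8n's XML node; treat that as an assumption):

```javascript
// XML-to-JSON conversion with xml2js (assumed to match what n8n's XML node does).
const { parseStringPromise } = require('xml2js');

const xml = '<order id="42"><item qty="2">Widget</item></order>';

parseStringPromise(xml, { explicitArray: false }).then((json) => {
  console.log(JSON.stringify(json));
  // → {"order":{"$":{"id":"42"},"item":{"_":"Widget","$":{"qty":"2"}}}}
});
```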