by tanaypant
This is Workflow 1 in the blog tutorial Database activity monitoring and alerting. Prerequisites: a Postgres database and its credentials, plus basic knowledge of JavaScript and SQL. Nodes: the Cron node starts the workflow every minute; the Function node generates sensor data (a preset sensor ID, a randomly generated value, a timestamp, and a notification flag preset to false); the Postgres node inserts the data into a Postgres database. You can create the table for this workflow with the following SQL statement: CREATE TABLE n8n (id SERIAL, sensor_id VARCHAR, value INT, time_stamp TIMESTAMP, notification BOOLEAN);
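As an illustration, the Function node's logic could look like the following sketch (the sensor ID is an invented example; the field names follow the CREATE TABLE statement above):

```javascript
// Minimal sketch of the Function node, assuming the column names from
// the CREATE TABLE statement; "sensor-001" is a made-up example ID.
function generateReading() {
  return {
    sensor_id: 'sensor-001',                // preset sensor identifier
    value: Math.floor(Math.random() * 100), // random integer reading (0-99)
    time_stamp: new Date().toISOString(),   // current timestamp
    notification: false,                    // preset to false
  };
}

// In n8n, a Function node would return [{ json: generateReading() }].
console.log(generateReading());
```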
by Raquel Giugliano
This minimal utility workflow connects to the SAP Business One Service Layer API to verify login credentials and return the session ID. It's ideal for testing access or for use as a sub-workflow that retrieves the B1SESSION token for other operations. ++⚙️ HOW IT WORKS:++ 🔹 1. Trigger Manually The workflow is initiated using a Manual Trigger. Ideal for testing or debugging credentials before automation. 🔹 2. Set SAP Login Data The Set Login Data node defines four key input variables: sap_url: Base URL of the SAP B1 Service Layer (e.g. https://sap-server:50000/b1s/v1/) sap_username: SAP B1 username sap_password: SAP B1 password sap_companydb: SAP B1 Company DB name 🔹 3. Connect to SAP An HTTP Request node performs a POST to the Login endpoint. The body is structured as: { "UserName": "your_sap_username", "Password": "your_sap_password", "CompanyDB": "your_sap_companydb" } If successful, the response contains a SessionId, which is essential for authenticated requests. 🔹 4. Return Session or Error The response is branched: On success → the sessionID is extracted and returned. On failure → the error message and status code are stored separately. ++🛠 SETUP STEPS:++ 1️⃣ Create SAP Service Layer Credentials Although this workflow uses manual inputs (via Set), it's best to define your connection details as environment variables for reuse: SAP_URL=https://your-sap-host:50000/b1s/v1/ SAP_USER=your_sapuser SAP_PASSWORD=your_password SAP_COMPANY_DB=your_companyDB Alternatively, update the Set Login Data node directly with your values. 2️⃣ Run the Workflow Click "Execute Workflow" in n8n. Watch the response from SAP: If successful: sessionID will be available in the Success node. If it failed: statusCode and errorMessage will be available in the Failed node. ++✅ USE CASES:++ 🔄 Reusable Login Module Export this as a reusable sub-workflow for other SAP-integrated flows. 🔐 Credential Testing Tool Validate new environments and test credentials before deployment.
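For illustration, the login payload and session extraction can be sketched like this (the values are placeholders; in the workflow itself, the HTTP Request node performs the actual POST):

```javascript
// Sketch of the Login request body and response handling. The payload
// keys follow the Service Layer Login body shown above; the values and
// the error handling shape are illustrative only.
function buildLoginBody(userName, password, companyDb) {
  return JSON.stringify({
    UserName: userName,
    Password: password,
    CompanyDB: companyDb,
  });
}

// On success, the Service Layer response includes a SessionId field.
function extractSessionId(response) {
  return response && response.SessionId ? response.SessionId : null;
}

console.log(buildLoginBody('manager', 'secret', 'SBODEMOUS'));
```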
by Miquel Colomer
This workflow is useful if you have lots of tasks running daily. The MySQL node (or whichever database n8n uses to store its data: it could be Mongo, Postgres, ...) removes old entries from the execution_entity table, which contains the history of executed workflows. If a task is executed every minute, 1,440 rows will be created every day (60 minutes x 24 hours) per task. This makes the table grow quickly. The SQL query deletes entries older than 30 days, using the stoppedAt column as the reference for date calculations. You only have to set up the MySQL connection properly and configure the Cron node to execute once per day during a low-traffic hour, so the cleanup does not interfere with regular workflow executions.
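The cleanup query could look like the following sketch (MySQL syntax is assumed; verify the table and column names against your own n8n database schema before running it):

```sql
-- Delete execution history older than 30 days, using stoppedAt as the
-- reference date. Assumes MySQL; adjust the syntax for Postgres/Mongo.
DELETE FROM execution_entity
WHERE stoppedAt < DATE_SUB(NOW(), INTERVAL 30 DAY);
```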
by Francis Njenga
AI Content Generator Workflow Introduction This workflow automates the process of creating high-quality articles using AI, organizing them in Google Drive, and tracking their progress in Google Sheets. It's perfect for marketers, bloggers, and businesses looking to streamline content creation. With minimal setup, you can have a fully operational system to generate, save, and manage your articles in one cohesive workflow. How It Works Collect Inputs: Users fill out a form with details like article title, keywords, and instructions. Generate Content: AI creates an outline and writes the article based on user inputs. Organize Files: Saves the outline and final article in Google Drive for easy access. Track Progress: Updates Google Sheets with links to the generated content for tracking. Set Up Steps Time Required: Approximately 15–20 minutes to connect all integrations and test the workflow. Steps: Connect Google Drive and Google Sheets: Authorize access to store files and update the spreadsheet. Set Up OpenAI Integration: Add your OpenAI API key for generating the outline and article content. Customize the Form: Modify the form fields to match the details you want to collect for each article. Test the Workflow: Run the workflow with sample inputs to ensure everything works smoothly. This workflow not only simplifies the process of article creation but also sets a foundation for expanding into additional automations, like posting to social media platforms.
by Askan
The news site of Colt, a telecom company, does not offer an RSS feed, so web scraping is the method of choice to extract and process the news. The goal is to get only the newest posts, a summary of each post, and their respective (technical) keywords. Note that the news site offers the links to each news post, but not the individual news items. We first collect the links and dates of each post before extracting the newest ones. The result is sent to a SQL database, in this case a NocoDB database. This process happens each week through a cron job. Requirements: Basic understanding of CSS selectors and how to get them via the browser (usually: right click → inspect) A ChatGPT API account - a normal account is not sufficient A NocoDB database - of course, you may choose any type of output target Assumptions: The CSS selectors work on the news site The post has a date with its own CSS selector, meaning the date is not part of the news content Warnings: Not every site likes to be scraped, especially not at high frequency Each website is structured differently, so the workflow may need several adaptations.
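The "extract only the newest posts" step can be sketched like this (the field names are illustrative, not the exact output of the extraction nodes):

```javascript
// Illustrative sketch: once links and dates have been scraped, keep
// only the posts newer than the last run, newest first.
function filterNewPosts(posts, lastRunDate) {
  return posts
    .filter((p) => new Date(p.date) > lastRunDate)
    .sort((a, b) => new Date(b.date) - new Date(a.date)); // newest first
}

const scraped = [
  { link: '/news/new-post', date: '2024-05-01' },
  { link: '/news/old-post', date: '2024-04-01' },
];
console.log(filterNewPosts(scraped, new Date('2024-04-15')));
```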
by n8n Team
This n8n workflow is designed to analyze email headers received via a webhook. The workflow splits into two main paths based on the presence of the Received and Authentication-Results headers. In the first path, if Received headers are present, the workflow extracts IP addresses from these headers and then queries the IP Quality Score API to gather information about the IP addresses, including fraud score, abuse history, organization, and more. Geolocation data is also obtained from the IP-API API. The workflow collects and aggregates this information for each IP address. In the second path, if Authentication-Results headers are present, the workflow extracts SPF, DKIM, and DMARC authentication results. It then evaluates these results and sets fields accordingly (e.g., SPF pass/fail/neutral). The paths merge their results, and the workflow responds to the original webhook with the aggregated analysis, including IP information and authentication results. Potential issues during setup include ensuring proper configuration of the webhook calls with header authentication, handling authentication and API keys for the IP Quality Score API, and addressing any discrepancies or errors in the logic nodes, such as handling SPF, DKIM, and DMARC results correctly. Additionally, thorough testing with various email header formats is essential to ensure accurate analysis and response.
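As an illustration, the Authentication-Results parsing in the second path might look like this sketch (real-world headers vary considerably; this covers only the common "mechanism=result" form):

```javascript
// Sketch of extracting SPF/DKIM/DMARC verdicts from an
// Authentication-Results header value. Real headers vary widely; this
// only handles the common "mechanism=result" pattern.
function parseAuthResults(header) {
  const verdicts = {};
  for (const mech of ['spf', 'dkim', 'dmarc']) {
    const match = header.match(new RegExp(`${mech}=(\\w+)`, 'i'));
    verdicts[mech] = match ? match[1].toLowerCase() : 'none';
  }
  return verdicts;
}

const sample =
  'mx.example.com; spf=pass smtp.mailfrom=example.org; dkim=pass header.d=example.org; dmarc=fail';
console.log(parseAuthResults(sample)); // → { spf: 'pass', dkim: 'pass', dmarc: 'fail' }
```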
by Hybroht
Using the Mistral API, you can use this n8n workflow to automate the process of collecting, filtering, analyzing, and summarizing news articles from multiple sources. The sources come from pre-built RSS feeds and a custom DuckDuckGo node, which you can change if you need. It will deliver the most relevant news of the day in a concise manner. ++How It Works++ The workflow begins each weekday at noon. News items are gathered from RSS feeds and a custom DuckDuckGo node, using HTTP GET requests when needed. Items not from today or containing unwanted keywords are filtered out. The first AI Agent selects the top news items from their titles alone and generates a general title & summary. The next AI Agent summarizes the full content of the selected top news articles. The general summary and title are combined with the top 10 news summaries into a final output. ++Requirements++ An active n8n instance (self-hosted or cloud). Install the custom DuckDuckGo node: n8n-nodes-duckduckgo-search A Mistral API key Configure the Sub-Workflow for the content that requires HTTP GET requests. It is provided in the template itself. ++Fair Notice++ This is an older version of the template. There is a superior updated version that isn't restricted to tech news, with enhanced capabilities such as communication through different channels (email, social media) and advanced keyword filtering. It was recently published on n8n. You can find it here. If you are interested or would like to discuss specific needs, feel free to contact us.
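The filtering step described above can be sketched as follows (the field names title/isoDate and the keyword list are assumptions for this sketch; check the actual output of your RSS node):

```javascript
// Illustrative sketch of the filter: keep only items published today
// whose titles contain none of the unwanted keywords.
function filterItems(items, blockedKeywords, today = new Date()) {
  const sameDay = (d) =>
    d.getUTCFullYear() === today.getUTCFullYear() &&
    d.getUTCMonth() === today.getUTCMonth() &&
    d.getUTCDate() === today.getUTCDate();
  return items.filter(
    (item) =>
      sameDay(new Date(item.isoDate)) &&
      !blockedKeywords.some((kw) => item.title.toLowerCase().includes(kw))
  );
}

const items = [
  { title: 'Chip breakthrough announced', isoDate: '2024-05-01T08:00:00Z' },
  { title: 'Yesterday in tech', isoDate: '2024-04-30T08:00:00Z' },
  { title: 'Celebrity gossip roundup', isoDate: '2024-05-01T09:00:00Z' },
];
console.log(filterItems(items, ['gossip'], new Date('2024-05-01T12:00:00Z')));
```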
by Giulio
This n8n workflow template allows you to create a CRUD endpoint that performs the following actions: Create a new record Get a record Get many records Update a record Delete a record This template is connected with Airtable, but you can replace the Airtable nodes with anything you need to interact with (e.g. Postgres, MySQL, Notion, Coda...). The template uses the n8n Webhook node setting 'Allow Multiple HTTP Methods' to enable multiple HTTP methods on the same node. Features Just two nodes to create 5 endpoints Use it with Airtable or replace the Airtable nodes for your own customization Add your custom logic exploiting all n8n's possibilities Workflow Steps Webhook node: exposes the endpoints to get many records and create a new record Webhook (with ID) node: exposes the endpoints to get, update, and delete a record. Due to an n8n limitation, this endpoint will have an additional code in the path (e.g. https://my.app.n8n.cloud/webhook/580ccc56-f308-4b64-961d-38323501a170/customers/:id). Keep this in mind when using these endpoints in your application Various Airtable nodes: execute various specific operations to interact with Airtable records Getting Started To deploy and use this template: Import the workflow into your n8n workspace Customize the endpoint paths by tweaking the 'Path' parameters in the 'Webhook' and 'Webhook (with ID)' nodes (currently customers) Set up your Airtable credentials by following this guide and customize the Airtable nodes by selecting your base, table, and the correct fields to update. ...or... replace the Airtable nodes and connect the endpoint to any other service (e.g. Postgres, MySQL, Notion, Coda) How to use the workflow Activate the workflow Connect your app to the endpoints (production URLs) to perform the various operations allowed by the workflow Note that the Webhook nodes have two URLs, one for testing and one for production. 
The testing URL is activated when you click the 'Test workflow' button and can't be used for production. The production URL is available after you activate the workflow. More info here. Feel free to get in touch with me if you have questions about this workflow.
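For illustration, the five endpoints a client application would call can be sketched like this (the base URL and ID path reuse the examples above; the update verb shown here, PATCH, depends on which HTTP methods you enable on the Webhook (with ID) node):

```javascript
// Illustrative map of the five endpoints exposed by the two Webhook
// nodes, using the default "customers" path and the example URLs from
// the description. The update verb is an assumption (PATCH vs PUT).
const BASE = 'https://my.app.n8n.cloud/webhook';
const WITH_ID = `${BASE}/580ccc56-f308-4b64-961d-38323501a170/customers`;

const endpoints = {
  list:   { method: 'GET',  url: `${BASE}/customers` },  // get many records
  create: { method: 'POST', url: `${BASE}/customers` },  // create a record
  read:   (id) => ({ method: 'GET',    url: `${WITH_ID}/${id}` }),
  update: (id) => ({ method: 'PATCH',  url: `${WITH_ID}/${id}` }),
  remove: (id) => ({ method: 'DELETE', url: `${WITH_ID}/${id}` }),
};

console.log(endpoints.read('rec123').url);
```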
by Greg Lopez
Workflow Information 📌 Purpose 🎯 The intention of this workflow is to integrate new Shopify orders into MS Dynamics Business Central: Point-of-Sale (POS): POS orders will be created in Business Central as Sales Invoices, given that no fulfillment is expected. Web Orders: These orders will be created as Business Central Sales Orders. How to use it 🚀 Edit the "D365 BC Environment Settings" node with your own account values (Company ID, Tenant ID, Tax & Discount Items). Go to the "Shopify" node and edit the connection with your environment. More help here. Go to the "Lookup Customers" node to edit the Business Central connection details with your environment settings. Set the required filters on the "Shopify Order Filter" node. Edit the "Schedule Trigger" node with the required frequency. Useful Workflow Links 📚 Step-by-step Guide / Integro Cloud Solutions Business Central REST API Documentation Video Demo Need Help? Contact me at: ✉️greg.lopez@integrocloudsolutions.com 📥 https://www.linkedin.com/in/greg-lopez-08b5071b/
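The POS-versus-web routing rule can be sketched like this (the source_name values are assumptions for this sketch; check the values your Shopify orders actually carry):

```javascript
// Hypothetical sketch of the routing rule: POS orders become Sales
// Invoices (no fulfillment expected), everything else becomes a Sales
// Order. The 'pos' source_name value is an assumption, not confirmed
// by the template description.
function targetDocumentType(order) {
  return order.source_name === 'pos' ? 'salesInvoice' : 'salesOrder';
}

console.log(targetDocumentType({ source_name: 'pos' })); // → salesInvoice
```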
by InfraNodus
Build a Better AI Chatbot for Your Zendesk Knowledge Portal Simple setup, no vector database needed. Uses GraphRAG to enhance users' prompts and provide high-quality, relevant, up-to-date responses from your Zendesk knowledge base. Can be embedded on your Zendesk portal and is also accessible via a URL. Can be customized and branded in your style. See the example at support.noduslabs.com or the screenshot below. Also, compare it to the original Zendesk AI chatbot available at our other website https://infranodus.com — you will see that the quality of responses in this custom chatbot is much better than in the native Zendesk one, plus you save on subscription costs because you won't need to activate their chat option, which is $25 per agent. Workflow Overview In this workflow, we use the n8n AI Agent node with a custom prompt that: 1) First consults an "expert" graph from the InfraNodus GraphRAG system using the official InfraNodus GraphRAG node, which extracts a reasoning ontology and a general context about your product from the graph that you create manually or automatically, as described on our support portal. 2) The augmented user prompt is converted by the AI Agent node into a Zendesk search query that retrieves the most relevant content using their search API via the n8n HTTP node. Both the results from the graph and the search results are combined and shown to the user. How it works Receives a request from a user via a webhook that connects to the custom n8n chat widget. The request goes to the AI Agent node from n8n with a custom prompt (provided in the workflow) that orchestrates the following procedure: Sends the request to the knowledge graph in your InfraNodus account using the official InfraNodus GraphRAG node that contains a reasoning ontology represented as a knowledge graph based on your Zendesk knowledge support portal. Read more on how to generate this ontology here. 
Based on the results from InfraNodus, it reformulates the original prompt to include the reasoning logic as well as provide a fuller context to the model. Sends the request to the Zendesk search API using the n8n custom HTTP node with an enhanced search query to retrieve high-quality results. Combines Zendesk search results with the InfraNodus ontology to generate a final response to the user. Sends the response back to the webhook, which is then picked up by the n8n chat widget that is shown to the user wherever the widget is embedded (e.g. on your own support portal). How to use • Get an InfraNodus API key and add it to the InfraNodus GraphRAG node. • Edit the InfraNodus Graph node to provide the name of the graph that you will be using as the ontology (you need to create it in InfraNodus first). • Edit the AI Agent (Support Agent) prompt to modify our custom instructions for your particular use case (do not change it too much, as it works quite well and tells the agent what it should do and in what sequence). • Add the API key for your Zendesk account. To get it, go to your support portal Admin > Apps & Integrations > API Tokens. Usually it's located at https://noduslabs.zendesk.com/admin/apps-integrations/apis/api-tokens, where instead of noduslabs you need to put the name of your support portal. Note: the official n8n Zendesk node does not have an endpoint to search and extract articles from the support portal, so we use the custom HTTP node, but you can still connect to it via the Zendesk API key you have installed in your n8n. Support & Tutorials If you want to create your own reasoning ontology graphs, please refer to this article on generating your own knowledge graph ontologies. Specifically for this use case: Building ontology for your n8n AI chat bot. 
You may also want to watch this video, which explains the logic of this approach in detail. Our support article for this workflow, with a real-life example: Building an embeddable AI chatbot agent for your Zendesk knowledge portal. To get support and help, contact us via support.noduslabs.com Learn more about InfraNodus at www.infranodus.com
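For reference, the Zendesk Help Center search call made by the custom HTTP node can be sketched like this (the subdomain reuses the example above; verify the endpoint path and token-auth format against Zendesk's API documentation):

```javascript
// Sketch of the Help Center article-search request. The endpoint path
// and the "email/token:api_token" Basic-auth format follow Zendesk's
// documented conventions; treat both as assumptions to verify.
function buildZendeskSearch(subdomain, query, email, apiToken) {
  const url =
    `https://${subdomain}.zendesk.com/api/v2/help_center/articles/search` +
    `?query=${encodeURIComponent(query)}`;
  return {
    url,
    headers: {
      // API-token auth: "email/token:api_token", base64-encoded.
      Authorization:
        'Basic ' + Buffer.from(`${email}/token:${apiToken}`).toString('base64'),
    },
  };
}

console.log(buildZendeskSearch('noduslabs', 'reset password', 'me@example.com', 'TOKEN').url);
```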
by Incrementors
Description A natural conversational AI chatbot that collects lead information (Name, Phone, Email, Message) one question at a time without feeling like a form. Uses session-based memory to track conversations, intelligently asks only for missing details, and saves complete leads to Google Sheets automatically. What this workflow does This workflow creates a human-like booking assistant that gathers lead information through natural conversation instead of traditional forms. The AI chatbot asks ONE question at a time, remembers previous answers using session memory, never repeats questions, and only saves data to Google Sheets when all four required fields (Name, Phone Number, Email Address, User Message) are confidently collected. The conversation feels natural and friendly—users engage with the bot as if chatting with a real person, dramatically improving completion rates compared to static forms. Perfect for booking systems, consultation requests, event registrations, customer support intake, or any scenario where you need to collect contact information without friction. Key features One question at a time: The AI never overwhelms users with multiple questions. It asks for Name, then Phone, then Email, then Message—sequentially and naturally, based on what's still missing from the conversation. Session-based memory: Uses timestamp-based session tracking so the AI remembers the entire conversation context. If a user says "My name is John" in message 1, the AI won't ask for the name again in message 5. Smart field detection: The AI automatically detects which details have been collected and which are still missing. It adapts the conversation flow dynamically instead of following a rigid script. Natural language processing: Handles variations in user input ("John Doe", "I'm John", "Call me John") and validates data intelligently before saving. Complete data guarantee: Only writes to Google Sheets when all 4 required fields are present. 
No partial or incomplete leads clutter your tracking sheet. Webhook-based integration: Works with any website, app, or platform that can send HTTP requests. Integrate with chatbots, contact forms, booking widgets, or custom applications. Instant responses: Real-time conversation with sub-second response times. Users get immediate replies, maintaining engagement throughout the lead collection process. How it works 1. User initiates conversation via webhook A user sends a message through your website chat widget, contact form, or booking interface. This triggers a webhook that passes the message along with query parameters (name, email, phone, message, timestamp, source) to n8n. 2. AI Agent analyzes conversation state The Conversational Lead Collection Agent receives the user's message and checks the current state: Which fields are already collected (from previous messages in this session)? Which fields are still missing? What should be asked next? The AI uses the system prompt to understand its role as a booking assistant for "Spark Writers' Retreat" and follows strict conversation rules. 3. Session memory tracks context The Buffer Window Memory node uses the timestamp from the webhook as a unique session ID. This allows the AI to: Remember all previous messages in this conversation Access previously collected information (name, phone, email) Never ask the same question twice Maintain conversation continuity even if the user takes breaks 4. One question at a time Based on what's missing, the AI asks exactly ONE question in natural, friendly language: If Name is missing → "Hi! What's your name?" If Phone is missing → "Great! And what's your phone number?" If Email is missing → "Perfect! Could you share your email address?" If Message is missing → "Thanks! How can I help you today?" The AI adapts its language based on previous conversation flow—it doesn't sound robotic or repetitive. 5. 
Data validation and collection As the user responds, the AI: Validates input (checks if phone number looks valid, email has @ symbol, etc.) Extracts the information from natural language responses Stores it temporarily in session memory Continues asking until all 4 fields are complete If the user provides unclear input, the AI politely asks again: "I didn't quite catch that. Could you share your phone number?" 6. Save to Google Sheets (when complete) Critical rule: The AI only uses the Google Sheets tool AFTER all four details are confidently collected. This prevents partial or incomplete leads from cluttering your database. When all fields are present, the AI: Writes exactly ONE row to Google Sheets Maps data: Name → Name, Phone → Phone No., Email → Email, Message → Message Uses Timestamp as the unique identifier (matching column) Updates existing rows if the same timestamp appears again (prevents duplicates) 7. Confirmation message After successfully saving, the AI sends a polite thank you: "Thank you! 🙏 We've received your details and our team will get back to you shortly." The AI never mentions Google Sheets, tools, backend systems, or automation—it maintains the illusion of human conversation. 8. Response delivery The final AI response is sent back to the user via the webhook response. Your website or app displays this message in the chat interface, completing the conversation loop. Setup requirements Tools you'll need: Active n8n instance (self-hosted or n8n Cloud) Google Sheets with OAuth access for lead storage OpenAI API key (GPT-4.1-mini access) Website or app with chat interface (or any platform that can send webhooks) Estimated setup time: 15–20 minutes Configuration steps 1. Connect Google Sheets In n8n: Credentials → Add credential → Google Sheets OAuth2 API Complete OAuth authentication Create a Google Sheet for lead tracking with these columns: Timestamp (unique session identifier) Name Phone No. 
Email Message Open "Save Lead to Google Sheets" node Select your Google Sheet and the correct sheet tab Verify column mapping matches your sheet structure 2. Add OpenAI API credentials Get API key: https://platform.openai.com/api-keys In n8n: Credentials → Add credential → OpenAI API Paste your API key Open "OpenAI GPT-4.1 Mini Language Model" node Select your OpenAI credential Ensure model is set to gpt-4.1-mini 3. Copy webhook URL Open "Receive User Message via Webhook" node Copy the Webhook URL (format: https://your-n8n.cloud/webhook/[webhook-id]) This is the endpoint your website or app will send messages to 4. Integrate with your chat interface You need to send HTTP POST/GET requests to the webhook URL with these query parameters: GET https://your-n8n.cloud/webhook/[id]?name=[name]&email=[email]&phone=[phone]&message=[user_message]&timestamp=[unique_timestamp]&source=[source] Query parameter details: name: User's name (empty string if not yet collected) email: User's email (empty string if not yet collected) phone: User's phone number (empty string if not yet collected) message: Current user message (required) timestamp: Unique session ID (use ISO timestamp or UUID) source: Source identifier (e.g., "website_chat", "booking_form") Example integration (JavaScript): const sessionId = new Date().toISOString(); const userMessage = "Hi, I want to book a retreat"; fetch(`https://your-n8n.cloud/webhook/[id]?message=${encodeURIComponent(userMessage)}&timestamp=${sessionId}&name=&email=&phone=&source=website_chat`) .then(res => res.json()) .then(data => { // Display AI response in your chat UI console.log(data.output); }); 5. Customize the AI assistant Open "Conversational Lead Collection Agent" node and edit the system message to: Change the business name (currently "Spark Writers' Retreat") Modify conversation tone (formal vs. casual) Adjust the fields being collected Change the final thank you message 6. 
Test the workflow Activate the workflow (toggle to Active at the top) Send a test message to the webhook URL Verify the AI responds appropriately Continue the conversation by sending follow-up messages with the same timestamp Check that: AI asks for missing fields only Session memory persists across messages Lead saves to Google Sheets when all 4 fields are collected Thank you message appears after saving Use cases Booking and reservations: Hotels, retreat centers, event venues, or appointment-based businesses collect guest details conversationally instead of long booking forms. Higher completion rates mean more confirmed bookings. Lead generation for services: Agencies, consultants, coaches, or freelancers capture qualified leads through natural conversation. Users are more likely to complete the process when it feels like chatting instead of form-filling. Customer support intake: Support teams collect issue details, contact information, and problem descriptions through chat before routing to the right agent. All data automatically logged in Google Sheets for ticketing. Event registration: Conference organizers, workshop hosts, or webinar providers gather attendee information without friction. The conversational approach encourages sign-ups even from mobile users who hate forms. Sales qualification: Sales teams use the chatbot to qualify leads by collecting basic information and understanding requirements before human handoff. Complete context stored in Google Sheets for CRM integration. Consultation requests: Professional services (legal, medical, financial) collect client details and initial consultation requests through friendly conversation, reducing no-show rates by building rapport early. 
Customization options Change collected fields Open "Conversational Lead Collection Agent" node and modify the system message: Add new fields (e.g., Company Name, Budget, Preferred Date) Remove optional fields (e.g., make Message optional) Update the field names and data mapping Then update the Google Sheets node to include the new columns. Adjust conversation tone In the system message, change conversation style: Formal: "May I please have your full name?" Casual: "What's your name?" Friendly: "Hey! What should I call you?" Add validation rules Enhance the system prompt with specific validation: Phone format (e.g., 10 digits, US format) Email domain restrictions (e.g., only business emails) Name length requirements Message minimum word count Connect to CRM or email After "Save Lead to Google Sheets" node, add: HTTP Request node to send data to your CRM API Email node to notify sales team of new leads Slack/Discord node for real-time team alerts Webhook node to trigger other workflows Multi-language support Modify the system prompt to respond in the user's language: Add language detection logic Translate questions and responses Update thank you message for each language Add conversation analytics Insert a Set node before saving to track: Number of messages per lead Time to completion Drop-off points Source performance Troubleshooting AI repeats questions already answered Memory not persisting: Verify the "Session Memory with Timestamp" node is using the correct timestamp from the webhook query params. Timestamp changing: Ensure your chat interface sends the SAME timestamp for all messages in one conversation. Generate it once and reuse it. Memory window size: Increase the buffer window size in the memory node if conversations are very long. Leads not saving to Google Sheets Partial data: The AI only saves when all 4 fields are collected. Check your test conversation actually provided all required information. 
OAuth expired: Re-authenticate Google Sheets credentials. Sheet permissions: Verify the connected Google account has edit access to the sheet. Column names mismatch: Ensure sheet column names exactly match the mapping in the Google Sheets node (case-sensitive). AI saves incomplete data System prompt not followed: Review the "Tool usage (VERY IMPORTANT)" section in the system message. Ensure it clearly states to only use Google Sheets after all fields are collected. Validation too lenient: The AI might be guessing missing fields. Strengthen validation rules in the system prompt. Webhook not receiving messages URL incorrect: Double-check the webhook URL in your integration code matches the n8n webhook URL exactly. CORS issues: If calling from a browser, ensure n8n allows cross-origin requests or use server-side integration. Query params missing: Verify all required parameters (message, timestamp) are included in the request. AI responses too slow OpenAI API latency: GPT-4.1-mini typically responds in 1-3 seconds. If slower, check OpenAI API status. Network delays: Verify n8n instance has good connectivity. Memory lookup slow: Reduce buffer window size if storing hundreds of messages. Session memory not working Timestamp format inconsistent: Use ISO format (e.g., 2026-01-28T14:38:23.720Z) and ensure it's identical across messages. Memory node misconfigured: Check the session key expression in "Session Memory with Timestamp" node references the correct webhook query param. Resources n8n documentation OpenAI GPT-4 API Google Sheets API n8n Webhook node n8n AI Agent Buffer Window Memory Support Need help or custom development? 📧 Email: info@incrementors.com 🌐 Website: https://www.incrementors.com/
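The troubleshooting advice above (generate the session timestamp once per conversation and reuse it for every message) can be sketched as:

```javascript
// Sketch: create the session timestamp once and reuse it for every
// message in the conversation, so the Buffer Window Memory sees one
// continuous session.
const session = { id: null };
function getSessionId() {
  if (!session.id) session.id = new Date().toISOString();
  return session.id;
}

console.log(getSessionId() === getSessionId()); // → true
```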
by Nishant Rayan
Create Video with HeyGen and Upload to YouTube Overview This workflow automates the process of creating an AI-generated avatar video using HeyGen and directly uploading it to YouTube. By sending text input via a webhook, the workflow generates a video with a chosen avatar and voice, waits for processing, downloads the completed file, and publishes it to your configured YouTube channel. This template is ideal for automating content creation pipelines, such as daily news updates, explainer videos, or narrated scripts, without manual intervention. Use Case Marketing teams: Automate explainer or promotional video creation from text input. Content creators: Generate AI-based avatar videos for YouTube directly from scripts. Organizations: Streamline video generation for announcements, product updates, or tutorials. Instead of recording and editing videos manually, this template allows you to feed text content into a webhook and have a ready-to-publish video on your YouTube channel within minutes. How It Works Webhook Trigger: The workflow starts when text content and a title are sent to the webhook endpoint. Code Node: Cleans and formats the input text by removing unnecessary newlines and returns it with the title. Set Node: Prepares HeyGen parameters, including API key, avatar ID, voice ID, title, and content. HeyGen API Call: Sends the request to generate a video with the provided avatar and voice. Wait Node: Pauses briefly to allow HeyGen to process the video. Video Status Check: Polls HeyGen to check whether the video has finished processing. Conditional Check: If the video is still processing, it loops back to wait. Once complete, it moves forward. Download Node: Retrieves the generated video file. YouTube Upload Node: Uploads the video to your YouTube channel with the provided title and default settings. Requirements HeyGen API Key: Required to authenticate with HeyGen’s video generation API. 
HeyGen Avatar & Voice IDs: Unique identifiers for the avatar and voice you want to use. YouTube OAuth2 Credentials: Connected account for video uploads. Setup Instructions Import the Workflow: Download and import this template JSON into your n8n instance. Configure the Webhook: Copy the webhook URL from n8n and use it to send requests with title and content. Example payload: { "title": "Tech News Update", "content": "Today’s top story is about AI advancements in video generation..." } Add HeyGen Credentials: Insert your HeyGen API key in the Set Node under x-api-key. Provide your chosen avatar_id and voice_id from HeyGen. To find your HeyGen avatar_id and voice_id, first retrieve your API key from the HeyGen dashboard. With this key, you can use HeyGen’s API to look up available options: run a GET request to https://api.heygen.com/v2/avatars to see a list of avatars along with their avatar_id, and then run a GET request to https://api.heygen.com/v2/voices to see a list of voices with their voice_id. Once you’ve identified the avatar and voice you want to use, copy their IDs and paste them into the Set HeyGen Parameters node in your n8n workflow. Set Up YouTube Credentials: Connect your YouTube account in n8n using OAuth2. Ensure proper permissions are granted for video uploads. To set up YouTube credentials in n8n, go to the Google Cloud Console, enable YouTube Data API v3, and create an OAuth Client ID (choose Web Application and add the redirect URI: https://<your-n8n-domain>/rest/oauth2-credential/callback). Copy the Client ID and Client Secret, then in n8n create new credentials for the YouTube OAuth2 API. Enter the values, authenticate with your Google account to grant upload permissions, and test the connection. Once complete, the YouTube node will be ready to upload videos automatically. Activate the Workflow: Once configured, enable the workflow. Sending a POST request to the webhook with title and content will trigger the full process. 
Notes You can adjust video dimensions (default: 1280x720) in the HeyGen API request. Processing time may vary depending on script length. The workflow uses a wait-and-poll loop until the video is ready. Default YouTube upload category is Education (28) and region is US. These can be customized in the YouTube node.
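The status-check branch of the wait-and-poll loop can be sketched like this (the status strings are assumptions based on the description; confirm them against HeyGen's actual status responses):

```javascript
// Sketch of the conditional check in the wait-and-poll loop. The
// 'completed'/'failed' status values are assumptions for this sketch,
// not confirmed HeyGen API values.
function nextStep(status) {
  if (status === 'completed') return 'download'; // proceed to the Download node
  if (status === 'failed') return 'abort';       // stop the workflow
  return 'wait';                                 // still processing → loop back to the Wait node
}

console.log(nextStep('processing')); // → wait
```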