by Artur
## What this workflow does

- **Monitors Google Drive**: The workflow triggers whenever a new CSV file is uploaded.
- **Uses AI to identify PII columns**: The OpenAI node analyzes the data and identifies PII-containing columns (e.g., name, email, phone).
- **Removes PII**: The workflow filters out these columns from the dataset (see the sketch at the end of this section).
- **Uploads the cleaned file**: The sanitized file is renamed and re-uploaded to Google Drive, ensuring the original data remains intact.

## How to customize this workflow to your needs

- **Adjust PII identification**: Modify the prompt in the OpenAI node to align with your specific data compliance requirements.
- **Include/exclude file types**: Adjust the Google Drive Trigger settings to monitor specific file types (e.g., CSV only).
- **Output destination**: Change the Google Drive folder where the sanitized file is uploaded.

## Setup

Prerequisites:

- A Google Drive account.
- An OpenAI API key.

Workflow configuration:

1. Configure the Google Drive Trigger to monitor a folder for new files.
2. Configure the OpenAI node with your API key.
3. Set the Google Drive upload folder to a different location than the trigger folder to prevent workflow loops.
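For reference, the column-removal step can be expressed in an n8n Code node. This is a minimal sketch, not the template's exact implementation; it assumes a prior node (here named "OpenAI") whose output has already been parsed into a `piiColumns` array of column names:

```javascript
// Drop every column the AI flagged as PII from each CSV row item.
// Assumes a prior node provides piiColumns, e.g. ["name", "email", "phone"].
const piiColumns = $('OpenAI').first().json.piiColumns;

return $input.all().map(item => {
  const cleaned = { ...item.json };
  for (const column of piiColumns) {
    delete cleaned[column]; // remove the flagged column from this row
  }
  return { json: cleaned };
});
```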
by Mauricio Perera
# n8n Workflow: Calculate the Centroid of a Set of Vectors

## Overview

This workflow receives an array of vectors in JSON format, validates that all vectors have the same dimensions, and computes the centroid. It is designed to be reusable across different projects.

## Workflow Structure

Nodes and their functions:

1. **Receive Vectors (Webhook)**: Accepts a GET request containing an array of vectors in the `vectors` parameter.
   - Expected input: `vectors` parameter in JSON format.
   - Example request: `/webhook/centroid?vectors=[[2,3,4],[4,5,6],[6,7,8]]`
   - Output: Passes the received data to the next node.
2. **Extract & Parse Vectors (Set node)**: Converts the input string into a proper JSON array for processing.
   - Ensures `vectors` is a valid array. If the parameter is missing, it may generate an error.
   - Expected output example: `{ "vectors": [[2,3,4],[4,5,6],[6,7,8]] }`
3. **Validate & Compute Centroid (Code node)**: Validates vector dimensions and calculates the centroid.
   - Validation: Ensures all vectors have the same number of dimensions.
   - Computation: Averages each dimension to determine the centroid.
   - If validation fails, returns an error message indicating inconsistent dimensions.
   - Successful output example: `{ "centroid": [4,5,6] }`
   - Error output example: `{ "error": "Vectors have inconsistent dimensions." }`
4. **Return Centroid Response (Respond to Webhook node)**: Sends the final response back to the client.
   - If the computation is successful, it returns the centroid; if an error occurs, it returns a descriptive error message.
   - Example response: `{ "centroid": [4, 5, 6] }`

## Inputs

A JSON array of vectors, where each vector is an array of numerical values.

Example input:

```json
{
  "vectors": [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
  ]
}
```

## Setup Guide

1. Create a new workflow in n8n.
2. Add a Webhook node (Receive Vectors) to receive JSON input.
3. Add a Set node (Extract & Parse Vectors) to extract and convert the data.
4. Add a Code node (Validate & Compute Centroid) to validate dimensions and compute the centroid.
5. Add a Respond to Webhook node (Return Centroid Response) to return the result.

## Function Node Script Example

```javascript
const input = items[0].json;
const vectors = input.vectors;

if (!Array.isArray(vectors) || vectors.length === 0) {
  return [{ json: { error: "Invalid input: Expected an array of vectors." } }];
}

const dimension = vectors[0].length;
if (!vectors.every(v => v.length === dimension)) {
  return [{ json: { error: "Vectors have inconsistent dimensions." } }];
}

const centroid = new Array(dimension).fill(0);
vectors.forEach(vector => {
  vector.forEach((val, index) => {
    centroid[index] += val;
  });
});

for (let i = 0; i < dimension; i++) {
  centroid[i] /= vectors.length;
}

return [{ json: { centroid } }];
```

## Testing

Use a tool like Postman or the n8n UI to send sample inputs and verify the responses. Modify the input vectors to test different scenarios.

This workflow provides a simple yet flexible solution for vector centroid computation, ensuring validation and reliability.
by Halfbit 🚀
# Jura Coffee Counter: Webhook API & Google Sheets Logger ☕️

Track how many coffees your Jura E8 espresso machine makes — fully automated via webhook and Google Sheets.

This workflow exposes a custom API endpoint that can be called by smart devices, such as an ESP8266 or ESP32 reading data from a Jura E8 coffee machine via Bluetooth Low Energy (BLE). The incoming data (including the total coffee count) is timestamped and appended to a Google Sheet, making it easy to visualize or analyze your machine usage.

☕ Originally built for a Jura E8, based on the AlexxIT/Jura reverse-engineering project.

> 📝 This workflow uses Google Sheets as a logging backend. You can easily switch it to Airtable, Notion, or a database of your choice. Live example available at: https://halfbitstudio.com/o-nas/
> 🖥️ In our setup, this workflow provides real-time coffee consumption stats displayed directly on our website.
> 🔌 Some Jura machines require an accessory Bluetooth transmitter to enable connectivity. Communication is based on the Bluetooth Low Energy (BLE) protocol.

## Use Case

- Tracking usage of a Jura coffee machine
- Logging IoT sensor data into Google Sheets
- Creating dashboards for daily consumption
- Smart office setups with coffee stats!

## Features

- ☁️ Two webhook endpoints:
  - POST `/{{WEBHOOK_POST_PATH}}` — receives JSON from the ESP (coffee machine reader)
  - GET `/{{WEBHOOK_GET_PATH}}` — returns the latest records as JSON
- 📅 Timestamping via a Date & Time node
- 🔹 Coffee counter extraction from the incoming JSON
- 🧾 Appends structured rows to Google Sheets
- 📤 Webhook response for external status checks or dashboards

## Setup Instructions

### Jura Coffee Machine Integration (Hardware)

1. Use an ESP device (e.g. ESP8266 or ESP32) to connect to the Jura E8 via Bluetooth Low Energy (BLE).
2. Send POST requests with a JSON payload: `{ "total_coffees": 123 }`
3. Reverse-engineered protocol reference: AlexxIT/Jura

### Google Sheets Configuration

1. Create a new Google Sheet with column headers like: `date | time | coffee counter`
2. Connect your Google account in n8n and authorize access to this sheet.
3. Replace the documentId and sheetName fields in the Google Sheets nodes:
   - Use the full URL to your spreadsheet
   - Use the actual sheet name (e.g. Sheet1)

### Environment Variables & Placeholders

| Placeholder | Description |
| --- | --- |
| `{{WEBHOOK_POST_PATH}}` | Endpoint to receive coffee counter data |
| `{{WEBHOOK_GET_PATH}}` | Endpoint to return the latest data (for dashboards) |
| `{{SHEET_ID}}` | Google Spreadsheet ID |
| `{{GOOGLE_CREDENTIALS}}` | OAuth2 credentials for Google Sheets |
| `{{DATA_COLUMNS}}` | Column names in the target sheet |

## Testing the Workflow

1. Send a test request: use Postman or the ESP to send a POST request to `/{{WEBHOOK_POST_PATH}}` with a body that includes a `total_coffees` value (see the snippet at the end of this section).
2. Check the Google Sheet: open your sheet and verify that a new row was appended.
3. Test the GET endpoint: access the second webhook URL (e.g. `/{{WEBHOOK_GET_PATH}}`) in a browser or fetch it via API.
4. Optional: use the Respond to Webhook output in a dashboard or frontend.

## Customization Tips

- **Sheet format**: Add more columns if you want to track additional data (e.g. machine temperature, errors).
- **Output format**: Replace Google Sheets with any other storage (e.g. MySQL, Notion).
- **Auth layer**: Add basic auth or token verification if needed for public exposure.
- **Notifications**: Send alerts to Discord/Slack when reaching thresholds (e.g. 200 coffees brewed).

Tags: google-sheets, iot, webhook, jura, coffee, api, automation
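If you want to exercise the POST endpoint without the hardware, a quick Node.js snippet like the following works (the URL is a placeholder for your own webhook path):

```javascript
// Simulate the ESP device: post a coffee counter reading to the n8n webhook.
// Replace the URL with your actual n8n webhook endpoint.
const WEBHOOK_URL = "https://your-n8n-instance.tld/webhook/coffee-counter";

async function sendReading(totalCoffees) {
  const response = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ total_coffees: totalCoffees }),
  });
  console.log("Webhook responded with status:", response.status);
}

sendReading(123);
```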
by Adrian Bent
This workflow takes two inputs: a YouTube video URL (required) and a description of what information to extract from the video. If the description/"what you want" field is left empty, the default prompt generates a detailed summary and description of the video's contents; you can ask for something more specific using this field.

++ Don't forget to make the workflow Active and use the production URL from the form node.

## Benefits

- **Instant summary generation**: Convert hours of watching YouTube videos into structured paragraphs and sentences in less than a minute.
- **Live integration**: Generate a summary or extract information from a YouTube video whenever, wherever.
- **Virtually complete automation**: All that needs to be done is to add the video URL and describe what you want to know from the video.
- **Presentation**: You can ask for a specific structure or tone to better help you understand or study the contents of the video.

## How It Works

1. **Smart form interface**: A simple n8n form captures the video URL and a description of what is to be extracted. Designed for rapid and repeated completion anywhere and anytime.
2. **Description check**: JavaScript determines whether the description was filled in or left empty. If it was left empty, the default prompt is: "Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary." If it is filled in, that input is used to describe what information to extract from the video.
3. **HTTP request**: We use the Gemini API, specifically the video understanding endpoint, and make a POST request passing the video URL and the description of what information to extract.

## Setup Instructions

HTTP request setup:

1. Sign up for a Google Cloud account, join the Developer Program, and get your Gemini API key.
2. Get the curl command for the Gemini video understanding API.

The video understanding relies on the inputs from the form, code, and HTTP request nodes, so correct mapping is essential for the workflow to function correctly. Feel free to reach out for additional help or clarification at my Gmail: terflix45@gmail.com, and I'll get back to you as soon as I can.

## Setup Steps

1. **Code node setup**: The code node is used as a filter to ensure a description prompt is always passed on. Use JavaScript to that effect (note that the field key must match your form field's label exactly):

```javascript
// Ensure a description prompt is always passed on.
// The key below must match the form field label exactly.
const field = 'What do you want?';
for (const item of $input.all()) {
  if (!item.json[field] || item.json[field].trim() === "") {
    item.json[field] = "Please be as descriptive as possible about the contents being spoken of in this video after giving a detailed summary.";
  }
}
return $input.all();
```

2. **HTTP request**: To use Gemini video understanding, you'll need your Gemini API key (a sketch of the resulting request follows below).
   - Go to https://ai.google.dev/gemini-api/docs/video-understanding#youtube. This link takes you directly to the snippet. Select the REST programming language, copy the curl command, and paste it into the HTTP Request node.
   - Replace "Please summarize the video in 3 sentences." with the code node's output, which should be either the default description or the one entered by the user (the second output field variable).
   - Replace "https://www.youtube.com/watch?v=9hE5-98ZeCg" with the n8n form node's first output field, which should be the YouTube video URL variable.
   - Replace $GEMINI_API_KEY with your API key.

3. **Redirect**: Use an n8n form node with page type "Final Ending" to redirect the user to the initial n8n form for another analysis, or to a preferred destination.
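For orientation, the request the HTTP Request node ends up sending looks roughly like this. This is a sketch based on the docs snippet; the exact model name in the URL depends on which snippet you copy:

```
POST https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=$GEMINI_API_KEY
Content-Type: application/json

{
  "contents": [{
    "parts": [
      { "text": "<description prompt from the code node>" },
      { "file_data": { "file_uri": "<YouTube URL from the form node>" } }
    ]
  }]
}
```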
by Dhruv Dalsaniya
## Description

This n8n workflow automates a Discord bot to fetch messages from a specified channel and send AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow leverages Redis for memory persistence, ensuring that conversation history is maintained even if the workflow restarts, providing a seamless user experience.

## How It Works

- The bot listens for new messages in a specified Discord channel.
- It sends the messages to an AI model for response generation.
- The AI-generated reply is posted as a thread under the original message.
- The bot runs on an Ubuntu server and is managed using PM2 for uptime stability.
- The Discord bot (a Python script) acts as the bridge, capturing messages from Discord and sending them to the n8n webhook. The n8n workflow then processes these messages, interacts with the AI model, and sends the AI's response back to Discord via the bot.

## Prerequisites to Host the Bot

- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history so the AI can maintain context across multiple interactions. This is crucial for coherent and continuous conversations.

## Set Up Steps

### 1️⃣ Create a Discord Bot

1. Go to the Discord Developer Portal.
2. Click "New Application", enter a name, and create it.
3. Navigate to Bot > Reset Token, then copy the Bot Token.
4. Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
5. Under OAuth2 > URL Generator, select the bot scope and the required permissions.
6. Copy the generated URL, open it in a browser, select your server, and click Authorize.
### 2️⃣ Deploy the Bot on Pella

Create a new folder discord-bot and navigate into it.

Create and configure an .env file to store your bot token (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

Create a file main.py, copy the code below into it, and save it:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()
TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True          # Enable message events
intents.guilds = True
intents.message_content = True   # Required to read message content

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f'Logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore the bot's own messages

    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            fetched_message = await message.channel.fetch_message(message.id)  # Ensure correct fetching
            payload = {
                "channel_id": str(fetched_message.channel.id),  # Ensure it's a string
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),   # Ensure proper formatting
                "message_id": str(fetched_message.id),          # Ensure the ID is a string
                "user_id": str(fetched_message.author.id)       # Ensure the user ID is also a string
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)
            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")

client.run(TOKEN)
```

Create requirements.txt and copy:

```
discord
python-dotenv
```

### 3️⃣ Follow the Video to Set Up the Bot to Run 24/7

Tutorial: https://www.youtube.com/watch?v=rNnK3XlUtYU

Note: the free plan expires after 24 hours, so please opt for a paid plan on Pella to keep your bot running.

### 4️⃣ n8n Workflow Configuration

The n8n workflow consists of the following nodes:

- **Get Discord Messages (Webhook)**: The entry point for messages from the Discord bot. It receives the channel_id, chat_message, timestamp, message_id, and user_id from Discord when a new message is posted in the configured channel. Its webhook path is /getmessage and it expects a POST request.
- **Chat Agent (Langchain Agent)**: Processes the incoming Discord message (chat_message). It is configured as a conversational agent, integrating the language model and memory to generate an appropriate response, with a prompt that keeps the reply concise, under 1800 characters.
- **OpenAI -4o-mini (Langchain Language Model)**: Connects to the OpenAI API and uses the gpt-4o-mini-2024-07-18 model for generating AI responses. This is the core AI component of the workflow.
- **Message History (Redis Chat Memory)**: Manages the conversation history using Redis. It stores and retrieves chat messages, ensuring the Chat Agent maintains context for each user based on their user_id. This is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool)**: Provides a calculator tool the AI agent can use if a mathematical calculation is required within the conversation, expanding the AI's capabilities beyond text generation.
- **Response fromAI (Discord)**: Sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (message_id) in the specified channel_id.
- **Sticky Notes (Sticky Note1 through Sticky Note5, plus Sticky Note)**: Informational nodes providing instructions, code snippets for the Discord bot, and setup guidance, covering the .env file, requirements.txt, the Python bot code, and general recommendations for channel configuration and adding tools.

### 5️⃣ Setting Up Redis

1. Choose a Redis hosting provider: you can use a cloud provider like Redis Labs or Aiven, or set up your own Redis instance on a VPS.
2. Obtain the Redis connection details: once your instance is set up, you will need the host, port, and password (if applicable).
3. Configure the n8n Redis nodes: in your n8n workflow, configure the "Message History" node with your Redis connection details. Ensure the Redis credential ✅ redis-for-n8n is properly set up with your Redis instance details (host, port, password).

### 6️⃣ Customizing the Template

- **AI model**: You can easily swap out the "OpenAI -4o-mini" node for any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Ensure the new language model node is connected to the ai_languageModel input of the "Chat Agent" node.
- **Agent prompt**: Modify the text parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust the response length.
- **Additional tools**: The "Calculator" node is an example of an AI tool. You can add more Langchain tool nodes (e.g., search, data lookup) and connect them to the ai_tool input of the "Chat Agent" node to extend the AI's capabilities. Refer to "Sticky Note5" in the workflow for a reminder.
- **Channel filtering**: Adjust the LISTEN_CHANNELS list in the main.py file of your Discord bot to include or exclude the specific Discord channel IDs where the bot should listen for messages.
- **Thread management**: The "Response fromAI" node can be modified to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (message_reference).

### 7️⃣ Testing Instructions

1. Start the Discord bot: ensure your main.py script is running on Pella.
2. Activate the n8n workflow: make sure your workflow is active and listening for webhooks.
3. Send a message in Discord: go to one of the LISTEN_CHANNELS in your Discord server and send a message.
4. Verify the response: the bot should capture the message, send it to n8n, receive an AI-generated response, and post it as a thread under your original message.
5. Check Redis: verify that the conversation history is being stored and updated correctly in your Redis instance. Look for keys related to user IDs.

✅ Now your bot is running in the background! 🚀
by Derek Cheung
## Purpose of workflow

The purpose of this workflow is to automate scraping of a website, transform the result into a structured format, and load it directly into a Google Sheets spreadsheet.

## How it works

- **Web scraping**: Uses the Jina AI service to scrape website data and convert it into LLM-friendly text.
- **Information extraction**: Employs an AI node to extract specific book details (title, price, availability, image URL, product URL) from the scraped data.
- **Data splitting**: Splits the extracted information into individual book entries.
- **Google Sheets integration**: Automatically populates a Google Sheets spreadsheet with the structured book data.

## Step by step setup

1. **Set up the Jina AI service**: Sign up for a Jina AI account and obtain an API key.
2. **Configure the HTTP Request node**: Enter the Jina AI URL with the target website, and add the API key to the request headers for authentication.
3. **Set up the Information Extractor node**: Use Claude AI to generate a JSON schema for data extraction: upload a screenshot of the target website to Claude AI, ask it to suggest a JSON schema for extracting the required information, and copy the generated schema into the Information Extractor node (an example of what such a schema can look like is sketched after this list).
4. **Configure the Split node**: Set it up to separate the extracted data into individual book entries.
5. **Set up the Google Sheets node**: Create a Google Sheets spreadsheet with columns for title, price, availability, image URL, and product URL, and configure the node to map the extracted data to the appropriate columns.
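For illustration, a schema Claude might produce for the book fields above could look like the following. This is an assumed example, not the template's exact schema; adapt the field names and required fields to your target site:

```json
{
  "type": "object",
  "properties": {
    "books": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "title": { "type": "string", "description": "Book title" },
          "price": { "type": "string", "description": "Listed price, including currency symbol" },
          "availability": { "type": "string", "description": "Stock status, e.g. 'In stock'" },
          "image_url": { "type": "string", "description": "Absolute URL of the cover image" },
          "product_url": { "type": "string", "description": "Absolute URL of the product page" }
        },
        "required": ["title", "price"]
      }
    }
  }
}
```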
by Angel Menendez
## Who is this for?

This subworkflow is ideal for developers and automation builders working with UniPile and n8n to automate message enrichment and LinkedIn lead routing.

## What problem is this workflow solving?

UniPile separates personal and organization accounts into two different API endpoints. This flow handles both intelligently, so you're not missing sender context due to API quirks or bad assumptions.

## What this workflow does

This subworkflow is used by:

- **LinkedIn Auto Message Router with Request Detection**
- **LinkedIn AI Response Generator with Slack Approval**

It receives a message sender ID and tries to enrich it using UniPile's /people and /organizations endpoints. It returns a clean, consistent profile object regardless of which source was used (sketched below).

## Setup

1. Generate a UniPile API token and save it in your n8n credentials.
2. Make sure this subworkflow is triggered correctly by your parent flows.
3. Test both people and organization lookups to verify that responses are normalized.

## How to customize this workflow to your needs

- Add a secondary enrichment layer using tools like Clearbit or FullContact.
- Customize the fallback logic or error handling.
- Expand the returned data for more AI context or user routing (e.g., job title, region).
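A minimal sketch of what the normalization step can look like in an n8n Code node. The field names below are illustrative assumptions, not UniPile's documented response shape; map them to the actual fields your lookups return:

```javascript
// Normalize a person or organization lookup into one consistent profile object.
// The source field names here are assumed for illustration; adjust them to the
// real UniPile response fields.
const record = $input.first().json;
const isOrg = Boolean(record.organization_id); // assumed discriminator field

return [{
  json: {
    id: isOrg ? record.organization_id : record.person_id,
    name: isOrg ? record.company_name : record.full_name,
    type: isOrg ? "organization" : "person",
    headline: record.headline ?? null,
    profileUrl: record.profile_url ?? null,
  },
}];
```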
by Miquel Colomer
This n8n workflow template automates the process of collecting and delivering the "Top Deals of the Day" from MediaMarkt, tailored to user preferences. By combining user-submitted forms, Bright Data web scraping, GPT-4o-mini deal generation, and email delivery, this workflow sends personalized product recommendations straight to a user's inbox.

> ⚠️ Note: This workflow uses community nodes (Bright Data and Document Generator), which only work on *self-hosted n8n instances*.

## 🚀 What It Does

- Collects user preferences via a form (categories + email)
- Scrapes MediaMarkt's deals page using Bright Data
- Uses GPT-4o-mini (OpenAI) to recommend top deals
- Generates a structured HTML email using a template
- Sends the personalized deals directly via email

## 🧩 Community Node Integration

We created and used the following community nodes:

- **Bright Data** – To scrape MediaMarkt deals using proxy-based scraping
- **Document Generator** – To generate a templated HTML document from deal data

These nodes are not available in n8n Cloud and require self-hosted n8n.

## 🛠️ Step-by-Step Setup

1. **Install community nodes**: Make sure you're on a self-hosted n8n instance, then install `n8n-nodes-brightdata` and `n8n-nodes-document-generator`.
2. **Configure credentials**: Bright Data API key (proxy + scraping setup), OpenAI API key (GPT-4o-mini access), and SMTP credentials for sending emails.
3. **Customize the form**: Adapt the form node to collect the desired categories and email addresses. Typical categories include appliances, phones, laptops, etc.
4. **Design your HTML template**: In the Document Generator node, you can tweak the HTML/CSS to change how deals appear in the final email.
5. **Test the workflow**: Submit the form with test data and check that the entire flow—from scraping to email—executes as expected.

## 🧠 How It Works: Workflow Overview

1. **User interaction via form**: Users select product categories and enter their email. This triggers the workflow.
2. **Data extraction via Bright Data**: Bright Data scrapes the MediaMarkt offers page and returns HTML content.
3. **HTML parsing**: Key elements like product names, prices, and links are extracted for processing.
4. **GPT-4o-mini recommendation generation**: The extracted data is sent to OpenAI (GPT-4o-mini), which filters, ranks, and enhances deals based on the user's preferences.
5. **Data structuring & split**: The result is split into individual deal items to be formatted.
6. **HTML document creation**: Document Generator populates a clean HTML template with the top recommended deals.
7. **Email delivery**: The final document is emailed via SMTP to the user with a friendly message.

## 📨 Final Output

Users receive a custom HTML email featuring a curated list of top MediaMarkt deals based on their selected categories.

## 🔐 Credentials Used

- **Bright Data API** – Web scraping with proxy support
- **OpenAI API** – Generating personalized recommendations
- **SMTP** – Sending personalized deal emails

## ✨ Customization Tips

- **Change the data source**: You can adapt this to scrape other e-commerce sites.
- **Update the email template**: Make it match your branding or include images.
- **Extend the form**: Add preferences like price range or specific brands.
- **Add scheduling**: Use Cron to run the workflow daily or weekly.

## ❓Questions?

Template and node created by Miquel Colomer and n8nhackers.com. Need help customizing or deploying? Contact us for consulting and support.
by bangank36
## Overview

This workflow retrieves all blog and event collection items from a Squarespace site and saves them into a Google Sheets spreadsheet. It uses pagination to fetch 20 items per request, ensuring all content is collected efficiently (a sketch of the pagination loop follows below).

## How It Works

- The workflow queries your Squarespace blog and event collections.
- It fetches data in paginated batches (20 items per page).
- The retrieved data is formatted and inserted into Google Sheets.
- The workflow runs on demand or on a schedule, keeping your data up to date.

## Requirements

Credentials: to use this template, you need:

- Your Squarespace collection URL
- Google Sheets API credentials

Google Sheets setup: use this sample Google Sheets template to get started quickly.

## Who Is This For?

This template is designed for:

- Bloggers looking to manage and analyze content externally.
- Businesses and marketers tracking content performance.
- Anyone who needs an automated way to extract Squarespace blog and event data.

## Explore More Templates

Check out my other n8n templates: 👉 n8n.io/creators/bangank36
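A rough sketch of the pagination loop, assuming the collection exposes Squarespace's `?format=json` output with a `pagination.nextPageOffset` field (check your own site's JSON output to confirm the shape):

```javascript
// Fetch all items from a Squarespace collection, 20 per page.
const BASE_URL = "https://example.squarespace.com/blog"; // placeholder collection URL
const items = [];
let offset = null;

do {
  const url = `${BASE_URL}?format=json${offset ? `&offset=${offset}` : ""}`;
  const page = await fetch(url).then(r => r.json());
  items.push(...page.items); // each page carries up to 20 items
  offset = page.pagination?.nextPage ? page.pagination.nextPageOffset : null;
} while (offset);

console.log(`Fetched ${items.length} items`);
```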
by n8n Team
This workflow syncs Discord scheduled events to Google Calendar. On a specified schedule, a request is made to Discord's API to get the scheduled events on a particular server. Only events that have not yet been created in Google Calendar, or that have recently been updated, are sent on.

## Prerequisites

- Discord account and Discord credentials.
- Google account and Google credentials.

## How it works

1. Triggers on the On schedule node.
2. Gets the scheduled events from Discord.
3. The IDs of the Discord scheduled events are used to look up the events in Google Calendar, since the same IDs are used when the Google Calendar events are created. This makes it possible to determine which events are new or have been updated (a sketch of that comparison follows below).
4. The new or updated events are created or updated in Google Calendar.
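An illustrative sketch of the create-vs-update decision, not the template's exact nodes. It assumes earlier steps produced the Discord events and a map of Google Calendar events keyed by ID; the `lastUpdated` field on the Discord side is an assumed shape, since the comparison detail lives inside the workflow:

```javascript
// Decide whether each Discord event needs to be created or updated in Google Calendar.
function classifyEvents(discordEvents, calendarEventsById) {
  const toCreate = [];
  const toUpdate = [];
  for (const event of discordEvents) {
    const existing = calendarEventsById.get(event.id); // same ID on both sides
    if (!existing) {
      toCreate.push(event); // never synced before
    } else if (new Date(existing.updated) < new Date(event.lastUpdated)) {
      toUpdate.push(event); // Discord copy is newer than the calendar copy
    }
  }
  return { toCreate, toUpdate };
}
```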
by hani safaei
This template helps anyone track how often their website appears in Google's AI Overview, a growing part of search results that can't currently be tracked using traditional SEO tools.

With this workflow, users can:

- Input a list of keywords (from Google Search Console or manual research).
- Use SerpApi to pull Google search results.
- Extract the AI Overview content and its list of sources.
- Map that information into a structured Google Sheet, including whether your site is listed among those sources.

Setup is straightforward and fully automated, but you'll need:

- A SerpApi key
- A connected Google Sheets account

## Who is this for?

This workflow is designed for SEO professionals, digital marketers, and site owners who want to track their website's visibility in Google AI Overviews.

## What problem does it solve?

AI Overviews are rapidly becoming more common in Google search results. However, there's no tool (yet) that tells you whether your website is appearing in those answers. This is a blind spot for SEO. This workflow lets you check your site's presence in AI Overviews at scale.

## What does the workflow do?

The workflow:

1. Takes a list of target keywords (exported from GSC or elsewhere)
2. Uses SerpApi to get search results from Google
3. Extracts the AI Overview block and its sources
4. Checks if your domain is among them (a sketch of that check follows below)
5. Saves all results into a Google Sheet

The final Google Sheet will contain: Keyword | AI Overview Exists | List of Sources | Is my domain listed

## Setup

You'll need:

- A SerpApi API key
- A Google Sheet with your list of keywords
- A connected Google Sheets account in n8n

## How to customize this workflow

- Change the list of keywords (pull from GSC or edit the sheet manually).
- Replace the placeholder domain with your own.
- Adjust the Google Sheet column mapping as needed.
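A sketch of the domain check in an n8n Code node. The `ai_overview` field names are assumptions about SerpApi's response shape; verify them against an actual response for your account:

```javascript
// For each SerpApi result item, record whether an AI Overview exists
// and whether our domain appears among its sources.
const MY_DOMAIN = "example.com"; // replace with your own domain

return $input.all().map(item => {
  const overview = item.json.ai_overview; // absent when no AI Overview is shown
  const sources = overview?.references?.map(r => r.link) ?? [];
  return {
    json: {
      keyword: item.json.search_parameters?.q,
      aiOverviewExists: Boolean(overview),
      sources,
      isMyDomainListed: sources.some(url => url.includes(MY_DOMAIN)),
    },
  };
});
```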
by damo
## Overview

This workflow lets users generate AI music using the KIE.ai API integrated with the Suno V3.5 model. It provides a simple form interface for inputting parameters like music prompts, styles, and titles. The system automatically submits the request to the API, monitors the generation status in real time until completion, and retrieves the final music output. This is perfect for musicians, content creators, or developers looking to automate custom music creation, with support for various modes and intelligent generation.

## Prerequisites

- A KIE.ai account and API key: create an account at KIE.ai and obtain your API key.
- An active n8n instance (self-hosted or cloud-based) with support for HTTP requests and form submissions.
- Familiarity with AI music prompts to optimize results, such as describing mood, instruments, and rhythm.

## Setup Instructions

1. **Get an API key**: Sign up at KIE.ai and generate your API key. Keep it secure and input it in the form; do not disclose it to others.
2. **Import the workflow**: Copy the JSON from this template and import it into your n8n editor.
3. **Configure the form**: In the form node, set fields for:
   - prompt: Describe the music content (e.g., "A calm and relaxing piano track with soft melodies").
   - style: Specify the genre (e.g., "Classical", "Jazz", "Pop").
   - title: Provide a title for the generated music (max 80 characters).
   - api_key: Your KIE.ai key.
4. **Test the workflow**: Click "Execute Workflow" in n8n to activate the form. Access the form URL, fill in the parameters, and submit. The workflow sends a POST request to the API, waits and polls every 10 seconds for status updates, and displays the music file once ready (a sketch of this submit-and-poll pattern follows below).
5. **View results**: The output node formats the results, showing playable music files.

## Customization Guidance

- **Refine prompts**: For better results, include detailed descriptions like emotions, rhythm, instruments, or lyrics. Example: "A peaceful piano meditation track with gentle waves in the background."
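A hypothetical sketch of the submit-and-poll pattern the workflow implements. The endpoint paths and response fields below are placeholders, not KIE.ai's documented API; consult their docs for the real values:

```javascript
// Submit a generation request, then poll every 10 seconds until it completes.
const API_KEY = "your-kie-api-key";
const BASE = "https://api.kie.ai"; // placeholder base URL

async function generateMusic(prompt, style, title) {
  // 1. Submit the generation request.
  const submit = await fetch(`${BASE}/generate`, {
    method: "POST",
    headers: { "Authorization": `Bearer ${API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, style, title }),
  }).then(r => r.json());

  // 2. Poll for status, as the workflow's wait-and-check loop does.
  while (true) {
    await new Promise(resolve => setTimeout(resolve, 10_000));
    const status = await fetch(`${BASE}/status/${submit.taskId}`, {
      headers: { "Authorization": `Bearer ${API_KEY}` },
    }).then(r => r.json());
    if (status.state === "complete") return status.audioUrl; // assumed fields
    if (status.state === "failed") throw new Error("Generation failed");
  }
}
```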