by Dr. Firas
💥 Generate AI Music & Publish to YouTube Automatically with Blotato

This workflow automatically generates AI music, creates a thumbnail, converts everything into a video, and publishes it directly to YouTube — fully automated. Perfect for building faceless YouTube music channels, AI content automation, or passive content generation pipelines.

Who is this for?
This template is ideal for:
- YouTube creators building AI music channels
- Content creators producing faceless videos
- Automation enthusiasts using n8n for content pipelines
- Agencies generating bulk YouTube content
- Anyone wanting to automate music creation and publishing

Whether you're creating lofi music, ambient tracks, meditation sounds, or background music, this workflow removes the manual work entirely.

What problem is this workflow solving?
Creating AI music videos typically involves multiple manual steps:
- Generating music
- Creating thumbnails
- Converting audio into video
- Uploading to YouTube
- Writing titles and metadata

This process is time-consuming and repetitive. This workflow automates everything into a single end-to-end pipeline, allowing you to:
- Generate unlimited AI music
- Automatically create thumbnails
- Convert audio into video
- Publish directly to YouTube

What this workflow does
1. User submits a music request via form
2. AI generates an optimized music prompt
3. ElevenLabs generates the AI music
4. AI generates a thumbnail prompt
5. AtlasCloud generates the thumbnail image
6. Cloudinary hosts the generated assets
7. Shotstack creates the video (image + audio; see the sketch below)
8. Blotato uploads the video to YouTube automatically
9. An SEO-optimized YouTube title is generated automatically

Final result: an AI-generated music video automatically published to YouTube. The entire process runs fully automated.

Setup
To use this workflow, configure the following credentials:
- OpenAI API Key — prompt generation
- ElevenLabs API Key — music generation
- AtlasCloud API Key — thumbnail generation
- Cloudinary account — media hosting
- Shotstack API Key — video creation
- Blotato API Key — YouTube publishing

Community Node Requirement
This workflow uses the Blotato community node.

🎥 Watch This Tutorial

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n
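For orientation, the Shotstack step (7) boils down to a single render request that lays the generated thumbnail over the audio track. Below is a minimal TypeScript sketch, assuming Shotstack's edit API; the endpoint, clip length, and asset URLs are placeholders, so verify them against Shotstack's current documentation.

```typescript
// Hypothetical sketch of the Shotstack render call (image + audio -> video).
// Endpoint and field names are assumptions based on Shotstack's edit API.
const renderVideo = async (imageUrl: string, audioUrl: string, apiKey: string) => {
  const edit = {
    timeline: {
      soundtrack: { src: audioUrl },               // the AI-generated track
      tracks: [{
        clips: [{
          asset: { type: "image", src: imageUrl }, // the generated thumbnail
          start: 0,
          length: 180,                             // assumed 3-minute track
        }],
      }],
    },
    output: { format: "mp4", resolution: "hd" },
  };

  const res = await fetch("https://api.shotstack.io/edit/v1/render", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-api-key": apiKey },
    body: JSON.stringify(edit),
  });
  return res.json(); // contains the render id to poll for the finished video
};
```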
by chaufnet
Webhook that reports phishing websites through Mailgun to Steam and Cloudflare (if the domain is on Cloudflare; the check is sketched below).

Setup:
- Set the credentials for the Webhook and Mailgun nodes.
- Set the "from" email address for Mailgun.

This assumes n8n is running in its Docker image, where bind-tools is not available by default but is installable.
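The Cloudflare check is essentially a nameserver lookup (the workflow shells out to dig from bind-tools for this). As an illustration only, the same logic as a standalone Node/TypeScript sketch:

```typescript
// Illustration of the "is this domain on Cloudflare?" check the workflow
// performs with dig. Standalone Node.js sketch; inside n8n's Code node,
// built-in modules must be allowed via NODE_FUNCTION_ALLOW_BUILTIN.
import { promises as dns } from "node:dns";

async function isOnCloudflare(domain: string): Promise<boolean> {
  try {
    const nameservers = await dns.resolveNs(domain);
    // Cloudflare-managed zones use *.ns.cloudflare.com nameservers.
    return nameservers.some((ns) => ns.toLowerCase().endsWith(".ns.cloudflare.com"));
  } catch {
    return false; // domain did not resolve; treat as not on Cloudflare
  }
}

// Example: isOnCloudflare("example.com").then(console.log);
```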
by Raquel Giugliano
This workflow automates currency rate uploads into SAP Business One via the Service Layer, using flexible input sources such as JSON (API), SQL Server, Google Sheets, or manual values. It leverages logic branching, AI validation, and logging for complete control and traceability.

++⚙️ HOW IT WORKS:++

🔹 1. Receive Data via Webhook
The workflow listens on the endpoint /formulario-datos via HTTP POST. The request body should include:
- origen: one of JSON, SQL, GoogleSheets, or Manual
Depending on the value, the flow branches accordingly.

🔹 2. Authenticate with SAP Business One
A POST request is sent to SAP B1's Login endpoint. A session cookie (B1SESSION) is retrieved and used in all subsequent API calls.

🔹 3. Switch by Origin
The flow branches into four processing paths based on origen:
- JSON: The payload is normalized using OpenAI to extract an array of rates. Each rate is sent to SAP individually after parsing.
- SQL: The SQL query provided in the payload is executed on a connected Microsoft SQL Server. The results are checked by AI to validate the date format. If valid, rates are sent to SAP.
- GoogleSheets: Rates are pulled from a connected spreadsheet. Each entry is sent to SAP in sequence.
- Manual: Uses currency, rate, and rateDate directly from the webhook payload and sends the result directly to SAP.

🔹 4. AI-Powered Enhancements (optional but enabled)
- Normalize JSON: Uses OpenAI (LangChain node) to convert any messy structure into a uniform array under the key rate.
- Date formatting: Another OpenAI call ensures RateDate is in yyyyMMdd format (required by SAP), converting from ISO, timestamp, or other formats.

🔹 5. Send to SAP Business One (Service Layer)
All paths send a POST request to /SBOBobService_SetCurrencyRate with a payload such as:

```json
{ "Currency": "USD", "Rate": "0.92", "RateDate": "20250612" }
```

(A combined sketch of the login and rate-setting calls appears below.)

🔹 6. Log Results
All success/failure results are appended to a Google Sheets log (LOGS_N8N). The log includes method, URL, sent payload, status code, and message.

++🛠 SETUP STEPS:++

1️⃣ Create Required Credentials
Go to Credentials > + New Credential and configure:
- SAP Business One (Service Layer): HTTP Request Auth or Token; Base URL https://<your-host>:50000/b1s/v1/; provide Username, Password, and CompanyDB via variables or fields
- Google Sheets: OAuth2 connection to a Google account with access
- Microsoft SQL Server: SQL login credentials and host
- OpenAI: API key with access to models like GPT-4o

2️⃣ Environment Variables (recommended)
Set these variables in n8n → Settings → Variables:

```
SAP_URL=https://<host>:50000/b1s/v1/
SAP_USER=your_username
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB
```

3️⃣ Prepare Google Sheets
- Sheet 1: RATE (for loading the data). Columns: Currency, Rate, RateDate
- Sheet 2: LOGS_N8N (for logging successes and failures). Columns: workflow, method, url, json, status_code, message

4️⃣ Activate and Test
Deploy the webhook and grab the URL.

++✅ BONUS++
- Built-in AI assistance for input validation and structure
- Logs all results for compliance and audit
- Flexible integration paths: perfect for hybrid or transitional systems
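A minimal TypeScript sketch of steps 2 and 5 chained together, assuming the standard Service Layer login flow; the payload fields come straight from the example above, while TLS handling (port 50000 often serves a self-signed certificate) and error handling are omitted:

```typescript
// Sketch of the Service Layer calls the workflow chains together:
// login, then push one rate. Environment variable names follow the
// setup section above.
const base = process.env.SAP_URL!; // e.g. https://<host>:50000/b1s/v1/

async function pushRate(currency: string, rate: string, rateDate: string) {
  // 1. Login: returns a B1SESSION cookie used on every subsequent call
  const login = await fetch(`${base}Login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      CompanyDB: process.env.SAP_COMPANY_DB,
      UserName: process.env.SAP_USER,
      Password: process.env.SAP_PASSWORD,
    }),
  });
  const cookie = login.headers.get("set-cookie") ?? "";

  // 2. Set the currency rate: RateDate must already be yyyyMMdd
  return fetch(`${base}SBOBobService_SetCurrencyRate`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Cookie: cookie },
    body: JSON.stringify({ Currency: currency, Rate: rate, RateDate: rateDate }),
  });
}

// pushRate("USD", "0.92", "20250612");
```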
by Harshil Agrawal
This workflow allows you to store the output of a phantom in Airtable. It uses the LinkedIn Profile Scraper phantom. Configure and launch this phantom from your Phantombuster account before executing the workflow.

The workflow uses the following nodes:
- Phantombuster node: Gets the output of the LinkedIn Profile Scraper phantom that ran earlier. You can select a different phantom from the Agent dropdown list, but make sure to configure the workflow accordingly.
- Set node: Sets the data for the workflow. The data set in this node is used by the following nodes. Modify this node to fit your use case (see the sketch below for a Code-node equivalent).
- Airtable node: Appends the data to a table in Airtable. Based on your use case, you can replace this node with any other node. Instead of storing the data in Airtable, you could store it in a database or Google Sheet, or send it as an email using the Send Email node, Gmail node, or Microsoft Outlook node.
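If you'd rather use a Code node than the Set node, the shaping step is just a field mapping like this sketch; the phantom output property names (fullName, headline, linkedinProfileUrl) are illustrative assumptions, so match them to your phantom's actual output:

```typescript
// Hypothetical Code-node equivalent of the Set step: map the fields the
// phantom returns to the Airtable column names.
return items.map((item) => ({
  json: {
    Name: item.json.fullName,                 // assumed phantom field
    Headline: item.json.headline,             // assumed phantom field
    "Profile URL": item.json.linkedinProfileUrl,
  },
}));
```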
by Marcel Claus-Ahrens
Instructions
This automation overlays a background image with another image, making it easy to add watermarks or logos. You can use it to watermark your images by overlaying them with a transparent version of your logo. If you'd like to place your logo in a specific corner, adjust the position of the overlay image in the Code node (see the sketch below).

How it Works
- Both images are downloaded, so we can process binary files (you can modify the source, though).
- We extract metadata, focusing on the dimensions of each image.
- The position of the overlay image is calculated (default: dead center of the background image).
- The two images are composited together.

Limitations and Optimisation Opportunities
- The overlay image must be the same size or smaller than the background image for proper alignment.
- The overlay image does not automatically scale to match the proportions of the background image.

Enjoy the workflow! ❤️
let the work flow — Workflow Automation & Development
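Here is a sketch of the position calculation the Code node performs, with optional corner pinning added for illustration; the variable names are assumptions, so adapt them to the metadata fields your nodes actually produce:

```typescript
// Sketch of the overlay position math. Centers the overlay by default;
// the corner branches show how to pin it elsewhere.
const margin = 16; // px gap when pinning to a corner

function overlayPosition(
  bg: { width: number; height: number },
  overlay: { width: number; height: number },
  corner?: "top-left" | "bottom-right",
): { x: number; y: number } {
  if (corner === "top-left") return { x: margin, y: margin };
  if (corner === "bottom-right") {
    return {
      x: bg.width - overlay.width - margin,
      y: bg.height - overlay.height - margin,
    };
  }
  // default: dead center of the background image
  return {
    x: Math.round((bg.width - overlay.width) / 2),
    y: Math.round((bg.height - overlay.height) / 2),
  };
}
```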
by n8n Team
Who this template is for
This template is for developers or teams who need to convert CSV data into JSON format through an API endpoint, with support for both file uploads and raw CSV text input.

Use case
Converting CSV files or raw CSV text into JSON via a webhook endpoint, with error handling and notifications. This is particularly useful when you need to transform CSV data into JSON as part of a larger automation or integration process.

How this workflow works
1. Receives POST requests through a webhook endpoint at /tool/csv-to-json
2. Uses a Switch node to handle different input types:
   - File uploads (binary data)
   - Plain text CSV data
   - JSON format data
3. Processes the CSV data:
   - For files: uses the Extract From File node
   - For raw text: converts the text to CSV using a custom Code node that handles both comma and semicolon delimiters (see the sketch below)
4. Aggregates the processed data and returns:
   - Success response (200): converted JSON data
   - Error response (500): error message with details
5. In case of errors, sends notifications to a Slack error channel with execution details and a link to debug

Set up steps
1. Configure the webhook endpoint by deploying the workflow
2. Set up Slack integration for error notifications:
   - Update the Slack channel ID (currently set to "C0832GBAEN4")
   - Configure OAuth2 authentication for Slack
3. Test the endpoint using curl for file uploads:

```bash
curl -X POST "https://yoururl.com/webhook-test/tool/csv-to-json" \
  -H "Content-Type: text/csv" \
  --data-binary @path/to/your/file.csv
```

Or send raw CSV data with the text/plain content type.
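The raw-text branch can be reduced to a small delimiter-sniffing parser. A minimal sketch, assuming the webhook exposes the raw CSV on json.body (the exact property depends on your webhook configuration) and ignoring quoted fields containing delimiters:

```typescript
// Sketch of the raw-text branch in an n8n Code node: detect comma vs.
// semicolon, then split rows into objects keyed by the header row.
const text = $input.first().json.body as string; // raw CSV from the webhook
const delimiter = text.split(";").length > text.split(",").length ? ";" : ",";

const [headerLine, ...rows] = text.trim().split(/\r?\n/);
const headers = headerLine.split(delimiter).map((h) => h.trim());

return rows.map((row) => {
  const values = row.split(delimiter);
  const json = Object.fromEntries(
    headers.map((h, i) => [h, values[i]?.trim() ?? ""]),
  );
  return { json };
});
```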
by Ludwig
Using PostBin to Test Webhooks Without Changing WEBHOOK_URL

How it Works
Many new n8n users struggle with testing webhooks when running n8n on localhost, as external services cannot reach localhost. This workflow introduces a technique using PostBin, which provides a temporary, publicly accessible URL to receive webhook requests. The workflow:
- Generates a temporary webhook endpoint via PostBin (see the API sketch below).
- Uses this endpoint in place of localhost to test webhooks.
- Captures and displays the incoming webhook request data.
- Enables debugging and iterating without modifying the WEBHOOK_URL environment variable.

Set Up Steps
Estimated time: ~5–10 minutes
1. Create a PostBin instance to generate a publicly accessible webhook URL.
2. Copy the PostBin URL and use it as the webhook destination in n8n.
3. Trigger the webhook from an external service or manually.
4. Inspect the request payload in PostBin to verify data reception.

(EXAMPLE) Using PostBin for Webhook Testing in a BambooHR Integration

How it Works
In this example, we apply the PostBin technique to a BambooHR integration. Instead of manually configuring a webhook in BambooHR, this workflow automates webhook registration using the BambooHR API. The workflow:
- Uses the BambooHR API to programmatically register the PostBin URL as a webhook.
- Retrieves the most recent webhook calls made by BambooHR to the PostBin URL.
- (Optional) Sends a personalized Slack message for new employees using OpenAI.

Set Up Steps
Estimated time: ~15–20 minutes
1. Set up PostBin using the steps from the first section.
2. Log into BambooHR to generate an API key for authentication.
3. Run the workflow to register the PostBin URL as a webhook in BambooHR via the API.
4. Retrieve recent webhook calls from PostBin to validate data reception.
5. (Optional) Send a Slack notification using the processed data.
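For reference, the two PostBin calls behind the scenes look roughly like this; the endpoints reflect PostBin's public API as commonly documented, so verify them at postb.in (bins also expire after a short time):

```typescript
// Sketch of the PostBin calls the workflow relies on.
async function createBin(): Promise<string> {
  const res = await fetch("https://www.postb.in/api/bin", { method: "POST" });
  const { binId } = await res.json();
  return `https://www.postb.in/${binId}`; // use this as the webhook target
}

async function shiftRequest(binId: string) {
  // Returns (and removes) the oldest captured request, or null if empty.
  const res = await fetch(`https://www.postb.in/api/bin/${binId}/req/shift`);
  return res.ok ? res.json() : null;
}
```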
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically gathers and analyzes feature requests from multiple sources including support tickets, user forums, and feedback platforms to help prioritize product development. It saves you time by eliminating the need to manually monitor various channels and provides intelligent feature request analysis.

Overview
This workflow automatically scrapes support systems, user forums, social media, and feedback platforms to collect feature requests from customers. It uses Bright Data to access various platforms without being blocked and AI to intelligently categorize, prioritize, and analyze feature requests based on frequency and user impact.

Tools Used
- n8n: The automation platform that orchestrates the workflow
- Bright Data: For scraping support platforms and user forums without being blocked
- OpenAI: AI agent for intelligent feature request categorization and analysis
- Google Sheets: For storing feature requests and generating prioritization reports

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance
2. Configure Bright Data: Add your Bright Data credentials to the MCP Client node
3. Set Up OpenAI: Configure your OpenAI API credentials
4. Configure Google Sheets: Connect your Google Sheets account and set up your feature request tracking spreadsheet
5. Customize: Define feedback sources and feature request identification parameters

Use Cases
- Product Management: Prioritize roadmap items based on customer demand
- Development Teams: Understand which features users need most
- Customer Success: Track and respond to feature requests proactively
- Strategy Teams: Make data-driven decisions about product direction

Connect with Me
- Website: https://www.nofluff.online
- YouTube: https://www.youtube.com/@YaronBeen/videos
- LinkedIn: https://www.linkedin.com/in/yaronbeen/
- Get Bright Data: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

#n8n #automation #featurerequests #productmanagement #brightdata #webscraping #productdevelopment #n8nworkflow #workflow #nocode #roadmapping #customervoice #productinsights #featureanalysis #productfeedback #userresearch #productdata #featuretracking #productplanning #customerneeds #featurediscovery #productprioritization #featurebacklog #uservoice #productintelligence #developmentplanning #featuremonitoring #productdecisions #feedbackgathering #productautomation
by Joseph LePage
💡🌐 Essential Multipage Website Scraper with Jina.ai

Use responsibly and follow local rules and regulations.

This n8n workflow enables automated multi-page website scraping using Jina.ai's powerful web scraping capabilities, with seamless integration to Google Drive for content storage. Here's how it works:

Main Features
The workflow automatically scrapes multiple pages from a website's sitemap and saves each page's content as a separate Google Drive document.

Key Components

Input Configuration
- Starts with a sitemap URL (default: https://ai.pydantic.dev/sitemap.xml)
- Processes the sitemap to extract individual page URLs
- Includes filtering options to target specific topics or pages

Scraping Process
- Uses Jina.ai's web scraper to extract content from each URL (see the sketch below)
- Converts webpage content into clean markdown format
- Extracts page titles automatically for document naming

Storage Integration
- Creates individual Google Drive documents for each scraped page
- Names documents using the format "URL - Page Title"
- Saves content in markdown format for better readability

Usage Instructions
1. Set your target website's sitemap URL in the "Set Website URL" node
2. Configure the "Filter By Topics or Pages" node to select specific content
3. Adjust the "Limit" node (default: 20 pages) to control batch size
4. Connect your Google Drive account
5. Run the workflow to begin automated scraping

Additional Features
- Built-in rate limiting through the Wait node to prevent overloading servers
- Batch processing capability for handling large sitemaps
- The workflow requires no API key for Jina.ai, making it accessible for immediate use while maintaining responsible scraping practices
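The scraping call itself is a single keyless request: Jina's reader returns a page as clean markdown when the target URL is prefixed with https://r.jina.ai/. A minimal sketch with the same rate-limiting idea as the Wait node (the delay value is an assumption):

```typescript
// Sketch of the per-page scrape via Jina's keyless reader endpoint.
async function scrapePage(pageUrl: string): Promise<string> {
  const res = await fetch(`https://r.jina.ai/${pageUrl}`);
  return res.text(); // markdown version of the page
}

async function scrapeAll(urls: string[], delayMs = 2000): Promise<string[]> {
  const pages: string[] = [];
  for (const url of urls) {
    pages.push(await scrapePage(url));
    await new Promise((r) => setTimeout(r, delayMs)); // basic rate limiting
  }
  return pages;
}
```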
by Don Jayamaha Jr
This workflow powers the Binance Spot Market Quant AI Agent, acting as the Financial Market Analyst. It fuses real-time market structure data (price, volume, klines) with technical indicators across multiple timeframes (15m, 1h, 4h, 1d) and returns a structured trading outlook, perfect for intraday and swing traders who want actionable analysis in Telegram.

🔗 Requires the following sub-workflows to function:
• Binance SM 15min Indicators Tool
• Binance SM 1hour Indicators Tool
• Binance SM 4hour Indicators Tool
• Binance SM 1day Indicators Tool
• Binance SM Price/24hStats/Kline Tool (sketched below)

⚙️ How It Works
1. Triggered via webhook (typically by the Quant AI Agent).
2. Extracts the user's symbol + timeframe from the input (e.g., "DOGE outlook today").
3. Calls all linked sub-workflows to retrieve indicators + live price data.
4. Merges the data and formats a clean trading report using GPT-4o-mini.
5. Returns an HTML-formatted message suitable for Telegram delivery.

📥 Sample Input

```json
{ "message": "SOLUSDT", "sessionId": "654321123" }
```

✅ Telegram Output Format
📊 SOLUSDT Market Snapshot
💰 Price: $156.75
📉 24h Stats: High $160.10 | Low $149.00 | Volume: 1.1M SOL
🧪 4h Indicators:
• RSI: 58.2 (Neutral-Bullish)
• MACD: Crossover Up
• BB: Squeezing Near Upper Band
• ADX: 25.7 (Rising Trend)
📈 Resistance: $163
📉 Support: $148

🔍 Use Cases

| Scenario | Outcome |
| --- | --- |
| User asks for "BTC outlook" | Returns 1h + 4h + 1d indicators + live price + key levels |
| Telegram bot prompt: "DOGE now" | Returns short-term 15m + 1h analysis snapshot |
| Strategy trigger inside n8n | Enables other workflows to consume structured signal data |

🎥 Watch Tutorial

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or redistribution permitted.
🔗 For support: LinkedIn – Don Jayamaha
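For context, the live price/24h-stats half of the toolset maps onto Binance's public, keyless Spot REST endpoint. A sketch of one plausible implementation (the sub-workflow may fetch the same data via an HTTP Request node instead):

```typescript
// Sketch of a 24h-stats fetch from the public Binance Spot REST API.
// Field names (lastPrice, highPrice, ...) are from Binance's documented
// /api/v3/ticker/24hr response.
async function get24hStats(symbol: string) {
  const res = await fetch(`https://api.binance.com/api/v3/ticker/24hr?symbol=${symbol}`);
  const stats = await res.json();
  return {
    price: Number(stats.lastPrice),
    high: Number(stats.highPrice),
    low: Number(stats.lowPrice),
    volume: Number(stats.volume), // base-asset volume, e.g. SOL
  };
}

// get24hStats("SOLUSDT").then(console.log);
```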
by The { AI } rtist
This workflow is for working with text processing in n8n and serves as an introduction to how it works.

How-to, step by step: https://comunidad-n8n.com/tratamiento-de-textos/
Telegram community: https://t.me/comunidadn8n
by Harshil Agrawal
This workflow allows you to insert and retrieve data from a table in Stackby.

- Set node: Sets the values for the name and id fields of a new record (a Code-node sketch of this step appears below). You might want to add data from an external source, for example an API or a CRM. Based on your use case, add the respective node before the Set node and configure the Set node accordingly.
- Stackby node: Appends data from the previous node to a table in Stackby. Based on the values you want to add to your table, enter the column names in the Column field.
- Stackby1 node: Fetches all the data stored in the table in Stackby.
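If you prefer code over the Set node, the same step looks like the sketch below; the values are placeholders that would normally come from your API or CRM node:

```typescript
// Code-node equivalent of the Set step: emit the name and id fields that
// the Stackby node appends as columns.
return [
  {
    json: {
      name: "n8n", // maps to the "name" column
      id: "1",     // maps to the "id" column
    },
  },
];
```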