by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. The pipelines are adaptable to any dataset of images, and hence to many production use cases:

Uploading (image) datasets to Qdrant
Setting up meta-variables for anomaly detection in Qdrant
Anomaly detection tool
KNN classifier tool

For anomaly detection
The first pipeline uploads an image dataset to Qdrant.
The second pipeline sets up the cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether the image is an anomaly relative to the uploaded dataset.

For KNN (k nearest neighbours) classification
The first pipeline uploads an image dataset to Qdrant.
The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both
You'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (a Free Tier cluster is sufficient), the Voyage AI API, and Google Cloud Storage.

[This workflow] Setting Up Cluster (Class) Centres & Cluster (Class) Threshold Scores for Anomaly Detection
A preparatory workflow that sets cluster centres and cluster threshold scores so anomalies can be detected against these thresholds. Here, we use two approaches to set up the centres: the "distance matrix approach" and the "multimodal embedding model approach".
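The anomaly-detection idea above can be sketched in a few lines: an image is flagged as an anomaly when its embedding is farther from every cluster centre than that cluster's threshold score. This is a minimal plain-Python sketch, not the workflow itself; the embeddings, centres, and thresholds are toy examples, and the real workflow would query Qdrant for them.

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def is_anomaly(embedding, class_centres, class_thresholds):
    """Anomalous if farther than the threshold from every cluster centre."""
    for label, centre in class_centres.items():
        if cosine_distance(embedding, centre) <= class_thresholds[label]:
            return False  # close enough to at least one known class
    return True

# Toy 2-D "embeddings" standing in for real multimodal vectors
centres = {"crops": [1.0, 0.0], "lands": [0.0, 1.0]}
thresholds = {"crops": 0.2, "lands": 0.2}
print(is_anomaly([0.95, 0.05], centres, thresholds))  # False: near "crops"
print(is_anomaly([-1.0, -1.0], centres, thresholds))  # True: far from both
```

The per-class thresholds are exactly the "meta-variables" the preparatory workflow computes and stores.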
by Jonathan
This is the fourth workflow for the Mattermost Standup Bot. Every morning, this workflow sends the team a message asking three standup questions:

What have you accomplished since your last report?
What do you want to accomplish before your next report?
Is anything blocking your progress?

Once answered, the answers are posted to a Mattermost channel. The "Read Config" nodes will need to be updated to point to the ID of the "Standup Bot - Read Config" workflow, and the "Override Config" node will need to point to "Standup Bot - Override Config".
by MANISH KUMAR
Automated YouTube Shorts Creator with yt-dlp & FFmpeg

How It Works
• Downloads videos/music from YouTube using yt-dlp
• Merges assets with dynamic text overlays
• Automatically uploads to YouTube as Shorts (9:16 format)
• Tracks everything in Google Sheets

Set Up Steps (~10 minutes)
Install yt-dlp and FFmpeg in your n8n environment
Connect Google Sheets (for video/music pools)
Set up YouTube OAuth credentials
Configure the text overlay font (NotoSerif included)

Key Features
**Dual Pipeline System**: Video Downloader (MP4) + Music Downloader (MP3 with thumbnails); random pairing for endless combinations
**Professional Text Overlays**: dynamic line wrapping for perfect 9:16 formatting; customizable fonts/colors
**YouTube API Integration**: automatic upload with metadata (titles/descriptions); privacy/license controls
**Google Sheets Tracking**: logs download paths, YouTube URLs, and timestamps; prevents duplicate processing
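The download-and-merge steps roughly correspond to two shell commands. This is a hedged sketch of how such commands could be assembled (for example in a Code node feeding Execute Command nodes); file paths, the font file, and the overlay text are hypothetical, and the real workflow's exact flags may differ.

```python
def ytdlp_cmd(url, out="video.mp4"):
    # Download an MP4 video stream with yt-dlp
    return ["yt-dlp", "-f", "mp4", "-o", out, url]

def ffmpeg_overlay_cmd(video, audio, text, out="short.mp4"):
    # Merge video + music, crop/scale to the 9:16 Shorts aspect ratio,
    # and burn a centred text overlay with FFmpeg's drawtext filter
    vf = (
        "crop=ih*9/16:ih,scale=1080:1920,"
        f"drawtext=fontfile=NotoSerif.ttf:text='{text}':"
        "fontcolor=white:fontsize=64:x=(w-text_w)/2:y=h*0.1"
    )
    return ["ffmpeg", "-y", "-i", video, "-i", audio,
            "-vf", vf, "-map", "0:v", "-map", "1:a", "-shortest", out]

print(" ".join(ytdlp_cmd("https://youtube.com/watch?v=XYZ")))
```

The `-shortest` flag trims the output to the shorter of the two inputs, so a long music track does not pad the clip.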
by Harshil Agrawal
This workflow allows you to add candidates' profile assessments to Notion before an interview.

Prerequisites
Add an input field on your Calendly invite page where the candidate can enter their LinkedIn URL.
Create credentials for your Calendly account. Follow the steps mentioned in the documentation to learn how to do that.
Create credentials for Humantic AI following the steps mentioned here.
Create a page on Notion similar to this page.
Create credentials for the Notion node by following the steps in the documentation.

Calendly Trigger node: triggers the workflow when an interview gets scheduled. Make sure to add a field to collect the candidate's LinkedIn URL on your invite page.
Humantic AI: uses the LinkedIn URL received from the previous node to create a candidate profile in Humantic AI.
Humantic AI1: analyzes the candidate's profile.
Notion node: creates a new page in Notion using the information from the previous node.
by Nishant Rayan
Create Video with HeyGen and Upload to YouTube

Overview
This workflow automates the process of creating an AI-generated avatar video using HeyGen and directly uploading it to YouTube. By sending text input via a webhook, the workflow generates a video with a chosen avatar and voice, waits for processing, downloads the completed file, and publishes it to your configured YouTube channel. This template is ideal for automating content creation pipelines, such as daily news updates, explainer videos, or narrated scripts, without manual intervention.

Use Case
**Marketing teams**: Automate explainer or promotional video creation from text input.
**Content creators**: Generate AI-based avatar videos for YouTube directly from scripts.
**Organizations**: Streamline video generation for announcements, product updates, or tutorials.

Instead of recording and editing videos manually, this template allows you to feed text content into a webhook and have a ready-to-publish video on your YouTube channel within minutes.

How It Works
Webhook Trigger: The workflow starts when text content and a title are sent to the webhook endpoint.
Code Node: Cleans and formats the input text by removing unnecessary newlines and returns it with the title.
Set Node: Prepares HeyGen parameters, including the API key, avatar ID, voice ID, title, and content.
HeyGen API Call: Sends the request to generate a video with the provided avatar and voice.
Wait Node: Pauses briefly to allow HeyGen to process the video.
Video Status Check: Polls HeyGen to check whether the video has finished processing.
Conditional Check: If the video is still processing, it loops back to wait. Once complete, it moves forward.
Download Node: Retrieves the generated video file.
YouTube Upload Node: Uploads the video to your YouTube channel with the provided title and default settings.

Requirements
**HeyGen API Key**: Required to authenticate with HeyGen's video generation API.
**HeyGen Avatar & Voice IDs**: Unique identifiers for the avatar and voice you want to use.
**YouTube OAuth2 Credentials**: Connected account for video uploads.

Setup Instructions
Import the Workflow: Download and import this template JSON into your n8n instance.
Configure the Webhook: Copy the webhook URL from n8n and use it to send requests with title and content. Example payload:

{ "title": "Tech News Update", "content": "Today's top story is about AI advancements in video generation..." }

Add HeyGen Credentials: Insert your HeyGen API key in the Set node under x-api-key, and provide your chosen avatar_id and voice_id from HeyGen. To find your avatar_id and voice_id, first retrieve your API key from the HeyGen dashboard. With this key, run a GET request to https://api.heygen.com/v2/avatars to see a list of avatars along with their avatar_id, then a GET request to https://api.heygen.com/v2/voices to see a list of voices with their voice_id. Once you've identified the avatar and voice you want to use, copy their IDs into the Set HeyGen Parameters node in your n8n workflow.
Set Up YouTube Credentials: Connect your YouTube account in n8n using OAuth2 and ensure the proper permissions are granted for video uploads. In the Google Cloud Console, enable YouTube Data API v3 and create an OAuth Client ID (choose Web Application and add the redirect URI https://<your-n8n-domain>/rest/oauth2-credential/callback). Copy the Client ID and Client Secret, then create new YouTube OAuth2 API credentials in n8n, enter the values, authenticate with your Google account to grant upload permissions, and test the connection. Once complete, the YouTube node will be ready to upload videos automatically.
Activate the Workflow: Once configured, enable the workflow. Sending a POST request to the webhook with title and content will trigger the full process.
Notes
You can adjust the video dimensions (default: 1280x720) in the HeyGen API request.
Processing time may vary depending on script length; the workflow uses a wait-and-poll loop until the video is ready.
The default YouTube upload category is Education (28) and the region is US. Both can be customized in the YouTube node.
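The Wait → Status Check → Conditional loop above is a classic wait-and-poll pattern. Here is a minimal plain-Python sketch of the same logic; `check_status` stands in for the HTTP node that queries HeyGen's status endpoint, and the status strings and timeout values are illustrative assumptions, not HeyGen's exact API contract.

```python
import time

def wait_for_video(check_status, interval=10, timeout=600):
    """Poll until the render completes, sleeping between attempts."""
    waited = 0
    while waited < timeout:
        status = check_status()
        if status == "completed":
            return True          # ready to download and upload
        if status == "failed":
            raise RuntimeError("render failed")
        time.sleep(interval)     # the Wait node's pause
        waited += interval
    raise TimeoutError("video not ready within timeout")

# Simulated status sequence: still processing twice, then done
statuses = iter(["processing", "processing", "completed"])
print(wait_for_video(lambda: next(statuses), interval=0))  # True
```

In n8n the same loop is expressed with edges (the IF node routing back into the Wait node) rather than a `while` statement, but the control flow is identical.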
by Don Jayamaha Jr
This workflow acts as a central API gateway for all technical indicator agents in the Binance Spot Market Quant AI system. It listens for incoming webhook requests and dynamically routes them to the correct timeframe-based indicator tool (15m, 1h, 4h, 1d). Designed to power multi-timeframe analysis at scale.

🎥 Watch Tutorial:

🎯 What It Does
Accepts requests via webhook with a token symbol and timeframe
Forwards requests to the correct internal technical indicator tool
Returns a clean JSON payload with RSI, MACD, BBANDS, EMA, SMA, and ADX
Can be used directly or as a microservice by other agents

🛠️ Input Format
Webhook endpoint: POST /webhook/indicators
Body format:

{ "symbol": "DOGEUSDT", "timeframe": "15m" }

🔄 Routing Logic

| Timeframe | Routed To                        |
| --------- | -------------------------------- |
| 15m       | Binance SM 15min Indicators Tool |
| 1h        | Binance SM 1hour Indicators Tool |
| 4h        | Binance SM 4hour Indicators Tool |
| 1d        | Binance SM 1day Indicators Tool  |

🔎 Use Cases

| Use Case                                           | Description                                            |
| -------------------------------------------------- | ------------------------------------------------------ |
| 🔗 Used by Binance Financial Analyst Tool          | Automatically triggers all indicator tools in parallel |
| 🤖 Integrated in Binance Quant AI System           | Supports reasoning, signal generation, and summaries   |
| ⚙️ Can be called independently for raw data access | Useful for dashboards or advanced analytics            |

📤 Output Example

{ "symbol": "DOGEUSDT", "timeframe": "15m", "rsi": 56.7, "macd": "Bearish Crossover", "bbands": "Stable", "ema": "Price above EMA", "adx": 19.4 }

✅ Prerequisites
Make sure all of the following workflows are installed and operational:
Binance SM 15min Indicators Tool
Binance SM 1hour Indicators Tool
Binance SM 4hour Indicators Tool
Binance SM 1day Indicators Tool
OpenAI credentials (for any agent using LLM formatting)

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. All architectural routing logic and endpoint structuring is IP-protected. No unauthorized rebranding or resale permitted.
🔗 Need help? Connect on LinkedIn – Don Jayamaha
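The routing table can be sketched as a simple dispatch map. This is a plain-Python illustration of the gateway's decision step, not the workflow's actual Switch node; tool names mirror the routing table, and the error handling for unknown timeframes is an assumption.

```python
# Map each supported timeframe to its indicator sub-workflow
ROUTES = {
    "15m": "Binance SM 15min Indicators Tool",
    "1h":  "Binance SM 1hour Indicators Tool",
    "4h":  "Binance SM 4hour Indicators Tool",
    "1d":  "Binance SM 1day Indicators Tool",
}

def route(request):
    """Pick the target tool from the webhook body's timeframe field."""
    timeframe = request.get("timeframe")
    if timeframe not in ROUTES:
        raise ValueError(f"unsupported timeframe: {timeframe}")
    return ROUTES[timeframe]

print(route({"symbol": "DOGEUSDT", "timeframe": "15m"}))
# Binance SM 15min Indicators Tool
```

In n8n this dispatch is typically a Switch node with one output per timeframe, each wired to the corresponding Execute Workflow node.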
by jason
This workflow gathers data every minute from the GitHub (https://github.com), Docker (https://www.docker.com/), npm (https://www.npmjs.com/), and Product Hunt (https://www.producthunt.com/) website APIs and displays selected information on a Smashing (https://smashing.github.io/) dashboard. For convenience, the dashboard piece can be downloaded as a Docker container (https://hub.docker.com/r/tephlon/n8n_dashboard) and installed into your Docker environment.
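For reference, a sketch of the kinds of public endpoints such a per-minute poll would hit. The repo, package, and image names below are examples (not necessarily the ones this workflow tracks), and Product Hunt is omitted because its API requires authentication.

```python
# Build the public API URLs a dashboard poller could query each minute
def api_urls(repo, npm_package, docker_image):
    return {
        "github": f"https://api.github.com/repos/{repo}",
        "npm":    f"https://registry.npmjs.org/{npm_package}",
        "docker": f"https://hub.docker.com/v2/repositories/{docker_image}",
    }

urls = api_urls("n8n-io/n8n", "n8n", "tephlon/n8n_dashboard")
print(urls["github"])  # https://api.github.com/repos/n8n-io/n8n
```

Each response carries the counters (stars, downloads, pulls) that get pushed to the Smashing dashboard's widgets.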
by Jan Oberhauser
Simple API which queries the received country code via GraphQL and returns the result. Example URL: https://n8n.example.com/webhook/1/webhook/webhook?code=DE

Receives a country code from an incoming HTTP request
Reads data via GraphQL
Converts the data to JSON
Constructs the return string
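A sketch of the GraphQL step: building the query body the workflow would send for a given country code. The schema here (a `country` query returning `name`) is an assumption modelled on common public country GraphQL APIs; the real endpoint and fields depend on the workflow's configuration.

```python
import json

def build_country_query(code):
    """Return the JSON body for a country lookup by code (e.g. "DE")."""
    query = """
    query GetCountry($code: ID!) {
      country(code: $code) { name }
    }
    """
    return json.dumps({"query": query, "variables": {"code": code}})

payload = json.loads(build_country_query("DE"))
print(payload["variables"]["code"])  # DE
```

The webhook's `code` query parameter is passed straight through as the GraphQL variable, and the response is then reshaped into the return string.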
by Eduard
An example workflow for a multilanguage Telegram bot. It allows adding many new languages to the bot without editing the workflow. Important note: due to some breaking API changes in NocoDB, some of its node options are not working at the moment (May 2022). Two of the NocoDB nodes were replaced by HTTP Request nodes; the functionality is still the same.
by Colleen Brady
Who is this for?
This workflow is built for anyone who works with YouTube content, whether you're:
A learner looking to understand a video's key points
A content creator repurposing video material
A YouTube manager looking to update titles and descriptions
A social media strategist searching for the most shareable clips

Don't just ask questions about what's said. Find out what's going on in a video too.

Video Overview: https://www.youtube.com/watch?v=Ovg_KfKxnC8

What problem does this solve?
YouTube videos hold valuable insights, but watching and processing them manually takes time. This workflow automates:
**Quick content extraction**: Summarize key ideas without watching full videos
**Visual analysis**: Understand what's happening beyond spoken words
**Clip discovery**: Identify the best moments for social sharing

How the workflow works
This n8n-powered automation:
Uses Google's Gemini 1.5 Flash AI for intelligent video analysis
Provides multiple content analysis templates tailored to different needs

What makes this workflow powerful?
The easiest place to start is by requesting a summary or transcript. From there, you can refine the prompts to match your specific use case and the type of video content you're working with. But what's even more amazing? You can ask questions about what's happening in the video and get detailed insights about the people, objects, and scenes. It's jaw-dropping.

This workflow is versatile: the actions adapt based on the values set. That means you can use a single workflow to:
Extract transcripts
Generate an extended YouTube description
Write a summary blog post

You can also modify the trigger based on how you want to run the workflow: use a webhook, connect it to an event in Airtable, or leave it as-is for on-demand use. The output can then be sent anywhere: Notion, Airtable, CMS platforms, or even just stored for reference.
How to set it up
Connect your Google API key
Paste a YouTube video URL
Select an analysis method
Run the workflow and get structured results

Analysis Templates
**Basic & Timestamped Transcripts**: Extract spoken content
**Summaries**: Get concise takeaways
**Visual Scene Analysis**: Detect objects, settings, and people
**Clip Finder**: Locate shareable moments
**Actionable Insights**: Extract practical information

Customization Options
Modify templates to fit your needs
Connect with external platforms
Adjust formatting preferences

Advanced Configuration
This workflow is designed for use with gemini-1.5-flash. In the future, you can update the flow to work with different models, or modify the HTTP Request node to define which API endpoint should be used. It has also been designed so you can use this flow on its own or add it to a new or existing workflow.

This workflow helps you get the most out of YouTube content, quickly and efficiently.
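A rough sketch of the body the HTTP Request node could send to Gemini 1.5 Flash, pairing a prompt with a video reference via `file_data`. The endpoint and field names follow Google's public Gemini REST API as I understand it, and the prompt is an example; treat the exact shape as an assumption to check against the workflow's own HTTP Request node.

```python
import json

# Assumed Gemini REST endpoint for the generateContent call
GEMINI_URL = ("https://generativelanguage.googleapis.com/v1beta/"
              "models/gemini-1.5-flash:generateContent")

def build_payload(video_url, prompt="Summarize the key points of this video."):
    """Combine an analysis prompt with a video reference in one request."""
    return {
        "contents": [{
            "parts": [
                {"text": prompt},
                {"file_data": {"file_uri": video_url}},
            ]
        }]
    }

body = build_payload("https://www.youtube.com/watch?v=Ovg_KfKxnC8")
print(json.dumps(body, indent=2))
```

Swapping in a different prompt here is exactly how the analysis templates (transcript, summary, scene analysis, clip finder) differ from one another.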
by WeblineIndia
This n8n workflow automates the process of capturing and storing incoming email details in a structured spreadsheet format, such as Google Sheets or Excel. Whenever a new email is received, the workflow extracts key details, including the sender's email, subject, email body, and optional attachments, and logs them as a new row in the spreadsheet. You can customise this workflow to extract additional details, filter emails based on specific criteria, or send notifications when new entries are added.

Pre-conditions & Requirements
Before setting up this workflow, ensure that:
You have access to the email provider (e.g., Gmail, Outlook, or IMAP-supported email services).
The Gmail node is enabled in n8n.
You authenticate n8n with Google OAuth2 to access your inbox.
The Gmail API is enabled in the Google Cloud Console.
You have an existing Google Sheet where data will be stored.
The Google Sheets API is enabled.
You authenticate n8n with your Google account.

Steps

Step 1: Add the Gmail Trigger Node
Click on "Add Node" and search for "Gmail".
Select "Gmail Trigger" and click to add it.
Under Authentication, click "Create New" and authenticate with your Google account. (If you have already connected your Google account, simply select it.)
In the Trigger Event field, select "Message Received".
Under Filters, you can specify:
Label/Mailbox: if you want to listen to emails from a specific folder (optional).
From Address: if you only want to receive emails from specific senders (optional).
Click "Execute Node" to test the connection.
Click "Save".
What This Does: This node listens for new incoming emails in your Gmail inbox.
Step 2: Store Email Data in Google Sheets
Click on "Add Node" and search for "Google Sheets" (or Microsoft Excel, if applicable).
Under Authentication, connect your Google account.
Select the target Spreadsheet and Sheet Name where the data will be stored.
Set the Operation to "Append Row".
Map the extracted email data to the correct columns.
Click "Execute Node" to test and verify data storage.
Click "Save".
What This Does: This node automatically adds a new row for each incoming email, ensuring a structured and searchable email log.

Final Step
Connect both nodes and execute the workflow.

Who's behind this?
WeblineIndia's AI development team. We've delivered 3500+ software projects across 25+ countries since 1999. From no-code automations to complex AI systems, our AI team builds tools that drive results. Looking to hire AI developers? Start with us.
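The column mapping in Step 2 boils down to flattening each email into one spreadsheet row. Here is a minimal sketch of that mapping; the field names (`from`, `subject`, `body`, `attachments`) are hypothetical examples, since the actual keys depend on the Gmail Trigger node's output.

```python
def email_to_row(email):
    """Flatten one email dict into the ordered columns of the sheet."""
    return [
        email.get("from", ""),
        email.get("subject", ""),
        email.get("body", ""),
        # Attachments column: a comma-separated list of file names
        ", ".join(a["name"] for a in email.get("attachments", [])),
    ]

row = email_to_row({"from": "alice@example.com", "subject": "Invoice",
                    "body": "Please find attached.",
                    "attachments": [{"name": "invoice.pdf"}]})
print(row)  # ['alice@example.com', 'Invoice', 'Please find attached.', 'invoice.pdf']
```

In the Google Sheets node itself, this is done declaratively by mapping expressions to columns, but the result appended per email is the same kind of row.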
by rangelstoilov
This workflow handles the Teachable webhook request types: it adds a user, updates them, and either tags them with #unsubscribe or removes the #unsubscribe tag. It also tags the user with the name of the course. Enjoy!