by Huzaifa Tahir
**What it does**
This workflow creates an engaging YouTube Short with a single click: from script to voiceover, to visuals and background music. It combines several AI tools to automate content creation and final video assembly.

**How it works**
- Accepts an input prompt or topic
- Generates a script using GPT
- Converts the script to a voiceover using ElevenLabs
- Generates b-roll-style images via Leonardo.Ai
- Matches background music
- Assembles a vertical 1080×1920 MP4 video using a JSON render config
- Optionally uploads to YouTube or saves to Cloudinary

**Setup steps**
- Add your credentials: Leonardo API (image generation), ElevenLabs (voiceover), Cloudinary (upload destination), any GPT-based text generator
- Drop your audio/music file in the right node
- Replace API expressions with your own credentials

> Full step-by-step instructions are in sticky notes inside the workflow.
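For illustration, the JSON render config for the vertical assembly step might be built like this. Every field name below is a hypothetical placeholder, not the render service's actual schema:

```python
# Hypothetical render config for the final assembly step.
# Field names are illustrative assumptions, not the exact schema
# the workflow's render node expects.
def build_render_config(voiceover_url, image_urls, music_url):
    """Assemble a vertical-video render config from generated assets."""
    return {
        "output_format": "mp4",
        "width": 1080,   # vertical 9:16 short
        "height": 1920,
        "audio": {"voiceover": voiceover_url, "music": music_url},
        "scenes": [
            {"image": url, "duration_seconds": 4} for url in image_urls
        ],
    }

config = build_render_config(
    "https://example.com/voice.mp3",
    ["https://example.com/1.png", "https://example.com/2.png"],
    "https://example.com/music.mp3",
)
```

The key point is that each generated asset (script audio, b-roll image, music) maps to one slot in the config before the whole payload is posted to the render node.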
by Marcelo Abreu
**What this workflow does**
- Runs automatically every Monday morning at 8 AM
- Collects your Google Search Console data from the last month and the month before that for a given URL (the date range is configurable)
- Formats the data, aggregating it by date, query, page, device, and country
- Generates AI-driven analysis and insights on your results, providing actionable recommendations
- Renders the report as a visually appealing PDF with charts and tables
- Sends the report via Slack (you can also add email or WhatsApp)

A sample of the first page of the report:

**Setup Guide**
- Create an account on pdforge and use the pre-made Meta Ads template
- Connect Google OAuth2 (guide on the template), OpenAI, and Slack to n8n
- Set your site URL and date range (optional)
- Customize the scheduling date and time

**Requirements**
- Google OAuth2 (via Google Search Console): Documentation
- pdforge access: Create an account
- AI API access (e.g. via OpenAI, Anthropic, Google, or Ollama)
- Slack access (via OAuth2): Documentation

Feel free to contact me via LinkedIn if you have any questions!
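The aggregation step can be pictured as a group-and-sum over Search Console rows. The field names here are assumptions for illustration:

```python
from collections import defaultdict

# Illustrative sketch of the aggregation step: Search Console rows are
# grouped by one dimension (date, query, page, device, or country) and
# their clicks/impressions summed per group.
def aggregate(rows, dimension):
    totals = defaultdict(lambda: {"clicks": 0, "impressions": 0})
    for row in rows:
        key = row[dimension]
        totals[key]["clicks"] += row["clicks"]
        totals[key]["impressions"] += row["impressions"]
    return dict(totals)

rows = [
    {"query": "n8n", "clicks": 3, "impressions": 40},
    {"query": "n8n", "clicks": 2, "impressions": 10},
    {"query": "rag", "clicks": 1, "impressions": 5},
]
by_query = aggregate(rows, "query")
```

Running the same function once per dimension yields the five views (date, query, page, device, country) the report is built from.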
by Facundo Cabrera
**Automated Meeting Minutes from Video Recordings**

This workflow automatically transforms video recordings of meetings into structured, professional meeting minutes in Notion. It uses local AI models (Whisper for transcription and Ollama for summarization) to ensure privacy and cost efficiency, while uploading the original video to Google Drive for safekeeping. Ideal for creative teams, production reviews, or any scenario where visual context is as important as the spoken word.

**How It Works**
1. Wait & Detect: The workflow monitors a local folder. When a new .mkv video file is added, it waits until the file has finished copying.
2. Prepare Audio: The video is converted into a .wav audio file optimized for transcription (under 25 MB with high clarity).
3. Transcribe Locally: The local Whisper model generates a timestamped text transcript.
4. Generate Smart Minutes: The transcript is sent to a local Ollama LLM, which produces structured, summarized meeting notes.
5. Store & Share: The original video is uploaded to Google Drive, a new page is created in Notion with the notes and a link to the video, and a completion notification is sent via Discord.

**Setup Steps**
**Estimated Time**: 10–15 minutes (for technically experienced users).
**Prerequisites**:
- Install Python, FFmpeg, and required packages (openai-whisper, ffmpeg-python).
- Run Ollama locally with a compatible model (e.g., gpt-oss:20b, llama3, mistral).
- Configure n8n credentials for Google Drive, Notion, and Discord.
**Workflow Configuration**:
- Update the file paths for the helper scripts (wait-for-file.ps1, create_wav.py, transcribe_return.py) in the respective "Execute Command" nodes.
- Change the input folder path (G:\OBS\videos) in the "File" node to your own recording directory.
- Replace the Google Drive folder ID and Notion database/page ID in their respective nodes.

> Note: Detailed instructions for each step, including error handling and variable setup, are documented in the Sticky Notes within the workflow itself.
**Helper Scripts Documentation**

wait-for-file.ps1
A PowerShell script that checks whether a file is still being written to (i.e., locked by another process). It returns 0 if the file is free and 1 if it is still locked.
Usage: `.\wait-for-file.ps1 -FilePath "C:\path\to\your\file.mkv"`

create_wav.py
A Python script that converts a video file into a .wav audio file. It automatically calculates the necessary audio bitrate to keep the output file under 25 MB, a common requirement for many transcription services.
Usage: `python create_wav.py "C:\path\to\your\file.mkv"`

transcribe_return.py
A Python script that uses a local Whisper model to transcribe an audio file. It can auto-detect the language or use a language code specified in the filename (e.g., meeting.en.mkv for English, meeting.es.mkv for Spanish). The transcript is printed directly to stdout with timestamps, which is then captured by the n8n workflow.
Usage:
- Auto-detect language: `python transcribe_return.py "C:\path\to\your\file.mkv"`
- Force language via filename: `python transcribe_return.py "C:\path\to\your\file.es.mkv"`
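The bitrate calculation that create_wav.py is described as performing reduces to simple arithmetic. A sketch of just the math (the real script additionally shells out to ffmpeg):

```python
# Pick an audio bitrate so that (bitrate / 8) * duration stays under
# 25 MB, capped at a sensible ceiling for speech. The ceiling value is
# an assumption for illustration.
MAX_BYTES = 25 * 1024 * 1024

def target_bitrate_kbps(duration_seconds, max_bytes=MAX_BYTES, ceiling=128):
    """Return a bitrate (kbps) that keeps the output under max_bytes."""
    # bytes = bitrate_bps / 8 * seconds  =>  bitrate_bps = bytes * 8 / seconds
    fits = (max_bytes * 8) / duration_seconds / 1000
    return min(ceiling, int(fits))
```

For a one-hour recording this yields roughly 58 kbps, while short clips simply use the ceiling.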
by Automate With Marc
**Instagram Carousel & Caption Generator on Autopilot (GPT-5 + Nano Banana + Blotato + Google Sheets)**

**Description**
Watch the full step-by-step tutorial on YouTube: https://youtu.be/id22R7iBTjo

Disclaimer (self-hosted requirement): This template assumes you have valid API credentials for OpenAI, Wavespeed/Nano Banana, Blotato, and Google. If using n8n Self-Hosted, ensure HTTPS access and credentials are set in your instance.

**How It Works**
1. Chat Trigger – Receive a topic/idea (e.g. "5 best podcast tips").
2. Image Prompt Generator (GPT-5) – Creates 5 prompts using the "Hook → Problem → Insight → Solution → CTA" framework.
3. Structured Output Parser – Formats output into a JSON array.
4. Generate Images (Nano Banana) – Converts prompts into high-quality visuals.
5. Wait for Render – Ensures image generation completes.
6. Fetch Rendered Image URLs – Retrieves image links.
7. Upload to Blotato – Hosts and prepares images for posting.
8. Collect Media URLs – Gathers all uploaded image URLs.
9. Log to Google Sheets – Stores image URLs + timestamps for tracking.
10. Caption Generator (GPT-5) – Writes an SEO-friendly caption.
11. Merge Caption + Images – Combines data.
12. Post Carousel (Blotato) – Publishes directly to Instagram.

**Step-by-Step Setup Instructions**
1) Prerequisites
- n8n (Cloud or Self-Hosted)
- OpenAI API Key (GPT-5)
- Wavespeed API Key (Nano Banana)
- Blotato API credentials (connected to Instagram)
- Google Sheets OAuth credentials

2) Add Credentials in n8n
- OpenAI: Settings → Credentials → Add "OpenAI API"
- Wavespeed: HTTP Header Auth (e.g. Authorization: Bearer <API_KEY>)
- Blotato: Add "Blotato API"
- Google Sheets: Add "Google Sheets OAuth2 API"

3) Configure & Test
- Run with an idea like "Top 5 design hacks".
- Check generated images, caption, and logged sheet entry.
- Confirm posting works via Blotato.

4) Optional
- Add a Schedule Trigger for weekly automation.
- Insert a Slack approval loop before posting.

**Customization Guide**
- Change design style: Modify adjectives in the Image Prompt Generator.
- Adjust number of slides: Change the Split node loop count.
- Tone of captions: Edit the Caption Generator's system prompt.
- Adjust render wait time: If image generation takes longer, increase the Wait node duration from 30 seconds to 60 seconds or more.
- Log extra data: Add columns in Google Sheets for campaign or topic.
- Swap posting tool: Replace Blotato with your scheduler or email node.

**Requirements**
- OpenAI API key (GPT-5 or compatible)
- Wavespeed API key (Nano Banana)
- Blotato API credentials
- Google Sheets OAuth credentials
- n8n account (Cloud or Self-Hosted)
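The Structured Output Parser step expects something like a five-element JSON array, one prompt per stage of the Hook → Problem → Insight → Solution → CTA framework. A sketch with an assumed schema:

```python
import json

# Assumed shape of the parser's output: a JSON array of five objects,
# each carrying one image prompt. The exact keys are illustrative.
STAGES = ["hook", "problem", "insight", "solution", "cta"]

def parse_slides(raw_json):
    slides = json.loads(raw_json)
    if len(slides) != len(STAGES):
        raise ValueError("carousel needs exactly five slides")
    return {stage: slide["prompt"] for stage, slide in zip(STAGES, slides)}

raw = json.dumps([{"prompt": f"slide about {s}"} for s in STAGES])
slides = parse_slides(raw)
```

Validating the count before rendering is what makes "Adjust number of slides" a one-line change: update both the prompt and the expected length together.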
by Growth AI
**N8N UGC Video Generator - Setup Instructions**

**Transform Product Images into Professional UGC Videos with AI**

This powerful n8n workflow automatically converts product images into professional User-Generated Content (UGC) videos using cutting-edge AI technologies, including Gemini 2.5 Flash, Claude 4 Sonnet, and VEO3 Fast.

**Who's it for**
- **Content creators** looking to scale video production
- **E-commerce businesses** needing authentic product videos
- **Marketing agencies** creating UGC campaigns for clients
- **Social media managers** requiring quick video content

**How it works**
The workflow operates in 4 distinct phases:
- Phase 0: Setup - Configure all required API credentials and services
- Phase 1: Image Enhancement - AI analyzes and optimizes your product image
- Phase 2: Script Generation - Creates authentic dialogue scripts based on your input
- Phase 3: Video Production - Generates and merges professional video segments

**Requirements**
Essential Services & APIs
- **Telegram Bot Token** (create via @BotFather)
- **OpenRouter API** with Gemini 2.5 Flash access
- **Anthropic API** for Claude 4 Sonnet
- **KIE.AI Account** with VEO3 Fast access
- **N8N Instance** (cloud or self-hosted)

Technical Prerequisites
- Basic understanding of n8n workflows
- API key management experience
- Telegram bot creation knowledge

**How to set up**
Step 1: Service Configuration
- Create Telegram Bot: message @BotFather on Telegram, use the /newbot command and follow the instructions, then save the bot token for later use.
- OpenRouter Setup: sign up at openrouter.ai, purchase credits for Gemini 2.5 Flash access, then generate and save an API key.
- Anthropic Configuration: create an account at console.anthropic.com, add credits to your account, and generate a Claude API key.
- KIE.AI Access: register at kie.ai, subscribe to the VEO3 Fast plan, and obtain a bearer token.

Step 2: N8N Credential Setup
Configure these credentials in your n8n instance:
- Telegram API – Credential Name: telegramApi; Bot Token: your Telegram bot token
- OpenRouter API – Credential Name: openRouterApi; API Key: your OpenRouter key
- Anthropic API – Credential Name: anthropicApi; API Key: your Anthropic key
- HTTP Bearer Auth – Credential Name: httpBearerAuth; Token: your KIE.AI bearer token

Step 3: Workflow Configuration
- Import the Workflow: copy the provided JSON workflow and import it into your n8n instance.
- Update Telegram Token: locate the "Edit Fields" node and replace "Your Telegram Token" with your actual bot token.
- Configure Webhook URLs: ensure all Telegram nodes have proper webhook configurations, and test webhook connectivity.

Step 4: Testing & Validation
- Test Individual Nodes: verify each API connection, check credential configurations, and confirm node responses.
- End-to-End Testing: send a test image to your Telegram bot, follow the complete workflow process, and verify the final video output.

**How to customize the workflow**
Modify Image Enhancement Prompts
- Edit the HTTP Request node for Gemini
- Adjust the prompt text to match your style preferences
- Test different aspect ratios (current: 1:1 square format)

Customize Script Generation
- Modify the Basic LLM Chain node prompt
- Adjust video segment duration (current: 7-8 seconds each)
- Change dialogue style and tone requirements

Video Generation Settings
- Update VEO3 API parameters in the HTTP Request1 node
- Modify the aspect ratio (current: 16:9)
- Adjust model settings and seeds for consistency

Output Customization
- Change the final video format in the MediaFX node
- Modify Telegram message templates
- Add additional processing steps before delivery

**Workflow Operation**
Phase 1: Image Reception and Enhancement
- User sends a product image via Telegram
- System prompts for enhancement instructions
- Gemini AI analyzes and optimizes the image
- An enhanced square-format image is returned

Phase 2: Analysis and Script Creation
- System requests a dialogue concept from the user
- AI analyzes image details and environment
- Claude generates a realistic 2-segment script
- Scripts respect the physical constraints of the original image

Phase 3: Video Generation
- Two separate videos are generated using VEO3
- System monitors generation status
- Videos are merged into a single flowing sequence
- The final video is delivered via Telegram

**Troubleshooting**
Common Issues
- **API Rate Limits**: Implement delays between requests
- **Webhook Failures**: Verify URL configurations and SSL certificates
- **Video Generation Timeouts**: Increase wait node duration
- **Credential Errors**: Double-check all API keys and permissions

Error Handling
The workflow includes automatic error detection:
- Failed video generation triggers an error message
- Status checking prevents infinite loops
- Alternative outputs for different scenarios

**Advanced Features**
Batch Processing
- Modify the trigger to handle multiple images
- Add queue management for high-volume usage
- Implement user session tracking

Custom Branding
- Add watermarks or logos to generated videos
- Customize color schemes and styling
- Include brand-specific dialogue templates

Analytics Integration
- Track usage metrics and success rates
- Monitor API costs and optimization opportunities
- Implement user behavior analytics

**Cost Optimization**
API Usage Management
- Monitor token consumption across services
- Implement caching for repeated requests
- Use lower-cost models for testing phases

Efficiency Improvements
- Optimize image sizes before processing
- Implement smart retry mechanisms
- Use batch processing where possible

This workflow transforms static product images into engaging, professional UGC videos automatically, saving hours of manual video creation while maintaining high-quality output perfect for social media platforms.
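The "status checking prevents infinite loops" behaviour can be sketched as a bounded polling loop. `poll_fn` and the status strings below are placeholders for the real VEO3 status request:

```python
import time

# Poll a status function with a retry ceiling so a stuck render can
# never loop forever. delay_seconds would be tens of seconds in the
# real workflow's Wait node.
def wait_for_render(poll_fn, max_attempts=20, delay_seconds=0):
    for _ in range(max_attempts):
        status = poll_fn()
        if status == "completed":
            return True
        if status == "failed":
            return False
        time.sleep(delay_seconds)
    raise TimeoutError("render did not finish within the retry budget")

statuses = iter(["processing", "processing", "completed"])
done = wait_for_render(lambda: next(statuses))
```

The explicit `max_attempts` ceiling is the design choice that turns a potential infinite loop into a reportable timeout error.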
by David Roberts
The workflow first populates a Pinecone index with vectors from a Bitcoin whitepaper. Then, it waits for a manual chat message. When received, the chat message is turned into a vector and compared to the vectors in Pinecone. The most similar vectors are retrieved and passed to OpenAI for generating a chat response. Note that to use this template, you need to be on n8n version 1.19.4 or later.
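Conceptually, the comparison step reduces to cosine similarity over embedding vectors. A toy sketch (Pinecone performs this at scale over its index; the vectors and texts here are made up):

```python
import math

# Cosine similarity between two vectors, and a top-k lookup over a
# small in-memory "store" standing in for the Pinecone index.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec, store, k=2):
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["vector"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

store = [
    {"text": "proof of work", "vector": [1.0, 0.1]},
    {"text": "timestamps", "vector": [0.9, 0.2]},
    {"text": "unrelated", "vector": [0.0, 1.0]},
]
hits = top_k([1.0, 0.0], store)
```

The retrieved texts are what get passed to OpenAI as context for the chat response.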
by Dmitry Mikheev
**Telegram Rich Output Helper Workflow**

**Who is this for?**
Builders of Telegram chatbots, AI assistants, or notification services who already run n8n and need to convert long, mixed-media answers from an LLM (or any upstream source) into Telegram-friendly messages.

**Prerequisites**
- A Telegram bot created with @BotFather.
- The bot's HTTP API token saved as a Telegram API credential in n8n.
- n8n ≥ 1.0 with the built-in Telegram node still installed.
- A parent workflow that calls this one via Execute Workflow and passes:
  - chatId – the destination chat ID (integer).
  - output – a string that can contain plain text and HTTP links to images, audio, or video.

**What the workflow does**
1. Extract Links – A JavaScript Code node scans output, deduplicates URLs, and classifies each by file extension.
2. Link Path – If no media links exist, the text path is used. Otherwise, each link is routed through a Switch node that triggers the correct Telegram call (sendPhoto, sendAudio, sendVideo) so users get inline previews or players.
3. Text Path – An IF node checks whether the remaining text exceeds the configured 1,000-character limit. When it does, a Code node slices the text at line boundaries; SplitInBatches then sends the chunks sequentially so nothing is lost.
4. All branches converge, keeping the whole exchange inside one execution.

**Customisation tips**
- **Adjust the character limit** – edit the first expression in "If text too long".
- **Filter/enrich links** – extend the regex or add MIME checks before dispatch.
- **Captions & keyboards** – populate additionalFields in the three "Send back" nodes.
- **Throughput vs. order** – tweak the batch size in both SplitInBatches nodes.

With this template in place, your users receive the complete message, playable media, and zero manual formatting, all within Telegram's API limits.
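The text-path chunking described above can be sketched as follows. The workflow implements this in a JavaScript Code node; the same logic is shown in Python for illustration, with 1,000 characters as the assumed limit:

```python
# Split long text at line boundaries so each chunk stays under the
# configured limit. A single line longer than the limit is kept whole,
# since splitting mid-line would break formatting.
def chunk_at_lines(text, limit=1000):
    chunks, current = [], ""
    for line in text.splitlines():
        candidate = (current + "\n" + line) if current else line
        if len(candidate) > limit and current:
            chunks.append(current)
            current = line
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

parts = chunk_at_lines("a" * 600 + "\n" + "b" * 600, limit=1000)
```

Each element of the returned list becomes one sequential Telegram message, which is why slicing at line boundaries matters: no sentence is ever cut mid-word.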
by NanaB
**Description**
This n8n workflow automates the entire process of creating and publishing AI-generated videos, triggered by a simple message from a Telegram bot (YTAdmin). It transforms a text prompt into a structured video with scenes, visuals, and voiceover, stores assets in MongoDB, renders the final output using Creatomate, and uploads the video to YouTube. Throughout the process, YTAdmin receives real-time updates on the workflow's progress. This is ideal for content creators, marketers, or businesses looking to scale video production using automation and AI.

You can see a video demonstrating this template in action here: https://www.youtube.com/watch?v=EjI-ChpJ4xA&t=200s

**How it Works**
1. Trigger: Message from YTAdmin (Telegram Bot) – The flow starts when YTAdmin sends a content prompt.
2. Generate Structured Content – A Mistral language model processes the input and outputs structured content, typically broken into scenes.
3. Split & Process Content into Scenes – The content is split into categorized parts for scene generation.
4. Generate Media Assets – For each scene, images are generated using OpenAI's image model and voiceovers are created using OpenAI's text-to-speech. Audio files are encoded and stored in MongoDB.
5. Scene Composition – Assets are grouped into coherent scenes.
6. Render with Creatomate – A complete payload is generated and sent to the Creatomate rendering API to produce the video. Progress messages are sent to YTAdmin. The flow pauses briefly to avoid rate limits.
7. Render Callback – Once Creatomate completes rendering, it sends a callback to the flow. If the render fails, an error message is sent to YTAdmin. If the render succeeds, the flow proceeds to post-processing.
8. Generate Title & Description – A second Mistral prompt generates a compelling title and description for YouTube.
9. Upload to YouTube – The rendered video is retrieved from Creatomate and uploaded to YouTube with the AI-generated metadata.
10. Final Update – A success message is sent to YTAdmin, confirming upload completion.
**Set Up Steps (Approx. 10–15 Minutes)**

Step 1: Set Up YTAdmin Bot
- Create a Telegram bot via BotFather and get your API token.
- Add this token in n8n's Telegram credentials and link it to the "Receive Message from YTAdmin" trigger.

Step 2: Connect Your AI Providers
- Mistral: Add your API key under the HTTP Request or AI Model nodes.
- OpenAI: Create an account at platform.openai.com and obtain an API key. Use it for both image generation and voiceover synthesis.

Step 3: Configure Audio File Storage with MongoDB via Custom API
The API:
- Receives the Base64-encoded audio data sent in the request body.
- Connects to the configured MongoDB instance (connection details are managed securely within the API; code below).
- Uses the MongoDB driver and GridFS to store the audio data.
- Returns the unique _id (ObjectId) of the stored file in GridFS as a response. This _id is crucial, as it will be used in subsequent steps to generate the download URL for the audio file.
My API code can be found here for reference: https://github.com/nanabrownsnr/YTAutomation.git

Step 4: Set Up Creatomate
- Create a Creatomate account, define your video templates, and retrieve your API key.
- Configure the HTTP Request node to match your Creatomate payload requirements.

Step 5: Connect YouTube
- In n8n, add OAuth2 credentials for your YouTube account.
- Make sure your Google Cloud project has the YouTube Data API enabled.

Step 6: Deploy and Test
- Send a message to YTAdmin and monitor the flow in n8n.
- Verify that content is generated, media is created, and the final video is rendered and uploaded.

**Customization Options**
- Change the AI Prompts: Modify the generation prompts to adjust tone, voice, or content type (e.g., news recaps, product videos, educational summaries).
- Switch Messaging Platform: Replace Telegram (YTAdmin) with Slack, Discord, or WhatsApp by swapping out the trigger and response nodes.
- Add Subtitles or Effects: Integrate Whisper or another speech-to-text tool to generate subtitles.
- Add overlay or transition effects in the Creatomate video payload.
- Use Local File Storage Instead of MongoDB: Swap out the MongoDB upload HTTP nodes with filesystem or S3-compatible storage.
- Repurpose for Other Platforms: Swap the YouTube upload with TikTok, Instagram, or Vimeo endpoints for broader publishing.

**Need help or want to customize this workflow?** If you'd like assistance setting this up or adapting it for a different use case, feel free to reach out to me at nanabrownsnr@gmail.com. I'm happy to help!
by David Harvey
**iMessage AI-Powered Smart Calorie Tracker**

> What it looks like in use: this image shows a visual of the workflow in action. Use it for reference when replicating or customizing the template.

This n8n template transforms a user-submitted food photo into a detailed, friendly, AI-generated nutritional report, sent back seamlessly as a chat message. It combines OpenAI's visual reasoning, Postgres-based memory, and real-time messaging with Blooio to create a hands-free calorie and nutrition tracker.

**Use Cases**
- Auto-analyze meals based on user-uploaded images.
- Daily/weekly/monthly diet summaries with no manual input.
- Virtual food journaling integrated into messaging apps.
- Nutrition companion for healthcare, fitness, and wellness apps.

**Good to Know**
- This uses GPT-4 with image capabilities, which may incur higher usage costs depending on your OpenAI pricing tier. Review OpenAI's pricing.
- The model uses visual reasoning and estimation to determine nutritional info; results are estimates and should not replace medical advice.
- Blooio is used for sending/receiving messages. You will need a valid API key and a project set up with webhook delivery.
- A Postgres database is used for long-term memory (optional but recommended). You can use any memory node with it.

**How It Works**
- Webhook Trigger – The workflow begins when a message is received via Blooio. This webhook listens for user-submitted content, including any image attachments.
- Image Validation and Extraction – A conditional check verifies the presence of attachments. If images are found, their URLs are extracted using a Code node and prepared for processing.
- Image Analysis via AI Agent – Images are passed to an OpenAI-based agent using a custom system prompt that identifies the meal, estimates portion sizes, calculates calories, macros, fiber, sugar, and sodium, scores the meal with a health and confidence rating, and responds in a chatty, human-like summary format.
- Memory Integration – A Postgres memory node stores user interactions for recall and contextual continuity, allowing day/week/month reports to be generated based on cumulative messages.
- Response Aggregation & Summary – Messages are aggregated and summarized by a second AI agent into a single concise message to be sent back to the user via Blooio.
- Message Dispatch – The final message is posted back to the originating conversation using the Blooio Send Message API.

**How to Use**
- The included webhook can be triggered manually or programmatically by linking Blooio to a frontend chat UI.
- You can test the flow using a manual POST request containing mock Blooio payloads.
- Want to use a different messaging app? Replace the Blooio nodes with your preferred messaging API (e.g., Twilio, Slack, Telegram).

**Requirements**
- OpenAI API access with GPT-4 Vision or equivalent multimodal support.
- Blooio account with access to incoming and outgoing message APIs.
- Optional: Postgres DB (e.g., via Neon) for tracking message context over time.

**Customising This Workflow**
- **Prompt Tuning** – Tailor the system prompt in the AI Agent node to fit specific diets (e.g., keto, diabetic), age groups, or regionally specific foods.
- **Analytics Dashboards** – Hook up your Postgres memory to a data visualization tool for nutritional trends over time.
- **Multilingual Support** – Adjust the response prompt to translate messages into other languages or regional dialects.
- **Image Preprocessing** – Insert a preprocessing node before sending images to the model to resize, crop, or enhance clarity for better results.
by Cooper
**Chat with thing**

This n8n template lets you build a smart AI chat assistant that can handle text, images, and PDFs using OpenAI's GPT-4o multimodal model. It supports dynamic conversations and file analysis, making it great for AI-driven support bots, personal assistants, or embedded chat widgets.

**How it Works**
- The chat trigger node kicks off a session using n8n's hosted chat UI.
- Users can send text or upload images or PDFs; the workflow checks if a file was included.
- If an image is uploaded, the file is converted to base64 and analyzed using GPT-4o's vision capabilities.
- GPT-4o generates a natural language description of the image and responds to the user's question in context.
- A memory buffer keeps track of the conversation thread, so follow-up questions are handled intelligently.
- OpenAI's chat model handles both text-only and mixed-media input seamlessly.

**How to Use**
- You can embed this in a website or use it with your own webhook/chat interface.
- The logic is modular: just swap out the chatTrigger node for another input (e.g. form or API).
- To use with documents, you can modify the logic to pass PDF content to GPT-4 directly.
- You can extend it with action nodes, e.g. saving results to Notion or Airtable, or sending replies via email or Slack.

**Requirements**
- Your OpenAI GPT-4o API key
- File Upload enabled on the chat

**Use Cases**
- PDF explainer bot
- Internal knowledge chat with media support
- Personal assistant for mixed content
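The base64 conversion step can be sketched like this. The message layout mirrors the common chat-completions image-input format; treat the exact field names as assumptions rather than this template's literal payload:

```python
import base64

# Encode uploaded image bytes as a data URL and wrap them, together
# with the user's question, in a vision-style chat message.
def image_message(file_bytes, question, mime="image/png"):
    encoded = base64.b64encode(file_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{encoded}"}},
        ],
    }

msg = image_message(b"\x89PNG...", "What is in this image?")
```

In n8n the same transformation happens inside a Code node before the model call; the data-URL form is what lets the model receive the file inline instead of by link.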
by Gulfiia
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Automated Data-Driven UX Persona Creation – Try It Out!**

**About**
You can create personas based on your website, region, and industry. Unlike traditional persona creation, this process uses reliable data sources and can estimate market size for each persona. UX personas have a wide range of applications: use them to better define your target users during product development, align your team around user goals during workshops, or inspire new features and ideas by deeply understanding user needs and behaviors.

**How It Works**
- The flow is triggered via a web form
- Perplexity analyzes the market and creates a data foundation for the personas
- An AI agent transforms the data into detailed persona descriptions and publishes them in a Google Doc
- DALL·E 3 generates an image for each persona, which is saved to your Google Drive

**How To Use**
- Import the package into your n8n interface
- Set up the credentials in each node to access the necessary tools
- Wait for the process to run (it takes just a few seconds)
- Check the final output in Google Docs and your Google Drive

**Requirements**
- Perplexity for research
- OpenAI for LLM and image generation
- Google Docs
- Google Drive to upload images
by dmr
This n8n workflow implements a version of the Adaptive Retrieval-Augmented Generation (RAG) framework. It recognizes that the best way to retrieve information often depends on the type of question asked. Instead of a one-size-fits-all approach, this workflow adapts its strategy based on the user's query intent.

**How it Works**
1. Receive Query: Takes a user query as input (along with context like a chat session ID and Vector Store collection ID if used as a sub-workflow).
2. Classify Query: First, the workflow classifies the query into a predefined category. This template uses four examples:
   - Factual: For specific facts.
   - Analytical: For deeper explanations or comparisons.
   - Opinion: For subjective viewpoints.
   - Contextual: For questions relying on specific background.
3. Select & Adapt Strategy: Based on the classification, it selects a corresponding strategy to prepare for information retrieval. The example strategies aim to:
   - Factual: Refine the query for precision.
   - Analytical: Break the query into sub-questions for broad coverage.
   - Opinion: Identify different viewpoints to look for.
   - Contextual: Incorporate implied or user-specific context.
4. Retrieve Info: Uses the output of the selected strategy to search the specified knowledge base (Qdrant vector store; change as needed) for relevant documents.
5. Generate Response: Constructs a response using the retrieved documents, guided by a prompt tailored to the original query type.

By adapting the retrieval strategy, this workflow aims to provide more relevant results tailored to the user's intent.

**Usage & Flexibility**
- **Sub-Workflow:** Designed to be called from other n8n workflows, passing user_query, chat_memory_key, and vector_store_id as inputs.
- **Chat Testing:** Can also be triggered directly via the n8n Chat interface for easy testing and interaction.
- **Customizable Framework:** The query categories (Factual, Analytical, etc.) and the associated retrieval strategies are examples. You can modify or replace them entirely to fit your specific domain or requirements.

**Requirements**
- **Credentials:** You will need API credentials configured in your n8n instance for:
  - Google Gemini (AI Models)
  - Qdrant (Vector Store)
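The classify-then-adapt pattern can be sketched as a category-to-strategy dispatch. The strategies below are simplified stand-ins for the prompt-driven ones in the workflow:

```python
# Each query category maps to a different preparation step that runs
# before retrieval. Unknown categories fall back to the factual path.
STRATEGIES = {
    "factual": lambda q: [q.strip() + " (refined for precision)"],
    "analytical": lambda q: [f"{q} - aspect {i}" for i in range(1, 4)],
    "opinion": lambda q: [f"{q} from viewpoint {v}" for v in ("pro", "con")],
    "contextual": lambda q: [f"{q} given the user's context"],
}

def prepare_queries(category, user_query):
    strategy = STRATEGIES.get(category, STRATEGIES["factual"])
    return strategy(user_query)

subqueries = prepare_queries("analytical", "compare RAG frameworks")
```

Each returned string would then be embedded and searched against the vector store, so an analytical question fans out into several sub-searches while a factual one stays as a single refined query.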