by Lakindu Siriwardana
**🔧 Automated Video Generator (n8n Workflow)**

**🚀 Features**
- End-to-end video creation from a user idea or transcript
- AI-powered scriptwriting using LLMs (e.g., DeepSeek via OpenRouter)
- Voiceover generation with customizable TTS voices
- Image scene generation using providers like together.ai
- Clip creation & concatenation into a full video
- Dynamic caption generation with styling options
- Google Drive & Sheets integration for asset storage and progress tracking

**⚙️ How It Works**
1. **User submits a form** with:
   - Main topic or transcript
   - Desired duration
   - TTS voice
   - Visual style (e.g., Pixar, Lego, Cyberpunk)
   - Image generation provider
2. **AI generates a script:** a catchy title, description, hook, full script, and CTA using a language model.
3. **Text-to-Speech (TTS):** the script is turned into audio using the selected voice, with timestamped captions generated.
4. **Scene segmentation:** the script is split into 5–6 second segments for visual storyboarding.
5. **Image prompt creation:** each scene is converted into an image prompt in the selected style (e.g., "anime close-up of a racing car").
6. **Image generation:** prompts are sent to together.ai or fal.ai to generate scenes.
7. **Clip creation:** each image is turned into a short video clip (Ken Burns-style zoom) based on script timing.
8. **Video assembly:** all clips are concatenated into a single video, and captions are overlaid using the earlier timestamps.
9. **Final output** is uploaded to Google Drive and Telegram, and the links are saved in Google Sheets.

**🛠 Initial Setup**

**🗣️ 1. Set Up TTS Voice (Text-to-Speech)**
Run your TTS server locally using Docker.

**🧰 2. Set Up NCA-Toolkit**
The nca-toolkit appears to be a custom video/image processing backend used via HTTP APIs:
- http://host.docker.internal:9090/v1/image/transform/video
- http://host.docker.internal:9090/v1/video/concatenate
- http://host.docker.internal:9090/v1/ffmpeg/compose

🔧 Steps:
1. Clone or build the nca-toolkit container (if it's a private tool) and ensure it exposes port 9090. It should support endpoints for:
   - Image to video (zoom effect)
   - Video concatenation
   - Audio + video merging
   - Caption overlay via FFmpeg
2. Run it locally with Docker: `docker run -d -p 9090:80 your-nca-toolkit-image`

**🧠 3. Set Up together.ai (Image Generation)**
(Optional: you can use the ChatGPT API instead.) This handles image generation using models like FLUX.1-schnell.

🔧 Steps:
1. Create an account at https://www.together.ai
2. Generate your API key
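Before wiring the key into the workflow, you can sanity-check it with a minimal request to together.ai's image endpoint. This is only a sketch: the `v1/images/generations` path and the `black-forest-labs/FLUX.1-schnell` model id are assumptions, so verify both against your together.ai dashboard.

```javascript
// Minimal sketch: request one scene image from together.ai (assumed endpoint and model id).
// Requires Node 18+ for the global fetch. Set TOGETHER_API_KEY before running.
async function generateSceneImage(prompt) {
  const response = await fetch("https://api.together.xyz/v1/images/generations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.TOGETHER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "black-forest-labs/FLUX.1-schnell", // assumed model id
      prompt,                                    // one scene prompt from the script
      n: 1,
    }),
  });
  return response.json();
}

generateSceneImage("anime close-up of a racing car").then((data) => console.log(data));
```

If the call returns an image URL or base64 payload, the same credentials can be used in the workflow's HTTP Request node.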
by Deb Mukherjee
**Who's it for**
Creators who want to create faceless videos automatically, while keeping human oversight and quality control.

**How it works / What it does**
- AI generates 8 story beats, which can be reviewed, edited, or re-ordered by a human.
- Each beat is converted into narration (audio), imagery, and short clips.
- The final video is assembled and stored in Google Drive, ready for review and regeneration if needed.
- Chat commands trigger each step, giving full human control.

**How to set up**
- Set up Google Drive and Google Sheets
- Get the necessary credentials

**Requirements**
- A Google Drive account for storing videos.
- Access to AI tools for text, voice, and visuals.
- Basic familiarity with triggering chat commands or automation steps.

**How to customize the workflow**
- Adjust the number of story beats or narration style
- Use models of your choice
- Use it for any theme by updating the story prompt
by Intuz
This n8n template from Intuz provides a complete and automated solution for transforming a static product image and a creative idea into a dynamic, AI-generated video ad. Using Google's state-of-the-art Veo 3 model, this workflow manages the entire creative process from concept to a final, downloadable video file.

**Who's this workflow for?**
- E-commerce Brands & Marketers
- Advertising Agencies
- Social Media Content Creators
- Product Managers

**How it works**
1. **Submit a Creative Brief:** The workflow starts when a user submits a creative idea via a simple web form (e.g., "A Pepsi can exploding into a vibrant disco party").
2. **Upload a Product Image:** The user is then prompted to upload a corresponding image (e.g., a high-quality photo of the Pepsi can).
3. **Log the Project in Airtable:** The idea and the uploaded image are saved to an Airtable base, which acts as the central tracking system for all video generation projects.
4. **AI Creative Analysis:** Google Gemini analyzes both the user's text prompt and the uploaded image. It acts as an "AI Creative Director," generating a detailed video brief that reinterprets the static image according to the user's creative vision.
5. **Generate Video with Veo 3:** The detailed creative brief is sent to Google's Veo 3 AI video generation model. The workflow initiates a long-running task to create the video.
6. **Retrieve the Final Video:** After a brief waiting period, the workflow polls the Veo 3 API to retrieve the finished video, converts it into a binary file, and makes it available for download directly from the n8n execution log.

**Key Requirements to Use This Template**
- **n8n Instance & Required Nodes:** An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.
- **Google Cloud Account:** A Google Cloud Project with the Vertex AI API enabled. You must have access to both the Gemini and Veo 3 models within your project. You will need a Gemini API Key and a Google OAuth2 Credential configured for the Vertex AI scope.
- **Airtable Account:** An Airtable base with a table set up to track the video projects. It should have columns for Image Prompt, Image (Attachment), Video (Attachment/URL), and Status.

**Setup Instructions**
1. **Airtable Configuration (Crucial):** In the Create a record, Get a record, and Update record nodes, connect your Airtable credentials and update the Base ID and Table ID to match your setup. In the Uploading Image in Airtable (HTTP Request) node, you must edit the URL and the "Authorization" header to include your Base ID, Table ID, and Personal Access Token.
2. **Google AI Configuration (Gemini & Veo):** In the Analyze image (Google Gemini) node, select your Gemini API credentials. In both the Generate Video Veo 3 and Get the the Video (HTTP Request) nodes, replace [Project ID] and [Location] in the URLs with your own Google Cloud Project ID and region (e.g., us-central1), and select your Google OAuth2 credentials for authentication.
3. **Customize Video Parameters (Optional):** In the Parse Request (Code) node, you can modify the JavaScript code to change video generation settings like aspectRatio, durationSeconds, and resolution (see the sketch after these setup instructions).
4. **Execute the Workflow:** Activate the workflow, then open the Form URL from the Prompt your Idea node to start the process.
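As a rough illustration of setup step 3, the settings assembled in a Parse Request-style Code node usually look something like the object below. The field names aspectRatio, durationSeconds, and resolution come from the description above; the surrounding structure, the `idea` field, and the default values are assumptions, so match them to the actual node in the template.

```javascript
// Hypothetical sketch of the Parse Request Code node: read the form submission
// and build the video-generation settings passed on to the Veo 3 request.
const form = $input.first().json;     // output of the form/upload nodes (assumed shape)

const videoSettings = {
  prompt: form.idea,                  // the creative brief text (assumed field name)
  aspectRatio: "16:9",                // e.g. "16:9", or "9:16" for vertical ads
  durationSeconds: 8,                 // clip length requested from Veo 3
  resolution: "1080p",                // output resolution
};

return [{ json: videoSettings }];
```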
**Sample Videos**

**Connect with us**
- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz

For custom workflow automation, click here: Get Started
by Sk developer
**🎨 AI Image Generator with Flux AI**
Generate realistic, high-quality images from text prompts using the Flux AI Text-to-Image Generator API via RapidAPI, and seamlessly store the results in Google Drive and log them in Google Sheets — all automated using n8n.

**🧠 What This Workflow Does**
This no-code automation enables you to:
- 🖋️ Enter a custom text prompt using a web form.
- 🖼️ Generate a photorealistic image using Flux AI's Text-to-Image Generator via RapidAPI.
- ☁️ Upload the image to Google Drive.
- 📊 Log the prompt and result in a Google Sheet.
- ⚠️ Capture and log errors in a fallback sheet.

**💡 Use Case**
Ideal for:
- Digital artists and marketers
- Social media managers
- Brand mockup creators
- Rapid concept prototyping
All without writing a single line of code.

**✅ Benefits**
- **No-code automation** for AI-generated images
- **Cloud storage** and structured logging
- **Error handling** built-in
- **Fast content creation** for design, branding, or concept testing
- **Powered by** the Flux AI Text-to-Image Generator API via **RapidAPI**

**🧩 Node-by-Node Breakdown**
1. 📝 **On Form Submission**: Accepts user input for a creative text prompt. 🔍 Example: "A silver can with vapor and blue lightning background." 💡 Benefit: No technical knowledge needed.
2. 🌐 **HTTP Request — Flux AI API**: Sends the prompt to the Flux AI Text-to-Image Generator API via RapidAPI. 📦 Returns an image encoded in base64. 💡 Benefit: Seamless integration with cutting-edge image generation.
3. 🧪 **Code Node — Base64 Decoder**: Converts the base64 image to a binary .jpg file (see the sketch at the end of this description). 💡 Benefit: Readies the image for upload/download/sharing.
4. 📁 **Google Drive**: Uploads the generated image to your Google Drive folder. 💡 Benefit: Secure, sharable cloud storage.
5. 📊 **Google Sheets — Success Log**: Appends a row with the original prompt, filename, and generation date. 💡 Benefit: Tracks the history of all generated images.
6. ⚠️ **IF Node — Error Detection**: Checks if the image generation failed. 💡 Benefit: Prevents the workflow from halting and routes to error logging.
7. 📉 **Google Sheets — Error Log**: Logs failed prompts and error messages. 💡 Benefit: Helps identify what went wrong (e.g. a malformed prompt).

**🛠️ Challenges Solved**

| Problem | How This Workflow Fixes It |
|---------|----------------------------|
| Manual prompt-based image generation is slow | Fully automated with Flux AI |
| No storage pipeline for generated images | Integrated with Google Drive |
| No audit trail for prompts/images | Logged into Google Sheets |
| Errors go unnoticed in image generation | Built-in error check and logging |
| Users lack API access or dev experience | Friendly web form UI |

**🔗 API Spotlight**
This workflow is powered by the Flux AI Text-to-Image Generator API — available exclusively on RapidAPI.
Why use this API?
- Ultra-fast text-to-image rendering
- High-resolution results
- Developer-friendly and cost-effective
- Great for branding, mockups, and visuals
We've integrated this API to make advanced image generation accessible with just a prompt — no AI or dev experience required.
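For reference, the Base64 Decoder step (node 3 above) can be sketched roughly as the n8n Code node below. The `image` field name is an assumption; use whatever key the RapidAPI response actually returns.

```javascript
// Hypothetical n8n Code node: turn a base64 string from the HTTP Request node
// into a binary .jpg item that the Google Drive node can upload.
const item = $input.first();
const base64Image = item.json.image;   // assumed field name in the RapidAPI response

const buffer = Buffer.from(base64Image, 'base64');
const binaryData = await this.helpers.prepareBinaryData(buffer, 'flux-image.jpg', 'image/jpeg');

return [{ json: { fileName: 'flux-image.jpg' }, binary: { data: binaryData } }];
```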
by Harshil Agrawal
This workflow updates your Twitter profile banner when you have a new follower.

To use this workflow:
- Configure Header Auth in the Fetch New Followers node to connect to your Twitter account.
- Update the URL of the template image in the Fetch BG node.
- Create and configure your Twitter OAuth 1.0 credentials in the last HTTP Request node (a rough standalone sketch of this request appears at the end of this description).

You can configure the size and position of the avatar images in the Edit Image nodes.

Check out this video to learn how to build it from scratch: How to automatically update your Twitter Profile Banner
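For context, a hypothetical standalone equivalent of that last HTTP Request is sketched below, using Twitter's v1.1 profile-banner endpoint and the oauth-1.0a npm package. Inside n8n the OAuth 1.0 signing is handled by the credential attached to the node, so this is only an illustration; also verify that the endpoint is still available under your current X API access tier.

```javascript
// Hypothetical standalone sketch: upload a composed banner image to Twitter v1.1.
const crypto = require('crypto');
const fs = require('fs');
const OAuth = require('oauth-1.0a');   // npm i oauth-1.0a

const oauth = OAuth({
  consumer: { key: process.env.TW_CONSUMER_KEY, secret: process.env.TW_CONSUMER_SECRET },
  signature_method: 'HMAC-SHA1',
  hash_function: (base, key) => crypto.createHmac('sha1', key).update(base).digest('base64'),
});
const token = { key: process.env.TW_ACCESS_TOKEN, secret: process.env.TW_ACCESS_SECRET };

const url = 'https://api.twitter.com/1.1/account/update_profile_banner.json';
const banner = fs.readFileSync('banner.png').toString('base64'); // output of the Edit Image nodes

// Form-encoded body params must be part of the OAuth 1.0a signature.
const authHeader = oauth.toHeader(oauth.authorize({ url, method: 'POST', data: { banner } }, token));

fetch(url, {
  method: 'POST',
  headers: { ...authHeader },
  body: new URLSearchParams({ banner }),
}).then((res) => console.log('Banner update status:', res.status));
```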
by PiAPI
**What this workflow does**
This workflow converts orthographic three-view drawings into 360° rotation videos through PiAPI's GPT-4o-Image and Kling APIs (unofficial). It can be paired with our 3D Figurine Orthographic Views workflow for generation.

**Who is the workflow for?**
- **Designers**: Turn inspiration into 3D designs and make them spin to reveal concrete details in an efficient way.
- **Online shoppers**: Show potential products from all angles in videos and preview the overall texture of models.
- **Content creators** (including toy bloggers): Make fun videos of collectible models.

**Step-by-step Instructions**
1. Fill in the basic params with the X-API-Key of your PiAPI account and the 3-view image URL (an illustrative request sketch appears at the end of this description).
2. Click Test workflow.
3. Get the final video in the last node.

**Use Case**
Input image and output video examples.
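Purely as an illustration of what the workflow submits on your behalf, a task request to PiAPI might look like the sketch below. The endpoint path, model id, and input field names are assumptions; follow PiAPI's API documentation and the template's preset parameters for the real values.

```javascript
// Purely illustrative: submit a Kling video task to PiAPI with your X-API-Key.
async function submitKlingTask() {
  const response = await fetch("https://api.piapi.ai/api/v1/task", {
    method: "POST",
    headers: {
      "x-api-key": process.env.PIAPI_API_KEY,   // the X-API-Key from your PiAPI account
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "kling",                            // assumed model id
      task_type: "video_generation",             // assumed task type
      input: {
        image_url: "https://example.com/three-view-figurine.png", // your 3-view image URL
        prompt: "360 degree turntable rotation of the figurine",
      },
    }),
  });
  return response.json();
}

submitKlingTask().then((task) => console.log(task)); // poll the returned task until the video is ready
```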
by Thong
**🧠 What This Workflow Does**
This n8n workflow allows you to upload a T-shirt mockup design (even if it's rough or outdated) and automatically turns it into refined, print-ready artwork using the power of AI. It starts with an image of a T-shirt design, analyzes it using OpenAI's vision model, and then generates a cleaner, upgraded prompt to be used with OpenAI's image generation API (gpt-image-1). The final output is a new T-shirt graphic optimized for printing on a solid black background, with no visible shirt or mockup framing.

**⚙️ How It Works**
1. **User sends a T-shirt mockup image link.** The workflow begins when the user drops an image link (T-shirt mockup) into a chat interface or input trigger.
2. **AI analyzes the image (OpenAI Vision).** Using OpenAI's GPT-4 vision capabilities, the workflow extracts the key design elements from the image: characters, text, layout; graphic style, composition; visual tone and focus.
3. **AI agent creates a refined prompt.** The extracted details are passed to an AI agent that:
   - Preserves the original layout and message
   - Enhances the visual composition and typography
   - Removes mockup elements like shirt collar, sleeves, and shadows
   - Locks the artwork on a pure black background only
   - Outputs a clean, artistic, JSON-safe one-line prompt for generation
4. **Text escaping for API compatibility.** A JavaScript function node escapes the prompt (quotes, slashes, line breaks) to make it safe for use in downstream JSON requests (see the sketch at the end of this description).
5. **Image generation via the gpt-image-1 API or Imagen 4 from Google.** The final prompt is sent to OpenAI's gpt-image-1 to generate brand-new artwork, ideal for direct printing on a black T-shirt.

**⚠️ Cost Notice for gpt-image-1 Usage**
This workflow uses OpenAI's gpt-image-1 model to generate high-quality T-shirt artwork from refined prompts. Please note that this model is a paid service, and each image generation request may cost approximately $0.25 per design, depending on resolution and usage. We strongly recommend that users review their OpenAI API usage plan and be mindful of costs when running this workflow, especially if generating in bulk or integrating into larger automation flows. You can monitor your usage at: https://platform.openai.com/docs/models/gpt-image-1

(Optional) You can send the result to Telegram, upload it to Notion, or store it in your design system.

**✅ Key Features**
- Works from any uploaded mockup image
- Converts design concepts into print-ready artwork prompts
- Avoids outputting shirt models, collars, or product mockups
- Optimized for a solid black background with no distractions
- Modular and easy to connect with file delivery or approval flows

**🚀 How to Use**
1. Import the .json workflow into n8n
2. Configure your OpenAI credentials for both the vision and image APIs
3. Trigger the flow by sending an image URL of a T-shirt mockup
4. Let the workflow generate and return a brand-new design from that concept
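A minimal sketch of the text-escaping function node described in step 4, assuming the prompt arrives in a field called `prompt` (the template's actual field names may differ):

```javascript
// Hypothetical n8n Code node: make the refined prompt JSON-safe before it is
// embedded in the body of the image-generation HTTP request.
const prompt = $input.first().json.prompt;   // assumed field name from the AI agent

// JSON.stringify escapes quotes, backslashes and newlines; slicing removes the
// surrounding double quotes so the value can be dropped into a JSON template.
const escapedPrompt = JSON.stringify(prompt).slice(1, -1);

return [{ json: { escapedPrompt } }];
```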
by Max Mitcham
Want to check out all my flows? Follow me on:
- https://maxmitcham.substack.com/
- https://www.linkedin.com/in/max-mitcham/

This automation flow is designed to generate comprehensive, research-backed lead magnet articles based on a user-submitted topic, conduct deep research across multiple sources, and automatically create a professional Google Doc ready for LinkedIn sharing.

**⚙️ How It Works (Step-by-Step):**

**📝 Chat Input (Entry Point)**
A user submits a topic through the chat interface:
- Topic for lead magnet content
- Target audience (automatically detected)
- Company context (when relevant)

**🔍 Query Builder Agent**
An AI agent refines the input by:
- Converting the topic into 5 targeted research queries
- Determining if the topic relates to *company for specialized research
- Using structured output parsing for consistent results (an illustrative example appears at the end of this description)

**📚 Research Leader Agent**
Conducts comprehensive research that:
- Uses the Perplexity API for real-time web research
- Integrates the *company knowledge base when relevant
- Creates a detailed table of contents with research insights
- Identifies key trends, expert opinions, and case studies

**📋 Project Planner Agent**
Structures the content by:
- Generating a professional title and subtitle
- Creating 8-10 logical chapter outlines
- Developing detailed writing prompts for each section
- Ensuring step-by-step actionable guidance

**✍️ Research Assistant Team**
Multiple AI agents write simultaneously:
- Each agent writes one chapter with proper citations
- Maintains a consistent voice across all sections
- Includes real-world examples and implementation steps
- Uses both web research and *company knowledge

**📝 Editor Agent**
Professional content polishing:
- Refines tone for authenticity and engagement
- Adds image placeholders where appropriate
- Ensures proper flow between chapters
- Optimizes for the LinkedIn lead magnet format

**📄 Google Docs Creation**
Automated document generation:
- Creates a new Google Doc with formatted content
- Sets proper sharing permissions (public link)
- Organizes it in the designated company folder
- Returns a shareable URL for immediate use

**🛠️ Tools Used:**
- n8n: Workflow orchestration platform
- Anthropic Claude: Primary AI model for content generation
- OpenRouter: Backup AI model options
- Perplexity API: Real-time research capabilities
- *Company Knowledge Hub: Internal documentation access
- Google Docs API: Document creation and formatting
- Google Drive API: File management and sharing

**📦 Key Features:**
- End-to-end automation from topic to published document
- Multi-agent approach ensures comprehensive coverage
- Real-time research with proper citations
- Company-specific knowledge integration
- Professional editing and formatting
- Automatic Google Docs creation with sharing
- Scalable content generation (3-5 minutes per article)

**🚀 Ideal Use Cases:**
- B2B companies building thought leadership content
- Sales teams creating industry-specific lead magnets
- Marketing departments scaling content production
- Consultants developing expertise-demonstrating resources
- SaaS companies creating feature-focused educational content
- Startups establishing market presence without content teams
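For illustration only, the structured output enforced on the Query Builder Agent might be shaped like the object below; the real schema in the template may differ, and every field name here is hypothetical.

```javascript
// Hypothetical example of the Query Builder Agent's parsed output:
// a flag for company-related topics plus the 5 targeted research queries.
const exampleQueryPlan = {
  is_company_related: false,
  queries: [
    "current adoption statistics for AI lead generation in B2B SaaS",
    "expert opinions on AI-assisted outbound prospecting",
    "case studies of companies scaling content with multi-agent LLM pipelines",
    "common objections to AI-generated thought leadership content",
    "best practices for citing sources in AI-written long-form articles",
  ],
};
```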
by Samir Saci
**Tags**: Ghost CMS, SEO Audit, Image Optimisation, Alt Text, Google Sheets, Automation

**Context**
Hi! I’m Samir — a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help companies and content creators use automation and analytics to improve visibility, enhance performance, and reduce manual work.

> Let’s use n8n to automate SEO audits to increase your traffic!

📬 For business inquiries, feel free to connect on LinkedIn.

**Who is this template for?**
This workflow is perfect for bloggers, marketers, or content teams using Ghost CMS who want to:
- Extract and review all images from articles
- Detect missing or short alt texts
- Check image file size and filename SEO compliance
- Push the audit results into a Google Sheet

**How does it work?**
This n8n workflow extracts all blog posts from Ghost CMS, scans the HTML to collect all embedded images, then evaluates each image for:
- ✅ Presence and length of alt text
- 📏 File size in kilobytes
- 🔤 Filename SEO quality (e.g. lowercase, hyphenated, no special chars)
All findings are written to Google Sheets for further analysis or manual cleanup.

**🧭 Workflow Steps:**
1. 🚀 Trigger the workflow manually or on a schedule
2. 📰 Extract blog post content from Ghost CMS
3. 🖼️ Parse all `<img>` tags with `src` and `alt` attributes
4. 📤 Store image metadata in a Google Sheet (step 1)
5. 🌐 Download each image using an HTTP request
6. 🧮 Extract file size, extension, and a filename SEO flag (a sketch of these checks appears at the end of this description)
7. 📄 Update the audit sheet with size and format insights

**What do I need to get started?**
This workflow requires:
- A Ghost Content API key
- A Google Sheet (to log audit results)
- No AI or external APIs required — works fully with built-in nodes

**Next Steps**
🗒️ Follow the sticky notes inside the workflow to:
- Plug in your Ghost blog credentials
- Select or create a Google Sheet
- Run the audit and start improving your SEO!

This template was built using n8n v1.93.0
Submitted: June 8, 2025
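A simplified sketch of the parsing and filename checks from steps 3 and 6, assuming the post HTML sits in an `html` field and using an arbitrary alt-text length threshold (the template's own nodes may implement this differently):

```javascript
// Hypothetical n8n Code node: extract <img> tags and flag SEO issues.
const html = $input.first().json.html;   // post HTML from the Ghost Content API (assumed field)

// Step 3: collect every <img> tag with its src and alt attributes.
const imgTags = html.match(/<img[^>]*>/g) ?? [];
const images = imgTags.map((tag) => ({
  src: (tag.match(/src="([^"]*)"/) ?? [])[1] ?? '',
  alt: (tag.match(/alt="([^"]*)"/) ?? [])[1] ?? '',
}));

// Step 6: flag filenames that are not lowercase, hyphenated, and free of special characters.
const isSeoFriendly = (src) => {
  const filename = src.split('/').pop().split('?')[0];
  return /^[a-z0-9-]+\.(jpg|jpeg|png|webp|gif|svg)$/.test(filename);
};

return images.map((img) => ({
  json: {
    src: img.src,
    alt: img.alt,
    altMissing: img.alt.trim().length === 0,
    altTooShort: img.alt.trim().length > 0 && img.alt.trim().length < 10, // illustrative threshold
    filenameSeoOk: isSeoFriendly(img.src),
  },
}));
```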
by Jacob
Unlock the full potential of your YouTube channel with our powerful integration that connects Google Sheets and DeepSeek AI — designed to skyrocket your video visibility and engagement without manual hassle.

**What this integration does for you:**
- Automates video data management by pulling your YouTube URLs straight from Google Sheets — no more copy-pasting or manual tracking.
- Extracts your current titles and descriptions directly from YouTube, giving you a clear starting point (illustrated in the sketch at the end of this description).
- Generates 3 high-impact, SEO-optimized titles plus 1 compelling, conversion-focused description — crafted by DeepSeek’s AI to grab attention and rank higher.
- Updates your Google Sheet automatically with the new optimized titles and descriptions — keeping all your video info in one place, ready to publish.

**Why it matters:**
In the crowded world of YouTube, the right title and description can make the difference between millions of views and being lost in the noise. This integration takes the guesswork out of optimization, saving you time and boosting your channel’s growth with proven AI-driven content.

**What you need:**
A Google Sheet with the columns:
- Url
- Keyword
- Status
- Old Title
- New Title
- Old Description
- New Description

My contact: jacobmarketingservice@gmail.com
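As a rough illustration of the "extract current titles and descriptions" step, the standalone sketch below uses the YouTube Data API v3 videos endpoint. The template itself may rely on n8n's built-in YouTube node instead, and the URL parsing and field handling here are assumptions.

```javascript
// Illustrative sketch: fetch the current title and description for one video URL.
async function fetchSnippet(videoUrl) {
  // Pull the 11-character video id out of a watch URL or youtu.be short link.
  const videoId = videoUrl.match(/(?:v=|youtu\.be\/)([\w-]{11})/)?.[1];

  const url = `https://www.googleapis.com/youtube/v3/videos?part=snippet&id=${videoId}&key=${process.env.YT_API_KEY}`;
  const { items } = await (await fetch(url)).json();

  return {
    oldTitle: items?.[0]?.snippet?.title,
    oldDescription: items?.[0]?.snippet?.description,
  };
}

fetchSnippet("https://www.youtube.com/watch?v=dQw4w9WgXcQ").then(console.log);
```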
by David Roberts
This workflow allows you to define multiple tickets/issues in a Notion page, then easily import them into Linear.

**Why is it useful?**
We use this workflow internally at n8n for collaboration between Product and Engineering teams:
- Engineering needs all work to be in our ticketing system (Linear) in order to keep track of it
- Product prefers to review features in Notion. This is because it can be used to dump all your thoughts and organise them into themes afterwards, plus it better supports rich content like videos

**Features**
- Supports rich formatting (bullets, images, videos, links, etc.)
- Keeps links between the Notion and Linear versions, in case you need to refer back
- Allows you to assign each issue to a team member in the Notion definition
- Avoids importing the same issues twice if you run it again on the same page (meaning you can add issues incrementally)

You can see an example of the required format of the Notion page here.
by PiAPI
**What's the workflow used for?**
Leverage this Kling API (unofficial) workflow provided by PiAPI to streamline virtual try-on video creation. This tool is designed for e-commerce platforms, fashion brands, content creators, and influencers. By uploading model and clothing images and linking a PiAPI account, users can swiftly generate a realistic video of the model sporting the outfit with a 360° turn, offering an immersive viewing experience.

**Step-by-step Instructions**
1. For basic settings of the virtual try-on, check the API doc for best practice.
2. Fill in the X-API-Key of your PiAPI account in the Preset Parameters node.
3. Upload the model photo and provide the target clothing image URLs.
4. Click Test Workflow to generate the virtual try-on image.
5. Get the video output in the final node.

**Param Settings**
- To change into a dress, input the model_input URL and the dress_input URL in the parameters.
- To change into separates, input the model_input URL, upper_input URL, and lower_input URL in Preset Parameters.
An illustrative sketch of these parameters appears at the end of this description.

**Use Case**
Input images and output video: the output demonstrates that the model is wearing the clothing from the specified image and showcases a rotating runway-style view. This workflow enables you to efficiently test garment-on-model presentation effects while reducing business model validation costs to a certain extent.
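A hypothetical sketch of the Preset Parameters is shown below. Only the parameter names mentioned above (model_input, dress_input, upper_input, lower_input) come from the template; the surrounding structure and field for the API key are illustrative.

```javascript
// Hypothetical shape of the Preset Parameters feeding the try-on request.
const presetParameters = {
  apiKey: process.env.PIAPI_API_KEY,   // the X-API-Key of your PiAPI account

  // Option A: one-piece garment (dress)
  model_input: "https://example.com/model-photo.jpg",
  dress_input: "https://example.com/dress.jpg",

  // Option B: separates (set these instead of dress_input)
  // upper_input: "https://example.com/top.jpg",
  // lower_input: "https://example.com/trousers.jpg",
};

return [{ json: presetParameters }];
```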