by Davide
This workflow automates the creation of short videos from multiple image references (up to 7 images) and their upload to TikTok and YouTube. It uses the "Vidu Reference to Video" model, a video generation API, to transform a user-provided prompt and image set into a consistent, AI-generated video. The process is initiated via a user-friendly web form.

### Advantages

- ✅ **Consistent Video Creation:** Uses multiple reference images to maintain subject consistency across frames.
- ✅ **Easy Input:** Just a simple form with prompt + image URLs.
- ✅ **Automation:** No manual waiting—the workflow checks status until the video is ready.
- ✅ **SEO Optimization:** Automatically generates a catchy, optimized YouTube title using AI.
- ✅ **Multi-Platform Publishing:** Uploads directly to Google Drive, YouTube, and TikTok in one flow.
- ✅ **Time Saving:** Removes the repetitive tasks of video generation, download, and manual uploading.
- ✅ **Scalable:** Can run periodically or on-demand, perfect for content creators and marketing teams.
- ✅ **UGC & Social Media Ready:** Designed for creating viral short videos optimized for platforms like TikTok and YouTube Shorts.

### How It Works

- **Form Trigger:** A user submits a web form with two key pieces of information: a text **Prompt** describing the desired video and a list of **Reference images** (URLs separated by commas or new lines).
- **Data Processing:** The workflow converts the submitted image URLs from a text string into a proper array for the AI API.
- **AI Video Generation:** The processed data (prompt and image array) is sent to the Fal.ai VIDU API endpoint (reference-to-video) to start the video generation job. This node returns a `request_id`.
- **Status Polling:** The workflow enters a loop where it periodically checks the status of the generation job using the `request_id`. It waits 60 seconds and then checks whether the status is "COMPLETED". If not, it waits and checks again.
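The Data Processing step above can be sketched as a small n8n Code-node helper. This is a hypothetical sketch, not the template's exact code; the input field name is an assumption.

```javascript
// Hypothetical sketch of the "Data Processing" step: turn a comma- or
// newline-separated string of image URLs into an array for the API.
function parseImageUrls(raw) {
  return raw
    .split(/[\n,]+/)          // split on commas or new lines
    .map((u) => u.trim())     // strip surrounding whitespace
    .filter((u) => u.length > 0)
    .slice(0, 7);             // the template supports up to 7 reference images

}

// Example: mixed comma- and newline-separated input
const urls = parseImageUrls("https://a.com/1.png, https://a.com/2.png\nhttps://a.com/3.png");
console.log(urls.length); // 3
```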
- **Result Retrieval:** Once the video is ready, the workflow fetches the URL of the generated video file.
- **Title Generation:** Simultaneously, the original user prompt is sent to an AI model (GPT-4o-mini via OpenRouter) to generate an optimized, engaging title for the social media post.
- **Upload & Distribution:** The video file is downloaded from the generated URL. A copy is saved to a specified Google Drive folder for storage. The video, along with the AI-generated title, is automatically uploaded to YouTube and TikTok via the Upload-Post.com API service.

### Set Up Steps

This workflow requires configuration and API keys from three external services to function correctly.

**Step 1: Configure Fal.ai for Video Generation**
- Create an account and obtain your API key.
- In the "Create Video" HTTP node, edit the "Header Auth" credentials and set the following values:
  - Name: `Authorization`
  - Value: `Key YOUR_FAL_API_KEY` (replace `YOUR_FAL_API_KEY` with your actual key)

**Step 2: Configure Upload-Post.com for Social Media Uploads**
- Get an API key from your Upload-Post "Manage Api Keys" dashboard (10 free uploads per month).
- In both the "HTTP Request" (YouTube) and "Upload on TikTok" nodes, edit their "Header Auth" credentials and set the following values:
  - Name: `Authorization`
  - Value: `Apikey YOUR_UPLOAD_POST_API_KEY` (replace `YOUR_UPLOAD_POST_API_KEY` with your actual key)
- **Crucial:** In the body parameters of both upload nodes, find the `user` field and replace `YOUR_USERNAME` with the exact name of the social media profile you configured on Upload-Post.com (e.g., `my_youtube_channel`).

**Step 3: Configure Google Drive (Optional Storage)**
- The "Upload Video" node is pre-configured to save the video to a Google Drive folder named "Fal.run".
- Ensure your Google Drive credentials in n8n are valid and that you have access to this folder, or change the `folderId` parameter to your desired destination.

**Step 4: Configure AI for Title Generation**
- The "Generate title" node uses OpenAI to access the gpt-5-mini model.

Need help customizing?
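The header format from Step 1 can be illustrated with a small request builder. This is a hedged sketch: only the `Key ...` header value and the reference-to-video endpoint name come from this template; the body field name is an assumption, so check fal.ai's API reference for the real schema.

```javascript
// Hedged sketch of the "Create Video" request configuration.
// The `reference_image_urls` field name is an assumption for illustration.
function buildCreateVideoRequest(apiKey, prompt, imageUrls) {
  return {
    method: "POST",
    headers: {
      Authorization: `Key ${apiKey}`, // matches the Header Auth credential in Step 1
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt, reference_image_urls: imageUrls }),
  };
}
```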
Contact me for consulting and support, or add me on LinkedIn.
by Thiago Vazzoler Loureiro
### Description

Automates the forwarding of messages from WhatsApp (via Evolution API) to Chatwoot, enabling seamless integration between external WhatsApp users and internal Chatwoot agents. It supports both text and media messages, ensuring that customer conversations are centralized and accessible for support teams.

### What Problem Does This Solve?

Managing conversations across multiple platforms can lead to fragmented support and lost context. This subworkflow bridges the gap between WhatsApp and Chatwoot, automatically forwarding messages received via the Evolution API to a Chatwoot inbox. It simplifies communication flow, centralizes conversations, and enhances the support team's productivity.

### Features

- Support for plain text messages
- Support for media messages: images, videos, documents, and audio
- Automatic media upload to Chatwoot with proper attachment rendering
- Automatic contact association using the WhatsApp number and the Chatwoot API
- Designed to work with Evolution API webhooks or any message source

### Prerequisites

Before using this automation, make sure you have:

- Evolution API credentials with an incoming message webhook configured
- A Chatwoot instance with an access token and API endpoint
- An existing Chatwoot inbox (preferably an API channel)
- A configured HTTP Request node in n8n for Chatwoot API calls

### Suggested Usage

This subworkflow should be attached to a parent workflow that receives WhatsApp messages via the Evolution API webhook. Ideal for:

- Centralized customer service operations
- WhatsApp-to-CRM/chat routing
- Hybrid automation workflows where human agents need to reply from Chatwoot

It ensures that all incoming WhatsApp messages are properly converted and forwarded to Chatwoot, preserving message content and structure.
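The HTTP Request node that forwards a text message can be sketched as a request builder. This is a hedged sketch based on Chatwoot's REST API conventions (`api_access_token` header, per-conversation messages endpoint); the IDs are placeholders, so verify the path against your Chatwoot version.

```javascript
// Hedged sketch of the Chatwoot "create message" request used to forward
// a WhatsApp text. Account/conversation IDs are placeholders for illustration.
function buildChatwootMessageRequest(baseUrl, accountId, conversationId, token, text) {
  return {
    url: `${baseUrl}/api/v1/accounts/${accountId}/conversations/${conversationId}/messages`,
    method: "POST",
    headers: {
      api_access_token: token, // Chatwoot access token from your profile settings
      "Content-Type": "application/json",
    },
    // "incoming" marks the message as sent by the contact, not the agent
    body: JSON.stringify({ content: text, message_type: "incoming" }),
  };
}
```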
by gotoHuman
Collaborate with an AI Agent on a joint document, e.g. for creating your content marketing strategy, a sales plan, project status updates, or a market analysis. The AI Agent generates markdown text that you can review and edit in gotoHuman, and only then is the existing Google Doc updated. In this example we use AI to update our company's content strategy for the next quarter.

### How It Works

The AI Agent has access to other documents that provide enough context to write the content strategy. We ask it to generate the text in markdown format. To ensure our strategy document is not changed without our approval, we request a human review using gotoHuman. There the markdown content can be edited and properly previewed. Our workflow resumes once the review is completed. We check if the content was approved and then write the (potentially edited) markdown to our Google Docs file via the Google Drive node.

### How to set up

- Most importantly, install the verified gotoHuman node before importing this template! (Just add the node to a blank canvas before importing. Works with n8n cloud and self-hosted.)
- Set up your credentials for gotoHuman, OpenAI, and Google Docs/Drive
- In gotoHuman, select and create the pre-built review template "Strategy agent" or import the ID: F4sbcPEpyhNKBKbG9C1d
- Select this template in the gotoHuman node

### Requirements

You need accounts for:

- gotoHuman (human supervision)
- OpenAI (doc writing)
- Google Docs/Drive

### How to customize

- Let the workflow run on a schedule, or create and connect a manual trigger in gotoHuman that lets you capture additional human input to feed your agent
- Provide the agent with more context to write the content strategy
- Use the gotoHuman response (or a Google Drive file change trigger) to run additional AI agents that can execute on the new strategy
by InfyOm Technologies
### ✅ What problem does this workflow solve?

Sending a plain PDF resume doesn't stand out anymore. This workflow allows candidates to convert their resume and photo into a personalized video resume. Recruiters get a more engaging first impression, while candidates showcase their profile in a modern, impactful way.

### ⚙️ What does this workflow do?

- Presents a form for uploading: 📄 a resume (PDF) and 🖼 a photo (headshot)
- Extracts key details from the resume (education, experience, skills)
- Detects gender from the photo to choose a suitable voice/avatar
- Generates a script (spoken resume summary) based on the extracted information
- Uploads the photo to HeyGen to create an avatar
- Requests video generation on HeyGen using the avatar photo, gender-specific settings, and the generated script as narration
- Monitors video generation status until completion
- Stores the final video URL in a Google Sheet for easy access and tracking

### 🔧 Setup Instructions

**Google Services**
- Connect Google Sheets to n8n to store records with: candidate name, resume link, video link

**HeyGen Setup**
- Get an API key from HeyGen.
- Configure the avatar upload endpoint (image upload) and the video generation endpoint (image ID + script).

**Form Setup**
- Use the n8n Form Trigger to allow candidates to upload a resume (PDF) and a photo (JPEG/PNG).

### 🧠 How it Works – Step-by-Step

1. **Candidate Submission:** A candidate fills out a form and uploads a resume (PDF) and a photo.
2. **Extract Resume Data:** The resume PDF is processed using OCR/AI to extract the name, experience, skills, and education highlights.
3. **Gender Detection:** The uploaded photo is analyzed to detect gender (used for voice/avatar selection).
4. **Script Generation:** Based on the extracted resume info, a concise, natural script is generated automatically.
5. **Avatar Upload & Video Creation:** The photo is uploaded to HeyGen to create a custom avatar. A video generation request is made using the script, the avatar (image ID), and a matching voice for the detected gender.
6. **Video Status Monitoring:** The workflow polls HeyGen's API until the video is ready.
7. **Save Final Video URL:** Once complete, the video link is added to a Google Sheet alongside the candidate's details.

### 👤 Who can use this?

This workflow is ideal for:

- 🧑‍🎓 Students and job seekers looking to stand out
- 🧑‍💼 Recruitment agencies offering modern resume services
- 🏢 HR teams wanting engaging candidate submissions
- 🎥 Portfolio builders for professionals

### 🚀 Impact

Instead of a static PDF, you can now send a dynamic video resume that captures attention, adds personality, and makes a lasting impression.
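The status-monitoring step is a standard poll-until-ready loop. Here is a hedged sketch of that pattern with the status check injected as a function, since the real endpoint path and response field names depend on HeyGen's current API; `"completed"`/`"failed"` status values are assumptions.

```javascript
// Generic poll-until-ready loop for the video status step. `checkStatus`
// stands in for the HTTP Request node that queries HeyGen's status endpoint
// and is assumed to resolve to an object like { status, video_url }.
async function waitForVideo(checkStatus, { intervalMs = 30000, maxTries = 20 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const data = await checkStatus();
    if (data.status === "completed") return data.video_url;
    if (data.status === "failed") throw new Error("video generation failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // wait, then re-check
  }
  throw new Error("timed out waiting for video");
}
```

Bounding the loop with `maxTries` avoids an endless poll if a generation job silently stalls.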
by Khairul Muhtadin
The Prompt converter workflow tackles the challenge of turning your natural language video ideas into perfectly formatted JSON prompts tailored for Veo 3 video generation. By leveraging Langchain AI nodes and Google Gemini, this workflow automates and refines your input to help you create high-quality videos faster and with more precision—think of it as your personal video prompt translator that speaks fluent cinematic!

### 💡 Why Use Prompt Converter?

- **Save time:** Automate converting complex video prompts into structured JSON, cutting manual formatting headaches and boosting productivity.
- **Avoid guesswork:** Eliminate unclear video prompt details by generating detailed, cinematic descriptions that align perfectly with Veo 3 specs.
- **Improve output quality:** Optimize every parameter for Veo 3's video generation model to get realistic and stunning results every time.
- **Gain a creative edge:** Turn vague ideas into vivid video concepts with AI-powered enhancement—your video project's secret weapon.

### ⚡ Perfect For

- **Video creators:** Content developers wanting quick, precise video prompt formatting without coding hassles.
- **AI enthusiasts:** Developers and hobbyists exploring Langchain and Google Gemini for media generation.
- **Marketing teams:** Professionals creating video ads or visuals who need consistent prompt structuring that saves time.

### 🔧 How It Works

- ⏱ **Trigger:** A user submits a free-text prompt via message or webhook.
- 📎 **Process:** The text goes through an AI model that understands and reworks it into detailed JSON parameters tailored for Veo 3.
- 🤖 **Smart Logic:** Langchain nodes parse and optimize the prompt with cinematic details, set reasonable defaults, and structure the data precisely.
- 💌 **Output:** The refined JSON prompt is sent to Google Gemini for video generation with optimized settings.
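To make the "detailed JSON parameters" concrete, here is an illustrative shape for the converter's output. Every field name below is a hypothetical example: the real schema lives in the Prompt converter node's template, not in any Veo 3 documentation this listing cites.

```javascript
// Illustrative only: one possible structured-prompt shape with cinematic
// defaults applied by the "Smart Logic" step. All field names are examples.
function toVeoPrompt(userText) {
  return {
    description: userText,                 // the user's free-text idea
    style: "cinematic",                    // default cinematic treatment
    camera: { movement: "slow dolly-in", angle: "eye level" },
    lighting: "golden hour",
    duration_seconds: 8,
    aspect_ratio: "16:9",
  };
}
```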
### 🔐 Quick Setup

1. Import the JSON file to your n8n instance
2. Add credentials: Azure OpenAI, Gemini API, OpenRouter API
3. Customize: adjust prompt templates or default parameters in the Prompt converter node
4. Test: run your workflow with sample text prompts to see videos come to life

### 🧩 You'll Need

- An active n8n instance
- Azure OpenAI API
- Gemini API key
- OpenRouter API (alternative AI option)

### 🛠️ Level Up Ideas

- Add integration with video hosting platforms to auto-upload generated videos

### 🧠 Nodes Used

- **Prompt Input** (Chat Trigger)
- **OpenAI** (Azure OpenAI GPT model)
- **Alternative** (OpenRouter API)
- **Prompt converter** (Langchain chain LLM for JSON conversion)
- **JSON parser** (structured output extraction)
- **Generate a video** (Google Gemini video generation)

Made by: Khaisa Studio
Tags: video generation, AI, Langchain, automation, Google Gemini
Category: Video Production
Need custom work? Contact me
by Daniel
Harness OpenAI's Sora 2 for instant video creation from text or images using fal.ai's API—powered by GPT-5 for refined prompts that ensure cinematic quality. This template processes form submissions, intelligently routes to text-to-video (with mandatory prompt enhancement) or image-to-video modes, and polls for completion before redirecting to your generated clip.

### 📋 What This Template Does

Users submit prompts, aspect ratios (9:16 or 16:9), models (sora-2 or pro), durations (4s, 8s, or 12s), and optional images via a web form. For text-to-video, GPT-5 automatically refines the prompt for optimal Sora 2 results; image mode uses the raw input. It calls one of four fal.ai endpoints (text-to-video, text-to-video/pro, image-to-video, image-to-video/pro), then loops every 60s to check status until the video is ready.

- Handles dual modes: text (with GPT-5 enhancement) or image-seeded generation
- Supports pro upgrades for higher fidelity and longer clips
- Auto-uploads images to a temp host and polls asynchronously for hands-free results
- Redirects directly to the final video URL on completion

### 🔧 Prerequisites

- n8n instance with HTTP Request and LangChain nodes enabled
- fal.ai account for Sora 2 API access
- OpenAI account for GPT-5 prompt refinement

### 🔑 Required Credentials

**fal.ai API Setup**
1. Sign up at fal.ai and navigate to Dashboard → API Keys
2. Generate a new key with "sora-2" permissions (full access recommended)
3. In n8n, create a "Header Auth" credential: name it "fal.ai", set Header Name to "Authorization", Value to "Key [Your API Key]"

**OpenAI API Setup**
1. Log in at platform.openai.com → API Keys (top-right profile menu)
2. Click "Create new secret key" and copy it (store securely)
3. In n8n, add an "OpenAI API" credential: paste the key, select the GPT-5 model in the LLM node

### ⚙️ Configuration Steps

1. Import the workflow JSON into your n8n instance via Settings → Import from File
2. Assign fal.ai and OpenAI credentials to the relevant HTTP Request and LLM nodes
3. Activate the workflow—the form URL auto-generates in the trigger node
4. Test by submitting a sample prompt (e.g., "A cat chasing a laser"); monitor executions for video output
5. Adjust the polling wait (60s node) for longer generations if needed

### 🎯 Use Cases

- **Social Media Teams:** Generate 9:16 vertical Reels from text ideas, like quick product animations enhanced by GPT-5 for professional polish
- **Content Marketers:** Animate uploaded images into 8s promo clips, e.g., turning a static ad graphic into a dynamic story for email campaigns
- **Educators and Trainers:** Create 4s explainer videos from outlines, such as historical reenactments, using pro mode for detailed visuals
- **App Developers:** Embed as a backend service to process user prompts into Sora 2 videos on-demand for creative tools

### ⚠️ Troubleshooting

- **API quota exceeded:** Check the fal.ai dashboard for usage limits; upgrade to the pro tier or extend polling waits
- **Prompt refinement fails:** Ensure the GPT-5 credential is set and the output matches the JSON schema—test the LLM node independently
- **Image upload errors:** Confirm the file is a JPG/PNG under 10MB; verify the tmpfiles.org endpoint with a manual curl test
- **Endless polling loop:** Add an IF node after 10 checks to timeout; increase the wait to 120s for 12s pro generations
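The four-way routing between endpoints can be sketched as a small selector. The base URL and the `"pro"` model value are assumptions for illustration; only the four endpoint suffixes come from this template's description.

```javascript
// Sketch of the endpoint routing: image presence picks the mode, the model
// choice picks the /pro variant. Base URL is an assumption, not verified.
function pickEndpoint({ hasImage, model }) {
  const mode = hasImage ? "image-to-video" : "text-to-video";
  const suffix = model === "pro" ? `${mode}/pro` : mode;
  return `https://queue.fal.run/fal-ai/sora-2/${suffix}`;
}
```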
by mike
This is an example of how you can make Merge by Key work. The “Data 1” and “Data 2” nodes simply provide mock data; you can replace them with your own data sources. The “Convert Data” nodes are the important part: they make sure that the different array items are actually separate items in n8n. After that, the Merge node combines the two streams into the merged data.
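A "Convert Data" node typically looks like the sketch below. The exact shape of the mock data (`items[0].json.data` holding the array) is an assumption for illustration: the point is that returning an array of `{ json: ... }` objects makes n8n treat each row as its own item, which Merge by Key needs in order to match rows individually.

```javascript
// Assumed Code-node sketch: fan an array stored inside one item's JSON
// out into separate n8n items, one per row.
function convertData(items) {
  return items[0].json.data.map((row) => ({ json: row }));
}

// Example: one item wrapping three rows becomes three items
const out = convertData([{ json: { data: [{ id: 1 }, { id: 2 }, { id: 3 }] } }]);
console.log(out.length); // 3
```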
by GiovanniSegar
Super simple workflow to convert image URLs to an uploaded attachment in Airtable. You'll need to adjust the field names to match your specific data, including in the filter formula where it says "Cover image URL". Just replace that with the field name where you are storing the image URL.
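The core of the Airtable update is that attachment fields accept an array of objects with a `url` key, which Airtable downloads and re-hosts as an uploaded attachment. A hedged sketch of the record payload, with example field names you would swap for your own:

```javascript
// Hedged sketch of the Airtable record update. "Cover image" is an example
// attachment field name — replace it with the field in your own base.
function buildAttachmentUpdate(recordId, imageUrl) {
  return {
    id: recordId,
    fields: {
      // Airtable fetches the URL and stores its own hosted copy
      "Cover image": [{ url: imageUrl }],
    },
  };
}
```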
by Automate With Marc
🎥 Telegram Image-to-Video Generator Agent (Veo3 / Seedance Integration)

⚠️ This template uses [community nodes] and some credential-based HTTP API calls (e.g. Seedance/Wavespeed). Ensure proper credentials are configured before running.

🛠️ In the accompanying video tutorial, this logic is built as two separate workflows:

1. Telegram → Image Upload + Prompt Agent
2. Prompt Output → Video Generation via API

Watch the full video tutorial: https://youtu.be/iaZHef5bZAc&list=PL05w1TE8X3baEGOktlXtRxsztOjeOb8Vg&index=1

### ✨ What This Workflow Does

This powerful automation allows you to generate short-form videos from a Telegram image input and user prompt — perfect for repurposing content into engaging reels. From the moment a user sends a photo with a caption to your Telegram bot, this n8n workflow:

- 📸 Captures the image and saves it to Google Drive
- 🧠 Uses an AI Agent (via LangChain + OpenAI) to craft a Seedance/Veo3-compatible video prompt
- 📑 Logs the interaction to a Google Sheet
- 🎞️ Sends the prompt + image to the Seedance (Wavespeed) API to generate a video
- 🚀 Sends the resulting video back to the user on Telegram — fully automated

### 🔗 How It Works (Step-by-Step)

1. **Telegram Bot Trigger:** Listens for incoming images and captions
2. **Conditional Logic:** Filters out invalid inputs
3. **AI Agent (LangChain):** Uses OpenAI GPT to generate a video prompt and attach the most recent image URL (from the Google Sheet)
4. **Google Drive Upload:** Saves the Telegram image and logs the share link
5. **Google Sheets Logging:** Appends a new row with the date + file link
6. **Wavespeed (Seedance/Veo3) API:** Calls the /bytedance/seedance-v1-pro-i2v-480p endpoint with the image and prompt
7. **Video Polling & Output:** Waits for generation completion and sends the final video file back to the Telegram user

### 🛠️ Tools & APIs Used

- Telegram Bot (trigger + video reply)
- LangChain Agent node
- OpenAI GPT-4.1-mini for prompt generation
- Simple Memory & Tools (Google Sheets)
- Google Drive (image upload)
- Google Sheets (logs prompts + image URLs)
- Wavespeed / Seedance API (image-to-video generation)

### 🧩 Requirements

Before running this workflow:

- ✅ Set up a Telegram Bot and configure credentials
- ✅ Connect your Google Drive and Google Sheets credentials
- ✅ Sign up for Wavespeed / Seedance and generate an API key
- ✅ Replace placeholder values in: the HTTP Request nodes, the Google Drive folder ID, and the Google Sheet document ID

### 📦 Suggested Use Cases

- Generate short-form videos from image ideas
- Reformat static images into dynamic reels
- Repurpose visual content for TikTok/Instagram
by Muhammad Farooq Iqbal
Transform any product image into engaging UGC (User-Generated Content) videos and images using AI automation. This comprehensive workflow analyzes uploaded images via Telegram, generates realistic product images, and creates authentic UGC-style videos with multiple scenes.

### Key Features

- 📱 **Telegram Integration:** Upload images directly via a Telegram bot
- 🔍 **AI Image Analysis:** Automatically analyzes and describes uploaded images using GPT-4 Vision
- 🎨 **Smart Image Generation:** Creates realistic product images using Fal.ai's nano-banana model with reference images
- 🎬 **UGC Video Creation:** Generates 3-scene UGC-style videos using KIE.ai's Veo3 model
- 📹 **Video Compilation:** Automatically combines multiple video scenes into a final output
- 📤 **Instant Delivery:** Sends both generated images and final videos back to Telegram

### Perfect For

- E-commerce businesses creating authentic product content
- Social media marketers needing UGC-style content
- Influencers and content creators
- Marketing agencies automating content production
- Anyone looking to scale UGC content creation

### What It Does

1. Receives product images via Telegram
2. Analyzes image content with AI vision
3. Generates realistic product images with UGC styling
4. Creates 3-scene video prompts (Hook → Product → CTA)
5. Generates individual video scenes
6. Combines scenes into the final UGC video
7. Delivers both image and video results

### Technical Stack

- OpenAI GPT-4 Vision for image analysis
- Fal.ai for image generation and video merging
- KIE.ai Veo3 for video generation
- Telegram as the input/output interface

Ready to automate your UGC content creation? This workflow handles everything from image analysis to final video delivery!
by Dhruv Dalsaniya
This workflow is designed for e-commerce, marketing teams, or creators who want to automate the production of high-quality, AI-generated product visuals and ad creatives.

Here is what the workflow does:

- It accepts a product description and other creative inputs through a web form.
- It uses AI to transform your text input into a detailed, creative prompt. This prompt is then used to generate a product image.
- The workflow analyzes the generated image and creates a new prompt to generate a second image that includes a model, adding a human element to the visual.
- A final prompt is created from the model image to generate a short, cinematic video.
- All generated assets (images and video) are automatically uploaded to your specified hosting platform, providing you with direct URLs for immediate use.

This template is an efficient solution for scaling your content creation efforts, reducing time spent on manual design, and producing a consistent stream of visually engaging content for your online store, social media, and advertising campaigns.

### Prerequisites

- **OpenRouter Account:** Required for the AI agents that generate image and video prompts.
- **GOAPI Account:** Used for the final video generation process.
- **Media Hosting Platform:** A self-hosted service like MediaUpload, or any alternative like Google Drive or a similar service that can provide a direct URL for uploaded images and videos. This is essential for passing the visuals between different steps of the workflow.
by Snehasish Konger
### How it works

This template takes approved Notion pages and syncs them to a Webflow CMS collection as draft items. It reads pages marked Status = "Ready for publish" in a specific Notion database/project, merges JSON content stored across page blocks into a single object, then either creates a new CMS item or updates the existing one by name. On success it sets the Notion page to "5. Done"; on failure it switches the page to "On Hold" for review.

### Step-by-step

1. **Manual Trigger:** You start the run with "When clicking ‘Execute workflow’".
2. **Get Notion Pages** (Notion → Database: Tech Content Tasks): Pull all pages with Status = "Ready for publish" scoped to the target Project.
3. **Loop Over Items** (Split In Batches): Process one Notion page at a time.
4. **Code (Pass-through):** Expose page fields (e.g., name, id, url, sector) for downstream nodes.
5. **Get Notion Block (children):** Fetch all blocks under the page id.
6. **Merge Content (Code):** Concatenate code-block fragments, parse them into one mergedContent JSON, and attach the page metadata.
7. **Get Webflow Items (HTTP GET):** List items in the target Webflow collection to see if an item with the same name already exists.
8. **Update or Create (Switch):**
   - No match: **Create Webflow Item** (POST) with isDraft: true, mapping all fieldData (e.g., category titles, meta title, excerpt, hero copy/image, benefits, problem pointers, FAQ, ROI).
   - Match: **Update Webflow Item (Draft)** (PATCH) for that id. Keep the existing slug, write the latest fieldData, and leave isDraft: true.
9. **Write Back Status (Notion):** Success path → set Status = "5. Done". Error path → set Status = "On Hold".
10. **Log Submission (Code):** Log a compact object with status, notionPageId, webflowItemId, timestamp, and action.
11. **Wait → Loop:** Short pause, then continue with the next page.

### Tools integration

- **Notion** — source database and page blocks for approved content.
- **Webflow CMS API** — destination collection; items created/updated as **drafts**.
- **n8n Code** — JSON merge and lightweight logging.
- **Split In Batches + Wait** — controlled, item-wise processing.

Want hands-free publishing? Add a Cron trigger before step 2 to run on a schedule.
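The "Merge Content" step can be sketched as follows. The block shape mirrors Notion's block API (code blocks carry `rich_text` fragments with `plain_text`), but the exact helper and the `pageMeta` fields are illustrative, not the template's literal code.

```javascript
// Sketch of the "Merge Content" Code node: join the text of every code block
// under the page and parse the concatenation as one JSON object.
function mergeContent(blocks, pageMeta) {
  const fragments = blocks
    .filter((b) => b.type === "code")
    .map((b) => b.code.rich_text.map((t) => t.plain_text).join(""));
  return { ...pageMeta, mergedContent: JSON.parse(fragments.join("")) };
}
```

Concatenating before parsing is what lets one JSON object span several Notion code blocks (Notion caps the length of a single block).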