by Alfred Nutile
## How it works

This workflow provides a streamlined process for uploading files to Digital Ocean Spaces and making them publicly accessible. The process happens in three main steps:

1. The user submits the form with a file (in my case, images I use in my SEO tags).
2. The file is automatically uploaded to Digital Ocean Spaces using S3-compatible storage.
3. A form completion confirmation is provided.

## Setup steps

Initial setup typically takes 5-10 minutes:

- Configure your Digital Ocean Spaces credentials and bucket settings.
- Test the upload functionality with a small sample file.
- Verify that public access permissions are working as expected.

## Important notes

- Credentials are tricky: check the screenshot above for how I set the URL, bucket, etc. I am just using the S3 node.
- Set the ACL as seen below.

## Troubleshooting

- The bucket name might be incorrect.
- The region might be wrong.
- Check Space permissions if uploads fail.
- Verify that API credentials are correctly configured.

You can see a video here (live in 24 hours): https://youtu.be/pYOpy3Ntt1o
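For reference, the same S3-compatible upload the S3 node performs can be sketched outside n8n with boto3. The region, bucket, and credential values below are placeholders, and the `public-read` ACL mirrors the ACL setting mentioned above:

```python
def spaces_endpoint(region: str) -> str:
    """Digital Ocean Spaces exposes an S3-compatible endpoint per region."""
    return f"https://{region}.digitaloceanspaces.com"

def public_url(bucket: str, region: str, key: str) -> str:
    """Public objects are served from the bucket's subdomain."""
    return f"https://{bucket}.{region}.digitaloceanspaces.com/{key}"

def upload_public(path: str, bucket: str, key: str, region: str = "nyc3") -> str:
    """Upload a file with a public-read ACL, as the S3 node is configured to do."""
    import boto3  # pip install boto3

    client = boto3.client(
        "s3",
        endpoint_url=spaces_endpoint(region),
        aws_access_key_id="YOUR_SPACES_KEY",         # placeholder
        aws_secret_access_key="YOUR_SPACES_SECRET",  # placeholder
    )
    client.upload_file(path, bucket, key, ExtraArgs={"ACL": "public-read"})
    return public_url(bucket, region, key)
```

A quick `curl -I` on the returned URL should come back 200 once the ACL is applied; a 403 usually points at the ACL or Space permissions, matching the troubleshooting list above.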
by VEED
# Generate social videos with AI avatars using VEED and Claude

## Overview

This n8n workflow automatically generates TikTok/Reels-ready talking-head videos from scratch. You provide a topic and an intention, and the workflow handles everything: scriptwriting, avatar generation, voiceover creation, and video rendering.

**Output:** Vertical (9:16) AI-generated videos with lip-synced avatars, ready for social media posting.

## What It Does

Topic + Intention → Claude writes script → OpenAI generates avatar → OpenAI creates voiceover → VEED renders video → Saved to Google Drive + logged to Sheets

## Pipeline Breakdown

| Step | Tool | What Happens |
|------|------|--------------|
| 1. Script Generation | Claude Sonnet 4 | Creates hook, script (30-45 sec), caption, and image prompt based on your topic and intention |
| 2. Avatar Generation | OpenAI gpt-image-1 | Generates photorealistic portrait image (1024×1536) |
| 3. Voiceover | OpenAI TTS-1-HD | Converts script to natural speech (Nova voice) |
| 4. Video Rendering | VEED Fabric 1.0 | Lip-syncs avatar to audio, creates final video |
| 5. Storage | Google Drive | Uploads final MP4 |
| 6. Logging | Google Sheets | Records all metadata (script, caption, URLs, timestamps) |

## Required Connections

### API Keys (entered in Configuration node)

| Service | Key Type | Where to Get |
|---------|----------|--------------|
| Anthropic | API Key | https://console.anthropic.com/settings/keys |
| OpenAI | API Key | https://platform.openai.com/api-keys |

> ⚠️ OpenAI Note: gpt-image-1 requires organization verification. Go to https://platform.openai.com/settings/organization/general to verify.
### n8n Credentials (connect in n8n)

| Node | Credential Type | Purpose |
|------|-----------------|---------|
| Generate Video (VEED) | FAL.ai API | VEED video rendering |
| Upload to Drive | Google Drive OAuth2 | Store final videos |
| Log to Sheets | Google Sheets OAuth2 | Track all generated content |

## Configuration Options

Edit the ⚙️ Workflow Configuration node to customize. The configuration uses a JSON format:

```json
{
  "topic": "AI video creation tools",
  "intention": "informative",
  "brand_name": "YOUR_BRAND_NAME",
  "target_audience": "content creators and marketers",
  "trending_hashtags": "#AIvideo #ContentCreation #VideoMarketing #AItools #TikTokTips",
  "num_videos": 1,
  "anthropic_api_key": "YOUR_ANTHROPIC_API_KEY",
  "openai_api_key": "YOUR_OPENAI_API_KEY",
  "video_resolution": "720p",
  "video_aspect_ratio": "9:16",
  "custom_avatar_description": "",
  "custom_script": ""
}
```

### Configuration Fields Explained

| Field | Required | Description |
|-------|----------|-------------|
| topic | ✅ | The subject of your video (e.g., "AI productivity tools") |
| intention | ✅ | Content style: informative, lead_generation, or disruption |
| brand_name | ✅ | Your brand/product name to mention |
| target_audience | ✅ | Who you're creating content for |
| trending_hashtags | ✅ | Hashtags to include in the caption |
| num_videos | ✅ | How many videos to generate (1-5 recommended) |
| anthropic_api_key | ✅ | Your Anthropic API key |
| openai_api_key | ✅ | Your OpenAI API key |
| video_resolution | ✅ | Video quality: 720p or 1080p |
| video_aspect_ratio | ✅ | Aspect ratio: 9:16 (vertical) or 16:9 (horizontal) |
| custom_avatar_description | ❌ | Optional: Describe your avatar (leave empty for AI-generated) |
| custom_script | ❌ | Optional: Your own script (leave empty for AI-generated) |

### Intention Types

| Intention | Content Style | Best For |
|-----------|---------------|----------|
| informative | Educational, value-driven, builds trust | Thought leadership, tutorials |
| lead_generation | Creates curiosity, soft CTA | Product awareness, funnels |
| disruption | Bold, provocative, scroll-stopping | Viral potential, brand awareness |

## Custom Avatar & Script Options

The workflow supports flexible content generation: you can let Claude generate everything, or provide your own inputs.

### Custom Avatar Description

Leave custom_avatar_description empty to let Claude decide, or provide your own:

```json
"custom_avatar_description": "a female influencer in her 30s, with a coworking space in the background, attractive but charismatic"
```

Examples:

- "a woman in her 20s with gym clothes"
- "a bearded man in his 30s wearing a hoodie"
- "a professional woman with glasses in business casual"

### Custom Script

Leave custom_script empty to let Claude write it, or provide your own:

```json
"custom_script": "This is my custom script. VEED is a great platform for creating videos like this. You can try it too!"
```

Guidelines for custom scripts:

- Keep it 30-45 seconds when read aloud
- Maximum ~450 characters
- Avoid special characters for TTS compatibility
- Write naturally, as if speaking

### Behavior Matrix

| custom_avatar_description | custom_script | What Claude Generates |
|---------------------------|---------------|----------------------|
| Empty | Empty | Avatar + Script + Caption |
| Provided | Empty | Script + Caption |
| Empty | Provided | Avatar + Caption |
| Provided | Provided | Caption only |

## Content Angles (auto-rotated)

When generating multiple videos, the workflow automatically varies the approach:

| # | Angle | Hook Style |
|---|-------|------------|
| 1 | Problem-solution | Opens with a question |
| 2 | Myth-busting | Opens with controversial statement |
| 3 | Quick-tip | Opens with a number/statistic |
| 4 | Before-after | Opens with transformation |
| 5 | Trend-commentary | Opens with news/timely angle |

## Output Per Video Generated

| Asset | Format | Location |
|-------|--------|----------|
| Final Video | MP4 (720p, 9:16) | Google Drive folder |
| Avatar Image | PNG (1024×1536) | tmpfiles.org (temporary) |
| Voiceover | MP3 | tmpfiles.org (temporary) |
| Metadata | Row entry | Google Sheets |

## Google Sheets Columns

| Column | Description |
|--------|-------------|
| topic | Video topic |
| intention | Content intention used |
| brand_name | Brand mentioned |
| content_theme | 2-3 word theme summary |
| script_audio | Full voiceover script |
| script_image | Image generation prompt |
| caption | Ready-to-post TikTok caption with hashtags |
| image_url | Temporary avatar image URL |
| audio_url | Temporary audio URL |
| video_url | Google Drive link to final video |
| status | done/error |
| created_at | Timestamp |

## Estimated Costs Per Video

| Service | Usage | Approximate Cost |
|---------|-------|------------------|
| Claude Sonnet 4 | 2K tokens | $0.01 |
| OpenAI gpt-image-1 | 1 image (1024×1536) | ~$0.04-0.08 |
| OpenAI TTS-1-HD | 450 characters | $0.01 |
| VEED/FAL.ai | 1 video render | ~$0.10-0.20 |
| Total | | ~$0.15-0.30 per video |

> Costs vary based on script length and current API pricing.
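As a sanity check on the cost table above, the per-video estimate reduces to a tiny calculator. The midpoint values below are my reading of the ranges in the table, not official pricing:

```python
# Mid-range cost per video in USD, taken from the estimates above.
COSTS = {
    "claude_script": 0.01,   # Claude Sonnet 4, ~2K tokens
    "avatar_image": 0.06,    # gpt-image-1, midpoint of ~$0.04-0.08
    "tts_voiceover": 0.01,   # TTS-1-HD, ~450 characters
    "veed_render": 0.15,     # VEED/FAL.ai, midpoint of ~$0.10-0.20
}

def estimate_batch_cost(num_videos: int) -> float:
    """Rough total for a batch; real costs vary with script length."""
    return round(num_videos * sum(COSTS.values()), 2)
```

A batch of 5 videos lands around $1.15 under these assumptions, consistent with the ~$0.15-0.30 per-video range.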
## Setup Checklist

### Step 1: Import Workflow

- [ ] Import generate-social-videos-with-ai-avatars-using-veed-and-claude.json into n8n

### Step 2: Configure API Keys

- [ ] Open the ⚙️ Workflow Configuration node
- [ ] Replace YOUR_ANTHROPIC_API_KEY with your actual Anthropic API key
- [ ] Replace YOUR_OPENAI_API_KEY with your actual OpenAI API key
- [ ] Verify your OpenAI organization at https://platform.openai.com/settings/organization/general (required for gpt-image-1)

### Step 3: Connect n8n Credentials

- [ ] Click on the Generate Video (VEED) node → Add FAL.ai credential
- [ ] Click on the Upload to Drive node → Add Google Drive OAuth2 credential
- [ ] Click on the Log to Sheets node → Add Google Sheets OAuth2 credential

### Step 4: Configure Storage

- [ ] Update the Upload to Drive node with your Google Drive folder URL
- [ ] Update the Log to Sheets node with your Google Sheets URL
- [ ] Create column headers in your Google Sheet: topic, intention, brand_name, content_theme, script_audio, script_image, caption, image_url, audio_url, video_url, status, created_at

### Step 5: Customize Content

- [ ] Update topic, brand_name, target_audience, and trending_hashtags
- [ ] Optionally add custom_avatar_description and/or custom_script

### Step 6: Test

- [ ] Set num_videos: 1 for initial testing
- [ ] Execute the workflow
- [ ] Check Google Drive for the output video
- [ ] Verify metadata was logged to Google Sheets

## MCP Integration (Optional)

This workflow can also be exposed to Claude Desktop via n8n's Model Context Protocol (MCP) integration, allowing you to generate videos through natural language prompts.
To enable MCP:

1. Add a Webhook Trigger node to the workflow (in addition to the Manual Trigger)
2. Connect it to the same ⚙️ Workflow Configuration node
3. Go to Settings → Instance-level MCP → Enable the workflow
4. Configure Claude Desktop with your n8n MCP server URL

Claude Desktop Configuration (Windows):

```json
{
  "mcpServers": {
    "n8n-mcp": {
      "command": "supergateway",
      "args": [
        "--streamableHttp",
        "https://YOUR_N8N_INSTANCE.app.n8n.cloud/mcp-server/http",
        "--header",
        "authorization:Bearer YOUR_MCP_ACCESS_TOKEN"
      ]
    }
  }
}
```

> Note: Install supergateway globally first: `npm install -g supergateway`

## Limitations & Notes

### Technical Limitations

- **tmpfiles.org**: Temporary file URLs expire after ~1 hour. Final videos are safe in Google Drive.
- **VEED processing**: Takes 2-5 minutes per video depending on length.
- **n8n Cloud network**: Some external domains are blocked; the workflow uses base64 for images to avoid this.

### Content Considerations

- Scripts are optimized for 30-45 seconds (TTS-friendly)
- Avatar images are AI-generated (not real people)
- Captions include hashtags automatically
- Each video in a batch gets a different content angle

### Best Practices

- **Start small**: Test with 1 video before scaling to 5
- **Review scripts**: Claude generates good content, but review before posting
- **Monitor costs**: Check API usage dashboards weekly
- **Backup sheets**: The Google Sheet serves as your content database

## Troubleshooting

| Issue | Solution |
|-------|----------|
| "Organization must be verified" | Verify at platform.openai.com/settings/organization/general |
| VEED authentication error | Re-add the FAL.ai credential to the VEED node |
| Google Drive "no binary field" | Ensure Download Video outputs to a field named data |
| JSON parse error from Claude | The workflow has fallback content; check the Claude node output |
| Image URL blocked | The workflow uses base64 to avoid this; ensure the gpt-image-1 model |
| MCP "Server disconnected" (Windows) | Install supergateway globally: npm install -g supergateway |
| MCP path error on Windows | Use supergateway directly instead of npx |

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 2.0 | Jan 2026 | Added custom avatar/script options, MCP integration support, improved configuration |
| 1.0 | Jan 2026 | Initial release with portrait mode, gpt-image-1, native VEED node |

## Credits

Built with:

- **n8n** - Workflow automation
- **Anthropic Claude** - Script generation
- **OpenAI** - Image & audio generation
- **VEED Fabric** - AI video rendering
- **Google Workspace** - Storage & logging
by Ludwig
## How it works

This workflow automates tagging for WordPress posts using AI:

1. Fetch blog post content and metadata.
2. Generate contextually relevant tags using AI.
3. Verify existing tags in WordPress and create new ones if necessary.
4. Automatically update posts with accurate and optimized tags.

## Set up steps

Estimated time: ~15 minutes.

1. Configure the workflow with your WordPress API credentials.
2. Connect your content source (e.g., RSS feed or manual input).
3. Adjust tag formatting preferences in the workflow settings.
4. Run the workflow to ensure proper tag creation and assignment.

This workflow is perfect for marketers and content managers looking to streamline their content categorization and improve SEO efficiency.
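The "verify existing tags and create new ones" step maps onto the WordPress REST API's `/wp-json/wp/v2/tags` endpoint (GET with `?search=` to check, POST to create). A minimal sketch of building the create-tag call with the standard library, assuming authentication via a WordPress application password; the site URL and credentials are placeholders, and the request is built but not sent:

```python
import base64
import json
import urllib.request

def build_create_tag_request(site: str, tag: str, user: str,
                             app_password: str) -> urllib.request.Request:
    """Build the POST /wp-json/wp/v2/tags call that creates a missing tag.

    Auth uses HTTP Basic with a WordPress application password.
    """
    token = base64.b64encode(f"{user}:{app_password}".encode()).decode()
    return urllib.request.Request(
        f"{site}/wp-json/wp/v2/tags",
        data=json.dumps({"name": tag}).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending the request with `urllib.request.urlopen` (or letting n8n's WordPress node do it) returns the new tag's ID, which is then attached to the post.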
by Dr. Firas
# 💥 Generate product images with NanoBanana Pro to Veo videos and Blotato

## Who is this for?

This workflow is designed for:

- Content creators and marketers
- E-commerce and product-based businesses
- Agencies producing social media visuals and videos
- Automation builders looking for AI-powered creative pipelines

It is ideal for anyone who wants to automate product image and video creation using AI and publish content without manual work.

## What problem is this workflow solving? / Use case

Creating product visuals and marketing videos usually requires multiple tools, manual prompt writing, and repetitive steps. This workflow solves:

- Manual image and video creation
- Inconsistent visual quality across assets
- Time-consuming prompt iteration
- Manual video publishing to social platforms

The workflow automates the entire process from image generation to video publishing using AI.

## What this workflow does

This workflow provides an end-to-end automation pipeline:

1. Generates high-quality product images using NanoBanana Pro
2. Applies Contact Sheet Prompting to explore multiple visual variations
3. Converts selected images into short marketing videos using Veo 3.1
4. Automatically publishes the final videos via Blotato

The result is a fully automated creative workflow that turns AI prompts into ready-to-publish video content.

## Setup

To use this workflow, you need the following services and credentials:

- **OpenAI API**: used for image analysis and prompt generation
- **NanoBanana Pro (fal.ai)**: product image generation. API: https://fal.ai/models/fal-ai/nano-banana-pro/edit/api
- **Veo 3.1 (fal.ai)**: video generation. API: https://fal.ai/models/fal-ai/veo3.1/first-last-frame-to-video
- **Blotato**: video publishing to social platforms. Sign up at Blotato.

All credentials must be added in n8n before running the workflow.
## How to customize this workflow to your needs

You can easily adapt this workflow by:

- Modifying AI prompts to match your brand style
- Adjusting image composition and realism parameters in NanoBanana Pro
- Changing video motion, pacing, and aspect ratio in Veo 3.1
- Selecting different social platforms or publishing rules in Blotato
- Replacing or extending individual steps while keeping the same architecture

The workflow is modular and can be reused for multiple products or campaigns.

## 🎥 Watch This Tutorial

👋 Need help or want to customize this?

- 📩 Contact: LinkedIn
- 📺 YouTube: @DRFIRASS
- 🚀 Workshops: Mes Ateliers n8n
- 📄 Documentation: Notion Guide

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube / 🚀 Mes Ateliers n8n
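For reference, calling the fal.ai models above from plain Python looks roughly like the sketch below. The queue URL pattern and the `Key` authorization scheme follow fal.ai's HTTP API as I understand it; verify both against the API pages linked in the setup section before relying on them. The request is only built here, not sent:

```python
import json
import urllib.request

FAL_KEY = "YOUR_FAL_KEY"  # placeholder credential

def build_fal_request(model_id: str, payload: dict) -> urllib.request.Request:
    """Build a queue request for a fal.ai model such as
    'fal-ai/nano-banana-pro/edit' or 'fal-ai/veo3.1/first-last-frame-to-video'."""
    return urllib.request.Request(
        f"https://queue.fal.run/{model_id}",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Key {FAL_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

In n8n this is what the HTTP Request nodes do under the hood; the payload fields (prompt, image URLs, aspect ratio) come from the model's API documentation.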
by Pratyush Kumar Jha
# Book2Audio Pro

## Workflow brief

Book2Audio Pro turns an uploaded book PDF into organized audio files. The workflow starts with a file upload form, extracts the book text, uses AI to detect the chapter structure and generate splitting logic, converts each chunk into audio, and then saves the final MP3 files into a Google Drive folder created for that upload.

## How it works

1. The user uploads a PDF through the form trigger.
2. The PDF text is extracted from the uploaded binary file.
3. An AI agent analyzes the first part of the book to detect chapter/section patterns and generates JavaScript code for structuring the full text.
4. A code node executes the generated logic to split the book into chapters and smaller sentence-safe chunks.
5. The text chunks are passed to OpenAI audio generation.
6. Each generated audio file is uploaded to a Google Drive folder named after the uploaded book.

## Quick Setup Guide

- 👉 Demo & Setup Video
- 👉 Course

## Nodes of interest

- **Book Pdf Upload**: form trigger for uploading the book PDF.
- **Extract Book Content**: extracts text from the uploaded PDF.
- **AI Agent**: detects chapter patterns and generates parsing logic.
- **Structured Output Parser**: enforces a clean AI response format.
- **Structure The Content**: runs the generated code to split the book into chunks.
- **Generate audio**: converts text chunks into audio using OpenAI.
- **Create folder**: creates a Google Drive folder for the book.
- **Loop Over Items**: processes each chunk one by one.
- **Upload file**: uploads the final MP3 files to Google Drive.
- **Merge**: combines the folder metadata with the processed content.

## What you'll need

### Credentials

- **Google Drive OAuth2 credentials** for creating folders and uploading MP3 files.
- **OpenAI API credentials** for:
  - the chat model used by the AI agent
  - audio generation

### Input requirements

- A valid PDF book file
- A book with reasonably detectable chapter or section markers, for best results

## Recommended settings & best practices

- Keep audio chunks under 3900 characters to avoid request limits and improve generation quality.
- Split on sentence boundaries to prevent unnatural audio cuts.
- Use a consistent chapter pattern in source books whenever possible, such as Chapter 1, CHAPTER I, or Part One.
- Make sure the binary property name matches the uploaded file field exactly.
- Keep the workflow idempotent by creating a separate Drive folder per upload.
- Test with a short PDF first to confirm extraction, parsing, and audio output are working correctly.
- If books have unusual formatting, improve the AI prompt so it can detect more chapter styles reliably.

## Customization ideas

- Add voice selection for different narration styles.
- Add language detection and generate audio in the original language.
- Add chapter-level naming for cleaner MP3 filenames.
- Add file naming rules based on book title, chapter number, and part number.
- Add error handling for scanned PDFs or extraction failures.
- Add a status notification after upload completion.
- Save chapter metadata in Google Sheets or a database.
- Support multiple output formats, such as MP3 and WAV.

Tags: book-to-audio, pdf-to-speech, openai, google-drive, n8n, text-to-audio, automation, ai-workflow, audiobook, document-processing
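The two chunking rules above (under 3900 characters, split only at sentence boundaries) can be sketched in a few lines. The workflow generates equivalent JavaScript via the AI agent; this Python version uses a simple regex approximation for sentence endings:

```python
import re

def chunk_text(text: str, limit: int = 3900) -> list[str]:
    """Split text into chunks at most `limit` characters long,
    breaking only at sentence boundaries to avoid unnatural audio cuts."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        candidate = f"{current} {s}".strip()
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = s  # a single over-long sentence becomes its own chunk
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk maps to one OpenAI audio-generation request and, ultimately, one MP3 segment.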
by Madame AI
# Generate visual resumes from Telegram inputs using Google Gemini

This workflow transforms text-based resume data into visually stunning images by leveraging Google Gemini's reasoning and vision capabilities. It autonomously analyzes the candidate's profile, selects an appropriate design template based on their industry, and renders a high-quality resume image directly in Telegram.

## Target Audience

Job seekers, career coaches, resume writers, and recruitment agencies looking to automate design generation.

## How it works

1. **Classify Input**: The workflow starts with a Telegram trigger. A Google Gemini agent analyzes the incoming message to determine whether it is casual chat or a resume generation request.
2. **Fetch Context**: If it is a resume request, a BrowserAct node triggers a workflow (using the "AI Resume Replicant" template) to fetch necessary external context or data.
3. **Ingest Designs (Optional)**: If a reference image is provided, CloudConvert standardizes the file, and Google Gemini Vision reverse-engineers the layout and style, saving the "Visual DNA" to Google Sheets.
4. **Draft Blueprint**: The "Resume Writer" AI agent selects a stored design template that matches the candidate's industry (e.g., "Corporate" for Finance, "Creative" for Design) and maps the text content to the layout.
5. **Generate Prompt**: A "Visualizer" AI agent converts the structured blueprint into a highly detailed natural-language prompt for image generation.
6. **Render & Deliver**: Google Gemini generates the final resume image, which is then sent back to the user via Telegram.

## How to set up

1. **Configure Credentials**: Connect your Telegram, Google Gemini, Google Sheets, CloudConvert, and BrowserAct accounts in n8n.
2. **Prepare BrowserAct**: Ensure the AI Resume Replicant template is saved in your BrowserAct account.
3. **Setup Google Sheet**: Create a new Google Sheet with the required header (listed below).
4. **Connect Sheet**: Open the Google Sheets nodes (Clear, Get, Append) and select your new spreadsheet.
5. **Configure Telegram**: Ensure your Telegram bot is connected to the Trigger and Message nodes.

## Google Sheet Headers

To use this workflow, create a Google Sheet with the following header: `Resume Details`

## Requirements

- **BrowserAct** account (Template: **AI Resume Replicant**)
- **Google Gemini** account
- **Telegram** account (Bot Token)
- **CloudConvert** account
- **Google Sheets** account

## How to customize the workflow

- **Refine Design Logic**: Modify the system prompt in the "Resume Writer" agent to change how the AI matches industries to design styles (e.g., force specific colors for specific roles).
- **Change Output Format**: Replace the Telegram response node with a Google Drive node to save the generated images as PDF or PNG files instead of sending them.
- **Switch Image Model**: Update the "Generate an image" node to use a different image generation model if preferred (e.g., OpenAI DALL-E).

## Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

## Workflow Guidance and Showcase Video

I Built a Resume Bot that CLONES Any Template! 🤖 (BrowserAct + n8n + Gemini Tutorial)
by Blumpo
# Generate AI ad creatives from website, logo, and product image with Claude + NanoBanana

## Who is this for?

This workflow is designed for marketers, founders, agencies, and content teams who want to generate static ad creatives faster from minimal brand input. It works especially well if you already have:

- a website
- a logo
- a product image, screenshot, or UI visual

and want to turn that into a structured ad concept and a final creative without building everything manually.

## What problem is this workflow solving? / Use case

Creating decent ad creatives usually takes more than just prompting an image model. You need to:

- understand what the product actually does
- pull useful messaging from the website
- figure out who the product is for
- write a clear value proposition
- decide what visual direction makes sense
- then generate the final ad

This workflow solves that by automating the full process: website + brand assets → insights → ad concept → generated image.

## What this workflow does

1. Collects a website URL, logo, and product image through a form
2. Analyzes the uploaded product image with Claude to understand what kind of visual it is
3. Fetches the homepage and selected internal pages from the website
4. Extracts and cleans website text into one usable source
5. Builds structured brand insights such as: product summary, customer group, problems, key features, key benefits, brand voice
6. Creates a marketing brief and ad concept with Claude
7. Generates a static ad creative with NanoBanana through OpenRouter
8. Converts the output into a file and uploads it to Google Drive

## Setup

Connect your accounts:

- **Anthropic API** for brand insights and ad concept generation
- **OpenRouter** for image analysis and final image generation
- **Google Drive** if you want to store the final output

Set your credentials in the respective nodes. Make sure your form accepts:

- **.jpg**
- **.png**
- **.webp**

If you do not want file export, disable the Upload file node.
## How to customize this workflow to your needs

- **Brand analysis**: Adjust the prompt in the brand insight step if you want different fields, such as competitor angles, tone categories, or ICP detail.
- **Page selection**: Change the subpage selection prompt if you want to prioritize pages like pricing, testimonials, integrations, or case studies.
- **Ad concept style**: Edit the concept generation prompt to control tone, structure, and creative direction.
- **Visual output**: Update the image generation prompt to make outputs more minimal, more editorial, more SaaS-like, or more product-focused.
- **Export flow**: Replace Google Drive with your own storage, CMS, or downstream creative workflow.

## How it works

The workflow starts with a form submission containing a website, logo, and optional product image. The uploaded assets are processed first: the logo is prepared for generation, while the product image is analyzed to understand whether it is a UI, product shot, illustration, object, or another type of visual.

Next, the workflow fetches the homepage, extracts navigation links, and uses Claude to select a few useful internal pages likely to contain stronger marketing input. Those pages are fetched and converted into text. That content is then cleaned and merged into one source. Claude uses it to build structured brand insights and turn them into a full ad concept, including headline, subheadline, CTA, visual direction, and layout direction.

Finally, the concept and uploaded assets are passed to the image model to generate the final ad creative, which can then be exported automatically.

## Result

With this workflow, you go from website + assets → brand insights → ad concept → generated creative in one flow, with much less manual prompting and much more structure.
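The "extracts navigation links" step described above can be sketched with Python's standard library alone; the HTML snippet and base URL used here are made up for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect same-site hrefs, mirroring the navigation-link extraction step."""

    def __init__(self, base: str):
        super().__init__()
        self.base = base
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                absolute = urljoin(self.base, href)  # resolve relative paths
                if absolute.startswith(self.base):   # keep internal links only
                    self.links.append(absolute)

def extract_internal_links(html: str, base: str) -> list[str]:
    parser = LinkExtractor(base)
    parser.feed(html)
    return parser.links
```

The resulting list is what gets handed to Claude to pick the few subpages (pricing, testimonials, etc.) most likely to contain strong marketing copy.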
by Facundo Cabrera
# Automated Meeting Minutes from Video Recordings

This workflow automatically transforms video recordings of meetings into structured, professional meeting minutes in Notion. It uses local AI models (Whisper for transcription and Ollama for summarization) to ensure privacy and cost efficiency, while uploading the original video to Google Drive for safekeeping. Ideal for creative teams, production reviews, or any scenario where visual context is as important as the spoken word.

## 🔄 How It Works

1. **Wait & Detect**: The workflow monitors a local folder. When a new .mkv video file is added, it waits until the file has finished copying.
2. **Prepare Audio**: The video is converted into a .wav audio file optimized for transcription (under 25 MB with high clarity).
3. **Transcribe Locally**: The local Whisper model generates a timestamped text transcript.
4. **Generate Smart Minutes**: The transcript is sent to a local Ollama LLM, which produces structured, summarized meeting notes.
5. **Store & Share**: The original video is uploaded to Google Drive, a new page is created in Notion with the notes and a link to the video, and a completion notification is sent via Discord.

## ⏱️ Setup Steps

- **Estimated Time**: 10-15 minutes (for technically experienced users).
- **Prerequisites**:
  - Install Python, FFmpeg, and the required packages (openai-whisper, ffmpeg-python).
  - Run Ollama locally with a compatible model (e.g., gpt-oss:20b, llama3, mistral).
  - Configure n8n credentials for Google Drive, Notion, and Discord.
- **Workflow Configuration**:
  - Update the file paths for the helper scripts (wait-for-file.ps1, create_wav.py, transcribe_return.py) in the respective "Execute Command" nodes.
  - Change the input folder path (G:\OBS\videos) in the "File" node to your own recording directory.
  - Replace the Google Drive folder ID and Notion database/page ID in their respective nodes.

> 💡 Note: Detailed instructions for each step, including error handling and variable setup, are documented in the Sticky Notes within the workflow itself.
## 📁 Helper Scripts Documentation

### wait-for-file.ps1

A PowerShell script that checks whether a file is still being written to (i.e., locked by another process). It returns 0 if the file is free and 1 if it is still locked.

Usage:

```powershell
.\wait-for-file.ps1 -FilePath "C:\path\to\your\file.mkv"
```

### create_wav.py

A Python script that converts a video file into a .wav audio file. It automatically calculates the necessary audio bitrate to keep the output file under 25 MB, a common requirement for many transcription services.

Usage:

```shell
python create_wav.py "C:\path\to\your\file.mkv"
```

### transcribe_return.py

A Python script that uses a local Whisper model to transcribe an audio file. It can auto-detect the language or use a language code specified in the filename (e.g., meeting.en.mkv for English, meeting.es.mkv for Spanish). The transcript is printed directly to stdout with timestamps, which is then captured by the n8n workflow.

Usage:

```shell
# Auto-detect language
python transcribe_return.py "C:\path\to\your\file.mkv"

# Force language via filename
python transcribe_return.py "C:\path\to\your\file.es.mkv"
```
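The size-capping logic described for create_wav.py reduces to one formula: the available bit budget divided by the recording's duration. A sketch of that calculation; the 5% headroom factor for container overhead is my own assumption, not taken from the script:

```python
def bitrate_for_size(duration_sec: float, max_bytes: int = 25 * 1024 * 1024) -> int:
    """Audio bitrate (bits/sec) that keeps the encoded output under
    max_bytes, the kind of calculation create_wav.py performs before
    handing the target bitrate to FFmpeg."""
    budget_bits = max_bytes * 8 * 0.95  # reserve ~5% for container overhead
    return int(budget_bits / duration_sec)
```

For a one-hour recording this yields roughly 55 kbps, which is ample for speech transcription.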
by Gilbert Onyebuchi
# Automate video creation

AI generates ideas, Vertex AI renders the videos, and the workflow auto-uploads to Google Drive with complete tracking.

## What You Get

- Gemini AI for creative prompts
- Vertex AI video generation
- Auto-upload to Google Drive
- Complete Google Sheets logging
- Smart retry logic
- Base64 to MP4 conversion

## Setup

1. Enable Vertex AI in Google Cloud
2. Get a Gemini API key
3. Run `gcloud auth print-access-token` to obtain an ACCESS TOKEN
4. Import the workflow & configure credentials
5. Add prompts & test

## Flow

Schedule → Gemini AI → Vertex AI → Wait → Convert → Upload → Log

## Resources

- Google Sheets Template

> ⚠️ Note: the ACCESS TOKEN expires hourly; refresh it using `gcloud auth print-access-token`

📧 LinkedIn: linkedin.com/in/yourprofile
🔗 More n8n Products: Click here
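The Base64-to-MP4 conversion step is a straight decode-and-write. A minimal sketch; the payload field name and output path depend on your workflow:

```python
import base64
from pathlib import Path

def save_base64_video(b64_data: str, out_path: str) -> int:
    """Decode a base64 video payload (as returned by Vertex AI) into an
    .mp4 file on disk, returning the number of bytes written."""
    raw = base64.b64decode(b64_data)
    Path(out_path).write_bytes(raw)
    return len(raw)
```

In n8n the same step is typically a Code node converting the base64 string into a binary item before the Google Drive upload node.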
by Growth AI
# N8N UGC Video Generator - Setup Instructions

## Transform Product Images into Professional UGC Videos with AI

This powerful n8n workflow automatically converts product images into professional User-Generated Content (UGC) videos using cutting-edge AI technologies, including Gemini 2.5 Flash, Claude 4 Sonnet, and VEO3 Fast.

## Who's it for

- **Content creators** looking to scale video production
- **E-commerce businesses** needing authentic product videos
- **Marketing agencies** creating UGC campaigns for clients
- **Social media managers** requiring quick video content

## How it works

The workflow operates in 4 distinct phases:

- **Phase 0: Setup** - Configure all required API credentials and services
- **Phase 1: Image Enhancement** - AI analyzes and optimizes your product image
- **Phase 2: Script Generation** - Creates authentic dialogue scripts based on your input
- **Phase 3: Video Production** - Generates and merges professional video segments

## Requirements

### Essential Services & APIs

- **Telegram Bot Token** (create via @BotFather)
- **OpenRouter API** with Gemini 2.5 Flash access
- **Anthropic API** for Claude 4 Sonnet
- **KIE.AI Account** with VEO3 Fast access
- **N8N Instance** (cloud or self-hosted)

### Technical Prerequisites

- Basic understanding of n8n workflows
- API key management experience
- Telegram bot creation knowledge

## How to set up

### Step 1: Service Configuration

1. **Create Telegram Bot**
   - Message @BotFather on Telegram
   - Use the /newbot command and follow the instructions
   - Save the bot token for later use
2. **OpenRouter Setup**
   - Sign up at openrouter.ai
   - Purchase credits for Gemini 2.5 Flash access
   - Generate and save an API key
3. **Anthropic Configuration**
   - Create an account at console.anthropic.com
   - Add credits to your account
   - Generate a Claude API key
4. **KIE.AI Access**
   - Register at kie.ai
   - Subscribe to a VEO3 Fast plan
   - Obtain a bearer token

### Step 2: N8N Credential Setup

Configure these credentials in your n8n instance:

1. **Telegram API**
   - Credential Name: telegramApi
   - Bot Token: your Telegram bot token
2. **OpenRouter API**
   - Credential Name: openRouterApi
   - API Key: your OpenRouter key
Anthropic API
- Credential Name: anthropicApi
- API Key: Your Anthropic key

HTTP Bearer Auth
- Credential Name: httpBearerAuth
- Token: Your KIE.AI bearer token

Step 3: Workflow Configuration

Import the Workflow
- Copy the provided JSON workflow
- Import it into your n8n instance

Update Telegram Token
- Locate the "Edit Fields" node
- Replace "Your Telegram Token" with your actual bot token

Configure Webhook URLs
- Ensure all Telegram nodes have proper webhook configurations
- Test webhook connectivity

Step 4: Testing & Validation

Test Individual Nodes
- Verify each API connection
- Check credential configurations
- Confirm node responses

End-to-End Testing
- Send a test image to your Telegram bot
- Follow the complete workflow process
- Verify the final video output

How to customize the workflow

Modify Image Enhancement Prompts
- Edit the HTTP Request node for Gemini
- Adjust the prompt text to match your style preferences
- Test different aspect ratios (current: 1:1 square format)

Customize Script Generation
- Modify the Basic LLM Chain node prompt
- Adjust video segment duration (current: 7-8 seconds each)
- Change dialogue style and tone requirements

Video Generation Settings
- Update VEO3 API parameters in the HTTP Request1 node
- Modify the aspect ratio (current: 16:9)
- Adjust model settings and seeds for consistency

Output Customization
- Change the final video format in the MediaFX node
- Modify Telegram message templates
- Add additional processing steps before delivery

Workflow Operation

Phase 1: Image Reception and Enhancement
- User sends a product image via Telegram
- System prompts for enhancement instructions
- Gemini AI analyzes and optimizes the image
- Enhanced square-format image is returned

Phase 2: Analysis and Script Creation
- System requests a dialogue concept from the user
- AI analyzes image details and environment
- Claude generates a realistic 2-segment script
- Scripts respect the physical constraints of the original image

Phase 3: Video Generation
- Two separate videos are generated using VEO3
- System monitors generation status
- Videos are merged into a single flowing sequence
- Final video is delivered via Telegram

Troubleshooting

Common Issues
- **API Rate Limits**: Implement delays between requests
- **Webhook Failures**: Verify URL configurations and SSL certificates
- **Video Generation Timeouts**: Increase the wait node duration
- **Credential Errors**: Double-check all API keys and permissions

Error Handling

The workflow includes automatic error detection:
- Failed video generation triggers an error message
- Status checking prevents infinite loops
- Alternative outputs for different scenarios

Advanced Features

Batch Processing
- Modify the trigger to handle multiple images
- Add queue management for high-volume usage
- Implement user session tracking

Custom Branding
- Add watermarks or logos to generated videos
- Customize color schemes and styling
- Include brand-specific dialogue templates

Analytics Integration
- Track usage metrics and success rates
- Monitor API costs and optimization opportunities
- Implement user behavior analytics

Cost Optimization

API Usage Management
- Monitor token consumption across services
- Implement caching for repeated requests
- Use lower-cost models for testing phases

Efficiency Improvements
- Optimize image sizes before processing
- Implement smart retry mechanisms
- Use batch processing where possible

This workflow transforms static product images into engaging, professional UGC videos automatically, saving hours of manual video creation while maintaining high-quality output perfect for social media platforms.
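The status-checking pattern described above (a Wait node plus a loop that prevents infinite polling while VEO3 renders) can be sketched outside n8n as a bounded polling helper. This is a minimal illustration of the pattern only: the `status` and `video_url` field names and the completion states are hypothetical placeholders, not the real KIE.AI response schema — check their API docs before relying on any of them.

```python
import time

def poll_until_done(fetch_status, max_attempts=30, delay_seconds=10, sleep=time.sleep):
    """Poll a status callable until the job completes, fails, or we give up.

    fetch_status() should return a dict like {"status": ..., "video_url": ...}.
    These field names are illustrative -- adapt them to the real KIE.AI response.
    """
    for _ in range(max_attempts):
        job = fetch_status()
        if job.get("status") == "completed":
            return job.get("video_url")
        if job.get("status") == "failed":
            raise RuntimeError("Video generation failed")
        sleep(delay_seconds)  # in n8n, this is the Wait node's duration
    raise TimeoutError(f"Gave up after {max_attempts} polls")

# Simulated status sequence: two "processing" responses, then done.
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "completed", "video_url": "https://example.com/out.mp4"},
])
url = poll_until_done(lambda: next(responses), sleep=lambda s: None)
print(url)
```

Capping `max_attempts` is what "Status checking prevents infinite loops" amounts to: a job that never finishes raises a timeout instead of blocking the workflow forever.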
by Olaf Titel
Setup & Instructions — fluidX: Create Session, Analyze & Notify

Goal: This workflow demonstrates the full fluidX THE EYE integration — starting a live session, inviting both the customer (via SMS) and the service agent (via email), and then accessing the media (photos and videos) created during the session. Captured images are automatically analyzed with AI, uploaded to external storage (such as Google Drive), and a media summary for the session is generated at the end.

The agent receives an email with a link to join the live session. The customer receives an SMS with a link to start sharing their camera. Once both are connected, the agent can view the live feed, and the system automatically stores uploaded images and videos in Google Drive. When the session ends, the workflow collects all media and creates a complete AI-powered session summary (stored and updated in Google Drive). Below is an example screenshot from the customer's phone:

Prerequisites

- **Developer account:** https://live.fluidx.digital (activate the **TEST plan**, €0)
- **API docs (Swagger):** fluidX.digital API

🔐 Required Credentials

1️⃣ fluidX API key (HTTP Header Auth)
- Credential name in n8n: fluidx API key
- Header name: x-api-key
- Header value: YOUR_API_KEY

2️⃣ SMTP account (for outbound email)
- Credential name in n8n: SMTP account
- Configure host, port, username, and password according to your provider
- Enable TLS/SSL as required

3️⃣ Google Drive account
- Used to store photos and videos and to automatically update the session summary files.

4️⃣ OpenAI API (for AI analysis & summary)
- Used in the Analyze Images (AI) and Generate Summary parts of the workflow.
- Credential type: OpenAI
- Credential name (suggested): OpenAI account
- API Key: your OpenAI API key
- Model: e.g. gpt-4.1, gpt-4o, or similar (choose in the OpenAI node settings)

⚙️ Configuration (in the "Set Config" node)

- BASE_URL: https://live.fluidx.digital
- company / project / billingcode / sku: adjust as needed
- emailAgent: set before running (empty in template)
- phoneNumberUser: set before running (empty in template)

Flow Overview

Form Trigger → Create Session → Set Session Vars → Send SMS (User) → Send Email (Agent) → Monitor Media → Analyze Images (AI) → Upload Files to Google Drive → Generate Summary → Update Summary File

The workflow starts automatically when a form submission is received. Users enter the customer's phone number and the agent's email, and the system creates a new fluidX THE EYE session. As media is uploaded during the session, the workflow automatically retrieves, stores, analyzes, and summarizes it — providing a complete end-to-end automation example for remote inspection, support, or field-service use cases.

Notes

- Do not store real personal data inside the template.
- Manage API keys and secrets via n8n Credentials or environment variables.
- Log out of https://live.fluidx.digital in the agent's browser before testing, to ensure a clean invite flow and session creation.
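To make the Set Config values and the `x-api-key` header concrete, here is a sketch of how the "Create Session" call could be assembled outside n8n. The `/sessions` path and the body field names are hypothetical — the real endpoint and schema must be taken from the fluidX Swagger docs; only the header wiring and the use of the config values are shown here.

```python
def build_create_session_request(config, api_key):
    """Assemble the HTTP request the 'Create Session' step would send.

    The /sessions path and body fields mirror the Set Config node values;
    the exact endpoint and schema are hypothetical -- confirm them against
    the fluidX Swagger documentation.
    """
    return {
        "method": "POST",
        "url": f"{config['BASE_URL'].rstrip('/')}/sessions",
        "headers": {
            "x-api-key": api_key,  # matches the HTTP Header Auth credential
            "Content-Type": "application/json",
        },
        "json": {
            "company": config["company"],
            "project": config["project"],
            "billingcode": config["billingcode"],
            "sku": config["sku"],
            "emailAgent": config["emailAgent"],
            "phoneNumberUser": config["phoneNumberUser"],
        },
    }

config = {
    "BASE_URL": "https://live.fluidx.digital",
    "company": "acme", "project": "demo", "billingcode": "b1", "sku": "s1",
    "emailAgent": "agent@example.com", "phoneNumberUser": "+491700000000",
}
req = build_create_session_request(config, "YOUR_API_KEY")
print(req["url"])
```

In the actual workflow, the HTTP Request node does this assembly for you: BASE_URL comes from the Set Config node and the `x-api-key` header from the HTTP Header Auth credential.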
by Blumpo
Generate AI Ecommerce Ads from Product Page and Images with Claude + NanoBanana

👥 Who is this for?

This workflow is designed for ecommerce brands, marketers, agencies, and content teams who want to generate static product ads faster from minimal product input. It works especially well if you already have:

- a product page URL
- a logo
- a product image
- a need to turn that into a more structured ecommerce ad concept and final creative without building everything manually

🧩 What problem is this workflow solving? / Use case

Creating decent ecommerce ad creatives usually takes more than just prompting an image model. You need to:

- understand what the product actually is
- extract the real use cases and customer needs
- identify the most important product features and benefits
- pull proof points, pricing, or offer context from the page
- decide what visual direction makes sense for the product
- then generate the final ad

This workflow solves that by automating the full process: product page + brand assets → product insights → ad concept → generated image.

⚙️ What this workflow does

- Collects a product URL, logo, and product image through a form
- Analyzes the uploaded product image with Claude to understand what kind of product visual it is
- Fetches the product page
- Extracts and cleans the product page text into one usable source
- Builds structured product insights, such as: product name, product summary, product category, customer group, use cases, problems/needs, key product features, key benefits, proof/trust signals, offer/pricing, and brand voice
- Creates an ecommerce ad concept with Claude
- Generates a static ecommerce ad creative with NanoBanana through OpenRouter
- Converts the output into a file and uploads it to Google Drive

🔌 Setup

Connect your accounts:

- **Anthropic API** for product insight extraction and ad concept generation
- **OpenRouter** for image analysis and final image generation
- **Google Drive** if you want to store the final output

Set your credentials in the respective nodes.
Make sure your form accepts:

- **.jpg**
- **.png**
- **.webp**

If you do not want file export, disable the Upload file node.

🛠️ How to customize this workflow to your needs

- Product analysis: Adjust the product insight prompt if you want different fields, such as ingredients, materials, objections, bundle logic, or audience segments.
- Ad concept style: Edit the concept generation prompt to control tone, structure, and creative direction.
- Visual output: Update the image generation prompt to make outputs more minimal, more premium, more editorial, more offer-led, or more product-shot focused.
- Copy structure: Change the allowed copy structures if you want more offer-first, testimonial-first, or badge-led ecommerce ads.
- Export flow: Replace Google Drive with your own storage, CMS, or downstream creative workflow.

🔍 How it works

The workflow starts with a form submission containing a product URL, logo, and optional product image. The uploaded assets are processed first: the logo is prepared for generation, while the product image is analyzed to understand whether it is a product shot, packaging, illustration, object, or another kind of asset.

Next, the workflow fetches the product page and converts it into readable text. That content is then cleaned and turned into one usable source. Claude uses it to build structured product insights, including product summary, category, customer group, problems or needs, use cases, features, benefits, trust signals, pricing or offer, and brand voice.

Based on that, Claude creates one ecommerce ad concept with the copy structure, main text, optional supporting text, badges or microcopy, CTA, visual direction, layout direction, and style direction. Finally, the concept and uploaded assets are passed to the image model to generate the final ecommerce ad creative, which can then be exported automatically.
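The accepted-format rule from the setup section (.jpg, .png, .webp) is normally enforced by the form node itself, but if you add a pre-check in an n8n Code node or an external intake script, the same rule can be sketched like this (the function name is our own, not part of the template):

```python
from pathlib import Path

# The three formats the workflow's form is configured to accept.
ALLOWED_EXTENSIONS = {".jpg", ".png", ".webp"}

def is_allowed_upload(filename: str) -> bool:
    """True when the uploaded file matches a format the form accepts."""
    return Path(filename).suffix.lower() in ALLOWED_EXTENSIONS

print(is_allowed_upload("logo.PNG"))   # True (case-insensitive)
print(is_allowed_upload("photo.gif"))  # False
```

Rejecting unsupported files before the Claude image-analysis step avoids spending API credits on assets the pipeline cannot use.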
✅ Result

With this workflow, you go from product page + assets → product insights → ad concept → generated ecommerce creative in one flow, with much less manual prompting and much more structure. This is also close to the idea behind Blumpo: better ads come from better context first, not just better prompting.