by GiovanniSegar
Super simple workflow to convert image URLs to an uploaded attachment in Airtable. You'll need to adjust the field names to match your specific data, including in the filter formula where it says "Cover image URL". Just replace that with the field name where you are storing the image URL.
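The conversion at the heart of this workflow can be sketched as below. Field names here ("Cover image URL", "Cover image") are illustrative placeholders for your own columns; the grounded part is the shape Airtable expects — attachment fields take an array of objects, each with a `url` key, which Airtable then downloads and stores as an attachment.

```javascript
// Sketch of the URL → attachment conversion (field names are
// hypothetical — replace them with your own table's columns).
function toAttachment(record) {
  return {
    id: record.id,
    fields: {
      // Airtable attachment fields accept [{ url: "..." }] on write.
      "Cover image": [{ url: record.fields["Cover image URL"] }],
    },
  };
}

const record = {
  id: "rec123",
  fields: { "Cover image URL": "https://example.com/cover.jpg" },
};
console.log(JSON.stringify(toAttachment(record).fields["Cover image"]));
```

The same shape applies whether you write the record back via the Airtable node or a raw HTTP request.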
by Muhammad Farooq Iqbal
Transform any product image into engaging UGC (User-Generated Content) videos and images using AI automation. This comprehensive workflow analyzes images uploaded via Telegram, generates realistic product images, and creates authentic UGC-style videos with multiple scenes.

Key Features:
- 📱 **Telegram Integration**: Upload images directly via Telegram bot
- 🔍 **AI Image Analysis**: Automatically analyzes and describes uploaded images using GPT-4 Vision
- 🎨 **Smart Image Generation**: Creates realistic product images using Fal.ai's nano-banana model with reference images
- 🎬 **UGC Video Creation**: Generates 3-scene UGC-style videos using KIE.ai's Veo3 model
- 📹 **Video Compilation**: Automatically combines multiple video scenes into a final output
- 📤 **Instant Delivery**: Sends both generated images and final videos back to Telegram

Perfect For:
- E-commerce businesses creating authentic product content
- Social media marketers needing UGC-style content
- Influencers and content creators
- Marketing agencies automating content production
- Anyone looking to scale UGC content creation

What It Does:
1. Receives product images via Telegram
2. Analyzes image content with AI vision
3. Generates realistic product images with UGC styling
4. Creates 3-scene video prompts (Hook → Product → CTA)
5. Generates individual video scenes
6. Combines scenes into the final UGC video
7. Delivers both image and video results

Technical Stack:
- OpenAI GPT-4 Vision for image analysis
- Fal.ai for image generation and video merging
- KIE.ai Veo3 for video generation
- Telegram for input/output interface

Ready to automate your UGC content creation? This workflow handles everything from image analysis to final video delivery!
by Dhruv Dalsaniya
This workflow is designed for e-commerce, marketing teams, or creators who want to automate the production of high-quality, AI-generated product visuals and ad creatives.

Here is what the workflow does:
1. It accepts a product description and other creative inputs through a web form.
2. It uses AI to transform your text input into a detailed, creative prompt.
3. This prompt is then used to generate a product image.
4. The workflow analyzes the generated image and creates a new prompt to generate a second image that includes a model, adding a human element to the visual.
5. A final prompt is created from the model image to generate a short, cinematic video.
6. All generated assets (images and video) are automatically uploaded to your specified hosting platform, providing you with direct URLs for immediate use.

This template is an efficient solution for scaling your content creation efforts, reducing time spent on manual design, and producing a consistent stream of visually engaging content for your online store, social media, and advertising campaigns.

Prerequisites:
- **OpenRouter Account**: Required for the AI agents that generate image and video prompts.
- **GOAPI Account**: Used for the final video generation process.
- **Media Hosting Platform**: A self-hosted service like MediaUpload, or an alternative such as Google Drive, that can provide a direct URL for uploaded images and videos. This is essential for passing the visuals between different steps of the workflow.
by Automate With Marc
🎥 Telegram Image-to-Video Generator Agent (Veo3 / Seedance Integration)

⚠️ This template uses community nodes and some credential-based HTTP API calls (e.g. Seedance/Wavespeed). Ensure proper credentials are configured before running.

🛠️ In the accompanying video tutorial, this logic is built as two separate workflows:
1. Telegram → Image Upload + Prompt Agent
2. Prompt Output → Video Generation via API

Watch the full video tutorial: https://youtu.be/iaZHef5bZAc&list=PL05w1TE8X3baEGOktlXtRxsztOjeOb8Vg&index=1

✨ What This Workflow Does
This powerful automation allows you to generate short-form videos from a Telegram image input and user prompt — perfect for repurposing content into engaging reels. From the moment a user sends a photo with a caption to your Telegram bot, this n8n workflow:
- 📸 Captures the image and saves it to Google Drive
- 🧠 Uses an AI Agent (via LangChain + OpenAI) to craft a Seedance/Veo3-compatible video prompt
- 📑 Logs the interaction to a Google Sheet
- 🎞️ Sends the prompt + image to the Seedance (Wavespeed) API to generate a video
- 🚀 Sends the resulting video back to the user on Telegram — fully automated

🔗 How It Works (Step-by-Step)
1. **Telegram Bot Trigger** — Listens for incoming images and captions
2. **Conditional Logic** — Filters out invalid inputs
3. **AI Agent (LangChain)** — Uses OpenAI GPT to generate a video prompt and attach the most recent image URL (from the Google Sheet)
4. **Google Drive Upload** — Saves the Telegram image and logs the share link
5. **Google Sheets Logging** — Appends a new row with date + file link
6. **Wavespeed (Seedance/Veo3) API** — Calls the /bytedance/seedance-v1-pro-i2v-480p endpoint with image and prompt
7. **Video Polling & Output** — Waits for generation to complete, then sends the final video file back to the Telegram user

🛠️ Tools & APIs Used
- Telegram Bot (trigger + video reply)
- LangChain Agent node
- OpenAI GPT-4.1-mini for prompt generation
- Simple Memory & Tools (Google Sheets)
- Google Drive (image upload)
- Google Sheets (logs prompts + image URLs)
- Wavespeed / Seedance API (image-to-video generation)

🧩 Requirements
Before running this workflow:
- ✅ Set up a Telegram bot and configure credentials
- ✅ Connect your Google Drive and Google Sheets credentials
- ✅ Sign up for Wavespeed / Seedance and generate an API key
- ✅ Replace placeholder values in the HTTP Request nodes, the Google Drive folder ID, and the Google Sheet document ID

📦 Suggested Use Cases
- Generate short-form videos from image ideas
- Reformat static images into dynamic reels
- Repurpose visual content for TikTok/Instagram
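The request body sent to the Seedance endpoint in step 6 can be sketched as below. The endpoint path comes from the workflow description; the field names (`image`, `prompt`, `duration`) are assumptions for illustration — check the Wavespeed API documentation for the exact schema before wiring up your HTTP Request node.

```javascript
// Hedged sketch of the payload for the Wavespeed
// /bytedance/seedance-v1-pro-i2v-480p endpoint.
// Field names are illustrative, not the confirmed API schema.
function buildSeedancePayload(imageUrl, prompt) {
  return {
    image: imageUrl,  // public URL of the uploaded Telegram image
    prompt: prompt,   // video prompt produced by the AI Agent
    duration: 5,      // assumed duration parameter, in seconds
  };
}

const payload = buildSeedancePayload(
  "https://drive.google.com/uc?id=FILE_ID",
  "Slow dolly-in on the product, soft daylight"
);
console.log(JSON.stringify(payload));
```

In n8n, this object would be the JSON body of the HTTP Request node, with your Wavespeed API key supplied via Header Auth credentials.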
by Tomoki
Video Processing Pipeline with Thumbnail Generation and CDN Distribution

Summary
Automated video processing system that monitors S3 for new uploads, generates thumbnails and preview clips, extracts metadata, transcodes to multiple formats, and distributes to a CDN with webhook notifications.

Detailed Description
A comprehensive video processing workflow that receives S3 events or manual triggers, validates video files, extracts metadata via FFprobe, generates thumbnails at key frames, creates animated GIF previews, transcodes to multiple resolutions, invalidates the CDN cache, and sends completion notifications.

Key Features
- **S3 Event Monitoring**: Automatic detection of new video uploads
- **Thumbnail Generation**: Multiple sizes at key frame intervals
- **Video Metadata**: FFprobe extraction of duration, resolution, and codec info
- **Preview GIF**: Animated preview clips for video galleries
- **Multi-Format Transcoding**: Convert to 1080p, 720p, 480p
- **CDN Distribution**: Cloudflare cache invalidation and signed URLs
- **Webhook Callbacks**: Notify the origin system on completion

Use Cases
- Video hosting platforms
- Media asset management systems
- Content delivery networks
- Video streaming services
- Social media platforms
- E-learning video processing
- User-generated content platforms

Required Credentials
- AWS S3 Credentials (for video storage)
- FFmpeg API credentials (via HTTP)
- Cloudflare API Token (for CDN)
- Slack Bot Token (for notifications)
- Google Sheets OAuth (for logging)

Node Count: 24 (19 functional + 5 sticky notes)

Unique Aspects
- Uses a Webhook for S3 event notifications
- Uses Code nodes for S3 info extraction and URL generation
- Uses an If node for video format validation
- Uses HTTP Request nodes for the FFprobe, FFmpeg, and CDN APIs
- Uses an Aggregate node for collecting parallel processing results
- Uses Merge nodes to consolidate multiple workflow paths
- Implements parallel processing for thumbnails, GIF, and transcoding

Workflow Architecture

```
[S3 Event Webhook] / [Manual Webhook]
  → [Merge Triggers]
  → [Extract S3 Info] (Code)
  → [Check Is Video] (If)
      ├─ No  → [Invalid Response]
      └─ Yes → [Get Video Metadata] (FFprobe)
               → [Parse Video Metadata] (Code)
               → [Thumbnails] | [Preview GIF] | [Transcode]   (parallel)
               → [Aggregate Results]
               → [Invalidate CDN Cache]
               → [Generate Signed URLs]
               → [Log Sheet] | [Slack]
               → [Merge Output Paths]
  → [Merge All Paths]
  → [Respond to Webhook]
```

Configuration Guide
1. S3 Event: Configure the S3 bucket notification to send events to the webhook
2. FFmpeg API: Use a hosted FFmpeg service (e.g., api.ffmpeg-service.com)
3. Cloudflare: Set the zone ID and API token for cache invalidation
4. Slack Channel: Set #video-processing for notifications
5. Google Sheets: Connect for processing-metrics logging

Supported Video Formats

| Extension | MIME Type |
|-----------|-----------|
| .mp4 | video/mp4 |
| .mov | video/quicktime |
| .avi | video/x-msvideo |
| .mkv | video/x-matroska |
| .webm | video/webm |
| .m4v | video/x-m4v |

Thumbnail Generation

| Size | Dimensions | Suffix |
|--------|-----------|---------|
| Large | 1280x720 | _large |
| Medium | 640x360 | _medium |
| Small | 320x180 | _small |

Thumbnails generated at: 10%, 30%, 50%, 70%, 90% of video duration

Transcoding Presets

| Preset | Resolution | Bitrate | Codec |
|--------|------------|---------|-------|
| 1080p | 1920x1080 | 5000k | H.264 |
| 720p | 1280x720 | 2500k | H.264 |
| 480p | 854x480 | 1000k | H.264 |

Output Structure

```json
{
  "job_id": "job_1705312000_abc123",
  "status": "completed",
  "original": {
    "filename": "video.mp4",
    "resolution": "1920x1080",
    "duration": "00:05:30"
  },
  "thumbnails": {
    "large": "https://cdn/thumbnails/job_id/thumb_0_large.jpg",
    "medium": "https://cdn/thumbnails/job_id/thumb_0_medium.jpg",
    "small": "https://cdn/thumbnails/job_id/thumb_0_small.jpg"
  },
  "preview_gif": "https://cdn/previews/job_id/preview.gif",
  "transcoded": {
    "1080p": "https://cdn/transcoded/job_id/video_1080p.mp4",
    "720p": "https://cdn/transcoded/job_id/video_720p.mp4",
    "480p": "https://cdn/transcoded/job_id/video_480p.mp4"
  }
}
```
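The thumbnail-timing rule above (capture points at fixed fractions of the video duration) is simple enough to sketch directly; a Code node version might look like this:

```javascript
// Compute thumbnail capture offsets at 10%, 30%, 50%, 70% and 90%
// of the video duration, rounded to whole seconds.
function thumbnailTimestamps(durationSeconds) {
  return [0.1, 0.3, 0.5, 0.7, 0.9].map(
    (fraction) => Math.round(fraction * durationSeconds)
  );
}

// The 5:30 (330 s) example video from the output above:
console.log(thumbnailTimestamps(330)); // → [ 33, 99, 165, 231, 297 ]
```

Each offset would then be passed to the FFmpeg API as the seek position for one thumbnail set (large/medium/small).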
by simonscrapes
Overview

This n8n automation is a complete LinkedIn Content Engine that turns simple topic ideas into fully written, visual, and scheduled posts. It features a "Human-in-the-Loop" design, meaning AI handles the heavy lifting of writing and image creation, but nothing goes live until you manually approve it in Google Sheets.

How It Works

The system runs two separate workflows in parallel:

1. The "Creator" Workflow
- Input: Detects when you add a new topic to your "Content Calendar" Google Sheet.
- Brand Alignment: Pulls your specific "Brand Voice" guidelines from a separate tab to ensure the AI sounds like you.
- Creation: Uses Gemini Flash 1.5 to write the post and DALL-E 3 to generate a matching professional image.
- Drafting: Uploads the image to ImgBB and saves the full draft back to your sheet with a status of "Draft."

2. The "Publisher" Workflow
- Daily Scan: Wakes up every morning to check your Content Calendar.
- Verification: Looks for posts that match two criteria: the Date Scheduled matches today's date, and the Status is marked as "Approved" (by you).
- Publishing: If both match, it automatically uploads the text and image to LinkedIn and updates the sheet status to "Posted."

Tools Used: n8n, Google Sheets, OpenRouter (Gemini / OpenAI), ImgBB.

Connect & Learn More
- YouTube Channel: Simon Scrapes – more tutorials on AI & Automation.
- Community: Skool Community – master AI & Automation with us.
- Full Video Tutorial: Watch the step-by-step build here
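The Publisher's two-part verification can be sketched as below. The row shape and column names ("Date Scheduled", "Status") are illustrative — match them to your own Content Calendar sheet:

```javascript
// A row publishes only when its scheduled date is today AND a human
// has flipped its status to "Approved". Column names are assumptions.
function shouldPublish(row, today = new Date().toISOString().slice(0, 10)) {
  return row["Date Scheduled"] === today && row["Status"] === "Approved";
}

const row = { "Date Scheduled": "2025-01-15", "Status": "Approved" };
shouldPublish(row, "2025-01-15");                       // true  — publish, mark "Posted"
shouldPublish({ ...row, Status: "Draft" }, "2025-01-15"); // false — still awaiting approval
```

This is the whole Human-in-the-Loop gate: the AI writes drafts, but only rows passing this check ever reach LinkedIn.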
by Snehasish Konger
How it works:

This template takes approved Notion pages and syncs them to a Webflow CMS collection as draft items. It reads pages marked Status = Ready for publish in a specific Notion database/project, merges JSON content stored across page blocks into a single object, then either creates a new CMS item or updates the existing one by name. On success it sets the Notion page to 5. Done; on failure it switches the page to On Hold for review.

Step-by-step:
1. **Manual Trigger** — You start the run with When clicking ‘Execute workflow’.
2. **Get Notion Pages** (Notion → Database: Tech Content Tasks) — Pull all pages with Status = Ready for publish, scoped to the target Project.
3. **Loop Over Items** (Split In Batches) — Process one Notion page at a time.
4. **Code (Pass-through)** — Expose page fields (e.g., name, id, url, sector) for downstream nodes.
5. **Get Notion Block (children)** — Fetch all blocks under the page id.
6. **Merge Content (Code)** — Concatenate code-block fragments, parse them into one mergedContent JSON, and attach the page metadata.
7. **Get Webflow Items (HTTP GET)** — List items in the target Webflow collection to see if an item with the same name already exists.
8. **Update or Create (Switch)** —
   - No match: Create Webflow Item (POST) with isDraft: true, mapping all fieldData (e.g., category titles, meta title, excerpt, hero copy/image, benefits, problem pointers, FAQ, ROI).
   - Match: Update Webflow Item (Draft) (PATCH) for that id. Keep the existing slug, write the latest fieldData, leave isDraft: true.
9. **Write Back Status (Notion)** — Success path → set Status = 5. Done. Error path → set Status = On Hold.
10. **Log Submission (Code)** — Log a compact object with status, notionPageId, webflowItemId, timestamp, and action.
11. **Wait → Loop** — Short pause, then continue with the next page.

Tools integration:
- **Notion** — source database and page blocks for approved content.
- **Webflow CMS API** — destination collection; items created/updated as **drafts**.
- **n8n Code** — JSON merge and lightweight logging.
- **Split In Batches + Wait** — controlled, item-wise processing.

Want hands-free publishing? Add a Cron trigger before step 2 to run on a schedule.
by Elvis Sarvia
Validate AI-generated outputs before your workflow acts on them. This template sends a support ticket through AI classification, parses the JSON response, and checks that categories, urgency levels, and confidence scores are all within valid ranges.

What you'll do
- Send a support ticket to the AI for classification.
- Watch the Code node parse and validate the AI's JSON response against a defined schema.
- See how valid outputs continue through the workflow while invalid ones get flagged.

What you'll learn
- How to structure AI prompts to return valid JSON
- How Code nodes parse and validate AI output against expected schemas
- How to check confidence scores, valid categories, and urgency levels programmatically
- How to build retry and fallback paths for malformed AI responses

Why it matters
AI models don't always return what you expect. A confidence score of "high" instead of 0.95, a missing category field, or a malformed JSON response can silently break downstream steps. This template catches those failures before they propagate.

This template is a learning companion to the Production AI Playbook, a series that explores strategies, shares best practices, and provides practical examples for building reliable AI systems in n8n. https://go.n8n.io/PAP-D&A-Blog
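A minimal sketch of the validation the Code node performs — the specific category and urgency lists are illustrative stand-ins for whatever schema your prompt defines:

```javascript
// Hypothetical schema for a support-ticket classifier.
const VALID_CATEGORIES = ["billing", "technical", "account", "other"];
const VALID_URGENCY = ["low", "medium", "high"];

function validateClassification(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw); // catches malformed JSON responses
  } catch (err) {
    return { valid: false, reason: "malformed JSON" };
  }
  if (!VALID_CATEGORIES.includes(parsed.category)) {
    return { valid: false, reason: "unknown category" };
  }
  if (!VALID_URGENCY.includes(parsed.urgency)) {
    return { valid: false, reason: "unknown urgency" };
  }
  // Confidence must be numeric in [0, 1] — a string like "high" fails here.
  if (typeof parsed.confidence !== "number" ||
      parsed.confidence < 0 || parsed.confidence > 1) {
    return { valid: false, reason: "confidence out of range" };
  }
  return { valid: true, data: parsed };
}
```

Valid results flow on; a `{ valid: false, reason }` result is what the retry and fallback branches key off.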
by Alok Kumar
📒 Generate a Product Requirements Document (PRD) and test scenarios from form input to PDF with OpenRouter and APITemplate.io

This workflow generates a Product Requirements Document (PRD) and test scenarios from structured form inputs. It uses OpenRouter LLMs (GPT/Claude) for natural language generation and APITemplate.io for PDF export.

Who’s it for
This template is designed for product managers, business analysts, QA teams, and startup founders who need to quickly create Product Requirement Documents (PRDs) and test cases from structured inputs.

How it works
1. A Form Trigger collects key product details (name, overview, audience, goals, requirements).
2. The LLM Chain (OpenRouter GPT/Claude) generates a professional, structured PRD in Markdown format.
3. A second LLM Chain creates test scenarios and Gherkin-style test cases based on the PRD.
4. Data is cleaned and merged using a Set node.
5. The workflow sends the formatted document to APITemplate.io to generate a polished PDF.
6. Finally, the workflow returns the PDF via a Form Completion node for easy download.

⚡ Requirements
- OpenRouter API Key (or any LLM)
- APITemplate.io account

🎯 Use cases
- Rapid PRD drafting for startups.
- QA teams generating test scenarios automatically.
- Standardized documentation workflows.

👉 Customize by editing prompts, PDF templates, or extending with integrations (Slack, Notion, Confluence).

Need Help? Ask in the n8n Forum! Happy Automating with n8n! 🚀
by Robert Breen
This n8n workflow automatically generates a custom YouTube thumbnail using OpenAI’s DALL·E, based on a YouTube video’s transcript and title. It uses Apify actors to extract the video metadata and transcript, then processes the data into a prompt for DALL·E and creates a high-resolution image for use as a thumbnail.

✅ Key Features
- 📥 **Form Trigger**: Accepts a YouTube URL from the user.
- 🧠 **GPT-4o Prompt Creation**: Summarizes transcript and title into a descriptive DALL·E prompt.
- 🎨 **DALL·E Image Generation**: Produces a clean, minimalist YouTube thumbnail with OpenAI’s image model.
- 🪄 **Automatic Image Resizing**: Resizes the final image to YouTube specs (1280x720).
- 🔍 **Apify Integration**: Uses two Apify actors: Youtube-Transcript-Scraper to extract the transcript, and youtube-scraper to get video metadata like title and channel.

🧰 What You'll Need
- **OpenAI API Key**
- **Apify Account & API Token**
- **YouTube video URL**
- **n8n instance (cloud or self-hosted)**

🔧 Step-by-Step Setup

1️⃣ Form & Parameter Assignment
- **Node**: Form Trigger
- **How it works**: Collects the YouTube URL via a form embedded in your n8n instance.
- **API Required**: None
- **Additional Node**: Set — converts the single input URL into the format Apify expects: an array of { url } objects.

2️⃣ Apify Actors for Data Extraction
- **Node**: HTTP Request (Query Metadata)
  - URL: https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
  - Payload: JSON with a startUrls array and filtering options like maxResults, isHD, etc.
- **Node**: HTTP Request (Query Transcript)
  - URL: https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
  - Payload: startUrls array
- **API Required**: Apify API Token (via HTTP Query Auth)
- **Notes**: You must have an Apify account and actor credits to use these actors.

3️⃣ OpenAI GPT-4o & DALL·E Generation
- **Node**: OpenAI (Prompt Creator) — uses the transcript and title to generate a DALL·E-compatible visual prompt.
- **Node**: OpenAI (Image Generator)
  - Resource: image
  - Model: DALL·E (default with GPT-4o key)
- **API Required**: OpenAI API Key
- **Prompt Strategy**: Create a minimalist YouTube thumbnail in an illustration style. The background should be a very simple, uncluttered setting with soft, ambient lighting that subtly reflects the essence of the transcript. The overall mood should be professional and non-cluttered, ensuring that the text overlay stands out without distraction. Do not include any text.

4️⃣ Resize for YouTube Format
- **Node**: Edit Image
- **Purpose**: Resize the final image to 1280x720 with ignoreAspectRatio set to true.
- **No API required** — this runs entirely in n8n.

👤 Created By
Robert Breen — Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ Tags
openai, dalle, youtube, thumbnail generator, apify, ai automation, image generation, illustration, prompt engineering, gpt-4o
by Max aka Mosheh
How it works • Webhook triggers from content creation system in Airtable • Downloads media (images/videos) from Airtable URLs • Uploads media to Postiz cloud storage • Schedules or publishes content across multiple platforms via Postiz API • Tracks publishing status back to Airtable for reporting Set up steps • Sign up for Postiz account at https://postiz.com/?ref=max • Connect your social media channels in Postiz dashboard • Get channel IDs and API key from Postiz settings • Add Postiz API key to n8n credentials (Header Auth) • Update channel IDs in "Prepare for Publish" node • Connect Airtable with your content database • Customize scheduling times per platform as needed • Full setup details in workflow sticky notes
by Davide
This workflow is a beginner-friendly tutorial demonstrating how to use the Evaluation tool to automatically score the AI’s output against a known correct answer (“ground truth”) stored in a Google Sheet.

Advantages
- ✅ **Beginner-friendly** – Provides a simple and clear structure to understand AI evaluation.
- ✅ **Flexible input sources** – Works with both Google Sheets datasets and manual test entries.
- ✅ **Integrated with Google Gemini** – Leverages a powerful AI model for text-based tasks.
- ✅ **Tool usage** – Demonstrates how an AI agent can call external tools (e.g., calculator) for accurate answers.
- ✅ **Automated evaluation** – Outputs are automatically compared against ground-truth data for factual correctness.
- ✅ **Scalable testing** – Can handle multiple dataset rows, making it useful for structured AI model evaluation.
- ✅ **Result tracking** – Saves both answers and correctness scores back to Google Sheets for easy monitoring.

How it Works
The workflow operates in two distinct modes, determined by the trigger:

1. **Manual Test Mode**: Triggered by "When clicking 'Execute workflow'". It sends a fixed question ("How much is 8 * 3?") to the AI agent and returns the answer to the user. This mode is for quick, ad-hoc testing.
2. **Evaluation Mode**: Triggered by "When fetching a dataset row". This mode reads rows of data from a linked Google Sheet. Each row contains an input (a question) and an expected_output (the correct answer). It processes each row as follows:
   - The input question is sent to the AI Agent node.
   - The AI Agent, powered by a Google Gemini model and equipped with a Calculator tool, processes the question and generates an answer (output).
   - The workflow then checks whether it is in evaluation mode. Instead of just returning the answer, it passes the AI's actual_output and the sheet's expected_output to another Evaluation node.
   - This node uses a second Google Gemini model as a "judge" to evaluate the factual correctness of the AI's answer compared to the expected one, generating a Correctness score on a scale from 1 to 5.
   - Finally, both the AI's actual_output and the automated correctness score are written back to a new column in the same row of the Google Sheet.

Set up Steps
To use this workflow, complete the following setup steps:

1. **Credentials Configuration**:
   - Set up the Google Sheets OAuth2 API credentials (named "Google Sheets account"). This allows n8n to read from and write to your Google Sheet.
   - Set up the Google Gemini (PaLM) API credentials (named "Google Gemini(PaLM) (Eure)"). This provides the AI language model capabilities for both the agent and the evaluator.
2. **Prepare Your Google Sheet**:
   - The workflow is pre-configured to use a specific Google Sheet. You must clone the provided template sheet (the URL is in the Sticky Note) to your own Google Drive.
   - In your cloned sheet, ensure you have at least two columns: one for the input/question (e.g., input) and one for the expected correct answer (e.g., expected_output). You may need to update the node parameters that reference $json.input and $json.expected_output to match your column names exactly.
3. **Update Document IDs**: After cloning the sheet, get its new Document ID from its URL and update the documentId field in all three Evaluation nodes ("When fetching a dataset row", "Set output Evaluation", and "Set correctness") to point to your new sheet instead of the original template.
4. **Activate the Workflow**: Once the credentials and sheet are configured, toggle the workflow to Active. You can then trigger a manual test run or set the "When fetching a dataset row" node to poll your sheet automatically to evaluate all rows.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.