by Dhruv Dalsaniya
This workflow is designed for e-commerce teams, marketing teams, or creators who want to automate the production of high-quality, AI-generated product visuals and ad creatives.

Here is what the workflow does:
- Accepts a product description and other creative inputs through a web form.
- Uses AI to transform your text input into a detailed, creative prompt (a sketch of this step appears at the end of this description).
- Uses that prompt to generate a product image.
- Analyzes the generated image and creates a new prompt to generate a second image that includes a model, adding a human element to the visual.
- Creates a final prompt from the model image to generate a short, cinematic video.
- Automatically uploads all generated assets (images and video) to your specified hosting platform, providing you with direct URLs for immediate use.

This template is an efficient solution for scaling your content creation efforts, reducing time spent on manual design, and producing a consistent stream of visually engaging content for your online store, social media, and advertising campaigns.

Prerequisites:
- **OpenRouter account:** required for the AI agents that generate image and video prompts.
- **GOAPI account:** used for the final video generation process.
- **Media hosting platform:** a self-hosted service like MediaUpload, or an alternative like Google Drive or a similar service that can provide a direct URL for uploaded images and videos. This is essential for passing the visuals between different steps of the workflow.
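For illustration only, here is how the first AI step (turning the form input into a detailed image prompt) could be expressed in an n8n Code node calling OpenRouter's OpenAI-compatible chat endpoint. The template itself uses AI agent nodes for this; the model name and the form field name are assumptions, and the sketch assumes a recent n8n version where fetch is available in the Code node.

```javascript
// Illustrative sketch only; the template uses AI agent nodes for this step.
// Model name and form field name are assumptions.
const productDescription = $json['Product Description']; // assumed form field name

const res = await fetch('https://openrouter.ai/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_OPENROUTER_KEY',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'openai/gpt-4o-mini', // any OpenRouter chat model works here
    messages: [
      { role: 'system', content: 'Turn the product description into a detailed, creative image-generation prompt.' },
      { role: 'user', content: productDescription },
    ],
  }),
});
const data = await res.json();

// Pass the generated prompt on to the image-generation step.
return [{ json: { imagePrompt: data.choices[0].message.content } }];
```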
by Snehasish Konger
How it works:
This template takes approved Notion pages and syncs them to a Webflow CMS collection as draft items. It reads pages marked Status = "Ready for publish" in a specific Notion database/project, merges JSON content stored across page blocks into a single object, then either creates a new CMS item or updates the existing one by name. On success it sets the Notion page to "5. Done"; on failure it switches the page to "On Hold" for review.

Step-by-step:
1. **Manual Trigger**: You start the run with When clicking 'Execute workflow'.
2. **Get Notion Pages** (Notion → Database: Tech Content Tasks): Pull all pages with Status = "Ready for publish", scoped to the target Project.
3. **Loop Over Items** (Split In Batches): Process one Notion page at a time.
4. **Code (Pass-through)**: Expose page fields (e.g., name, id, url, sector) for downstream nodes.
5. **Get Notion Block (children)**: Fetch all blocks under the page id.
6. **Merge Content (Code)**: Concatenate code-block fragments, parse them into one mergedContent JSON object, and attach the page metadata (a sketch of this node appears at the end of this description).
7. **Get Webflow Items (HTTP GET)**: List items in the target Webflow collection to check whether an item with the same name already exists.
8. **Update or Create (Switch)**:
   - No match: Create Webflow Item (POST) with isDraft: true, mapping all fieldData (e.g., category titles, meta title, excerpt, hero copy/image, benefits, problem pointers, FAQ, ROI).
   - Match: Update Webflow Item (Draft) (PATCH) for that id. Keep the existing slug, write the latest fieldData, leave isDraft: true.
9. **Write Back Status (Notion)**: Success path → set Status = "5. Done". Error path → set Status = "On Hold".
10. **Log Submission (Code)**: Log a compact object with status, notionPageId, webflowItemId, timestamp, and action.
11. **Wait → Loop**: Short pause, then continue with the next page.

Tools integration:
- **Notion**: source database and page blocks for approved content.
- **Webflow CMS API**: destination collection; items are created/updated as **drafts**.
- **n8n Code**: JSON merge and lightweight logging.
- **Split In Batches + Wait**: controlled, item-wise processing.

Want hands-free publishing? Add a Cron trigger before step 2 to run on a schedule.
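As a rough guide, the Merge Content (Code) node described in step 6 might look like the sketch below. It assumes the previous node returns raw Notion block objects and that the upstream pass-through node is literally named "Code (Pass-through)"; adjust node names and fields to match your workflow.

```javascript
// Merge Content (Code): join code-block fragments into one JSON object and
// attach the page metadata. Block structure follows the raw Notion API shape;
// the referenced node name is an assumption.
const blocks = $input.all().map(item => item.json);

// Collect the text of every code block, in order.
const fragments = blocks
  .filter(b => b.type === 'code')
  .map(b => (b.code?.rich_text || []).map(t => t.plain_text).join(''));

// Parse the concatenated fragments into a single mergedContent object.
let mergedContent;
try {
  mergedContent = JSON.parse(fragments.join(''));
} catch (e) {
  throw new Error('Could not parse merged JSON content: ' + e.message);
}

// Metadata exposed by the earlier pass-through Code node.
const page = $('Code (Pass-through)').first().json;

return [{
  json: {
    mergedContent,
    name: page.name,
    notionPageId: page.id,
    url: page.url,
    sector: page.sector,
  },
}];
```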
by Alok Kumar
📒 Generate Product Requirements Document (PRD) and test scenarios from form input to PDF with OpenRouter and APITemplate.io

This workflow generates a Product Requirements Document (PRD) and test scenarios from structured form inputs. It uses OpenRouter LLMs (GPT/Claude) for natural-language generation and APITemplate.io for PDF export.

Who's it for
This template is designed for product managers, business analysts, QA teams, and startup founders who need to quickly create Product Requirement Documents (PRDs) and test cases from structured inputs.

How it works
1. A Form Trigger collects key product details (name, overview, audience, goals, requirements).
2. The LLM Chain (OpenRouter GPT/Claude) generates a professional, structured PRD in Markdown format.
3. A second LLM Chain creates test scenarios and Gherkin-style test cases based on the PRD.
4. Data is cleaned and merged using a Set node.
5. The workflow sends the formatted document to APITemplate.io to generate a polished PDF (a request sketch appears at the end of this description).
6. Finally, the workflow returns the PDF via a Form Completion node for easy download.

⚡ Requirements
- OpenRouter API key (or any LLM)
- APITemplate.io account

🎯 Use cases
- Rapid PRD drafting for startups.
- QA teams generating test scenarios automatically.
- Standardized documentation workflows.

👉 Customize by editing prompts, PDF templates, or extending with integrations (Slack, Notion, Confluence).

Need help? Ask in the n8n Forum! Happy Automating with n8n! 🚀
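The PDF step in point 5 is handled by an HTTP Request node in the template; as a rough, unofficial sketch it could look like the snippet below. The endpoint, the template_id query parameter, the body field names, and the response field are assumptions based on APITemplate.io's REST API, so verify them against your account's documentation before relying on this.

```javascript
// Illustrative PDF-export call (not the exact node configuration).
// Endpoint, template_id parameter, body fields, and response shape are assumptions.
const body = {
  prd_markdown: $json.prd,           // PRD text from the first LLM chain (assumed field)
  test_scenarios: $json.testCases,   // Gherkin-style cases from the second chain (assumed field)
};

const res = await fetch(
  'https://rest.apitemplate.io/v2/create-pdf?template_id=YOUR_TEMPLATE_ID',
  {
    method: 'POST',
    headers: { 'X-API-KEY': 'YOUR_APITEMPLATE_KEY', 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  }
);
const result = await res.json();

// The Form Completion node can then serve or link the generated PDF.
return [{ json: { pdfUrl: result.download_url } }];
```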
by Davide
This workflow is a beginner-friendly tutorial demonstrating how to use the Evaluation tool to automatically score the AI's output against a known correct answer ("ground truth") stored in a Google Sheet.

Advantages
✅ **Beginner-friendly** – provides a simple and clear structure to understand AI evaluation.
✅ **Flexible input sources** – works with both Google Sheets datasets and manual test entries.
✅ **Integrated with Google Gemini** – leverages a powerful AI model for text-based tasks.
✅ **Tool usage** – demonstrates how an AI agent can call external tools (e.g., a calculator) for accurate answers.
✅ **Automated evaluation** – outputs are automatically compared against ground-truth data for factual correctness.
✅ **Scalable testing** – can handle multiple dataset rows, making it useful for structured AI model evaluation.
✅ **Result tracking** – saves both answers and correctness scores back to Google Sheets for easy monitoring.

How it Works
The workflow operates in two distinct modes, determined by the trigger:

Manual Test Mode: Triggered by "When clicking 'Execute workflow'". It sends a fixed question ("How much is 8 * 3?") to the AI agent and returns the answer to the user. This mode is for quick, ad-hoc testing.

Evaluation Mode: Triggered by "When fetching a dataset row". This mode reads rows of data from a linked Google Sheet. Each row contains an input (a question) and an expected_output (the correct answer). Each row is processed as follows:
1. The input question is sent to the AI Agent node.
2. The AI Agent, powered by a Google Gemini model and equipped with a Calculator tool, processes the question and generates an answer (output).
3. The workflow then checks whether it is in evaluation mode. Instead of just returning the answer, it passes the AI's actual_output and the sheet's expected_output to another Evaluation node.
4. That node uses a second Google Gemini model as a "judge" to evaluate the factual correctness of the AI's answer compared to the expected one, generating a Correctness score on a scale from 1 to 5.
5. Finally, both the AI's actual_output and the automated correctness score are written back to a new column in the same row of the Google Sheet.

Set up Steps
To use this workflow, complete the following setup steps:
1. Credentials configuration:
   - Set up the Google Sheets OAuth2 API credentials (named "Google Sheets account"). This allows n8n to read from and write to your Google Sheet.
   - Set up the Google Gemini (PaLM) API credentials (named "Google Gemini(PaLM) (Eure)"). This provides the AI language-model capabilities for both the agent and the evaluator.
2. Prepare your Google Sheet:
   - The workflow is pre-configured to use a specific Google Sheet. Clone the provided template sheet (the URL is in the Sticky Note) to your own Google Drive.
   - In your cloned sheet, ensure you have at least two columns: one for the input/question (e.g., input) and one for the expected correct answer (e.g., expected_output). You may need to update the node parameters that reference $json.input and $json.expected_output to match your column names exactly.
3. Update document IDs: After cloning the sheet, get its new Document ID from its URL and update the documentId field in all three Evaluation nodes ("When fetching a dataset row", "Set output Evaluation", and "Set correctness") to point to your new sheet instead of the original template.
4. Activate the workflow: Once the credentials and sheet are configured, toggle the workflow to Active.
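Not part of the template, but handy while setting it up: a tiny deterministic Code-node stand-in for the LLM judge, useful to confirm that answers and sheet columns are wired correctly before the Gemini-based correctness metric takes over. Column and field names follow the ones described above and may need adjusting.

```javascript
// Simple deterministic stand-in for the 1–5 LLM correctness score.
// Exact match scores 5, loose containment 3, anything else 1.
const actual = String($json.output ?? '').trim();
const expected = String($json.expected_output ?? '').trim();

let correctness = 1;
if (actual === expected) correctness = 5;
else if (actual.includes(expected) || expected.includes(actual)) correctness = 3;

return [{ json: { actual_output: actual, expected_output: expected, correctness } }];
```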
Once the workflow is active, you can trigger a manual test run or set the "When fetching a dataset row" node to poll your sheet automatically and evaluate all rows. Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
by Max aka Mosheh
How it works
• Webhook triggers from the content creation system in Airtable
• Downloads media (images/videos) from Airtable URLs
• Uploads the media to Postiz cloud storage
• Schedules or publishes content across multiple platforms via the Postiz API
• Tracks publishing status back to Airtable for reporting

Set up steps
• Sign up for a Postiz account at https://postiz.com/?ref=max
• Connect your social media channels in the Postiz dashboard
• Get your channel IDs and API key from the Postiz settings
• Add the Postiz API key to n8n credentials (Header Auth)
• Update the channel IDs in the "Prepare for Publish" node (a minimal sketch of this node follows below)
• Connect Airtable with your content database
• Customize scheduling times per platform as needed
• Full setup details are in the workflow sticky notes
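As a rough orientation, the "Prepare for Publish" Code node could look something like this. The channel IDs, Airtable column names, and scheduling times are placeholders/assumptions; replace them with the IDs from your Postiz settings and the fields in your content base.

```javascript
// "Prepare for Publish" sketch: map each Airtable record to one post per platform.
// Channel IDs, field names, and times are placeholders; adjust to your setup.
const channels = {
  instagram: 'YOUR_IG_CHANNEL_ID',
  linkedin: 'YOUR_LI_CHANNEL_ID',
  x: 'YOUR_X_CHANNEL_ID',
};

// Per-platform posting times; customize as needed.
const scheduleTimes = { instagram: '18:00', linkedin: '09:00', x: '12:00' };

const posts = [];
for (const item of $input.all()) {
  const record = item.json;
  for (const [platform, channelId] of Object.entries(channels)) {
    posts.push({
      json: {
        platform,
        channelId,
        content: record['Caption'],            // assumed Airtable field name
        mediaUrls: record['Media URLs'] ?? [],  // assumed Airtable field name
        scheduleTime: scheduleTimes[platform],
      },
    });
  }
}

return posts;
```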
by Robert Breen
This n8n workflow automatically generates a custom YouTube thumbnail using OpenAI's DALL·E, based on a YouTube video's transcript and title. It uses Apify actors to extract the video metadata and transcript, then processes the data into a prompt for DALL·E and creates a high-resolution image for use as a thumbnail.

✅ Key Features
- 📥 **Form Trigger**: accepts a YouTube URL from the user.
- 🧠 **GPT-4o Prompt Creation**: summarizes the transcript and title into a descriptive DALL·E prompt.
- 🎨 **DALL·E Image Generation**: produces a clean, minimalist YouTube thumbnail with OpenAI's image model.
- 🪄 **Automatic Image Resizing**: resizes the final image to YouTube specs (1280x720).
- 🔍 **Apify Integration**: uses two Apify actors: Youtube-Transcript-Scraper to extract the transcript, and youtube-scraper to get video metadata like title, channel, etc.

🧰 What You'll Need
- OpenAI API key
- Apify account & API token
- YouTube video URL
- n8n instance (cloud or self-hosted)

🔧 Step-by-Step Setup

1️⃣ Form & Parameter Assignment
- **Node**: Form Trigger
- **How it works**: collects the YouTube URL via a form embedded in your n8n instance.
- **API required**: none
- **Additional node**: Set, which converts the single input URL into the format Apify expects: an array of { url } objects (a Code-node sketch of this conversion appears at the end of this description).

2️⃣ Apify Actors for Data Extraction
- **Node**: HTTP Request (Query Metadata)
  - URL: https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
  - Payload: JSON with a startUrls array and filtering options like maxResults, isHD, etc.
- **Node**: HTTP Request (Query Transcript)
  - URL: https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
  - Payload: startUrls array
- **API required**: Apify API token (via HTTP Query Auth)
- **Notes**: you must have an Apify account and actor credits to use these actors.

3️⃣ OpenAI GPT-4o & DALL·E Generation
- **Node**: OpenAI (Prompt Creator), which uses the transcript and title to generate a DALL·E-compatible visual prompt.
- **Node**: OpenAI (Image Generator); Resource: image; Model: DALL·E (default with GPT-4o key)
- **API required**: OpenAI API key
- **Prompt strategy**: "Create a minimalist YouTube thumbnail in an illustration style. The background should be a very simple, uncluttered setting with soft, ambient lighting that subtly reflects the essence of the transcript. The overall mood should be professional and non-cluttered, ensuring that the text overlay stands out without distraction. Do not include any text."

4️⃣ Resize for YouTube Format
- **Node**: Edit Image
- **Purpose**: resize the final image to 1280x720 with ignoreAspectRatio set to true.
- **No API required**: this step runs entirely in n8n.

👤 Created By
Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ Tags
openai, dalle, youtube, thumbnail generator, apify, ai automation, image generation, illustration, prompt engineering, gpt-4o
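For reference, the Set-node conversion in step 1️⃣ can also be written as a Code node, as sketched below; the form field name is an assumption. The resulting object becomes the JSON body for the two Apify HTTP Request nodes listed in step 2️⃣.

```javascript
// Turn the single form URL into the startUrls array of { url } objects
// that both Apify actors expect. The form field name is an assumption.
const videoUrl = $json['YouTube URL'];

const apifyPayload = {
  startUrls: [{ url: videoUrl }],
  // Optional filters for the metadata actor; adjust or remove as needed.
  maxResults: 1,
  isHD: true,
};

// Sent as the JSON body to:
// https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
// and (startUrls only) to:
// https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
return [{ json: apifyPayload }];
```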
by Automate With Marc
🤖 Telegram Image Editor with Nano Banana

Send an image to your Telegram bot, and this workflow will automatically enhance it with Google's Nano Banana (via the Wavespeed API), then return the polished version to the same chat, seamlessly.

👉 Watch step-by-step video tutorials of workflows like these at www.youtube.com/@automatewithmarc

What it does
- Listens on Telegram for incoming photo messages
- Downloads the file sent by the user
- Uploads it to Google Drive (temporary storage for processing)
- Sends the image to the Nano Banana API with a real-estate-style cleanup and enhancement prompt
- Polls until the job is complete (handles async processing; a polling sketch appears at the end of this description)
- Returns the edited image to the same Telegram chat

Perfect for
- Real-estate agents previewing polished property photos instantly
- Social media managers editing on the fly from Telegram
- Anyone who wants a "send → cleaned → returned" image flow without manual edits

Apps & Services
- Telegram Bot API (trigger + send/receive files)
- Google Drive (temporary file storage)
- Wavespeed / Google Nano Banana (AI-powered image editing)

Setup
1. Connect your Telegram Bot API token in n8n.
2. Add your Wavespeed API key for Nano Banana.
3. Link your Google Drive account (temporary storage).
4. Deploy the workflow and send a test photo to your Telegram bot.

Customization
- Adjust the Nano Banana prompt for different styles (e.g., e-commerce cleanup, portrait retouching, color correction).
- Replace Google Drive with another storage service if preferred.
- Add logging to Google Sheets or Airtable to track edits.
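For context, the polling step roughly follows the pattern below. The actual template implements it with Wait and IF nodes rather than a single Code node, and the status URL, auth scheme, and response field names here are assumptions about the Wavespeed API, so treat this purely as a sketch.

```javascript
// Illustrative polling sketch for the async Nano Banana edit job.
// Status URL, auth scheme, and response fields are assumptions.
const statusUrl = $json.statusUrl; // assumed to be returned when the job is submitted

let result;
for (let attempt = 0; attempt < 20; attempt++) {
  const res = await fetch(statusUrl, {
    headers: { Authorization: 'Bearer YOUR_WAVESPEED_KEY' },
  });
  result = await res.json();
  if (result.status === 'completed') break;    // assumed status value
  await new Promise(r => setTimeout(r, 5000)); // wait 5 s between checks
}

if (!result || result.status !== 'completed') {
  throw new Error('Image edit did not complete in time');
}

return [{ json: { editedImageUrl: result.output?.url } }]; // assumed field name
```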
by Robert Schröder
AI Image Generation Workflow for Social Media Content

Overview
This n8n workflow automates the creation of photorealistic AI-generated images for social media content. It uses RunComfy (a ComfyUI cloud service) combined with Airtable for data management to create high-quality images based on custom prompts and LoRA models.

Key Features
- **Automated image generation**: creates photorealistic images using the Flux Realism model and custom LoRA models
- **Airtable integration**: centrally manages content requests, model information, and image status
- **Cloud-based processing**: uses RunComfy servers for powerful GPU processing without local hardware requirements
- **Status tracking**: monitors the generation process and automatically updates database entries
- **Telegram notifications**: sends success notifications after image completion

Technical Workflow
1. Server initialization: starts a RunComfy server with the configured specifications
2. Data retrieval: fetches content requests from the Airtable database
3. Image generation: sends prompts to ComfyUI with Flux Realism + LoRA models
4. Status monitoring: checks generation progress at 30-second intervals (a sketch appears at the end of this description)
5. Download: downloads the completed images
6. Database update: updates Airtable with image links and status
7. Server cleanup: deletes the RunComfy server for cost optimization

Prerequisites
- **RunComfy membership** with API access
- **Airtable account** with a configured database
- **Telegram bot** for notifications
- **Flux Realism workflow** in the RunComfy library
- **Uploaded LoRA models** in RunComfy

Airtable Schema
The database must contain these fields:
- topic: content description
- pose_1: detailed image prompt
- LoRa Name Flux: LoRA model name
- Model: character name
- pose_1_drive_fotolink: link to the generated image
- Bilder erstellt (German for "images created"): generation status

Configuration Options
- Image resolution: default 832x1216 px (adjustable in the ComfyUI parameters)
- Generation parameters: 35 steps, Euler sampler, guidance 2.0
- Server size: "Large" for optimal performance (adjustable based on requirements)
- Time intervals: 30 s status checks, 50 s server initialization

This workflow is ideal for content creators who need regular, high-quality, character-consistent images for social media campaigns.
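The 30-second status check in step 4 talks to the ComfyUI instance running on the RunComfy server. A minimal sketch of such a check against ComfyUI's /history endpoint is shown below; the server URL and prompt-id field names are assumptions, and RunComfy's own server-management API (start/stop) is separate and not shown.

```javascript
// Status-check sketch against the ComfyUI API on the RunComfy server.
// Server URL and prompt-id field names are assumptions.
const serverUrl = $json.serverUrl;  // e.g. the URL of your RunComfy instance
const promptId = $json.promptId;    // returned when the prompt was queued via POST /prompt

const res = await fetch(`${serverUrl}/history/${promptId}`);
const history = await res.json();

// ComfyUI's history entry contains an `outputs` object once the job has finished.
const done = Boolean(history[promptId]?.outputs);

return [{ json: { promptId, done } }];
```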
by Evoort Solutions
📥 TikTok to MP4 Converter with Google Drive & Sheets

Convert TikTok videos to MP4 or MP3 (without watermark), upload them to Google Drive, and log conversion attempts in Google Sheets automatically, powered by the TikTok Download Audio Video API.

📝 Description
This n8n automation accepts a TikTok video URL via a form, sends it to the TikTok Download Audio Video API, downloads the watermark-free MP4, uploads it to Google Drive, and logs the result (success/failure) into Google Sheets.

🧩 Node-by-Node Overview

| # | Node | Functionality |
|---|---|---|
| 1 | 🟢 Form Trigger | Displays a form for user input of the TikTok video URL. |
| 2 | 🌐 TikTok RapidAPI Request | Calls the TikTok Downloader API to get the MP4 link. |
| 3 | 🔍 If Condition | Checks if the API response status is "success". |
| 4 | ⬇️ MP4 Downloader | Downloads the video file using the returned "no watermark" MP4 URL. |
| 5 | ☁️ Upload to Google Drive | Uploads the video file to the Google Drive root folder. |
| 6 | 🔑 Set Google Drive Permission | Makes the file publicly shareable via link. |
| 7 | 📄 Google Sheets (Success) | Logs the TikTok URL + public Drive link into a Google Sheet. |
| 8 | ⏱️ Wait Node | Delays to prevent rapid write operations on error. |
| 9 | 📑 Google Sheets (Failure) | Logs failed attempts with Drive_URL = N/A. |

✅ Use Cases
- 📲 Social media managers downloading user-generated content
- 🧠 Educators saving TikTok content for offline lessons
- 💼 Agencies automating short-form video curation
- 🤖 Workflow automation demonstrations with n8n

🎯 Key Benefits
- ✔️ MP4 without watermark via the TikTok Download Audio Video API
- ✔️ Automated Google Drive upload and shareable links
- ✔️ Centralized logging in Google Sheets
- ✔️ Error handling and a retry-safe structure
- ✔️ Fully customizable and extendable within n8n

💡 Ideal for anyone looking to automate TikTok video archiving with full control over file storage and access.

🔐 How to Get Your API Key for the TikTok Download Audio Video API
1. Go to 👉 TikTok Download Audio Video API - RapidAPI.
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click the "Endpoints" tab.
5. Your API key will be visible in the "x-rapidapi-key" header.

🔑 Copy and paste this key into the httpRequest node in your workflow (a sketch of the request appears below).

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
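For orientation, the RapidAPI call made by the "TikTok RapidAPI Request" node follows the standard RapidAPI header pattern shown below. The host, path, query parameter, and form field name are assumptions; copy the exact values from the API's "Endpoints" tab on RapidAPI.

```javascript
// Sketch of the RapidAPI call. Host, path, and query parameter are assumptions;
// only the x-rapidapi-key / x-rapidapi-host header names are standard RapidAPI conventions.
const res = await fetch(
  'https://YOUR-TIKTOK-API-HOST.p.rapidapi.com/YOUR-ENDPOINT-PATH' + // copy from the Endpoints tab
    '?url=' + encodeURIComponent($json['TikTok URL']),               // assumed form field name
  {
    headers: {
      'x-rapidapi-key': 'YOUR_RAPIDAPI_KEY',
      'x-rapidapi-host': 'YOUR-TIKTOK-API-HOST.p.rapidapi.com',
    },
  }
);
const data = await res.json();

// The If node then checks for a "success" status and the MP4 Downloader
// fetches the returned "no watermark" URL.
return [{ json: data }];
```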
by Sabrina Ramonov 🍄
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description
This fully automated AI Avatar Social Media system creates talking-head AI clone videos WITHOUT you having to film or edit yourself. It combines n8n, an AI agent, HeyGen, and Blotato to research, create, and distribute talking-head AI clone videos to every social media platform, every single day.

This template is ideal for content creators, social media managers, social media agencies, small businesses, and marketers who want to scale short-form video creation without manually filming and editing every single video.

Overview
1. Trigger: Schedule
   - Configured to run once daily at 10am
2. AI News Research
   - Research viral news from the tech-focused forum Hacker News
   - Fetch the selected news item, plus discussion comments
3. AI Writer
   - AI writes a 30-second monologue script
   - AI writes a short video caption
4. Create Avatar Video
   - Call the HeyGen API (requires a paid API plan), specifying your avatar ID and voice ID (a request sketch appears at the end of this description)
   - Create the avatar video, optionally passing in an image/video background if you have a green-screen avatar (matte: true)
5. Get Video
   - Wait a while, then fetch the completed avatar video
   - Upload the video to Blotato
6. Publish to Social Media via Blotato
   - Connect your Blotato account
   - Choose your social accounts
   - Either post immediately or schedule for later

📄 Documentation
Full Tutorial

Troubleshooting
Check your Blotato API Dashboard to see every request, response, and error. Click on a request to see the details.

Need Help?
In the Blotato web app, click the orange button in the bottom right corner. This opens the Support messenger where I help answer technical questions.
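The HeyGen call in step 4 is a plain HTTP request; a hedged sketch is shown below. The endpoint and body shape follow HeyGen's publicly documented v2 video/generate API, but treat the field names (including matte and the response shape) and the script field as assumptions and confirm them against the current HeyGen docs.

```javascript
// Hedged sketch of the HeyGen avatar-video request (paid API plan required).
// Field names and response shape are assumptions based on HeyGen's v2 API docs.
const body = {
  video_inputs: [
    {
      character: {
        type: 'avatar',
        avatar_id: 'YOUR_AVATAR_ID',
        matte: true, // assumption: relevant for green-screen avatars with a custom background
      },
      voice: {
        type: 'text',
        voice_id: 'YOUR_VOICE_ID',
        input_text: $json.script, // the 30-second monologue from the AI writer (assumed field)
      },
    },
  ],
  dimension: { width: 720, height: 1280 }, // vertical short-form format
};

const res = await fetch('https://api.heygen.com/v2/video/generate', {
  method: 'POST',
  headers: { 'X-Api-Key': 'YOUR_HEYGEN_KEY', 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
});
const data = await res.json();

// The workflow then waits, fetches the finished video, and uploads it to Blotato.
return [{ json: { videoId: data.data?.video_id } }]; // assumed response field
```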
by Harshil Agrawal
This workflow demonstrates how to create a new deployment when new content gets added to the database. This example workflow can be used when building a JAMstack site.

Webhook node: This node triggers the workflow when new content gets added. For this example, we have configured the webhook in GraphCMS.

Netlify node: This node starts the build process and deploys the website. You will have to select your site from the Site ID dropdown list. To identify the deployment, we pass a title.
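If you want to test the trigger without publishing real content in GraphCMS, you can send a minimal payload to the Webhook node's test URL yourself. The URL and payload shape below are placeholders; match your actual webhook path and whatever your CMS sends.

```javascript
// Simulate the CMS webhook during development (Node.js 18+ with top-level await,
// or any JS console). URL and payload shape are placeholders.
const res = await fetch('https://YOUR-N8N-HOST/webhook-test/YOUR-WEBHOOK-PATH', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    operation: 'publish',
    data: { title: 'New blog post' }, // the title is reused to label the Netlify deployment
  }),
});
console.log(res.status); // 200 means the workflow was triggered
```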
by Juan Carlos Cavero Gracia
This automation template is a revolutionary AI-powered interior design and product visualization workflow that allows users to seamlessly place any object or artwork into real spaces using artificial intelligence. Upload two photos (one of your product or artwork, another of the target space) and watch as AI intelligently composites them together, then converts the result into a captivating animated video with professional camera movements. The final video is automatically published across TikTok, Instagram Reels, and YouTube Shorts for maximum reach.

Note: This workflow uses Google's Gemini 2.5 Flash (Nano Banana) for intelligent image composition and FAL AI's WAN v2.2-a14b model for video generation. Each complete generation costs approximately $0.25 USD, making it a very cost-effective solution for professional-quality content creation.

Who Is This For?
- **Interior designers & architects**: visualize how furniture, artwork, or decor will look in client spaces before making purchases or installations.
- **Art dealers & galleries**: show potential buyers how paintings or sculptures would appear in their homes or offices, with realistic placement and lighting.
- **E-commerce retailers**: create compelling product demonstrations by showing furniture, artwork, or home decor items in realistic room settings.
- **Real estate professionals**: help clients visualize how their furniture or art collection would look in new properties.
- **Content creators & influencers**: generate engaging "before and after" style content showing product placements in various environments.
- **Marketing agencies**: scale visual content production for furniture brands, art dealers, and home decor companies.

What Problem Does This Workflow Solve?
Traditional product visualization requires expensive 3D rendering software, professional photography setups, or costly photoshoot arrangements. This workflow eliminates these barriers by providing:
- **Intelligent object placement**: AI analyzes both the object/artwork and the target space to determine optimal positioning, scale, and lighting integration.
- **Realistic integration**: advanced AI composition matches shadows, reflections, and lighting between the object and the environment.
- **Professional animation**: converts static compositions into cinematic videos with smooth camera movements that highlight the placement naturally.
- **Cost-effective production**: at roughly $0.25 per generation, it is far cheaper than traditional 3D rendering or professional photography.
- **Instant multi-platform distribution**: automatically formats and publishes content across all major social media platforms simultaneously.

How It Works
**Dual image upload**: users upload two photos through an intuitive web form:
- Photo 1: the object, artwork, or furniture piece to be placed
- Photo 2: the target room or space where the item should appear
- Optional description: additional context about the desired placement

**Image processing & hosting**: both images are automatically uploaded to ImgBB for reliable cloud access throughout the workflow; a minimal sketch of this upload step follows below.
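ImgBB's v1 upload endpoint takes the API key as a query parameter and the image as base64 form data. In the sketch below, the binary property name ("data") and the getBinaryDataBuffer helper usage depend on your n8n version, so treat them as assumptions and verify before use.

```javascript
// Upload one of the two form photos to ImgBB and return its hosted URL.
// Binary property name and helper availability are assumptions; verify on your n8n version.
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');

const form = new URLSearchParams();
form.append('image', buffer.toString('base64'));

const res = await fetch(
  `https://api.imgbb.com/1/upload?key=${$json.imgbb_api_key}`, // key set in the "Set APIs Vars" node
  { method: 'POST', body: form }
);
const result = await res.json();

return [{ json: { hostedUrl: result.data?.url } }];
```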
**AI-powered composition**: Google's Gemini 2.5 Flash (Nano Banana) analyzes both images and intelligently composites the object into the space, considering:
- proper scale and proportions
- realistic lighting and shadows
- perspective and depth matching
- environmental integration

**Video generation**: FAL AI's WAN v2.2-a14b model transforms the composed image into a professional 4-second video featuring:
- smooth camera panning movements
- natural motion-blur effects
- cinematic framing and composition

**Quality assurance**: automated status monitoring ensures successful generation before proceeding to publication.

**Multi-platform publishing**: the final video is automatically uploaded to TikTok, Instagram Reels, and YouTube Shorts with customizable captions.

Setup
1. FAL AI credentials: create an account at fal.ai and add your API credentials for:
   - Gemini 2.5 Flash (Nano Banana) image composition
   - WAN v2.2-a14b image-to-video conversion
2. ImgBB API setup:
   - Sign up at imgbb.com for free image hosting.
   - Generate an API key and update the imgbb_api_key value in the "Set APIs Vars" node.
3. Upload-Post configuration:
   - Create an account at upload-post.com.
   - Connect your TikTok, Instagram, and YouTube accounts.
   - Add your Upload-Post credentials to the "Upload Post" node.
4. Prompt customization: in the "Set Prompts" node, fine-tune:
   - prompt-image-edit: "Place the [object] in the room on the back wall, respecting the [object] perfectly and the background room and the camera frame in the photo of the room."
   - prompt-image-to-video: camera movement style and cinematic effects
5. Cost management: monitor usage, as each generation costs approximately $0.25 USD through the FAL AI services.

Requirements
- **Accounts**: n8n, fal.ai, imgbb.com, upload-post.com, social media accounts (TikTok, Instagram, YouTube).
- **API keys & credentials**: FAL AI API token, ImgBB API key, Upload-Post authentication.
- **Budget**: approximately $0.25 USD per complete workflow execution.
- **Social media setup**: business/creator accounts connected through the Upload-Post platform.

Features
- **Dual-image intelligence**: sophisticated AI analysis of both object and space for seamless integration
- **Cost-effective processing**: only about $0.25 per generation, compared with hundreds of dollars for traditional methods
- **Advanced AI models**: Google Gemini 2.5 Flash (Nano Banana) + FAL WAN v2.2-a14b for premium quality
- **Realistic lighting integration**: AI matches shadows, reflections, and ambient lighting automatically
- **Professional video output**: cinematic camera movements optimized for social media engagement
- **Multi-platform optimization**: automatic formatting for TikTok, Instagram Reels, and YouTube Shorts
- **Robust error handling**: built-in retry mechanisms and quality verification
- **Scalable production**: handle multiple object and space combinations efficiently

Transform your product visualization workflow today: simply upload a photo of any object and the space where you want to place it, and let AI create stunning, professional videos that showcase seamless integration for just $0.25 per generation.