by Maxim Osipovs
This n8n workflow template implements a dual-path architecture for AI customer support, based on the principles outlined in the research paper "A Locally Executable AI System for Improving Preoperative Patient Communication: A Multi-Domain Clinical Evaluation" (Sato et al.). The system, named LENOHA (Low Energy, No Hallucination, Leave No One Behind Architecture), uses a high-precision classifier to differentiate between high-stakes queries and casual conversation. Queries matching a known FAQ are answered with a pre-approved, verbatim response, structurally eliminating hallucination risk. All other queries are routed to a standard generative LLM for conversational flexibility. This template provides a practical blueprint for building safer, more reliable, and cost-efficient AI agents, particularly in regulated or high-stakes domains where factual accuracy is critical.

What This Template Does (Step-by-Step)
1. Loads an expert-curated FAQ from Google Sheets and creates a searchable vector store from the questions during a one-time setup flow.
2. Receives incoming user queries in real time via a chat trigger.
3. Classifies user intent by converting the query to an embedding and searching the vector store for the most semantically similar FAQ question.
4. Routes the query down one of two paths based on a configurable similarity score threshold.
5. Responds with a verbatim, pre-approved answer if a match is found (safe path), or generates a conversational reply via an LLM if no match is found (casual path).

Important Note for Production Use
This template uses an in-memory Simple Vector Store for demonstration purposes. For a production application, replace it with a persistent vector database (e.g., Pinecone, Chroma, Weaviate, Supabase) to store your embeddings permanently.
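The threshold-based routing at the heart of the dual-path design can be sketched in a few lines of JavaScript (e.g., inside an n8n Code node). This is a minimal illustration, not the template's actual node logic: the `cosineSimilarity` helper and the 0.85 threshold value are assumptions you would tune against your own FAQ data.

```javascript
// Sketch of the dual-path router: compare the query embedding against
// every FAQ-question embedding and route on the best similarity score.
// The 0.85 threshold is an illustrative assumption, not a template default.
const SIMILARITY_THRESHOLD = 0.85;

function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// faqEntries: [{ question, answer, embedding }]
function route(queryEmbedding, faqEntries) {
  let best = { score: -1, entry: null };
  for (const entry of faqEntries) {
    const score = cosineSimilarity(queryEmbedding, entry.embedding);
    if (score > best.score) best = { score, entry };
  }
  if (best.score >= SIMILARITY_THRESHOLD) {
    // Safe path: verbatim, pre-approved answer; no LLM involved.
    return { path: 'safe', answer: best.entry.answer, score: best.score };
  }
  // Casual path: hand the query to the generative LLM.
  return { path: 'casual', answer: null, score: best.score };
}
```

Because the safe path returns the stored answer verbatim, any hallucination risk is confined to the casual path, which is exactly the structural guarantee the LENOHA design describes.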
Required Integrations:
- Google Sheets (for the FAQ knowledge base)
- Hugging Face API (for creating embeddings)
- An LLM provider (e.g., OpenAI, Anthropic, Mistral)
- (Recommended) A persistent vector store integration

Best For:
- Organizations in regulated industries (finance, healthcare) requiring high accuracy
- Applications where reducing LLM operational costs is a priority
- Technical support agents that must provide precise, unchanging information
- Systems where auditability and deterministic responses for known issues are required

Key Benefits:
- Structurally eliminates hallucination risk for known topics
- Reduces reliance on expensive generative models for common queries
- Ensures deterministic, accurate, and consistent answers for your FAQ
- Provides high-speed classification via vector search
- Implements a research-backed architecture for building safer AI systems
by Ryan Nolan
This template and its YouTube video go over 8 different examples of how to use binary data within n8n. We start by bringing in binary data with Google Drive, FTP, or a form submission. After that, we jump into how to extract binary data, analyze an image, convert files, and use base64. This lesson also covers the recent update to grabbing binary data in later nodes. YouTube video: https://youtu.be/0Vefm8vXFxE
by Robert Schröder
AI Image Generation Workflow for Social Media Content

Overview
This n8n workflow automates the creation of photorealistic AI-generated images for social media content. The workflow uses RunComfy (a ComfyUI cloud service) combined with Airtable for data management to create high-quality images based on custom prompts and LoRA models.

Key Features
- Automated Image Generation: Creates photorealistic images using the Flux Realism model and custom LoRA models
- Airtable Integration: Centrally manages content requests, model information, and image status
- Cloud-based Processing: Utilizes RunComfy servers for powerful GPU processing without local hardware requirements
- Status Tracking: Monitors the generation process and automatically updates database entries
- Telegram Notifications: Sends success notifications after image completion

Technical Workflow
1. Server Initialization: Starts a RunComfy server with the configured specifications
2. Data Retrieval: Fetches content requests from the Airtable database
3. Image Generation: Sends prompts to ComfyUI with Flux Realism + LoRA models
4. Status Monitoring: Checks generation progress in 30-second intervals
5. Download: Downloads completed images
6. Database Update: Updates Airtable with image links and status
7. Server Cleanup: Deletes the RunComfy server for cost optimization

Prerequisites
- **RunComfy Membership** with API access
- **Airtable Account** with a configured database
- **Telegram Bot** for notifications
- **Flux Realism Workflow** in the RunComfy library
- **Uploaded LoRa Models** in RunComfy

Airtable Schema
The database must contain these fields:
- topic: Content description
- pose_1: Detailed image prompt
- LoRa Name Flux: LoRA model name
- Model: Character name
- pose_1_drive_fotolink: Link to generated image
- Bilder erstellt: Generation status

Configuration Options
- Image Resolution: Default 832x1216 px (adjustable in the ComfyUI parameters)
- Generation Parameters: 35 steps, Euler sampler, Guidance 2.0
- Server Size: "Large" for optimal performance (adjustable based on requirements)
- Time Intervals: 30 s status checks, 50 s server initialization

This workflow is ideal for content creators who need regular, high-quality, character-consistent images for social media campaigns.
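The 30-second status-monitoring step described above boils down to a generic poll-until-done loop. A minimal JavaScript sketch follows; `fetchStatus` is a hypothetical stand-in for the real RunComfy status request, and the interval/attempt limits are illustrative defaults, not values taken from the template:

```javascript
// Generic poll-until-done helper, mirroring the 30-second status checks.
// fetchStatus is a placeholder for the actual HTTP status request.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function waitForCompletion(fetchStatus, { intervalMs = 30000, maxAttempts = 40 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === 'completed') return true;   // ready to download the image
    if (status === 'failed') throw new Error('Generation failed');
    await sleep(intervalMs);                   // e.g. 30 s between checks
  }
  throw new Error('Timed out waiting for image generation');
}
```

Bounding the loop with `maxAttempts` matters in practice: an unbounded poll would keep a paid RunComfy server alive indefinitely, defeating the server-cleanup cost optimization.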
by Sabrina Ramonov
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description
This fully automated AI avatar social media system creates talking-head AI clone videos without you having to film or edit anything yourself. It combines n8n, an AI agent, HeyGen, and Blotato to research, create, and distribute talking-head AI clone videos to every social media platform every single day. This template is ideal for content creators, social media managers, social media agencies, small businesses, and marketers who want to scale short-form video creation without manually filming and editing every single video.

Overview
1. Trigger: Schedule
   - Configured to run once daily at 10am
2. AI News Research
   - Research viral news from the tech-focused forum Hacker News
   - Fetch the selected news item, plus discussion comments
3. AI Writer
   - AI writes a 30-second monologue script
   - AI writes a short video caption
4. Create Avatar Video
   - Call the HeyGen API (requires a paid API plan), specifying your avatar ID and voice ID
   - Create the avatar video, optionally passing in an image/video background if you have a green-screen avatar (matte: true)
5. Get Video
   - Wait a while, then fetch the completed avatar video
   - Upload the video to Blotato
6. Publish to Social Media via Blotato
   - Connect your Blotato account
   - Choose your social accounts
   - Either post immediately or schedule for later

Documentation
Full Tutorial

Troubleshooting
Check your Blotato API Dashboard to see every request, response, and error. Click on a request to see the details.

Need Help?
In the Blotato web app, click the orange button in the bottom right corner. This opens the Support messenger where I help answer technical questions.
by Zain Khan
AI Product Photography With Nano Banana and Jotform

Automate your product visuals! This n8n workflow instantly processes new product photography requests from Jotform or Google Sheets, uses an AI agent (Gemini Nano Banana) to generate professional AI product photography based on your product details and reference images, saves the final image to Google Drive, and updates the photo link in your Google Sheet for seamless record keeping.

How it Works
This n8n workflow operates as a fully automated pipeline for generating and managing AI product photographs:
1. Trigger: The workflow is triggered either manually, on a set schedule (e.g., hourly), or immediately upon a new submission from the connected Jotform (or when new "Pending" rows are detected in the Google Sheet on a scheduled or manual run).
2. Data Retrieval: If triggered by a schedule or manually, the workflow fetches new rows with a "Status" of "Pending" from the designated Google Sheet.
3. Data Preparation: The input data (Product Name, Description, Requirements, and URLs for the product and reference images) is prepared. The product and reference images are downloaded using HTTP Requests.
4. AI Analysis & Prompt Generation: An AI agent (using the Gemini model) analyzes the product details and image requirements, then generates a refined, professional prompt for the image generation model.
5. AI Photo Generation: The generated prompt, along with the downloaded product and reference images, is sent to the image generation model, referred to as "Gemini Nano Banana" (a powerful Google AI model for image generation), to create the final, high-quality AI product photograph.
6. File Handling: The raw image data is converted into a binary file format.
7. Storage: The generated photograph is saved with the product name as the filename to your specified Google Drive folder.
8. Record Update: The workflow updates the original row in the Google Sheet, changing the "Status" to "Completed" and adding the public URL of the newly saved image in the "Generated Image" column. If the trigger was from Jotform, a new record is appended to the Google Sheet.

Requirements
To use this workflow, you'll need the following accounts and credentials configured in n8n:
- **n8n Account:** Your self-hosted or cloud n8n instance.
- **Google Sheets/Drive Credentials:** An **OAuth2** or **API Key** credential for the Google Sheets and Google Drive nodes to read input and save the generated image.
- **Google Gemini API Key:** An API key for the Google Gemini nodes to access the AI agent for prompt generation and the image generation service (**Gemini Nano Banana**).
- **Jotform Credential (Optional):** A Jotform credential is only required if you want to use the Jotform webhook trigger. **Sign up for Jotform here:** https://www.jotform.com/?partner=zainurrehman
- **A Google Sheet and Jotform** with columns/fields for: Product Name, Product Description, Product Image (URL), Requirement, Reference Image 1 (URL), Reference Image 2 (URL), Status, and a blank Generated Image column.

How to Use
1. Set Up Your Integrations
   - Add the necessary credentials (Google Sheets, Google Drive, Gemini API, and optionally Jotform) in your n8n settings.
   - Specify the Google Sheet Document ID and Sheet Name in the Google Sheet nodes.
   - In the Upload to Drive node, select your desired Drive ID and Folder ID where the final images should be saved.
2. Prepare Input Data
   You can start the workflow either by:
   - **Submitting a Form:** Fill out and submit the connected **Jotform** with the product details and image links.
   - **Adding to a Sheet:** Manually add a new row to your Google Sheet with all the product and image details, ensuring the **Status** is set to **"Pending"**.
3. Run the Workflow
   - **For the Jotform Trigger:** Once the workflow is **Active**, a Jotform submission will automatically start the process.
   - **For the Scheduled/Manual Trigger:** Activate the **Schedule Trigger** for automatic runs (e.g., hourly), or click the **Manual Trigger** node and select **"Execute Workflow"** to process all current "Pending" requests in the Google Sheet.

The generated photograph will be uploaded to Google Drive, and its link will be automatically recorded in the "Generated Image" column in your Google Sheet.
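The scheduled-run branch described above reduces to a simple filter-and-update over sheet rows. A sketch of what that logic could look like in an n8n Code node follows; the column names match the sheet schema listed in the Requirements, while the row objects themselves are illustrative:

```javascript
// Select only rows still awaiting generation, as the scheduled trigger does.
function pendingRows(rows) {
  return rows.filter((row) => row.Status === 'Pending');
}

// After a successful generation, mark the row complete and attach the
// public link, matching the "Record Update" step.
function markCompleted(row, imageUrl) {
  return { ...row, Status: 'Completed', 'Generated Image': imageUrl };
}
```

Filtering on an explicit "Pending" status makes scheduled runs idempotent: rows already marked "Completed" are never reprocessed, so the workflow can safely run hourly.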
by Evoort Solutions
TikTok to MP4 Converter with Google Drive & Sheets

Convert TikTok videos to MP4 or MP3 (without watermark), upload them to Google Drive, and log conversion attempts into Google Sheets automatically, powered by the TikTok Download Audio Video API.

Description
This n8n automation accepts a TikTok video URL via a form, sends it to the TikTok Download Audio Video API, downloads the watermark-free MP4, uploads it to Google Drive, and logs the result (success/failure) into Google Sheets.

Node-by-Node Overview

| # | Node | Functionality |
|---|------|---------------|
| 1 | Form Trigger | Displays a form for user input of the TikTok video URL. |
| 2 | TikTok RapidAPI Request | Calls the TikTok Downloader API to get the MP4 link. |
| 3 | If Condition | Checks if the API response status is "success". |
| 4 | MP4 Downloader | Downloads the video file using the returned "no watermark" MP4 URL. |
| 5 | Upload to Google Drive | Uploads the video file to the Google Drive root folder. |
| 6 | Set Google Drive Permission | Makes the file publicly shareable via link. |
| 7 | Google Sheets (Success) | Logs the TikTok URL + public Drive link into a Google Sheet. |
| 8 | Wait Node | Delays to prevent rapid write operations on error. |
| 9 | Google Sheets (Failure) | Logs failed attempts with Drive_URL = N/A. |

Use Cases
- Social media managers downloading user-generated content
- Educators saving TikTok content for offline lessons
- Agencies automating short-form video curation
- Workflow automation demonstrations with n8n

Key Benefits
- MP4 without watermark via the TikTok Download Audio Video API
- Automated Google Drive upload & shareable links
- Centralized logging in Google Sheets
- Error handling and retry-safe structure
- Fully customizable and extendable within n8n

Ideal for anyone looking to automate TikTok video archiving with full control over file storage and access.

How to Get Your API Key for the TikTok Download Audio Video API
1. Go to TikTok Download Audio Video API - RapidAPI.
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API key will be visible in the "x-rapidapi-key" header.
6. Copy and paste this key into the httpRequest node in your workflow.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
Start Automating with n8n
by Marth
Workflow Description: Automated YouTube Short Viral History (Blotato + GPT-4o)

This workflow is a powerful, self-sustaining, end-to-end content automation pipeline designed to feed your YouTube Shorts channel with consistent, high-quality, and highly engaging videos focused on "What if history..." scenarios. This solution completely eliminates manual intervention across the creative, production, and publishing stages. It links the creative power of a GPT-4o AI agent with the video rendering capabilities of the Blotato API, all orchestrated by n8n.

How It Works
The automation runs through a five-step, scheduled process:
1. Trigger and Idea Generation: The Schedule Trigger starts the workflow (default is 10:00 AM daily). The AI agent (GPT-4o) acts as a copywriter/researcher, automatically brainstorming a random "What if history..." topic, researching relevant facts, and formulating a viral, hook-driven 60-second video script, along with a title and caption.
2. Visual Production Request: The formatted script is sent to the Blotato API via the Create Video node. Blotato begins rendering the text-to-video short based on the preset style parameters (cinematic style, specific voice ID, and AI models).
3. Status Check and Wait: The Wait node pauses the workflow, and the Get Video node repeatedly checks the Blotato system until the video rendering status is confirmed as done.
4. Media Upload: The completed video file is uploaded to the Blotato media library using an HTTP Request node, preparing it for publishing.
5. Automated Publishing: The final YT Post node (another HTTP Request to the Blotato API) automatically publishes the video to your linked YouTube channel, using the video URL and the AI-generated title and short caption.

Set Up Steps
To activate and personalize this content pipeline in n8n, follow these steps:
1. OpenAI Credential: Ensure your OpenAI API key credential is created and connected to the Brainstorm Idea node (Language Model). The workflow uses GPT-4o by default.
2. Blotato API Key: Obtain your Blotato API key. Open the Prepare Video node and manually insert your Blotato API key into the blotato_api_key field.
3. YouTube Account ID: Find the Account ID (or Channel ID) for the YouTube channel you want to post to. Open the Prepare for Publish node and manually insert your YouTube Account ID into the youtube_id field.
4. Customize Video Style (Optional): If desired, adjust the visual aesthetic by modifying parameters in the Prepare Video node, such as:
   - voiceId: To change the video narrator.
   - style: To change the visual theme (e.g., from cinematic to documentary).
   - text_to_image_model and image_to_video_model: To change the underlying AI generation models.
5. Activate Workflow: Save the workflow and toggle the main switch to Active. The first video will be created and published on the next scheduled run.
by Kai S. Huxmann
Objective
This template helps you create clean, structured, and visually understandable workflows that are easy to read, present to clients, and collaborate on with teams. Whether you're onboarding a client, building reusable automations, or working across a team, this template gives you a solid foundation for workflow visual design and communication.

What's inside?
- Visual layout structure suggestion
- Clear segmentation into basic functional parts
- Color-coding suggestion to define the meaning of colors
- Color-coded nodes (with a built-in legend):
  - Green: Operational and stable
  - Yellow: Work in progress
  - Red: Failing / error
  - Orange: Needs review or improvement
  - Blue: User input required
  - Dark grey: Deprecated or paused

Who is this for?
This template is ideal for:
- Freelancers or agencies delivering workflows to clients
- Teams working together on large-scale automations
- Anyone creating reusable templates or internal standards
- Beginners who want to learn clean visual patterns supporting an easy-to-maintain code base

Why use this?
> "A workflow should explain itself visually; this template helps it do just that."

- Better team collaboration
- Easier onboarding of new developers
- **Faster understanding** for clients, even non-technical ones
- Reduces maintenance time in the long run

How to use
1. Clone this template and start from it when creating new workflows
2. Keep color conventions consistent (especially in early project stages)
3. Use it to build a visual standard across your team or organization

Reminder
This is a non-functional template; it contains structure, patterns, and documentation examples only. Replace the example nodes with your own logic.
by Harshil Agrawal
This workflow demonstrates how to create a new deployment when new content gets added to the database. This example workflow can be used when building a JAMstack site.
- Webhook node: Triggers the workflow when new content gets added. For this example, we have configured the webhook in GraphCMS.
- Netlify node: Starts the build process and deploys the website. You will have to select your site from the Site ID dropdown list. To identify the deployment, we are passing a title.
by Juan Carlos Cavero Gracia
This automation template is a revolutionary AI-powered interior design and product visualization workflow that allows users to seamlessly place any object or artwork into real spaces using artificial intelligence. Upload two photos, one of your product/artwork and another of the target space, and watch as AI intelligently composites them together, then converts the result into a captivating animated video with professional camera movements. The final video is automatically published across TikTok, Instagram Reels, and YouTube Shorts for maximum reach.

Note: This workflow uses Google's Gemini 2.5 Flash (Nano Banana) for intelligent image composition and FAL AI's WAN v2.2-a14b model for video generation. Each complete generation costs approximately $0.25 USD, making it an incredibly cost-effective solution for professional-quality content creation.

Who Is This For?
- **Interior Designers & Architects:** Visualize how furniture, artwork, or decor will look in client spaces before making purchases or installations.
- **Art Dealers & Galleries:** Show potential buyers how paintings or sculptures would appear in their homes or offices with realistic placement and lighting.
- **E-commerce Retailers:** Create compelling product demonstrations by showing furniture, artwork, or home decor items in realistic room settings.
- **Real Estate Professionals:** Help clients visualize how their furniture or art collection would look in new properties.
- **Content Creators & Influencers:** Generate engaging "before and after" style content showing product placements in various environments.
- **Marketing Agencies:** Scale visual content production for furniture brands, art dealers, and home decor companies.

What Problem Does This Workflow Solve?
Traditional product visualization requires expensive 3D rendering software, professional photography setups, or costly photoshoot arrangements. This workflow eliminates these barriers by:
- **Intelligent Object Placement:** AI analyzes both the object/artwork and target space to determine optimal positioning, scale, and lighting integration.
- **Realistic Integration:** Advanced AI composition ensures shadows, reflections, and lighting match perfectly between the object and environment.
- **Professional Animation:** Converts static compositions into cinematic videos with smooth camera movements that highlight the placement naturally.
- **Cost-Effective Production:** At just $0.25 per generation, it's exponentially cheaper than traditional 3D rendering or professional photography.
- **Instant Multi-Platform Distribution:** Automatically formats and publishes content across all major social media platforms simultaneously.

How It Works
1. Dual Image Upload: Users upload two photos through an intuitive web form:
   - Photo 1: The object, artwork, or furniture piece to be placed
   - Photo 2: The target room or space where the item should appear
   - Optional Description: Additional context about the desired placement
2. Image Processing & Hosting: Both images are automatically uploaded to ImgBB for reliable cloud access throughout the workflow.
3. AI-Powered Composition: Google's Gemini 2.5 Flash (Nano Banana) analyzes both images and intelligently composites the object into the space, considering:
   - Proper scale and proportions
   - Realistic lighting and shadows
   - Perspective and depth matching
   - Environmental integration
4. Video Generation: FAL AI's WAN v2.2-a14b model transforms the composed image into a professional 4-second video featuring:
   - Smooth camera panning movements
   - Natural motion blur effects
   - Cinematic framing and composition
5. Quality Assurance: Automated status monitoring ensures successful generation before proceeding to publication.
6. Multi-Platform Publishing: The final video is automatically uploaded to TikTok, Instagram Reels, and YouTube Shorts with customizable captions.
Setup
1. FAL AI Credentials: Create an account at fal.ai and add your API credentials for:
   - Gemini 2.5 Flash (Nano Banana) image composition
   - WAN v2.2-a14b image-to-video conversion
2. ImgBB API Setup:
   - Sign up at imgbb.com for free image hosting
   - Generate an API key and update the imgbb_api_key value in the "Set APIs Vars" node
3. Upload-Post Configuration:
   - Create an account at upload-post.com
   - Connect your TikTok, Instagram, and YouTube accounts
   - Add your Upload-Post credentials to the "Upload Post" node
4. Prompt Customization: In the "Set Prompts" node, fine-tune:
   - prompt-image-edit: "Place the [object] in the room on the back wall, respecting the [object] perfectly and the background room and the camera frame in the photo of the room."
   - prompt-image-to-video: Camera movement style and cinematic effects
5. Cost Management: Monitor usage, as each generation costs approximately $0.25 USD through the FAL AI services.

Requirements
- **Accounts:** n8n, fal.ai, imgbb.com, upload-post.com, social media accounts (TikTok, Instagram, YouTube).
- **API Keys & Credentials:** FAL AI API token, ImgBB API key, Upload-Post authentication.
- **Budget:** Approximately $0.25 USD per complete workflow execution.
- **Social Media Setup:** Business/Creator accounts connected through the Upload-Post platform.
Features
- **Dual-Image Intelligence:** Sophisticated AI analysis of both object and space for perfect integration
- **Cost-Effective Processing:** Only $0.25 per generation compared to hundreds of dollars for traditional methods
- **Advanced AI Models:** Google Gemini 2.5 Flash (Nano Banana) + FAL WAN v2.2-a14b for premium quality
- **Realistic Lighting Integration:** AI matches shadows, reflections, and ambient lighting automatically
- **Professional Video Output:** Cinematic camera movements optimized for social media engagement
- **Multi-Platform Optimization:** Automatic formatting for TikTok, Instagram Reels, and YouTube Shorts
- **Robust Error Handling:** Built-in retry mechanisms and quality verification
- **Scalable Production:** Handle multiple object-space combinations efficiently

Transform your product visualization workflow today: simply upload a photo of any object and the space where you want to place it, and let AI create stunning, professional videos that showcase perfect integration for just $0.25 per generation.
by AArtIntelligent
Objective
This workflow automatically imports product images from Google Drive and associates them with templates and products in Odoo.
by panyanyany
Overview
This workflow utilizes the Defapi API with the Sora 2 AI model to generate stunning viral videos with creative AI-generated motion, effects, and storytelling. Simply provide a creative prompt describing your desired video scene, and optionally upload an image as a reference. The AI generates professional-quality video content perfect for TikTok, YouTube, marketing campaigns, and creative projects.

Input: Creative prompt (required) + optional image
Output: AI-generated viral video ready for social media and content marketing

Users interact through a simple form, providing a text prompt describing the desired video scene and optionally uploading an image for context. The system automatically submits the request to the Defapi Sora 2 API, monitors the generation status in real time, and retrieves the final video output. This solution is ideal for content creators, social media marketers, video producers, and businesses who want to quickly generate engaging video content with minimal setup.

Prerequisites
- A Defapi account and API key: Sign up at Defapi.org to obtain your API key for Sora 2 access.
- An active n8n instance (cloud or self-hosted) with HTTP Request and form submission capabilities.
- Basic knowledge of AI prompts for video generation to achieve optimal results. Example prompt: "A pack of dogs driving tiny cars in a high-speed chase through a city, wearing sunglasses and honking their horns, with dramatic action music and slow-motion jumps over fire hydrants." For 15-second HD videos, prefix your prompt with (15s,hd).
- (Optional) An image to use as a reference or starting point for video generation. Image restrictions: Avoid uploading images with real people or highly realistic human faces, as they will be rejected during content review.

**Important Notes:** The API requires proper authentication via Bearer token for all requests. Content undergoes multi-stage moderation. Avoid violence, adult content, copyrighted material, and living celebrities in both prompts and images.

Setup Instructions
1. Obtain API Key: Register at Defapi.org and generate your API key with Sora 2 access. Store it securely; do not share it publicly.
2. Configure Credentials: In n8n, create HTTP Bearer Auth credentials named "Defapi account" with your API key.
3. Configure the Form: In the "Upload Image" form trigger node, ensure the following fields are set up:
   - Prompt (text field, required): Describe the video scene you want to generate
   - Image (file upload, optional): Optionally upload .jpg, .png, or .webp image files as reference
4. Test the Workflow: Click "Execute Workflow" in n8n to activate the form trigger. Access the generated form URL and enter your creative video prompt. Optionally upload an image for additional context. The workflow will process any uploaded image through the "Convert to JSON" node, converting it to base64 format. The request is sent to the Sora 2 API endpoint at Defapi.org. The system will wait 10 seconds and then poll the API status until video generation is complete.
5. Handle Outputs: The final "Format and Display Results" node formats and displays the generated video URL for download or embedding.
Workflow Structure
The workflow consists of the following nodes:
1. Upload Image (Form Trigger): Collects user input: creative prompt (required) and optional image file
2. Convert to JSON (Code Node): Converts any uploaded image to a base64 data URI and formats the prompt
3. Send Sora 2 Generation Request to Defapi.org API (HTTP Request): Submits the video generation request to the Sora 2 API
4. Wait for Processing Completion (Wait Node): Waits 10 seconds before checking status
5. Obtain the generated status (HTTP Request): Polls the API task query endpoint for completion status
6. Check if Image Generation is Complete (IF Node): Checks if the status equals 'success'
7. Format and Display Results (Set Node): Extracts and formats the final video URL output

Technical Details
- **API Endpoint:** https://api.defapi.org/api/sora2/gen (POST request)
- **Model Used:** Sora 2 AI video generation model
- **Video Capabilities:** Supports 15-second videos and high-definition (HD) output
- **Status Check Endpoint:** https://api.defapi.org/api/task/query (GET request)
- **Wait Time:** 10 seconds between status checks
- **Image Processing:** If an image is uploaded, it is converted to base64 data URI format (data:image/[type];base64,[data]) for API submission
- **Authentication:** Bearer token authentication using the configured Defapi account credentials
- **Request Body Format:** { "prompt": "Your video description here", "images": ["data:image/jpeg;base64,..."] } Note: The images array can contain an image or be empty if no image is provided
- **Response Format:** The API returns a task_id which is used to poll for completion status. The final result contains data.result.video with the video URL.
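The "Convert to JSON" step and the request body format documented above can be sketched in a few lines of Node.js-style JavaScript. The data-URI shape and the `prompt`/`images` body fields follow the format described in the Technical Details; the buffer and MIME-type inputs are illustrative:

```javascript
// Build the base64 data URI for an uploaded image, matching the
// data:image/[type];base64,[data] format the API expects.
function toDataUri(buffer, mimeType) {
  return `data:${mimeType};base64,${buffer.toString('base64')}`;
}

// Assemble the Sora 2 request body; the images array stays empty
// when no reference image was uploaded.
function buildRequestBody(prompt, imageBuffer = null, mimeType = 'image/jpeg') {
  return {
    prompt,
    images: imageBuffer ? [toDataUri(imageBuffer, mimeType)] : [],
  };
}
```

Keeping `images` as an array (rather than omitting the field) matches the documented body shape, where the array "can contain an image or be empty if no image is provided."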
- **Accepted Image Formats:** .jpg, .png, .webp
- **Specialized For:** Viral video content, social media videos, creative video marketing

Customization Tips
- **Enhance Prompts:** Include specifics like:
  - Scene description and action sequences
  - Character behaviors and emotions
  - Camera movements and angles (e.g., slow-motion, dramatic zoom)
  - Audio/music style (e.g., dramatic, upbeat, cinematic)
  - Visual effects and atmosphere
  - Timing and pacing details
- **Enable 15s and HD Output:** To generate 15-second high-definition videos, start your prompt with (15s,hd). For example: "(15s,hd) A pack of dogs driving tiny cars in a high-speed chase through a city..."

Content Moderation
The API implements a three-stage content review process:
1. Image Review: Rejects images with real people or highly realistic human faces
2. Prompt Filtering: Checks for violence, adult content, copyrighted material, and living celebrities
3. Output Review: Final check after generation (often causes failures at 90%+ completion)

Best Practices:
- Avoid real human photos; use illustrations or cartoons instead
- Keep prompts generic; avoid brand names and celebrity names
- You can reference verified Sora accounts (e.g., "let @sama dance")
- If generation fails at 90%+ completion, simplify your prompt and try again

Example Prompts
- "A pack of dogs driving tiny cars in a high-speed chase through a city, wearing sunglasses and honking their horns, with dramatic action music and slow-motion jumps over fire hydrants."
- "(15s,hd) Animated fantasy landscape with floating islands, waterfalls cascading into clouds, magical creatures flying, golden sunset lighting, epic orchestral music."
- "(15s,hd) Product showcase with 360-degree rotation, dramatic lighting changes, particle effects, modern electronic background music."
Use Cases
- **Social Media Content:** Generate eye-catching videos for Instagram Reels, TikTok, and YouTube Shorts
- **Marketing Campaigns:** Create unique promotional videos from product images
- **Creative Projects:** Transform static images into dynamic storytelling videos
- **Content Marketing:** Produce engaging video content without expensive production costs
- **Viral Content Creation:** Generate shareable, attention-grabbing videos for maximum engagement