by Muhammad Asadullah
Daily Blog Automation Workflow

Fully automated blog creation system using n8n + AI Agents + Image Generation.

Overview

This workflow automates the entire blog creation pipeline—from topic research to final publication. Three specialized AI agents collaborate to produce publication-ready blog posts with custom images, all saved directly to your Supabase database.

How It Works

1. Research Agent (Topic Discovery)
**Triggers**: Runs on schedule (default: daily at 4 AM)
**Process**:
- Fetches existing blog titles from Supabase to avoid duplicates
- Uses Google Search + RSS feeds to identify trending topics in your niche
- Scrapes competitor content to find content gaps
- Generates detailed topic briefs with SEO keywords, search intent, and differentiation angles
**Output**: Comprehensive research document with SERP analysis and content strategy

2. Writer Agent (Content Creation)
**Triggers**: Receives research from Agent 1
**Process**:
- Writes the full blog article based on the research brief
- Follows strict SEO and readability guidelines (no AI fluff, natural tone, actionable content)
- Structures content with proper HTML markup
- Includes key sections: hook, takeaways, frameworks, FAQs, CTAs
- Places image placeholders with mock URLs (https://db.com/image_1, etc.)
**Output**: Complete JSON object with title, slug, excerpt, tags, category, and full HTML content

3. Image Prompt Writer (Visual Generation)
**Triggers**: Receives blog content from Agent 2
**Process**:
- Analyzes blog content to determine the number and type of images needed
- Generates detailed 150-word prompts for each image (feature image + content images)
- Creates prompts optimized for the Nano-Banana image model
- Names each image descriptively for SEO
**Output**: Structured prompts for 3-6 images per blog post

4. Image Generation Pipeline
**Process**:
- Loops through each image prompt
- Generates images via the Nano-Banana API (Wavespeed.ai)
- Downloads and converts images to PNG
- Uploads to a Supabase storage bucket
- Generates permanent signed URLs
- Replaces mock URLs in the HTML with real image URLs
**Output**: Blog HTML with all images embedded

5. Publication
- Final blog post saved to the Supabase blogs table as a draft
- Ready for immediate publishing or review

Key Features

✅ Duplicate Prevention: Checks existing blogs before researching new topics
✅ SEO Optimized: Natural language, proper heading structure, keyword integration
✅ Human-Like Writing: No robotic phrases, varied sentence structure, actionable advice
✅ Custom Images: Generated specifically for each blog's content
✅ Fully Structured: JSON output with all metadata (tags, category, excerpt, etc.)
✅ Error Handling: Automatic retries with wait periods between agent calls
✅ Tool Integration: Google Search, URL scraping, RSS feeds for research

Setup Requirements

1. API Keys Needed
- **Google Gemini API**: For Gemini 2.5 Pro/Flash models (content generation/writing)
- **Groq API (optional)**: For the Kimi-K2-Instruct model (research/writing)
- **Serper.dev API**: For Google Search (2,500 free searches/month)
- **Wavespeed.ai API**: For Nano-Banana image generation
- **Supabase Account**: For database and image storage

2. Supabase Setup
- Create a blogs table with fields: title, slug, excerpt, category, tags, featured_image, status, featured, content
- Create a storage bucket for blog images
- Configure the bucket as public or use signed URLs

3. Workflow Configuration
Update these placeholders:
- **RSS Feed URLs**: Replace [your website's rss.xml] with your site's RSS feed
- **Storage URLs**: Update Supabase storage paths in the "Upload object" and "Generate presigned URL" nodes
- **API Keys**: Add your credentials to all HTTP Request nodes
- **Niche/Brand**: Customize the Research Agent system prompt with your industry keywords
- **Writing Style**: Adjust the Writer Agent prompt for your brand voice

Customization Options

Change Image Provider
Replace the "nano banana" node with:
- Gemini Imagen 3/4
- DALL-E 3
- Midjourney API
- Any Wavespeed.ai model

Adjust Schedule
Modify the "Schedule Trigger" to run:
- Multiple times daily
- Specific days of the week
- On-demand via webhook

Alternative Research Tools
Replace Serper.dev with:
- Perplexity API (included as an alternative node)
- Custom web scraping
- Different search providers

Output Format

{
  "title": "Your SEO-Optimized Title",
  "slug": "your-seo-optimized-title",
  "excerpt": "Compelling 2-3 sentence summary with key benefits.",
  "category": "Your Category",
  "tags": ["tag1", "tag2", "tag3", "tag4"],
  "author_name": "Your Team Name",
  "featured": false,
  "status": "draft",
  "content": "...complete HTML with embedded images..."
}

Performance Notes
- **Average runtime**: 15-25 minutes per blog post
- **Cost per post**: ~$0.10-0.30 (depending on API usage)
- **Image generation**: 10-15 seconds per image with Nano-Banana
- **Retry logic**: Automatically handles API timeouts with 5-15 minute wait periods

Best Practices
- Review Before Publishing: The workflow saves posts with "draft" status for human review
- Monitor API Limits: Track Serper.dev searches and image generation quotas
- Test Custom Prompts: Adjust Research/Writer prompts to match your brand
- Image Quality: Review generated images; regenerate if needed
- SEO Validation: Check slugs and meta descriptions before going live

Workflow Architecture

3 Main Phases:
1. Research → Writer → Image Prompts (sequential AI agent chain)
2. Image Generation → Upload → URL Replacement (loop-based processing)
3. Final Assembly → Database Insert (single save operation)

Error Handling:
- Wait nodes between agents prevent rate limiting
- Retry logic on agent failures (max 2 retries)
- Conditional checks ensure content quality before proceeding

Result: Hands-free blog publishing that maintains quality while saving 3-5 hours per post.
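The mock-URL replacement step of the image pipeline can be pictured as a simple string substitution in an n8n Code node. This is an illustrative sketch, not the workflow's actual node code; the placeholder pattern follows the https://db.com/image_N convention described above, and the field names are assumptions.

```javascript
// Replace mock image URLs (https://db.com/image_1, image_2, ...) in the
// blog HTML with the real signed URLs returned by Supabase storage.
// Sketch only: the argument shapes (html string, ordered URL array) are
// illustrative assumptions.
function replaceMockUrls(html, signedUrls) {
  return signedUrls.reduce(
    (out, url, i) => out.split(`https://db.com/image_${i + 1}`).join(url),
    html
  );
}

const html = '<img src="https://db.com/image_1"><img src="https://db.com/image_2">';
const urls = ['https://x.supabase.co/a.png', 'https://x.supabase.co/b.png'];
const result = replaceMockUrls(html, urls);
console.log(result);
```

Using split/join rather than a regex avoids having to escape the URL's special characters.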
by SOLOVIEVA ANNA
Who this is for
- Users who frequently receive images or documents via LINE or email
- Teams needing automatic OCR + AI summarization
- Anyone who wants hands-free document processing and structured storage

How it works
1. Triggers: LINE Webhook and Gmail IMAP Trigger capture incoming messages or emails.
2. Source Tagging: Inputs are tagged as LINE or EMAIL for later branching.
3. File Handling: Files are uploaded to Google Drive and converted for analysis.
4. OCR: An AI vision model extracts all readable text from the document image.
5. AI Summarization: A text model produces a concise summary.
6. Logging: The summary is appended to Google Sheets for record-keeping.
7. Email Drafting: A Gmail draft is generated containing the OCR text and summary.

How to set up
1. Connect your LINE, Gmail, OpenAI, and Google Drive/Sheets credentials.
2. Update folder IDs, sheet names, and authentication fields as needed.
3. Optional: customize the summarization instructions.

Customization ideas
- Add translation or classification steps
- Modify the output format for Slack/Notion
- Store files in date-based Drive folders
by Ehsan
Analyze food ingredients from Telegram photos using Gemini and Airtable

🛡️ Personal Ingredient Bodyguard

Turn your Telegram bot into an intelligent food safety scanner. This workflow analyzes photos of ingredient labels sent via Telegram, extracts the text using AI, and cross-references it against your personal database of "Good" and "Bad" ingredients in Airtable.

It solves the problem of manually reading tiny, complex labels for allergies or dietary restrictions. Whether you are vegan, Halal, allergic to nuts, or just avoiding specific additives, this workflow acts as a strict, personalized bodyguard for your diet. It even features a customizable "Persona" (like a Sarcastic Bodyguard) to make safety checks fun.

🎯 Who is it for?
- People with specific dietary restrictions (vegan, gluten-free, keto).
- Individuals with food allergies (nuts, dairy, shellfish).
- Special dietary observers (Halal, Kosher).
- Health-conscious shoppers avoiding specific additives (e.g., E120, Aspartame).

🚀 How it works
1. Trigger: You send a photo of a product label to your Telegram bot.
2. Fetch Rules: The workflow retrieves your active "Watchlist" (ingredients to avoid/prefer) and "Persona" settings from Airtable.
3. Vision & Logic: It uses an AI vision model to extract text from the image (OCR) and Google Gemini to analyze the text against your strict veto rules (e.g., "Safe" only if ZERO bad items are found).
4. Response: The bot replies instantly on Telegram with a Safe/Unsafe verdict, highlighting detected ingredients using HTML formatting.
5. Log: The result is saved back to Airtable for your records.

⚙️ How to set up
This workflow relies on a specific Airtable structure to function as the "Brain."

Set up Airtable
1. Sign up for Airtable: Click here
2. Copy the required base: Click here to copy the "Ingredients Brain" base
3. Connect Airtable to n8n (5-min guide): Watch Tutorial

Set up Telegram
1. Message @BotFather on Telegram to create a new bot and get your API token.
2. Add your Telegram credentials in n8n.
Configure AI
1. Add your Google Gemini API credentials.
2. Note on OCR: This template is configured to use a local LLM for OCR to save costs (via the OpenAI-compatible node). If you do not have a local model running, simply swap the "OpenAI Chat Model" node for a standard GPT-4o or Gemini Vision node.

📋 Requirements
- **n8n** (Cloud or self-hosted)
- **Airtable** account (free tier works)
- **Telegram** account
- **Google Gemini** API key
- **Local LLM** (optional, for free OCR) OR **OpenAI/Gemini** key (for standard cloud vision)

🎨 How to customize
- **Change the Persona:** Go to the "Preferences" table in Airtable to change the bot's personality (e.g., "Helpful Nutritionist") and output language.
- **Update Ingredients:** Add or remove items in the "Watchlist" table. Mark them as "Good Stuff" or "Bad Stuff" and set Status to "Active".
- **Adjust Sensitivity:** The AI prompt in the "AI Agent" node is set to strict "Veto" mode (Bad overrides Good). You can modify the system prompt to change this logic.

⚠️ Disclaimer
This tool is for informational purposes only.
- Not Medical Advice: Do not rely on this for life-threatening allergies.
- AI Limitations: OCR can misread text, and AI can hallucinate.
- Verify: Always double-check the physical product label. Use at your own risk.
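The strict "veto" rule (any Bad ingredient overrides every Good one) can be sketched as a small function. This is an illustrative sketch of the logic only, not the template's actual prompt or node code; the watchlist record shape is an assumption.

```javascript
// Veto-mode check: a product is "Safe" only if ZERO watchlisted "Bad"
// ingredients appear in the OCR text. "Good" matches are reported but
// never override a "Bad" hit. The watchlist shape is an assumed example.
function vetoCheck(ocrText, watchlist) {
  const text = ocrText.toLowerCase();
  const hits = (type) =>
    watchlist
      .filter((w) => w.type === type && text.includes(w.name.toLowerCase()))
      .map((w) => w.name);
  const bad = hits('Bad');
  return { verdict: bad.length === 0 ? 'Safe' : 'Unsafe', bad, good: hits('Good') };
}

const watchlist = [
  { name: 'Aspartame', type: 'Bad' },
  { name: 'Oat fiber', type: 'Good' },
];
const result = vetoCheck('Ingredients: oat fiber, aspartame, water', watchlist);
console.log(result.verdict); // "Unsafe" — one Bad hit vetoes the Good one
```

Relaxing the sensitivity, as the customization note suggests, amounts to changing how `bad` and `good` are weighed against each other.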
by vinci-king-01
Meeting Notes Distributor – Mailchimp and MongoDB

This workflow automatically converts raw meeting recordings or written notes into concise summaries, stores them in MongoDB for future reference, and distributes the summaries to all meeting participants through Mailchimp. It is ideal for teams that want to keep everyone aligned without manual copy-and-paste or email chains.

Pre-conditions/Requirements

Prerequisites
- n8n instance (self-hosted or cloud)
- Audio transcription service or written notes available via HTTP endpoint
- MongoDB database (cloud or self-hosted)
- Mailchimp account with an existing Audience list

Required Credentials
- **MongoDB** – Connection string with insert permission
- **Mailchimp API Key** – To send campaigns
- **(Optional) HTTP Service Auth** – If your transcription/notes endpoint is secured

Specific Setup Requirements

| Component        | Example Value                         | Notes                                                |
|------------------|---------------------------------------|------------------------------------------------------|
| MongoDB Database | meeting_notes                         | Database in which summaries will be stored           |
| Collection Name  | summaries                             | Collection automatically created if it doesn't exist |
| Mailchimp List   | Meeting Participants                  | Audience list containing participant email addresses |
| Notes Endpoint   | https://example.com/api/meetings/{id} | Returns raw transcript or note text (JSON)           |

How it works

Key Steps:
1. **Schedule Trigger**: Fires daily (or on-demand) to check for new meeting notes.
2. **HTTP Request**: Downloads raw notes or the transcript from your endpoint.
3. **Code Node**: Uses an AI or custom function to generate a concise summary.
4. **If Node**: Skips processing if the summary already exists in MongoDB.
5. **MongoDB**: Inserts the new summary document.
6. **Split in Batches**: Splits participants into Mailchimp-friendly batch sizes.
7. **Mailchimp**: Sends personalized summary emails to each participant.
8. **Wait**: Ensures rate limits are respected between Mailchimp calls.
9. **Merge**: Consolidates success/failure results for logging or alerting.

Set up steps

Setup Time: 15-25 minutes
1. Clone the workflow: Import or copy the JSON into your n8n instance.
2. Configure Schedule Trigger: Set the cron expression (e.g., every weekday at 18:00).
3. Set HTTP Request URL: Replace the placeholder with your transcription/notes endpoint. Add auth headers if needed.
4. Add MongoDB Credentials: Enter your connection string in the MongoDB node.
5. Customize Summary Logic: Open the Code node to tweak summarization length, language, or model.
6. Mailchimp Credentials: Supply your API key and select the correct Audience list.
7. Map Email Fields: Ensure participant emails are supplied from transcription metadata or an external source.
8. Test Run: Execute once manually to verify the MongoDB insert and email delivery.
9. Activate Workflow: Enable the workflow so it runs on its defined schedule.

Node Descriptions

Core Workflow Nodes:
- **Schedule Trigger** – Initiates the workflow at predefined intervals.
- **HTTP Request** – Retrieves the latest meeting data (transcript or notes).
- **Code** – Generates a summarized version of the meeting content.
- **If** – Checks MongoDB for duplicates to avoid re-sending.
- **MongoDB** – Stores finalized summaries for archival and audit.
- **SplitInBatches** – Breaks the participant list into manageable chunks.
- **Mailchimp** – Sends summary emails via campaigns or transactional messages.
- **Wait** – Pauses between batches to honor Mailchimp rate limits.
- **Merge** – Aggregates success/failure responses for logging.

Data Flow:
- Schedule Trigger → HTTP Request → Code → If
- If summary is new: MongoDB → SplitInBatches → Mailchimp → Wait
- Merge collates all results

Customization Examples

1. Change Summary Length

// Inside the Code node
const rawText = items[0].json.text;
const maxSentences = 5; // adjust to 3, 7, etc.
items[0].json.summary = summarize(rawText, maxSentences);
return items;

2. Personalize Mailchimp Subject

// In the Set node before Mailchimp
items[0].json.subject = `Recap: ${items[0].json.meetingTitle} – ${new Date().toLocaleDateString()}`;
return items;

Data Output Format

The workflow outputs structured JSON data:

{
  "meetingId": "abc123",
  "meetingTitle": "Quarterly Planning",
  "summary": "Key decisions on roadmap, budget approvals...",
  "participants": ["alice@example.com", "bob@example.com"],
  "mongoInsertId": "65d9278fa01e3f94b1234567",
  "mailchimpBatchIds": ["2024-01-01T12:00:00Z#1", "2024-01-01T12:01:00Z#2"]
}

Troubleshooting

Common Issues
- Mailchimp rate-limit errors – Increase the Wait node delay or reduce the batch size.
- Duplicate summaries – Ensure the If node correctly queries MongoDB using the meeting ID as a unique key.

Performance Tips
- Keep batch sizes under 500 to stay well within Mailchimp limits.
- Offload AI summarization to external services if Code node execution time is high.

Pro Tips:
- Store full transcripts in MongoDB GridFS for future reference.
- Use environment variables in n8n for all API keys to simplify workflow export/import.
- Add a notifier (e.g., Slack node) after Merge to alert admins on failures.

This is a community template provided "as-is" without warranty. Always validate the workflow in a test environment before using it in production.
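The SplitInBatches step amounts to simple array chunking. A minimal sketch of the idea (not the node's internals); the batch size of 500 follows the performance tip above.

```javascript
// Chunk a participant list into Mailchimp-friendly batches, mirroring
// what n8n's SplitInBatches node does conceptually.
function chunk(participants, size) {
  const batches = [];
  for (let i = 0; i < participants.length; i += size) {
    batches.push(participants.slice(i, i + size));
  }
  return batches;
}

const emails = ['alice@example.com', 'bob@example.com', 'carol@example.com'];
const batches = chunk(emails, 2);
console.log(batches.length); // 2 — [alice, bob] and [carol]
```

Each batch would then feed one Mailchimp call, with the Wait node pausing between iterations.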
by Cheng Siong Chin
How It Works

This workflow automates veterinary clinic operations and client communications for animal hospitals and veterinary practices managing appointments, inventory, and patient care. It solves the dual challenge of maintaining medical supply levels while delivering personalized pet care updates and appointment coordination.

The system processes scheduled inventory data through AI-powered quality validation and restocking recommendations, then branches into two intelligent pathways: supplier coordination via email for replenishment, and client engagement through personalized appointment reminders, follow-up care instructions, and satisfaction surveys distributed via email and messaging platforms. This eliminates manual inventory tracking, reduces appointment no-shows, and ensures consistent post-visit care communication.

Setup Steps
1. Configure a webhook or schedule trigger for veterinary management system inventory data sync
2. Add AI model API keys for inventory quality validation
3. Connect the supplier email system with template configurations for automated purchase orders
4. Set up client communication channels with appointment and care instruction templates
5. Integrate the customer database for pet records and appointment history

Prerequisites
Veterinary practice management software with API/webhook capabilities, AI service API access

Use Cases
Multi-location veterinary hospitals coordinating inventory across sites

Customization
Modify AI prompts for species-specific care instructions

Benefits
Reduces supply management time by 75%, prevents critical medication stockouts
by Rajeet Nair
Overview

This workflow enables GDPR-compliant document processing by detecting, masking, and securely handling personally identifiable information (PII) before AI analysis. It ensures that sensitive data is never exposed to AI systems by replacing it with tokens, while still allowing controlled re-injection of original values when permitted. The workflow also maintains full audit logs for compliance and traceability.

How It Works
1. Document Upload & Configuration: Receives documents via webhook and initializes configuration such as document ID, thresholds, and database tables.
2. Text Extraction: Extracts raw text from uploaded documents for processing.
3. Multi-Detector PII Detection: Detects emails, phone numbers, ID numbers, and addresses using regex and AI-based detection.
4. PII Aggregation & Conflict Resolution: Merges detections, resolves overlaps, removes duplicates, and builds a unified PII map.
5. Tokenization & Vault Storage: Replaces sensitive data with secure tokens and stores the original values in a database vault.
6. Masking & Validation: Generates masked text and verifies that all PII has been successfully removed before AI processing.
7. AI Processing (Masked Data): Processes the document using AI while preserving tokens to prevent exposure of sensitive information.
8. Re-Injection Controller: Determines which fields are allowed to restore original PII based on permissions.
9. Secure Retrieval & Restoration: Retrieves original values from the vault and restores them only where permitted.
10. Audit Logging: Stores metadata, detected PII types, and re-injection events for compliance tracking.
11. Error Handling & Alerts: Blocks processing and triggers alerts if masking fails or compliance rules are violated.
Setup Instructions
1. Activate the webhook and upload a document (PDF or supported file)
2. Configure AI credentials (Anthropic / OpenAI)
3. Set database credentials for the PII vault and audit logs
4. Adjust detection thresholds and compliance settings if needed
5. Execute the workflow and review the outputs and logs

Use Cases
- GDPR-compliant document processing pipelines
- Secure AI document analysis with PII protection
- Automated redaction and tokenization systems
- Financial, legal, or healthcare document processing
- Privacy-first AI workflows for sensitive data

Requirements
- n8n (latest version recommended)
- Anthropic or OpenAI API credentials
- PostgreSQL (or compatible database) for the vault and audit logs
- Input documents (PDF or text-based files)
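The tokenize-then-reinject pattern at the heart of this workflow can be sketched in a few lines. The regex detectors and token format below are illustrative assumptions for the sketch, not the template's actual detectors, and the "vault" is an in-memory map standing in for the database vault.

```javascript
// Minimal tokenization sketch: detect emails/phones via regex, replace
// each match with a token, and keep token -> original in a "vault" map so
// that permitted fields can be re-injected later. Patterns, token format,
// and the in-memory vault are all assumptions for illustration.
function tokenize(text) {
  const detectors = {
    EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
    PHONE: /\+?\d[\d\s-]{7,}\d/g,
  };
  const vault = {};
  let counter = 0;
  let masked = text;
  for (const [type, re] of Object.entries(detectors)) {
    masked = masked.replace(re, (match) => {
      const token = `[[${type}_${++counter}]]`;
      vault[token] = match;
      return token;
    });
  }
  return { masked, vault };
}

// Restore only the PII types the re-injection controller permits.
function reinject(masked, vault, allowedTypes) {
  return masked.replace(/\[\[([A-Z]+)_\d+\]\]/g, (token, type) =>
    allowedTypes.includes(type) ? vault[token] : token
  );
}

const { masked, vault } = tokenize('Contact alice@example.com or +1 555 123 4567');
console.log(masked); // PII replaced by [[EMAIL_n]] / [[PHONE_n]] tokens
console.log(reinject(masked, vault, ['EMAIL'])); // email restored, phone stays masked
```

The AI model only ever sees `masked`; the real workflow stores the vault in PostgreSQL and gates `reinject` behind the permission check described in step 8.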
by Marco Florez
Turn your code commits into engaging social media content automatically. This workflow monitors a GitHub repository, uses AI to write a LinkedIn post about your changes, generates a beautiful "Mac-window" style image of your code, and publishes it all to LinkedIn.

How it works
1. GitHub Trigger: Watches for new push events in your selected repository.
2. AI Analysis: Passes the code changes to an LLM (via LangChain) to write a professional LinkedIn post and select the best code snippet.
3. Image Generation: Creates a custom HTML view of your code (with syntax highlighting and window controls) and converts it to an image using the HCTI API.
4. Hosting & Posting: Uploads the generated image back to GitHub for hosting, then combines the text and image to publish a live post on LinkedIn.

Set up steps
1. Configure Credentials. You will need credentials for:
   - GitHub (OAuth2 or Access Token)
   - LinkedIn (OAuth2)
   - OpenRouter (or swap the model node for OpenAI/Anthropic)
   - HCTI.io (for the HTML-to-image conversion)
2. Update GitHub Nodes:
   - In the Trigger node: set your Owner and Repository.
   - In the File Download node: set the same Owner and Repository.
   - In the Upload Image node: set the target repo where you want images stored.
3. Update LinkedIn Node: Add your LinkedIn Person URN in the Person field.
by Shun Nakayama
Automate your Instagram growth strategy by generating and posting viral Reels using AI and Creatomate. This workflow plans content topics based on trends, generates video assets, and handles the approval and posting process—all without manual video editing.

How it works
1. Schedule Trigger: Runs every day at 9:00 AM.
2. Topic Planning: Checks past topics from Google Sheets to avoid duplicates, then uses OpenAI (GPT-4o) to generate a new quiz-style content plan.
3. Video Generation: Uses Creatomate to generate a video based on a template, dynamically inserting the AI-generated text and images.
4. Approval Loop: Sends the generated video to Slack for human review.
5. Posting: Once approved in Slack, the workflow automatically uploads the Reel to Instagram.
6. Logging: Saves the new topic to Google Sheets and notifies Slack upon successful publication.

Setup steps
1. Configure Credentials:
   - OpenAI: for generating content plans.
   - Creatomate: for video rendering.
   - Google Sheets: for tracking past topics.
   - Slack: for approval notifications.
   - Facebook Graph API: for Instagram publishing.
2. Google Sheets Setup: Create a Google Sheet with the columns Question, Answer, Title, Date. Update the Get Past Topics and Save New Topic nodes with your Sheet ID.
3. Creatomate Setup: Create a template in Creatomate or use an existing one. Update the Generate Video node with your template_id in the JSON body.
4. Slack Setup: Create a channel for approvals. Update the Slack Approval Request and Slack Notification nodes with your Channel ID.
5. Activate: Turn on the workflow to start automating your content pipeline!
by achiya
How it works
1. A courier sends an invoice photo to WhatsApp → AI extracts all details via Google Vision OCR.
2. The courier sends a payment photo (check, bank transfer, credit card voucher) → AI matches it to the invoice.
3. AI presents a summary and asks for confirmation.
4. Once approved, a receipt is created in Rivhit, the invoice is closed, and the PDF is sent back to WhatsApp.

Supports cash, checks, credit cards, bank transfers, and split payments. Includes automatic customer lookup by tax ID and Israeli bank code recognition.

Set up steps
Takes about 10 minutes:
1. Set up a WAHA instance and point its webhook to this workflow.
2. Add your Google Cloud Vision API key to the HTTP Request node.
3. Add your Rivhit API token to the "api key" Set node.
4. Replace the WhatsApp group ID in the Filter node with yours.
5. Connect your OpenAI credentials.
6. Activate and start sending photos!

See the sticky notes inside the workflow for detailed instructions.
by Cheng Siong Chin
How It Works

This workflow automates hospital operational event management by intelligently processing incoming events and orchestrating appropriate responses across multiple hospital systems. Designed for hospital operations managers, healthcare IT teams, and clinical administrators, it solves the complex challenge of coordinating rapid responses to diverse hospital events, including patient admissions, equipment alerts, staffing emergencies, and clinical escalations.

The system receives event triggers via webhook, uses AI-powered orchestration to analyze event context and determine required actions, then intelligently routes tasks to the appropriate systems, including appointment scheduling, task management, and insurance verification. It calculates priority scores, assigns tasks, verifies insurance coverage, and merges results while masking sensitive PHI data for compliance. The workflow leverages Anthropic's Claude and multiple AI tools to ensure context-aware decision-making aligned with hospital protocols.

Setup Steps
1. Configure the webhook URL endpoint for hospital event system integration
2. Set up Anthropic API credentials for Claude model access in the orchestration agent
3. Configure the Hospital Orchestration Agent Tool with your facility's event protocols
4. Connect the Schedule Appointment API with hospital scheduling system credentials
5. Set up the Task Management API integration for the staff assignment system
6. Configure the Insurance Verification API with payer network access credentials

Prerequisites
Active Anthropic API account, hospital event management system with webhook capability

Use Cases
Patient admission coordination, equipment failure response, code blue orchestration

Customization
Modify orchestration agent prompts for facility-specific protocols

Benefits
Reduces event response time by 75%, ensures consistent protocol adherence
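The priority-scoring step can be pictured as a simple weighted ranking of events. The event types and weights below are purely illustrative assumptions; in the template itself, priorities come from the Claude-driven orchestration agent, not a fixed table.

```javascript
// Illustrative priority-score calculation for incoming hospital events.
// Event types and weights are assumptions for this sketch only.
function priorityScore(event) {
  const severityWeight = { 'code-blue': 100, 'equipment-alert': 60, admission: 30 };
  const base = severityWeight[event.type] ?? 10;
  const urgencyBoost = event.patientCritical ? 50 : 0;
  return base + urgencyBoost;
}

const events = [
  { type: 'admission', patientCritical: false },
  { type: 'code-blue', patientCritical: true },
];
// Sort so the highest-priority event is routed first.
events.sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(events[0].type); // "code-blue"
```

The point is only the shape of the decision: score each event, then route the highest-scoring ones to scheduling, task assignment, or escalation first.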
by Intuz
This n8n template from Intuz provides a complete solution to automate a powerful, AI-driven "Chat with your PDF" bot on Telegram. It uses Retrieval-Augmented Generation (RAG) to allow users to upload documents, which are then indexed into a vector database, enabling the bot to answer questions based only on the provided content.

Who's this workflow for?
- Researchers & students
- Legal & compliance teams
- Business analysts & financial advisors
- Anyone needing to quickly find information within large documents

How it works
This workflow has two primary functions: indexing a new document and answering questions about it.

1. Uploading & Indexing a Document
- A user sends a PDF file to the Telegram bot.
- n8n downloads the document, extracts the text, and splits it into small, manageable chunks.
- Using Google Gemini, each text chunk is converted into a numerical representation (an "embedding").
- These embeddings are stored in a Pinecone vector database, making the document's content searchable.
- The bot sends a confirmation message to the user that the document has been successfully saved.

2. Asking a Question (RAG)
- A user sends a regular text message (a question) to the bot.
- n8n converts the user's question into an embedding using Google Gemini.
- It then searches the Pinecone database to find the most relevant text chunks from the uploaded PDF that match the question.
- These relevant chunks (the "context") are sent to the Gemini chat model along with the original question.
- Gemini generates a new, accurate answer based only on the provided context and sends it back to the user in Telegram.

Key Requirements to Use This Template

1. n8n Instance & Required Nodes
- An active n8n account (Cloud or self-hosted).
- This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain). If you are using a self-hosted version of n8n, please ensure this package is installed.

2. Telegram Account
- A Telegram bot created via the BotFather, along with its API token.
3. Google Gemini AI Account
- A Google Cloud account with the Vertex AI API enabled and an associated API key.

4. Pinecone Account
- A Pinecone account with an API key.
- You must have a vector index created in Pinecone. For use with Google Gemini's embedding-001 model, the index must be configured with 768 dimensions.

Setup Instructions

1. Telegram Configuration
- In the "Telegram Message Trigger" node, create a new credential and add your Telegram bot's API token.
- Do the same for the "Telegram Response" and "Telegram Response about Database" nodes.

2. Pinecone Configuration
- In both "Pinecone Vector Store" nodes, create a new credential and add your Pinecone API key.
- In the "Index" field of both nodes, enter the name of your pre-configured Pinecone index (e.g., telegram).

3. Google Gemini Configuration
- In all three Google Gemini nodes (Embeddings Google Gemini, Embeddings Google Gemini1, and Google Gemini Chat Model), create a new credential and add your Google Gemini (PaLM) API key.

4. Activate and Use
- Save the workflow and toggle the "Active" switch to ON.
- To use: first, send a PDF document to your bot and wait for the confirmation message. Then you can start asking questions about the content of that PDF.

Connect with us
Website: https://www.intuz.com/services
Email: getstarted@intuz.com
LinkedIn: https://www.linkedin.com/company/intuz
Get Started: https://n8n.partnerlinks.io/intuz
For Custom Workflow Automation Click here- Get Started
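The chunking step in the indexing path can be sketched as a sliding window with overlap, so that a sentence cut at a chunk boundary still appears intact in the neighboring chunk. The chunk size and overlap values below are illustrative assumptions, not the template's configured splitter settings.

```javascript
// Split extracted PDF text into overlapping chunks before embedding.
// chunkSize/overlap values are illustrative; the real workflow configures
// these on its text-splitter node.
function splitIntoChunks(text, chunkSize = 500, overlap = 100) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const doc = 'x'.repeat(1200);
const chunks = splitIntoChunks(doc, 500, 100);
console.log(chunks.length); // 3 — windows 0-500, 400-900, 800-1200
```

Each chunk is then embedded (768-dimension vectors for embedding-001, per the Pinecone requirement above) and upserted to the index.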
by Juan Carlos Cavero Gracia
This workflow transforms any video you drop into a Google Drive folder into a ready-to-publish YouTube upload. It analyzes the video with AI to craft 3 high-CTR title ideas, 3 long SEO-friendly descriptions (with timestamps), and 10–15 optimized tags. It then generates 4 thumbnail options using your face and lets you pick your favorite before auto-publishing to YouTube via Upload-Post.

Who Is This For?
- **YouTube Creators & Editors:** Ship videos with winning titles, thumbnails, and SEO in minutes.
- **Agencies & Media Teams:** Standardize output and speed across channels and clients.
- **Founders & Solo Makers:** Maintain consistent publishing with minimal manual work.

What Problem Does It Solve?
Producing SEO metadata and high-performing thumbnails is slow and inconsistent. This flow:
- **Generates High-CTR Options:** 3 distinct angles for title/description/tags.
- **Creates Thumbnails with Your Face:** 4 options ready for review in one pass.
- **Auto-Publishes Safely:** Human selection gates reduce risk before going live.

How It Works
1. Google Drive Trigger: Watches a folder for new video files.
2. AI Video Analysis (Gemini): Produces an in-depth Spanish description and timestamps.
3. Concept Generation: Returns 3 JSON concepts (title, thumbnail prompt, description, tags).
4. User Review #1: Pick your favorite concept in a simple form.
5. Thumbnail Generation (fal.ai): Creates 4 thumbnails using your face (provided image URL).
6. User Review #2: Choose the best thumbnail.
7. Upload to YouTube (Upload-Post): Publishes the video with your chosen title, description, tags, and thumbnail.

Setup
1. Credentials (all offer free trials, no credit card required):
   - Google Gemini (chat/vision for analysis)
   - fal.ai API (thumbnail generation)
   - Upload-Post (connect your YouTube channel and generate API keys)
   - Google Drive OAuth (folder watch + file download)
2. Provide Your Face Image URL(s): Used by fal.ai to integrate your face into thumbnails.
3. Select the Google Drive Folder: Where you'll drop videos to process.
4. Pick & Publish: Use the built-in forms to choose the concept and thumbnail.

Requirements
- **Accounts:** Google (Drive + Gemini), fal.ai, Upload-Post, n8n.
- **API Keys:** Gemini, fal.ai; Upload-Post credentials; Google Drive OAuth.
- **Assets:** At least one clear face image for thumbnails.

Features
- **Three SEO Angles:** Distinct title/description sets to test different intents.
- **Rich Descriptions with Timestamps:** Ready for YouTube SEO and viewer navigation.
- **Face-Integrated Thumbnails:** 4 options aligned with the selected title.
- **Human-in-the-Loop Controls:** Approve concepts and thumbnails before publishing.
- **Auto-Publish via Upload-Post:** One click to push live to YouTube.
- **Start Free:** All API calls can run on free trials, no credit card required.

Video demo
https://www.youtube.com/watch?v=EOOgFveae-U