by Erfan Mostafiz
This n8n workflow scrapes LinkedIn data for your leads, feeds it into a GPT-4 AI agent, and generates laser-targeted, personalized icebreakers you can drop into your cold email campaigns. It automates personalization at scale, saving you hours of research while sounding human and thoughtful.

**Step-by-Step Setup (Beginner Friendly)**

**Step 1: Prepare Your Leads (Input Sheet)**
- Get your lead list for your industry and niche from Apollo (free) and copy the entire link.
- Go to Apify and use the Apollo Scraper to scrape the leads.
- Download the result as CSV and upload the CSV to Google Sheets.
- Add a column at the end of the sheet named "status" and mark every row as "un-enriched" (this is important).
- Connect your Google Sheets account to n8n. The workflow will pull leads from this sheet where status = un-enriched.

**Step 2: Set Your Credentials**
- Google Sheets: connect your account to n8n using OAuth2.
- OpenAI: add your OpenAI API credentials.
- Apify: visit the Apify Console to get your Apify API key. Open the Apify LinkedIn Profile Scraper and copy the actor ID from the URL: https://console.apify.com/:actorID/input. Paste both the Apify API key and the actor ID into the "Set Apify Tokens" node.

**Step 3: Customize the AI Agent**
- In the "Generate Personalized Icebreaker" node, adjust the system prompt with your own niche, offer, tone, and insights.
- Keep the JSON output format exactly as shown; the rest of the workflow depends on it.

**Step 4: Run the Workflow**
Click "Execute Workflow". The system will:
- Pull all un-enriched leads
- Filter out entries without an email
- Scrape LinkedIn profiles using Apify
- Use GPT-4 to write a short, personalized icebreaker
- Save the result to a separate "Enriched" sheet
- Mark those leads as "enriched" in your original sheet

**How It Works Behind the Scenes**
1. Manual Trigger starts the workflow.
2. Get Raw Leads from a Google Sheet (filter = un-enriched).
3. Filter for valid emails (hasEmail?).
4. Loop over leads.
5. Set Apify API credentials.
6. Call Apify's LinkedIn Scraper with each lead's LinkedIn URL.
7. Aggregate the scraped data.
8. Simplify fields for the AI prompt.
9. Call OpenAI GPT-4.1 Mini with a structured, data-rich prompt to generate the icebreaker.
10. Append the result to the Enriched sheet.
11. Update the original list's status to prevent reprocessing.
12. The loop continues to the next lead.

**Best Practices for Successful Use**
- Clean your leads: remove unnecessary columns from your raw lead list in Google Sheets.
- Throttle large batches: the Apify actor and OpenAI calls may hit rate limits, so process in small batches.
- Customize the prompt deeply: the better your AI instructions, the more believable your icebreakers will sound.
- Use shortened company names and local slang: the system prompt already does this; keep it.
- Avoid fluff: keep the tone Spartan, specific, and real.

**Ideal Use Cases**
- Cold email campaigns for SMB SaaS, agency offers, and B2B sales
- Personalized intros for LinkedIn DMs
- Data enrichment for lead-gen automation
- Integration with tools like Instantly.ai, Smartlead, or Mailshake

**Demo Link**
Watch the full walkthrough and see it in action: 👉 Watch me build this LIVE on YouTube
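The filtering step (keep only un-enriched leads that have an email) amounts to logic an n8n Code node could run. A minimal sketch, assuming the row fields are named `status` and `email` to match the sheet layout described above:

```javascript
// Sketch of the lead-filtering step: keep rows still marked
// "un-enriched" that also have a plausible email address.
// Field names (status, email) are assumptions matching the sheet above.
function filterLeads(rows) {
  const hasEmail = (v) =>
    typeof v === 'string' && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(v.trim());
  return rows.filter((r) => r.status === 'un-enriched' && hasEmail(r.email));
}

// Example usage:
const leads = [
  { name: 'Ada', email: 'ada@example.com', status: 'un-enriched' },
  { name: 'Bob', email: '', status: 'un-enriched' },
  { name: 'Cy', email: 'cy@example.com', status: 'enriched' },
];
console.log(filterLeads(leads).map((l) => l.name)); // [ 'Ada' ]
```

In the actual workflow this is split across the Google Sheets filter (status = un-enriched) and the hasEmail? node, but the combined effect is the same.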
by Ertay Kaya
**Apple App Store Connect: Featuring Nominations Report**

This workflow automates tracking and reporting of app nominations submitted to Apple for App Store featuring consideration. It connects to the App Store Connect API to fetch your list of apps and submitted nominations, stores the data in a MySQL database, and generates a report of all nominations. The report is then exported as a CSV file and can be automatically shared via Google Drive and Slack.

**Key features**
- Authenticates with App Store Connect using JWT.
- Fetches all apps and submitted nominations, including details and related in-app events (API documentation: https://developer.apple.com/documentation/appstoreconnectapi/featuring-nominations).
- Stores and updates app and nomination data in MySQL tables.
- Generates a comprehensive nominations report with app and nomination details.
- Exports the report as a CSV file.
- Shares the report automatically to Google Drive and Slack.
- Runs on a weekly schedule, but can also be triggered manually.

**Setup Instructions**
1. Obtain your App Store Connect API credentials (Issuer ID, Key ID, and private key) from your Apple Developer account.
2. Set up a MySQL database and configure the connection details in the workflow's MySQL node(s).
3. (Optional) Connect your Google Drive and Slack accounts using the respective n8n nodes if you want to share the report automatically.
4. Update any credentials in the workflow to match your setup.
5. Activate the workflow and set the schedule as needed.

This template is ideal for teams who regularly submit apps or updates for featuring on the App Store and want to keep track of their nomination history and status in a structured, automated way.
by sebastian pineda
**🤖 AI-Powered Hardware Store Assistant with PostgreSQL & MCP**

Supercharge your customer service with this conversational AI agent! This n8n workflow provides a complete solution for a hardware store chatbot that connects to a PostgreSQL database in real time. It uses Google Gemini for natural language understanding and the powerful MCP (Model Context Protocol) nodes to securely expose database operations as tools for the AI agent.

**✨ Key Features**
- 💬 Conversational Product Queries: let users ask for products by name, category, description, or even technical notes.
- 📦 Real-time Inventory & Pricing: the agent fetches live data directly from your PostgreSQL database, ensuring accurate stock and price information.
- 💰 Automatic Quote Generation: ask the agent to create a detailed quote for a list of materials, and it will calculate quantities and totals.
- 🧠 Smart Project Advice: the agent is primed with a system message to act as an expert, helping users calculate materials for projects (e.g., "How much drywall do I need for a 10x12 foot room?").

**🛠️ Tech Stack & Core Components**

Technologies used:
- 🗄️ PostgreSQL: stores and manages product data.
- ✨ Google Gemini API: the large language model that powers the agent's conversational abilities.
- 🔗 MCP (Model Context Protocol): securely exposes database queries as callable tools without exposing credentials directly to the agent.

n8n nodes used:
- @n8n/n8n-nodes-langchain.agent: the core AI agent that orchestrates the workflow.
- @n8n/n8n-nodes-langchain.chatTrigger: starts a conversation.
- @n8n/n8n-nodes-langchain.lmChatGoogleGemini: the connection to the Google Gemini model.
- n8n-nodes-base.postgresTool: individual nodes for querying products by ID, name, category, etc.
- @n8n/n8n-nodes-langchain.mcpTrigger: exposes the PostgresTool nodes.
- @n8n/n8n-nodes-langchain.mcpClientTool: lets the AI agent consume the tools exposed by the MCP Trigger.
**🚀 How to Get Started: Setup & Configuration**

Follow these steps to get your AI assistant up and running:
1. Configure your database: this template assumes a PostgreSQL database named bd_ferreteria with a productos table. You can adapt the PostgresTool nodes to match your own schema.
2. Set up credentials: create and assign your PostgreSQL credentials to each of the six PostgresTool nodes, and create and assign your Google Gemini API credentials in the Language Model (Google Gemini) node.
3. Review the system prompt: the main AI Agent node has a detailed system prompt that defines its persona and capabilities. Feel free to customize it to better fit your business's tone and product line.
4. Activate the workflow: save and activate the workflow. You can now start interacting with your new AI sales assistant through the chat interface!

**💡 Use Cases & Customization**

While designed for a hardware store, this template is highly adaptable. You can use it for:
- Any e-commerce store with a product database (e.g., electronics, clothing, books).
- An internal IT support bot that queries a database of company assets.
- A booking assistant that checks availability in a database of appointments or reservations.
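Under the hood, each PostgresTool node boils down to a parameterized query against the productos table. A sketch of the query-building side, with hypothetical Spanish column names (nombre, categoria) consistent with the bd_ferreteria example; adapt them to your own schema:

```javascript
// Sketch of a search-query builder like the ones behind the PostgresTool
// nodes. Column names (nombre, categoria) are assumptions consistent
// with the Spanish-named bd_ferreteria schema; adjust to your tables.
function buildProductSearch({ name, category } = {}) {
  const conditions = [];
  const values = [];
  if (name) {
    values.push(`%${name}%`);
    conditions.push(`nombre ILIKE $${values.length}`);
  }
  if (category) {
    values.push(category);
    conditions.push(`categoria = $${values.length}`);
  }
  const where = conditions.length ? ` WHERE ${conditions.join(' AND ')}` : '';
  // Parameterized text + values keeps agent-supplied input out of the SQL.
  return { text: `SELECT * FROM productos${where}`, values };
}

// Example: a query the agent might issue for cement in a category:
const q = buildProductSearch({ name: 'cemento', category: 'construccion' });
console.log(q.text); // SELECT * FROM productos WHERE nombre ILIKE $1 AND categoria = $2
```

Keeping user-supplied values in the parameter array (rather than interpolated into the SQL string) is what lets the MCP layer expose these queries to the agent safely.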
by Avkash Kakdiya
**How it works**

This workflow automatically pulls daily signup stats from your PostgreSQL database and shares them with your team across multiple channels. Every morning it counts the number of new signups in the last 24 hours, formats the results into a concise report, and posts it to Slack, Microsoft Teams, and Telegram. This keeps your entire team updated on customer growth without manual queries or reporting.

**Step-by-step**

1. Daily trigger & data fetching: the Daily Report Trigger runs at 9:00 AM each day. The Fetch Signup Count node queries the customers table in PostgreSQL and calculates the number of new signups in the last 24 hours using the created_at timestamp column.
2. Report preparation: the Prepare Report Message node formats the results into a structured message containing the report date, the signup count, and a clear summary line: "Daily Signup Report – New signups in the last 24h: X".
3. Multi-channel delivery: the prepared message is sent to Slack, Microsoft Teams, and Telegram simultaneously, so every team receives the update in its preferred communication tool.

**Why use this?**

- Automates daily customer growth reporting.
- Eliminates manual SQL queries and report sharing.
- Keeps the whole team aligned with real-time growth metrics.
- Delivers updates across Slack, Teams, and Telegram at once.
- Provides simple, consistent reporting every day.
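The Prepare Report Message step is a small formatting function. A sketch, assuming the SQL node returns a single row shaped like `{ signup_count: <number> }` (the column alias is an assumption):

```javascript
// Sketch of the Prepare Report Message step. Assumes the PostgreSQL
// query returned one row shaped like { signup_count: <number> };
// the alias is an assumption, not the template's literal column name.
function prepareReport(row, date = new Date()) {
  const reportDate = date.toISOString().slice(0, 10); // YYYY-MM-DD
  const count = Number(row.signup_count) || 0;
  return {
    reportDate,
    count,
    text: `Daily Signup Report – New signups in the last 24h: ${count}`,
  };
}

const report = prepareReport({ signup_count: 42 }, new Date('2024-05-01T09:00:00Z'));
console.log(report.text); // Daily Signup Report – New signups in the last 24h: 42
```

The same `text` field is then handed unchanged to the Slack, Teams, and Telegram nodes, which keeps the report identical across channels.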
by Madame AI
**Product Hunt Launch Monitor - Scraping & Summarization of Product Hunt Feedback**

This n8n template provides automated competitive intelligence by scraping and summarizing Product Hunt launch feedback with a specialized AI analyst. The workflow is essential for product managers, marketing teams, and founders who need to quickly gather and distill actionable insights from competitor launches to inform their own product strategy and positioning.

**Self-Hosted Only**

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

**How it works**

- The workflow can be triggered manually but is designed to be easily switched to a Schedule Trigger for continuous competitive monitoring.
- A Google Sheets node fetches a list of product names you wish to monitor, which the workflow processes in a loop.
- A BrowserAct node then initiates a web-scraping task to collect all the public comments from the specified Product Hunt launch page.
- An AI Agent, powered by Google Gemini, acts as a competitive intelligence analyst, processing the raw comments.
- The AI distills the feedback into a structured format, providing a concise summary, pinpointing key positive and negative feedback, and generating recommendations for a similar product to be successful.
- The structured analysis is saved to a Google Sheet for easy review and tracking.
- Finally, a Slack notification confirms that the Product Hunt results have been processed and updated.

**Requirements**

- **BrowserAct** API account for web scraping
- **BrowserAct** "Product Hunt Launch Monitor" template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Gemini** account for the AI Agent
- **Google Sheets** credentials for input and saving the analysis
- **Slack** credentials for sending notifications

**Need Help?**
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

**Workflow Guidance and Showcase**

Steal Your Competitor's Weaknesses (Product Hunt + BrowserAct + n8n)
by Jaruphat J.
⚠️ Note: All sensitive credentials should be set via n8n Credentials or environment variables. Do not hardcode API keys in nodes.

**Who's it for**

Marketers, creators, and automation builders who want to generate UGC-style ad images automatically from a Google Sheet. Ideal for e-commerce SKUs, agencies, or teams that need many variations quickly.

**What it does (Overview)**

This template turns a spreadsheet row into ad images ready for campaigns.
- **Zone 1 — Create Ad Image**: reads product rows, downloads the image, analyzes it, generates prompts, and appends results back into the Google Sheet.
- **Zone 2 — Create Image (Fal nano-banana)**: generates ad image variations, polls the Fal.ai API until done, uploads to Drive, and updates the sheet with output URLs.

**Requirements**

- **Fal.ai API key** (env: FAL_KEY)
- **Google Sheets / Google Drive** OAuth2 credentials
- **OpenAI (Vision/Chat)** for image analysis
- A Google Sheet with columns for product and output
- Google Drive files set to "Anyone with link → Viewer" so APIs can fetch them

**How to set up**

1. Credentials: add Google Sheets + Google Drive (OAuth2), Fal.ai (Header Auth with Authorization: Key {{ $env.FAL_KEY }}), and OpenAI.
2. Google Sheet: create sheets with the following headers.

Sheet: product
product_id | product_name | product_image_url | product_description | campaign | brand_notes | constraints | num_variations | aspect_ratio | model_target | status

Sheet: ad_image
scene_ref | product_name | prompt | status | output_url

3. Import the workflow: use the provided JSON and confirm node credentials resolve.
4. Run: start with Zone 0 to verify the prompt-only flow, then test Zone 1 for image generation.

**Zone 1 — Create Ad Image (Prompt-only)**

Reads the product row, normalizes the Drive link, analyzes the image, generates structured ad prompts, and appends them to the ad_image sheet.

**Zone 2 — Create Image (Fal nano-banana)**

Reads the product row, converts the Drive link, generates image(s) with Fal nano-banana, polls until complete, uploads to Drive, and updates the sheet.
**Node settings (high-level)**

Drive Link Parser (Set):

```
{{ (() => {
  const u = $json.product || '';
  const q = u.match(/[?&]id=([\-\w]{25,})/);
  const d = u.match(/\/d\/([\-\w]{25,})/);
  const any = u.match(/[\-\w]{25,}/);
  const id = q?.[1] || d?.[1] || (any ? any[0] : '');
  return id ? 'https://drive.google.com/uc?export=view&id=' + id : '';
})() }}
```

**How to customize the workflow**

- Adjust AI prompts to change the ad style (luxury, cozy, techy).
- Change the aspect ratio for TikTok/IG/Shorts (9:16, 1:1, 16:9).
- Extend the sheet schema for campaign labels, audiences, hashtags.
- Add distribution (Slack/LINE/Telegram) after the Drive upload.

**Troubleshooting**

- **JSON parameter needs to be valid JSON** → ensure expressions return objects, not strings.
- **403 on images** → make Drive files public (Viewer) and convert the links.
- **Job never completes** → check status_url, retry with -fast models or at off-peak times.

**Template metadata**

**Uses:** Google Sheets, Google Drive, HTTP Request, Wait/If/Switch, Code, OpenAI Vision/Chat, Fal.ai models (nano-banana)

**Visuals**

Workflow Diagram
Example Product Image
Product Image - nano Banana
by Jaruphat J.
**LINE OCR Workflow to Extract and Save Thai Government Letters to Google Sheets**

This template automates the extraction of structured data from Thai government letters received via LINE or uploaded to Google Drive. It uses Mistral AI for OCR and OpenAI for information extraction, saving results to a Google Sheet.

**Who's it for?**

- Thai government agencies or teams receiving official documents via LINE or Google Drive
- Automation developers working with document intake and OCR
- Anyone needing to extract fields from scanned Thai letters and store structured info

**What it does**

This n8n workflow:
1. Receives documents from two sources: a LINE webhook (via the Messaging API) and Google Drive (new-file trigger).
2. Checks the file type (PDF or image).
3. Runs OCR with Mistral AI (Document or Image model).
4. Uses OpenAI to extract key metadata such as book_id, subject, recipient (to), signatory, date, and contact info.
5. Stores the structured data in Google Sheets.
6. Replies to the LINE user with the extracted info or moves files into archive folders (Drive).

**How to Set It Up**

1. Create a Google Sheet with a tab named data and the following columns:
book_id, date, subject, to, attach, detail, signed_by, signed_by_position, contact_phone, contact_email, download_url
2. Set up the required credentials: googleDriveOAuth2Api, googleSheetsOAuth2Api, httpHeaderAuth for the LINE Messaging API, openAiApi, and mistralCloudApi.
3. Define environment variables: LINE_CHANNEL_ACCESS_TOKEN, GDRIVE_INVOICE_FOLDER_ID, GSHEET_ID, MISTRAL_API_KEY.
4. Deploy the webhook to receive files from the LINE Messaging API (path: /line-invoice).
5. Monitor Drive uploads using the Google Drive Trigger.

**How to Customize the Workflow**

- Adjust the information-extraction schema in the OpenAI Information Extractor node to match your document layout.
- Add logic for different document types if you have more than one format.
- Modify the LINE reply message format or use Flex Message.
- Update the Move File node if you want to archive to a different folder.

**Requirements**

- n8n self-hosted or cloud instance
- Google account with access to Drive and Sheets
- LINE Developer Account
- OpenAI API key
- Mistral Cloud API key

**Notes**

- Community nodes used: @n8n/n8n-nodes-base.mistralAi
- This workflow supports both document images and PDF files.
- File handling is done dynamically via MIME type.
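Since routing is driven by MIME type, the file-type check can be sketched as a small branch function; the branch names here are illustrative labels for the Mistral Document/Image OCR paths, not the template's literal node names:

```javascript
// Sketch of the MIME-type routing that decides how a file is OCR'd.
// Branch labels are illustrative; PDFs go to document OCR, images to
// image OCR, and anything else is rejected.
function routeByMime(mimeType) {
  if (mimeType === 'application/pdf') return 'document-ocr';
  if (typeof mimeType === 'string' && mimeType.startsWith('image/')) return 'image-ocr';
  return 'unsupported';
}

console.log(routeByMime('application/pdf')); // document-ocr
console.log(routeByMime('image/jpeg'));      // image-ocr
console.log(routeByMime('text/plain'));      // unsupported
```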
by Dele Odufuye
**n8n OpenAI-Compatible API Endpoints**

Transform your n8n workflows into OpenAI-compatible API endpoints, allowing you to access multiple workflows as selectable AI models through a single integration.

**What This Does**

This workflow creates two API endpoints that mimic the OpenAI API structure:
- /models - lists all n8n workflows tagged with aimodel (or any other tag of your choice)
- /chat/completions - executes chat completions with your selected workflows, supporting both text and streaming responses

**Benefits**

- Access multiple workflows: connect to all your n8n agents through one API endpoint instead of creating separate pipelines for each workflow.
- Universal platform support: works with any application that supports OpenAI-compatible APIs, including OpenWebUI, Microsoft Teams, Zoho Cliq, and Slack.
- Simple workflow management: add new workflows by tagging them with aimodel. No code changes needed.
- Streaming support: handles both standard responses and streaming for real-time agent interactions.

**How to Use**

1. Download the workflow JSON file from this repository.
2. Import it into your n8n instance.
3. Tag your workflows with aimodel to make them accessible through the API.
4. Create a new OpenAI credential in n8n and change the Base URL to point to your n8n webhook endpoints. Learn more about OpenAI Credentials.
5. Point your chat applications to your n8n webhook URL as if it were an OpenAI API endpoint.

**Requirements**

- n8n instance (self-hosted or cloud)
- Workflows you want to expose as AI models
- Any OpenAI-compatible chat application

**Documentation**

For detailed setup instructions and an implementation guide, visit https://medium.com/@deleodufuye/how-to-create-openai-compatible-api-endpoints-for-multiple-n8n-workflows-803987f15e24.

**Inspiration**

This approach was inspired by Jimleuk's workflow on n8n Templates.
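For the /models endpoint, OpenAI clients expect an object with `object: "list"` and a `data` array of model entries. A sketch of building that response from tagged workflows; the input shape (name, tags) mirrors what the n8n API returns for workflows, but the objects here are illustrative:

```javascript
// Sketch of the /models response the webhook must return so OpenAI
// clients can list tagged workflows as selectable models. The input
// workflow objects (name, tags) are illustrative stand-ins.
function buildModelsResponse(workflows, tag = 'aimodel') {
  const data = workflows
    .filter((wf) => (wf.tags || []).some((t) => t.name === tag))
    .map((wf) => ({
      id: wf.name, // the "model" name clients will select
      object: 'model',
      created: Math.floor(Date.now() / 1000),
      owned_by: 'n8n',
    }));
  return { object: 'list', data };
}

const wfs = [
  { id: '1', name: 'Support Agent', tags: [{ name: 'aimodel' }] },
  { id: '2', name: 'Internal Cleanup', tags: [] },
];
console.log(buildModelsResponse(wfs).data.map((m) => m.id)); // [ 'Support Agent' ]
```

Because only the tag filter decides what appears in the list, adding a new "model" really is just tagging another workflow, as described above.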
by MUHAMMAD SHAHEER
**Overview**

This workflow helps you automatically collect verified business leads from Google Search using SerpAPI, with no coding required. It extracts company names, websites, emails, and phone numbers directly from search results and saves them into Google Sheets for easy follow-up or CRM import. Perfect for marketers, freelancers, and agencies who want real, usable leads fast, without manual scraping or paid databases.

**How It Works**

1. The SerpAPI node performs a Google search for your chosen keyword or niche.
2. The Split Out node separates each result for individual processing.
3. The HTTP Request node optionally visits each site for deeper data extraction.
4. The Code node filters, validates, and formats leads using smart parsing logic.
5. The Google Sheets node stores the final structured data automatically.

All steps include sticky notes with configuration help.

**Setup Steps**

Setup takes about 5–10 minutes:
1. Add your SerpAPI key (replace the placeholder).
2. Connect your Google Sheets account.
3. Update the search term (e.g., "Plumbers in New York").
4. Run the workflow and watch leads populate your sheet in real time.
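The Code node's parsing step can be sketched as regex extraction over a page's text. The patterns below are a rough illustration of this kind of logic, not production-grade validators and not the template's exact code:

```javascript
// Rough sketch of the Code node's extraction logic: pull candidate
// emails and phone numbers out of page text with regexes.
// These patterns are illustrative, not exhaustive validators.
function extractContacts(text) {
  const emails = [...new Set(text.match(/[\w.+-]+@[\w-]+(\.[\w-]+)+/g) || [])];
  const phones = [...new Set(text.match(/\+?\d[\d\s().-]{7,}\d/g) || [])];
  return { emails, phones };
}

const sample = 'Call us at +1 (212) 555-0100 or write to info@acmeplumbing.com.';
console.log(extractContacts(sample));
// { emails: [ 'info@acmeplumbing.com' ], phones: [ '+1 (212) 555-0100' ] }
```

Deduplicating with a Set before writing to the sheet keeps repeated footer/header contacts from producing duplicate lead rows.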
by 中崎功大
**Smart Irrigation Scheduler with Weather Forecast and Soil Analysis**

**Summary**

Automated garden and farm irrigation system that uses weather forecasts and evapotranspiration calculations to determine optimal watering schedules, preventing water waste while maintaining healthy plants.

**Detailed Description**

A comprehensive irrigation management workflow that analyzes weather conditions, forecasts, soil types, and plant requirements to make intelligent watering decisions. The system considers multiple factors, including expected rainfall, temperature, humidity, wind speed, and days since the last watering, to determine whether irrigation is needed and how much.

**Key Features**

- **Multi-Zone Management**: supports multiple irrigation zones with different plant and soil types.
- **Weather-Based Decisions**: uses OpenWeatherMap current conditions and the 5-day forecast.
- **Evapotranspiration Calculation**: simplified Penman method for accurate water-loss estimation.
- **Rain Forecast Skip**: automatically skips watering when significant rain is expected.
- **Plant-Type Specific**: different requirements for flowers, vegetables, grass, and shrubs.
- **Soil Type Consideration**: adjusts for clay, loam, and sandy soil characteristics.
- **Urgency Classification**: high/medium/low priority based on moisture levels.
- **Optimal Timing**: adjusts watering time based on temperature and wind conditions.
- **IoT Integration**: sends commands to smart irrigation controllers.
- **Historical Logging**: tracks all decisions in Google Sheets.

**Use Cases**

- Home garden automation
- Commercial greenhouse management
- Agricultural operations
- Landscaping company scheduling
- Property management with large grounds
- Water conservation projects

**Required Credentials**

- OpenWeatherMap API key
- Slack Bot Token
- Google Sheets OAuth
- IoT Hub API (optional)

Node Count: 24 (19 functional + 5 sticky notes)

**Unique Aspects**

- Uses the OpenWeatherMap node (rarely used in templates)
- Uses the Split Out node for loop-style processing of zones
- Uses the Filter node for conditional routing
- Uses the Aggregate node to collect results
- Implements the evapotranspiration calculation in a Code node
- Comprehensive multi-factor decision logic

**Workflow Architecture**

```
[Daily Morning Check]   [Manual Override Trigger]
          |                        |
          +-----------+------------+
                      |
                      v
         [Define Irrigation Zones]
                      |
                      v
             [Split Zones] (Loop)
              /              \
             v                v
     [Get Current]    [Get 5-Day Forecast]
              \              /
               +-----+------+
                     |
                     v
          [Merge Weather Data]
                     |
                     v
        [Analyze Irrigation Need]
              /              \
             v                v
    [Filter Needing]   [Aggregate All]
              \              /
               +-----+------+
                     |
                     v
      [Generate Irrigation Schedule]
                     |
                     v
       [Has Irrigation Tasks?] (If)
         /           |           \
    Has Tasks    Has Tasks     No Tasks
        |            |            |
    [Sheets]      [Slack]  [Log No Action]
         \           |           /
          +----------+----------+
                     |
                     v
         [Respond to Webhook]
```

**Configuration Guide**

1. Irrigation zones: edit "Define Irrigation Zones" with your zone data (coordinates, plant/soil types).
2. Water thresholds: adjust waterThreshold per zone based on plant needs.
3. OpenWeatherMap: add API credentials in the weather nodes.
4. Slack channel: set to your garden/irrigation channel.
5. IoT integration: configure the endpoint URL for your smart valve controller.
6. Google Sheets: connect to your logging spreadsheet.

**Decision Logic**

The system evaluates:
- Expected rainfall in the next 24 hours (skip if >5 mm expected)
- Soil-moisture estimate based on days since watering plus evapotranspiration
- Plant-specific minimum and ideal moisture levels
- Temperature adjustments for hot days
- Scheduled watering frequency by plant type
- Wind speed for the optimal watering time
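The per-zone decision can be sketched as follows. The field names and the flat daily moisture-loss figure are assumptions for illustration; the actual Code node derives the loss from live weather data using the simplified Penman method:

```javascript
// Sketch of the irrigation decision for one zone. Field names and the
// flat 4-points/day moisture-loss figure are assumptions; the real
// Code node computes loss via a simplified Penman calculation.
function analyzeZone(zone, forecast) {
  const RAIN_SKIP_MM = 5;      // skip if >5 mm rain expected in 24 h
  const LOSS_PCT_PER_DAY = 4;  // stand-in for the Penman-based estimate
  if (forecast.rainNext24hMm > RAIN_SKIP_MM) {
    return { water: false, reason: 'rain-expected' };
  }
  // Estimate remaining soil moisture from days since the last watering.
  const moisture = Math.max(
    0,
    zone.lastMoisturePct - zone.daysSinceWatering * LOSS_PCT_PER_DAY
  );
  if (moisture >= zone.minMoisturePct) {
    return { water: false, reason: 'moist-enough' };
  }
  const urgency = moisture < zone.minMoisturePct / 2 ? 'high' : 'medium';
  return { water: true, urgency, deficitPct: zone.idealMoisturePct - moisture };
}

const zone = { lastMoisturePct: 60, daysSinceWatering: 10, minMoisturePct: 30, idealMoisturePct: 55 };
console.log(analyzeZone(zone, { rainNext24hMm: 1 }));
// { water: true, urgency: 'medium', deficitPct: 35 }
```

The high/medium urgency split mirrors the urgency classification described above; zones filtered out by the rain check flow to the Aggregate branch instead.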
by edisantosa
This n8n workflow is the data-ingestion pipeline for the "RAG System V2" chatbot. It automatically monitors a specific Google Drive folder for new files, processes them based on their type, and inserts their content into a Supabase vector database to make it searchable for the RAG agent.

**Key Features & Workflow**

1. Google Drive Trigger: the workflow starts automatically when a new file is created in a designated folder (named "DOCUMENTS" in this template).
2. Smart file handling: a Switch node routes the file based on its MIME type (e.g., PDF, Excel, Google Doc, Word doc) for correct processing.
3. Multi-format extraction:
   - PDF: text is extracted directly using the Extract PDF Text node.
   - Google Docs: files are downloaded, converted to plain text (text/plain), and processed by the Extract from Text File node.
   - Excel: data is extracted, aggregated, and concatenated into a single text block for embedding.
   - Word (.doc/.docx): Word files are automatically converted into Google Docs format using an HTTP Request. The newly created Google Doc then triggers the entire workflow again, ensuring it is processed correctly.
4. Chunking & metadata enrichment: the extracted text is split into manageable chunks using the Recursive Character Text Splitter (set to 2000-character chunks). The Enhanced Default Data Loader then enriches these chunks with crucial metadata from the original file, such as file_name, creator, and created_at.
5. Vectorization & storage: finally, the workflow uses OpenAI Embeddings to create vector representations of the text chunks and inserts them into the Supabase Vector Store.
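The chunking step behaves roughly like the simplified recursive splitter below: try to break on the coarsest separator (paragraphs, then lines, then words) that keeps each chunk under the size limit, and hard-cut only as a last resort. This is a sketch; the real Recursive Character Text Splitter also supports chunk overlap:

```javascript
// Simplified sketch of recursive character splitting. The real node
// also supports chunk overlap; this version only enforces the size cap.
function recursiveSplit(text, chunkSize = 2000, separators = ['\n\n', '\n', ' ', '']) {
  if (text.length <= chunkSize) return text.length ? [text] : [];
  const [sep, ...rest] = separators;
  if (sep === '') {
    // Last resort: hard cut at the size limit.
    const chunks = [];
    for (let i = 0; i < text.length; i += chunkSize) chunks.push(text.slice(i, i + chunkSize));
    return chunks;
  }
  const parts = text.split(sep);
  const chunks = [];
  let current = '';
  for (const part of parts) {
    const candidate = current ? current + sep + part : part;
    if (candidate.length <= chunkSize) {
      current = candidate; // keep packing pieces into the current chunk
    } else {
      if (current) chunks.push(current);
      if (part.length > chunkSize) {
        // This piece alone is too big: recurse with finer separators.
        chunks.push(...recursiveSplit(part, chunkSize, rest));
        current = '';
      } else {
        current = part;
      }
    }
  }
  if (current) chunks.push(current);
  return chunks;
}

const doc = 'Para one.\n\nPara two is a bit longer.\n\nPara three.';
console.log(recursiveSplit(doc, 25).length); // 3
```

Each resulting chunk is then what gets enriched with file_name/creator/created_at metadata and embedded, so the 2000-character setting directly controls retrieval granularity.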
by Sulieman Said
How to use the provided n8n workflow (step by step), what matters, what it's good for, and costs per run.

**What this workflow does (in simple terms)**

1. You write (or speak) your idea in Telegram.
2. The workflow builds two short prompts:
   - Image prompt → generates one thumbnail via KIE.ai - Nano Banana (Gemini 2.5 Flash Image).
   - Video prompt → starts a Veo-3 (KIE.ai) video job using the thumbnail as the init image.
3. You receive the thumbnail first, then the short video back in Telegram once rendering completes.

Typical output: 1 PNG thumbnail + 1 short MP4 video (e.g., 8–12 s, 9:16).

**Why this is useful**

- **Rapid ideation**: turn a quick text/voice idea into a ready-to-post thumbnail + matching short video.
- **Consistent look**: the video uses the thumbnail as the init image, keeping colors, objects, and mood consistent.
- **One chat = full pipeline**: everything happens directly inside Telegram, with no context switches.
- **Agency-ready**: collect ideas from client/team chats and deliver outputs quickly.

**What you need before importing**

1. KIE.ai account & API key
   - Sign up/in at KIE.ai and go to Dashboard → API / Keys.
   - Copy your KIE_API_KEY (keep it private).
2. Telegram bot (BotFather)
   - In Telegram, open @BotFather and send the /newbot command.
   - Choose a name and a unique username (must end with "bot").
   - Copy your bot token (keep it private).
3. Your Telegram chat ID (browser method)
   - Send any message to your bot so you have an active chat.
   - Open Telegram Web and the chat with the bot.
   - Find the chat ID in the URL.

**Import & minimal configuration (n8n)**

1. Import the provided workflow JSON in n8n.
2. Create credentials:
   - Telegram API: paste your bot token.
   - HTTP (KIE.ai): usually you'll pass Authorization: Bearer {{ $env.KIE_API_KEY }} directly in the HTTP Request node headers, or create a generic HTTP credential that injects the header.
3. Replace hardcoded values in the template:
   - Chat ID: use an expression like {{$json.message.chat.id}} from the Telegram Trigger (prefer dynamic over hardcoded IDs).
   - Authorization headers: never in query params; always in headers.
   - Content-Type spelling: Content-Type (no typos).

**How to run it (basic flow)**

1. Start the workflow (activate the trigger).
2. Send a message to your bot, e.g. "glass hourglass on a black mirror floor, minimal, elegant".
3. The bot replies with the thumbnail (PNG), then the Veo-3 video (MP4).

If you send a voice message, the flow will download and transcribe it first, then proceed as above.

**Pricing (rule of thumb)**

- **Image (Nano Banana via KIE.ai):** ~$0.02–$0.04 per image (plan-dependent).
- **Video (Veo-3 via KIE.ai):**
  - Fast: $0.40 per 8 seconds ($0.05/s)
  - Quality: $2.00 per 8 seconds ($0.25/s)
- Typical run (1 image + 8 s Fast video) ≈ $0.42–$0.44.

> These are indicative values. Check your KIE.ai dashboard for the latest pricing/quotas.

**Why KIE.ai over the "classic" Google API?**

- **Cheaper in practice** for short video clips and image generation in this pipeline.
- **One vendor** for both image & video (same auth, similar responses) = less integration hassle.
- **Quick start**: playground/tasks/status endpoints are n8n-friendly for polling workflows.

**Security & reliability tips**

- **Never hardcode** API keys or chat IDs into nodes; use **Credentials** or **environment variables**.
- Add IF + error paths after each HTTP node: if status != 200, send a friendly Telegram message ("Please try again") and log to admin.
- If you use callback URLs for video completion, ensure the URL is publicly reachable (n8n Webhook URL). Otherwise, stick to polling.
- For rate limits, add a Wait node and limit concurrency in workflow settings.
- Keep aspect ratio and duration consistent across the prompt and API calls to avoid unexpected crops.

**Advanced: voice input (optional)**

The template supports voice via Switch → Download → Transcribe (Whisper/OpenAI). Ensure your OpenAI credential is set and your n8n instance can fetch the audio file from Telegram.
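The polling approach mentioned above is just a loop with a delay and an attempt cap. A sketch where `checkStatus` stands in for the HTTP Request node querying the KIE.ai task-status endpoint; its `{ done, url }` response shape is an assumption for illustration:

```javascript
// Sketch of polling a video-render job until it completes. checkStatus
// stands in for the HTTP Request node hitting the KIE.ai status
// endpoint; its { done, url } shape is an assumption for illustration.
async function pollUntilDone(checkStatus, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status.done) return status.url;
    // Equivalent to the Wait node between status checks.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Render did not finish in time; notify the user and log to admin.');
}

// Demo with a fake status endpoint that completes on the third poll:
let calls = 0;
const fakeStatus = async () =>
  ++calls < 3 ? { done: false } : { done: true, url: 'https://example.com/video.mp4' };
pollUntilDone(fakeStatus, { intervalMs: 10 }).then((url) => console.log(url));
// https://example.com/video.mp4
```

In n8n this maps to an HTTP Request → IF → Wait loop; the `maxAttempts` cap plays the role of the error path that messages the user when a render stalls.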
**Example prompt patterns (keep it short & generic)**

- **Thumbnail prompt**: "Minimal, elegant, surreal [OBJECT], clean composition, 9:16"
- **Video prompt**: "Cinematic [OBJECT], slow camera move, elegant reflections, minimal & surreal mood, 9:16, 8–12 s."

You can later replace the simple prompt builder with a dedicated LLM step or a fixed style guide for your brand.

**Final notes**

- This template focuses on a solid, reliable pipeline first. You can always refine prompts later.
- Start with Veo-3 Fast to keep iteration costs low; switch to Quality for final renders.
- Consider saving outputs (S3/Drive) and logging prompts/URLs to a sheet for audit & analytics.

Questions or custom requests? 📩 suliemansaid.business@gmail.com