by Alok Kumar
📒 **Generate Product Requirements Document (PRD) and Test Scenarios from Form Input to PDF with OpenRouter and APITemplate.io**

This workflow generates a Product Requirements Document (PRD) and test scenarios from structured form inputs. It uses OpenRouter LLMs (GPT/Claude) for natural language generation and APITemplate.io for PDF export.

**Who's it for**
This template is designed for product managers, business analysts, QA teams, and startup founders who need to quickly create Product Requirement Documents (PRDs) and test cases from structured inputs.

**How it works**
- A Form Trigger collects key product details (name, overview, audience, goals, requirements).
- The LLM Chain (OpenRouter GPT/Claude) generates a professional, structured PRD in Markdown format.
- A second LLM Chain creates test scenarios and Gherkin-style test cases based on the PRD.
- Data is cleaned and merged using a Set node.
- The workflow sends the formatted document to APITemplate.io to generate a polished PDF.
- Finally, the workflow returns the PDF via a Form Completion node for easy download.

⚡ **Requirements**
- OpenRouter API Key (or any LLM)
- APITemplate.io account

🎯 **Use cases**
- Rapid PRD drafting for startups
- QA teams generating test scenarios automatically
- Standardized documentation workflows

👉 Customize by editing prompts, PDF templates, or extending with integrations (Slack, Notion, Confluence).

Need Help? Ask in the n8n Forum! Happy Automating with n8n! 🚀
by Max aka Mosheh
**How it works**
- Webhook triggers from content creation system in Airtable
- Downloads media (images/videos) from Airtable URLs
- Uploads media to Postiz cloud storage
- Schedules or publishes content across multiple platforms via Postiz API
- Tracks publishing status back to Airtable for reporting

**Set up steps**
- Sign up for Postiz account at https://postiz.com/?ref=max
- Connect your social media channels in Postiz dashboard
- Get channel IDs and API key from Postiz settings
- Add Postiz API key to n8n credentials (Header Auth)
- Update channel IDs in "Prepare for Publish" node
- Connect Airtable with your content database
- Customize scheduling times per platform as needed
- Full setup details in workflow sticky notes
by Maxim Osipovs
This n8n workflow template implements a dual-path architecture for AI customer support, based on the principles outlined in the research paper "A Locally Executable AI System for Improving Preoperative Patient Communication: A Multi-Domain Clinical Evaluation" (Sato et al.). The system, named LENOHA (Low Energy, No Hallucination, Leave No One Behind Architecture), uses a high-precision classifier to differentiate between high-stakes queries and casual conversation. Queries matching a known FAQ are answered with a pre-approved, verbatim response, structurally eliminating hallucination risk. All other queries are routed to a standard generative LLM for conversational flexibility.

This template provides a practical blueprint for building safer, more reliable, and cost-efficient AI agents, particularly in regulated or high-stakes domains where factual accuracy is critical.

**What This Template Does (Step-by-Step)**
1. Loads an expert-curated FAQ from Google Sheets and creates a searchable vector store from the questions during a one-time setup flow.
2. Receives incoming user queries in real time via a chat trigger.
3. Classifies user intent by converting the query to an embedding and searching the vector store for the most semantically similar FAQ question.
4. Routes the query down one of two paths based on a configurable similarity score threshold.
5. Responds with a verbatim, pre-approved answer if a match is found (safe path), or generates a conversational reply via an LLM if no match is found (casual path).

**Important Note for Production Use**
This template uses an in-memory Simple Vector Store for demonstration purposes. For a production application, this should be replaced with a persistent vector database (e.g., Pinecone, Chroma, Weaviate, Supabase) to store your embeddings permanently.
**Required Integrations:**
- Google Sheets (for the FAQ knowledge base)
- Hugging Face API (for creating embeddings)
- An LLM provider (e.g., OpenAI, Anthropic, Mistral)
- (Recommended) A persistent Vector Store integration

**Best For:**
- 🏦 Organizations in regulated industries (finance, healthcare) requiring high accuracy
- 💰 Applications where reducing LLM operational costs is a priority
- ⚙️ Technical support agents that must provide precise, unchanging information
- 🔒 Systems where auditability and deterministic responses for known issues are required

**Key Benefits:**
- ✅ Structurally eliminates hallucination risk for known topics
- ✅ Reduces reliance on expensive generative models for common queries
- ✅ Ensures deterministic, accurate, and consistent answers for your FAQ
- ✅ Provides high-speed classification via vector search
- ✅ Implements a research-backed architecture for building safer AI systems
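The classify-then-route step at the heart of this architecture can be sketched in a few lines of plain Python: embed the query, find the nearest FAQ question by cosine similarity, and only fall through to the LLM when no FAQ entry clears the threshold. The threshold value and data shapes below are illustrative assumptions, not the template's actual configuration.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def route_query(query_embedding, faq, threshold=0.85):
    """Return ('safe', verbatim_answer) if the closest FAQ question clears
    the similarity threshold, otherwise ('casual', None) to hand off to the
    generative LLM. `faq` is a list of (question_embedding, answer) pairs."""
    best_score, best_answer = -1.0, None
    for q_emb, answer in faq:
        score = cosine_similarity(query_embedding, q_emb)
        if score > best_score:
            best_score, best_answer = score, answer
    if best_score >= threshold:
        return ("safe", best_answer)   # verbatim, pre-approved response
    return ("casual", None)            # fall through to the generative LLM
```

Raising the threshold trades recall for safety: fewer queries hit the safe path, but those that do are near-exact matches. Tune it against a held-out set of paraphrased FAQ questions.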
by Davide
This workflow is a beginner-friendly tutorial demonstrating how to use the Evaluation tool to automatically score the AI's output against a known correct answer ("ground truth") stored in a Google Sheet.

**Advantages**
- ✅ Beginner-friendly – Provides a simple and clear structure to understand AI evaluation.
- ✅ Flexible input sources – Works with both Google Sheets datasets and manual test entries.
- ✅ Integrated with Google Gemini – Leverages a powerful AI model for text-based tasks.
- ✅ Tool usage – Demonstrates how an AI agent can call external tools (e.g., calculator) for accurate answers.
- ✅ Automated evaluation – Outputs are automatically compared against ground truth data for factual correctness.
- ✅ Scalable testing – Can handle multiple dataset rows, making it useful for structured AI model evaluation.
- ✅ Result tracking – Saves both answers and correctness scores back to Google Sheets for easy monitoring.

**How it Works**
The workflow operates in two distinct modes, determined by the trigger:

1. **Manual Test Mode**: Triggered by "When clicking 'Execute workflow'". It sends a fixed question ("How much is 8 * 3?") to the AI agent and returns the answer to the user. This mode is for quick, ad-hoc testing.
2. **Evaluation Mode**: Triggered by "When fetching a dataset row". This mode reads rows of data from a linked Google Sheet. Each row contains an input (a question) and an expected_output (the correct answer). It processes each row as follows:
   - The input question is sent to the AI Agent node.
   - The AI Agent, powered by a Google Gemini model and equipped with a Calculator tool, processes the question and generates an answer (output).
   - The workflow then checks if it's in evaluation mode. Instead of just returning the answer, it passes the AI's actual_output and the sheet's expected_output to another Evaluation node.
This node uses a second Google Gemini model as a "judge" to evaluate the factual correctness of the AI's answer compared to the expected one, generating a Correctness score on a scale from 1 to 5. Finally, both the AI's actual_output and the automated correctness score are written back to a new column in the same row of the Google Sheet.

**Set up Steps**
To use this workflow, you need to complete the following setup steps:

1. **Credentials Configuration**:
   - Set up the Google Sheets OAuth2 API credentials (named "Google Sheets account"). This allows n8n to read from and write to your Google Sheet.
   - Set up the Google Gemini (PaLM) API credentials (named "Google Gemini(PaLM) (Eure)"). This provides the AI language model capabilities for both the agent and the evaluator.
2. **Prepare Your Google Sheet**:
   - The workflow is pre-configured to use a specific Google Sheet. You must clone the provided template sheet (the URL is in the Sticky Note) to your own Google Drive.
   - In your cloned sheet, ensure you have at least two columns: one for the input/question (e.g., input) and one for the expected correct answer (e.g., expected_output). You may need to update the node parameters that reference $json.input and $json.expected_output to match your column names exactly.
3. **Update Document IDs**: After cloning the sheet, get its new Document ID from its URL and update the documentId field in all three nodes that reference it ("When fetching a dataset row", "Set output Evaluation", and "Set correctness") so they point to your new sheet instead of the original template.
4. **Activate the Workflow**: Once the credentials and sheet are configured, toggle the workflow to Active. You can then trigger a manual test run or set the "When fetching a dataset row" node to poll your sheet automatically to evaluate all rows.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
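For intuition, the judge step can be approximated locally for exact, calculator-style answers like the sample question. The sketch below is an assumption-laden stand-in, not the template's actual Gemini judge: it maps an exact normalized match to 5 and anything else to 1, whereas the real LLM judge can assign the intermediate 2–4 scores for partially correct answers.

```python
def normalize(text):
    """Strip whitespace, a trailing period, and case so that
    '24', ' 24. ' and '24 ' all compare equal."""
    return text.strip().rstrip(".").strip().lower()

def correctness_score(actual_output, expected_output):
    """Crude local approximation of the 1-5 Correctness judge:
    5 for an exact (normalized) match, 1 otherwise. A real LLM judge
    fills in the 2-4 range for partially correct answers."""
    if normalize(actual_output) == normalize(expected_output):
        return 5
    return 1
```

A cheap pre-check like this can also sit in front of the LLM judge to skip the second Gemini call whenever the answer matches exactly, reducing evaluation cost.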
by Automate With Marc
🤖 **Telegram Image Editor with Nano Banana**

Send an image to your Telegram bot, and this workflow will automatically enhance it with Google's Nano Banana (via Wavespeed API), then return the polished version back to the same chat—seamlessly.

👉 Watch step-by-step video tutorials of workflows like these on www.youtube.com/@automatewithmarc

**What it does**
- Listens on Telegram for incoming photo messages
- Downloads the file sent by the user
- Uploads it to Google Drive (temporary storage for processing)
- Sends the image to the Nano Banana API with a real-estate style cleanup + enhancement prompt
- Polls until the job is complete (handles async processing)
- Returns the edited image back to the same Telegram chat

**Perfect for**
- Real-estate agents previewing polished property photos instantly
- Social media managers editing on-the-fly from Telegram
- Anyone who wants a "send → cleaned → returned" image flow without manual edits

**Apps & Services**
- Telegram Bot API (trigger + send/receive files)
- Google Drive (temporary file storage)
- Wavespeed / Google Nano Banana (AI-powered image editing)

**Setup**
1. Connect your Telegram Bot API token in n8n.
2. Add your Wavespeed API key for Nano Banana.
3. Link your Google Drive account (temporary storage).
4. Deploy the workflow and send a test photo to your Telegram bot.

**Customization**
- Adjust the Nano Banana prompt for different styles (e.g., ecommerce cleanup, portrait retouching, color correction).
- Replace Google Drive with another storage service if preferred.
- Add logging to Google Sheets or Airtable to track edits.
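The "polls until the job is complete" step is a generic async-job pattern that n8n implements with a Wait node plus an If loop. A standalone sketch of the same logic, where `get_status` stands in for the Wavespeed job-status call (its name and the exact status strings are assumptions for illustration):

```python
import time

def poll_until_done(get_status, job_id, interval_s=5, max_attempts=60):
    """Repeatedly check an async job until it finishes or we give up.

    `get_status(job_id)` is assumed to return a dict like
    {"status": "processing"} or {"status": "completed", "output_url": "..."}."""
    for _ in range(max_attempts):
        result = get_status(job_id)
        if result.get("status") == "completed":
            return result                      # job done: hand back the output
        if result.get("status") == "failed":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(interval_s)                 # wait before the next check
    raise TimeoutError(f"job {job_id} did not finish in time")
```

Bounding the loop with `max_attempts` matters in n8n too: without a cap, a stuck render would keep the execution alive indefinitely.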
by Robert Breen
This n8n workflow automatically generates a custom YouTube thumbnail using OpenAI's DALL·E based on a YouTube video's transcript and title. It uses Apify actors to extract video metadata and transcript, then processes the data into a prompt for DALL·E and creates a high-resolution image for use as a thumbnail.

✅ **Key Features**
- 📥 **Form Trigger**: Accepts a YouTube URL from the user.
- 🧠 **GPT-4o Prompt Creation**: Summarizes transcript and title into a descriptive DALL·E prompt.
- 🎨 **DALL·E Image Generation**: Produces a clean, minimalist YouTube thumbnail with OpenAI's image model.
- 🪄 **Automatic Image Resizing**: Resizes final image to YouTube specs (1280x720).
- 🔍 **Apify Integration**: Uses two Apify actors:
  - Youtube-Transcript-Scraper to extract the transcript
  - youtube-scraper to get video metadata like title, channel, etc.

🧰 **What You'll Need**
- **OpenAI API Key**
- **Apify Account & API Token**
- **YouTube video URL**
- **n8n instance (cloud or self-hosted)**

🔧 **Step-by-Step Setup**

**1️⃣ Form & Parameter Assignment**
- **Node**: Form Trigger
- **How it works**: Collects the YouTube URL via a form embedded in your n8n instance.
- **API Required**: None
- **Additional Node**: Set. Converts the single input URL into the format Apify expects: an array of { url } objects.

**2️⃣ Apify Actors for Data Extraction**
- **Node**: HTTP Request (Query Metadata)
  - URL: https://api.apify.com/v2/acts/streamers~youtube-scraper/run-sync-get-dataset-items
  - Payload: JSON with a startUrls array and filtering options like maxResults, isHD, etc.
- **Node**: HTTP Request (Query Transcript)
  - URL: https://api.apify.com/v2/acts/topaz_sharingan~Youtube-Transcript-Scraper/run-sync-get-dataset-items
  - Payload: startUrls array
- **API Required**: Apify API Token (via HTTP Query Auth)
- **Notes**: You must have an Apify account and actor credits to use these actors.

**3️⃣ OpenAI GPT-4o & DALL·E Generation**
- **Node**: OpenAI (Prompt Creator) — uses the transcript and title to generate a DALL·E-compatible visual prompt.
- **Node**: OpenAI (Image Generator)
  - Resource: image
  - Model: DALL·E (default with GPT-4o key)
- **API Required**: OpenAI API Key
- **Prompt Strategy**: Create a minimalist YouTube thumbnail in an illustration style. The background should be a very simple, uncluttered setting with soft, ambient lighting that subtly reflects the essence of the transcript. The overall mood should be professional and non-cluttered, ensuring that the text overlay stands out without distraction. Do not include any text.

**4️⃣ Resize for YouTube Format**
- **Node**: Edit Image
- **Purpose**: Resize the final image to 1280x720 with ignoreAspectRatio set to true.
- **No API required**: this step runs entirely in n8n.

👤 **Created By**
Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn

🏷️ **Tags**
openai dalle youtube thumbnail generator apify ai automation image generation illustration prompt engineering gpt-4o
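The two HTTP Request nodes in step 2️⃣ both hit Apify's synchronous run-and-return endpoint, passing the token as a query parameter. Outside n8n, the same call shape looks roughly like this stdlib-only sketch (the commented example actor IDs match the ones above; the video URL is a placeholder):

```python
import json
import urllib.request

APIFY_BASE = "https://api.apify.com/v2/acts"

def build_start_urls(url):
    """Wrap a single YouTube URL in the startUrls shape the Apify actors expect."""
    return {"startUrls": [{"url": url}]}

def run_actor_sync(actor_id, payload, token):
    """POST to Apify's run-sync-get-dataset-items endpoint: runs the actor,
    waits for it to finish, and returns its dataset items as parsed JSON."""
    endpoint = f"{APIFY_BASE}/{actor_id}/run-sync-get-dataset-items?token={token}"
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires a valid Apify token and actor credits):
# items = run_actor_sync("topaz_sharingan~Youtube-Transcript-Scraper",
#                        build_start_urls("https://www.youtube.com/watch?v=..."),
#                        token="YOUR_APIFY_TOKEN")
```

Note that the synchronous endpoint blocks until the actor run finishes, which is why no separate polling step is needed in this workflow.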
by Robert Schröder
**AI Image Generation Workflow for Social Media Content**

**Overview**
This n8n workflow automates the creation of photorealistic AI-generated images for social media content. The workflow uses RunComfy (a ComfyUI cloud service) combined with Airtable for data management to create high-quality images based on custom prompts and LoRa models.

**Key Features**
- **Automated Image Generation**: Creates photorealistic images using the Flux Realism model and custom LoRa models
- **Airtable Integration**: Centrally manages content requests, model information, and image status
- **Cloud-based Processing**: Utilizes RunComfy servers for powerful GPU processing without local hardware requirements
- **Status Tracking**: Monitors the generation process and automatically updates database entries
- **Telegram Notifications**: Sends success notifications after image completion

**Technical Workflow**
1. **Server Initialization**: Starts a RunComfy server with the configured specifications
2. **Data Retrieval**: Fetches content requests from the Airtable database
3. **Image Generation**: Sends prompts to ComfyUI with Flux Realism + LoRa models
4. **Status Monitoring**: Checks generation progress in 30-second intervals
5. **Download**: Downloads completed images
6. **Database Update**: Updates Airtable with image links and status
7. **Server Cleanup**: Deletes the RunComfy server for cost optimization

**Prerequisites**
- **RunComfy Membership** with API access
- **Airtable Account** with configured database
- **Telegram Bot** for notifications
- **Flux Realism Workflow** in RunComfy library
- **Uploaded LoRa Models** in RunComfy

**Airtable Schema**
The database must contain these fields:
- topic: Content description
- pose_1: Detailed image prompt
- LoRa Name Flux: LoRa model name
- Model: Character name
- pose_1_drive_fotolink: Link to generated image
- Bilder erstellt: Generation status

**Configuration Options**
- Image Resolution: Default 832x1216px (adjustable in ComfyUI parameters)
- Generation Parameters: 35 steps, Euler sampler, Guidance 2.0
- Server Size: "Large" for optimal performance (adjustable based on requirements)
- Time Intervals: 30s status checks, 50s server initialization

This workflow is ideal for content creators who need regular, high-quality, character-consistent images for social media campaigns.
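The start-server / generate / delete-server sequence above is a classic acquire-release pattern: the paid GPU server must be deleted even when generation fails mid-run. A context-manager sketch of that guarantee, where `start_server` and `delete_server` are injected stand-ins for the RunComfy API calls (which are not reproduced here):

```python
from contextlib import contextmanager

@contextmanager
def gpu_server(start_server, delete_server, size="Large"):
    """Guarantee the cloud GPU server is deleted even if image generation
    raises, mirroring the workflow's Server Cleanup cost-optimization step."""
    server_id = start_server(size)
    try:
        yield server_id
    finally:
        delete_server(server_id)   # always runs: no orphaned paid servers
```

In n8n the equivalent safeguard is wiring the delete-server HTTP Request onto the error path as well as the success path, so a failed render doesn't leave a billable server running.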
by Zain Khan
**AI Product Photography With Nano Banana and Jotform 📸✨**

Automate your product visuals! This n8n workflow instantly processes new product photography requests from Jotform or Google Sheets, uses an AI agent (Gemini Nano Banana) to generate professional AI product photography based on your product details and reference images, saves the final image to Google Drive, and updates the photo link in your Google Sheet for seamless record keeping.

**How it Works**
This n8n workflow operates as a fully automated pipeline for generating and managing AI product photographs:

1. **Trigger**: The workflow is triggered either manually, on a set schedule (e.g., hourly), or immediately upon a new submission from the connected Jotform (or when new "Pending" rows are detected in the Google Sheet on a scheduled or manual run).
2. **Data Retrieval**: If triggered by a schedule or manually, the workflow fetches new rows with a "Status" of "Pending" from the designated Google Sheet.
3. **Data Preparation**: The input data (Product Name, Description, Requirements, and URLs for the Product and Reference Images) is prepared. The Product and Reference Images are downloaded using HTTP Requests.
4. **AI Analysis & Prompt Generation**: An AI agent (using the Gemini model) analyzes the product details and image requirements, then generates a refined, professional prompt for the image generation model.
5. **AI Photo Generation**: The generated prompt, along with the downloaded product and reference images, is sent to the image generation model, referred to as "Gemini Nano Banana" (a powerful Google AI model for image generation), to create the final, high-quality AI product photograph.
6. **File Handling**: The raw image data is converted into a binary file format.
7. **Storage**: The generated photograph is saved with the Product Name as the filename to your specified Google Drive folder.
8. **Record Update**: The workflow updates the original row in the Google Sheet, changing the "Status" to "Completed" and adding the public URL of the newly saved image in the "Generated Image" column. If the trigger was from Jotform, a new record is appended to the Google Sheet.

**Requirements**
To use this workflow, you'll need the following accounts and credentials configured in n8n:

- **n8n Account**: Your self-hosted or cloud n8n instance.
- **Google Sheets/Drive Credentials**: An OAuth2 or API Key credential for the Google Sheets and Google Drive nodes to read input and save the generated image.
- **Google Gemini API Key**: An API key for the Google Gemini nodes to access the AI agent for prompt generation and the image generation service (Gemini Nano Banana).
- **Jotform Credential (Optional)**: A Jotform credential is only required if you want to use the Jotform Webhook trigger. Sign up for Jotform here: https://www.jotform.com/?partner=zainurrehman
- **A Google Sheet and Jotform** with columns/fields for: Product Name, Product Description, Product Image (URL), Requirement, Reference Image 1 (URL), Reference Image 2 (URL), Status, and a blank Generated Image column.

**How to Use**

**1. Set Up Your Integrations**
- Add the necessary credentials (Google Sheets, Google Drive, Gemini API, and optionally Jotform) in your n8n settings.
- Specify the Google Sheet Document ID and Sheet Name in the Google Sheet nodes.
- In the Upload to Drive node, select your desired Drive ID and Folder ID where the final images should be saved.

**2. Prepare Input Data**
You can start the workflow either by:
- **Submitting a Form**: Fill out and submit the connected **Jotform** with the product details and image links.
- **Adding to a Sheet**: Manually add a new row to your Google Sheet with all the product and image details, ensuring the **Status** is set to **"Pending"**.

**3. Run the Workflow**
- **For the Jotform Trigger**: Once the workflow is **Active**, a Jotform submission will automatically start the process.
- **For the Scheduled/Manual Trigger**: Activate the **Schedule Trigger** for automatic runs (e.g., hourly), or click the **Manual Trigger** node and select **"Execute Workflow"** to process all current "Pending" requests in the Google Sheet.

The generated photograph will be uploaded to Google Drive, and its link will be automatically recorded in the "Generated Image" column in your Google Sheet.
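The scheduled path boils down to "select every row whose Status is Pending, process it, then flip the row to Completed with the Drive link". A minimal sketch of that row-handling logic, using the column names from the sheet described above:

```python
def pending_rows(rows):
    """Select sheet rows that still need a generated photo."""
    return [row for row in rows if row.get("Status") == "Pending"]

def mark_completed(row, image_url):
    """Mimic the final Record Update step: flip the status and
    store the public Drive link in the Generated Image column."""
    row["Status"] = "Completed"
    row["Generated Image"] = image_url
    return row
```

Filtering on Status before processing is what makes repeated scheduled runs safe: already-completed rows are skipped, so the workflow never regenerates or double-bills for the same product.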
by Sabrina Ramonov 🍄
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Description**
This fully automated AI Avatar Social Media system creates talking head AI clone videos WITHOUT having to film or edit yourself. It combines n8n, an AI agent, HeyGen, and Blotato to research, create, and distribute talking head AI clone videos to every social media platform every single day.

This template is ideal for content creators, social media managers, social media agencies, small businesses, and marketers who want to scale short-form video creation without manually filming and editing every single video.

**Overview**

**1. Trigger: Schedule**
- Configured to run once daily at 10am

**2. AI News Research**
- Research viral news from the tech-focused forum Hacker News
- Fetch the selected news item, plus discussion comments

**3. AI Writer**
- AI writes a 30-second monologue script
- AI writes a short video caption

**4. Create Avatar Video**
- Call the HeyGen API (requires a paid API plan), specifying your avatar ID and voice ID
- Create the avatar video, optionally passing in an image/video background if you have a green screen avatar (matte: true)

**5. Get Video**
- Wait awhile, then fetch the completed avatar video
- Upload the video to Blotato

**6. Publish to Social Media via Blotato**
- Connect your Blotato account
- Choose your social accounts
- Either post immediately or schedule for later

📄 **Documentation**
Full Tutorial

**Troubleshooting**
Check your Blotato API Dashboard to see every request, response, and error. Click on a request to see the details.

**Need Help?**
In the Blotato web app, click the orange button on the bottom right corner. This opens the Support messenger where I help answer technical questions.
by Marth
**Workflow Description: Automated YouTube Short Viral History (Blotato + GPT-4.1)**

This workflow is a powerful, self-sustaining, end-to-end content automation pipeline designed to feed your YouTube Shorts channel with consistent, high-quality, and highly engaging videos focused on "What if history..." scenarios. This solution completely eliminates manual intervention across the creative, production, and publishing stages. It links the creative power of a GPT-4o AI Agent with the video rendering capabilities of the Blotato API, all orchestrated by n8n.

**How It Works**
The automation runs through a five-step, scheduled process:

1. **Trigger and Idea Generation**: The Schedule Trigger starts the workflow (default is 10:00 AM daily). The AI Agent (GPT-4o) acts as a copywriter/researcher, automatically brainstorming a random "What if history..." topic, researching relevant facts, and formulating a viral, hook-driven 60-second video script, along with a title and caption.
2. **Visual Production Request**: The formatted script is sent to the Blotato API via the Create Video node. Blotato begins rendering the text-to-video short based on the pre-set style parameters (cinematic style, specific voice ID, and AI models).
3. **Status Check and Wait**: The Wait node pauses the workflow, and the Get Video node continually checks the Blotato system until the video rendering status is confirmed as done.
4. **Media Upload**: The completed video file is uploaded to the Blotato media library using an HTTP Request node, preparing it for publishing.
5. **Automated Publishing**: The final YT Post node (another HTTP Request to the Blotato API) automatically publishes the video to your linked YouTube channel, using the video URL and the AI-generated title and short caption.

**Set Up Steps**
To activate and personalize this content pipeline in n8n, follow these steps:

1. **OpenAI Credential**: Ensure your OpenAI API key credential is created and connected to the Brainstorm Idea node (Language Model). The workflow uses GPT-4o by default.
2. **Blotato API Key**: Obtain your Blotato API Key. Open the Prepare Video node and manually insert your Blotato API Key into the blotato_api_key field.
3. **YouTube Account ID**: Find the Account ID (or Channel ID) for the YouTube channel you want to post to. Open the Prepare for Publish node and manually insert your YouTube Account ID into the youtube_id field.
4. **Customize Video Style (Optional)**: If desired, adjust the visual aesthetic by modifying parameters in the Prepare Video node, such as:
   - voiceId: To change the video narrator.
   - style: To change the visual theme (e.g., from cinematic to documentary).
   - text_to_image_model and image_to_video_model: To change the underlying AI generation models.
5. **Activate Workflow**: Save the workflow and toggle the main switch to Active. The first video will be created and published on the next scheduled run.
by Evoort Solutions
📥 **TikTok to MP4 Converter with Google Drive & Sheets**

Convert TikTok videos to MP4 or MP3 (without watermark), upload them to Google Drive, and log conversion attempts into Google Sheets automatically — powered by the TikTok Download Audio Video API.

📝 **Description**
This n8n automation accepts a TikTok video URL via a form, sends it to the TikTok Download Audio Video API, downloads the watermark-free MP4, uploads it to Google Drive, and logs the result (success/failure) into Google Sheets.

🧩 **Node-by-Node Overview**

| # | Node | Functionality |
|---|------|---------------|
| 1 | 🟢 Form Trigger | Displays a form for user input of the TikTok video URL. |
| 2 | 🌐 TikTok RapidAPI Request | Calls the TikTok Downloader API to get the MP4 link. |
| 3 | 🔍 If Condition | Checks if the API response status is "success". |
| 4 | ⬇️ MP4 Downloader | Downloads the video file using the returned "no watermark" MP4 URL. |
| 5 | ☁️ Upload to Google Drive | Uploads the video file to the Google Drive root folder. |
| 6 | 🔑 Set Google Drive Permission | Makes the file publicly shareable via link. |
| 7 | 📄 Google Sheets (Success) | Logs the TikTok URL + public Drive link into a Google Sheet. |
| 8 | ⏱️ Wait Node | Delays to prevent rapid write operations on error. |
| 9 | 📑 Google Sheets (Failure) | Logs failed attempts with Drive_URL = N/A. |

✅ **Use Cases**
- 📲 Social media managers downloading user-generated content
- 🧠 Educators saving TikTok content for offline lessons
- 💼 Agencies automating short-form video curation
- 🤖 Workflow automation demonstrations with n8n

🎯 **Key Benefits**
- ✔️ MP4 without watermark via the TikTok Download Audio Video API
- ✔️ Automated Google Drive upload & shareable links
- ✔️ Centralized logging in Google Sheets
- ✔️ Error handling and retry-safe structure
- ✔️ Fully customizable and extendable within n8n

💡 Ideal for anyone looking to automate TikTok video archiving with full control over file storage and access.
🔐 **How to Get Your API Key for the TikTok Download Audio Video API**

1. Go to 👉 TikTok Download Audio Video API - RapidAPI
2. Click "Subscribe to Test" (you may need to sign up or log in).
3. Choose a pricing plan (there's a free tier for testing).
4. After subscribing, click on the "Endpoints" tab.
5. Your API Key will be visible in the "x-rapidapi-key" header. 🔑
6. Copy and paste this key into the httpRequest node in your workflow.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
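In n8n the key goes into the httpRequest node's header parameters. Outside n8n, the same call shape looks like this stdlib-only sketch; the `x-rapidapi-key` / `x-rapidapi-host` headers are RapidAPI's standard auth convention, but the host and endpoint path below are placeholders (copy the real values from the API's "Endpoints" tab):

```python
import json
import urllib.parse
import urllib.request

def rapidapi_headers(api_key, host):
    """RapidAPI's standard auth headers: the x-rapidapi-key you copied
    from the 'Endpoints' tab, plus the API's host."""
    return {"x-rapidapi-key": api_key, "x-rapidapi-host": host}

def fetch_video_info(api_key, tiktok_url, host, path="/download"):
    """GET the downloader endpoint for one TikTok URL and return parsed JSON.
    `host` and `path` are illustrative placeholders; use the values shown
    on the API's RapidAPI page."""
    query = urllib.parse.urlencode({"url": tiktok_url})
    req = urllib.request.Request(
        f"https://{host}{path}?{query}",
        headers=rapidapi_headers(api_key, host),
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The If Condition node in the workflow then inspects the returned JSON's status field before handing the watermark-free MP4 URL to the downloader node.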
by Kai S. Huxmann
**Objective**
This template helps you create clean, structured, and visually understandable workflows that are easy to read, present to clients, and collaborate on with teams. Whether you're onboarding a client, building reusable automations, or working across a team, this template gives you a solid foundation for workflow visual design and communication.

✨ **What's inside?**
- ✅ Visual layout structure suggestion
- ✅ Clear segmentation into basic functional parts
- ✅ Color coding suggestion to define the meaning of colors

🎨 **Color-coded nodes (with a built-in legend):**
- 🟩 Green → Operational and stable
- 🟨 Yellow → Work in progress
- 🟥 Red → Failing / error
- 🟧 Orange → Needs review or improvement
- 🟦 Blue → User input required
- ⬛ Dark grey → Deprecated or paused

👥 **Who is this for?**
This template is ideal for:
- 🔧 Freelancers or agencies delivering workflows to clients
- 👥 Teams working together on large-scale automations
- 🧱 Anyone creating reusable templates or internal standards
- 🧑‍🎓 Beginners who want to learn clean visual patterns that support an easy-to-maintain code base

📸 **Why use this?**
> "A workflow should explain itself visually – this template helps it do just that."

- Better team collaboration
- Easier onboarding of new developers
- **Faster understanding** for clients, even non-technical ones
- Reduces maintenance time in the long run

📌 **How to use**
- Clone this template and start from it when creating new workflows
- Keep color conventions consistent (especially in early project stages)
- Use it to build a visual standard across your team or organization

🚧 **Reminder**
This is a non-functional template — it contains structure, patterns, and documentation examples only. Replace the example nodes with your own logic.