by Guillaume Duvernay
Move beyond generic AI-generated content and create articles that are high-quality, factually reliable, and aligned with your unique expertise. This template orchestrates a sophisticated "research-first" content creation process. Instead of simply asking an AI to write an article from scratch, it first uses an AI planner to break your topic down into logical sub-questions. It then queries a Super assistant—which you've connected to your own trusted knowledge sources like Notion, Google Drive, or PDFs—to build a comprehensive research brief. Only then is this fact-checked brief handed to a powerful AI writer to compose the final article, complete with source links. This is the ultimate workflow for scaling expert-level content creation.

**Who is this for?**

- **Content marketers & SEO specialists:** Scale the creation of authoritative, expert-level blog posts that are grounded in factual, source-based information.
- **Technical writers & subject matter experts:** Transform your complex internal documentation into accessible public-facing articles, tutorials, and guides.
- **Marketing agencies:** Quickly generate high-quality, well-researched drafts for clients by connecting the workflow to their provided brand and product materials.

**What problem does this solve?**

- **Reduces AI "hallucinations":** By grounding the entire writing process in your own trusted knowledge base, the AI generates content based on facts you provide, not on potentially incorrect information from its general training data.
- **Ensures comprehensive topic coverage:** The initial AI-powered "topic breakdown" step acts like an expert outliner, ensuring the final article is well-structured and covers all key sub-topics.
- **Automates source citation:** The workflow is designed to preserve and integrate source URLs from your knowledge base directly into the final article as hyperlinks, boosting credibility and saving you manual effort.
- **Scales expert content creation:** It effectively mimics the workflow of a human expert (outline, research, consolidate, write) but in an automated, scalable, and incredibly fast way.

**How it works**

This workflow follows a sophisticated, multi-step process to ensure the highest quality output:

1. **Decomposition:** You provide an article title and guidelines via the built-in form. An initial AI call then acts as a "planner," breaking down the main topic into an array of 5-8 logical sub-questions.
2. **Fact-based research (RAG):** The workflow loops through each of these sub-questions and queries your Super assistant. This assistant, which you have pre-configured and connected to your own knowledge sources (Notion pages, Google Drive folders, PDFs, etc.), finds the relevant information and source links for each point.
3. **Consolidation:** All the retrieved question-and-answer pairs are compiled into a single, comprehensive research brief.
4. **Final article generation:** This complete, fact-checked brief is handed to a final, powerful AI writer (e.g., GPT-5). Its instructions are clear: write a high-quality article using only the provided information and integrate the source links as hyperlinks where appropriate.

**Implementing the template**

1. **Set up your Super assistant (prerequisite):** First, go to Super, create an assistant, connect it to your knowledge sources (Notion, Drive, etc.), and copy its Assistant ID and your API Token.
2. **Configure the workflow:** Connect your AI provider (e.g., OpenAI) credentials to the two Language Model nodes (GPT 5 mini and GPT 5 chat). In the *Query Super Assistant* (HTTP Request) node, paste your Assistant ID in the body and add your Super API Token for authentication (we recommend using a Bearer Token credential). A sketch of this call follows below.
3. **Activate the workflow:** Toggle the workflow to "Active" and use the built-in form to generate your first fact-checked article!

**Taking it further**

- **Automate publishing:** Connect the final *Article result* node to a *Webflow* or *WordPress* node to automatically create a draft post in your CMS.
- **Generate content in bulk:** Replace the *Form Trigger* with an *Airtable* or *Google Sheet* trigger to automatically generate a whole batch of articles from your content calendar.
- **Customize the writing style:** Tweak the system prompt in the final *New content - Generate the AI output* node to match your brand's specific tone of voice, add SEO keywords, or include specific calls-to-action.
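For reference, here is a minimal TypeScript sketch of what the *Query Super Assistant* call looks like from the outside. The endpoint URL and payload field names are assumptions for illustration, not Super's documented schema; only the Bearer-token authentication pattern is prescribed by the template.

```typescript
// Hypothetical sketch of the "Query Super Assistant" HTTP call.
// ASSUMPTIONS: the endpoint URL and body field names are illustrative;
// consult Super's API documentation for the real schema.
const SUPER_API_TOKEN = process.env.SUPER_API_TOKEN!; // your Super API token
const ASSISTANT_ID = "your-assistant-id";             // copied from Super

async function querySuperAssistant(question: string): Promise<unknown> {
  const res = await fetch("https://api.super.example/v1/assistants/query", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${SUPER_API_TOKEN}`, // the Bearer Token credential
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ assistant_id: ASSISTANT_ID, query: question }),
  });
  if (!res.ok) throw new Error(`Super API error: ${res.status}`);
  return res.json(); // answer text plus source links, per the workflow design
}
```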
by AttenSys AI
🧥 Virtual Try-On Image & Video Generation (VLM Run)

📌 Overview

This n8n workflow enables a Virtual Try-On experience where users upload a dress image and the system:

- Combines it with a fashion model image
- Generates a realistic try-on image
- Generates a fashion walking video
- Produces secure pre-signed download URLs
- Automatically shares results via Telegram, Discord, and YouTube

🚀 Use Cases

- Virtual fashion try-on
- AI fashion marketing
- Clothing e-commerce previews
- Social media fashion automation
- Influencer & brand demo pipelines

✨ Key Features

- 🖼️ Image-based virtual try-on (model wearing the dress)
- 🎥 AI-generated fashion video
- 🔗 Multi-platform publishing (Telegram, Discord, YouTube)
- 🧩 Modular, extensible workflow design

🧠 Workflow Architecture

🟨 Input

- **Dress Image** – uploaded by the user (Form Trigger)
- **Model Image** – downloaded from a predefined URL
- **Prompt** – auto-constructed inside the workflow

🟦 Output

- 🖼️ Try-On Image (pre-signed download link)
- 🎥 Fashion Walk Video (pre-signed download link)
- 📤 Shared to: Telegram (image/video), Discord (image embed), YouTube (video upload)

🔐 Required Credentials

You must configure the following credentials in n8n:

| Service  | Credential Type  |
| -------- | ---------------- |
| VLM Run  | VLM Run API      |
| Telegram | Telegram Bot API |
| Discord  | Discord OAuth2   |
| YouTube  | YouTube OAuth2   |

⚠️ Community Node Warning

> Important: This workflow uses a Community Node: @vlm-run/n8n-nodes-vlmrun

What this means:

- This node is NOT installed by default in n8n
- You must manually install it before using the workflow

📦 Installation

Run the following command in your n8n environment, then restart n8n:

npm install @vlm-run/n8n-nodes-vlmrun

📖 Community Nodes Documentation: https://docs.n8n.io/integrations/community-nodes/
by Dr. Christoph Schorsch
Rename Workflow Nodes with AI for Clarity

This workflow automates the tedious process of renaming nodes in your n8n workflows. Instead of manually editing each node, it uses an AI language model to analyze its function and assign a concise, descriptive new name. This ensures your workflows are clean, readable, and easy to maintain.

**Who's it for?**

This template is perfect for n8n developers and power users who build complex workflows. If you often find yourself struggling to understand the purpose of different nodes at a glance or spend too much time manually renaming them for documentation, this tool will save you significant time and effort.

**How it works / What it does**

The workflow operates in a simple, automated sequence:

1. **Configure Suffix:** A "Set" node at the beginning allows you to easily define the suffix that will be appended to the new workflow's name (e.g., "- new node names").
2. **Fetch Workflow:** It then fetches the JSON data of a specified n8n workflow using its ID.
3. **AI-Powered Renaming:** The workflow's JSON is sent to an AI model (like Google Gemini or Anthropic Claude), which has been prompted to act as an n8n expert. The AI analyzes the type and parameters of each node to understand its function.
4. **Generate New Names:** Based on this analysis, the AI proposes new, meaningful names and returns them in a structured JSON format.
5. **Update and Recreate:** A Code node processes these suggestions, updates all node names, and correctly rebuilds the connections and expressions (see the sketch below).
6. **Create & Activate New Workflow:** Finally, it creates a new workflow with the updated name, deactivates the original to avoid confusion, and activates the new version.
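For the curious, here is a minimal sketch of the Code node's rename step, assuming n8n's standard workflow JSON shape (a `nodes` array plus a `connections` object keyed by node name). Rewriting expressions that reference old node names (e.g. `$('Old Name')`) is omitted for brevity.

```typescript
// Minimal sketch: apply an AI-proposed name map to a workflow's nodes
// and rebuild the connections object so edges point at the new names.
// `renameMap` is assumed to come from the AI model's structured output.
type Connection = { node: string; type: string; index: number };
type Workflow = {
  nodes: Array<{ name: string; [key: string]: unknown }>;
  connections: Record<string, { main?: Connection[][] }>;
};

function renameNodes(wf: Workflow, renameMap: Record<string, string>): Workflow {
  // Rename each node, falling back to the old name if no suggestion exists.
  const nodes = wf.nodes.map((n) => ({ ...n, name: renameMap[n.name] ?? n.name }));
  // Rebuild connections: both the source keys and the target references.
  const connections: Workflow["connections"] = {};
  for (const [source, outputs] of Object.entries(wf.connections)) {
    connections[renameMap[source] ?? source] = {
      main: outputs.main?.map((port) =>
        port.map((c) => ({ ...c, node: renameMap[c.node] ?? c.node }))
      ),
    };
  }
  return { ...wf, nodes, connections };
}
```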
by Panth1823
AI Workflow Description and Template Generator

This workflow automates the creation of professional documentation and template-ready sticky notes for any n8n workflow using AI.

**How it works**

1. Receives an n8n workflow JSON file via Telegram
2. Validates the input file type and extracts workflow data
3. Scrubs sensitive information and analyzes workflow structure (a sketch of the scrubbing step follows below)
4. Uses Google Gemini AI to generate comprehensive documentation
5. Assembles a complete template with a main workflow sticky note and logical section stickies
6. Sends back the documented workflow file, usage checklist, and setup guide via Telegram

**Setup**

- Configure Telegram Trigger credentials for receiving files
- Configure Telegram API credentials for sending messages
- Configure Google Gemini Chat Model (Google PaLM API) credentials

**Customization**

Adjust the prompt in the "AI Template Generator" node to modify documentation style, detail level, or specific requirements for your use case.
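As an illustration of the scrubbing step, the sketch below drops credential bindings and redacts suspicious-looking parameter values before the JSON reaches the AI. The redaction rules here are assumptions; tighten them for your own data.

```typescript
// Illustrative scrubbing pass over an n8n workflow JSON. The field names
// follow n8n's workflow schema; the redaction heuristics are assumptions.
function scrubWorkflow(wf: any): any {
  const clone = JSON.parse(JSON.stringify(wf)); // never mutate the original
  for (const node of clone.nodes ?? []) {
    delete node.credentials; // drop credential bindings entirely
    for (const [key, value] of Object.entries(node.parameters ?? {})) {
      // Redact string parameters whose names hint at secrets.
      if (typeof value === "string" && /token|secret|key|password/i.test(key)) {
        node.parameters[key] = "[REDACTED]";
      }
    }
  }
  return clone;
}
```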
by Jimleuk
Generating contextual summaries is a token-intensive approach to RAG embeddings that can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model: they charge a flat subscription fee (starting from $10) and allow unlimited token usage instead. If you typically spend over $10-$25 a month, you may find Featherless to be a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at high volumes for a majority of RAG workloads.

- LLM: moonshotai/Kimi-K2-Instruct
- Embeddings: models/gemini-embedding-001

**How it works**

- A large document is imported into the workflow using the HTTP node and its text extracted via the Extract from file node. For this demonstration, the UK highway code is used as an example.
- Each page is processed individually and a contextual summary is generated for it. Contextual summary generation involves taking the current page together with the preceding and following pages and summarising the contents of the current page.
- This summary is then converted to embeddings using the gemini-embedding-001 model. Note: we're using an HTTP request to call the Gemini embedding API because, at the time of writing, n8n does not support the new API's schema (a sketch of this call follows below).
- These embeddings are then stored in a Qdrant collection which can be retrieved via an agent/MCP server or another workflow.

**How to use**

- Replace the large document import with your own source of documents, such as Google Drive or an internal repo.
- Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
- Expand and/or tune embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

**Requirements**

- Featherless.ai account and API key
- Gemini account and API key for embeddings
- Qdrant vector store

**Customising this workflow**

- Sparse vectors were not included in this template due to scope, but they should be the next step to getting the most out of contextual retrieval.
- Be sure to explore other models on the Featherless.ai platform or host your own custom/finetuned models.
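The sketch below shows the rough shape of that raw HTTP request to the Gemini embedding API; verify the payload against Google's current `embedContent` reference before relying on it, as the schema may evolve.

```typescript
// Sketch of the raw HTTP call to the Gemini embedding API that the
// template uses in place of n8n's built-in embeddings node. The request
// shape follows Google's embedContent endpoint; double-check the current
// API reference, since this is written from memory.
async function embedSummary(text: string): Promise<number[]> {
  const url =
    "https://generativelanguage.googleapis.com/v1beta/models/" +
    `gemini-embedding-001:embedContent?key=${process.env.GEMINI_API_KEY}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "models/gemini-embedding-001",
      content: { parts: [{ text }] },
    }),
  });
  if (!res.ok) throw new Error(`Gemini embedding error: ${res.status}`);
  const data = await res.json();
  return data.embedding.values; // the vector to upsert into Qdrant
}
```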
by Dahiana
**Description**

Who's it for: Content creators, marketers, and businesses who publish on both YouTube and blog platforms.

What it does: Monitors your YouTube channel for new videos and automatically creates SEO-optimized blog posts using AI, then publishes them to WordPress or Webflow.

**How it works**

1. RSS Feed Trigger polls your YouTube channel's feed at a configurable interval (a sketch of the feed is shown below)
2. Extracts video metadata (title, description, thumbnail)
3. YouTube node extracts the full description for extra context
4. Uses OpenAI (you can choose any model) to generate a 600-800 word blog post
5. Publishes to WordPress and/or Webflow with error handling
6. Sends notifications to Telegram if publishing fails

**Requirements**

- YouTube channel ID (avoid tutorial channels for better results)
- OpenAI API key (or similar)
- WordPress or Webflow credentials
- Telegram bot (optional, for error notifications)

**Setup steps**

1. Replace YOUR_CHANNEL_ID in the RSS Feed Trigger
2. Add OpenAI credentials in the AI generation node
3. Configure WordPress and/or Webflow credentials
4. Add a Telegram bot for error notifications (optional). If you choose to set up Telegram, you need to input your channel ID.
5. Test with manual execution first

**Customization**

- Modify the AI prompt for different content styles
- Adjust the polling frequency (30-60 minutes recommended)
- Add more CMS platforms
- Add content verification (e.g., is the content longer than 600 characters? If not, regenerate or improve it)
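For context, the RSS Feed Trigger polls YouTube's per-channel Atom feed, which lives at a stable URL. The sketch below fetches and naively parses it; the regex-based parsing is purely illustrative, since n8n's trigger node handles this for you.

```typescript
// Sketch of the feed the RSS Trigger polls. Replace YOUR_CHANNEL_ID as
// in the setup steps. The regex parsing is a simplification for
// illustration; use a real XML parser in production code.
const feedUrl =
  "https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID";

async function latestVideos(): Promise<Array<{ title: string; link: string }>> {
  const xml = await (await fetch(feedUrl)).text();
  // Pull <title> and <link href="..."> pairs out of each <entry>.
  const entries = [...xml.matchAll(/<entry>[\s\S]*?<\/entry>/g)];
  return entries.map((m) => ({
    title: m[0].match(/<title>(.*?)<\/title>/)?.[1] ?? "",
    link: m[0].match(/<link rel="alternate" href="(.*?)"/)?.[1] ?? "",
  }));
}
```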
by Jimleuk
Cohere's new multimodal model releases make building your own Vision RAG agents a breeze. If you're new to multimodal RAG: for the purposes of this template, it means embedding and retrieving only the document scans relevant to a query, then having a vision model read those scans to answer. The benefits are that (1) the vision model doesn't need to keep all document scans in context (expensive) and (2) you gain the ability to query graphical content such as charts, graphs and tables.

**How it works**

- Page extracts from a technology report containing graphs and charts are downloaded, converted to base64 and embedded using Cohere's Embed v4 model (a sketch of this call follows below). This produces embedding vectors which we associate with the original page URL and store in our Qdrant vector store collection using the Qdrant community node.
- Our Vision RAG agent is split into 2 parts: one regular AI agent for chat and a second Q&A agent powered by Cohere's Command A Vision model, which is required to read the contents of images.
- When a query requires access to the technology report, the Q&A agent branch is activated. This branch performs a vector search on our image embeddings and returns a list of matching image URLs. These URLs are then used as input for our vision model along with the user's original query.
- The Q&A vision agent can then reply to the user using the "respond to chat" node. Because both agents share the same memory space, it appears as one conversation to the user.

**How to use**

- Ensure you have a Cohere account and sufficient credit to avoid rate limits or token usage restrictions.
- For embeddings, swap out the page extracts for your own. You may need to split and convert document pages to images if you want to use image embeddings.
- For chat, you may want to structure the agent(s) in another way that makes sense for your environment, e.g. using MCP servers.

**Requirements**

- Cohere account for embeddings and LLM
- Qdrant for vector store
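For reference, the embedding call looks roughly like the sketch below. The payload follows Cohere's v2 embed endpoint for image inputs; double-check the field names against the current Cohere docs, as they may change.

```typescript
// Sketch of embedding a page image with Cohere's Embed v4, as the
// template does before storing vectors in Qdrant. The payload shape
// follows Cohere's v2 embed endpoint; verify against current docs.
async function embedPageImage(base64Png: string): Promise<number[]> {
  const res = await fetch("https://api.cohere.com/v2/embed", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.COHERE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "embed-v4.0",
      input_type: "image",
      embedding_types: ["float"],
      images: [`data:image/png;base64,${base64Png}`], // one page scan
    }),
  });
  if (!res.ok) throw new Error(`Cohere embed error: ${res.status}`);
  const data = await res.json();
  return data.embeddings.float[0]; // one vector per input image
}
```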
by Deniz
Structured Setup Guide: Narrative Chaining with N8N + AI

**1. Input Setup**

Use a Google Sheet as the control panel. Fields required:

- Video URL (starting clip, ends with .mp4)
- Number of clips to extend (e.g., 2 extra scenes)
- Aspect ratio (horizontal, vertical, etc.)
- Model (V3 or V3 Fast)
- Narrative theme (guidance for story flow)
- Special requests (scene-by-scene instructions)
- Status column (e.g., "For Production", "Done")

👉 Example scene inputs:

- Scene 1: Naruto walks out with ramen in his hands
- Scene 2: Joker joins with chips

**2. Workflow in N8N** (a code sketch of the full loop follows below)

Step 1: Fetch Input

- Get rows in sheet → fetch the next row where status = For Production.
- Clear sheet 2 → reset the sheet that stores generated scenes.
- Edit fields (initial values): Video URL = starting clip, Step = 1, Complete = total number of scenes requested.

Step 2: Looping Logic

- Looper node: runs until Step = Complete.
- Carries over the current video URL → feeds into the next generation.

Step 3: Analyze Current Clip

- Send the video URL to the File.AI Video Understanding API.
- Request: describe the last frame + audio + scene details.
- Output: detailed video analysis text.

Step 4: Generate Prompt

- An AI Agent creates the next scene prompt using: context from the video analysis, the narrative theme (from the sheet), scene instructions (from the sheet), aspect ratio, model preference, etc.
- 👉 Output = video prompt for the next scene

Step 5: Extract Last Frame

- Call the File.AI Extract Frame API with parameters: input video URL, Frame = last.
- Output = JPG image (last frame of the current clip).

Step 6: Generate New Scene

- Use Key.AI (V3 Fast) for economical video generation.
- The POST request includes: the prompt (from the AI Agent), aspect ratio + model, and the image URL (last frame) → ensures seamless chaining.
- Wait for generation to complete.
- 👉 Output = new clip URL (MP4)

Step 7: Store & Increment

- Log the new clip URL into Sheet 2.
- Increment Step by +1.
- Replace Video URL with the new clip.
- Loop back if Step < Complete.

**3. Output Section**

Once all clips are generated:

- Gather all scene URLs from Sheet 2.
- Use the File.AI Merge Videos API to stitch the clips together: the original clip + all generated scenes.
- Save the final MP4 output.
- Update the Sheet 1 row with the final video URL and Status = Done.

**4. Costs**

- Video analysis: ~$0.015 per 8s clip
- Frame extraction: ~0.002¢ (almost free)
- Clip merging: negligible (via ffmpeg backend)
- V3 Fast video generation (Key.AI): ~$0.30 per 8s clip
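To make the control flow concrete, here is a compact sketch of the whole loop. The `FileAi` and `KeyAi` interfaces are hypothetical stand-ins for the File.AI and Key.AI HTTP calls; only the chaining logic mirrors the workflow itself.

```typescript
// Hypothetical sketch of the narrative-chaining loop (Steps 2-7 above).
// FileAi and KeyAi are stand-ins for the real HTTP APIs, not their
// actual client libraries.
interface FileAi {
  describeVideo(url: string): Promise<string>;              // Step 3
  extractFrame(url: string, frame: "last"): Promise<string>; // Step 5: JPG URL
  mergeVideos(urls: string[]): Promise<string>;             // Output: final MP4
}
interface KeyAi {
  generateClip(req: { prompt: string; imageUrl: string }): Promise<string>; // Step 6
}

async function chainScenes(
  fileAi: FileAi,
  keyAi: KeyAi,
  startUrl: string,
  totalScenes: number,
  buildScenePrompt: (analysis: string, step: number) => string, // Step 4: AI Agent
): Promise<string> {
  let currentUrl = startUrl;
  const sceneUrls: string[] = [];
  for (let step = 1; step <= totalScenes; step++) { // Step 2: looper
    const analysis = await fileAi.describeVideo(currentUrl);
    const lastFrame = await fileAi.extractFrame(currentUrl, "last");
    currentUrl = await keyAi.generateClip({
      prompt: buildScenePrompt(analysis, step),
      imageUrl: lastFrame, // last frame seeds the next clip for seamless chaining
    });
    sceneUrls.push(currentUrl); // Step 7: log & increment
  }
  return fileAi.mergeVideos([startUrl, ...sceneUrls]); // stitch the final video
}
```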
by Wessel Bulte
**Description**

This workflow is a practical, "dirty" solution for real-world scenarios where frontline workers keep using Excel in their daily processes. Instead of forcing change, we take their spreadsheets as-is, clean and normalize the data, generate embeddings, and store everything in Supabase. The benefit: frontline staff continue with their familiar tools, while data analysts gain clean, structured, and vectorized data ready for analysis or RAG-style AI applications.

**How it works**

- **Frontline workers continue with Excel** – no disruption to their daily routines.
- **Upload & trigger** – the workflow runs when a new Excel sheet is ready.
- **Read Excel rows** – data is pulled from the specified workbook and worksheet.
- **Clean & normalize** – HTML is stripped, Excel dates are fixed, and text fields are standardized (see the sketch below).
- **Batch & switch** – rows are split and routed into Question/Answer processing paths.
- **Generate embeddings** – cleaned Questions and Answers are converted into vectors via OpenAI.
- **Merge enriched records** – original business data is combined with embeddings.
- **Write into Supabase** – data lands in a structured table (excel_records) with vector and FTS indexes.

**Why it's "dirty but useful"**

- **No disruption** – frontline workers don't need to change how they work.
- **Analyst-ready data** – Supabase holds clean, queryable data for dashboards, reporting, or AI pipelines.
- **Bridge between old and new** – Excel remains the input, but the backend becomes modern and scalable.
- **Incremental modernization** – paves the way for future workflow upgrades without blocking current work.

**Outcome**

Frontline workers keep their Excel-based workflows, while data can immediately be structured, searchable, and vectorized in Supabase — enabling AI-powered search, reporting, and retrieval-augmented generation.

**Required setup**

- Supabase account: create a project and enable the pgvector extension.
- OpenAI API key: required for generating embeddings (text-embedding-3-small).
- Microsoft Excel credentials: needed to connect to your workbook and worksheet.

**Need help?**

🔗 LinkedIn – Wessel Bulte
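As a taste of the clean & normalize step, the sketch below strips HTML and converts Excel serial dates to ISO strings. The column names (`Question`, `Answer`, `Date`) are assumptions; map them to your own sheet.

```typescript
// Sketch of the "clean & normalize" step. The 25569-day offset between
// Excel's date serial system and the Unix epoch is the standard
// conversion; column names below are assumed for illustration.
function stripHtml(value: string): string {
  return value.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
}

function excelSerialToIso(serial: number): string {
  const ms = Math.round((serial - 25569) * 86400 * 1000);
  return new Date(ms).toISOString();
}

function cleanRow(row: Record<string, unknown>) {
  return {
    question: stripHtml(String(row["Question"] ?? "")), // assumed column name
    answer: stripHtml(String(row["Answer"] ?? "")),     // assumed column name
    createdAt:
      typeof row["Date"] === "number"
        ? excelSerialToIso(row["Date"] as number) // fix Excel serial dates
        : String(row["Date"] ?? ""),
  };
}
```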
by Roshan Ramani
Product Video Creator with Nano Banana & Veo 3.1 via Telegram

**Who's it for**

- E-commerce sellers needing quick product videos
- Social media marketers creating content at scale
- Small business owners without video editing skills
- Product photographers enhancing their offerings
- Anyone selling on Instagram, TikTok, or mobile-first platforms

**What it does**

Transform basic product photos into professional marketing videos in under 2 minutes:

1. Send a product photo to your Telegram bot
2. Nano Banana analyzes and enhances your image with studio-quality lighting
3. Veo 3.1 generates an 8-second vertical video with motion and audio
4. Receive your scroll-stopping marketing video automatically

Perfect for creating engaging vertical content without expensive tools or editing expertise.

**How it works**

1. Input → user sends a product photo via Telegram with an optional caption
2. AI Analysis → Nano Banana analyzes the product and generates a detailed enhancement prompt
3. Image Enhancement → Nano Banana creates a commercial-grade photo (9:16, studio lighting)
4. Video Generation → Veo 3.1 creates an 8-second 1080p video with motion and audio
5. Delivery → auto-polls status every 30s (sketched below), then delivers the final video to Telegram

**Requirements**

- Google Cloud Platform: **Vertex AI API** enabled for Veo 3.1, **Generative Language API** enabled for Nano Banana, OAuth2 credentials (get them from the Google Cloud Console)
- Telegram: bot token from @BotFather
- n8n: self-hosted or cloud instance

**Setup**

1. Import the workflow JSON into n8n
2. Add credentials: Telegram API (bot token), Google OAuth2 API (client ID and secret), Google PaLM API (API key)
3. Update your Project ID in both Veo 3.1 nodes
4. Activate the workflow and test with a product photo

**How to customize**

- Aspect ratio: choose 9:16 (vertical) or 16:9 (horizontal) in the "Generate Enhanced Image" and "Initiate veo 3.1" nodes
- Duration: set 2 to 8 seconds by adjusting durationSeconds in "Initiate veo 3.1 Video Generation"
- Quality: select 720p or 1080p by changing resolution in "Initiate veo 3.1 Video Generation"
- Audio: enable or disable background music by toggling generateAudio in "Initiate veo 3.1 Video Generation"
- Enhancement style: match your brand aesthetic by editing the prompt in the "AI Design Analysis" node
- Polling time: adjust the retry interval by changing the wait time in the "Processing Delay (30s)" node

**Key Features**

- 🔐 Direct Google APIs – no third-party services. Uses Nano Banana and Veo 3.1 directly via Google Cloud for maximum reliability and privacy
- ⚡ Fully Automated – send a photo, receive a video. Zero manual work required
- 🎨 Studio Quality – Nano Banana delivers professional lighting, composition, and AI-powered color grading
- 📱 Mobile-First – default 9:16 vertical format optimized for Instagram Reels, TikTok, and Stories
- 🔄 Smart Retry Logic – automatically polls Veo 3.1 status every 30 seconds until video generation completes
- 🎵 Audio Included – Veo 3.1 generates background music automatically (can be disabled)
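The retry logic amounts to a simple poll-and-wait loop, sketched below. The `fetchOperation` helper is a stand-in for the Vertex AI operation-status request; the exact endpoint depends on your project and region.

```typescript
// Sketch of the workflow's retry logic: poll the long-running Veo 3.1
// operation every 30 seconds until it reports done. `operationName` is
// whatever the "Initiate veo 3.1" call returned; `fetchOperation` is a
// hypothetical stand-in for the status HTTP request.
async function pollUntilDone(
  fetchOperation: (name: string) => Promise<{ done: boolean; response?: unknown }>,
  operationName: string,
  intervalMs = 30_000, // matches the "Processing Delay (30s)" node
): Promise<unknown> {
  for (;;) {
    const op = await fetchOperation(operationName);
    if (op.done) return op.response; // video is ready to send to Telegram
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```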
by Aryan Shinde
Effortlessly generate, review, and publish SEO-optimized blog posts to WordPress using AI and automation.

**How It Works**

1. AI Topic Generation: Gemini suggests trending blog topics matching your agency's services.
2. Content Research: Tavily fetches recent relevant articles for each generated topic.
3. Human Review: choose the preferred article for publishing through a Telegram notification.
4. AI Rewriting: Gemini rewrites the selected article into a polished, SEO-friendly post.
5. Image Generation & Publishing: the workflow creates a featured image with Gemini or OpenAI, then publishes the post (with dynamic categories and images) to WordPress (a sketch of the publish call follows below).
6. Audit Trail: every published post is logged to Google Sheets, and final details are sent to Telegram.

**Set Up Steps**

Estimated setup time: 15–30 minutes (excluding API approval/wait times).

1. Connect your WordPress, Gemini (Google), Tavily, Google Sheets, and Telegram accounts.
2. Configure your preferred posting schedule in the "Schedule Trigger."
3. Adjust prompts or messages to fit your agency's niche or editorial voice if needed.

Note: Detailed customizations and advanced configuration tips are included in the sticky notes within the workflow.
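The final publish step boils down to a call against the standard WordPress REST API, sketched below with a placeholder site URL and credentials.

```typescript
// Sketch of publishing via the standard wp/v2 posts endpoint. Site URL
// and the Basic-auth credentials are placeholders; the n8n WordPress
// node performs the equivalent call for you.
async function publishPost(title: string, html: string, categoryIds: number[]) {
  const res = await fetch("https://your-site.example/wp-json/wp/v2/posts", {
    method: "POST",
    headers: {
      Authorization:
        "Basic " + Buffer.from("user:app-password").toString("base64"),
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      title,
      content: html,
      status: "draft",         // switch to "publish" once you trust the output
      categories: categoryIds, // the workflow assigns these dynamically
    }),
  });
  if (!res.ok) throw new Error(`WordPress error: ${res.status}`);
  return res.json();
}
```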
by PDF Vector
**Overview**

Healthcare organizations face significant challenges in digitizing and processing medical records while maintaining strict HIPAA compliance. This workflow provides a secure, automated solution for extracting clinical data from various medical documents including discharge summaries, lab reports, clinical notes, prescription records, and scanned medical images (JPG, PNG).

**What You Can Do**

- Extract clinical data from medical documents while maintaining HIPAA compliance
- Process handwritten notes and scanned medical images with OCR
- Automatically identify and protect PHI (Protected Health Information)
- Generate structured data from various medical document formats
- Maintain audit trails for regulatory compliance

**Who It's For**

Healthcare providers, medical billing companies, clinical research organizations, health information exchanges, and medical practice administrators who need to digitize and extract data from medical records while maintaining HIPAA compliance.

**The Problem It Solves**

Manual medical record processing is time-consuming, error-prone, and creates compliance risks. Healthcare organizations struggle to extract structured data from handwritten notes, scanned documents, and various medical forms while protecting PHI. This template automates the extraction process while maintaining the highest security standards for Protected Health Information.

**Setup Instructions**

1. Configure Google Drive credentials with proper medical record access controls
2. Install the PDF Vector community node from the n8n marketplace
3. Configure PDF Vector API credentials with HIPAA-compliant settings
4. Set up secure database storage with encryption at rest
5. Define PHI handling rules and extraction parameters
6. Configure audit logging for regulatory compliance
7. Set up integration with your Electronic Health Record (EHR) system

**Key Features**

- Secure retrieval of medical documents from Google Drive
- HIPAA-compliant processing with automatic PHI masking
- OCR support for handwritten notes and scanned medical images
- Automatic extraction of diagnoses with ICD-10 code validation
- Medication list processing with dosage and frequency information
- Lab results extraction with reference ranges and flagging
- Vital signs capture and normalization
- Complete audit trail for regulatory compliance
- Integration-ready format for EHR systems

**Customization Options**

- Define institution-specific medical terminology and abbreviations
- Configure automated alerts for critical lab values or abnormal results
- Set up custom extraction fields for specialized medical forms
- Implement medication interaction warnings and contraindication checks
- Add support for multiple languages and international medical coding systems
- Configure integration with specific EHR platforms (Epic, Cerner, etc.)
- Set up automated quality assurance checks and validation rules

**Implementation Details**

The workflow uses advanced AI with medical domain knowledge to understand clinical terminology and extract relevant information while automatically identifying and protecting PHI. It processes various document formats including handwritten prescriptions, lab reports, discharge summaries, and clinical notes. The system maintains strict security protocols with encryption at rest and in transit, ensuring full HIPAA compliance throughout the processing pipeline. A conceptual sketch of PHI masking follows below.

Note: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
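To illustrate the PHI-masking concept only (this is not the PDF Vector node's actual mechanism, and pattern rules alone do not make a pipeline HIPAA-compliant), a simple rule-based pass might look like:

```typescript
// Illustrative PHI-masking pass, NOT the PDF Vector node's actual
// mechanism. Real HIPAA compliance requires far more than regexes;
// treat this as a sketch of the masking concept only.
const PHI_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[SSN]"],               // US Social Security numbers
  [/\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b/g, "[PHONE]"], // phone numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, "[EMAIL]"],       // email addresses
  [/\bMRN[:\s]*\d+\b/gi, "[MRN]"],                   // medical record numbers
];

function maskPhi(text: string): string {
  // Apply each redaction rule in turn to the extracted document text.
  return PHI_PATTERNS.reduce((t, [re, label]) => t.replace(re, label), text);
}
```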