by Pawan
## Who is this for?

This workflow is designed for growth agencies, SaaS founders, and sales teams who want to move beyond static lead forms. It is ideal for those who need a "living" system that not only captures leads but also provides immediate value through AI-generated strategies and evolves based on performance data.

## How it works

The template operates in two distinct phases:

1. **The Lead Engine**: A user interacts with a chat bubble. A Gemini-powered agent conversationally qualifies the lead (Name, Industry, Budget). A custom JavaScript node ensures data integrity before an "AI Council" node generates three industry-specific growth tactics. High-value leads are then routed to Slack and logged in Google Sheets.
2. **The Self-Optimization Loop**: A scheduled trigger audits lead data in Google Sheets daily. It uses Gemini to identify friction points and sends a "System Audit" report to Slack, suggesting prompt improvements to increase conversion rates.

## How to set up

1. **Credentials**: Connect your Google Gemini (API key), Google Sheets (OAuth2), and Slack (OAuth2) accounts.
2. **Google Sheets**: Create a spreadsheet with the headers Lead, Suggestion, and Status. Copy the spreadsheet ID into the Google Sheets nodes.
3. **Slack**: Invite your n8n bot to a specific channel (e.g., `/invite @n8n`) and select that channel in the Slack nodes.
4. **Memory**: Ensure the Window Buffer Memory node is connected to the AI Agent to maintain conversation state.

## Requirements

- Google AI (Gemini) API key
- Google Sheets for data logging
- Slack for real-time notifications
- n8n version 1.0+ (supporting AI Agent nodes)

## How to customize

- **Scoring Logic**: Adjust the Scoring Logic code node to change what constitutes a "Hot" lead.
- **AI Strategist**: Modify the prompt in the AI Council: Strategist node to provide different types of value (e.g., free audits instead of growth tactics).
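To make the Scoring Logic customization concrete, here is a minimal sketch of what such a Code-node function might look like. The field names, weights, and the 70-point "Hot" threshold are illustrative assumptions, not the template's actual values:

```javascript
// Hypothetical "Scoring Logic" Code-node body. Assumes the agent captured
// { name, industry, budget }; weights and threshold are illustrative.
function scoreLead(lead) {
  let score = 0;
  if (lead.budget >= 5000) score += 50;        // budget weight
  else if (lead.budget >= 1000) score += 25;
  if (['saas', 'ecommerce'].includes((lead.industry || '').toLowerCase())) score += 30;
  if (lead.name && lead.name.trim().length > 1) score += 20; // basic data-integrity check
  return { ...lead, score, status: score >= 70 ? 'Hot' : 'Warm' };
}

// A qualified lead crossing the "Hot" threshold
const result = scoreLead({ name: 'Ada', industry: 'SaaS', budget: 6000 });
console.log(result.status); // "Hot"
```

Raising or lowering the threshold (or the individual weights) is all it takes to change which leads get routed to Slack.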
by DataForSEO
This weekly workflow automatically discovers new high-volume, ranked keywords for your domain on Google without manual SERP monitoring. On each run, the workflow fetches the latest ranking and search volume data using the DataForSEO Labs API and stores a fresh snapshot in Google Sheets. It then compares this data with the previous run to identify any new keywords your domain started ranking for, focusing on queries with a search volume above 1,000.

All newly ranked keywords that match this rule are added to a dedicated Google Sheet, along with their ranking position and search volume, creating a growing historical log you can use to analyze gains over time. Once new terms are identified, the workflow creates tasks in Asana to help your team act on them quickly, and sends you a Slack summary highlighting the latest changes.

## Who's it for

SEO professionals, marketers, and content teams who want an automated way to discover newly ranked, high-volume Google keywords and turn organic ranking gains into actionable content or optimization tasks.

## What it does

This workflow automatically detects when your domain starts ranking for new high-volume keywords on Google, records them in Google Sheets, creates related tasks in Asana, and sends a weekly summary via Slack.

## How it works

1. Runs on a predefined schedule (default: once a week).
2. Reads your keywords and target domains from Google Sheets.
3. Extracts the latest Google results and keyword metrics via the DataForSEO API.
4. Compares current data with the previous snapshot.
5. Logs newly ranked keywords to a dedicated Google Sheet.
6. Creates follow-up tasks in Asana for the content team.
7. Sends a Slack summary with key changes.

## Requirements

- DataForSEO account and API credentials
- Google Sheets spreadsheet with your keywords, following the required column structure (as in the example)
- Google Sheets spreadsheet with your target domains, following the required column structure (as in the example)
- Asana account
- Slack account

## Customization

You can easily tailor this workflow to your needs by adjusting the run schedule, changing the minimum search volume threshold, exporting results to other tools (like Looker Studio or BigQuery), and customizing the content of the Asana task or Slack message to match your team's workflow.
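The snapshot-comparison step above can be sketched as a simple diff: keep only keywords present in the current run, absent from the previous one, and above the volume floor. The row shape here is an assumption based on the columns described (keyword, position, volume):

```javascript
// Illustrative sketch of the "compare with previous snapshot" step.
// Rows would come from the two Google Sheets snapshots in practice.
const MIN_VOLUME = 1000;

function findNewKeywords(previous, current) {
  const seen = new Set(previous.map(r => r.keyword.toLowerCase()));
  return current.filter(r => !seen.has(r.keyword.toLowerCase()) && r.volume > MIN_VOLUME);
}

const prev = [{ keyword: 'n8n tutorial', volume: 2400, position: 8 }];
const curr = [
  { keyword: 'n8n tutorial', volume: 2500, position: 7 },        // already ranked
  { keyword: 'workflow automation', volume: 5400, position: 12 }, // new + high volume
  { keyword: 'niche term', volume: 90, position: 3 },             // new but low volume
];
const fresh = findNewKeywords(prev, curr); // only "workflow automation" qualifies
```

Changing `MIN_VOLUME` is the "minimum search volume threshold" customization mentioned above.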
by Intuz
This n8n template from Intuz provides a complete and automated solution for identifying high-intent leads from LinkedIn job postings and automatically generating personalized outreach emails.

Disclaimer: Community nodes are used in this workflow.

## Who's this workflow for?

- B2B sales teams & SDRs
- Recruitment agencies & tech recruiters
- Startup founders
- Growth marketing teams

## How it works

1. **Scrape hiring signals**: The workflow starts by using an Apify scraper to find companies actively hiring for specific roles on LinkedIn (e.g., "ML Engineer").
2. **Filter & qualify companies**: It automatically filters the results based on your criteria (e.g., company size, industry) to create a high-quality list of target accounts.
3. **Find decision-makers**: For each qualified company, it uses Apollo.io to find key decision-makers (VPs, Directors, etc.) and enrich their profiles with verified email addresses via your Apollo API key.
4. **Build a lead list**: All the enriched lead data—contact name, title, email, company info—is systematically added to a Google Sheet.
5. **Generate AI-powered emails**: The workflow then feeds each lead's data to a Google Gemini AI model, which drafts a unique, personalized cold email that references the specific job the company is hiring for.
6. **Complete the outreach list**: Finally, the AI-generated subject line and email body are saved back into the Google Sheet, leaving you with a fully prepared, hyper-targeted outreach campaign.

## Setup instructions

1. **Apify configuration**: Connect your Apify account in the Run the LinkedIn Job Scraper node. You'll need an Apify scraper (we have used a specific one). In the Custom Body field, paste the URL of your target LinkedIn Jobs search query.
2. **Data enrichment**: Connect the API credentials of data providers like Clay, Hunter, or Apollo using HTTP Header Auth in the Get Targeted Personnel and Email Finder nodes.
3. **Google Gemini AI**: Connect your Google Gemini (or PaLM) API account in the Google Gemini Chat Model node.
4. **Google Sheets setup**: Connect your Google Sheets account. Create a spreadsheet and update the Document ID and Sheet Name in the three Google Sheets nodes to match your own.
5. **Activate workflow**: Click "Execute workflow" to run the entire lead generation and email-writing process on demand.

Connect with us:
- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz

For custom workflow automation, click here: Get Started
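To illustrate step 5 above, here is a hypothetical sketch of how each lead row could be turned into a Gemini prompt for a personalized cold email. The field names (`contactName`, `title`, `company`, `jobTitle`) are assumptions based on the columns described, not the template's exact expressions:

```javascript
// Hypothetical prompt builder for the Gemini email-drafting step.
// Field names mirror the lead-list columns described above.
function buildEmailPrompt(lead) {
  return [
    'Draft a short, personalized cold email.',
    `Recipient: ${lead.contactName}, ${lead.title} at ${lead.company}.`,
    `Hiring signal: the company is hiring for "${lead.jobTitle}".`,
    'Return JSON with keys "subject" and "body".',
  ].join('\n');
}

const prompt = buildEmailPrompt({
  contactName: 'Jane Doe',
  title: 'VP Engineering',
  company: 'Acme',
  jobTitle: 'ML Engineer',
});
```

Asking the model for JSON keys `subject` and `body` makes it straightforward to write both values back into the Google Sheet.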
by Vuong Nguyen
## How it works

1. Reads product image links from a Google Sheet.
2. Analyzes each image, generates an AI prompt, and combines the product with a human model image.
3. Creates final AI advertising photos and:
   - Saves images to Google Drive.
   - Saves prompts + result links to a separate "output" sheet.

## One-time setup for your own account

Before using the workflow, you must point it to your Google Sheet, Drive folder, and model image (no hard-coded IDs are assumed).

### 1. Prepare your Google assets

**Google Sheet**
- Create or copy a spreadsheet with two sheets, for example:
  - Sheet A (input), e.g. "Product Images" — required column: Image-URL
  - Sheet B (output), e.g. "Output Images" — required columns: Image-URL, Prompt, Output

**Google Drive**
- Create a folder to store generated AI images (e.g. "AI Product Photos").
- Upload one model image (e.g. Model.png) that will be used as the human model in all generated photos.

**n8n credentials** — in n8n, make sure you have:
- A Google Sheets credential (your Google account)
- A Google Drive credential
- A Google Gemini / PaLM API credential

### 2. Update the workflow nodes

Open the workflow "Automated generation of AI advertising photos for product marketing" and update these nodes:

- **Read Image URLs** (Google Sheets): Set Document/Spreadsheet to your product spreadsheet and Sheet to your input sheet (e.g. "Product Images"). Make sure that sheet has a column named exactly Image-URL.
- **Insert Image URL in Table** (Google Sheets): Keep the operation as appendOrUpdate. Set Document/Spreadsheet to the same spreadsheet as above (or another one if you prefer) and Sheet to your output sheet (e.g. "Output Images"). Make sure the output sheet has the columns Image-URL, Prompt, and Output, with Image-URL as the matching column (this keeps the update-by-product behavior).
- **Download model image** (Google Drive): Choose your model image file from Google Drive. This is the image of the human model that will appear in all generated photos.
- **Upload to Drive** (Google Drive): Set Folder to your AI output folder (e.g. "AI Product Photos") where all generated images will be saved.
- **Credentials in each Google node**: For each Google Sheets/Drive node, select your Google account in the "Credentials" section if it is not already set.

After these 4–5 fields are updated, the workflow is fully adapted to your environment.

## Daily usage (for business / marketing users)

1. Open your Google Sheet and go to the input sheet (e.g. "Product Images").
2. For each product, add a row with Image-URL → a valid link to the product image. Do not edit the output sheet manually; it will be filled by the workflow.
3. To generate images: log into n8n, open the workflow "Automated generation of AI advertising photos for product marketing", click "Test workflow" or "Execute workflow", and wait until it finishes.

## Viewing results

- **In Google Drive**: Open your chosen output folder (e.g. "AI Product Photos"). All generated advertising photos are stored there.
- **In Google Sheets (output sheet)**:
  - Image-URL: product image link used as input.
  - Prompt: the AI prompt used to generate the image.
  - Output: a web link to the generated image on Google Drive.

Re-running the workflow with the same Image-URL will update the existing row instead of creating duplicates.

## Troubleshooting

- **No image generated for a row**: Check that Image-URL opens correctly in a browser.
- **No data written to the output sheet**: Confirm the workflow nodes point to the correct spreadsheet and sheet names, and check that the columns Image-URL, Prompt, and Output exist and are spelled exactly the same.
- **General errors**: Open the last execution in n8n and review the error message. Share that message with your technical team for support.
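The appendOrUpdate behavior described above (re-running with the same Image-URL updates the existing row instead of duplicating it) can be sketched as an upsert keyed on the Image-URL column. This is an illustrative model of what the Google Sheets node does, not its internal code:

```javascript
// Sketch of appendOrUpdate semantics with Image-URL as the matching column.
function upsertRow(rows, newRow) {
  const i = rows.findIndex(r => r['Image-URL'] === newRow['Image-URL']);
  if (i === -1) rows.push(newRow);          // append: new product
  else rows[i] = { ...rows[i], ...newRow }; // update: same product re-run
  return rows;
}

let sheet = [{ 'Image-URL': 'https://example.com/a.png', Prompt: 'v1', Output: 'link1' }];
sheet = upsertRow(sheet, { 'Image-URL': 'https://example.com/a.png', Prompt: 'v2', Output: 'link2' });
// still one row, now holding the v2 prompt and link
```

This is why the Image-URL column name must match exactly: it is the key the node uses to decide between appending and updating.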
by Fahmi Fahreza
# Weekly SEO Watchlist Audit to Google Sheets (Gemini + Decodo)

Sign up for Decodo HERE for a discount.

Automatically fetches page content, generates a compact SEO audit (score, issues, fixes), and writes both a per-URL summary and a normalized "All Issues" table to Google Sheets—great for weekly monitoring and prioritization.

## Who's it for?

Content/SEO teams that want lightweight, scheduled audits of key pages with actionable next steps and spreadsheet reporting.

## How it works

1. A weekly trigger loads the Google Sheet of URLs.
2. Split in Batches processes each URL.
3. Decodo fetches page content (markdown + status).
4. Gemini produces a strict JSON audit via the AI Chain + Output Parser.
5. Code nodes flatten the data for two tabs.
6. Google Sheets nodes append Summary and All Issues rows.
7. Split in Batches continues to the next URL.

## How to set up

1. Add credentials for Google Sheets, Decodo, and Gemini.
2. Set sheet_id and sheet GIDs in the Set node.
3. Ensure the input sheet has a URL column.
4. Configure your Google Sheets tabs with headers matching each field being appended (e.g., URL, Decodo Score, Priority, etc.).
5. Adjust the schedule as needed.
6. Activate the workflow.
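Step 5 above ("Code nodes flatten data for two tabs") amounts to turning one audit object into one Summary row plus one All Issues row per issue. A minimal sketch, assuming an audit shape consistent with the score/issues/fixes output described (the exact schema is defined by the template's output parser):

```javascript
// Illustrative flattening of a strict-JSON audit into two row sets:
// a per-URL Summary row and normalized All Issues rows.
function flattenAudit(url, audit) {
  const summary = { URL: url, Score: audit.score, Issues: audit.issues.length };
  const issueRows = audit.issues.map(i => ({
    URL: url,
    Issue: i.issue,
    Fix: i.fix,
    Priority: i.priority,
  }));
  return { summary, issueRows };
}

const { summary, issueRows } = flattenAudit('https://example.com', {
  score: 72,
  issues: [{ issue: 'Missing meta description', fix: 'Add one', priority: 'High' }],
});
```

Keeping issues normalized (one row each) is what makes the All Issues tab easy to sort and filter by priority.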
by Club de Inteligencia Artificial Politécnico CIAP
# Telegram Appointment Scheduling Bot with n8n

## 📃 Description

Tired of managing appointments manually? This template transforms your Telegram account into a smart virtual assistant that handles the entire scheduling process for you, 24/7.

This workflow allows you to deploy a fully functional Telegram bot that not only schedules appointments but also checks real-time availability in your Google Calendar, logs a history in Google Sheets, and allows your clients to cancel or view their upcoming appointments. It's the perfect solution for professionals, small businesses, or anyone looking to automate their booking system professionally and effortlessly.

## ✨ Key Features

- **Complete appointment management**: Allows users to schedule, cancel, and list their future appointments.
- **Conflict prevention**: Integrates with Google Calendar to check availability before confirming a booking, eliminating the risk of double-booking.
- **Automatic logging**: Every confirmed appointment is saved to a row in Google Sheets, creating a perfect database for tracking and analysis.
- **Smart interaction**: The bot handles unrecognized commands and guides the user, ensuring a smooth experience.
- **Easy to adapt**: Connect your own accounts, customize messages, and tailor it to your business needs in minutes.

## 🚀 Setup

Follow these steps to deploy your own instance of this bot:

1. **Prerequisites**: An n8n instance (Cloud or self-hosted), a Telegram account, and a Google account.
2. **Telegram bot**: Talk to @BotFather on Telegram, create a new bot using /newbot, give it a name and a username, then copy and save the API token it provides.
3. **Google Cloud & APIs**: Go to the Google Cloud Console, create a new project, and enable the Google Calendar API and Google Sheets API. Create OAuth 2.0 Client ID credentials, making sure to add your n8n instance's OAuth redirect URL. Save the Client ID and Client Secret.
4. **Google Sheets**: Create a new spreadsheet and define the column headers in the first row, for example: id, Client Name, Date and Time, ISO Date.
5. **n8n**: Import the workflow JSON file into your n8n instance and set up the credentials — for Telegram, create a new credential and paste your bot's token; for Google Calendar & Google Sheets (OAuth2), create a new credential and paste the Client ID and Client Secret from the Google Cloud Console. Review the Google Calendar and Google Sheets nodes to select your correct calendar and spreadsheet. Activate the workflow!

## 💬 Usage

Once the bot is running, you can interact with it using the following commands in Telegram:

- **To start the bot**: /start
- **To schedule a new appointment**: agendar YYYY-MM-DD HH:MM Your Full Name
- **To cancel an existing appointment**: cancelar YYYY-MM-DD HH:MM Your Full Name
- **To view your future appointments**: mis citas Your Full Name

## 👥 Authors

Jaren Pazmiño, President of the Polytechnic Artificial Intelligence Club (CIAP)
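The commands above follow a fixed `command YYYY-MM-DD HH:MM Full Name` shape, so parsing them is a simple pattern match. A minimal sketch for the `agendar` command (the actual workflow does this inside its n8n nodes; this is illustrative):

```javascript
// Parse "agendar YYYY-MM-DD HH:MM Full Name" into its parts.
// Returns null for unrecognized input, which the bot answers with guidance.
function parseAgendar(text) {
  const m = text.match(/^agendar (\d{4}-\d{2}-\d{2}) (\d{2}:\d{2}) (.+)$/);
  if (!m) return null;
  return { date: m[1], time: m[2], name: m[3] };
}

const booking = parseAgendar('agendar 2025-03-10 14:30 Jaren Pazmino');
// → { date: '2025-03-10', time: '14:30', name: 'Jaren Pazmino' }
```

The extracted date and time are what the workflow checks against Google Calendar before confirming the booking.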
by Danny
This workflow automates lead ingestion from Google Sheets and Telegram, leveraging Gemini AI and Lusha for intelligent matching and deep data enrichment. By normalizing incoming data into a standard structure, it uses custom fuzzy logic to identify existing HubSpot records—preventing duplicates and ensuring your CRM stays clean with validated contact and company details.

## Key Features

- **Agnostic intake**: Seamlessly processes leads from structured Google Sheets or raw Telegram messages parsed by Gemini AI.
- **Intelligent matching**: A custom JS engine performs two-tier matching (hard & fuzzy) to save Lusha credits and preserve CRM data integrity.
- **Deep enrichment**: Automatically triggers the Lusha API to find missing emails and update firmographic data like revenue and industry.
- **Automated sync**: Closes the loop by notifying the team on Telegram and updating the spreadsheet status once a lead is processed.

## Setup Instructions

1. Connect your HubSpot, Lusha, Gemini, Google Sheets, and Telegram credentials.
2. Input your spreadsheet ID in the 'Trigger' and 'Acknowledge' nodes.
3. Adjust the similarity threshold in the 'Switch Logic' node (default: 80) based on your data needs.
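Two-tier matching means trying an exact ("hard") key first and falling back to a similarity score only when that fails. The sketch below uses a simple token-overlap metric as an illustrative stand-in for the template's custom JS engine; the 0–100 scale matches the threshold (default 80) mentioned above:

```javascript
// Token-overlap similarity on a 0–100 scale (illustrative metric only).
function similarity(a, b) {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  const overlap = [...ta].filter(t => tb.has(t)).length;
  return Math.round((200 * overlap) / (ta.size + tb.size));
}

// Tier 1: hard match on email. Tier 2: fuzzy match on company name.
function matchLead(lead, records, threshold = 80) {
  const hard = records.find(r => r.email && r.email === lead.email);
  if (hard) return { match: hard, tier: 'hard' };
  const fuzzy = records.find(r => similarity(r.company, lead.company) >= threshold);
  return fuzzy ? { match: fuzzy, tier: 'fuzzy' } : { match: null, tier: 'none' };
}
```

Hard matches cost nothing extra, and only 'none' results need a Lusha enrichment call — which is how this pattern saves credits.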
by Growth AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Website sitemap generator and visual tree creator

## Who's it for

Web developers, SEO specialists, UX designers, and digital marketers who need to analyze website structure, create visual sitemaps, or audit site architecture for optimization purposes.

## What it does

This workflow automatically generates a comprehensive sitemap from any website URL and creates an organized hierarchical structure in Google Sheets. It follows the website's sitemap to discover all pages, then organizes them by navigation levels (Level 1, Level 2, etc.) with proper parent-child relationships. The output can be further processed to create visual tree diagrams and mind maps.

## How it works

The workflow follows a five-step automation process:

1. **URL input**: Accepts a website URL via chat interface.
2. **Site crawling**: Uses Firecrawl to discover all pages, following the website's sitemap only.
3. **Success validation**: Checks if crawling was successful (some sites block external crawlers).
4. **Hierarchical organization**: Processes URLs into a structured tree with proper level relationships.
5. **Google Sheets export**: Creates a formatted spreadsheet with the complete site architecture.

The system respects robots.txt and follows only sitemap-declared pages to ensure ethical crawling.
## Requirements

- Firecrawl API key (for website crawling and sitemap discovery)
- Google Sheets access
- Google Drive access (for template duplication)

## How to set up

### Step 1: Prepare your template (recommended)

It's recommended to create your own copy of the base template:

1. Access the base Google Sheets template.
2. Make a copy for your personal use.
3. Update the workflow's "Copy template" node with your template's file ID (replace the default ID: 12lV4HwgudgzPPGXKNesIEExbFg09Tuu9gyC_jSS1HjI).

This ensures you have control over the template formatting and can customize it as needed.

### Step 2: Configure API credentials

Set up the following credentials in n8n:

- **Firecrawl API**: for crawling websites and discovering sitemaps
- **Google Sheets OAuth2**: for creating and updating spreadsheets
- **Google Drive OAuth2**: for duplicating the template file

### Step 3: Configure Firecrawl settings (optional)

The workflow uses optimized Firecrawl settings:

- `ignoreSitemap: false` — respects the website's sitemap
- `sitemapOnly: true` — only crawls URLs listed in sitemap files

These settings ensure ethical crawling and faster processing.

### Step 4: Access the workflow

The workflow uses a chat trigger interface—no manual configuration is needed. Simply provide the website URL you want to analyze when prompted.

## How to use the workflow

### Basic usage

1. **Start the chat**: Access the workflow via the chat interface.
2. **Provide the URL**: Enter the website URL you want to analyze (e.g., "https://example.com").
3. **Wait for processing**: The system will crawl, organize, and export the data.
4. **Receive your results**: Get an automatic, direct, clickable link to your generated Google Sheets—no need to search for the file.

### Error handling

- **Invalid URLs**: If the provided URL is invalid or the website blocks crawling, you'll receive an immediate error message.
- **Graceful failure**: The workflow stops without creating unnecessary files when errors occur.
- **Common causes**: Incorrect URL format, robots.txt restrictions, or site security settings.

### File organization

- **Automatic naming**: Generated files follow the pattern "[Website URL] - n8n - Arborescence".
- **Google Drive storage**: Files are automatically organized in your Google Drive.
- **Instant access**: A direct link is provided immediately upon completion.

## Advanced processing for visual diagrams

### Step 1: Copy sitemap data

Once your Google Sheets file is ready, copy all the hierarchical data from the generated spreadsheet and prepare it for AI processing.

### Step 2: Generate an ASCII tree structure

Use any AI model with this prompt:

> Create a hierarchical tree structure from the following website sitemap data. Return ONLY the tree structure using ASCII tree formatting with ├── and └── characters. Do not include any explanations, comments, or additional text—just the pure tree structure. The tree should start with the root domain and show all pages organized by their hierarchical levels. Use proper indentation to show parent-child relationships. Here is the sitemap data: [PASTE THE SITEMAP DATA HERE]
>
> Requirements: Use ASCII tree characters (├── └── │). Show clear hierarchical relationships. Include all pages from the sitemap. Return ONLY the tree structure, no other text. Start with the root domain as the top level.

### Step 3: Create a visual mind map

Visit the Whimsical Diagrams GPT, request a mind map creation using your ASCII tree structure, and get a professional visual representation of your website architecture.

## Results interpretation

### Google Sheets output structure

The generated spreadsheet contains:

- **Niv 0 to Niv 5**: Hierarchical levels (0 = homepage, 1–5 = navigation depth)
- **URL column**: Complete URLs for reference
- **Hyperlinked structure**: Clickable links organized by hierarchy
- **Multi-domain support**: Handles subdomains and different domain structures

### Data organization features

- **Automatic sorting**: Pages organized by navigation depth and alphabetical order
- **Parent-child relationships**: Clear hierarchical structure maintained
- **Domain separation**: Main domains and subdomains processed separately
- **Clean formatting**: URLs decoded and formatted for readability

## Workflow limitations

- **Sitemap dependency**: Only discovers pages listed in the website's sitemap
- **Crawling restrictions**: Some websites may block external crawlers
- **Level depth**: Limited to 5 hierarchical levels for clarity
- **Rate limits**: Respects Firecrawl API limitations
- **Template dependency**: Requires access to the base template for duplication

## Use cases

- **SEO audits**: Analyze site structure for optimization opportunities
- **UX research**: Understand navigation patterns and user paths
- **Content strategy**: Identify content gaps and organizational issues
- **Site migrations**: Document existing structure before redesigns
- **Competitive analysis**: Study competitor site architectures
- **Client presentations**: Create visual site maps for stakeholder reviews
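The hierarchical organization step (Niv 0 = homepage, depth capped at 5, sorted by level then alphabetically) can be sketched from URL pathnames alone. This is an illustrative model of the idea, not the template's exact code:

```javascript
// Derive each URL's depth level and parent path from its pathname,
// then sort by depth and alphabetical order, as described above.
function organize(urls) {
  return urls
    .map(raw => {
      const u = new URL(raw);
      const segments = u.pathname.split('/').filter(Boolean);
      const level = Math.min(segments.length, 5); // cap at 5 levels
      const parent =
        level === 0 ? null : u.origin + '/' + segments.slice(0, -1).join('/');
      return { url: raw, level, parent };
    })
    .sort((a, b) => a.level - b.level || a.url.localeCompare(b.url));
}

const tree = organize([
  'https://example.com/',
  'https://example.com/blog/post-1',
  'https://example.com/blog',
]);
// homepage at level 0, /blog at level 1, /blog/post-1 at level 2
```

Each row's `level` maps directly onto the Niv 0–Niv 5 columns of the generated spreadsheet.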
by WeblineIndia
# WhatsApp AI Sales Agent using PDF Vector Store

This workflow turns your WhatsApp number into an intelligent AI-powered sales agent that answers product queries using real data extracted from a PDF brochure. It loads a product brochure via HTTP Request, converts it into embeddings using OpenAI, stores them in an in-memory vector store, and allows the AI Agent to provide factual answers to users via WhatsApp. Non-text messages are filtered out and only text queries are processed. This makes the workflow ideal for building a lightweight chatbot that understands your product documentation deeply.

## Quick Start: 5-Step Fast Implementation

1. Insert your WhatsApp credentials in the WhatsApp Trigger and WhatsApp Send nodes.
2. Add your OpenAI API key to all OpenAI-powered nodes.
3. Replace the PDF URL in the HTTP Request node with your own brochure.
4. Run the Manual Trigger once to build the vector store.
5. Activate the workflow and start chatting from WhatsApp.

## What It Does

This workflow converts a product brochure (PDF) into a searchable knowledge base using LangChain vector embeddings. Incoming WhatsApp messages are processed, and if the message is text, the AI Sales Agent uses OpenAI + the vector store to produce accurate, brochure-based answers. The AI responds naturally to customer queries, supports conversation memory across the session, and retrieves information directly from the brochure when needed. Non-text messages are filtered out to maintain a clean conversational flow.

The workflow is fully modular: you can replace the PDF, modify the AI prompts, plug into CRM systems, or extend it into a broader sales automation pipeline.

## Who's It For

This workflow is ideal for:

- Businesses wanting a WhatsApp-based AI customer assistant
- Sales teams needing automated product query handling
- Companies with large product catalog PDFs
- Marketers wanting a zero-code product brochure chatbot
- Technical teams experimenting with LangChain + OpenAI inside n8n
## Requirements to Use This Workflow

To run this workflow successfully, you need:

- An n8n instance (cloud or self-hosted)
- A WhatsApp Business API connection
- An OpenAI API key
- A publicly accessible PDF brochure URL
- Basic familiarity with n8n node configuration
- Optional: a custom vector store backend (Qdrant, Pinecone) — the template uses in-memory storage

## How It Works & How To Set Up

### 1. Import the Workflow JSON

Upload the workflow JSON provided.

### 2. Configure the WhatsApp Trigger

- Open WhatsApp Trigger
- Add your WhatsApp credentials
- Set the webhook correctly to match your n8n endpoint

### 3. Configure WhatsApp Response Nodes

The workflow uses two WhatsApp send nodes:

- **Reply To User** → sends the AI response
- **Reply To User1** → sends the "unsupported message" reply

Add your WhatsApp credentials to both.

### 4. Replace the PDF Brochure

In get Product Brochure (HTTP Request), update the url parameter with your own PDF.

### 5. Run the PDF → Vector Store Setup (One-Time Only)

Use the Manual Trigger ("When clicking 'Test workflow'") to:

1. Download the PDF
2. Extract the text
3. Split it into chunks
4. Generate embeddings
5. Store them in the Product Catalogue vector store

> You must run this once after importing the workflow.

### 6. Set OpenAI Credentials

Add your OpenAI API key to the following nodes:

- OpenAI Chat Model
- OpenAI Chat Model1
- Embeddings OpenAI
- Embeddings OpenAI1

### 7. Review the AI Agent Prompt

Inside AI Sales Agent, you can edit the system message to match your brand, your product types, and your tone of voice.

### 8. Activate the Workflow

Once activated, WhatsApp users can chat with your AI Sales Agent.

## How to Customize Nodes

Here are common customization options:

- **Customize the PDF / knowledge base**: Change the URL in get Product Brochure, or upload your own file via other nodes.
- **Customize AI behavior**: Edit the systemMessage inside AI Sales Agent to change the personality, set product rules, or restrict/expand scope.
- **Change supported message types**: Modify the Handle Message Types switch logic to allow image → OCR, audio → Whisper, or documents → additional processing.
- **Modify WhatsApp message templates**: Edit the textBody of the response nodes.
- **Extend or replace the vector store**: Swap vectorStoreInMemory for Qdrant, Pinecone, or a Redis vector store by updating the vector store node.

## Add-Ons (Optional Enhancements)

You can extend this workflow with:

1. **Multi-language support**: Add OpenAI translation nodes before the agent input.
2. **CRM integration**: Send user queries and chat logs into HubSpot, Salesforce, or Zoho CRM.
3. **Product recommendation engine**: Use embedding similarity to suggest products.
4. **Order placement workflow**: Connect to Stripe or Shopify APIs.
5. **Analytics dashboard**: Log chats into Airtable/Postgres for analysis.

## Use Case Examples

Here are some practical uses:

- **Product inquiry chatbot**: Customers ask about specs, pricing, or compatibility.
- **Digital catalog assistant**: Converts PDF brochures into interactive WhatsApp search.
- **Sales support bot**: Reduces load on human sales reps by handling common questions.
- **Internal knowledge bot**: Teams access manuals, training documents, or service guides.
- **Event/product launch assistant**: Provides instant details about newly launched items.

And many more similar use cases where an AI-powered WhatsApp assistant is valuable.
## Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| WhatsApp messages not triggering the workflow | Wrong webhook URL or inactive workflow | Ensure the webhook is correct & activate the workflow |
| AI replies are empty | Missing OpenAI credentials | Add the OpenAI API key to all AI nodes |
| Vector store not populated | Manual trigger not executed | Run the Test Workflow trigger once |
| PDF extraction returns blank text | PDF is image-based | Use OCR before text splitting |
| "Unsupported message type" always triggers | Message type filter misconfigured | Check the conditions in Handle Message Types |
| AI not using brochure data | VectorStore tool not linked properly | Check connections between Embeddings → VectorStore → AI Agent |

## Need Help with Support & Extensions?

If you need help setting up, customizing, or extending this workflow, feel free to reach out to our n8n automation developers at WeblineIndia. We can help with:

- Custom WhatsApp automation workflows
- AI-powered product catalog systems
- Integrating CRM, ERP, or eCommerce platforms
- Building advanced LangChain-powered n8n automations
- Deploying scalable vector stores (Qdrant/Pinecone)

And so much more.
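The one-time PDF → vector store setup described above boils down to download, extract, split into chunks, embed, and store. The splitting step is the only non-obvious part; here is a minimal sketch, where the chunk size and overlap are illustrative defaults (n8n's LangChain splitter nodes expose equivalent parameters):

```javascript
// Illustrative fixed-size text splitter with overlap, modeling the
// "split into chunks" step before embedding. Sizes are assumptions.
function splitIntoChunks(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}

const chunks = splitIntoChunks('x'.repeat(1200));
// → 3 chunks; each would then be embedded and written to the vector store
```

Overlap matters because brochure sentences that straddle a chunk boundary would otherwise be split across embeddings and become harder to retrieve.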
by Joe V
# 🔄 AI Video Polling Engine — Long-Running Job Handler for Veo, Sora & Seedance

The async backbone that makes AI video generation production-ready ⚡🎬

## 🎥 See It In Action

🔗 Full Demo: youtu.be/OI_oJ_2F1O0

## ⚠️ Must Read First

This is a companion workflow for the main AI Shorts Generator:

🔗 Main Workflow: AI Shorts Reactor

This workflow handles the "waiting game" so your main bot stays fast and responsive. Think of it as the backstage crew that handles the heavy lifting while your main workflow performs on stage.

## 🤔 The Problem This Solves

**Without this workflow:** the user sends a message, the bot calls the AI API, and then the bot waits 2–5 minutes, blocked. The result: ❌ timeout errors, ❌ exceeded execution limits, ❌ users who think the bot is broken, ❌ no ability to handle multiple requests.

**With this workflow:** the user sends a message, the bot calls the AI API and ✅ responds instantly ("Video generating..."), 🔄 this webhook polls in the background, ⚡ the main bot handles other users, and ✅ when the video is ready it is auto-sent to the user.

Result: your bot feels instant, scales infinitely, and never times out 🚀

## 🔁 What This Workflow Does

This is a dedicated polling webhook that acts as the async job handler for AI video generation. It's the invisible worker that:

**1️⃣ Receives the job**

POST /webhook/poll-video with `{ "sessionId": "user_123", "taskId": "veo_abc456", "model": "veo3", "attempt": 1 }`

**2️⃣ Responds instantly**

200 OK — "Polling started" (the main workflow never waits!)

**3️⃣ Polls in the background**

Wait 60s → check status → repeat:

- ⏱️ Waits 1 minute between checks (API-friendly)
- 🔄 Polls up to 15 times (~15 minutes max)
- 🎯 Supports the Veo, Sora, and Seedance APIs

**4️⃣ Detects completion**

Handles multiple API response formats:

- Veo format: `{ status: "completed", videoUrl: "https://..." }`
- Market format (Sora/Seedance): `{ job: { status: "success", result: { url: "..." } } }`
- Legacy format: `{ data: { video_url: "https://..." } }`

(No matter how the API responds, this workflow figures it out.)

**5️⃣ Delivers the video**

Once ready:

- 📥 Downloads the video from the AI provider
- ☁️ Uploads it to your S3 storage
- 💾 Restores the user session from Redis
- 📱 Sends a Telegram preview with buttons
- 🔄 Enables video extension (Veo only)
- 📊 Logs metadata for analytics

## ⚙️ Technical Architecture

The flow, in sequence: the main workflow triggers the AI job and receives a task ID, returns instantly to the user ("Generating..."), and hands the task to this polling webhook. The webhook then waits 60s, checks the status ("Processing..."), waits 60s, checks again ("Completed!"), downloads the video, uploads it to S3, and sends it to the user ("Your video is ready!") — while the main workflow stays free to handle new users.

## 🚀 Key Features

**⚡ Non-blocking architecture**
- Main workflow never waits
- Handles unlimited concurrent jobs
- Each user gets instant responses

**🔄 Intelligent polling**
- Respects API rate limits (60s intervals)
- Auto-retries on transient failures
- Graceful timeout handling (15 attempts max)

**🎯 Multi-provider support** — handles different API formats:
- **Veo** — record-info endpoint
- **Sora** — Market job status
- **Seedance** — Market job status

**🛡️ Robust error handling**
- ✅ Missing video URL → retry with fallback parsers
- ✅ API timeout → continue polling
- ✅ Invalid response → parse alternative formats
- ✅ Max attempts reached → notify the user gracefully

**💾 Session management**
- Stores state in Redis
- Restores full context when the video is ready
- Supports video extension workflows
- Maintains user preferences

**📊 Production features**
- Detailed logging at each step
- Metadata tracking (generation time, model used, etc.)
- S3 storage integration
- Telegram notifications
- Analytics-ready data structure

## 🧩 Integration Points

Works seamlessly with:

| Use Case | How It Helps |
|----------|--------------|
| 🤖 Telegram bots | Keeps the bot responsive during 2–5 min video generation |
| 📺 YouTube automation | Polls the video, then triggers auto-publish |
| 🎬 Multi-video pipelines | Handles 10+ videos simultaneously |
| 🏢 Content agencies | Production-grade reliability for clients |
| 🧪 A/B testing | Generate multiple variations without blocking |

Required components:

- ✅ Main workflow that triggers video generation
- ✅ Redis for session storage
- ✅ S3-compatible storage for videos
- ✅ KIE.ai API credentials
- ✅ Telegram bot (for notifications)

## 📋 How to Use

1. **Set up the main workflow**: Import and configure the AI Shorts Reactor.
2. **Import this webhook**: Add this workflow to your n8n instance.
3. **Configure credentials**: KIE.ai API key, Redis connection, S3 storage credentials, and Telegram bot token.
4. **Link the workflows**: In your main workflow, after triggering AI video generation, POST to `YOUR_WEBHOOK_URL/poll-video` with `{ sessionId, taskId, model: 'veo3', attempt: 1 }`.
5. **Activate & test**: Activate this polling webhook, trigger a video generation from the main workflow, and watch it poll in the background and deliver results.

## 🎯 Real-World Example

Scenario: three users generate videos simultaneously.

**Without this workflow:** User A's request blocks the main workflow for five minutes, User B hits a timeout, and User C's request is never received.

**With this workflow:** Users A, B, and C each instantly get "✅ Generating! Check back in 3 min" while a separate polling instance starts per job. Three minutes later, all three receive "📹 Your video is ready!" with Preview/Publish buttons.

All three users served simultaneously with zero blocking! 🚀

## 🔧 Customization Options

- **Adjust polling frequency**: The default is 60 seconds. Lower it (e.g., `const waitTime = 30;`) for faster testing at the cost of credits, or raise it (e.g., `const waitTime = 90;`) to be more API-friendly.
- **Change timeout limits**: The default is 15 attempts (~15 minutes); increase `maxAttempts` (e.g., to 20) for longer videos.
- **Add more providers**: Extend the `switch(model)` that handles `veo3` with cases for other AI video APIs, such as Runway ML or Pika Labs polling.
- **Custom notifications**: Replace Telegram with Discord webhooks, Slack messages, email notifications, SMS via Twilio, or push notifications.

## 📊 Monitoring & Analytics

What gets logged per job: sessionId, taskId, model, status, number of polling attempts, total time, the final S3 video URL, and metadata such as duration, resolution, and file size.

Track key metrics:

- ⏱️ Average generation time per model
- 🔄 Polling attempts before completion
- ❌ Failure rate by provider
- 💰 Cost per video (API usage)
- 📈 Concurrent job capacity

## 🚨 Troubleshooting

**"Video never completes"**
- ✅ Check KIE.ai API status
- ✅ Verify the task ID is valid
- ✅ Increase maxAttempts if needed
- ✅ Check that the API response format hasn't changed

**"Polling stops after 1 attempt"**
- ✅ Ensure the webhook URL is correct
- ✅ Check n8n execution limits
- ✅ Verify the Redis connection is stable

**"Video downloads but doesn't send"**
- ✅ Check the Telegram bot credentials
- ✅ Verify the S3 upload succeeded
- ✅ Ensure the Redis session exists
- ✅ Check that the Telegram chat ID is valid

**"Multiple videos get mixed up"**
- ✅ Confirm sessionId is unique per user
- ✅ Check for Redis key collisions
- ✅ Verify taskId is properly passed

## 🏗️ Architecture Benefits

Why separate this logic?

| Aspect | Monolithic Workflow | Separated Webhook |
|--------|---------------------|-------------------|
| ⚡ Response time | 2–5 minutes | <1 second |
| 🔄 Concurrency | 1 job at a time | Unlimited |
| 💰 Execution costs | High (long-running) | Low (short bursts) |
| 🐛 Debugging | Hard (mixed concerns) | Easy (isolated logic) |
| 📈 Scalability | Poor | Excellent |
| 🔧 Maintenance | Complex | Simple |

## 🛠️ Requirements

Services needed:

- ✅ n8n instance (cloud or self-hosted)
- ✅ KIE.ai API (Veo, Sora, Seedance access)
- ✅ Redis (session storage)
- ✅ S3-compatible storage (videos)
- ✅ Telegram bot (optional, for notifications)

Skills required: basic n8n knowledge, an understanding of webhooks, Redis basics (key-value storage), and S3 upload concepts.

Setup time: ~15 minutes. Technical level: intermediate.

## 🏷️ Tags

webhook, polling, async-jobs, long-running-tasks, ai-video, veo, sora, seedance, production-ready, redis, s3, telegram, youtube-automation, content-pipeline, scalability, microservices, n8n-webhook, job-queue, background-worker

## 💡 Best Practices

**Do's**
- ✅ Keep the polling interval at 60s minimum (respect API limits)
- ✅ Always handle timeout scenarios
- ✅ Log generation metadata for analytics
- ✅ Use unique session IDs per user
- ✅ Clean up Redis after job completion

**Don'ts**
- ❌ Don't poll faster than 30s (risk of API bans)
- ❌ Don't store videos in Redis (use S3)
- ❌ Don't skip error handling
- ❌ Don't use this for real-time updates (<10s)
- ❌ Don't forget to activate the webhook

## 🌟 Success Stories

After implementing this webhook:

| Metric | Before | After |
|--------|--------|-------|
| ⚡ Bot response time | 2–5 min | <1 sec |
| 🎬 Concurrent videos | 1 | 50+ |
| ❌ Timeout errors | 30% | 0% |
| 😊 User satisfaction | 6/10 | 9.5/10 |
| 💰 Execution costs | $50/mo | $12/mo |

## 🔗 Related Workflows

- 🎬 Main: AI Shorts Reactor — the full video generation bot
- 📤 YouTube Auto-Publisher — publish completed videos
- 🎨 Video Style Presets — custom prompt templates
- 📊 Analytics Dashboard — track all
generations 📜 License MIT License - Free to use, modify, and distribute! ⚡ Make your AI video workflows production-ready. Let the webhook handle the waiting. ⚡ Created by Joe Venner | Built with ❤️ and n8n | Part of the AI Shorts Reactor ecosystem
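The polling loop this webhook implements (60-second waits, 15-attempt cap, graceful timeout) can be sketched as a small function. This is a minimal illustration, not the actual n8n node logic: `pollUntilReady`, `checkStatus`, and the option names are assumptions for the sake of the example.

```javascript
// Sketch of the polling loop: wait, check status, repeat up to maxAttempts.
// checkStatus is any async function returning e.g. "processing" | "completed";
// in the real workflow this role is played by the Check Status HTTP node.
async function pollUntilReady(checkStatus, { maxAttempts = 15, waitMs = 60000 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = await checkStatus(attempt);
    if (status === "completed") {
      // Real workflow: download video, upload to S3, send Telegram preview
      return { done: true, attempts: attempt };
    }
    if (attempt < maxAttempts) {
      // Real workflow: the Wait node (60s, API-friendly)
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
  // Max attempts reached: notify user gracefully instead of hanging forever
  return { done: false, attempts: maxAttempts };
}
```

Because the wait interval and attempt cap are parameters, the "Customization Options" above (30s for testing, 20 attempts for long videos) are one-line changes.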
by Wessel Bulte
What this template does Receives meeting data via a webform, cleans/structures it, fills a Word docx template, uploads the file to SharePoint, appends a row to Excel 365, and sends an Outlook email with the document attached. Good to know Uses a community node: DocxTemplater to render the DOCX from a template. Install it from the Community Nodes catalog. The template context is the workflow item JSON. In your docx file, use docxtemplater placeholders (single curly braces, e.g., {date}) whose names match the JSON keys. Includes a minimal HTML form snippet (outside n8n) you can host anywhere. Replace the placeholder WEBHOOK_URL with your Webhook URL before testing. Microsoft nodes require Azure app credentials with correct permissions (SharePoint, Excel/Graph, Outlook). How it works Webhook — Receives meeting form JSON (POST). Code (Parse Meeting Data) — Parses/normalizes fields, builds semicolon‑separated strings for attendees/absentees, and flattens discussion points / action items. SharePoint (Download) — Fetches the DOCX template (e.g., meeting_minutes_template.docx). Merge — Combines template binary + JSON context by position. DocxTemplater — Renders meeting_{{now:yyyy-MM-dd}}.docx using the JSON context. SharePoint (Upload) — Saves the generated DOCX to a target folder (e.g., /Meetings). Microsoft Excel 365 (Append) — Appends a row to your sheet (Date, Time, Attendees, etc.). Microsoft Outlook (Send message) — Emails the generated DOCX as an attachment. Requirements Community node DocxTemplater installed Microsoft 365 access with credentials for: SharePoint (download template + upload output) Excel 365 (append to table/worksheet) Outlook (send email) A Word template with placeholders matching the JSON keys Need help? 🔗 LinkedIn – Wessel Bulte
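The "Parse Meeting Data" Code node can be approximated as below. This is a sketch under assumed field names (attendees, absentees, discussionPoints, actionItems); match them to your own form's JSON keys and the placeholders in your Word template:

```javascript
// Rough sketch of the "Parse Meeting Data" Code node: normalize fields,
// build semicolon-separated attendee strings, flatten list items into
// numbered lines for the DOCX context. Field names are assumed examples.
function parseMeetingData(form) {
  const joinList = (list) =>
    (list || []).map((s) => String(s).trim()).filter(Boolean).join("; ");
  const flatten = (items) =>
    (items || []).map((item, i) => `${i + 1}. ${item}`).join("\n");
  return {
    date: String(form.date || "").trim(),
    time: String(form.time || "").trim(),
    attendees: joinList(form.attendees),              // "Ann; Bob"
    absentees: joinList(form.absentees),
    discussionPoints: flatten(form.discussionPoints), // numbered lines
    actionItems: flatten(form.actionItems),
  };
}
```

The returned object then serves as the docxtemplater context and as the row values appended to Excel 365.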
by Weilun
🔄 n8n Workflow: Check and Update n8n Version This workflow automatically checks if the local n8n version is outdated and, if so, writes a flag to a file to signal that an update is needed. 🖥️ Working Environment **Operating System:** Ubuntu 24.04 **n8n Installation:** Docker container 📁 Project Directory Structure

```
n8n/
├── check_update.txt
├── check-update.sh
├── compose.yml
└── update_n8n.cron
```

🧾 File Descriptions check_update.txt Contains a single word: true: Update is needed false: No update required check-update.sh

```bash
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

if grep -q "true" /home/sysadmin/n8n/check_update.txt; then
    # Place your update logic here
    echo "Update needed - please insert update logic"
    echo false > /home/sysadmin/n8n/check_update.txt
fi
```

Purpose: Checks the contents of check_update.txt If it contains true, executes update logic (currently a placeholder) Resets check_update.txt to false so the update runs only once update_n8n.cron

```
SHELL=/bin/sh
10 5 * * * /bin/sh /home/sysadmin/n8n/check-update.sh
```

Purpose: Runs the check-update.sh script daily at 5:10 AM Uses /bin/sh as the shell environment 🧩 n8n Workflow Breakdown 1. Schedule Trigger 🕓 **Purpose:** Triggers the workflow every day at 5:00 AM **Node Type:** Schedule Trigger 2. Get the latest n8n version 🌐 **Purpose:** Fetches the latest version of n8n from npm **Endpoint:** https://registry.npmjs.org/n8n/latest **Node Type:** HTTP Request 3. Get Local n8n version 🖥️ **Purpose:** Retrieves the currently running n8n version **Endpoint:** http://192.168.100.18:5678/rest/settings **Node Type:** HTTP Request 4. If 🔍 **Purpose:** Compares the local and latest versions **Condition:** If not equal → update is needed 5. SSH1 🧾 **Purpose:** Writes the result to a file on the host via SSH **Logic:** echo "{{ $('If').params.conditions ? 'false' : 'true' }}" > check_update.txt Effect: Updates check_update.txt with "true" if an update is needed, "false" otherwise. 🛠️ Setting up Crontab on Ubuntu 1. Register the cron job with: crontab update_n8n.cron 2.
Verify that your cron job is registered: crontab -l ✅ Result **5:00 AM** – n8n workflow checks versions and writes result to check_update.txt **5:10 AM** – Cron runs check-update.sh to act on the update flag
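The If-node comparison boils down to extracting the two version strings and checking inequality. A sketch, with one caveat: the npm registry response carries a top-level version field, while the data.versionCli path for the /rest/settings response is an assumption you should verify against your own instance:

```javascript
// Sketch of the version comparison performed by the If node.
// npmLatestBody:  JSON from https://registry.npmjs.org/n8n/latest
// settingsBody:   JSON from the local /rest/settings endpoint
//                 (data.versionCli is an assumed field path -- verify it)
function needsUpdate(npmLatestBody, settingsBody) {
  const latest = String(npmLatestBody.version || "").trim();
  const local = String((settingsBody.data || settingsBody).versionCli || "").trim();
  // Only flag an update when both versions are known and they differ
  return latest !== "" && local !== "" && local !== latest;
}
```

The boolean result is what the SSH node turns into the true/false flag written to check_update.txt for the cron job to pick up.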