by Growth AI
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

# Website sitemap generator and visual tree creator

## Who's it for

Web developers, SEO specialists, UX designers, and digital marketers who need to analyze website structure, create visual sitemaps, or audit site architecture for optimization purposes.

## What it does

This workflow automatically generates a comprehensive sitemap from any website URL and creates an organized hierarchical structure in Google Sheets. It follows the website's sitemap to discover all pages, then organizes them by navigation levels (Level 1, Level 2, etc.) with proper parent-child relationships. The output can be further processed to create visual tree diagrams and mind maps.

## How it works

The workflow follows a five-step automation process:

1. **URL Input**: Accepts the website URL via a chat interface
2. **Site Crawling**: Uses Firecrawl to discover all pages, following the website's sitemap only
3. **Success Validation**: Checks whether crawling was successful (some sites block external crawlers)
4. **Hierarchical Organization**: Processes URLs into a structured tree with proper level relationships
5. **Google Sheets Export**: Creates a formatted spreadsheet with the complete site architecture

The system respects robots.txt and follows only sitemap-declared pages to ensure ethical crawling.
## Requirements

- Firecrawl API key (for website crawling and sitemap discovery)
- Google Sheets access
- Google Drive access (for template duplication)

## How to set up

### Step 1: Prepare your template (recommended)

It's recommended to create your own copy of the base template:

1. Access the base Google Sheets template
2. Make a copy for your personal use
3. Update the workflow's "Copy template" node with your template's file ID (replace the default ID: 12lV4HwgudgzPPGXKNesIEExbFg09Tuu9gyC_jSS1HjI)

This ensures you have control over the template formatting and can customize it as needed.

### Step 2: Configure API credentials

Set up the following credentials in n8n:

- **Firecrawl API**: For crawling websites and discovering sitemaps
- **Google Sheets OAuth2**: For creating and updating spreadsheets
- **Google Drive OAuth2**: For duplicating the template file

### Step 3: Configure Firecrawl settings (optional)

The workflow uses optimized Firecrawl settings:

- `ignoreSitemap: false` - respects the website's sitemap
- `sitemapOnly: true` - only crawls URLs listed in sitemap files

These settings ensure ethical crawling and faster processing.

### Step 4: Access the workflow

The workflow uses a chat trigger interface, so no manual configuration is needed. Simply provide the website URL you want to analyze when prompted.

## How to use the workflow

### Basic usage

1. **Start the chat**: Access the workflow via the chat interface
2. **Provide the URL**: Enter the website URL you want to analyze (e.g., "https://example.com")
3. **Wait for processing**: The system will crawl, organize, and export the data
4. **Receive your results**: Get a direct, clickable link to your generated Google Sheet, so there is no need to search for the file

### Error handling

- **Invalid URLs**: If the provided URL is invalid or the website blocks crawling, you'll receive an immediate error message
- **Graceful failure**: The workflow stops without creating unnecessary files when errors occur
- **Common causes**: Incorrect URL format, robots.txt restrictions, or site security settings

### File organization

- **Automatic naming**: Generated files follow the pattern "[Website URL] - n8n - Arborescence"
- **Google Drive storage**: Files are automatically organized in your Google Drive
- **Instant access**: A direct link is provided immediately upon completion

## Advanced processing for visual diagrams

### Step 1: Copy sitemap data

Once your Google Sheet is ready:

1. Copy all the hierarchical data from the generated spreadsheet
2. Prepare it for AI processing

### Step 2: Generate an ASCII tree structure

Use any AI model with this prompt:

> Create a hierarchical tree structure from the following website sitemap data. Return ONLY the tree structure using ASCII tree formatting with ├── and └── characters. Do not include any explanations, comments, or additional text - just the pure tree structure. The tree should start with the root domain and show all pages organized by their hierarchical levels. Use proper indentation to show parent-child relationships. Here is the sitemap data: [PASTE THE SITEMAP DATA HERE]
>
> Requirements:
> - Use ASCII tree characters (├── └── │)
> - Show clear hierarchical relationships
> - Include all pages from the sitemap
> - Return ONLY the tree structure, no other text
> - Start with the root domain as the top level

### Step 3: Create a visual mind map

1. Visit the Whimsical Diagrams GPT
2. Request a mind map creation using your ASCII tree structure
3. Get a professional visual representation of your website architecture

## Results interpretation

### Google Sheets output structure

The generated spreadsheet contains:

- **Niv 0 to Niv 5**: Hierarchical levels (0 = homepage, 1-5 = navigation depth)
- **URL column**: Complete URLs for reference
- **Hyperlinked structure**: Clickable links organized by hierarchy
- **Multi-domain support**: Handles subdomains and different domain structures

### Data organization features

- **Automatic sorting**: Pages organized by navigation depth and alphabetical order
- **Parent-child relationships**: Clear hierarchical structure maintained
- **Domain separation**: Main domains and subdomains processed separately
- **Clean formatting**: URLs decoded and formatted for readability

## Workflow limitations

- **Sitemap dependency**: Only discovers pages listed in the website's sitemap
- **Crawling restrictions**: Some websites may block external crawlers
- **Level depth**: Limited to 5 hierarchical levels for clarity
- **Rate limits**: Respects Firecrawl API limitations
- **Template dependency**: Requires access to the base template for duplication

## Use cases

- **SEO audits**: Analyze site structure for optimization opportunities
- **UX research**: Understand navigation patterns and user paths
- **Content strategy**: Identify content gaps and organizational issues
- **Site migrations**: Document existing structure before redesigns
- **Competitive analysis**: Study competitor site architectures
- **Client presentations**: Create visual site maps for stakeholder reviews
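The "Hierarchical Organization" step can be approximated in an n8n Code node. A minimal sketch, assuming URLs arrive as plain strings (the template's actual column names and edge-case handling may differ):

```javascript
// Sketch: derive navigation levels (Niv 0-5) from sitemap URLs.
// In a real Code node you would build the input array from $input.all().
function buildHierarchy(urls) {
  return urls
    .map((raw) => {
      const u = new URL(raw);
      // Split the path into segments, ignoring empty parts and trailing slashes.
      const segments = u.pathname.split('/').filter(Boolean);
      // Level 0 is the homepage; deeper paths map to Niv 1..5 (capped at 5).
      const level = Math.min(segments.length, 5);
      return {
        url: raw,
        level,
        parent: level === 0 ? null : u.origin + '/' + segments.slice(0, -1).join('/'),
      };
    })
    // Sort by navigation depth, then alphabetically, as the sheet output does.
    .sort((a, b) => a.level - b.level || a.url.localeCompare(b.url));
}

const rows = buildHierarchy([
  'https://example.com/',
  'https://example.com/blog/post-1',
  'https://example.com/blog',
]);
```

Each resulting object carries the level and parent URL needed to place the row in the correct `Niv` column of the spreadsheet.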
by Roshan Ramani
## Who's it for

This template is perfect for content creators, researchers, marketers, and Reddit enthusiasts who want to stay updated on specific topics without manually browsing Reddit. If you need curated, AI-summarized Reddit insights delivered directly to your Telegram, this workflow automates the entire process.

## What it does

This workflow turns your Telegram into a powerful Reddit search engine with AI-powered curation. Simply send any keyword to your Telegram bot, and it will:

- Search Reddit with four parallel queries across three sorting methods (top, hot, and relevance, plus a second top search for broader coverage) to capture diverse perspectives
- Automatically remove duplicate posts from the multiple search results
- Filter posts based on quality metrics (minimum 50 upvotes, content posted within the last 15 days, non-empty text)
- Extract key information: title, upvotes, subreddit, publication date, URL, and content
- Generate a clean, Telegram-formatted summary using Google Gemini AI
- Deliver structured results with direct links back to you instantly

The AI summary includes post titles, upvote counts, timestamps, brief insights, and direct Reddit links, all formatted for easy mobile reading.
## How it works

1. **Telegram Trigger**: The user sends a search keyword via Telegram (e.g., "voice AI agents")
2. **Parallel Reddit Searches**: Four simultaneous Reddit API calls search with different sorting algorithms:
   - Top posts (all-time popularity)
   - Hot posts (trending now)
   - Relevance (best keyword matches)
   - A second top search (for broader coverage)
3. **Merge & Deduplicate**: All search results combine into one stream, then a JavaScript Code node removes duplicate posts by comparing post IDs
4. **Field Extraction**: The Edit Fields node extracts and formats:
   - Post title
   - Upvote count
   - Subreddit name and subscriber count
   - Publication date (converted from a Unix timestamp)
   - Reddit URL
   - Post content (selftext)
5. **Quality Filtering**: The Filter node applies three conditions:
   - Minimum 50 upvotes (ensures quality)
   - Non-empty content (excludes link-only posts)
   - Posted within the last 15 days (ensures freshness)
6. **Data Aggregation**: All filtered posts aggregate into a single dataset for AI processing
7. **AI Summarization**: Google Gemini AI analyzes the aggregated posts and generates a concise, Telegram-formatted summary with:
   - Emoji indicators for better readability
   - A point-wise breakdown of the top 5-7 posts
   - Upvote counts and relative timestamps
   - Brief 1-2 sentence summaries
   - Direct Reddit links
8. **Delivery**: The formatted summary is sent back to the user's Telegram chat

## Requirements

Credentials needed:

- **Reddit OAuth2 API** - for searching Reddit posts (get Reddit API credentials)
- **Google Gemini API** - for AI-powered summarization (get a Gemini API key)
- **Telegram Bot Token** - for receiving queries and sending results (create a Telegram bot)

n8n version: self-hosted or Cloud (latest version recommended)

## Setup instructions

### 1. Create a Telegram bot

1. Message @BotFather on Telegram
2. Send /newbot and follow the prompts
3. Copy the bot token for your n8n credentials
4. Start a chat with your new bot

### 2. Configure the Reddit API

1. Go to https://www.reddit.com/prefs/apps
2. Click "Create App" and select "script"
3. Note your Client ID and Secret
4. Add the credentials to n8n's Reddit OAuth2

### 3. Get a Gemini API key

1. Visit https://ai.google.dev/
2. Create a new API key
3. Add it to n8n's Google Gemini credentials

### 4. Import & configure the workflow

1. Import this template into n8n
2. Add your three credentials to the respective nodes
3. Remove pinData from the "Telegram Trigger" node (test data)
4. Activate the workflow

### 5. Test it

1. Send any keyword to your Telegram bot (e.g., "machine learning")
2. Wait 10-20 seconds for results
3. Receive AI-summarized Reddit insights

## How to customize

- **Adjust quality filters**: Edit the Filter node conditions:
  - Change the minimum upvotes (currently 50)
  - Modify the time range (currently 15 days)
  - Add a subreddit subscriber minimum
- **Limit results**: Add a Limit node after the Filter to cap results at 10-15 posts for faster processing
- **Change search strategies**: Modify the Reddit nodes' "sort" parameter:
  - `new` - latest posts first
  - `comments` - most commented
  - `controversial` - controversial content
- **Customize AI output**: Edit the AI Agent's system message to:
  - Change the summary style (more/less detail)
  - Adjust formatting (bullets, numbered lists)
  - Modify language/tone
  - Add emoji preferences
- **Add user feedback**: Insert a Telegram Send Message node after the trigger: "🔍 Searching Reddit for '{{ $json.message.text }}'... Please wait."
- **Enable error handling**: Create an error workflow:
  - Add an Error Trigger node
  - Send a fallback message: "❌ Search failed. Please try again."
- **Sort by popularity**: Add a Sort node after the Filter (field: upvotes, order: descending)
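The deduplication and quality-filtering steps can be sketched together in one Code node. A minimal version, assuming each post carries Reddit's standard `id`, `ups`, `selftext`, and `created_utc` fields (in n8n the posts would come from `$input.all()`):

```javascript
// Sketch of the "Merge & Deduplicate" and "Quality Filtering" steps.
const FIFTEEN_DAYS = 15 * 24 * 60 * 60; // in seconds, matching created_utc

function dedupeAndFilter(posts, nowSeconds) {
  const seen = new Set();
  return posts.filter((p) => {
    if (seen.has(p.id)) return false; // drop duplicates across the four searches
    seen.add(p.id);
    return (
      p.ups >= 50 && // minimum 50 upvotes
      p.selftext.trim().length > 0 && // exclude link-only posts
      nowSeconds - p.created_utc <= FIFTEEN_DAYS // posted within 15 days
    );
  });
}

const now = 1_700_000_000;
const result = dedupeAndFilter(
  [
    { id: 'a', ups: 120, selftext: 'Great thread', created_utc: now - 3600 },
    { id: 'a', ups: 120, selftext: 'Great thread', created_utc: now - 3600 }, // duplicate
    { id: 'b', ups: 10, selftext: 'Low score', created_utc: now - 3600 }, // too few upvotes
    { id: 'c', ups: 300, selftext: '', created_utc: now - 3600 }, // link-only
  ],
  now
);
```

Keeping the dedup set keyed on `id` means the same post surfaced by both the top and relevance searches is summarized only once.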
by Andi Sakti
Brief description: Your personal finance assistant inside Telegram! Chat naturally with an AI agent to track expenses, log income, view spending history, and manage your budget, all through simple conversation. No forms, no spreadsheets, just chat.

## How it works

1. **Chat** - Send expense or income details naturally via Telegram
2. **Understand** - The AI agent parses your message and determines the action (add/get/delete)
3. **Execute** - Performs CRUD operations on Google Sheets (expense & income tabs)
4. **Respond** - Replies with confirmations, summaries, or requested data in a formatted response

## Set up steps

⏱️ Setup time: ~15-20 minutes

1. Create a Telegram bot via BotFather and get your API token.
2. Connect Google Gemini (or your preferred LLM) for the AI agent.
3. Set up a Google Sheet with separate tabs for expenses and income.
4. Configure the Google Sheets tools with your sheet IDs and column mappings.
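The "Understand → Execute" hand-off can be pictured as routing on structured output from the AI agent. A sketch under stated assumptions: the field names (`action`, `type`, `category`, `amount`) are illustrative, not the template's actual schema, and in the real workflow the agent's tool calls do this routing implicitly.

```javascript
// Hypothetical routing of AI-parsed chat messages to sheet operations.
function routeAction(parsed) {
  const sheet = parsed.type === 'income' ? 'Income' : 'Expenses';
  switch (parsed.action) {
    case 'add':
      return {
        operation: 'append',
        sheet,
        row: { date: parsed.date, category: parsed.category, amount: parsed.amount },
      };
    case 'get':
      return { operation: 'read', sheet };
    case 'delete':
      return { operation: 'delete', sheet, rowNumber: parsed.rowNumber };
    default:
      throw new Error(`Unknown action: ${parsed.action}`);
  }
}

const plan = routeAction({
  action: 'add',
  type: 'expense',
  date: '2024-05-01',
  category: 'Groceries',
  amount: 42.5,
});
```

The point of the sketch is the three-way split: one natural-language message becomes exactly one CRUD operation against one tab.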
by Rully Saputra
# Decodo-powered review aggregation to Google Sheets with Gemini analysis and Telegram alerts

## Who's it for

This template is designed for e-commerce owners, marketplace sellers, product teams, and CX/reputation managers who need an automated way to monitor product reviews. It's ideal for anyone who tracks Amazon listings or other URLs and wants AI-powered sentiment, summaries, and alerts without manual scraping.

## What it does

This workflow automatically retrieves product URLs from Google Sheets, scrapes reviews using Decodo (a community node), formats the extracted data, and analyzes it using Gemini AI. It produces both a sentiment classification and a concise review summary. Results are saved to a Google Sheets log, and the workflow sends a Telegram alert whenever new reviews are processed. The entire pipeline runs on a schedule, ensuring continuous and fully automated monitoring.

## How it works

1. A scheduled trigger starts the workflow.
2. Google Sheets provides the list of product URLs.
3. Each URL is processed through Decodo to extract user reviews.
4. A Code node formats the raw review data.
5. Gemini performs sentiment analysis and summarization.
6. Results are appended to a Google Sheets review log.
7. A Telegram message delivers a real-time summary and sentiment snapshot.

Sign up for Decodo - get better pricing here

## Requirements

- Decodo API credentials (self-hosted community node)
- Google Sheets API key
- Gemini AI credentials
- Telegram bot + chat ID
- n8n self-hosted (required for the Decodo community node)

## How to set up

1. Add your Decodo credentials to the Decodo node.
2. Update both Google Sheets nodes with your document ID and sheet names.
3. Insert your Gemini API key.
4. Provide your Telegram bot token and chat ID.
5. Adjust the schedule interval to your preference.
6. Run the workflow once to validate mappings and output fields.

## How to customize

- Modify the Code node to change how reviews are formatted.
- Extend the Gemini prompts for deeper analysis (keywords, categories, toxicity).
- Add filters to trigger alerts only on negative sentiment.
- Append additional metadata (timestamps, product IDs) to the Sheets log.
- Add email, Slack, or other communication channels.

## Disclaimer (community node)

This workflow uses a community node (Decodo) and therefore works only on self-hosted n8n instances. Be sure to install and trust the package before using it.
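The Code node that formats raw review data might look like the sketch below. The input field names (`reviews`, `rating`, `content`) are assumptions about the scraper output rather than Decodo's documented schema, so adapt them to what the node actually returns:

```javascript
// Sketch: normalize raw scraped reviews into flat rows for the
// Gemini prompt and the Google Sheets review log.
function formatReviews(raw, productUrl, scrapedAt) {
  return (raw.reviews || []).map((r) => ({
    product_url: productUrl,
    rating: Number(r.rating) || null, // coerce "4" -> 4; null when unparsable
    review_text: (r.content || '').replace(/\s+/g, ' ').trim(), // collapse whitespace
    scraped_at: scrapedAt,
  }));
}

const rows = formatReviews(
  { reviews: [{ rating: '4', content: '  Fast shipping,\n great quality. ' }] },
  'https://example.com/product/123',
  '2024-05-01'
);
```

Flattening each review to one row is what lets the Sheets append node and the Gemini prompt share the same structure.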
by Sona Labs
Automatically enrich company records with comprehensive firmographic data by pulling domains from Google Sheets, setting up custom HubSpot fields, enriching through the Sona API, and syncing complete profiles to HubSpot CRM with custom property mapping.

Import company domains from a Google Sheet, configure custom HubSpot fields for Sona data, automatically enrich domains with detailed firmographic intelligence, and create fully populated company records in HubSpot, so you can build rich prospect databases without manual research.

## How it works

### Step 1: Get the company list

- Reads company domains from your Google Sheet
- Aggregates all domains into a single array
- Prepares the data for batch processing

### Step 2: Set up HubSpot fields

- Creates custom Sona fields in HubSpot CRM
- Defines all the enrichment data fields needed
- Ensures proper field mapping for incoming data

### Step 3: Prepare for processing

- Converts the aggregated domains into individual items
- Sets the data up for the batch loop
- Readies each company for enrichment

### Step 4: Enrich & sync to HubSpot

- Loops through each company domain
- Calls the Sona API for enrichment data
- Creates the company in HubSpot with standard fields
- Formats and updates the custom Sona properties
- Combines firmographic + tech data in one profile
- Includes a 2-second wait between operations for rate limiting

## What you'll get

The workflow enriches each company record with:

- **Firmographic data**: Company size, employee count, revenue estimates, headquarters location, and founding year
- **Contact information**: Phone numbers, social media profiles, and timezone details
- **Business intelligence**: Company descriptions and industry positioning
- **Custom HubSpot properties**: All Sona data mapped to dedicated custom fields
- **Organized CRM records**: All data automatically synced to HubSpot for immediate use
- **Domain tracking**: Companies linked to their websites for future reference

## Why use this

- **Eliminate manual research**: Save 10-15 minutes per company by automating firmographic lookups
- **Build rich databases**: Transform basic domain lists into comprehensive company profiles
- **Custom field management**: Automatically creates and populates HubSpot custom properties
- **Improve targeting**: Segment and prioritize accounts based on size, location, and other firmographics
- **Keep data current**: Run scheduled enrichments to maintain up-to-date company information
- **Scale your prospecting**: Process hundreds of companies in minutes instead of days
- **Better lead qualification**: Make informed decisions with complete company intelligence
- **Streamlined workflow**: One-click enrichment from spreadsheet to CRM with custom field setup

## Setup instructions

Before you start, you'll need:

- A Google Sheet with a column named `website_Domain` containing company domains (e.g., example.com)
- A HubSpot account & app token. Get an app token by creating a legacy app:
  1. Go to HubSpot Settings → Integrations → Legacy Apps
  2. Click Create Legacy App
  3. Select Private (for one account)
  4. In the scopes section, enable the following permissions:
     - crm.schemas.companies.write
     - crm.objects.companies.write
     - crm.schemas.companies.read
     - crm.objects.companies.read
  5. Click Create
  6. Copy the access token from the Auth tab
- A Sona API key (for company enrichment):
  - Sign up at https://app.sonalabs.com
  - A free tier is available for testing

Configuration steps:

1. **Prepare your data**: Create a Google Sheet with a `website_Domain` column and add 2-3 test companies (e.g., example.com, anthropic.com)
2. **Connect Google Sheets**: In the "Get Company List from Sheet" node, authenticate with Google and select your spreadsheet and sheet name
3. **Configure HubSpot field creation**: In the "Create Custom HubSpot Fields" node (Step 2), authenticate with your HubSpot access token and review the custom Sona fields that will be created
4. **Add Sona credentials**: In the "Sona Enrich" node, authenticate with your Sona API key
5. **Connect HubSpot for company creation**: In the "Create HubSpot Company" and "Update Company with AI Data" nodes, authenticate using your HubSpot access token
6. **Test with sample data**: Run the workflow with 2-3 test companies and verify that:
   - Custom fields are created in HubSpot
   - Company records appear correctly in HubSpot
   - All firmographic data is populated in the custom properties
7. **Add error handling**: Configure notifications for failed enrichments or API errors (optional but recommended)
8. **Scale and automate**: Process your full company list, then optionally add a Schedule Trigger for automatic daily or weekly enrichment to keep your CRM data fresh
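Step 3's "convert aggregated domains into individual items" is a common n8n Code-node pattern: one item holding an array goes in, one item per domain comes out. A minimal sketch, assuming the aggregated input looks like `{ domains: [...] }`:

```javascript
// Sketch: split one aggregated array of domains into individual n8n
// items (objects with a `json` key) for the batch loop. In a real
// Code node the input would come from $input.first().json.
function splitDomains(aggregated) {
  return aggregated.domains
    .map((d) => d.trim().toLowerCase()) // normalize before enrichment
    .filter(Boolean) // drop empty cells from the sheet
    .map((domain) => ({ json: { website_Domain: domain } }));
}

const items = splitDomains({ domains: [' Example.com', 'anthropic.com', ''] });
```

Returning `{ json: {...} }` objects is what makes the loop node treat each domain as a separate execution, which is also where the 2-second wait between Sona calls applies.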
by Đỗ Thành Nguyên
# Automated Facebook Page Story Video Publisher (Google Drive → Facebook → Google Sheets)

> Recommended: Self-hosted via tino.vn/vps-n8n?affid=388 - use code VPSN8N for up to 39% off.

This workflow is an automated solution for publishing video content from Google Drive to your Facebook Page Stories, using Google Sheets as a posting queue manager.

## What this workflow does

This automation orchestrates a complete multi-step process for uploading and publishing videos to Facebook Stories:

1. **Queue management**: Every 2 hours and 30 minutes, the workflow checks a Google Sheet (Get Row Sheet node) to find the first video whose Stories column is empty, meaning it hasn't been posted yet.
2. **Conditional execution**: An If node confirms that the video's File ID exists before proceeding.
3. **Video retrieval**: Using the File ID, the workflow downloads the video from Google Drive (Google Drive node) and calculates its binary size (Set to the total size in bytes node).
4. **Facebook 3-step upload**: It performs the Facebook Graph API's three-step upload process through HTTP Request nodes:
   - Step 1 - Initialize session: Starts an upload session and retrieves the upload_url and video_id.
   - Step 2 - Upload file: Uploads the binary video data to the provided upload_url.
   - Step 3 - Publish video: Finalizes and publishes the uploaded video as a Facebook Story.
5. **Status update**: Once complete, the workflow updates the same row in Google Sheets (Update upload status in sheet node) using the row_number to mark the video as processed.

## Prerequisites

### 1. n8n instance

A running n8n instance (self-hosted recommended; see the link above).

### 2. Google services

- **Google Drive credentials**: OAuth2 credentials for Google Drive so n8n can download video files.
- **Google Sheets credentials**: OAuth2 credentials for Google Sheets to read the posting queue and update statuses.
- **Google Sheet**: A spreadsheet (ID: 1RnE5O06l7W6TLCLKkwEH5Oyl-EZ3OE-Uc3OWFbDohYI) containing:
  - File ID - the video's unique ID in Google Drive.
  - Stories - posting status column (leave empty for pending videos).
  - row_number - used for updating the correct row after posting.

### 3. Facebook setup

- **Page ID**: Your Facebook Page ID (currently hardcoded as 115432036514099 in the info node).
- **Access Token**: A **Page Access Token** with permissions such as pages_manage_posts and pages_read_engagement. This token is hardcoded in the info node and again in the "Step 3. Post video" node.

## Usage guide and implementation notes

### How to use

1. **Queue videos**: Add video entries to your Google Sheet. Each entry must include a valid Google Drive File ID. Leave the Stories column empty for videos that haven't been posted.
2. **Activate**: Save and activate the workflow. The Schedule Trigger will automatically handle new uploads every 2 hours and 30 minutes.

### Implementation notes

- ⚠️ **Token security**: Hardcoding your **Access Token** inside the info node is **not recommended**. Tokens expire and expose your Page to risk if leaked.
  👉 Action: Replace the static token with a secure credential setup that supports token rotation.
- **Loop efficiency**: The **"false"** output of the If node currently loops back to the Get Row Sheet node. This creates unnecessary cycles when no videos are found.
  👉 Action: Disconnect that branch so the workflow stops gracefully when no unposted videos remain.
- **Status updates**: To prevent re-posting the same video, the final Update upload status in sheet node must update the **Stories** column (e.g., write "POSTED").
  👉 Action: Add this mapping explicitly to your Google Sheets node.
- **Automated File ID sync**: This workflow assumes that the Google Sheet already contains valid File IDs.
  👉 You can build a secondary workflow (Schedule Trigger1 → Search files and folders → Append or update row in sheet) to automatically populate new video File IDs from your Google Drive.

## ✅ Result

Once active, this workflow automatically pulls pending videos from your Google Sheet, uploads them to Facebook Stories, and marks them as posted, all without manual intervention.
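The three HTTP Request nodes can be pictured as the request builders below. The endpoint and header shapes follow Facebook's documented `video_stories` upload flow, but treat the exact field names as an assumption and verify them against the current Graph API documentation before relying on them:

```javascript
// Sketch of the three Graph API calls made by the HTTP Request nodes,
// written as pure request builders so each step is easy to inspect.
const GRAPH = 'https://graph.facebook.com/v19.0';

// Step 1 - initialize an upload session (response carries video_id + upload_url)
function startRequest(pageId, token) {
  return {
    url: `${GRAPH}/${pageId}/video_stories`,
    method: 'POST',
    body: { upload_phase: 'start', access_token: token },
  };
}

// Step 2 - upload the binary video data to the session's upload_url
function uploadRequest(uploadUrl, token, fileSizeBytes) {
  return {
    url: uploadUrl,
    method: 'POST',
    headers: {
      Authorization: `OAuth ${token}`,
      offset: '0',
      file_size: String(fileSizeBytes), // the "total size in bytes" computed earlier
    },
  };
}

// Step 3 - finish the session, which publishes the video as a Story
function finishRequest(pageId, token, videoId) {
  return {
    url: `${GRAPH}/${pageId}/video_stories`,
    method: 'POST',
    body: { upload_phase: 'finish', video_id: videoId, access_token: token },
  };
}

const step1 = startRequest('115432036514099', '<PAGE_ACCESS_TOKEN>');
```

Keeping the three calls separate mirrors the workflow's node layout: the session start, the binary upload, and the publish step each get their own HTTP Request node.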
by Mirza Ajmal
## Description

This powerful workflow automates the evaluation of new digital tools, websites, or platforms to assess their potential impact on your business. By combining Telegram for user input, Apify for deep content extraction, AI for contextual analysis, and Google Sheets for personalized data integration and record-keeping, it delivers clear, actionable verdicts that help you decide whether a tool is worth adopting or exploring further.

## Key features and workflow

- **User-friendly input**: Submit URLs of tools or websites directly through Telegram for quick and easy evaluation requests.
- **Dynamic content extraction**: The workflow retrieves detailed content from the submitted URLs using the Apify web crawler, capturing rich data for analysis.
- **AI-powered cleaning & analysis**: AI models filter out noise, distill meaningful insights, and contextualize findings based on your business profile and goals stored in Google Sheets.
- **Personalized business context**: Integration with Google Sheets brings in your company's specialization, current focus, and strategic objectives to tailor the analysis to your needs.
- **Structured analysis output**: Receive a thorough, structured report including a concise summary, key considerations, business impact, benefits, risks, actionable insights, and an easy-to-understand final verdict on the tool's relevance.
- **Decision support**: The tool estimates effort, time to value, urgency, and confidence levels, enabling informed prioritization and strategic decision-making.
- **Seamless communication**: Results are sent back via Telegram, so you get timely, direct feedback without leaving your messaging app.
- **Record keeping & tracking**: All analyses and decisions are logged automatically into Google Sheets, creating a searchable knowledge base for ongoing reference and reporting.

## Setup instructions for key nodes

- **Telegram Trigger node**: Configure your Telegram bot API credentials here. Link the bot to your Telegram account to receive messages for URL submissions.
- **URL Extraction node**: No credentials needed. This node extracts URLs from incoming messages for processing.
- **Apify Web Crawler node**: Sign up for an Apify account if you don't have one, and get your API token from your profile's API tokens section. Paste this token into the Apify node's API Key field in n8n.
- **AI cleaning and analysis nodes**: Configure OpenRouter (or a compatible AI service) API keys for content processing. Customize the prompts or models if desired to align the analysis style.
- **Google Sheets nodes**: Connect using your Google account and provide access to the specified Google Sheet. Ensure sheets for Company Details and Analysis Results exist with the columns this workflow expects.
- **Telegram Reply node**: Use the Telegram bot API credentials to send analysis summaries and verdicts back to users.

## Access and edit the Google Sheet

You can access the Google Sheet used by this workflow here: Access the google sheet here

Please make a copy of the sheet to your own Google Drive before connecting it to this workflow. This lets you customize the sheets, update company information, and manage analysis results securely without affecting the original template.

## Extendibility

Beyond manual URL submissions, you can enhance this workflow by scheduling automated daily checks of new product launches from platforms like Product Hunt. The system can proactively analyze emerging tools and deliver timely updates via Telegram, email, or other channels, helping you stay ahead of innovation effortlessly.
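The URL Extraction step boils down to pulling the first URL out of the incoming Telegram message. A minimal sketch (the `$json.message.text` path matches n8n's Telegram Trigger output):

```javascript
// Sketch: extract the first URL from a Telegram message text.
// In an n8n Code node, messageText would be $json.message.text.
function extractUrl(messageText) {
  const match = messageText.match(/https?:\/\/[^\s]+/);
  return match ? match[0] : null;
}

const url = extractUrl('Please evaluate https://example.com/tool for me');
```

Returning `null` when no URL is present lets a downstream If node stop the run with a friendly error instead of sending an empty URL to Apify.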
by Haruki Kuwai
## 🧠 About this workflow

This workflow automatically generates personalized B2B outreach email messages by combining AI-based company research and text generation. It's designed to help sales and marketing professionals automate the creation of tailored cold emails for prospects.

## ⚙️ How it works

1. **Get rows from Google Sheets** - Retrieves companies marked as "ready" for outreach.
2. **Loop Over Items** - Processes each company individually.
3. **Company Research (LangChain Agent)** - Uses the Tavily search tool to collect key company insights such as overview, offerings, and recent news.
4. **Generate Outreach Message (LLM Chain)** - Drafts a professional, concise, and fully personalized email body in English using the AI training context from YOUR_COMPANY_NAME. This example uses an AI training and automation service context, but you can easily modify the prompt to fit your own company's products, services, or industry.
5. **Add to Google Sheets** - Writes the generated messages back into the sheet.
6. **(Optional) Add to Instantly.ai** - Sends the finalized lead data to your Instantly campaign for cold email distribution.

## 👥 Use cases

- 💼 **Sales & CRM**: Research prospects and draft personalized cold emails at scale
- 🏢 **Agencies**: Generate tailored outreach for multiple client campaigns from a single sheet
- 📧 **Marketing teams**: Keep lead lists enriched with research notes and ready-to-send copy
- 📚 **Founders**: Run lightweight outbound without a dedicated sales team

## 🧩 Troubleshooting

If the workflow does not generate emails or data fails to appear in Google Sheets, check the following:

- **Google Sheets credentials** - Ensure the connected account has edit permissions and that the document ID and sheet name are correctly set.
- **API keys** - Verify that your OpenRouter and Tavily API credentials are valid and not expired.
- **Rate limits** - Tavily and OpenRouter may throttle requests when processing many records. Try lowering the batch size in the "Limit" node.
- **Empty company background** - If the "Company Research" node returns no output, make sure the input company name is correct and includes sufficient context (e.g., the full company name, not an abbreviation).
- **LLM output format** - Ensure the "Generate Outreach Message" node is set to return plain text, not JSON or markdown.
- **Instantly.ai integration (optional)** - If leads are not added, confirm that your API key and campaign ID are valid and that the node is not disabled.

If the issue persists, enable "Always Output Data" in key nodes (such as Company Research and Generate Outreach Message) to debug intermediate results. You can also use the Execution Log to inspect where the flow stops or returns empty output.

## ⚠️ Disclaimer

This workflow uses AI language models and third-party APIs (OpenRouter, Tavily). Add your own API credentials securely and verify all AI-generated content before sending emails.
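For the rate-limit issue noted in the troubleshooting list, the usual alternative to shrinking the batch size is retrying with backoff. n8n's HTTP Request node has built-in retry settings for this; the sketch below shows the equivalent logic if you wanted it inside a Code node:

```javascript
// Sketch: retry a throttled API call with exponential backoff.
async function withRetry(fn, attempts = 3, baseDelayMs = 1000) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      await new Promise((res) => setTimeout(res, baseDelayMs * 2 ** i));
    }
  }
  throw lastError; // all attempts exhausted
}
```

Wrapping each Tavily or OpenRouter call in `withRetry` smooths over transient 429 responses without you having to lower the batch size as far.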
by GrowSpire Agency
## Who this is for

B2B companies, including:

- Founders
- Marketing and sales professionals
- Recruiters involved in people search and B2B outreach

With this workflow:

- No more manual list building
- No time spent researching what each company does
- No manual CRM work - all found data is saved to a spreadsheet automatically

## What it does

This workflow helps you quickly build a list of prospects for outreach using the LeadIQ provider. It collects:

- Full name
- LinkedIn profile
- Company website and description
- Emails (when available in the LeadIQ database)

You can start contacting people via LinkedIn manually right away.

You simply provide a natural language prompt, for example:

> "Founder at a software engineering firm, 11-50 employees, based in New York, using AI technologies."

The embedded AI agent transforms your input into a GraphQL query, which is then used to pull leads from the database.

📹 Video walkthrough: Click Here

## Benefits

- LeadIQ is an affordable database, with a cost per lead of approximately $0.03-$0.05 USD, depending on your plan and volume
- No credit card or paid plan is required to start using the LeadIQ API - just sign up and access the API
- The API includes 50 free credits, which is enough to test the workflow
- The workflow enriches company details from the open web (company description, HQ address)
- No need to manually configure filters - use a simple natural language prompt
- All data is saved automatically to Airtable CRM (using their standard CRM template from the template library)

⚠️ **Important**: This workflow is not ideal if email addresses are the only data you need, as LeadIQ does not always provide emails. It works best when you need:

- A curated list of people based on specific criteria
- Their LinkedIn profiles
- Automated saving of leads to a database

You can later enrich email data using other paid databases by pulling records from Airtable.
How to customize the workflow Sign up for LeadIQ: https://leadiq.com Obtain the API string called “Secret Base64 API key” Add the API key to all HTTP nodes: Method: POST URL: https://api.leadiq.com/graphql Enable “Send Headers” and add: Authorization: Basic <your API string here> Content-Type: application/json Sign up for Airtable Find the template: Left panel → Templates & apps → Marketing → “Sales CRM” In Airtable, generate an API key: Builder Hub → Developers → Personal access token Add your Sales CRM database to the token scope Set the correct base and sheet in all Airtable nodes Use the Code node called “Manage number of leads” to control how many records are pulled from the database Default value: 1 (to save LeadIQ credits) To change it, edit: input.limit = 1; Replace 1 with the desired number of leads Launch the workflow using the “Open Chat” trigger node Enter a prompt containing the criteria below Prompt structure: 📌 Contact-level criteria (optional) Job titles**: “Founder” Roles**: “Entrepreneurship”, “Business Development”, “Information Technology”, “Legal”, “Accounting”, etc. Seniority**: Executive, VP, Director, Manager, Senior Individual Contributor, Other Location (city and country only)**: “New York, United States” 📌 Company-level criteria (optional) Employee count range**: “1–10”, “50–200”, or terms like “small startup”, “SMB”, “mid-market”, “enterprise” Industry**: “Business Consulting and Services”, “IT Services and IT Consulting”, etc. Technologies**: “AI”, “HubSpot” (may not always work if the database has limited overlap) Revenue range (in millions USD)**: “0–1M”, “1–10M”, etc. (availability may vary) The workflow includes two AI agents that map your natural language input to the closest existing database filters, so you can write prompts in your own words. Email enrichment note The lower part of the workflow (“Enrichment: Search Data & Email”) attempts to pull emails from the LeadIQ database for existing leads. 
Not every lead has an email available, so this step is optional and limited.

## Workflow updates

I will continue to add new functionality and improve this workflow, including:
- Additional enrichment sources
- New lead databases
- Email sending infrastructure

The latest version will always be available on my Patreon.
by Cheng Siong Chin
## How It Works

This workflow automates end-to-end concert ticket booking validation and fan experience management using two coordinated AI agents. It is designed for ticketing platforms, event operators, and venue operations teams that must prevent fraud, manage inventory accurately, and deliver seamless customer communication at scale. The workflow solves key challenges such as duplicate bookings, bot-driven purchases, payment failures, overbooking, and inconsistent refund handling.

When a booking request enters via webhook, inventory data is fetched and normalized. The Ticket Validation Agent analyzes structured booking data, payment authorization, pricing tier rules, seat allocation, resale restrictions, and fraud signals to produce a standardized risk classification. Based on the risk level, the workflow routes transactions for auto-approval, conditional handling, or escalation. The Fan Experience Orchestration Agent then manages confirmations, ticket system updates, waitlist activation, refund or compensation logic, and SLA-driven escalation to operations. All outcomes are merged and logged into an audit trail, ensuring compliance, transparency, and consistent service enforcement while minimizing manual review.

## Setup Steps

1. Add OpenAI/Nvidia API credentials in n8n.
2. Configure the ticketing system HTTP endpoint.
3. Connect Gmail for confirmations.
4. Connect Slack for escalation alerts.
5. Connect Google Sheets for audit logging.
6. Define risk thresholds and SLA rules.

## Prerequisites

n8n account, OpenAI API key, ticketing API access, Gmail, Slack, Google Sheets

## Use Cases

Concert ticket sales, festival booking systems, venue seat management, VIP allocation handling

## Customization

Adjust fraud thresholds, add SMS notifications, integrate CRM, extend loyalty logic

## Benefits

Reduces fraud, automates fan communication, enforces ticketing policies
by WeblineIndia
# WhatsApp AI Sales Agent using PDF Vector Store

This workflow turns your WhatsApp number into an intelligent, AI-powered sales agent that answers product queries using real data extracted from a PDF brochure. It loads a product brochure via HTTP Request, converts it into embeddings using OpenAI, stores them in an in-memory vector store, and allows the AI Agent to provide factual answers to users via WhatsApp. Non-text messages are filtered out and only text queries are processed. This makes the workflow ideal for building a lightweight chatbot that understands your product documentation deeply.

## Quick Start: 5-Step Fast Implementation

1. Insert your WhatsApp credentials in the WhatsApp Trigger and WhatsApp Send nodes.
2. Add your OpenAI API key to all OpenAI-powered nodes.
3. Replace the PDF URL in the HTTP Request node with your own brochure.
4. Run the Manual Trigger once to build the vector store.
5. Activate the workflow and start chatting from WhatsApp.

## What It Does

This workflow converts a product brochure (PDF) into a searchable knowledge base using LangChain vector embeddings. Incoming WhatsApp messages are processed, and if the message is text, the AI Sales Agent uses OpenAI plus the vector store to produce accurate, brochure-based answers. The AI responds naturally to customer queries, supports conversation memory across the session, and retrieves information directly from the brochure when needed. Non-text messages are filtered out to maintain a clean conversational flow. The workflow is fully modular: you can replace the PDF, modify AI prompts, plug into CRM systems, or extend it into a broader sales automation pipeline.

## Who’s It For

This workflow is ideal for:
- Businesses wanting a WhatsApp-based AI customer assistant.
- Sales teams needing automated product query handling.
- Companies with large product catalog PDFs.
- Marketers wanting a zero-code product brochure chatbot.
- Technical teams experimenting with LangChain + OpenAI inside n8n.
## Requirements to Use This Workflow

To run this workflow successfully, you need:
- An n8n instance (cloud or self-hosted).
- A WhatsApp Business API connection.
- An OpenAI API key.
- A publicly accessible PDF brochure URL.
- Basic familiarity with n8n node configuration.
- Optional: a custom vector store backend (Qdrant, Pinecone) – the template uses in-memory storage.

## How It Works & How To Set Up

### 1. Import the Workflow JSON
Upload the workflow JSON provided.

### 2. Configure WhatsApp Trigger
- Open WhatsApp Trigger
- Add your WhatsApp credentials
- Set the webhook correctly to match your n8n endpoint

### 3. Configure WhatsApp Response Nodes
The workflow uses two WhatsApp send nodes:
- **Reply To User** → sends the AI response
- **Reply To User1** → sends the "unsupported message" reply

Add your WhatsApp credentials to both.

### 4. Replace the PDF Brochure
In get Product Brochure (HTTP Request), update the url parameter with your own PDF.

### 5. Run the PDF → Vector Store Setup (One-Time Only)
Use the Manual Trigger ("When clicking ‘Test workflow’") to:
- Download the PDF
- Extract text
- Split into chunks
- Generate embeddings
- Store them in the Product Catalogue vector store

> You must run this once after importing the workflow.

### 6. Set OpenAI Credentials
Add your OpenAI API key to the following nodes:
- OpenAI Chat Model
- OpenAI Chat Model1
- Embeddings OpenAI
- Embeddings OpenAI1

### 7. Review the AI Agent Prompt
Inside AI Sales Agent, you can edit the system message to match:
- Your brand
- Your product types
- Your tone of voice

### 8. Activate the Workflow
Once activated, WhatsApp users can chat with your AI Sales Agent.

## How to Customize Nodes

Here are common customization options:

### Customize the PDF / Knowledge Base
Change the URL in get Product Brochure, or upload your own file via other nodes.
### Customize AI Behavior
Edit the systemMessage inside AI Sales Agent:
- Change personality
- Set product rules
- Restrict/expand scope

### Change Supported Message Types
Modify the Handle Message Types switch logic to allow:
- Image → OCR
- Audio → Whisper
- Documents → additional processing

### Modify WhatsApp Message Templates
Edit the textBody of the response nodes.

### Extend or Replace the Vector Store
Swap vectorStoreInMemory with:
- Qdrant
- Pinecone
- Redis vector store

by updating the vector store node.

## Add-Ons (Optional Enhancements)

You can extend this workflow with:
1. **Multi-language support**: add OpenAI translation nodes before agent input.
2. **CRM integration**: send user queries and chat logs into HubSpot, Salesforce, or Zoho CRM.
3. **Product recommendation engine**: use embedding similarity to suggest products.
4. **Order placement workflow**: connect to Stripe or Shopify APIs.
5. **Analytics dashboard**: log chats into Airtable / Postgres for analysis.

## Use Case Examples

Here are some practical uses:
- **Product inquiry chatbot**: customers ask about specs, pricing, or compatibility.
- **Digital catalog assistant**: converts PDF brochures into interactive WhatsApp search.
- **Sales support bot**: reduces load on human sales reps by handling common questions.
- **Internal knowledge bot**: teams access manuals, training documents, or service guides.
- **Event/product launch assistant**: provides instant details about newly launched items.

And many more similar use cases where an AI-powered WhatsApp assistant is valuable.
## Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| WhatsApp messages not triggering workflow | Wrong webhook URL or inactive workflow | Ensure webhook is correct & activate workflow |
| AI replies are empty | Missing OpenAI credentials | Add OpenAI API key to all AI nodes |
| Vector store not populated | Manual trigger not executed | Run the Test Workflow trigger once |
| PDF extraction returns blank text | PDF is image-based | Use OCR before text splitting |
| “Unsupported message type” always triggers | Message type filter misconfigured | Check conditions in Handle Message Types |
| AI not using brochure data | VectorStore tool not linked properly | Check connections between Embeddings → VectorStore → AI Agent |

## Need Help with Support & Extensions?

If you need help setting up, customizing, or extending this workflow, feel free to reach out to our n8n automation developers at WeblineIndia. We can help with:
- Custom WhatsApp automation workflows
- AI-powered product catalog systems
- Integrating CRM, ERP, or eCommerce platforms
- Building advanced LangChain-powered n8n automations
- Deploying scalable vector stores (Qdrant/Pinecone)

And so much more.
by Joe V
# 🔄 AI Video Polling Engine - Long-Running Job Handler for Veo, Sora & Seedance

The async backbone that makes AI video generation production-ready ⚡🎬

## 🎥 See It In Action

🔗 Full Demo: youtu.be/OI_oJ_2F1O0

## ⚠️ Must Read First

This is a companion workflow for the main AI Shorts Generator:
🔗 Main Workflow: AI Shorts Reactor

This workflow handles the "waiting game" so your main bot stays fast and responsive. Think of it as the backstage crew that handles the heavy lifting while your main workflow performs on stage.

## 🤔 The Problem This Solves

**Without this workflow:**

```
User sends message
  ↓
Bot calls AI API
  ↓
⏳ Bot waits 2-5 minutes... (BLOCKED)
  ↓
❌ Timeout errors
❌ Execution limits exceeded
❌ Users think bot is broken
❌ Can't handle multiple requests
```

**With this workflow:**

```
User sends message
  ↓
Bot calls AI API
  ↓
✅ Bot responds instantly: "Video generating..."
  ↓
🔄 This webhook polls in background
  ↓
⚡ Main bot handles other users
  ↓
✅ Video ready → Auto-sends to user
```

Result: your bot feels instant, scales infinitely, and never times out 🚀

## 🔁 What This Workflow Does

This is a dedicated polling webhook that acts as the async job handler for AI video generation. It's the invisible worker that:

1️⃣ **Receives the job**

```json
POST /webhook/poll-video
{
  "sessionId": "user_123",
  "taskId": "veo_abc456",
  "model": "veo3",
  "attempt": 1
}
```

2️⃣ **Responds instantly**

`200 OK - "Polling started"` (the main workflow never waits!)

3️⃣ **Polls in the background**

Wait 60s → Check status → Repeat
- ⏱️ Waits 1 minute between checks (API-friendly)
- 🔄 Polls up to 15 times (~15 minutes max)
- 🎯 Supports the Veo, Sora, and Seedance APIs

4️⃣ **Detects completion**

Handles multiple API response formats:

```javascript
// Veo format
{ status: "completed", videoUrl: "https://..." }

// Market format (Sora/Seedance)
{ job: { status: "success", result: { url: "..." } } }

// Legacy format
{ data: { video_url: "..." } }
```

(No matter how the API responds, this workflow figures it out.)

5️⃣ **Delivers the video**

Once ready:
- 📥 Downloads the video from the AI provider
- ☁️ Uploads it to your S3 storage
- 💾 Restores the user session from Redis
- 📱 Sends a Telegram preview with buttons
- 🔄 Enables video extension (Veo only)
- 📊 Logs metadata for analytics

## ⚙️ Technical Architecture

The flow:

```
Main Workflow                          Polling Webhook
─────────────                          ───────────────
[Trigger AI Job] ────────────────────▶ "Task ID: abc123"
[Return instantly: "Generating..."]
[Handle new user]                      [Wait 60s]
                                       [Check status] → "Processing..."
                                       [Wait 60s]
                                       [Check status] → "Completed!"
                                       [Download video]
                                       [Upload to S3]
                                       [Send to user: "Your video is ready!"]
```

## 🚀 Key Features

### ⚡ Non-Blocking Architecture
- Main workflow never waits
- Handle unlimited concurrent jobs
- Each user gets instant responses

### 🔄 Intelligent Polling
- Respects API rate limits (60s intervals)
- Auto-retries on transient failures
- Graceful timeout handling (15 attempts max)

### 🎯 Multi-Provider Support
Handles different API formats:
- **Veo**: record-info endpoint
- **Sora**: Market job status
- **Seedance**: Market job status

### 🛡️ Robust Error Handling
- ✅ Missing video URL → retry with fallback parsers
- ✅ API timeout → continue polling
- ✅ Invalid response → parse alternative formats
- ✅ Max attempts reached → notify user gracefully

### 💾 Session Management
- Stores state in Redis
- Restores full context when the video is ready
- Supports video extension workflows
- Maintains user preferences

### 📊 Production Features
- Detailed logging at each step
- Metadata tracking (generation time, model used, etc.)
- S3 storage integration
- Telegram notifications
- Analytics-ready data structure

## 🧩 Integration Points

Works seamlessly with:

| Use Case | How It Helps |
|----------|--------------|
| 🤖 Telegram Bots | Keeps bot responsive during 2-5 min video generation |
| 📺 YouTube Automation | Polls video, then triggers auto-publish |
| 🎬 Multi-Video Pipelines | Handles 10+ videos simultaneously |
| 🏢 Content Agencies | Production-grade reliability for clients |
| 🧪 A/B Testing | Generate multiple variations without blocking |

Required components:
- ✅ Main workflow that triggers video generation
- ✅ Redis for session storage
- ✅ S3-compatible storage for videos
- ✅ KIE.ai API credentials
- ✅ Telegram bot (for notifications)

## 📋 How to Use

**Step 1: Set up the main workflow.** Import and configure the AI Shorts Reactor.

**Step 2: Import this webhook.** Add this workflow to your n8n instance.

**Step 3: Configure credentials:**
- KIE.ai API key
- Redis connection
- S3 storage credentials
- Telegram bot token

**Step 4: Link the workflows.** In your main workflow, call this webhook:

```javascript
// After triggering AI video generation
const response = await httpRequest({
  method: 'POST',
  url: 'YOUR_WEBHOOK_URL/poll-video',
  body: {
    sessionId: sessionId,
    taskId: taskId,
    model: 'veo3',
    attempt: 1
  }
});
```

**Step 5: Activate & test.**
- Activate this polling webhook
- Trigger a video generation from the main workflow
- Watch it poll in the background and deliver results

## 🎯 Real-World Example

Scenario: users generate three videos simultaneously.

**Without this workflow:**

```
User A: "Generate video" → Bot: ⏳ Processing... (BLOCKED 5 min)
User B: "Generate video" → Bot: ❌ Timeout (main workflow still processing User A)
User C: "Generate video" → Bot: ❌ Never receives request
```

**With this workflow:**

```
User A: "Generate video" → Bot: ✅ "Generating! Check back in 3 min"
        → Polling webhook handles it in the background
User B: "Generate video" → Bot: ✅ "Generating! Check back in 3 min"
        → Second polling instance starts
User C: "Generate video" → Bot: ✅ "Generating! Check back in 3 min"
        → Third polling instance starts

--- 3 minutes later ---

User A: 📹 "Your video is ready!" [Preview] [Publish]
User B: 📹 "Your video is ready!" [Preview] [Publish]
User C: 📹 "Your video is ready!" [Preview] [Publish]
```

All three users served simultaneously with zero blocking! 🚀

## 🔧 Customization Options

### Adjust Polling Frequency

```javascript
// Default: 60 seconds
let waitTime = 60;

// For faster testing (uses credits faster):
// waitTime = 30;

// For a more API-friendly pace (slower updates):
// waitTime = 90;
```

### Change Timeout Limits

```javascript
// Default: 15 attempts (15 minutes)
const maxAttempts = 20; // increase for longer videos
```

### Add More Providers

Extend to support other AI video APIs:

```javascript
switch (model) {
  case 'veo3':
    // Existing Veo logic
    break;
  case 'runway':
    // Add Runway ML polling
    break;
  case 'pika':
    // Add Pika Labs polling
    break;
}
```

### Custom Notifications

Replace Telegram with:
- Discord webhooks
- Slack messages
- Email notifications
- SMS via Twilio
- Push notifications

## 📊 Monitoring & Analytics

What gets logged:

```json
{
  "sessionId": "user_123",
  "taskId": "veo_abc456",
  "model": "veo3",
  "status": "completed",
  "attempts": 7,
  "totalTime": "6m 32s",
  "videoUrl": "s3://bucket/videos/abc456.mp4",
  "metadata": {
    "duration": 5.2,
    "resolution": "1080x1920",
    "fileSize": "4.7MB"
  }
}
```

Track key metrics:
- ⏱️ Average generation time per model
- 🔄 Polling attempts before completion
- ❌ Failure rate by provider
- 💰 Cost per video (API usage)
- 📈 Concurrent job capacity

## 🚨 Troubleshooting

**"Video never completes"**
- ✅ Check KIE.ai API status
- ✅ Verify the task ID is valid
- ✅ Increase maxAttempts if needed
- ✅ Check that the API response format hasn't changed

**"Polling stops after 1 attempt"**
- ✅ Ensure the webhook URL is correct
- ✅ Check n8n execution limits
- ✅ Verify the Redis connection is stable

**"Video downloads but doesn't send"**
- ✅ Check Telegram bot credentials
- ✅ Verify the S3 upload succeeded
- ✅ Ensure the Redis session exists
- ✅ Check that the Telegram chat ID is valid

**"Multiple videos get mixed up"**
- ✅ Confirm sessionId is unique per user
- ✅ Check for Redis key collisions
- ✅ Verify taskId is properly passed

## 🏗️ Architecture Benefits

Why separate this logic?

| Aspect | Monolithic Workflow | Separated Webhook |
|--------|---------------------|-------------------|
| ⚡ Response Time | 2-5 minutes | <1 second |
| 🔄 Concurrency | 1 job at a time | Unlimited |
| 💰 Execution Costs | High (long-running) | Low (short bursts) |
| 🐛 Debugging | Hard (mixed concerns) | Easy (isolated logic) |
| 📈 Scalability | Poor | Excellent |
| 🔧 Maintenance | Complex | Simple |

## 🛠️ Requirements

Services needed:
- ✅ n8n instance (cloud or self-hosted)
- ✅ KIE.ai API (Veo, Sora, Seedance access)
- ✅ Redis (session storage)
- ✅ S3-compatible storage (videos)
- ✅ Telegram bot (optional, for notifications)

Skills required:
- Basic n8n knowledge
- Understanding of webhooks
- Redis basics (key-value storage)
- S3 upload concepts

Setup time: ~15 minutes
Technical level: Intermediate

## 🏷️ Tags

webhook, polling, async-jobs, long-running-tasks, ai-video, veo, sora, seedance, production-ready, redis, s3, telegram, youtube-automation, content-pipeline, scalability, microservices, n8n-webhook, job-queue, background-worker

## 💡 Best Practices

Do's:
- ✅ Keep the polling interval at 60s minimum (respect API limits)
- ✅ Always handle timeout scenarios
- ✅ Log generation metadata for analytics
- ✅ Use unique session IDs per user
- ✅ Clean up Redis after job completion

Don'ts:
- ❌ Don't poll faster than 30s (risk of API bans)
- ❌ Don't store videos in Redis (use S3)
- ❌ Don't skip error handling
- ❌ Don't use this for real-time updates (<10s)
- ❌ Don't forget to activate the webhook

## 🌟 Success Stories

After implementing this webhook:

| Metric | Before | After |
|--------|--------|-------|
| ⚡ Bot response time | 2-5 min | <1 sec |
| 🎬 Concurrent videos | 1 | 50+ |
| ❌ Timeout errors | 30% | 0% |
| 😊 User satisfaction | 6/10 | 9.5/10 |
| 💰 Execution costs | $50/mo | $12/mo |

## 🔗 Related Workflows

- 🎬 **Main: AI Shorts Reactor**: the full video generation bot
- 📤 **YouTube Auto-Publisher**: publish completed videos
- 🎨 **Video Style Presets**: custom prompt templates
- 📊 **Analytics Dashboard**: track all generations

## 📜 License

MIT License: free to use, modify, and distribute!

⚡ Make your AI video workflows production-ready. Let the webhook handle the waiting. ⚡

Created by Joe Venner | Built with ❤️ and n8n | Part of the AI Shorts Reactor ecosystem