by Mariela Slavenova
This template crawls a website from its sitemap, deduplicates URLs in Supabase, scrapes pages with Crawl4AI, cleans and validates the text, then stores content + metadata in a Supabase vector store using OpenAI embeddings. It's a reliable, repeatable pipeline for building searchable knowledge bases, SEO research corpora, and RAG datasets.

## Good to know

- Built-in de-duplication via a `scrape_queue` table (status: pending/completed/error).
- Resilient flow: waits, retries, and marks failed tasks.
- Costs depend on Crawl4AI usage and OpenAI embeddings.
- Replace any placeholders (API keys, tokens, URLs) before running.
- Respect website robots/ToS and applicable data laws when scraping.

## How it works

1. **Sitemap fetch & parse**: Load `sitemap.xml` and extract all URLs.
2. **De-dupe**: Normalize URLs, check the Supabase `scrape_queue`, and insert only new ones.
3. **Scrape**: Send URLs to Crawl4AI and poll task status until completed.
4. **Clean & score**: Remove boilerplate/markup, detect content type, compute quality metrics, and extract metadata (title, domain, language, length).
5. **Chunk & embed**: Split the text and create OpenAI embeddings.
6. **Store**: Upsert into the Supabase vector store (`documents`) with metadata and update the job status.

## Requirements

- Supabase (Postgres with the vector extension enabled)
- Crawl4AI API key (or header auth)
- OpenAI API key (for embeddings)
- n8n credentials set up for HTTP and Postgres/Supabase

## How to use

1. Configure credentials (Supabase/Postgres, Crawl4AI, OpenAI).
2. (Optional) Run the provided SQL to create `scrape_queue` and `documents`.
3. Set your sitemap URL in the HTTP Request node.
4. Execute the workflow (manual trigger) and monitor the Supabase statuses.
5. Query your `documents` table or vector store from your app/RAG stack.
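The de-dupe step above hinges on consistent URL normalization, so that trivially different forms of the same page map to one key before the `scrape_queue` lookup. A minimal JavaScript sketch, with function names (`normalizeUrl`, `newUrlsOnly`) and the set of stripped tracking parameters chosen for illustration rather than taken from the template:

```javascript
// Normalize a URL so that case, trailing slashes, fragments, and common
// tracking parameters do not produce duplicate queue entries.
function normalizeUrl(raw) {
  const url = new URL(raw);
  url.hostname = url.hostname.toLowerCase();
  url.hash = "";
  // Assumption: utm_* and fbclid never affect page content.
  for (const param of [...url.searchParams.keys()]) {
    if (param.startsWith("utm_") || param === "fbclid") url.searchParams.delete(param);
  }
  let out = url.toString();
  if (out.endsWith("/")) out = out.slice(0, -1);
  return out;
}

// Keep only sitemap URLs not already present in the queue.
function newUrlsOnly(sitemapUrls, queuedUrls) {
  const seen = new Set(queuedUrls.map(normalizeUrl));
  return sitemapUrls.map(normalizeUrl).filter((u) => !seen.has(u));
}
```

In the workflow this filtering would happen in a Code node between the sitemap parse and the Supabase insert, so only genuinely new pages are sent to Crawl4AI.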
## Potential use cases

This automation is ideal for:

- Market research teams collecting competitive data
- Content creators monitoring web trends
- SEO specialists tracking website content updates
- Analysts gathering structured data for insights
- Anyone needing reliable, structured web content for analysis

Need help customizing? Contact me for consulting and support: LinkedIn
by Growth AI
# SEO Content Generation Workflow - n8n Template Instructions

## Who's it for

This workflow is designed for SEO professionals, content marketers, digital agencies, and businesses that need to generate optimized meta tags, H1 headings, and content briefs at scale. It is perfect for teams managing multiple clients or large keyword lists who want to automate competitor analysis and SEO content creation while maintaining quality and personalization.

## How it works

The workflow automates the entire SEO content creation process by analyzing your target keywords against top competitors, then generating optimized meta elements and comprehensive content briefs. It uses AI-powered analysis combined with real competitor data to create SEO-friendly content tailored to your specific business context. The system processes keywords in batches, performs Google searches, scrapes competitor content, analyzes heading structures, and generates personalized SEO content using your company's database information for maximum relevance.
## Requirements

### Required services and credentials

- **Google Sheets API**: For reading configuration and updating results
- **Anthropic API**: For AI content generation (Claude Sonnet 4)
- **OpenAI API**: For embeddings and vector search
- **Apify API**: For Google search results
- **Firecrawl API**: For competitor website scraping
- **Supabase**: For the vector database (optional but recommended)

### Template spreadsheet

Copy this template spreadsheet and configure it with your information: Template Link

## How to set up

### Step 1: Copy and configure the template

1. Make a copy of the template spreadsheet.
2. Fill in the Client Information sheet:
   - **Client name**: Your company or client's name
   - **Client information**: Brief business description
   - **URL**: Website address
   - **Supabase database**: Database name (prevents AI hallucination)
   - **Tone of voice**: Content style preferences
   - **Restrictive instructions**: Topics or approaches to avoid
3. Complete the SEO sheet with your target pages:
   - **Page**: Page you're optimizing (e.g., "Homepage", "Product Page")
   - **Keyword**: Main search term to target
   - **Awareness level**: User familiarity with your business
   - **Page type**: Category (homepage, blog, product page, etc.)
### Step 2: Import the workflow

1. Import the n8n workflow JSON file.
2. Configure all required API credentials in n8n:
   - Google Sheets OAuth2
   - Anthropic API key
   - OpenAI API key
   - Apify API key
   - Firecrawl API key
   - Supabase credentials (if using the vector database)

### Step 3: Test the configuration

1. Activate the workflow.
2. Send your Google Sheets URL to the chat trigger.
3. Verify that all sheets are readable and the credentials work.
4. Test with a single keyword row first.

## Workflow process overview

**Phase 0: Setup and configuration.** Copy the template spreadsheet, configure the client information and SEO parameters, and set up the API credentials in n8n.

**Phase 1: Data input and processing.** The chat trigger receives the Google Sheets URL; the system reads the client configuration and SEO data, filters valid keywords with empty H1 fields, and initiates batch processing.

**Phase 2: Competitor research and analysis.** Searches Google for the top 10 results per keyword, scrapes the first 5 competitor websites, extracts heading structures (H1-H6), and analyzes competitor meta tags and content organization.

**Phase 3: Meta tags and H1 generation.** The AI analyzes keyword context and competitor data, accesses the client database for personalization, generates an optimized meta title (65 chars max), creates a compelling meta description (165 chars max), and produces a user-focused H1 (70 chars max).

**Phase 4: Content brief creation.** Analyzes search intent percentages, develops a content strategy based on the competitor analysis, creates a detailed MECE page structure, suggests rich media elements, and provides writing recommendations and detail-level scoring.

**Phase 5: Data integration and updates.** Combines all generated content into a unified structure, updates Google Sheets with the new SEO elements, preserves existing data while adding new content, and continues batch processing for the remaining keywords.

## How to customize the workflow

- **Adjusting AI models**: Replace Anthropic Claude with other LLM providers, modify the system prompts for different content styles, and adjust the character limits for meta elements.
- **Modifying competitor analysis**: Change the number of competitors analyzed (currently 5), adjust the scraping parameters in the Firecrawl nodes, and modify the heading extraction logic in the JavaScript nodes.
- **Customizing output format**: Update the Google Sheets column mapping in the Code node, modify the structured output parser schema, and change the batch size in the Split in Batches node.
- **Adding quality controls**: Insert validation nodes between phases, add error handling and retry logic, and implement content quality scoring.
- **Extending functionality**: Add keyword research capabilities, include image optimization suggestions, integrate social media content generation, and connect to CMS platforms for direct publishing.

## Best practices

- Test with small batches before processing large keyword lists.
- Monitor API usage and costs across all services.
- Regularly update the system prompts based on output quality.
- Maintain clean data in your Google Sheets template.
- Use descriptive node names for easier workflow maintenance.

## Troubleshooting

- **API errors**: Check credential configuration and usage limits.
- **Scraping failures**: The Firecrawl nodes have error handling enabled.
- **Empty results**: Verify keyword formatting and competitor availability.
- **Sheet updates**: Ensure proper column mapping in the final Code node.
- **Processing stops**: Check the batch processing limits and timeout settings.
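Phase 2's heading extraction can be pictured as a small Code-node-style function over the markdown that Firecrawl returns. This is an illustrative sketch, not the template's actual JavaScript node:

```javascript
// Pull H1-H6 headings out of a markdown page so competitor content
// structure can be compared across keywords.
function extractHeadings(markdown) {
  const headings = [];
  for (const line of markdown.split("\n")) {
    // ATX-style heading: 1-6 leading '#' characters, then the text.
    const match = line.match(/^(#{1,6})\s+(.*\S)\s*$/);
    if (match) headings.push({ level: match[1].length, text: match[2] });
  }
  return headings;
}
```

The resulting `{level, text}` list is what the AI agent would receive as the competitor's content outline.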
by Growth AI
# SEO Content Generation Workflow (Basic Version) - n8n Template Instructions

## Who's it for

This workflow is designed for SEO professionals, content marketers, digital agencies, and businesses that need to generate optimized meta tags, H1 headings, and content briefs at scale. It is perfect for teams managing multiple clients or large keyword lists who want to automate competitor analysis and SEO content creation without the complexity of vector databases.

## How it works

The workflow automates the entire SEO content creation process by analyzing your target keywords against top competitors, then generating optimized meta elements and comprehensive content briefs. It uses AI-powered analysis combined with real competitor data to create SEO-friendly content tailored to your specific business context. The system processes keywords in batches, performs Google searches, scrapes competitor content, analyzes heading structures, and generates personalized SEO content using your company information for maximum relevance.

## Requirements

### Required services and credentials

- **Google Sheets API**: For reading configuration and updating results
- **Anthropic API**: For AI content generation (Claude Sonnet 4)
- **Apify API**: For Google search results
- **Firecrawl API**: For competitor website scraping

### Template spreadsheet

Copy this template spreadsheet and configure it with your information: Template Link

## How to set up

### Step 1: Copy and configure the template

1. Make a copy of the template spreadsheet.
2. Fill in the Client Information sheet:
   - **Client name**: Your company or client's name
   - **Client information**: Brief business description
   - **URL**: Website address
   - **Tone of voice**: Content style preferences
   - **Restrictive instructions**: Topics or approaches to avoid
3. Complete the SEO sheet with your target pages:
   - **Page**: Page you're optimizing (e.g., "Homepage", "Product Page")
   - **Keyword**: Main search term to target
   - **Awareness level**: User familiarity with your business
   - **Page type**: Category (homepage, blog, product page, etc.)
### Step 2: Import the workflow

1. Import the n8n workflow JSON file.
2. Configure all required API credentials in n8n:
   - Google Sheets OAuth2
   - Anthropic API key
   - Apify API key
   - Firecrawl API key

### Step 3: Test the configuration

1. Activate the workflow.
2. Send your Google Sheets URL to the chat trigger.
3. Verify that all sheets are readable and the credentials work.
4. Test with a single keyword row first.

## Workflow process overview

**Phase 0: Setup and configuration.** Copy the template spreadsheet, configure the client information and SEO parameters, and set up the API credentials in n8n.

**Phase 1: Data input and processing.** The chat trigger receives the Google Sheets URL; the system reads the client configuration and SEO data, filters valid keywords with empty H1 fields, and initiates batch processing.

**Phase 2: Competitor research and analysis.** Searches Google for the top 10 results per keyword using Apify, scrapes the first 5 competitor websites using Firecrawl, extracts heading structures (H1-H6) from competitor pages, analyzes competitor meta tags and content organization, and processes the markdown content to identify heading hierarchies.

**Phase 3: Meta tags and H1 generation.** The AI analyzes keyword context and competitor data using Claude, incorporates client information for personalization, generates an optimized meta title (65 characters maximum), creates a compelling meta description (165 characters maximum), produces a user-focused H1 (70 characters maximum), and uses structured output parsing for consistent formatting.

**Phase 4: Content brief creation.** Analyzes search intent percentages (informational, transactional, navigational), develops a content strategy based on the competitor analysis, creates a detailed MECE page structure with H2 and H3 sections, suggests rich media elements (images, videos, infographics, tables), provides writing recommendations and detail-level scoring (1-10 scale), and ensures SEO optimization while maintaining user relevance.

**Phase 5: Data integration and updates.** Combines all generated content into a unified structure, updates Google Sheets with the new SEO elements, preserves existing data while adding new content, and continues batch processing for the remaining keywords.

## Key differences from the advanced version

This basic version focuses on core SEO functionality without additional complexity:

- **No vector database**: Removes the Supabase integration for a simpler setup
- **Streamlined architecture**: Fewer dependencies and configuration steps
- **Essential features only**: Core competitor analysis and content generation
- **Faster setup**: Reduced time to deployment
- **Lower costs**: Fewer API services required

## How to customize the workflow

- **Adjusting AI models**: Replace Anthropic Claude with other LLM providers in the agent nodes, modify the system prompts for different content styles or languages, and adjust the character limits for meta elements in the structured output parser.
- **Modifying competitor analysis**: Change the number of competitors analyzed (currently 5) by adding or removing Scrape nodes, adjust the scraping parameters in the Firecrawl nodes for different content types, and modify the heading extraction logic in the JavaScript Code nodes.
- **Customizing output format**: Update the Google Sheets column mapping in the final Code node, modify the structured output parser schema for different data structures, and change the batch size in the Split in Batches node.
- **Adding quality controls**: Insert validation nodes between workflow phases, add error handling and retry logic to critical nodes, and implement content quality scoring mechanisms.
- **Extending functionality**: Add keyword research capabilities with additional APIs, include image optimization suggestions, integrate social media content generation, and connect to CMS platforms for direct publishing.

## Best practices

- **Setup and testing**: Always test with small batches before processing large keyword lists, monitor API usage and costs across all services, regularly update the system prompts based on output quality, and maintain clean data in your Google Sheets template.
- **Content quality**: Review generated content before publishing, customize the system prompts to match your brand voice, use descriptive node names for easier workflow maintenance, and keep the competitor analysis current by running it regularly.
- **Performance optimization**: Process keywords in small batches to avoid timeouts, set appropriate retry policies for external API calls, and monitor workflow execution times to optimize bottlenecks.

## Troubleshooting

- **API errors**: Check the credential configuration in n8n settings, verify API usage limits and billing status, and ensure proper authentication for each service.
- **Scraping failures**: The Firecrawl nodes have error handling enabled to continue on failures; some websites may block scraping, which is normal. Check that the competitor URLs are accessible and valid.
- **Empty results**: Verify the keyword formatting in Google Sheets, ensure the competitor websites contain the expected content structure, and check that meta tags are properly formatted in the system prompts.
- **Sheet update errors**: Ensure proper column mapping in the final Code node, verify the Google Sheets permissions and sharing settings, and check that the target sheet names match exactly.
- **Processing stops**: Review the batch processing limits and timeout settings, check for errors in individual nodes using the execution logs, and verify that all required fields are populated in the input data.

## Template structure

### Required sheets

- **Client Information**: Business details and configuration
- **SEO**: Target keywords and page information
- **Results sheet**: Where the generated content will be written

### Expected columns

- **Keywords**: Target search terms
- **Description**: Brief page description
- **Type de page**: Page category
- **Awareness level**: User familiarity level
- **title, meta-desc, h1, brief**: Generated output columns

This streamlined version provides all the essential SEO content generation capabilities while being easier to set up and maintain than the advanced version with vector database integration.
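The character limits the workflow enforces (meta title 65, meta description 165, H1 70) are easy to double-check with a small validation step, one of the "quality controls" the customization section suggests inserting. The field names and `checkMetaLengths` helper below are illustrative assumptions, not part of the template:

```javascript
// Post-generation length check for the SEO elements.
const LIMITS = { title: 65, metaDescription: 165, h1: 70 };

function checkMetaLengths(generated) {
  const violations = [];
  for (const [field, max] of Object.entries(LIMITS)) {
    const value = generated[field] ?? "";
    if (value.length > max) violations.push({ field, length: value.length, max });
  }
  return violations; // an empty array means the output fits all limits
}
```

A Code node running this after the AI agent could route any violating row back for regeneration instead of writing it to the sheet.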
by Avkash Kakdiya
## How it works

This workflow enriches and personalizes your lead profiles by integrating HubSpot contact data, scraping social media information, and using AI to generate tailored outreach emails. It streamlines the process from contact capture to sending a personalized email, all automatically. The system fetches new or updated HubSpot contacts, verifies and enriches their Twitter/LinkedIn data via Phantombuster, merges the profile and engagement insights, and finally generates a customized email ready for outreach.

## Step-by-step

1. **Trigger & input**
   - HubSpot Contact Webhook: Fires when a contact is created or updated in HubSpot.
   - Fetch Contact: Pulls the full contact details (email, name, company, and social profiles).
   - Update Google Sheet: Logs the Twitter/LinkedIn usernames and marks their tracking status.
2. **Validation**
   - Validate Twitter/LinkedIn Exists: Checks that the contact has a valid social profile before proceeding to scraping.
3. **Social media scraping (via Phantombuster)**
   - Launch Profile Scraper & 🎯 Launch Tweet Scraper: Trigger Phantombuster agents to fetch profile details and recent tweets.
   - Wait Nodes: Ensure scraping completes (30-60 seconds).
   - Fetch Profile/Tweet Results: Retrieves the output files from Phantombuster.
   - Extract URL: Parses the job output to extract the downloadable .json or .csv data file link.
4. **Data download & parsing**
   - Download Profile/Tweet Data: Downloads the scraped JSON files.
   - Parse JSON: Converts the raw file into structured data for processing.
5. **Data structuring & merging**
   - Format Profile Fields: Maps stats like bio, followers, verified status, likes, etc.
   - Format Tweet Fields: Captures the tweet data and associates it with the lead's email.
   - Merge Data Streams: Combines the tweet and profile datasets.
   - Combine All Data: Produces a single, clean object containing all relevant lead details.
6. **AI email generation & delivery**
   - Generate Personalized Email: Feeds the merged data into OpenAI GPT (via LangChain) to craft a custom HTML email using your brand details.
   - Parse Email Content: Cleans the AI output into structured subject and body fields.
   - Sends Email: Automatically delivers the personalized email to the lead via Gmail.

## Benefits

- **Automated lead enrichment**: Combines CRM and real-time social media data with zero manual research.
- **Personalized outreach at scale**: AI crafts unique, relevant emails for each contact.
- **Improved engagement rates**: Targeted messages based on actual social activity and profile details.
- **Seamless integration**: Works directly with HubSpot, Google Sheets, Gmail, and Phantombuster.
- **Time & effort savings**: Replaces hours of manual lookup and email drafting with an end-to-end automated flow.
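The "Merge Data Streams" and "Combine All Data" steps amount to a join on the lead's email address. A simplified sketch (the field names and the `combineLeadData` helper are assumptions for illustration, not the template's exact schema):

```javascript
// Join Phantombuster profile stats and tweet records on email, producing
// one clean object per lead for the AI email generator.
function combineLeadData(profiles, tweets) {
  const tweetsByEmail = new Map();
  for (const t of tweets) {
    if (!tweetsByEmail.has(t.email)) tweetsByEmail.set(t.email, []);
    tweetsByEmail.get(t.email).push(t.text);
  }
  return profiles.map((p) => ({
    email: p.email,
    bio: p.bio,
    followers: p.followers,
    recentTweets: tweetsByEmail.get(p.email) || [],
  }));
}
```

Keying on email keeps the merge stable even when the two Phantombuster jobs return results in different orders.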
by Zain Khan
Categories: Business Automation, Customer Support, AI, Knowledge Management

This comprehensive workflow enables businesses to build and deploy a custom-trained AI chatbot in minutes. By combining a sophisticated data scraping engine with a RAG-based (Retrieval-Augmented Generation) chat interface, it lets you transform website content into a high-performance support agent. Powered by Google Gemini and Pinecone, the system ensures your chatbot provides accurate, real-time answers based exclusively on your business data.

## Benefits

- **Instant knowledge sync**: Automatically crawls sitemaps and URLs to keep your AI up to date with your latest website content.
- **Embeddable anywhere**: Features a ready-to-use chat trigger that can be integrated into the bottom-right of any website via a simple script.
- **High-fidelity retrieval**: Uses vector embeddings so the AI "searches" your documentation before answering, reducing hallucinations.
- **Smart conversational memory**: Equipped with a 10-message window buffer, allowing the bot to handle complex follow-up questions naturally.
- **Cost-efficient scaling**: Leverages Gemini's efficient API and Pinecone's high-speed indexing to handle thousands of customer queries at low cost.

## How it works

1. **Dual-path ingestion**: The process begins with an n8n Form where you provide a sitemap or individual URLs. The workflow automatically handles the XML parsing and URL cleaning to prepare a list of pages for processing.
2. **Clean content extraction**: Using Decodo, the workflow fetches the HTML of each page and uses a specialized extraction node to strip away code, ads, and navigation, leaving only the high-value text content. Sign up using: dashboard.decodo.com/register?referral_code=55543bbdb96ffd8cf45c2605147641ee017e7900.
3. **Vectorization & storage**: The cleaned text is passed to the Gemini embedding model, which converts the information into 3072-dimensional vectors. These are stored in a Pinecone "supportbot" index for instant retrieval.
4. **RAG-powered chat agent**: When a user sends a message through the chat widget, an AI Agent takes over. It uses the user's query to search the Pinecone database for relevant business facts.
5. **Intelligent response generation**: The AI Agent passes the retrieved facts and the current chat history to Google Gemini, which generates a polite, accurate, and contextually relevant response for the user.

## Requirements

- **n8n instance**: A self-hosted or cloud instance of n8n.
- **Google Gemini API key**: For text embeddings and chat generation.
- **Pinecone account**: An API key and a "supportbot" index to store your knowledge base.
- **Decodo access**: For high-quality website content extraction.

## How to use

1. **Initialize the knowledge base**: Use the Form Trigger to input your website URL or sitemap, then run the ingestion flow to populate your Pinecone index.
2. **Configure credentials**: Authenticate your Google Gemini and Pinecone accounts within n8n.
3. **Deploy the chatbot**: Enable the Chat Trigger node and use the provided webhook URL to connect the backend to your website's frontend chat widget.
4. **Test & refine**: Interact with the bot to confirm it retrieves the correct data, and update your knowledge base by re-running the ingestion flow whenever your website content changes.

## Business use cases

- **Customer support teams**: Automate answers to 80% of common FAQs using your existing documentation.
- **E-commerce sites**: Help customers find product details, shipping policies, and return information instantly.
- **SaaS providers**: Build an interactive technical documentation assistant to help users navigate your software.
- **Marketing agencies**: Offer "AI-powered site search" as an add-on service for client websites.

## Efficiency gains

- **Reduce ticket volume** by providing instant self-service options.
- **Eliminate manual data entry** by scraping content directly from the live website.
- **Improve UX** with 24/7 availability and zero wait times for customers.
- **Difficulty level**: Intermediate
- **Estimated setup time**: 30 min
- **Monthly operating cost**: Low (varies with AI usage and Pinecone tier)
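The "high-fidelity retrieval" described above is nearest-neighbor search over embeddings. Pinecone performs this server-side at scale; the toy sketch below only illustrates the underlying cosine-similarity ranking, with tiny 2-dimensional vectors standing in for the real embeddings:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to the query embedding and keep the top k.
function topK(queryVec, records, k) {
  return records
    .map((r) => ({ ...r, score: cosine(queryVec, r.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

The chat agent would pass the text of the top-scoring chunks, plus the conversation history, to Gemini as grounding context.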
by Hugo Le Poole
# Generate AI voice receptionist agents for local businesses using VAPI

Automate the creation of personalized AI phone receptionists for local businesses by scraping Google Maps, analyzing websites, and deploying voice agents to VAPI.

## Who is this for?

- **Agencies** offering AI voice solutions to local businesses
- **Consultants** helping SMBs modernize their phone systems
- **Developers** building lead generation tools for voice AI services
- **Entrepreneurs** launching AI receptionist services at scale

## What this workflow does

This workflow automates the entire process of creating customized AI voice agents:

1. Collects business criteria through a form (city, keywords, quantity)
2. Scrapes Google Maps for matching local businesses using Apify
3. Fetches and analyzes each business website
4. Generates tailored voice agent prompts using Claude AI
5. Automatically provisions voice assistants via the VAPI API
6. Logs all created agents to Google Sheets for tracking

The AI adapts prompts based on business type (salon, restaurant, dentist, spa) with appropriate tone, services, and booking workflows.
## Setup requirements

- **Apify account** with access to the Google Maps Scraper actor
- **Anthropic API key** for prompt generation
- **OpenRouter API key** for website analysis
- **VAPI account** with API access
- **Google Sheets** connected via OAuth

## How to set up

1. Import the workflow template.
2. Add your Apify credentials to the scraping node.
3. Configure the Anthropic and OpenRouter API keys.
4. Replace YOUR_VAPI_API_KEY in the HTTP Request node header.
5. Connect your Google Sheets account.
6. Create a Google Sheet with the columns: Business Name, Category, Address, Phone, Agent ID, Agent URL.
7. Update the Sheet URL in both Google Sheets nodes.
8. Activate the workflow and submit the form.

## Customization options

- **Business templates**: Edit the prompt in "Generate Agent Messages" to add new business categories.
- **Voice settings**: Modify the ElevenLabs voice parameters (stability, similarity boost).
- **LLM model**: Switch between GPT-4, Claude, or other models via OpenRouter.
- **Output format**: Customize the results page HTML in the final Form node.
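The per-category prompt adaptation mentioned above can be pictured as category-specific hints prepended to a base prompt. In the template this is done by Claude; the canned hints and `buildReceptionistPrompt` helper below are illustrative placeholders only, not the workflow's actual prompts:

```javascript
// Hypothetical per-category guidance for the voice agent's system prompt.
const CATEGORY_HINTS = {
  salon: "Offer appointment booking and mention stylist availability.",
  restaurant: "Handle reservations and answer menu or hours questions.",
  dentist: "Book cleanings and route emergencies to the on-call line.",
  spa: "Suggest treatments and confirm booking details calmly.",
};

function buildReceptionistPrompt(business) {
  const hint = CATEGORY_HINTS[business.category] || "Answer general questions politely.";
  return `You are the phone receptionist for ${business.name}. ${hint}`;
}
```

Adding a new business category would then be a matter of adding another entry to the hint table (or, in the template, extending the "Generate Agent Messages" prompt).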
by TOMOMITSU ASANO
# Intelligent Invoice Processing with AI Classification and XML Export

## Summary

An automated invoice processing pipeline that extracts data from PDF invoices, uses an AI Agent for intelligent expense categorization, generates XML for accounting systems, and routes high-value invoices for approval.

## Detailed description

A comprehensive accounts payable automation workflow that monitors for new PDF invoices, extracts their text content, uses AI to classify expenses and detect anomalies, converts the result to XML for accounting system integration, and implements approval workflows for high-value or unusual invoices.

## Key features

- **PDF text extraction**: The Extract from File node parses invoice PDFs automatically.
- **AI-powered classification**: An AI Agent categorizes expenses, suggests GL codes, and detects anomalies.
- **XML export**: Converts structured data to an accounting-compatible XML format.
- **Approval workflow**: Routes invoices over $5,000, or with low confidence, for human review.
- **Multi-trigger support**: Google Drive monitoring or manual webhook upload.
- **Comprehensive logging**: Archives all processed invoices to Google Sheets.

## Use cases

- Accounts payable automation
- Expense report processing
- Vendor invoice management
- Financial document digitization
- Audit trail generation

## Required credentials

- Google Drive OAuth (for the PDF source folder)
- OpenAI API key
- Slack Bot Token
- Gmail OAuth
- Google Sheets OAuth

Node count: 24 (19 functional + 5 sticky notes)

## Unique aspects

- Uses the Extract from File node for PDF text extraction (rarely used)
- Uses the XML node for JSON-to-XML conversion (very rare)
- Uses the AI Agent node for intelligent classification
- Uses the Google Drive Trigger for file monitoring
- Implements an approval workflow with conditional routing
- **Webhook response** mode for API integration

## Workflow architecture

1. [Google Drive Trigger] or [Manual Webhook]
2. [Filter PDF Files]
3. [Download Invoice PDF]
4. [Extract PDF Text]
5. [Parse Invoice Data] (Code)
6. [AI Invoice Classifier] (uses [OpenAI Chat Model])
7. [Parse AI Classification]
8. [Convert to XML]
9. [Format XML Output]
10. [Needs Approval?] (If): Yes (>$5,000) goes to [Email Approval]; No (auto) goes to [Slack Notify]
11. [Archive to Google Sheets]
12. [Respond to Webhook]

## Configuration guide

- **Google Drive**: Set the folder ID to monitor in the Drive Trigger node.
- **Approval threshold**: Defaults to $5,000; adjust in the "Needs Approval?" node.
- **Email recipients**: Configure finance-approvers@example.com.
- **Slack channel**: Set #finance-notifications for updates.
- **GL codes**: The AI suggests codes; customize the AI prompt if needed.
- **Google Sheets**: Configure the document for the invoice archive.
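The "Needs Approval?" branch reduces to a simple predicate over the parsed invoice. A sketch, assuming the AI classifier returns a numeric confidence; the 0.8 cutoff is an example value, not the template's actual setting:

```javascript
// Route an invoice to human review when the amount exceeds the threshold
// or the AI classification confidence is low.
const APPROVAL_THRESHOLD = 5000; // dollars, adjustable in the If node
const MIN_CONFIDENCE = 0.8;      // assumed example cutoff

function needsApproval(invoice) {
  return invoice.total > APPROVAL_THRESHOLD || invoice.confidence < MIN_CONFIDENCE;
}
```

Invoices that pass both checks continue straight to the Slack notification and archive steps; everything else is held for the email approval path.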
by giangxai
## Overview

Automatically generate viral short-form health videos using AI and publish them to social platforms with n8n and Veo 3. This workflow collects viral ideas, analyzes engagement patterns, generates AI video scripts, renders videos with Veo 3, and handles publishing and tracking fully automatically, with no manual editing.

## Who is this for?

This template is ideal for:

- Content creators building faceless health channels (Shorts, Reels, TikTok)
- Affiliate marketers promoting health products with video content
- AI marketers running high-volume short-form content funnels
- Automation builders combining LLMs, video AI, and n8n
- Teams that want a scalable, repeatable system for viral AI video production

If you want to create health-niche videos at scale without manually scripting, rendering, and uploading each video, this workflow is for you.

## What problem is this workflow solving?

Creating viral short-form health videos usually involves many manual steps and disconnected tools, such as:

- Manually collecting and validating viral content ideas
- Writing hooks and scripts for each video
- Switching between AI tools for analysis and video generation
- Waiting for videos to render and checking status manually
- Uploading videos and tracking what has been published

This workflow connects all these steps into a single automated pipeline and removes the repetitive manual work.
## What this workflow does

This automated AI health video workflow:

1. Runs on a defined schedule
2. Collects viral health content ideas from external sources
3. Normalizes and stores the ideas in Google Sheets
4. Loads pending viral ideas for processing
5. Analyzes each idea and generates AI-optimized video scripts
6. Creates AI videos automatically using the Veo 3 API
7. Waits for video rendering and checks completion status
8. Retrieves the final rendered videos
9. Optionally aggregates or merges video assets
10. Publishes the videos to social platforms
11. Updates Google Sheets with the processing and publishing results

The entire process runs end to end with minimal human intervention.

## Setup

### 1. Prepare Google Sheets

Create a Google Sheet to manage your content pipeline with columns such as:

- **idea / topic**: Viral idea or source content
- **analysis**: AI analysis or hook summary
- **script**: Generated video script
- **status**: pending / processing / completed / failed
- **video_url**: Final rendered video link
- **publish_result**: Publishing status or notes

Only rows marked as pending will be processed by the workflow.

### 2. Connect Google Sheets

- Authenticate your Google Sheets account in n8n.
- Select the spreadsheet in the load and update nodes.
- Ensure the workflow can write status updates back to the same sheet.

### 3. Configure AI & Veo 3

- Add credentials for your AI model (e.g., Gemini or similar).
- Configure the prompt logic for health-niche content.
- Add your Veo 3 API credentials.
- Test video creation with a small number of ideas before scaling.

### 4. Configure publishing & schedule

- Set up publishing credentials for your target social platforms.
- Open the Schedule triggers and define how often the workflow runs.
- The schedule controls how frequently new AI health videos are created and published.

## How to customize this workflow to your needs

You can adapt this workflow without changing the core structure:

- Replace the viral idea sources with your own research or internal data
- Adjust the AI prompts for different health sub-niches
- Add manual approval steps before video creation
- Disable publishing and use the workflow only for video generation
- Add retry logic for failed renders or API errors
- Extend the workflow with analytics or performance tracking

## Best practices

- Start with a small batch of test ideas
- Keep the status values consistent in Google Sheets
- Focus on strong hooks for health-related content
- Monitor the rendering and publishing nodes during early runs
- Adjust the schedule frequency based on API limits

## Documentation

For a full walkthrough and advanced customization ideas, see the Video Guide.
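Step 7 above (waiting for rendering) is a poll-until-done loop. A hedged sketch, assuming a hypothetical `checkStatus(videoId)` callback that wraps the Veo 3 status endpoint and resolves to "processing", "completed", or "failed" (the real API's response shape may differ):

```javascript
// Poll a render job until it finishes, fails, or we give up.
async function waitForRender(videoId, checkStatus, { intervalMs = 5000, maxTries = 60 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const status = await checkStatus(videoId);
    if (status === "completed" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return "timeout"; // caller marks the sheet row as failed for retry
}
```

In n8n this same pattern is usually built from a Wait node looping back into an HTTP Request plus an If node; the function form just makes the control flow explicit.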
by Igor Chernyaev
**Template name**: Smart AI Support Assistant for Telegram

**Short description**: Smart AI Support Assistant for Telegram automatically answers repeated questions in your group using a Q&A knowledge base in Pinecone and forwards new or unclear questions to a human expert.

## How it works

1. **Question detection**: Listens to messages in a Telegram group and checks whether each new message is a real question or an expert reply.
2. **Knowledge base search**: Looks for an existing answer in the Pinecone vector store for valid questions from the group.
3. **Auto-reply from cache**: Sends the saved answer straight back to the group when a good match is found, without involving the expert.
4. **Escalation to expert**: Creates a ticket and forwards unanswered questions to the expert in a private chat with the same bot.
5. **Expert learning loop**: Saves the expert's reply to Pinecone so that similar questions are answered automatically in the future.

## Setup steps

1. Connect the Telegram Trigger to a single Telegram bot that is added as an admin to the group/supergroup and receives all user messages.
2. Use the same bot for the expert: the expert's private chat with this bot is where tickets and questions are delivered.
3. Set up Pinecone: create an index, note the environment and index name, and add your Pinecone API key to the n8n credentials.
4. Add your AI model API key (for example, OpenAI) and select the model used for embeddings and answer rewriting.
5. Configure any environment variables or n8n credentials for the project IDs and spaces/namespaces used in Pinecone.
6. Test the full flow: send a question in the group, confirm that a ticket reaches the expert in a private chat, reply once, and check that the next similar question is answered automatically from the cache.
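The choice between auto-reply and escalation comes down to a threshold check on the best Pinecone match. The 0.85 similarity cutoff and the `routeQuestion` helper below are assumed example values for illustration, not settings from the template:

```javascript
// Decide whether to answer from the cached knowledge base or open a
// ticket for the expert, based on the best match's similarity score.
function routeQuestion(bestMatch, threshold = 0.85) {
  if (bestMatch && bestMatch.score >= threshold) {
    return { action: "auto_reply", answer: bestMatch.answer };
  }
  return { action: "escalate" };
}
```

Tuning the threshold trades off expert workload against the risk of replying with a stale or mismatched cached answer.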
by Yusuke
🧠 Overview

Discover and analyze the most valuable community-built n8n workflows on GitHub. This automation searches public repositories, analyzes JSON workflows using AI, and saves a ranked report to Google Sheets — including summaries, use cases, difficulty, stars, node count, and repository links.

⚙️ How It Works

1. Search GitHub Code API — queries for extension:json n8n and splits results
2. Fetch & Parse — downloads each candidate file's raw JSON and safely parses it
3. Extract Metadata — detects AI-powered flows and collects key node information
4. AI Analysis — evaluates the top N workflows (description, use case, difficulty)
5. Merge Insights — combines AI analysis with GitHub data
6. Save to Google Sheets — appends or updates by workflow name

🧩 Setup Instructions (5–10 min)

1. Open the Config node and set:
   - search_query — e.g., "openai" extension:json n8n
   - max_results — number of results to fetch (1–100)
   - ai_analysis_top — number of workflows analyzed with AI
   - SPREADSHEET_ID, SHEET_NAME — Google Sheets target
2. Add a GitHub PAT via an HTTP Header Credential: Authorization: Bearer <YOUR_TOKEN>
3. Connect an OpenAI Credential to the OpenAI Chat Model
4. Connect Google Sheets (OAuth2) to Save to Google Sheets
5. (Optional) Enable the Schedule Trigger to run weekly for automatic updates

> 💡 Tip: If you need to show literal brackets, use backticks like `<example>` (no HTML entities needed).

📚 Use Cases

1) Trend Tracking for AI Automations
- **Goal:** Identify the fastest-growing AI-powered n8n workflows on GitHub.
- **Output:** Sorted list by stars and AI detection, updated weekly.

2) Internal Workflow Benchmarking
- **Goal:** Compare your organization's workflows against top public examples.
- **Output:** Difficulty, node count, and AI usage metrics in Google Sheets.

3) Market Research for Automation Agencies
- **Goal:** Discover trending integrations and tool combinations (e.g., OpenAI + Slack).
- **Output:** Data-driven insights for client projects and content planning.
🧪 Notes & Best Practices

- 🔐 No hardcoded secrets — use n8n Credentials
- 🧱 Works with self-hosted or cloud n8n
- 🧪 Start small (max_results = 10) before scaling
- 🧭 Use "AI Powered" + "Stars" columns in Sheets to identify top templates
- 🧩 Uses only Markdown sticky notes — no HTML formatting required

🔗 Resources

- **GitHub (template JSON):** github-workflow-finder-ai.json
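The "Search GitHub Code API" and "Fetch & Parse" steps above can be sketched outside n8n as follows. This is a request-building and parsing sketch only (no network calls); the raw-URL rewriting mirrors what the workflow's HTTP Request nodes do, and the function names are illustrative:

```python
import urllib.parse

GITHUB_SEARCH_URL = "https://api.github.com/search/code"

def build_search_request(query, token, per_page=10):
    """Build the URL and headers for GitHub's code-search endpoint."""
    params = urllib.parse.urlencode({"q": query, "per_page": per_page})
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    return f"{GITHUB_SEARCH_URL}?{params}", headers

def extract_candidates(search_json):
    """Keep the name, repo, and raw-download URL of each matched workflow file."""
    out = []
    for item in search_json.get("items", []):
        # Rewrite the HTML URL to the raw-content host so the JSON can be downloaded.
        raw_url = item["html_url"].replace(
            "github.com", "raw.githubusercontent.com").replace("/blob/", "/")
        out.append({"name": item["name"],
                    "repo": item["repository"]["full_name"],
                    "raw_url": raw_url})
    return out
```

Note that GitHub's code-search endpoint requires authentication and is rate-limited, which is why the template recommends starting with max_results = 10.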
by Fakhar Khan
Multi-Agent AI Healthcare Assistant Demo

⚠️ EDUCATIONAL DEMONSTRATION ONLY - NOT FOR PRODUCTION MEDICAL USE ⚠️

A comprehensive demonstration of n8n's advanced multi-agent AI orchestration capabilities, showcasing how to build sophisticated conversational AI systems with specialized agent coordination.

🎯 What This Demo Shows

Advanced Multi-Agent Architecture:
- **Main Orchestrator Agent** - traffic controller and decision maker
- **Patient Registration Agent** - specialized data collection and validation
- **Appointment Scheduler Agent** - complex multi-step booking workflows
- **Medical Report Analyzer** - document processing and analysis
- **Prescription Medicine Analyzer** - medicine verification and safety checks

Technical Learning Objectives:
- Multi-agent coordination patterns
- Conditional agent routing and tool selection
- Memory management across conversations
- Multi-modal input processing (text, audio, images, documents)
- Complex state management in AI workflows
- External system integration (Google Sheets, WhatsApp, OpenAI)

🏗️ Architecture Highlights

Multi-Modal Processing Pipeline:
- **Text Messages** → direct agent processing
- **Audio Messages** → transcription → text processing → audio response
- **Images** → vision analysis → context integration
- **Documents** → PDF extraction → content analysis

Agent Specialization:
- Each agent has focused responsibilities and constraints
- Intelligent document classification and routing
- Context-aware tool selection
- Error handling and recovery mechanisms

Memory & State Management:
- Session-based conversation persistence
- Context sharing between specialized agents
- Multi-step workflow state tracking

🔧 Technical Implementation

Key n8n Features Demonstrated:
- @n8n/n8n-nodes-langchain.agent - main orchestrator
- @n8n/n8n-nodes-langchain.agentTool - specialized sub-agents
- @n8n/n8n-nodes-langchain.memoryPostgresChat - conversation memory
- n8n-nodes-base.googleSheetsTool - external data integration
- Complex conditional logic and routing

Integration Patterns:
- WhatsApp Business API integration
- OpenAI GPT-4 model orchestration
- Google Sheets as data backend
- PostgreSQL for conversation memory
- Multi-step document processing

📚 Learning Value

For n8n Developers:
- Enterprise-grade workflow architecture patterns
- AI agent orchestration best practices
- Complex conditional logic implementation
- Memory management in conversational AI
- Multi-modal data processing techniques
- Error handling and recovery strategies

For AI Engineers:
- Agent specialization and coordination
- Tool calling and function integration
- Context management across conversations
- Multi-step workflow design
- Production workflow considerations

⚙️ Setup Requirements

Required Credentials:
- OpenAI API key (GPT-4 access recommended)
- WhatsApp Business API credentials
- Google Sheets OAuth2 API
- PostgreSQL database connection

External Dependencies:
- Google Sheets database (template structure provided)
- WhatsApp Business Account
- PostgreSQL database for conversation memory

🚨 Important Disclaimers

Educational Use Only:
- This is a DEMONSTRATION of n8n capabilities
- **NOT suitable for actual medical use**
- **NOT HIPAA compliant**
- Use only with fictional/test data

Production Considerations:
- Requires proper security implementation
- Needs compliance review for medical use
- Consider HIPAA-compliant alternatives for healthcare
- Implement proper data encryption and access controls

🎓 Educational Applications

Perfect for Learning:
- Advanced n8n workflow patterns
- Multi-agent AI system design
- Complex automation architecture
- Integration pattern best practices
- Conversational AI development

Workshop & Training Use:
- AI automation workshops
- n8n advanced training sessions
- Multi-agent system demonstrations
- Integration pattern tutorials

🔄 Workflow Components

Main Flow:
1. WhatsApp message reception and media processing
2. Input classification and routing
3. Main agent orchestration and tool selection
4. Specialized agent execution
5. Response formatting and delivery

Sub-Agents:
- **Registration Tool** - patient data collection
- **Scheduler Tool** - appointment booking logic
- **Report Analyzer** - medical document analysis
- **Medicine Analyzer** - prescription verification

💡 Customization Ideas

Extend the Demo:
- Add more specialized agents
- Implement different communication channels
- Integrate with other healthcare APIs
- Add more sophisticated document processing
- Implement advanced analytics and reporting

Adapt for Other Industries:
- Customer service automation
- Educational assistance systems
- E-commerce support workflows
- Technical support orchestration

🎯 Perfect for: Learning advanced n8n patterns, AI system architecture, multi-agent coordination
⏱️ Setup Time: 30-45 minutes (with credentials)
📈 Skill Level: Intermediate to Advanced
🏷️ Tags: AI Agents, Multi-Agent Systems, Healthcare Demo, Educational, Advanced Workflows
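The "input classification and routing" step of the main flow can be illustrated with a deliberately naive keyword router. In the actual demo an LLM orchestrator agent chooses the sub-agent tool; this sketch only shows the routing shape, and the keyword rules are invented for illustration:

```python
def route_message(text):
    """Naive keyword router standing in for the LLM orchestrator's tool selection."""
    rules = [
        ("register", "Registration Tool"),      # patient data collection
        ("appointment", "Scheduler Tool"),      # booking workflows
        ("report", "Report Analyzer"),          # medical document analysis
        ("prescription", "Medicine Analyzer"),  # medicine verification
        ("medicine", "Medicine Analyzer"),
    ]
    lowered = text.lower()
    for keyword, tool in rules:
        if keyword in lowered:
            return tool
    return "Main Orchestrator"  # no match: let the general agent handle it
```

The real orchestrator replaces the keyword list with a model call plus conversation memory, but the fallback-to-general-agent pattern is the same.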
by Cheng Siong Chin
How It Works

This workflow automates student progress monitoring and academic intervention orchestration through intelligent AI-driven analysis. Designed for educational institutions, learning management systems, and academic advisors, it solves the critical challenge of identifying at-risk students while coordinating timely interventions across faculty and support services.

The system receives student data via webhook, fetches historical learning records, and merges these sources for comprehensive progress analysis. It employs a dual-agent AI framework for student progress validation and academic orchestration, detecting performance gaps, engagement issues, and intervention opportunities.

The workflow intelligently routes findings based on validation status, triggering orchestration actions for students requiring support while logging compliant progress for successful learners. By executing multi-channel interventions through HTTP APIs and email notifications, it ensures educators and students receive timely guidance while maintaining complete audit trails for academic accountability and accreditation compliance.
Setup Steps

1. Configure the Student Data Webhook trigger endpoint
2. Connect the Workflow Configuration node with academic performance parameters
3. Set up the Fetch Student Learning History node with LMS API credentials
4. Configure the Merge Student Data node for data consolidation
5. Connect the Student Progress Validation Agent with Claude/OpenAI API credentials
6. Set up the AI processing nodes
7. Configure the Route by Validation Status node with performance thresholds
8. Connect the Academic Orchestration Agent with AI API credentials for intervention planning
9. Set up orchestration processing

Prerequisites

Claude/OpenAI API credentials for the AI agents; learning management system API access

Use Cases

Universities identifying students requiring academic support; online learning platforms detecting engagement drops

Customization

Adjust validation thresholds to match institutional academic standards

Benefits

Reduces student identification lag by 75% and eliminates manual progress tracking
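The "Route by Validation Status" threshold logic described above can be sketched as a simple rule check. The record fields (`average_score`, `engagement_rate`) and default thresholds are assumptions for illustration; the workflow's validation agent and your institutional standards would define the real criteria:

```python
def validate_progress(record, score_threshold=60, engagement_threshold=0.5):
    """Flag students whose score or engagement falls below configured thresholds."""
    flags = []
    if record["average_score"] < score_threshold:
        flags.append("low_score")
    if record["engagement_rate"] < engagement_threshold:
        flags.append("low_engagement")
    # Any flag routes the student to the Academic Orchestration Agent;
    # otherwise the record is logged as compliant progress.
    status = "needs_intervention" if flags else "on_track"
    return {"student_id": record["student_id"], "status": status, "flags": flags}
```

In the workflow, `needs_intervention` records trigger the orchestration branch (HTTP APIs, email notifications), while `on_track` records go straight to the audit log.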