by Mantaka Mahir
How it works

This workflow automates converting Google Drive documents into searchable vector embeddings for AI-powered applications:

• Takes a Google Drive folder URL as input
• Initializes a Supabase vector database with the pgvector extension
• Fetches all files from the specified Drive folder
• Downloads and converts each file to plain text
• Generates 768-dimensional embeddings using Google Gemini
• Stores documents with embeddings in Supabase for semantic search

Built for the Study Agent workflow to power document-based Q&A, but it also works well for any RAG system, AI chatbot, knowledge base, or semantic search application that needs to query document collections.

Set up steps

Prerequisites:
• Google Drive OAuth2 credentials
• Supabase account with Postgres connection details
• Google Gemini API key (free tier available)

Setup time: ~10 minutes

Steps:
1. Add your Google Drive OAuth2 credentials to the Google Drive nodes
2. Configure Supabase Postgres credentials in the SQL node
3. Add Supabase API credentials to the Vector Store node
4. Add the Google Gemini API key to the Embeddings node
5. Update the input with your Drive folder URL
6. Execute the workflow

Note: The SQL query will drop any existing "documents" table, so back up data if needed. Detailed node-by-node instructions are in the sticky notes within the workflow.

Works with: Study Agent (main use case), custom AI agents, chatbots, documentation search, customer support bots, or any RAG application.
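The semantic-search step above boils down to ranking stored embeddings by cosine similarity to a query embedding (pgvector's cosine-distance operator does this server-side). A minimal pure-Python sketch of that ranking, with made-up 2-dimensional vectors standing in for the real 768-dimensional Gemini embeddings:

```python
import math

def cosine_similarity(a, b):
    # pgvector's <=> operator returns cosine *distance*; similarity = 1 - distance
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec, documents, k=3):
    # documents: (text, embedding) pairs, as stored in the "documents" table
    ranked = sorted(documents, key=lambda d: cosine_similarity(query_vec, d[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

In the actual workflow this comparison happens inside Supabase, not in n8n; the sketch only shows what the Vector Store node is asking the database to do.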
by Calvin Cunningham
Use Cases
- Personal or family budget tracking
- Small business expense logging via Telegram
- Hands-free logging (using voice messages)

How it works
- The Telegram trigger receives text or voice.
- An optional branch transcribes audio to text.
- AI parses the message into a structured array (an SOP prompt enforces the schema).
- Split Out produces one item per expense.
- Loop Over Items appends rows sequentially with a Wait node, preventing missed writes.
- In parallel, Item Lists (Aggregate) builds a single summary string; Merge (Wait for Both) releases one final Telegram confirmation.

Setup Instructions
1. Connect credentials: Telegram, Google, OpenAI.
2. Sheets: Create a sheet with headers Date, Category, Merchant, Amount, Note. Copy the Spreadsheet ID and sheet name. Map columns in Append to Google Sheet.
3. Pick models: set the Chat model (e.g., gpt-4o-mini) and Whisper for transcription if using audio.
4. Wait time: keep 500–1000 ms to avoid API race conditions.
5. Run: Send a Telegram message like: Gas 34.67, Groceries 82.45, Coffee 6.25, Lunch 14.90.

Customization ideas:
- Add a categories map (Memory/Set) for consistent labeling.
- Add currency detection/formatting.
- Add an error-to-Telegram path for invalid schema.
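The AI parsing step turns a message like the sample above into one structured item per expense. As a deterministic stand-in for illustration (the workflow itself uses an LLM with a schema-enforcing prompt), a regex parser producing the same row shape might look like this:

```python
import re

def parse_expenses(message: str):
    """Parse 'Merchant 12.34' pairs from a comma-separated message into
    the row shape appended to Google Sheets. Illustrative only; the real
    workflow delegates this to the AI node."""
    rows = []
    for part in message.split(","):
        m = re.match(r"\s*(.+?)\s+(\d+(?:\.\d{1,2})?)\s*$", part)
        if m:
            rows.append({"Merchant": m.group(1), "Amount": float(m.group(2))})
    return rows
```

Split Out then emits each dict as its own item, which is why the append loop sees one row per expense.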
by Rahul Joshi
Description
Transform Figma design files into detailed QA test cases with AI-driven analysis and structured export to Google Sheets. This workflow helps QA and product teams streamline design validation, test coverage, and documentation — all without manual effort. 🎨🤖📋

What This Template Does
Step 1: Trigger manually and input your Figma file ID. 🎯
Step 2: Fetches the full Figma design data (layers, frames, components) via API. 🧩
Step 3: Sends structured design JSON to GPT-4o-mini for intelligent test case generation. 🧠
Step 4: AI analyzes UI components, user flows, and accessibility aspects to generate 5–10 test cases. ✅
Step 5: Parses and formats results into a clean structure.
Step 6: Exports test cases directly to Google Sheets for QA tracking and reporting. 📊

Key Benefits
✅ Saves 2–3 hours per design by automating test case creation
✅ Ensures consistent, comprehensive QA documentation
✅ Uses AI to detect UX, accessibility, and functional coverage gaps
✅ Centralizes output in Google Sheets for easy collaboration

Features
- Figma API integration for design parsing
- GPT-4o-mini model for structured test generation
- Automated Google Sheets export
- Dynamic file ID and output schema mapping
- Built-in error handling for large design files

Requirements
- Figma Personal Access Token
- OpenAI API key (GPT-4o-mini)
- Google Sheets OAuth2 credentials

Target Audience
- QA and Test Automation Engineers
- Product & Design Teams
- Startups and Agencies validating Figma prototypes

Setup Instructions
1. Connect your Figma token as HTTP Header Auth (X-Figma-Token).
2. Add your OpenAI API key in n8n credentials (model: gpt-4o-mini).
3. Configure Google Sheets OAuth2 and select your sheet.
4. Input the Figma file ID from the design URL.
5. Run once manually, verify the output, then enable for regular use.
by Yasser Sami
Human-in-the-Loop LinkedIn Post Generator (Telegram + AI)

This n8n template demonstrates how to build a human-in-the-loop AI workflow that helps you create professional LinkedIn posts via Telegram. The agent searches the web, drafts content, asks for your approval, and refines it based on your feedback — ensuring every post sounds polished and on-brand.

Who’s it for
- Content creators and marketers who want to save time drafting LinkedIn posts.
- SaaS founders or solopreneurs who regularly share updates or insights.
- Anyone who wants an AI writing assistant with human control in the loop.

How it works / What it does
1. Trigger: The workflow starts when you send a message to the Telegram bot asking it to write a LinkedIn post (e.g., “Write a LinkedIn post about AI in marketing”).
2. Research: The AI agent uses the Tavily tool to search the web and gather context for your topic.
3. Drafting: An AI model (OpenAI or Gemini) creates a professional LinkedIn post based on the findings.
4. Human-in-the-loop: The bot sends the draft to you in Telegram and asks: “Good to go?”
   - If you approve, the post is saved to a Google Sheet, ready to publish.
   - If you disapprove and give feedback, the feedback is sent to a second AI agent that revises and improves the post. The improved draft is sent back to you for final approval.
5. Finalization: Once approved, the post is appended to a Google Sheet — your ready-to-post content library.

This workflow combines AI creativity with human oversight to produce polished, authentic LinkedIn content every time.

How to set up
1. Import this template into your n8n account.
2. Connect your Telegram bot (via Telegram Trigger and Send Message nodes).
3. Connect your Google Sheets account to store approved posts.
4. Set up your AI model provider (OpenAI or Gemini) and Tavily API key for web search.
5. Activate the workflow and start chatting with your AI writing assistant on Telegram!

Requirements
- n8n account
- Telegram bot token
- OpenAI or Google Gemini account (for text generation)
- Tavily API key (for web search)
- Google Sheets account (for saving approved posts)

How to customize the workflow
- **Post Tone**: Adjust AI prompts to match your personal voice (professional, storytelling, inspirational, etc.).
- **Approval Logic**: Modify the approval step to allow multiple revision loops or add a “draft-only” mode.
- **Storage Options**: Instead of Google Sheets, save approved posts to Notion, Airtable, or your CMS.
- **Multi-platform**: Extend the same logic for X (Twitter) or Threads by changing the final output destination.
- **Branding**: Add your brand guidelines or preferred hashtags to the AI prompts for consistent style.

This template helps you write better LinkedIn posts faster — keeping you in full control while AI does the heavy lifting.
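The approve/revise cycle at the heart of the workflow is a simple loop. A minimal sketch, with `get_verdict` standing in for the Telegram approval prompt and `revise` for the second AI agent (both names are illustrative):

```python
def approval_loop(draft, get_verdict, revise, max_rounds=3):
    """Human-in-the-loop revision loop. get_verdict(draft) returns
    (approved: bool, feedback: str); revise(draft, feedback) returns a
    new draft. Mirrors the Telegram approve/revise cycle conceptually."""
    for _ in range(max_rounds):
        approved, feedback = get_verdict(draft)
        if approved:
            return draft
        draft = revise(draft, feedback)
    return draft  # cap revisions and return the latest draft
```

The `max_rounds` cap corresponds to the "multiple revision loops" customization idea: without it, a never-satisfied reviewer would loop forever.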
by Stephan Koning
VEXA: AI-Powered Meeting Intelligence

I'll be honest, I built this because I was getting lazy in meetings and missing key details. I started with a simple VEXA integration for transcripts, then added AI to pull out summaries and tasks. But that just solved part of the problem. The real breakthrough came when we integrated Mem0, creating a persistent memory of every conversation. Now, you can stop taking notes and actually focus on the person you're talking to, knowing a system is tracking everything that matters. This is the playbook for how we built it.

How It Works
This isn't just one workflow; it's a two-part system designed to manage the entire meeting lifecycle from start to finish.

Bot Management: It starts when you flick a switch in your CRM (Baserow). A command deploys or removes an AI bot from Google Meet. No fluff—it's there when you need it, gone when you don't. The workflow uses a quick "digital sticky note" in Redis to remember who the meeting is with and instantly updates the status in your Baserow table.

AI Analysis & Memory: Once the meeting ends, VEXA sends the transcript over. Using the client ID (thank god for Redis), we feed the conversation to an AI model (OpenAI). It doesn't just summarize; it extracts actionable next steps and potential risks. All this structured data is then logged into a memory layer (Mem0), creating a permanent, searchable record of every client conversation.

Setup Steps: Your Action Plan
This is designed for rapid deployment. Here's what you do:
1. Register Webhook: Run the manual trigger in the workflow once. This sends your n8n webhook URL to VEXA, telling it where to dump transcripts after a call.
2. Connect Your CRM: Copy the vexa-start webhook URL from n8n. Paste it into your Baserow automation so it triggers when you set the "Send Bot" field to Start_Bot.
3. Integrate Your Tools: Plug your VEXA, Mem0, Redis, and OpenAI API credentials into n8n.
4. Use the Baserow Template: I've created a free Baserow template to act as your control panel. Grab it here: https://baserow.io/public/grid/t5kYjovKEHjNix2-6Rijk99y4SDeyQY4rmQISciC14w. It has all the fields you need to command the bot.

Requirements
- An active n8n instance or cloud account.
- Accounts for VEXA.ai, Mem0.ai, Baserow, and OpenAI.
- A Redis database.
- Your Baserow table must have these fields: Meeting Link, Bot Name, Send Bot, and Status.

Next Steps: Getting More ROI
This workflow is the foundation. The real value comes from what you build on top of it.
- **Automate Follow-ups**: Use the AI-identified next steps to automatically trigger follow-up emails or create tasks in your project management tool.
- **Create a Unified Client Memory**: Connect your email and other communication platforms. Use Mem0 to parse and store every engagement, building a complete, holistic view of every client relationship.
- **Build a Headless CRM**: Combine these workflows to build a fully AI-powered system that handles everything from lead capture to client management without any manual data entry.

Copy the workflow and stop taking notes.
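The Redis "digital sticky note" is just a keyed value with an expiry (Redis `SETEX`), mapping a bot/meeting id to the client it belongs to so the transcript can be attributed later. A dependency-free stand-in that shows the pattern (the real workflow uses an actual Redis node):

```python
import time

class StickyNotes:
    """In-memory stand-in for the Redis sticky note: maps a meeting/bot id
    to a client id, with an expiry like SETEX. The `now` parameter exists
    only to make the expiry behavior easy to demonstrate."""
    def __init__(self):
        self._store = {}

    def set(self, key, value, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._store[key] = (value, now + ttl_seconds)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        value, expires = self._store.get(key, (None, 0.0))
        return value if now < expires else None
```

The TTL matters: if the transcript never arrives, the note simply expires instead of accumulating stale state.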
by Cheng Siong Chin
How It Works
A scheduled process aggregates content from eight distinct data sources and standardizes all inputs into a unified format. AI models perform sentiment scoring, detect conspiracy or misinformation signals, and generate trend analyses across domains. A multi-criteria decision (MCDM) routing model prioritizes and channels insights to the appropriate workflows. Dashboards visualize real-time analytics, trigger KPIs based on thresholds, and compile comprehensive market-intelligence reports for stakeholders.

Setup Steps
1. Data Sources: Connect news APIs, social media platforms, academic databases, code repositories, and documentation feeds.
2. AI Analysis: Configure OpenAI models for sentiment analysis, conspiracy detection, and trend scoring.
3. Dashboards: Integrate analytics platforms and enable automated email or reporting outputs.
4. Storage: Configure a database for historical records, trend archives, and competitive-intelligence storage.

Prerequisites
Multi-source API credentials; OpenAI API key; dashboard platform access; email service; code repository access; academic database credentials.

Use Cases
Competitive intelligence monitoring; market trend analysis; technology landscape tracking; product strategy research; misinformation filtering.

Customization
Adjust sentiment thresholds; add or remove data sources; modify analysis rules; extend AI models.

Benefits
Reduces research time by 80%; consolidates market intelligence; improves decision accuracy.
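The routing model above can be pictured as a function from an item's AI scores to a downstream channel. The thresholds and channel names below are illustrative assumptions, not the template's actual configuration:

```python
def route_insight(insight):
    """Toy multi-criteria router: pick a downstream workflow from an
    analyzed item's scores. All thresholds here are made up for the sketch."""
    if insight.get("misinformation_score", 0.0) >= 0.8:
        return "misinformation-review"
    if abs(insight.get("sentiment", 0.0)) >= 0.6 and insight.get("trend_score", 0.0) >= 0.7:
        return "market-intelligence-report"
    return "archive"
```

"Adjust sentiment thresholds" in the Customization section corresponds to tuning exactly these kinds of cutoffs.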
by Rahul Joshi
📊 Description
Automate your YouTube research workflow by extracting audio from any video, transcribing it with Whisper AI, and generating structured GEO (Goal–Execution–Outcome) summaries using GPT-4o-mini. 🎥🤖

This template transforms unstructured video content into actionable, searchable insights that are automatically stored in Notion with rich metadata. It’s ideal for creators, educators, analysts, and knowledge workers who want to convert long videos into concise, high-quality summaries without manual effort. Perfect for content indexing, research automation, and knowledge-base enrichment. 📚✨

🔁 What This Template Does
• Triggers on a schedule to continuously process new YouTube videos. ⏰
• Fetches video metadata (title, description, thumbnails, published date) via the YouTube API. 🎥
• Downloads audio using RapidAPI and prepares it for transcription. 🎧
• Transcribes audio into text using OpenAI Whisper. 📝
• Skips invalid entries when no transcript is generated. 🚫
• Merges the transcript with metadata for richer AI context. 🔗
• Uses GPT-4o-mini to generate Goal, Execution, Outcome, and Keywords via structured JSON. 🤖📊
• Parses the AI-generated JSON into Notion-friendly formats. 🔍
• Creates a Notion page with GEO sections, keywords, and video metadata. 📄🏷️
• Produces a fully searchable knowledge record for every processed video. 📚✨

⭐ Key Benefits
✅ Converts long YouTube videos into concise, structured knowledge
✅ AI-powered GEO summaries improve comprehension and recall
✅ Zero manual transcription or note-taking — 100% automated
✅ Seamless Notion integration creates a powerful video knowledge base
✅ Works on autopilot with scheduled triggers
✅ Saves hours for educators, researchers, analysts, and content teams

🧩 Features
- YouTube API integration for metadata retrieval
- RapidAPI audio downloader
- OpenAI Whisper transcription
- GPT-4o-mini structured analysis through LangChain
- Memory buffer + structured JSON parser for consistent results
- Automatic Notion page creation
- Fail-safe transcript validation (IF node)
- Metadata + transcript merging for richer AI context
- GEO (Goal–Execution–Outcome) summarization workflow

🔐 Requirements
- YouTube OAuth2 credentials
- OpenAI API key (Whisper + GPT-4o-mini)
- Notion API integration token + database ID
- RapidAPI key for YouTube audio downloading
- n8n with LangChain nodes enabled

🎯 Target Audience
- YouTubers and content creators archiving their content
- Researchers and educators summarizing long videos
- Knowledge managers building searchable Notion databases
- Automation teams creating video intelligence workflows

🛠️ Step-by-Step Setup Instructions
1. Add YouTube OAuth2, OpenAI, Notion, and RapidAPI credentials. 🔑
2. Replace the placeholder RapidAPI key in the “Get YouTube Audio” node. ⚙️
3. Update the Notion database ID where summaries should be stored. 📄
4. Configure the Schedule Trigger interval based on your needs. ⏰
5. Replace the hardcoded video ID (if present) with dynamic input or playlist logic. 🔗
6. Test with a sample video to verify transcription + AI + Notion output. ▶️
7. Enable the workflow to run automatically. 🚀
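The "parses the AI-generated JSON into Notion-friendly formats" step can be sketched as below. The GEO key names (`goal`, `execution`, `outcome`, `keywords`) and the minimal block shape are assumptions for illustration, not the template's exact Notion payload:

```python
import json

def geo_to_notion_blocks(ai_json: str):
    """Turn the model's GEO JSON into a minimal list of heading/text
    blocks ready to be mapped onto a Notion page."""
    geo = json.loads(ai_json)
    blocks = []
    for section in ("goal", "execution", "outcome"):
        blocks.append({"heading": section.title(), "text": geo.get(section, "")})
    blocks.append({"heading": "Keywords", "text": ", ".join(geo.get("keywords", []))})
    return blocks
```

The IF-node transcript validation sits before this step: if Whisper produced no text, there is no JSON to parse and the item is skipped.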
by Rahul Joshi
📊 Description
Generate high-quality, SEO-optimized content briefs automatically using AI, real-time keyword research, SERP intelligence, and historical content context. This workflow standardizes user inputs, fetches search metrics, analyzes competitors, and produces structured SEO briefs with quality scoring and version control. It also stores all versions in Google Sheets and generates HTML previews for easy review and publishing. 🤖📄📈

What This Template Does
- Normalizes user input from the chat trigger into structured fields (intent, topic, parameters). ✏️
- Fetches real-time keyword metrics such as search volume, CPC, and difficulty from DataForSEO. 🔍
- Retrieves SERP insights through SerpAPI for top competitors, headings, and content gaps. 🌐
- Loads historical brief versions from Google Sheets for continuity and versioning. 📚
- Uses an advanced GPT-4o-mini agent to generate a complete SEO brief with title, metadata, keywords, outline, entities, and internal links. 🤖
- Calculates detailed SEO, differentiation, and completeness quality scores. 📊
- Validates briefs against quality thresholds (outline length, keywords, word count, overall score). ⚡
- Stores approved briefs in Google Sheets with version control and timestamping. 🗂️
- Generates an HTML preview with styled formatting for team review or CMS use. 🖥️
- Sends Slack alerts when a brief does not meet quality standards. 🚨

Key Benefits
✅ Fully automated SEO content brief generation
✅ Uses real-time keyword + SERP + competitor intelligence
✅ Ensures quality through automated scoring and validation
✅ Built-in version control for content operations teams
✅ Beautiful HTML preview ready for editors or clients
✅ Reduces research time from hours to minutes
✅ Ideal for content agencies, SEO teams, and AI-powered workflows

Features
- Chat-triggered brief generation
- Real-time DataForSEO keyword metrics
- SERP analysis tool integration
- GPT-4o-mini structured AI agent
- Google Sheets integration for storing & retrieving versions
- Automated quality scoring (SEO, gaps, completeness)
- HTML preview builder with rich formatting
- Slack alerting for low-quality briefs
- Semantic entities, content gaps, competitor insights

Requirements
- OpenAI API (GPT-4o-mini or compatible model)
- DataForSEO access credentials (Basic Auth)
- SerpAPI key for SERP extraction
- Google Sheets OAuth2 integration
- Optional: Slack webhook for quality alerts

Target Audience
- SEO teams generating large numbers of content briefs
- Content agencies scaling production with automation
- Marketing teams building data-driven content strategies
- SaaS teams wanting automated keyword-based briefs
- Anyone needing structured, high-quality content briefs from chat

Step-by-Step Setup Instructions
1. Connect your OpenAI API credential and confirm GPT-4o-mini availability. 🔌
2. Add DataForSEO HTTP Basic Auth for keyword metrics. 📊
3. Connect SerpAPI for SERP analysis tools. 🌐
4. Add Google Sheets OAuth2 and link your content_versions sheet. 📄
5. Optional: Add a Slack webhook URL for quality alerts. 🔔
6. Test by sending a topic via the chat trigger.
7. Review the generated SEO brief and HTML preview.
8. Enable the workflow for continued use in your content pipeline. 🚀
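The quality-threshold validation step can be sketched as a gate over a few measurable properties of the brief. The specific thresholds below (5 outline sections, 3 keywords, 800 words) are illustrative, not the template's configured values:

```python
def score_brief(brief):
    """Toy quality gate mirroring the validation step: check outline
    length, keyword count, and target word count, and roll the results
    into a percentage score."""
    checks = {
        "outline": len(brief.get("outline", [])) >= 5,
        "keywords": len(brief.get("keywords", [])) >= 3,
        "word_count": brief.get("target_word_count", 0) >= 800,
    }
    score = 100 * sum(checks.values()) // len(checks)
    return score, checks
```

A brief failing the gate is what triggers the Slack alert path instead of the Google Sheets save.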
by Jamot
How it works
Your WhatsApp AI Assistant automatically handles customer inquiries by linking your Google Docs knowledge base to incoming WhatsApp messages. The system instantly processes customer questions, references your business documentation, and delivers AI-powered responses through OpenAI or Gemini - all without you lifting a finger. Works seamlessly in individual chats and WhatsApp groups, where the assistant can respond on your behalf.

Set up steps
Time to complete: 15–30 minutes

Step 1: Create your WhapAround account and connect your WhatsApp number (5 minutes)
Step 2: Prepare your Google Doc with business information and add the document ID to the system (5 minutes)
Step 3: Configure the WhatsApp webhook and map message fields (10 minutes)
Step 4: Connect your OpenAI or Gemini API key (3 minutes)
Step 5: Send a test message to verify everything works (2 minutes)
Optional: Set up a PostgreSQL database for conversation memory and configure custom branding/escalation rules (additional 15–20 minutes)

Detailed technical configurations, webhook URLs, and API parameter settings are provided within each workflow step to guide you through the exact setup process.
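Step 3's "map message fields" means pulling a few fields out of the incoming webhook payload for the AI step. The payload shape below is a simplified assumption for illustration, not WhapAround's exact schema:

```python
def map_whatsapp_message(payload):
    """Extract the fields the workflow needs from an incoming webhook
    payload. Field names are illustrative; map them to your provider's
    actual schema in the webhook node."""
    msg = payload.get("message", {})
    return {
        "chat_id": payload.get("chat_id"),
        "sender": msg.get("from"),
        "text": msg.get("text", "").strip(),
        "is_group": payload.get("chat_type") == "group",
    }
```

The `is_group` flag is what lets the assistant behave differently in group chats versus individual conversations.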
by Cheng Siong Chin
How It Works
Scheduled triggers run automated price checks across multiple travel data sources. The collected data is aggregated, validated, and processed through an AI analysis layer that compares trends, detects anomalies, and evaluates multi-criteria factors such as price movement, seasonality, and route demand. The system then routes results into booking preparation, report generation, and notification modules. When target price conditions are met, alerts are sent and records are updated accordingly.

Setup Steps
1. Connect Google Flights and Skyscanner APIs using authenticated tokens.
2. Configure the OpenAI API for enhanced analysis and multi-factor evaluation.
3. Link Google Sheets for storing historical price data.
4. Add WordPress site credentials to enable automated report publishing.
5. Enable email notifications for price alerts and updates.
6. Adjust the scheduler frequency in the Schedule Price Check node to control how often the workflow runs.

Prerequisites
Google Flights API, Skyscanner API, flight booking service credentials, OpenAI API key, Google Sheets access, WordPress admin account, email service configured.

Use Cases
Travel agencies automating client alerts for price drops. Corporate travel managers monitoring bulk bookings.

Customization
Modify price thresholds in the Multi-Criteria Decision node. Add airline or destination filters in the search parameters.

Benefits
Eliminates manual price monitoring. Reduces booking delays through automation.
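The "target price conditions" check can be sketched as a decision over the price history stored in Google Sheets. The exact criteria live in the Multi-Criteria Decision node; this sketch assumes two simple rules (absolute target hit, or a percentage drop versus the recent average):

```python
def should_alert(history, target_price, drop_pct=0.10):
    """Alert when the latest price hits the target, or drops by drop_pct
    versus the average of earlier observations. Rules and defaults are
    illustrative, not the node's actual configuration."""
    latest = history[-1]
    avg = sum(history[:-1]) / max(len(history) - 1, 1)
    return latest <= target_price or latest <= avg * (1 - drop_pct)
```

"Modify price thresholds" in the Customization section amounts to changing `target_price` and `drop_pct` here.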
by Stéphane Bordas
Who is this for?
This workflow is for healthcare professionals, consultants, coaches, and service businesses who want to completely automate their appointment booking system via WhatsApp — without manual intervention for reservations, availability checks, or cancellation management.

What problem is this workflow solving? / Use case
Managing appointments manually via WhatsApp is extremely time-consuming: checking availability, confirmations, rescheduling, cancellations. This workflow automates the entire process — from initial request to final confirmation — allowing your clients to book, modify, or cancel appointments 24/7, in natural language, directly via WhatsApp.

What this workflow does
- Processes multi-modal messages (text, audio, images) from the WhatsApp Business API
- Detects the message type and routes it to the appropriate processing (Whisper for audio, GPT-4 Vision for images)
- Uses an AI Agent with 5 Cal.com tools to manage the complete appointment lifecycle
- Checks real-time availability in your Cal.com calendar
- Books appointments autonomously without human intervention
- Handles cancellation and rescheduling requests
- Maintains conversation context with Simple Memory for natural exchanges
- Formats responses with Unicode bold for better WhatsApp readability
- Sends automated replies directly to the client

The result: a fully automated 24/7 appointment management system via WhatsApp.

Setup
1. WhatsApp Business API
Connect your WhatsApp Business API account in n8n. Set up the webhook in the Facebook Developer Console (Webhook → Messages → Subscribe). Add your phone_number_id and access token credentials.

2. Cal.com
Create a Cal.com account and configure your calendar. Generate an API Key from Cal.com settings. Set up your event types (duration, availability, pricing). Add your Cal.com API credentials in n8n.

3. OpenAI
Get an OpenAI API key (for GPT-4, Whisper, and Vision). Add your OpenAI credentials in n8n. The workflow uses GPT-4 for conversation, Whisper for audio transcription, and GPT-4 Vision for image analysis.

4. Customize the AI Agent
Edit the System Message to define your agent's personality, tone, and business context. Adjust the timezone in the tool parameters (default: Europe/Paris). Configure event type IDs for different appointment types.

5. Test & activate
Test with different message types (text, audio, image) from WhatsApp. Verify appointments are created correctly in Cal.com. Switch to production mode and activate the workflow.

This workflow helps you build a fully autonomous AI booking assistant, transforming WhatsApp into a 24/7 appointment management system.

Need help customizing? Contact me for consulting and support: LinkedIn / Youtube
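The "Unicode bold" formatting mentioned above works by mapping ASCII letters and digits to the Unicode Mathematical Bold block, which WhatsApp renders as bold glyphs without any markup. A small sketch of that transliteration:

```python
def whatsapp_bold(text: str) -> str:
    """Map ASCII letters/digits to their Unicode Mathematical Bold
    counterparts (U+1D400 caps, U+1D41A lowercase, U+1D7CE digits);
    everything else passes through unchanged."""
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(0x1D400 + ord(ch) - ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr(0x1D41A + ord(ch) - ord("a")))
        elif "0" <= ch <= "9":
            out.append(chr(0x1D7CE + ord(ch) - ord("0")))
        else:
            out.append(ch)
    return "".join(out)
```

Note this only covers ASCII; accented characters (common in French replies) would pass through unstyled, so apply it selectively, e.g. to dates and times in confirmations.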
by Kelsey Brown
Never miss a workflow failure
Automatically capture, analyze, and debug n8n workflow errors using Claude Sonnet 4, with real-time documentation lookup via the Context7 MCP server.

Why this works better
Documentation-first AI analysis: Context7 searches official n8n docs before diagnosing. Claude Sonnet 4 only responds after finding relevant documentation—no hallucinations.
Complete error intelligence: Every error is stored in Supabase with full context: stack traces, execution data, workflow structure, AI analysis. Track patterns across all workflows.
Production-ready emails: Professional HTML with inline code snippets, proper formatting, and one-click execution links.

What happens
- Root cause: plain-English explanation
- Specific fix: exact field names and values
- Prevention tip: avoid future errors
- Execution link: one-click debug access
- Statistics: error frequency tracking

How it works
1. Error → Your workflow fails
2. Capture → Full context retrieved
3. Research → Context7 searches n8n docs
4. Analysis → AI diagnoses with context
5. Email → Formatted alert delivered
6. Record → Error stored in database

Requirements
- **Supabase** - Free tier works
- **OpenRouter** - $5 credit
- **Context7** - Free API available
- **SMTP** - For email delivery
- **n8n API** - Must be enabled

Setup: 15 minutes
Overview: Complete instructions, including the SQL script, are in the workflow sticky notes.
Activate: Settings → Error Workflow → Select this workflow

Customize
- Reduce cost: remove workflow_json and execution_data from the prompt
- Change output: swap the email node for Telegram/Slack/Discord; expressions are provided in the notes

FAQ
Works with community nodes? Yes. Context7 searches all n8n documentation.
Handles sensitive data? Remove workflow_json and execution_data from the prompt to exclude content.
Customizable design? Yes. The HTML template in the "Build HTML" node is fully editable.
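The Capture → Record steps amount to flattening the error-trigger payload into one row for Supabase. A minimal sketch; the field names below approximate n8n's error-workflow payload but should be treated as assumptions, not the template's exact schema:

```python
def build_error_record(error_payload):
    """Assemble the row stored in Supabase from an n8n error-trigger
    payload. Keys are illustrative; check the actual payload in your
    n8n version before mapping."""
    execution = error_payload.get("execution", {})
    return {
        "workflow_name": error_payload.get("workflow", {}).get("name"),
        "error_message": execution.get("error", {}).get("message"),
        "node": execution.get("lastNodeExecuted"),
        "execution_url": execution.get("url"),
    }
```

Dropping `workflow_json` and `execution_data` from the AI prompt (the cost/privacy customization above) leaves this compact record intact, since it only keeps the identifying fields.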