by Rahul Joshi
📊 Description

Automate your YouTube research workflow by extracting audio from any video, transcribing it with Whisper AI, and generating structured GEO (Goal–Execution–Outcome) summaries using GPT-4o-mini. 🎥🤖 This template transforms unstructured video content into actionable, searchable insights that are automatically stored in Notion with rich metadata. It’s ideal for creators, educators, analysts, and knowledge workers who want to convert long videos into concise, high-quality summaries without manual effort. Perfect for content indexing, research automation, and knowledge-base enrichment. 📚✨

🔁 What This Template Does

• Triggers on a schedule to continuously process new YouTube videos. ⏰
• Fetches video metadata (title, description, thumbnails, published date) via the YouTube API. 🎥
• Downloads audio using RapidAPI and prepares it for transcription. 🎧
• Transcribes audio into text using OpenAI Whisper. 📝
• Skips invalid entries when no transcript is generated. 🚫
• Merges the transcript with metadata for richer AI context. 🔗
• Uses GPT-4o-mini to generate Goal, Execution, Outcome, and Keywords via structured JSON. 🤖📊
• Parses the AI-generated JSON into Notion-friendly formats. 🔍
• Creates a Notion page with GEO sections, keywords, and video metadata. 📄🏷️
• Produces a fully searchable knowledge record for every processed video. 📚✨

⭐ Key Benefits

✅ Converts long YouTube videos into concise, structured knowledge
✅ AI-powered GEO summaries improve comprehension and recall
✅ Zero manual transcription or note-taking — 100% automated
✅ Seamless Notion integration creates a powerful video knowledge base
✅ Works on autopilot with scheduled triggers
✅ Saves hours for educators, researchers, analysts, and content teams

🧩 Features

- YouTube API integration for metadata retrieval
- RapidAPI audio downloader
- OpenAI Whisper transcription
- GPT-4o-mini structured analysis through LangChain
- Memory buffer + structured JSON parser for consistent results
- Automatic Notion page creation
- Fail-safe transcript validation (IF node)
- Metadata + transcript merging for richer AI context
- GEO (Goal–Execution–Outcome) summarization workflow

🔐 Requirements

- YouTube OAuth2 credentials
- OpenAI API key (Whisper + GPT-4o-mini)
- Notion API integration token + database ID
- RapidAPI key for YouTube audio downloading
- n8n with LangChain nodes enabled

🎯 Target Audience

- YouTubers and content creators archiving their content
- Researchers and educators summarizing long videos
- Knowledge managers building searchable Notion databases
- Automation teams creating video intelligence workflows

🛠️ Step-by-Step Setup Instructions

1. Add YouTube OAuth2, OpenAI, Notion, and RapidAPI credentials. 🔑
2. Replace the placeholder RapidAPI key in the “Get YouTube Audio” node. ⚙️
3. Update the Notion database ID where summaries should be stored. 📄
4. Configure the Schedule Trigger interval based on your needs. ⏰
5. Replace the hardcoded video ID (if present) with dynamic input or playlist logic. 🔗
6. Test with a sample video to verify transcription + AI + Notion output. ▶️
7. Enable the workflow to run automatically. 🚀
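The transcription and GEO-summary steps above map to two OpenAI API calls plus a validation step (the IF-node analog). A minimal Python sketch of the idea outside n8n — the system prompt wording and the `client` wiring are illustrative assumptions; only the `parse_geo` validator runs without credentials:

```python
import json

GEO_KEYS = {"goal", "execution", "outcome", "keywords"}

def transcribe(audio_path: str, client) -> str:
    """Whisper step: audio file in, plain transcript out (network call, not run here)."""
    with open(audio_path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text

def summarize_geo(transcript: str, client) -> dict:
    """GPT-4o-mini step: request strict JSON so the Notion mapper can parse it."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content":
                'Return JSON with keys "goal", "execution", "outcome", '
                'and "keywords" (a list of strings) summarizing the transcript.'},
            {"role": "user", "content": transcript},
        ],
    )
    return parse_geo(resp.choices[0].message.content)

def parse_geo(raw: str) -> dict:
    """Validate the model's JSON before writing to Notion; reject incomplete output."""
    data = json.loads(raw)
    missing = GEO_KEYS - data.keys()
    if missing:
        raise ValueError(f"GEO summary missing keys: {sorted(missing)}")
    return data
```

Validating the model's JSON before the Notion write is what makes the pipeline fail-safe: a malformed summary raises early instead of producing a half-filled page.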
by Abdullah Alshiekh
What Problem Does It Solve?

Customers often ask product questions or prices in comments. Businesses waste time replying manually, leading to delays. Some comments only need a short thank-you reply, while others need a detailed private response. This workflow solves these problems by:

- Replying with a friendly public comment.
- Sending a private message with details when needed.
- Handling compliments, complaints, and unclear comments in a consistent way.

How to Configure It

Facebook Setup
- Connect your Facebook Page credentials in n8n.
- Add the webhook URL from this workflow to your Facebook App/Webhook settings.

AI Setup
- Add your Google Gemini API key (or swap in OpenAI/Claude).
- The included prompt is generic — you can edit it to match your brand tone.

Optional Logging
- If you want to track processed messages, connect a Notion database or another CRM.

How It Works

1. A Webhook catches new Facebook comments.
2. The AI Agent analyzes the comment and categorizes it (question, compliment, complaint, unclear, spam).
3. Replying:
   - Questions/requests → public reply + private message with full details.
   - Compliments → short thank-you reply.
   - Complaints → apology reply + private message for clarification.
   - Unclear comments → ask politely if they need help.
   - Spam/offensive → ignored (no reply).
4. Replies and messages are sent instantly via the Facebook Graph API.

Customization Ideas

- Change the AI prompt to match your brand voice.
- Add forwarding to Slack/Email if a human should review certain replies.
- Log conversations in Notion, Google Sheets, or a CRM for reporting.
- Expand to Instagram or WhatsApp with small adjustments.

If you need any help: Get In Touch
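The category-to-action branching and the public-reply call can be sketched in Python. The Graph API version and the comment-reply endpoint (`POST /{comment-id}/comments`) are assumptions to verify against Meta's current docs; only the pure `route` helper runs without credentials:

```python
import json
import urllib.parse
import urllib.request

GRAPH = "https://graph.facebook.com/v19.0"  # API version is an assumption

def route(category: str) -> dict:
    """Mirror the workflow's branching: what to send for each comment type."""
    return {
        "question":   {"public_reply": True,  "private_message": True},
        "complaint":  {"public_reply": True,  "private_message": True},
        "compliment": {"public_reply": True,  "private_message": False},
        "unclear":    {"public_reply": True,  "private_message": False},
        "spam":       {"public_reply": False, "private_message": False},
    }.get(category, {"public_reply": False, "private_message": False})

def reply_to_comment(comment_id: str, message: str, page_token: str) -> dict:
    """Public reply: POST /{comment-id}/comments (network call, not run here).
    Private replies go through the page's /messages endpoint instead."""
    data = urllib.parse.urlencode(
        {"message": message, "access_token": page_token}).encode()
    with urllib.request.urlopen(f"{GRAPH}/{comment_id}/comments", data=data) as r:
        return json.load(r)
```

Keeping the routing table separate from the send calls makes it easy to add categories (or a human-review branch) without touching the Graph API code.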
by Don Jayamaha Jr
Instantly access real-time Binance Spot Market data in Telegram! This workflow connects the Binance REST API with Telegram and optional GPT-4.1-mini formatting, delivering structured insights such as latest prices, 24h stats, order book depth, trades, and candlesticks directly into chat.

🔎 How It Works

1. A Telegram Trigger listens for incoming user requests.
2. User Authentication validates the Telegram ID to restrict access.
3. A Session ID is generated from chat.id to manage session memory.
4. The Binance AI Agent executes HTTP calls to the Binance public API:
   - Latest Price (Ticker) → /api/v3/ticker/price?symbol=BTCUSDT
   - 24h Statistics → /api/v3/ticker/24hr?symbol=BTCUSDT
   - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
   - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
   - Candlestick Data (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
   - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
5. Utility Tools refine the outputs:
   - Calculator → computes spreads, midpoints, averages, % changes.
   - Think → extracts and reformats JSON into human-readable fields.
   - Simple Memory → saves symbol, sessionId, and user context.
6. A Message Splitter chunks outputs longer than 4000 characters for Telegram.
7. Final structured reports are sent back to Telegram.

✅ What You Can Do with This Agent

- Get real-time Binance Spot prices with 24h stats.
- Fetch order book depth and liquidity snapshots.
- View best bid/ask quotes.
- Retrieve candlestick OHLCV data across timeframes.
- Check recent trades (up to 100).
- Calculate spreads, mid-prices, and % changes automatically.
- Receive clean, structured messages instead of raw JSON.

🛠️ Setup Steps

1. Create a Telegram Bot: use @BotFather and save the bot token.
2. Configure in n8n: import Binance AI Agent v1.02.json, update the User Authentication node with your Telegram ID, and add Telegram credentials (bot token).
3. Add your OpenAI API key.
4. (Optional) Add a Binance API key.
5. Deploy & Test: activate the workflow in n8n and send BTCUSDT to your bot.
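The endpoints listed above are Binance's public Spot REST API, so the agent's Calculator step is easy to reproduce with the standard library alone. A short sketch — the live request is commented out so nothing here needs network access, and `spread_report` mirrors the spread/midpoint math on a `bookTicker` row:

```python
import json
import urllib.request

BASE = "https://api.binance.com"

def get(path: str) -> dict:
    """Fetch one public Spot endpoint, e.g. /api/v3/ticker/bookTicker?symbol=BTCUSDT."""
    with urllib.request.urlopen(BASE + path) as r:
        return json.load(r)

def spread_report(book: dict) -> dict:
    """The Calculator step: spread, midpoint, and spread % from bookTicker fields.
    Binance returns prices as strings, hence the float() conversions."""
    bid, ask = float(book["bidPrice"]), float(book["askPrice"])
    mid = (bid + ask) / 2
    return {"bid": bid, "ask": ask, "mid": mid,
            "spread": ask - bid, "spread_pct": (ask - bid) / mid * 100}

# Live usage (uncomment to hit the real API):
# print(spread_report(get("/api/v3/ticker/bookTicker?symbol=BTCUSDT")))
```

No API key is needed for any of these read-only market-data endpoints, which is why the workflow can run "API-key free" on the Binance side.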
Instantly receive Binance Spot Market insights inside Telegram.

📤 Output Rules

- Group outputs by Price, 24h Stats, Order Book, Candles, Trades.
- Respect Telegram’s 4000-char message limit (auto-split enabled).
- Only structured summaries — no raw JSON.

📺 Setup Video Tutorial

Watch the full setup guide on YouTube: ⚡ Unlock Binance Spot Market insights instantly in Telegram — clean, fast, and API-key free.

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
by Max
This AI receptionist handles restaurant bookings and delivery orders with Vapi, Telegram, and Airtable.

Who’s it for

This n8n template is built for restaurants that want to automate table bookings and delivery or takeaway orders using an AI receptionist. It’s suitable for small to mid-sized restaurants that receive bookings and orders via voice calls or Telegram and want a structured, reliable backend without manual handling.

How it works

The workflow powers an AI receptionist that operates through Vapi (voice) and Telegram (chat). For table bookings, it collects party size and preferred time, checks table availability within the requested time range, and returns available options or a “no availability” response. For orders, the menu is fetched from Airtable, items are validated, prices are calculated, and order details are collected. Delivery addresses are validated and checked against supported areas. If delivery is unavailable, the system automatically offers takeaway. All confirmed bookings and orders are saved to Airtable.

How to set up

1. Download the JSON flows from the Dropbox folder and copy the Airtable base with the template tables to your account.
2. Get Airtable, OpenAI, Telegram Bot, and Google Maps API credentials.
3. Set up the credentials and test.

How to customize the workflow

You can plug in a Vapi assistant: copy the prompt from the AI agent and paste it into the Vapi system prompt section, then add the MCP tool and call it “restaurant tool”. You can adjust booking rules, table capacity logic, menu structure, restaurant location, delivery zones, pricing calculations, and message wording to match your restaurant’s operations.
by Guilherme Campos
This n8n workflow automates the process of creating high-quality, scroll-stopping LinkedIn posts based on live research, AI insight generation, and Google Sheets storage. Instead of relying on recycled AI tips or boring summaries, this system combines real-time trend discovery via Perplexity, structured idea shaping with GPT-4, and content generation tailored to a bold, human LinkedIn voice. The workflow saves each post idea (with image prompt, tone, and summary) to a Google Sheet, sends you a Telegram alert, and even formats your content for direct publishing. Perfect for solopreneurs, startup marketers, or anyone who posts regularly on LinkedIn and wants to sound original, not robotic.

Who’s it for

- Content creators and solopreneurs building an audience on LinkedIn
- Startup teams, PMs, and tech marketers looking to scale thought leadership
- Anyone tired of generic AI-generated posts and craving structured, edgy output

How it works

1. A daily trigger at 6 AM starts the workflow.
2. Pulls recent post history from Google Sheets to avoid repeated ideas.
3. Perplexity AI scans the web and generates 3 structured post ideas (including tone, hook, visual prompt, and summary).
4. GPT-4 refines each into a bold, human-style LinkedIn post, following detailed brand voice rules.
5. Saves everything to Google Sheets (idea, content, image prompt, post status).
6. Sends a Telegram notification to alert you that new ideas are ready.

How to set up

1. Connect your Perplexity, OpenAI, Google Sheets, and Telegram credentials.
2. Point to your preferred Google Sheet and sheet tab for storing post data.
3. Adjust the schedule trigger if you want more or fewer ideas per week.
4. (Optional) Tweak the content style prompt to match your personal tone or niche.
Requirements

- Perplexity API account
- OpenAI API access (GPT-4 or GPT-4o-mini)
- Telegram bot connected to your account
- Google Sheets document with appropriate column headers

How to customize the workflow

- Change the research sources or prompt tone (e.g., more tactical, more spicy, more philosophical)
- Add an image generation tool to turn prompts into visuals for each post
- Filter or tag ideas based on type (trend, tip, story, etc.)
- Post automatically via the LinkedIn API or a Buffer integration
by Avinash Raju
How it works

When a meeting ends in Fireflies, the transcript is automatically retrieved and sent to OpenAI for analysis. The AI evaluates objection handling and call effectiveness, and extracts the key objections raised during the conversation. It then generates specific objection handlers for future calls. The analysis is formatted into a structured report and sent to both Slack for immediate visibility and Google Drive for centralized storage.

Set up steps

Prerequisites:
- Fireflies account with API access
- OpenAI API key
- Slack workspace
- Google Drive connected to n8n

Configuration:
1. Connect the Fireflies webhook to trigger on meeting completion.
2. Add your OpenAI API key in the AI analysis nodes.
3. Configure the Slack channel destination for feedback delivery.
4. Set the Google Drive folder path for report storage.
5. Adjust the AI prompts in the sticky notes to match your objection categories and sales methodology.
by Stéphane Bordas
How it Works

This workflow lets you build a Messenger AI Agent capable of understanding text, images, and voice notes, and replying intelligently in real time. It starts by receiving messages from a Facebook Page via a Webhook, detects the message type (text, image, or audio), and routes it through the right branch. Each input is then prepared as a prompt and sent to an AI Agent that can respond using text generation, perform quick calculations, or fetch information from Wikipedia. Finally, the answer is formatted and sent back to Messenger via the Graph API, creating a smooth, fully automated chat experience.

Set Up Steps

1. Connect credentials: add your OpenAI API key and Facebook Page Access Token in n8n credentials.
2. Plug in the webhook: copy the Messenger webhook URL from your workflow and paste it into your Facebook Page Developer settings (Webhook → Messages → Subscribe).
3. Customize the agent: edit the System Message of the AI Agent to define tone, temperature, and purpose (e.g. “customer support”, “math assistant”).
4. Enable memory & tools: turn on Simple Memory to keep conversation context and activate tools like Calculator or Wikipedia.
5. Test & deploy: switch to production mode and test text, image, and voice messages directly from Messenger.

Benefits

💬 Multi-modal Understanding — Handles text, images, and audio messages seamlessly.
⚙️ Full Automation — End-to-end workflow from Messenger to AI and back.
🧠 Smart Replies — Uses OpenAI + Wikipedia + Calculator for context-aware answers.
🚀 No-Code Setup — Build your first Messenger AI in less than 30 minutes.
🔗 Extensible — Easily connect more tools or APIs like Airtable, Google Sheets, or Notion.
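The type-detection branch works off the Messenger webhook payload, where each messaging event carries either a `text` field or typed `attachments`. A minimal routing sketch in Python — the payload shape follows Meta's webhook documentation but should be treated as an assumption to verify:

```python
def extract_events(payload: dict) -> list:
    """Flatten a Messenger webhook body: entry[] -> messaging[]."""
    return [m for e in payload.get("entry", []) for m in e.get("messaging", [])]

def message_type(event: dict) -> str:
    """Classify one messaging event as 'text', 'image', 'audio', or 'unknown',
    mirroring the workflow's branch node."""
    msg = event.get("message", {})
    for att in msg.get("attachments", []):
        if att.get("type") in ("image", "audio"):
            return att["type"]  # attachment payloads carry a downloadable URL
    if "text" in msg:
        return "text"
    return "unknown"
```

Checking attachments before text matters: a message can carry both, and the media branch (image analysis, audio transcription) is the more specific handler.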
by Emir Belkahia
🎙️ Voice-to-Slides: Business Review Kickstarter for Customer Success

This workflow helps Customer Success Managers brain dump their client knowledge via voice notes and kickstart business review preparation by auto-generating a structured Google Slides draft in their official slide deck template.

Who's it for

CSMs and Account Managers who want to capture meeting insights quickly via voice and get a head start on business review prep: not a finished presentation, but a solid first draft to build from.

What it does (and doesn't do)

✅ It DOES:
- Transcribe your (potentially unstructured) voice notes accurately
- Organize your thoughts into Value Realized / Recommendations / Next Steps
- Create a Google Slides file in your official template
- Pre-populate placeholders with structured content

❌ It DOESN'T:
- Generate a client-ready presentation
- Add charts, metrics, or data visualizations
- Write polished, final copy
- Replace the actual work of crafting your business review

Think of it as a smart assistant that turns your brain dump into a structured starting point, not a finished product.
How it works

1. Brain dump via voice - Speak freely to your Telegram bot about your client: wins, challenges, recommendations, next steps (no need to be perfectly organized).
2. AI transcription - Groq Whisper converts audio to text.
3. Security check - Scans for sensitive data (PII, confidential info) and alerts if found.
4. Content structuring - AI categorizes your rambling into three sections.
5. Review & approve - You receive an email with the extracted content to validate and add client details.
6. Template generation - Creates a Google Slides file from your template in the client's Drive folder.
7. First draft ready - Slides are populated with placeholders filled: now you refine, add data, polish.

Set up steps

Setup time: ~20 minutes

1. Create a Telegram bot via @BotFather.
2. Prepare your own Google Slides template with these placeholders:
   - value_realized_placeholder
   - recommendations_placeholder
   - next_steps_placeholder
3. Connect credentials: Telegram, Groq, OpenAI, Gmail, Google Drive, Google Slides.
4. Update the template ID in the "Copy template to customer Folder" node.
5. Set your company name in the "Set CSM's company name" node.
6. Add your email in all "human in the loop" nodes.

Requirements

- Telegram account
- Groq API key (Whisper transcription)
- OpenAI API key
- Google Workspace (Gmail, Drive, Slides)
- Google Slides template with the required placeholders
- Client Google Drive folders (shared access)

Cost breakdown

For a typical 3-5 minute voice note:
- **Transcription (Groq Whisper)**: Free
- **AI Processing (GPT-5-nano + GPT-5-mini)**: ~$0.005

💰 Bottom line: Half a cent per business review. You could run 200+ business reviews for $1. The workflow uses cost-effective models (GPT-5-nano for security checks, GPT-5-mini for content extraction) to keep costs negligible while maintaining quality.

Note: Costs may vary based on voice note length and verbosity. Prices based on GPT-5-nano and GPT-5-mini pricing as of Nov 2025.
💡 Pro tips

- **Be mindful of the guardrail**: It's designed to catch sensitive info (full names + company + financials), but it can sometimes be overzealous. If you find it blocking legitimate content, consider:
  - Adjusting the confidence threshold (currently 0.7) to be less strict
  - Removing the guardrail entirely if you're experienced and know what to avoid
  - Reviewing the "Sensitive information" custom prompt to fine-tune detection rules
- **Structure your thoughts loosely**: While speaking, try to mentally organize around Value Realization → Recommendations → Next Steps. It's totally fine if things mix or overlap; the AI will reorganize, but having this structure in mind helps you cover everything.
- **Record with your tools open**: This is key! Have your previous BRs, CS platform, analytics dashboards, or CRM open while recording. Reference specific metrics, feature adoption rates, and data points directly from your systems. The AI can't look up data for you, so feed it the good stuff.
- **Don't overthink it**: Your first recording will probably feel awkward. That's normal. The AI is surprisingly good at cleaning up "umms," tangents, and unstructured rambling. Just brain dump.
- **Keep it under 5 minutes**: Better transcription accuracy, faster processing, and cheaper API costs. If you have more to say, split it into multiple voice notes.
- **Review the email summary carefully**: The AI extracts content well but loses the nuance and context you have. Use the email review step to catch misinterpretations before they hit the slides.

What to do after the workflow runs

1. Open the generated slides in the client's folder.
2. Refine the AI-generated text (add nuance, fix tone).
3. Add charts, screenshots, and data visualizations.
4. Polish the formatting and visual hierarchy.
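Under the hood, filling a copied template is one Google Slides API `batchUpdate` call with a `replaceAllText` request per placeholder. A sketch of the request builder — whether your template shows the placeholders bare or wrapped in braces is your choice; bare names are assumed here:

```python
# Placeholder names match the template setup in the steps above.
PLACEHOLDERS = {
    "value_realized": "value_realized_placeholder",
    "recommendations": "recommendations_placeholder",
    "next_steps": "next_steps_placeholder",
}

def build_fill_requests(sections: dict) -> list:
    """One replaceAllText request per placeholder in the copied deck."""
    return [
        {"replaceAllText": {
            "containsText": {"text": PLACEHOLDERS[key], "matchCase": True},
            "replaceText": sections[key],
        }}
        for key in PLACEHOLDERS
    ]

# The resulting list goes in the body of:
#   POST https://slides.googleapis.com/v1/presentations/{presentationId}:batchUpdate
#   {"requests": build_fill_requests(sections)}
```

Because `replaceAllText` is a plain text substitution, the template's fonts, layout, and branding survive untouched, which is what makes "official template, AI-filled draft" work.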
by Cheng Siong Chin
How It Works

This workflow automates monthly revenue aggregation from multiple financial sources, including Stripe, PayPal, Shopify, and bank feeds, while delivering intelligent tax forecasting through GPT-4-based structured analysis. It systematically retrieves revenue data, consolidates disparate datasets into a unified view, and applies GPT-4 to predict upcoming tax obligations with greater accuracy. The system then generates clearly formatted, audit-ready reports and automatically distributes tax projections to designated agents via Gmail, while securely storing all outputs in Google Sheets to maintain traceable audit trails. Designed for tax professionals, accounting firms, and finance teams, it enables accurate predictive tax planning and supports a proactive compliance strategy without the need for manual calculations or spreadsheet-driven analysis.

Setup Steps

1. Connect Stripe, PayPal, and Shopify credentials via n8n authentication.
2. Configure your OpenAI GPT-4 API key for structured tax analysis.
3. Connect your Gmail account for report distribution and Google Sheets for storage.
4. Set the monthly trigger schedule and customize the tax category rules.

Prerequisites

Stripe, PayPal, Shopify, or bank feed accounts; OpenAI API key; Gmail account; Google Sheets.

Use Cases

- Accounting firms automating quarterly tax prep for multiple clients

Customization

- Modify revenue sources; adjust GPT-4 prompts for specific tax scenarios

Benefits

- Eliminates manual tax calculations and reduces forecasting errors
by Jimleuk
On my never-ending quest to find the best embeddings model, I was intrigued to come across Voyage-Context-3 by MongoDB and was excited to give it a try. This template runs the embedding model over an arXiv research paper and stores the results in a vector store. It was only fitting to use MongoDB Atlas from the same parent company. This template also includes a RAG-based Q&A agent which taps into the vector store, as a test to help qualify whether the embeddings are any good and whether the difference is even noticeable.

How it works

This template is split into 2 parts: the first part imports a research document, which is then chunked and embedded into our vector store; the second part builds a RAG-based Q&A agent to test vector store retrieval on the research paper. Read the steps for more details.

How to use

- First, create a Voyage account at voyageai.com and have a MongoDB database ready.
- Start with Step 1: fill in the "Set Variables" node and click on the Manual Execute Trigger. This will take care of populating the vector store with the research paper.
- To use the Q&A agent, you must publish the workflow to access the public chat interface. This is because "Respond to Chat" works best in this mode and not in editor mode.
- To use your own document, edit the "Set Variables" node to define the URL to your document. This embeddings approach should work best on larger documents.

Requirements

- Voyageai.com account for embeddings. You may need to add credit to get a reasonable RPM for this workflow.
- MongoDB database, either self-hosted or online at https://www.mongodb.com.
- OpenAI account for the RAG Q&A agent.

Customising this workflow

- The Voyage embeddings work with any vector store, so feel free to swap out MongoDB Atlas for others such as Qdrant or Pinecone if you're not a fan of it.
- If you're feeling brave, instead of my 3-sequential-pages setup, why not try the whole document! Fair warning that you may hit memory problems if your instance isn't sufficiently sized - but if it is, go ahead and share the results!
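What makes contextualized embeddings different is that all chunks of a document are embedded in a single request, so each chunk's vector reflects the full document. A stdlib-only sketch of the chunk-and-embed step — the endpoint path and request shape are taken from Voyage's contextualized-embeddings docs and should be verified before use; only the pure `chunk` helper runs without an API key:

```python
import json
import urllib.request

def chunk(text: str, size: int = 1000, overlap: int = 100) -> list:
    """Simple character chunker standing in for n8n's text splitter."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed_document(chunks: list, api_key: str, model: str = "voyage-context-3") -> dict:
    """Embed all chunks of one document together so every vector carries
    whole-document context (network call, not run here). Note the nested
    list: `inputs` is a list of documents, each a list of chunks."""
    req = urllib.request.Request(
        "https://api.voyageai.com/v1/contextualizedembeddings",  # verify in docs
        data=json.dumps({"inputs": [chunks], "model": model}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return json.load(r)
```

The one-call-per-document pattern is also why the whole-document experiment at the end of this section is tempting: the larger the document you can fit in a single request, the more context each chunk's vector inherits.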
by hayatofujita
Description

Generate high-quality, long-form SEO articles from trending news using mobile-first approvals via LINE.

👥 Who’s it for

This workflow is designed for content marketers, SEO specialists, and solo bloggers who want to scale their content production without losing editorial control. It is ideal for users who want to manage a deep-dive writing process entirely from their smartphone, ensuring every article meets their standards before being drafted.

🚀 How it works

This template acts as a sophisticated "AI Editorial Staff" that connects RSS feeds, LINE, OpenAI, and Google Workspace.

1. Smart News Sourcing: Automatically monitors niche-specific RSS feeds to identify the latest trending topics.
2. AI Strategic Planning: Uses OpenAI (GPT-4o/GPT-4o-mini) to propose high-potential SEO keywords and catchy titles based on live news.
3. Mobile Editorial Control: Sends proposals directly to your LINE. By replying with "Create Article," you trigger the next phase of the workflow.
4. Stateful Session Management: Uses a Google Sheets "Bridge" to store unique resume URLs, allowing you to resume the automated process through simple text commands.
5. High-Quality Looping: Instead of generating an entire article at once (which often results in shallow content), this workflow loops through each chapter. The AI writes one section at a time in detail and appends it to a Google Doc.
6. Automated Archiving: Once the deep-dive writing is finished, it logs the final article title, URL, and date into a master Google Sheets ledger and notifies you via LINE.

⚙️ Setup steps

1. Prepare a Google Sheet: Create a spreadsheet with two tabs. Tab 1 (シート1): headers should be userId and resumeurl. Tab 2 (Article_History): headers should be Title, URL, and Date.
2. Configure Credentials: Set up your credentials in n8n for OpenAI (API Key), Google Workspace (OAuth2), and the LINE Messaging API (Header Auth).
3. Update the Workflow Configuration node: Paste your LINE User ID (from the LINE Developers Console) and your Google Spreadsheet ID, and update the RSS URL in the news node to your target niche.
4. Update the Google Sheets nodes: In the "Store Resume Key" and "Get Resume URL" nodes, select your specific spreadsheet.
5. Configure the LINE Webhook: Copy the Production URL from the "LINE Webhook Receiver" node, paste it into the LINE Developers Console under "Webhook URL," and enable "Use Webhook."
6. Activate: Save the workflow and toggle it to “Active.”

📦 Requirements

- n8n version 1.0 or later
- OpenAI API Key (GPT-4o recommended for high-quality writing)
- Google Workspace Account (Google Docs, Google Sheets)
- LINE Developers Account (Messaging API)

🎨 How to customize

- Refine Writing Style: Edit the System Message in the "Chapter Writing" node to change the tone (e.g., from "Formal Technical" to "Conversational Storyteller").
- Switch Approval Keywords: Change the IF node condition to respond to "Approve" or "Go" instead of "Create Article."
- Direct-to-WordPress: Replace the "Create Google Doc" and "Update Docs" nodes with a WordPress node to automatically publish or save a draft on your website once the loop is complete.
- Language Adaptation: Update the RSS URL and AI prompts to source news in any language and output the final article in another.
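The "High-Quality Looping" idea is the interesting part: write one chapter at a time, feed recent chapters back in as context, and append each to the document as it is produced. A Python skeleton of that loop, with the AI call and the doc-append injected as callables — the two-chapter trailing context window is an illustrative assumption, not the template's exact setting:

```python
def write_article(outline, write_chapter, append_to_doc):
    """Chapter-by-chapter generation loop.

    outline:       list of chapter headings
    write_chapter: fn(heading, context) -> chapter text (e.g. an OpenAI call)
    append_to_doc: fn(chapter) -> None (e.g. a Google Docs append)

    Generating sections one at a time keeps each one detailed,
    avoiding the shallow output of a single full-article prompt.
    """
    written = []
    for heading in outline:
        context = "\n\n".join(written[-2:])  # trailing context window (assumption)
        chapter = write_chapter(heading, context)
        append_to_doc(chapter)
        written.append(chapter)
    return written
```

Appending inside the loop (rather than assembling everything at the end) also means a crash or an editorial stop mid-run still leaves a partially written, resumable Google Doc, which pairs naturally with the resume-URL mechanism above.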
by TOMOMITSU ASANO
# Multi-Language Content Translation Pipeline with AI Quality Control

This workflow provides a professional-grade translation pipeline that combines the speed of DeepL with the intelligent reasoning of OpenAI's GPT-4. It is designed to help teams scale their global content reach without sacrificing linguistic accuracy or cultural nuance.

Who’s it for

This template is ideal for content managers, digital marketing teams, and global publishers who need to localize high volumes of articles or documentation while maintaining a "human-in-the-loop" quality standard.

How it works

The workflow automates the entire translation lifecycle through the following steps:

1. Trigger: Content is ingested via a Webhook or a recurring Schedule.
2. Translation: The source text is translated into multiple target languages simultaneously using the DeepL API.
3. AI Quality Guard: An OpenAI agent evaluates each translation, assigning a quality score based on accuracy and fluency.
4. Automated Publishing: Content that meets your quality threshold is automatically uploaded as a draft to WordPress.
5. Manual Review: Any translations that fall below the threshold are flagged in Slack for human intervention.
6. Data Logging: All results are saved to Google Sheets to build a searchable translation memory.

How to set up

1. Credentials: Connect your DeepL, OpenAI, WordPress, Slack, and Google Sheets accounts.
2. Configuration: In the 'Translation Configuration' node, define your source language and a list of target languages (e.g., DE, FR, JA).
3. Google Sheets: Create a sheet with the headers contentId, title, sourceLanguage, targetLanguage, translatedText, and qualityScore.
4. Slack: Choose the notification channels for alerts and summary reports.
Requirements

- DeepL API key
- OpenAI API key (GPT-4o or GPT-4o-mini recommended)
- WordPress site with Application Passwords enabled
- A Slack workspace and a Google Sheets account

How to customize the workflow

- **Adjust Quality Standards**: Change the qualityThreshold value in the configuration node to make the AI verification more or less strict.
- **Add Platforms**: Replace the WordPress node with other CMS nodes like Ghost, Strapi, or Contentful to match your stack.
- **Advanced QA**: Modify the AI Quality Reviewer’s prompt to focus on specific brand guidelines or industry terminology.
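The translate-then-gate flow can be sketched against DeepL's REST API. The `/v2/translate` endpoint and `DeepL-Auth-Key` header follow DeepL's documentation (free-tier keys use `api-free.deepl.com` instead), and the 0.8 default stands in for your configured qualityThreshold; only the pure routing helper runs without credentials:

```python
import json
import urllib.request

def deepl_translate(text: str, target_lang: str, api_key: str) -> str:
    """POST /v2/translate: one source text, one target language
    (network call, not run here)."""
    req = urllib.request.Request(
        "https://api.deepl.com/v2/translate",  # api-free.deepl.com for free-tier keys
        data=json.dumps({"text": [text], "target_lang": target_lang}).encode(),
        headers={"Authorization": f"DeepL-Auth-Key {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as r:
        return json.load(r)["translations"][0]["text"]

def route_by_quality(score: float, threshold: float = 0.8) -> str:
    """The workflow's branch after the AI Quality Guard:
    publish as a WordPress draft, or flag in Slack for human review."""
    return "publish_draft" if score >= threshold else "flag_for_review"
```

Because translation and gating are separate functions, swapping DeepL for another engine, or tightening the threshold, changes one function without disturbing the rest of the pipeline.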