by Jose Bossa
👥 Who's it for

This workflow is perfect for businesses or individuals who want to automate WhatsApp conversations 💬 with an intelligent AI chatbot that can handle text, voice notes 🎵, and images 🖼️. No advanced coding required!

🤖 What it does

It automatically receives WhatsApp messages through WasenderAPI, intelligently buffers consecutive messages to avoid fragmented responses, processes multimedia content (transcribing audio and analyzing images with AI), and responds naturally using GPT-4o mini with conversation memory. All while protecting your WhatsApp account from being banned.

⚙️ How it works

- 📱 Webhook Trigger – Receives new messages from WasenderAPI
- 🗃️ Redis Buffer System – Groups consecutive messages intelligently (7-second window)
- 🔀 Content Classifier – Routes messages by type (text, audio, or image)
- 🎵 Audio Processing – Decrypts and transcribes voice notes using OpenAI Whisper
- 🖼️ Image Analysis – Decrypts and analyzes images with GPT-4o Vision
- 🧠 AI Agent (GPT-4o mini) – Generates intelligent responses with 10-message memory
- ⏱️ Anti-Ban Wait – 6-second delay to simulate human typing
- 📤 Message Sender – Delivers the response back to the WhatsApp user

📋 Requirements

- WasenderAPI account with a connected WhatsApp number: https://wasenderapi.com/
- Redis database (free tier works fine)
- OpenAI API key with access to GPT-4o mini and Whisper
- n8n's AI Agent, LangChain, and Redis nodes

🛠️ How to set up

1. Create your WasenderAPI account and connect a WhatsApp number
2. Set up a free Redis database and get connection credentials
3. Configure your OpenAI API key in n8n credentials
4. Replace the WasenderAPI Bearer token in the "Get the audio", "Get the photo", and "Send Message to User" nodes
5. Change the Manual Trigger to a Webhook and configure it in WasenderAPI
6. Customize the AI Agent prompt to match your business needs
7. Adjust wait times if needed (default: 6 seconds for responses, 7 seconds for buffer)
8. Save and activate the workflow ✅

🎨 How to customize

- Modify the AI Agent prompt to change the bot's personality and instructions
- Adjust the buffer wait time (7 seconds) for faster or slower message grouping
- Change the response delay (6 seconds) based on your use case; 30 seconds is recommended
- Add more content types (documents, videos) by extending the Switch Type node
- Configure the conversation memory window (default: 10 messages)
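As a rough illustration of the Redis Buffer System's grouping rule, the sketch below keeps a per-chat buffer and merges messages that arrive within the 7-second window, so the bot answers a burst of messages once instead of fragment by fragment. It is a minimal sketch: an in-memory Map stands in for Redis, and the function name is an assumption, not taken from the workflow.

```javascript
// Sketch of the 7-second message buffer (Redis replaced by a Map here).
const WINDOW_MS = 7000;
const buffers = new Map(); // chatId -> { messages, lastSeen }

function bufferMessage(chatId, text, now = Date.now()) {
  const entry = buffers.get(chatId) ?? { messages: [], lastSeen: 0 };
  // Start a new burst if the previous message is older than the window.
  if (now - entry.lastSeen > WINDOW_MS) entry.messages = [];
  entry.messages.push(text);
  entry.lastSeen = now;
  buffers.set(chatId, entry);
  // Combined text that would be handed to the AI Agent.
  return entry.messages.join(' ');
}
```

Changing `WINDOW_MS` corresponds to adjusting the buffer wait time described in the customization notes.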
by Trung Tran
📚 Telegram RAG Chatbot with PDF Document & Google Drive Backup

An upgraded Retrieval-Augmented Generation (RAG) chatbot built in n8n that lets users ask questions via Telegram and receive accurate answers from uploaded PDFs. It embeds documents using OpenAI and backs them up to Google Drive.

👤 Who's it for

Perfect for:
- Knowledge workers who want instant access to private documents
- Support teams needing searchable SOPs and guides
- Educators enabling course material Q&A for students
- Individuals automating personal document search + cloud backup

⚙️ How it works / What it does

💬 Telegram Chat Handling
1. User sends a message – Triggered by the Telegram bot, the workflow checks if the message is text.
2. Text message → OpenAI RAG Agent – If the message is text, it's passed to a GPT-powered document agent. This agent retrieves relevant info from embedded documents using semantic search and returns a context-aware answer to the user.
3. Send answer back – The bot sends the generated response back to the Telegram user.
4. Non-text input fallback – If the message is not text, the bot replies with a polite unsupported message.

📄 PDF Upload and Embedding
1. User uploads PDFs manually – A manual trigger starts the embedding flow.
2. Default Data Loader – Reads and chunks the PDF(s) into text segments.
3. Insert to Vector Store (Embedding) – Text chunks are embedded using OpenAI and saved for retrieval.
4. Backup to Google Drive – The original PDF is uploaded to Google Drive for safekeeping.
🛠️ How to set up

Telegram Bot
- Create via BotFather
- Connect it to the Telegram Trigger node

OpenAI
- Use your OpenAI API key
- Connect the Embeddings and Chat Model nodes (GPT-3.5/4)
- Ensure both embedding and querying use the same Embedding node

Google Drive
- Set up credentials in n8n for your Google account
- Connect the "Backup to Google Drive" node

PDF Ingestion
- Use the "Upload your PDF here" trigger
- Connect it to the loader, embedder, and backup flow

✅ Requirements

- Telegram bot token
- OpenAI API key (GPT + Embeddings)
- n8n instance (self-hosted or cloud)
- Google Drive integration
- PDF files to upload

🧩 How to customize the workflow

| Feature | How to Customize |
|-------------------------------|-------------------------------------------------------------------|
| Auto-ingest from folders | Add Google Drive/Dropbox watchers for new PDFs |
| Add file upload via Telegram | Extend Telegram bot to receive PDFs and run the embedding flow |
| Track user questions | Log Telegram usernames and questions to a database |
| Summarize documents | Add summarization step on upload |
| Add Markdown or HTML support | Format replies for better Telegram rendering |

Built with 💬 Telegram + 📄 PDF + 🧠 OpenAI Embeddings + ☁️ Google Drive + ⚡ n8n
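The chunking step performed by the Default Data Loader can be sketched as fixed-size segments with overlap, so context is preserved across chunk boundaries before embedding. The sizes below are assumptions for illustration, not the loader's actual defaults.

```javascript
// Minimal sketch of PDF text chunking before embedding.
// chunkSize and overlap are illustrative values, not workflow defaults.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached
  }
  return chunks;
}
```

Larger overlap improves retrieval continuity at the cost of more embedded tokens.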
by Cheng Siong Chin
How It Works

This workflow automates workforce intelligence analysis and strategic planning for HR directors, workforce planners, and operations managers in enterprises managing distributed teams. It solves the challenge of transforming raw workforce data into actionable insights while maintaining human oversight on critical staffing decisions.

Weekly triggers initiate the analysis cycle, generating synthetic workforce metrics that flow through specialized AI agents operating in parallel: workforce intelligence assessment, cognitive forecasting for demand prediction, attrition risk calculation, and intelligence analysis for pattern detection. A planning optimization agent synthesizes findings into mobility recommendations and scenario projections. Results route through human approval for critical workforce changes before generating final reports with strategic recommendations.

Setup Steps

1. Configure API credentials with Llama-3.1-70B-Instruct model access
2. Set up the weekly schedule trigger for Monday morning analysis runs
3. Configure the human approval node with the workforce planning lead's email address
4. Customize AI agent prompts for organization-specific workforce metrics and KPIs
5. Set up final report distribution to stakeholders

Prerequisites

API key, HR data access (anonymized employee metrics)

Use Cases

Strategic workforce planning, seasonal staffing optimization

Customization

Integrate HRIS systems for live data, add department-specific forecasting models

Benefits

Reduces planning cycle time by 70%, provides predictive insights for proactive decisions
by Cheng Siong Chin
How It Works

This workflow automates monthly tax filing processes by retrieving financial data, performing AI-driven tax calculations, coordinating pre-filing reviews with key stakeholders, incorporating feedback, and managing overall submission readiness. It pulls accounting records, executes GPT-5-based tax calculations with transparent reasoning, formats comprehensive pre-filing reports, and routes them to a submission coordinator via email for review. The system captures reviewer feedback through structured prompts, intelligently applies necessary corrections, archives finalized records in Google Drive, and continuously tracks filing status. It is designed for accounting firms, tax practices, and finance departments that require coordinated, multi-stakeholder tax filing with minimal manual intervention.

Setup Steps

1. Connect the accounting system and configure financial data fetch parameters.
2. Set up the OpenAI GPT-4 API for tax calculations and reasoning extraction.
3. Configure Gmail, Chat Model, and Google Drive credentials.
4. Define submission coordinator contacts and configure feedback collection.

Prerequisites

Accounting system access; OpenAI API key; Gmail account; Google Drive

Use Cases

Tax firms managing multi-client monthly filings with partner review

Customization

Modify tax calculation prompts for jurisdictions, adjust feedback collection fields

Benefits

Eliminates manual filing coordination, reduces submission errors
by Rahul Joshi
Description

Transform Figma design files into detailed QA test cases with AI-driven analysis and structured export to Google Sheets. This workflow helps QA and product teams streamline design validation, test coverage, and documentation — all without manual effort. 🎨🤖📋

What This Template Does

Step 1: Trigger manually and input your Figma file ID. 🎯
Step 2: Fetches the full Figma design data (layers, frames, components) via API. 🧩
Step 3: Sends structured design JSON to GPT-4o-mini for intelligent test case generation. 🧠
Step 4: AI analyzes UI components, user flows, and accessibility aspects to generate 5–10 test cases. ✅
Step 5: Parses and formats results into a clean structure.
Step 6: Exports test cases directly to Google Sheets for QA tracking and reporting. 📊

Key Benefits

✅ Saves 2–3 hours per design by automating test case creation
✅ Ensures consistent, comprehensive QA documentation
✅ Uses AI to detect UX, accessibility, and functional coverage gaps
✅ Centralizes output in Google Sheets for easy collaboration

Features

- Figma API integration for design parsing
- GPT-4o-mini model for structured test generation
- Automated Google Sheets export
- Dynamic file ID and output schema mapping
- Built-in error handling for large design files

Requirements

- Figma Personal Access Token
- OpenAI API key (GPT-4o-mini)
- Google Sheets OAuth2 credentials

Target Audience

- QA and Test Automation Engineers
- Product & Design Teams
- Startups and Agencies validating Figma prototypes

Setup Instructions

1. Connect your Figma token as HTTP Header Auth (X-Figma-Token).
2. Add your OpenAI API key in n8n credentials (model: gpt-4o-mini).
3. Configure Google Sheets OAuth2 and select your sheet.
4. Input the Figma file ID from the design URL.
5. Run once manually, verify the output, then enable for regular use.
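Step 2 returns the Figma file as a deeply nested node tree. A hedged sketch of how such a tree can be flattened before being sent to the model: walk the document and collect the names of frames and components. The node shapes follow Figma's public file JSON (`type`, `name`, `children`); the function name is an assumption.

```javascript
// Depth-first walk over a Figma-style node tree, collecting the names
// of nodes whose type matches (e.g. FRAME, COMPONENT).
function collectNodes(node, types = ['FRAME', 'COMPONENT'], out = []) {
  if (types.includes(node.type)) out.push(node.name);
  for (const child of node.children ?? []) collectNodes(child, types, out);
  return out;
}
```

Feeding the model a flat list of frames and components (rather than the raw file JSON) keeps prompts small for large design files.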
by Shun Nakayama
Turn your favorite podcast episodes into engaging social media content automatically. This workflow fetches new episodes from an RSS feed, transcribes the audio using OpenAI Whisper, generates a concise summary using GPT-4o, and drafts a tweet. It then sends the draft to Slack for your review before posting it to X (Twitter).

Who is this for

Content creators, social media managers, and podcast enthusiasts who want to share insights without manually listening to and typing out every episode.

Key Features

- **Large File Support:** Includes custom logic to download audio in chunks, ensuring stability even with long episodes (preventing timeouts).
- **Human-in-the-Loop:** Nothing gets posted without your approval. You can review the AI-generated draft in Slack before it goes live.
- **High-Quality AI:** Uses OpenAI's Whisper for accurate transcription and GPT-4o for intelligent summarization.

How it works

1. Monitor: Checks the podcast RSS feed daily for new episodes.
2. Process: Downloads the audio (handling large files via chunking) and transcribes it.
3. Draft: AI summarizes the transcript into bullet points and formats it for X (Twitter).
4. Approve: Sends the draft to a Slack channel.
5. Publish: Once approved by you, it posts the tweet to your X account.

Requirements

- OpenAI API Key
- Slack Account & App (Bot Token)
- X (Twitter) Developer Account (OAuth2)

Setup instructions

1. RSS Feed: The template defaults to "TED Talks Daily" for demonstration. Open the [Step 1] RSS node and replace the URL with your target podcast.
2. Connect Credentials: Set up your credentials for OpenAI, Slack, and X (Twitter) in the respective nodes.
3. Slack Channel: In the [Step 12] Slack node, select the Channel ID where you want to receive the approval request.
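The large-file chunking idea can be sketched as planning a series of HTTP Range requests, so no single download is big enough to time out. This is an illustrative sketch only; the 5 MB chunk size is an assumption, not a value taken from the template.

```javascript
// Plan HTTP Range headers to download a large audio file in pieces.
// Range is inclusive: "bytes=0-4" requests the first 5 bytes.
function buildRanges(totalBytes, chunkBytes = 5 * 1024 * 1024) {
  const ranges = [];
  for (let start = 0; start < totalBytes; start += chunkBytes) {
    const end = Math.min(start + chunkBytes, totalBytes) - 1;
    ranges.push(`bytes=${start}-${end}`); // value for the Range header
  }
  return ranges;
}
```

Each range would become one HTTP Request, with the binary responses concatenated before transcription.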
by Shelly-Ann Davy
Automate Bug Reports: GitHub Issues → AI Analysis → Jira Tickets with Slack & Discord Alerts

Automatically convert GitHub issues into analyzed Jira tickets with AI-powered severity detection, developer assignment, and instant team alerts.

Overview

This workflow captures GitHub issues in real time, analyzes them with GPT-4o for severity and categorization, creates enriched Jira tickets, assigns the right developers, and notifies your team across Slack and Discord—all automatically.

Features

- **AI-Powered Triage**: GPT-4o analyzes bug severity, category, and root cause, and generates reproduction steps
- **Smart Assignment**: Automatically assigns developers based on mentioned files and issue context
- **Two-Way Sync**: Posts Jira ticket links back to GitHub issues
- **Multi-Channel Alerts**: Rich notifications in Slack and Discord with action buttons
- **Time Savings**: Eliminates 15–30 minutes of manual triage per bug
- **Customizable Routing**: Easy developer mapping and priority rules

What Gets Created

Jira Ticket:
- Original GitHub issue details with reporter info
- AI severity assessment and categorization
- Reproduction steps and root cause analysis
- Estimated completion time
- Automatic labeling and priority assignment

GitHub Comment:
- Jira ticket link
- AI analysis summary
- Assigned developer and estimated time

Team Notifications:
- Severity badges and quick-access buttons
- Developer assignment and root cause summary
- Color-coded priority indicators

Use Cases

- Development teams managing 10+ bugs per week
- Open source projects handling community reports
- DevOps teams tracking infrastructure issues
- QA teams coordinating with developers
- Product teams monitoring user-reported bugs

Setup Requirements

Required:
- GitHub repository with admin access
- Jira Software workspace
- OpenAI API key (GPT-4o access)
- Slack workspace OR Discord server

Customization Needed:
- Update developer email mappings in the "Parse GPT Response & Map Data" node
- Replace YOUR_JIRA_PROJECT_KEY with your project key
- Update the Slack channel name (default: dev-alerts)
- Replace YOUR_DISCORD_WEBHOOK_URL with your webhook
- Change your-company.atlassian.net to your Jira URL

Setup Time: 15–20 minutes

Configuration Steps

1. Import the workflow JSON into n8n
2. Add credentials: GitHub OAuth2, Jira API, OpenAI API, Slack, Discord
3. Configure the GitHub webhook in repository settings
4. Customize developer mappings and project settings
5. Test with a sample GitHub issue
6. Activate the workflow

Expected Results

- 90% faster bug triage (20 min → 2 min per issue)
- 100% consistency in bug analysis
- Zero missed notifications
- Better developer allocation
- Improved bug documentation

Tags

GitHub, Jira, AI, GPT-4, Bug Tracking, DevOps, Automation, Slack, Discord, Issue Management, Development, Project Management, OpenAI, Webhook, Team Collaboration
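The "Smart Assignment" routing described above can be sketched as a path-prefix lookup: files mentioned in the issue are matched against a developer map. The map entries and function name below are placeholders to illustrate the customization point, not the template's actual values.

```javascript
// Hypothetical developer mapping, mirroring the kind of table edited in
// the "Parse GPT Response & Map Data" node. Replace with real emails.
const DEVELOPER_MAP = {
  'src/api/': 'backend-dev@your-company.com',
  'src/ui/': 'frontend-dev@your-company.com',
};

// Pick the first developer whose path prefix matches a mentioned file.
function assignDeveloper(mentionedFiles, fallback = 'triage@your-company.com') {
  for (const file of mentionedFiles) {
    for (const [prefix, email] of Object.entries(DEVELOPER_MAP)) {
      if (file.startsWith(prefix)) return email;
    }
  }
  return fallback;
}
```

A fallback assignee keeps bugs from going unrouted when no file matches.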
by Rahul Joshi
📊 Description

Generate high-quality, SEO-optimized content briefs automatically using AI, real-time keyword research, SERP intelligence, and historical content context. This workflow standardizes user inputs, fetches search metrics, analyzes competitors, and produces structured SEO briefs with quality scoring and version control. It also stores all versions in Google Sheets and generates HTML previews for easy review and publishing. 🤖📄📈

What This Template Does

- Normalizes user input from the chat trigger into structured fields (intent, topic, parameters). ✏️
- Fetches real-time keyword metrics such as search volume, CPC, and difficulty from DataForSEO. 🔍
- Retrieves SERP insights through SerpAPI for top competitors, headings, and content gaps. 🌐
- Loads historical brief versions from Google Sheets for continuity and versioning. 📚
- Uses an advanced GPT-4o-mini agent to generate a complete SEO brief with title, metadata, keywords, outline, entities, and internal links. 🤖
- Calculates detailed SEO, differentiation, and completeness quality scores. 📊
- Validates briefs against quality thresholds (outline length, keywords, word count, overall score). ⚡
- Stores approved briefs in Google Sheets with version control and timestamping. 🗂️
- Generates an HTML preview with styled formatting for team review or CMS use. 🖥️
- Sends Slack alerts when a brief does not meet quality standards. 🚨

Key Benefits

✅ Fully automated SEO content brief generation
✅ Uses real-time keyword + SERP + competitor intelligence
✅ Ensures quality through automated scoring and validation
✅ Built-in version control for content operations teams
✅ Beautiful HTML preview ready for editors or clients
✅ Reduces research time from hours to minutes
✅ Ideal for content agencies, SEO teams, and AI-powered workflows

Features

- Chat-triggered brief generation
- Real-time DataForSEO keyword metrics
- SERP analysis tool integration
- GPT-4o-mini structured AI agent
- Google Sheets integration for storing & retrieving versions
- Automated quality scoring (SEO, gaps, completeness)
- HTML preview builder with rich formatting
- Slack alerting for low-quality briefs
- Semantic entities, content gaps, competitor insights

Requirements

- OpenAI API (GPT-4o-mini or compatible model)
- DataForSEO access credentials (Basic Auth)
- SerpAPI key for SERP extraction
- Google Sheets OAuth2 integration
- Optional: Slack webhook for quality alerts

Target Audience

- SEO teams generating large numbers of content briefs
- Content agencies scaling production with automation
- Marketing teams building data-driven content strategies
- SaaS teams wanting automated keyword-based briefs
- Anyone needing structured, high-quality content briefs from chat

Step-by-Step Setup Instructions

1. Connect your OpenAI API credential and confirm GPT-4o-mini availability. 🔌
2. Add DataForSEO HTTP Basic Auth for keyword metrics. 📊
3. Connect SerpAPI for SERP analysis tools. 🌐
4. Add Google Sheets OAuth2 and link your content_versions sheet. 📄
5. Optional: Add a Slack webhook URL for quality alerts. 🔔
6. Test by sending a topic via the chat trigger.
7. Review the generated SEO brief and HTML preview.
8. Enable the workflow for continued use in your content pipeline. 🚀
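The quality-gate step (validating briefs against thresholds for outline length, keywords, word count, and overall score) can be sketched as a simple check that either approves a brief or produces the failure list behind a Slack alert. The threshold values below are assumptions standing in for the workflow's configured ones.

```javascript
// Illustrative quality gate for a generated brief. All thresholds are
// placeholder assumptions, not the template's actual configuration.
function validateBrief(brief) {
  const failures = [];
  if (brief.outline.length < 5) failures.push('outline too short');
  if (brief.keywords.length < 3) failures.push('too few keywords');
  if (brief.wordCount < 800) failures.push('word count below target');
  if (brief.overallScore < 70) failures.push('overall score below threshold');
  return { approved: failures.length === 0, failures };
}
```

Approved briefs would proceed to the Google Sheets version store; failed ones would route to the Slack alert branch with `failures` as the message body.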
by Cheng Siong Chin
How It Works

This workflow automates academic and professional research proposal generation using a multi-agent AI pipeline. It targets researchers, academics, grant writers, and R&D teams who need structured, high-quality proposals efficiently. The core problem it solves: manually drafting proposals is time-consuming, inconsistent, and prone to missing key elements like ethics, impact, and funding alignment.

A Supervisor Agent orchestrates three specialist sub-agents (Research Content, Strategic Planning, and Ethics/Impact), each powered by dedicated AI models. A Funding Agency Research Tool and Web Search Tool supply real-time context. The generated proposal is parsed, then evaluated by a Quality Control Agent. Proposals meeting the quality threshold are formatted and stored; those falling short are flagged for human revision, ensuring only polished outputs reach storage.

Setup Steps

1. Add OpenAI (or compatible) API credentials to all AI model nodes.
2. Configure the Supervisor, Research Content, Strategic Planning, and QC Agent system prompts.
3. Set up the Funding Agency Research Tool with target agency endpoints or search parameters.
4. Connect Web Search Tool credentials (e.g., SerpAPI or Tavily).
5. Configure the storage node (Google Sheets/database) with the target schema.
6. Set the quality score threshold in the Check Quality Score node.

Prerequisites

- Web search API key (SerpAPI/Tavily)
- Google Sheets or database credentials

Use Cases

Grant proposal drafting for research institutions

Customisation

Swap AI models per agent for cost/performance balance

Benefits

Cuts proposal drafting time by 70–80%
by Cheng Siong Chin
How It Works

The workflow runs on a monthly trigger to collect both current-year and multi-year historical HDB data. Once fetched, all datasets are merged with aligned fields to produce a unified table. The system then applies cleaning and normalization rules to ensure consistent scales and comparable values. After preprocessing, it performs pattern mining, anomaly checks, and time-series analysis to extract trends and forecast signals. An AI agent, integrating OpenAI GPT-4, statistical tools, and calculator nodes, synthesizes these results into coherent insights. The final predictions are formatted and automatically written to Google Sheets for reporting and downstream use.

Setup Steps

1. Configure fetch nodes to pull current-year HDB data and three years of historical records.
2. Align and map column names across all datasets.
3. Set normalization and standardization parameters in the cleaning node.
4. Add your OpenAI API key (GPT-4) and link the model, forecasting tool, and calculator nodes.
5. Authorize Google Sheets and configure sheet and cell mappings for automated export.

Prerequisites

- Historical data source with API access (3+ years of records)
- OpenAI API key for the GPT-4 model
- Google Sheets account with API credentials
- Basic understanding of time series data

Use Cases

- Real Estate: Forecast property prices using multi-year historical HDB/market data with confidence intervals
- Finance: Predict market trends by aggregating years of transaction or pricing records

Customization

- Data Source: Replace the HDB fetch nodes with stock prices, sensor data, sales records, or any historical dataset
- Analysis Window: Adjust the years fetched (2–5 years) based on data availability and prediction horizon

Benefits

- Automation: Monthly scheduling eliminates manual data gathering and analysis
- Consolidation: Merges fragmented year-by-year data into a unified historical view
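One common normalization rule of the kind configured in the cleaning node is min-max scaling, which maps each numeric column to [0, 1] so multi-year series are directly comparable. This is a generic sketch under that assumption; the workflow's actual normalization parameters may differ.

```javascript
// Min-max normalization: scale a numeric column to the [0, 1] range.
function minMaxNormalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (max === min) return values.map(() => 0); // constant column: no spread
  return values.map(v => (v - min) / (max - min));
}
```

Normalizing each year's data onto the same scale is what makes the merged multi-year table safe for pattern mining and anomaly checks.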
by Don Jayamaha Jr
Access live KuCoin Spot Market data instantly in Telegram! This workflow integrates the KuCoin REST API with Telegram and an optional GPT-4.1-mini formatter, delivering real-time insights like latest prices, 24h stats, order book depth, trades, and candlesticks — all structured into clean Telegram messages.

🔎 How It Works

1. A Telegram Trigger listens for user commands.
2. User Authentication validates the Telegram ID against an allowlist.
3. A SessionId is generated from the chat ID to support memory across turns.
4. The KuCoin AI Agent orchestrates API requests:
   - 24h Stats → /api/v1/market/stats?symbol=BTC-USDT
   - Order Book Depth → /api/v1/market/orderbook/level2_100?symbol=BTC-USDT
   - Latest Price → /api/v1/market/orderbook/level1?symbol=BTC-USDT
   - Best Bid/Ask → /api/v1/market/orderbook/level1?symbol=BTC-USDT
   - Klines (Candles) → /api/v1/market/candles?symbol=BTC-USDT&type=15min&limit=20
   - Recent Trades → /api/v1/market/histories?symbol=BTC-USDT
   - Average Price (via Ticker) → /api/v1/market/orderbook/level1?symbol=BTC-USDT
5. Utility Tools process results:
   - Calculator → spreads, % changes, averages.
   - Think → reshapes JSON, selects fields, formats outputs.
6. The Message Splitter breaks outputs over 4000 characters (the Telegram limit).
7. The final report is sent back via Telegram SendMessage in human-readable format.

✅ What You Can Do with This Agent

- Get 24h rolling statistics (open, high, low, close, last, volume).
- Retrieve full order book depth (20 or 100 levels) or best bid/ask.
- Monitor real-time latest prices with spreads.
- Analyze candlestick data (OHLCV) across supported intervals.
- View recent public trades with price, size, side, and time.
- Use average price proxies from bid/ask + last trade.
- Receive structured Telegram reports — not raw JSON.

🛠️ Setup Steps

1. Create a Telegram Bot: use @BotFather to create a bot and copy its token.
2. Configure in n8n:
   - Import KuCoin AI Agent v1.02.json.
   - Update the User Authentication node with your Telegram ID.
   - Add Telegram API credentials (bot token).
   - Add your OpenAI API key.
   - (Optional) Add a KuCoin API key.
3. Deploy & Test:
   - Activate the workflow in n8n.
   - Send a query like BTC-USDT to your bot.
   - Instantly receive structured KuCoin Spot Market insights in Telegram.

📤 Output Rules

- Responses grouped into Price, 24h Stats, Order Book, Klines, Trades.
- No raw JSON (only human-readable summaries).
- No financial advice or predictions.
- Always fetch directly from KuCoin's official API.

📺 Setup Video Tutorial

Watch the full setup guide on YouTube:

⚡ Unlock KuCoin Spot Market insights in Telegram — fast, reliable, and API-key free.

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
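The Message Splitter step described in "How It Works" can be sketched as follows: Telegram rejects messages longer than 4096 characters, so long reports are sent in parts of at most 4000 characters, preferring to break at newlines. This is an illustrative sketch, not the template's exact node code.

```javascript
// Split a long Telegram report into <= 4000-character parts,
// breaking at the last newline before the limit when possible.
const LIMIT = 4000;

function splitMessage(text) {
  const parts = [];
  while (text.length > LIMIT) {
    let cut = text.lastIndexOf('\n', LIMIT);
    if (cut <= 0) cut = LIMIT; // no newline found: hard split
    parts.push(text.slice(0, cut));
    text = text.slice(cut).replace(/^\n/, ''); // drop the break newline
  }
  if (text.length) parts.push(text);
  return parts;
}
```

Each part is then delivered as a separate Telegram SendMessage call, so reports of any length arrive intact.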
by Jose Luis Segura
Revolut Extracts Analyzer

This n8n template processes Revolut statements, normalizes transactions, and uses AI to categorize expenses automatically. Use cases include detecting subscriptions, separating internal transfers, and building dashboards to track spending.

How it works

- **Get Categories from Supabase**
- **Download & Transform**
- **Loop Over Items**
- **LLM Categorizer**
- **Insert into Supabase**

How to use

1. Start with the manual trigger node or replace it with a schedule/webhook.
2. Connect Google Drive to provide Revolut CSV files.
3. Ensure Supabase has tables for transactions and categories.
4. Extend with notifications, reports, or BI tools.

Requirements

- Google Drive for CSV files
- Supabase tables for categories & transactions
- LLM provider (OpenAI/Gemini)
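The Download & Transform step can be sketched as mapping raw CSV rows onto the uniform shape the categorizer and Supabase tables expect. The column names below are assumptions based on typical Revolut statement exports; adjust them to match your actual files.

```javascript
// Hypothetical normalization of a Revolut CSV row into a uniform record.
// Column names ('Completed Date', 'Type', etc.) are assumptions.
function normalizeTransaction(row) {
  return {
    date: row['Completed Date'],
    description: row['Description'].trim(),
    amount: parseFloat(row['Amount']),
    currency: row['Currency'],
    // Flag internal transfers so they can be excluded from spending totals.
    isInternalTransfer: row['Type'] === 'TRANSFER',
  };
}
```

Normalizing before the LLM Categorizer keeps prompts consistent and makes the Supabase insert schema-stable.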