by Don Jayamaha Jr
Coinbase AI Agent instantly fetches real-time market data directly in Telegram! This workflow integrates the Coinbase REST API with Telegram (plus optional AI-powered formatting) to deliver the latest crypto price, order book, candles, and trade stats in seconds. Perfect for crypto traders, analysts, and investors who want actionable market data at their fingertips—without API keys.

How It Works
A Telegram bot listens for user requests (e.g., BTC-USD). The workflow calls Coinbase public endpoints (no key required) to fetch real-time data:
- Latest price (ticker)
- 24h stats (open, high, low, close, volume)
- Order book snapshots (best bid/ask + depth)
- Candlestick data (OHLCV for multiple intervals)
- Recent trades (executed orders)

A Calculator node derives useful values like mid-price and spread. An AI or "Think" node reshapes JSON into clear, human-readable messages. A splitter ensures long messages are broken into safe Telegram chunks. The final market insights are sent instantly back to Telegram.

What You Can Do with This Agent
This Telegram bot gives you:
✅ Instant price & 24h stats for any Coinbase spot pair.
✅ Live order book monitoring with top bids/asks.
✅ Candle data analysis (e.g., 15m, 1h, 4h, 1d).
✅ Recent-trade tracking to see market activity.
✅ Clean, structured reports—optionally AI-enhanced.

Set Up Steps
1. Create a Telegram Bot: Use @BotFather on Telegram to create your bot and get an API token.
2. Configure in n8n: Import the provided workflow JSON, then add your Telegram credentials (bot token + your Telegram ID for authentication). (Optional) Add an OpenAI key if you want AI-enhanced formatting.
3. Deploy and Test: Send a query like BTC-USD to your bot and instantly receive Coinbase spot data in Telegram! 🚀

Unlock powerful, real-time Coinbase market insights directly in Telegram—no Coinbase API key required!
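The mid-price and spread derivations the Calculator node performs can be sketched in the kind of JavaScript an n8n Code node runs. The order-book shape below follows Coinbase's public book endpoint (arrays of `[price, size]` string tuples), but treat the exact field layout as an assumption:

```javascript
// Derive mid-price, spread, and spread-in-basis-points from the
// best bid/ask of an order-book snapshot, as the Calculator node does.
function deriveQuoteStats(book) {
  const bestBid = parseFloat(book.bids[0][0]);
  const bestAsk = parseFloat(book.asks[0][0]);
  const mid = (bestBid + bestAsk) / 2;
  const spread = bestAsk - bestBid;
  const spreadBps = (spread / mid) * 10000;
  return { bestBid, bestAsk, mid, spread, spreadBps };
}

// Tiny illustrative snapshot:
const stats = deriveQuoteStats({
  bids: [["60000.00", "0.5"]],
  asks: [["60010.00", "0.3"]],
});
```

In the workflow, values like these feed straight into the formatted Telegram report.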
📺 Setup Video Tutorial
Watch the full setup guide on YouTube:

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

🔗 For support: LinkedIn – Don Jayamaha
by Connor Provines
[Meta] Multi-Format Documentation Generator for N8N Creators (+More)

One-Line Description
Transform n8n workflow JSON into five ready-to-publish documentation formats including technical guides, social posts, and marketplace submissions.

Detailed Description

What it does:
This workflow takes an exported n8n workflow JSON file and automatically generates a complete documentation package with five distinct formats: technical implementation guide, LinkedIn post, Discord community snippet, detailed use case narrative, and n8n Creator Commons submission documentation. All outputs are compiled into a single Google Doc for easy access and distribution.

Who it's for:
- **n8n creators** preparing workflows for the template library or community sharing
- **Automation consultants** documenting client solutions across multiple channels
- **Developer advocates** creating content about automation workflows for different audiences
- **Teams** standardizing workflow documentation for internal knowledge bases

Key Features:
- **Parallel AI generation** - Creates all five documentation formats simultaneously using Claude, saving 2+ hours of manual writing
- **Automatic format optimization** - Each output follows platform-specific best practices (LinkedIn character limits, Discord casual tone, n8n marketplace guidelines)
- **Single Google Doc compilation** - All documentation consolidated with clear section separators and automatic workflow name detection
- **JSON upload interface** - Simple form-based trigger accepts workflow exports without technical setup
- **Smart content adaptation** - The same workflow data is transformed into technical depth for developers, engaging narratives for social media, and searchable descriptions for marketplaces
- **Ready-to-publish outputs** - No editing required; each format follows platform submission guidelines and style requirements

How it works:
1. User uploads an exported n8n workflow JSON through a web form interface
2. Five AI agents process the workflow data in parallel, each generating format-specific documentation (technical guide, LinkedIn post, Discord snippet, use case story, marketplace listing)
3. All outputs merge into a formatted document with section headers and separators
4. Google Docs creates a new document with an auto-generated title from the workflow name and timestamp
5. The final document is populated with all five documentation formats, ready for copying to the respective platforms

Setup Requirements

Prerequisites:
- **Anthropic API** (Claude AI) - Powers all documentation generation; requires paid API access or credits
- **Google Docs API** - Creates and updates documentation; free with a Google Workspace account
- **n8n instance** - Cloud or self-hosted with AI agent node support (v1.0+)

Estimated Setup Time: 20-25 minutes (15 minutes for API credentials, 5-10 minutes for testing with a sample workflow)

Installation Notes
- **API costs**: Each documentation run uses ~15,000-20,000 tokens across five parallel AI calls (approximately $0.30-0.50 per generation at current Claude pricing)
- **Google Docs folder**: Update the folderId parameter in the "Create a document" node to your target folder; the default points to a specific folder that won't exist in your Drive
- **Testing tip**: Use a simple 3-5 node workflow for your first test to verify all AI agents complete successfully before processing complex workflows
- **Wait node purpose**: The 5-second wait between document creation and content update prevents Google Docs API race conditions; don't remove this step
- **Form URL**: After activation, save the form trigger URL for easy access; bookmark it or share it with team members who need to generate documentation

Customization Options

Swappable integrations:
- Replace Google Docs with Notion, Confluence, or file system storage by swapping the final nodes
- Switch from Claude to GPT-4, Gemini, or other LLMs by changing the language model node (may require prompt adjustments)
- Add Slack/email notification nodes after completion to alert when documentation is ready

Adjustable parameters:
- Modify AI prompts in each agent node to match your documentation style preferences or add company-specific guidelines
- Add/remove documentation formats by duplicating or deleting agent nodes and updating the merge configuration
- Change document formatting in the JavaScript code node (section separators, headers, metadata)

Extension possibilities:
- Add automatic posting to LinkedIn/Discord by connecting their APIs after doc generation
- Create version history tracking by appending to existing docs instead of creating new ones
- Build an approval workflow by adding human-in-the-loop steps before final document creation
- Generate visual diagrams by adding Mermaid chart generation from the workflow structure
- Create multi-language versions by adding translation nodes after English generation

Category
Development

Tags
documentation n8n content-generation ai claude google-docs workflow automation-publishing

Use Case Examples
- **Marketplace contributors**: Generate complete n8n template submission packages in minutes instead of hours of manual documentation writing across multiple format requirements
- **Agency documentation**: Automation consultancies can deliver client workflows with a professional documentation suite: technical guides for client IT teams, social posts for client marketing, and narrative case studies for the portfolio
- **Internal knowledge base**: Development teams standardize workflow documentation across projects, ensuring every automation has consistent technical details, use case examples, and setup instructions for team onboarding
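The compilation step (merging the generated outputs with section headers and separators in the JavaScript code node) might look roughly like this. Section names and the separator style are illustrative, not the template's exact code:

```javascript
// Join several generated documentation sections into one document
// body with a header per section and a visual separator between them.
function compileDoc(workflowName, sections) {
  const sep = "\n\n" + "=".repeat(40) + "\n\n";
  const body = Object.entries(sections)
    .map(([title, text]) => `## ${title}\n\n${text}`)
    .join(sep);
  return `# ${workflowName} Documentation\n\n${body}`;
}

const doc = compileDoc("Invoice Sync", {
  "Technical Guide": "Step-by-step implementation notes…",
  "LinkedIn Post": "Short social copy…",
});
```

Adding or removing a documentation format then amounts to adding or removing one key in the `sections` object, which mirrors how agent nodes map into the merge configuration.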
by Divyansh Chauhan
🪄 Prompt To Video (MagicHour API) with Music & YouTube
Automate AI video creation, background music, YouTube uploads, and result logging — all from a single text prompt.

⚡ Overview
This n8n template turns a text prompt into a complete AI-generated video using the MagicHour API, adds background music, generates YouTube metadata, uploads to YouTube, and logs results in Google Sheets — all in one flow. Perfect for creators, marketers, and startups producing YouTube content at scale — from daily AI Shorts to explainers or marketing clips.

🧩 Use Cases
🎥 Daily AI-generated Shorts
🧠 Product explainers
🚀 Marketing & brand automation
🔁 Repurpose blog posts into videos
💡 AI storytelling or creative projects

⚙️ How It Works
1. Trigger when a new row is added to Google Sheets or via Chat input.
2. Gemini parses and normalizes the text prompt.
3. The MagicHour API generates the AI video.
4. Poll until the render completes.
5. (Optional) Mix background audio using MediaFX.
6. Gemini generates the YouTube title, description, and tags.
7. Upload the video to YouTube with metadata.
8. Log the YouTube URL, metadata, and download link back to Google Sheets.

🧰 Requirements
| Service | Purpose |
| --- | --- |
| MagicHour API Key | Text-to-video generation |
| Gemini API Key | Prompt parsing & metadata |
| YouTube OAuth2 | Video uploads |
| Google Sheets OAuth2 | Trigger & logging |
| (Optional) MediaFX Node | Audio mixing |

🗂️ Google Sheets Setup
| Column | Description |
| --- | --- |
| Prompt | Text prompt for video |
| Background Music URL | (Optional) Royalty-free track |
| Status | Tracks flow progress |
| YouTube URL | Auto-filled after upload |
| Metadata | Title, tags, and description JSON |
| Date Generated | (Optional) Auto-filled with video creation date |

📅 100 Daily Prompts Automation
You can scale this workflow to generate one video per day from a batch of 100 prompts in Google Sheets.

Setup Steps
1. Add 100 prompts to your Google Sheet — one per row.
2. Set the Status column for each to Pending.
3. Use a Cron Trigger in n8n to run the workflow once daily (e.g., at 9 AM).
4. Each run picks one Pending prompt, generates a video, uploads it to YouTube, then marks the row as Done.
5. This continues daily until all 100 prompts are processed.

Example Cron Expression
0 9 * * * → Runs the automation every day at 9:00 AM.

Node Sequence
[Schedule Trigger (Daily)] → [Get Pending Prompt from Sheets] → [Gemini Prompt Parser] → [MagicHour Video Generation] → [Optional: MediaFX Audio Mix] → [Gemini Metadata Generator] → [YouTube Upload] → [Update Row in Sheets]

💡 Optional Enhancements
- Add a notification node (Slack, Discord, or Email) after each upload.
- Add a counter check to stop after 100 videos.
- Add a "Paused" column to skip specific rows.

🧠 Gemini Integration
Gemini handles:
- JSON parsing for MagicHour requests
- Metadata generation (title, description, tags)
- Optional creative rewriting of prompts

🎧 Audio Mixing (Optional)
Install the MediaFX Community Node (Settings → Community Nodes → n8n-nodes-mediafx) and use it to blend background music automatically into videos.

🪶 Error Handling
- Avoid "Continue on Fail" in key nodes.
- Use IF branches for MagicHour API errors.
- Add retry/timeout logic for polling steps.

🧱 Node Naming Tips
Rename generic nodes for clarity:
- Merge → Merge Video & Audio
- If → Check Video Completion
- HTTP Request → MagicHour API Request

🚀 How to Use
1. Add MagicHour, Gemini, YouTube, and Sheets credentials.
2. Replace the background music with your own track.
3. Use the Google Sheets trigger or a daily cron for automation.
4. Videos are created, uploaded, and logged — hands-free.

⚠️ Disclaimer
This template uses community nodes (MediaFX). Install and enable them manually. MagicHour API usage may incur costs based on video duration and quality.

🌐 SEO Keywords
MagicHour API, n8n workflow, AI video generator, automated YouTube upload, Gemini metadata, AI Shorts, MediaFX, Google Sheets automation, AI marketing, content automation.
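The daily "pick one Pending row, then mark it Done" step described above can be sketched in a few lines of the JavaScript an n8n Code node runs. Column names match the sheet layout above; the helper itself is illustrative:

```javascript
// Return the first sheet row still marked Pending, or null when
// the whole batch of prompts has been processed.
function nextPendingPrompt(rows) {
  return rows.find((row) => row.Status === "Pending") || null;
}

const rows = [
  { Prompt: "A city skyline at dawn", Status: "Done" },
  { Prompt: "Ocean waves in slow motion", Status: "Pending" },
  { Prompt: "A forest in autumn", Status: "Pending" },
];
const next = nextPendingPrompt(rows);
// After a successful upload, the run would update that row's
// Status to "Done" via the Update Row in Sheets node.
```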
by Jimmy Gay
🤖 Automated SEO Audit with a Team of AI Specialists
This workflow performs a comprehensive, automated monthly SEO and performance audit for any website. It uses a "team" of specialized AI agents to analyze data from multiple sources, aggregates their findings, and generates a final strategic report. Every month, it automatically fetches data from Google Analytics, Google Search Console, and Google PageSpeed Insights, and also performs a live crawl of the target website's homepage.

Key Features
- **Fully Automated**: Runs on a schedule to deliver monthly reports without manual intervention.
- **Multi-Source Analysis**: Gathers data from four key marketing sources for a 360° view.
- **AI Agent Team**: Uses a sophisticated multi-agent system where each AI specializes in one area (Analytics, Performance, Technical SEO).
- **Master Analyst**: A final AI agent synthesizes all specialist reports into a single, actionable strategic plan.
- **Automated Storage**: All individual and final reports are automatically saved to a designated Google Sheet.

⚙️ Setup Instructions
To use this template, you must configure your credentials and set your target website.

1. Set Your Target Domain (Crucial!): Find the Set Target Website node at the beginning of the workflow. In the "Value" field, replace https://www.your-website.com with the URL of the website you want to audit. This updates the URL across the entire workflow automatically.
2. Configure the Schedule Trigger: Click on the Schedule Trigger node to set when you want the monthly report to run.
3. Connect Your Google Credentials:
   - **Google Analytics**: Select your credential in the Get a report node.
   - **Google Search Console**: Select your credential in the Search Console (HTTP Request) node.
   - **Google Sheets**: Select your credential in **all** Google Sheets nodes.
   - **Google PageSpeed API Key**: Go to the "Credentials" tab in n8n and create a new "Generic Credential" with the type "API Key - Query Param". Name it Google API Key. The "Parameter Name" must be key. Paste your PageSpeed API key into the "API Key" field. Then go back to the PageSpeed Insight node, select "API Key - Query Param" for Authentication, and choose your new credential.
4. Connect OpenAI Credentials: This template uses multiple OpenAI Chat Model nodes. Configure each one with your OpenAI API key.
5. Set Your Google Sheet: In each of the Google Sheets nodes, replace the hardcoded "Document ID" with the ID of your own Google Sheet where you want to store the reports.

🔬 Workflow Explained
- Phase 1: Data Collection: The Schedule Trigger starts the workflow. Four parallel branches collect data from Google Analytics, PageSpeed Insights, Search Console, and a direct website crawl.
- Phase 2: Data Processing & Specialist Analysis: Each data source is processed by a dedicated Code node to format the data. The formatted data is then sent to a specialized AI agent (ANALYTICS SPECIALIST, PERFORMANCE SPECIALIST, etc.) for in-depth analysis.
- Phase 3: Report Aggregation: A Merge node waits for all four specialist reports to be completed. A DATA AGGREGATOR node then combines them into a single, comprehensive package.
- Phase 4: Master Synthesis & Storage: The final MASTER ANALYST agent receives the aggregated data and produces a high-level strategic summary with actionable recommendations. This final report is then saved to Google Sheets.
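For reference, the "API Key - Query Param" credential effectively appends `key=<your key>` to each PageSpeed Insights request. Built by hand, the request URL looks like this (the endpoint is Google's public PageSpeed Insights v5 `runPagespeed` route; the helper function is illustrative):

```javascript
// Build a PageSpeed Insights v5 request URL with the API key passed
// as the `key` query parameter, mirroring what the
// "API Key - Query Param" credential does for the HTTP Request node.
function pagespeedUrl(targetUrl, apiKey) {
  const base = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";
  const params = new URLSearchParams({ url: targetUrl, key: apiKey });
  return `${base}?${params.toString()}`;
}
```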
by Avinash Raju
How it works
When a meeting ends in Fireflies, the transcript is automatically retrieved and sent to OpenAI for analysis. The AI evaluates objection handling and call effectiveness, and extracts key objections raised during the conversation. It then generates specific objection handlers for future calls. The analysis is formatted into a structured report and sent to both Slack for immediate visibility and Google Drive for centralized storage.

Set up steps

Prerequisites:
- Fireflies account with API access
- OpenAI API key
- Slack workspace
- Google Drive connected to n8n

Configuration:
- Connect the Fireflies webhook to trigger on meeting completion
- Add your OpenAI API key in the AI analysis nodes
- Configure the Slack channel destination for feedback delivery
- Set the Google Drive folder path for report storage
- Adjust the AI prompts in the sticky notes to match your objection categories and sales methodology
by Stéphane Bordas
How it Works
This workflow lets you build a Messenger AI Agent capable of understanding text, images, and voice notes, and replying intelligently in real time. It starts by receiving messages from a Facebook Page via a Webhook, detects the message type (text, image, or audio), and routes it through the right branch. Each input is then prepared as a prompt and sent to an AI Agent that can respond using text generation, perform quick calculations, or fetch information from Wikipedia. Finally, the answer is formatted and sent back to Messenger via the Graph API, creating a smooth, fully automated chat experience.

Set Up Steps
1. Connect credentials: Add your OpenAI API key and Facebook Page Access Token in n8n credentials.
2. Plug in the webhook: Copy the Messenger webhook URL from your workflow and paste it into your Facebook Page Developer settings (Webhook → Messages → Subscribe).
3. Customize the agent: Edit the System Message of the AI Agent to define tone, temperature, and purpose (e.g., "customer support", "math assistant").
4. Enable memory & tools: Turn on Simple Memory to keep conversation context and activate tools like Calculator or Wikipedia.
5. Test & deploy: Switch to production mode, then test text, image, and voice messages directly from Messenger.

Benefits
💬 Multi-modal Understanding — Handles text, images, and audio messages seamlessly.
⚙️ Full Automation — End-to-end workflow from Messenger to AI and back.
🧠 Smart Replies — Uses OpenAI + Wikipedia + Calculator for context-aware answers.
🚀 No-Code Setup — Build your first Messenger AI in less than 30 minutes.
🔗 Extensible — Easily connect more tools or APIs like Airtable, Google Sheets, or Notion.
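Behind the webhook step, Facebook first verifies the endpoint with a GET handshake: it sends `hub.mode`, `hub.verify_token`, and `hub.challenge` query parameters and expects the challenge echoed back when the token matches. n8n's Webhook node handles this for you; the sketch below only illustrates the contract:

```javascript
// Messenger webhook verification: echo hub.challenge when the
// verify token matches, otherwise reject the subscription attempt.
function verifyWebhook(query, expectedToken) {
  if (
    query["hub.mode"] === "subscribe" &&
    query["hub.verify_token"] === expectedToken
  ) {
    return { status: 200, body: query["hub.challenge"] };
  }
  return { status: 403, body: "Forbidden" };
}
```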
by Rodrigo
How it works
This workflow creates a complete AI-powered restaurant ordering system through WhatsApp. It receives customer messages, processes multimedia content (text, voice, images, PDFs, location), uses GPT-4 to understand customer intent and manage conversations, handles the complete ordering flow from menu selection to payment verification, and sends formatted orders to restaurant staff. The system maintains conversation memory, verifies payment receipts using OCR, and provides automated responses in multiple languages.

Who's it for
Restaurant owners, food delivery services, and hospitality businesses looking to automate customer service and order management through WhatsApp without hiring additional staff.

Requirements
- WhatsApp Business API account
- OpenAI API key (GPT-4/GPT-4o access recommended)
- Supabase account (for conversation memory and vector storage)
- Google Drive account (for menu images and QR codes)
- Google Maps API key (for location services)
- Gemini API key (for PDF processing)

How to set up
1. Configure credentials: Add your WhatsApp Business API, OpenAI, Supabase, Google Drive, and Gemini API credentials to n8n.
2. Update phone numbers: Replace [PHONE_NUMBER] placeholders with your actual restaurant and staff phone numbers.
3. Customize restaurant details: Replace [RESTAURANT_NAME], [RESTAURANT_OWNER_NAME], and [BANK_ACCOUNT_NUMBER] with your information.
4. Upload menu images: Add your menu images to Google Drive and update the file IDs.
5. Set up Supabase: Create tables for chat memory and upload your menu/restaurant information to the vector database.
6. Configure AI prompts: Update the restaurant information in the AI agent system messages.
7. Test the workflow: Send test messages to verify all integrations work.

How to customize the workflow
- **Menu management**: Update Google Drive file IDs to display your current menu images
- **Payment verification**: Modify the receipt analysis logic to match your bank's receipt format
- **Order formatting**: Customize the order confirmation template sent to kitchen staff
- **AI personality**: Adjust the restaurant agent's tone and responses in the system prompts
- **Languages**: The AI supports multiple languages; customize welcome messages for your target market
- **Business hours**: Add time-based logic to handle orders outside operating hours
- **Delivery zones**: Integrate with your delivery area logic using the location processing features
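The business-hours customization can be as simple as a time gate at the start of the flow. A minimal sketch, assuming placeholder opening hours of 11:00-22:00:

```javascript
// Return true when an order arrives inside operating hours.
// Opening and closing hours are placeholders; adjust per restaurant,
// and mind the server's time zone in a real deployment.
function isWithinBusinessHours(date, openHour = 11, closeHour = 22) {
  const hour = date.getHours();
  return hour >= openHour && hour < closeHour;
}
```

An IF node using this check can route after-hours messages to an automated "we're closed, we open at 11:00" reply instead of the ordering agent.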
by noda
AI Recommender: From Food Photo to Restaurant and Book (Google Books Integrated)

What it does
- Analyzes a food photo with an AI vision model to extract the dish name + category
- Searches nearby restaurants with Google Places and selects the single best (rating → reviews tie-break)
- Finds a matching book via Google Books and posts a tidy summary to Slack

Who it's for
Foodies, bloggers, and teams who want a plug-and-play flow that turns a single food photo into a dining pick + themed reading.

How it works
1. Google Drive Trigger detects a new photo
2. Dish Classifier (Vision LLM) → JSON (dish_name, category, basic macros)
3. Search Google Places near your origin; Select Best Place (AI)
4. Recommend Book (AI) → Search Google Books → format details
5. Post to Slack (JP/EN both possible)

Requirements
Google Drive / Google Places / Google Books credentials, LLM access (OpenRouter/OpenAI), Slack OAuth.

Customize
Edit the origin/radius in Set Origin & Radius, tweak the category→keyword mapping in Normalize Classification, and adjust the Slack channel & message in Post to Slack.
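The "rating → reviews tie-break" selection can be sketched like this. The field names follow a Google Places response (`rating`, `user_ratings_total`), but treat the exact shape as an assumption:

```javascript
// Pick the single best place: highest rating first, with review
// count as the tie-breaker, as the Select Best Place step does.
function selectBestPlace(places) {
  return [...places].sort(
    (a, b) =>
      b.rating - a.rating || b.user_ratings_total - a.user_ratings_total
  )[0];
}

const best = selectBestPlace([
  { name: "Ramen A", rating: 4.5, user_ratings_total: 120 },
  { name: "Ramen B", rating: 4.5, user_ratings_total: 300 },
  { name: "Ramen C", rating: 4.2, user_ratings_total: 900 },
]);
```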
by Cheng Siong Chin
How It Works
The workflow runs on a monthly trigger to collect both current-year and multi-year historical HDB data. Once fetched, all datasets are merged with aligned fields to produce a unified table. The system then applies cleaning and normalization rules to ensure consistent scales and comparable values. After preprocessing, it performs pattern mining, anomaly checks, and time-series analysis to extract trends and forecast signals. An AI agent, integrating OpenAI GPT-4, statistical tools, and calculator nodes, synthesizes these results into coherent insights. The final predictions are formatted and automatically written to Google Sheets for reporting and downstream use.

Setup Steps
1. Configure fetch nodes to pull current-year HDB data and three years of historical records.
2. Align and map column names across all datasets.
3. Set normalization and standardization parameters in the cleaning node.
4. Add your OpenAI API key (GPT-4) and link the model, forecasting tool, and calculator nodes.
5. Authorize Google Sheets and configure sheet and cell mappings for automated export.

Prerequisites
- Historical data source with API access (3+ years of records)
- OpenAI API key for the GPT-4 model
- Google Sheets account with API credentials
- Basic understanding of time series data

Use Cases
- Real Estate: Forecast property prices using multi-year historical HDB/market data with confidence intervals
- Finance: Predict market trends by aggregating years of transaction or pricing records

Customization
- Data Source: Replace the HDB fetch nodes with stock prices, sensor data, sales records, or any historical dataset
- Analysis Window: Adjust the years fetched (2-5 years) based on data availability and prediction horizon

Benefits
- Automation: Monthly scheduling eliminates manual data gathering and analysis
- Consolidation: Merges fragmented year-by-year data into a unified historical view
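The normalization in setup step 3 can be as simple as z-score standardization, which puts each year's series on a comparable scale. A minimal sketch; the template's exact cleaning rules may differ:

```javascript
// Z-score standardization: subtract the mean and divide by the
// (population) standard deviation so series become comparable.
function zScores(values) {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return values.map((v) => (v - mean) / Math.sqrt(variance));
}

// Example with three yearly price points:
const standardized = zScores([300000, 320000, 340000]);
```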
by Don Jayamaha Jr
📊 WEEX Spot Market Quant AI Agent (All-in-One Multi-Agent Trading System)

⚡ Overview
This multi-agent n8n workflow delivers an automated, intelligent trading analysis system for the WEEX Spot Market. It uses GPT-4o to interpret user prompts, route them to the correct sub-agent tools, analyze technical indicators, price data, and sentiment insights, and return concise trading signals via Telegram or downstream automations. No need to download additional workflows—everything is embedded in this single orchestrated agent.

🧠 Core Features
🔹 Single-entry architecture → Built-in orchestration logic with no external subworkflow dependencies
🔹 Multi-timeframe indicator analysis → 15m, 1h, 4h, and 1d
🔹 Sentiment + news insights from crypto sources
🔹 Live price, volume, kline, and order book analysis
🔹 LLM-powered signal evaluation using GPT-4o
🔹 Telegram integration for fast human queries or autonomous alerts

🤖 Built-In Agent Modules

| Module | Description |
| --- | --- |
| ✅ Financial Analyst Tool | Routes prompts, interprets tokens, and triggers sub-agents |
| ✅ News & Sentiment Analyst Tool | Gathers real-time sentiment from crypto news sources |
| ✅ Technical Indicator Tools | 15m, 1h, 4h, 1d indicators using WEEX spot market data |
| ✅ Price & Order Book Agent | Fetches real-time stats, price, and structure |
| ✅ Trading Signal Evaluator | GPT-4o merges all data and generates the trading decision |

🖥️ Prompt Flow Example
User Input: "Should I long or short ETH on WEEX today?"
→ The Financial Analyst Agent interprets the query
→ Fetches multi-timeframe indicators, live price, and sentiment
→ GPT-4o evaluates conditions and creates a recommendation
→ Output delivered via Telegram:

📈 ETH/USDT Overview
• Price: $3,710
• 4h RSI: 64.5 – Slightly Overbought
• MACD: Bullish Crossover
• Market Sentiment: 🔼 Positive
Recommendation: Consider a long entry with a stop at $3,640.

🔧 Setup Instructions
Follow these steps to fully deploy and operate the WEEX Quant AI Agent in your n8n environment:

🟢 Get a Telegram Bot API Key
- Create your bot via @BotFather on Telegram
- Save the token it gives you (format: 123456789:ABCdefGHIjkLMNopQRStuvWXyz)

🔑 Add an OpenAI / DeepSeek Chat API Key
- Compatible with GPT-4o (OpenAI) or DeepSeek Chat

📈 (Optional) WEEX API Keys
- If expanding to live trading or authenticated data, get a WEEX Spot API key from your account dashboard
- Not required for the analysis agent to function

🔗 Connect Telegram to n8n
- Use the Telegram Trigger and Telegram nodes with your API key
- Ensure the webhook is set correctly (or use polling mode)

✅ Example Use Cases

| Scenario | Outcome |
| --- | --- |
| "Is BTC bullish or bearish?" | Merged indicator + sentiment + price analysis summary |
| "Get 15m and 4h trends for SOL" | Multi-timeframe volatility vs macro trend report |
| "Latest crypto news on XRP" | Real-time filtered news + DeepSeek sentiment summary |
| "What's the order book structure?" | Level-by-level spread analysis with buy/sell volumes |

🎥 Watch the Live Demo

👨‍💼 Licensing & Support
🧾 © 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade signal framework are IP-protected. No unauthorized rebranding or replication permitted.

📩 Connect with the Creator
Don Jayamaha – LinkedIn Profile
by John
How it works
1. User Signup & Verification: The workflow starts when a user signs up. It generates a verification code and sends it via SMS using Twilio.
2. Code Validation: The user replies with the code. The workflow checks the code and, if valid, creates a session for the user.
3. Conversational AI: Incoming SMS messages are analyzed by ChatGPT for sentiment, intent, and urgency. The workflow stores the conversation context and generates smart, AI-powered replies.
4. Escalation Handling: If the AI detects urgency or frustration, the workflow escalates the session—alerting your team and sending a supportive SMS to the user.

Set up steps

**Estimated setup time:** 10–20 minutes for most users.

**What you'll need:**
- A free n8n account (self-hosted or cloud)
- Free Twilio account (for SMS)
- OpenAI API key (for AI)
- A PostgreSQL database (Supabase, Neon, or local)

**Setup process:**
1. Import this workflow into n8n.
2. Add your Twilio and OpenAI credentials as environment variables or n8n credentials.
3. Update the webhook URLs in your Twilio console (for incoming SMS).
4. (Optional) Review the sticky notes in the workflow for detailed, step-by-step guidance.
by Don Jayamaha Jr
Get real-time MEXC Spot Market data instantly in Telegram! This workflow connects the MEXC REST v3 API with Telegram and optional GPT-4.1-mini formatting, providing users with latest prices, 24h stats, order book depth, trades, and candlesticks in structured, Telegram-ready messages.

🔎 How It Works
1. A Telegram Trigger node listens for commands.
2. User Authentication ensures only authorized Telegram IDs can access the bot.
3. A Session ID is generated from chat.id for lightweight memory.
4. The MEXC AI Agent coordinates multiple API calls via HTTP nodes:
   - Ticker (Latest Price) → /api/v3/ticker/price?symbol=BTCUSDT
   - 24h Stats → /api/v3/ticker/24hr?symbol=BTCUSDT
   - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
   - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
   - Candlesticks (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
   - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
5. Utility nodes refine the data:
   - Calculator → spreads, averages, mid-prices.
   - Think → formats raw JSON into human-readable summaries.
   - Simple Memory → saves symbol, sessionId, and context across turns.
6. A Message Splitter prevents Telegram messages from exceeding 4000 characters.
7. Results are sent back to Telegram in structured, readable reports.

✅ What You Can Do with This Agent
- Get latest prices & 24h stats for any spot pair.
- Retrieve order book depth (customizable levels).
- Monitor best bid/ask quotes for spreads.
- View candlestick OHLCV data for multiple timeframes.
- Check recent trades (up to 100).
- Receive clean Telegram reports — no raw JSON.

🛠️ Setup Steps
1. Create a Telegram Bot: Use @BotFather to create a bot and copy its API token.
2. Configure in n8n: Import MEXC AI Agent v1.02.json. Update the User Authentication node with your Telegram ID. Add Telegram API credentials (bot token). Add an OpenAI API key. (Optional) Add a MEXC API key.
3. Deploy & Test: Activate the workflow in n8n, send a query like BTCUSDT to your bot, and instantly receive structured MEXC Spot Market data in Telegram.

📤 Output Rules
- Output is grouped into Price, 24h Stats, Order Book, Candlesticks, Trades.
- No raw JSON — formatted summaries only.
- Complies with Telegram's 4000-character message limit (auto-split).

📺 Setup Video Tutorial
Watch the full setup guide on YouTube:

⚡ Unlock real-time MEXC Spot Market insights in Telegram — clean, fast, and API-key free.

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
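The Message Splitter's chunking can be sketched as a simple slice loop. The 4000-character ceiling stays safely under Telegram's hard 4096-character message limit; a production splitter might additionally avoid breaking mid-word or mid-line:

```javascript
// Break a long report into Telegram-safe chunks of at most
// `limit` characters each, preserving the original order.
function splitForTelegram(text, limit = 4000) {
  const chunks = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}
```

Each chunk is then sent as a separate Telegram message, which is why long candlestick or trade reports arrive in several parts.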