by Peter Zendzian
This n8n template demonstrates how to build an intelligent entity research system that automatically discovers, researches, and creates comprehensive profiles for business entities, concepts, and terms.

Use cases are many: try automating glossary creation for technical documentation, building standardized definition databases for compliance teams, researching industry terminology for content creation, or developing training materials with consistent entity explanations!

Good to know

- Researching each entity typically costs $0.08-$0.34, depending on the complexity and sources required. The workflow includes smart duplicate detection to minimize unnecessary API calls.
- The workflow requires multiple AI services and a vector database, so setup may take longer than for simpler templates.
- Entity definitions are stored locally in your Qdrant database and can be reused across multiple projects.

How it works

- The workflow checks your existing knowledge base first to avoid duplicate research on entities you've already processed.
- If the entity is new, an AI research agent intelligently combines your vector database, Wikipedia, and live web research to gather comprehensive information.
- The system creates structured entity profiles with definitions, categories, examples, common misconceptions, and related entities - perfect for business documentation.
- AI-powered validation ensures all entity profiles are complete, accurate, and suitable for business use before storage.
- Each researched entity is stored in your Qdrant vector database, creating a growing knowledge base that improves research efficiency over time.
- The workflow includes multiple stages of duplicate prevention to avoid unnecessary processing and API costs.

How to use

- The manual trigger node is used as an example, but feel free to replace it with other triggers such as form submissions, content management systems, or automated content pipelines.
- You can research multiple related entities in sequence, and the system will automatically identify connections and relationships between them.
- Provide topic and audience context to get tailored explanations suited to your specific business needs.

Requirements

- OpenAI API account for o4-mini (entity research and validation)
- Qdrant vector database instance (local or cloud)
- Ollama with the nomic-embed-text model for embeddings
- The "Automate Web Research with GPT-4, Claude & Apify for Content Analysis and Insights" workflow (for live web research capabilities)
- Anthropic API account for Claude Sonnet 4 (used by the web research workflow)
- Apify account for web scraping (used by the web research workflow)

Customizing this workflow

Entity research automation can be adapted for many specialized domains. Try focusing on specific industries such as legal terminology (targeting official legal sources), medical concepts (emphasizing clinical accuracy), or financial terms (prioritizing regulatory definitions). You can also customize the validation criteria to match your organization's specific quality standards.
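The duplicate-detection idea described here compares a new entity's embedding against vectors already stored in Qdrant. A minimal sketch of that comparison in pure Python, using toy 3-dimensional vectors in place of real nomic-embed-text embeddings; the 0.9 similarity threshold is an assumption, not the template's configured value:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_duplicate(candidate_vec, stored_vecs, threshold=0.9):
    """Return True if the candidate embedding is close enough to any
    stored entity embedding to skip re-researching it."""
    return any(cosine_similarity(candidate_vec, v) >= threshold for v in stored_vecs)

# Toy 3-dimensional embeddings (real nomic-embed-text vectors are much longer).
stored = [[0.1, 0.9, 0.2], [0.8, 0.1, 0.1]]
print(is_duplicate([0.11, 0.88, 0.21], stored))  # near-identical entity: True
print(is_duplicate([0.0, 0.0, 1.0], stored))     # unrelated entity: False
```

In the actual workflow this check is delegated to a Qdrant similarity search rather than computed by hand, but the decision rule is the same.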
by Jay Emp0
🤖 Reddit Auto-Comment Assistant (AI-Driven Marketing Workflow)

Automate how you reply to Reddit posts using AI-generated, first-person comments that sound human, follow subreddit rules, and (optionally) promote your own links or products.

🧩 Overview

This workflow monitors Reddit mentions (via F5Bot Gmail alerts) and automatically:

1. Fetches the relevant Reddit post.
2. Checks the subreddit's rules for self-promotion.
3. Generates a comment using GPT-5-style prompting (human-like tone, <255 chars).
4. Optionally promotes your chosen product from Google Sheets.
5. Posts the comment automatically.

It's ideal for creators, marketers, or founders who want to grow awareness organically and authentically on Reddit — without sounding like a bot.

🧠 Workflow Diagram

🚀 Key Features

| Feature | Description |
|----------|--------------|
| AI-Generated Reddit Replies | Uses GPT-powered reasoning and prompt structure that mimics a senior marketing pro typing casually. |
| Rule-Aware Posting | Reads subreddit rules and adapts tone — no promo where it's not allowed. |
| Product Integration | Pulls product name + URL from your Google Sheet automatically. |
| Full Automation Loop | From Gmail → Gsheet → Reddit |
| Evaluation Metrics | Logs tool usage, link presence, and formatting to ensure output quality. |

🧰 Setup Guide

1️⃣ Prerequisites

| Tool | Purpose |
|------|----------|
| n8n Cloud or Self-Host | Workflow automation environment |
| OpenAI API key | For comment generation |
| Reddit OAuth2 credentials | To post comments |
| Google Sheets API | To fetch and evaluate products |
| Gmail API | To read F5Bot alerts |

2️⃣ Import the Workflow

1. Download Reddit Assistant.json.
2. In n8n, click Import Workflow → From File.
3. Paste your credentials in the corresponding nodes: Reddit account, Gmail account, Gsheet account, OpenAI API.

3️⃣ Connect Your Google Sheets

You'll need two Google Sheets:

| Sheet | Purpose | Example Tab |
|--------|----------|-------------|
| Product List | Contains all your product names, URLs, goals, and CTAs | promo |
| Reddit Evaluations | Logs AI performance metrics and tool usage | reddit evaluations |

4️⃣ Set Up Gmail Trigger (F5Bot)

1. Subscribe to F5Bot alerts for keywords like "blog automation" or your brand name.
2. Configure the Gmail Trigger to only pull from sender admin@f5bot.com.

5️⃣ Configure AI Agent Prompt

The built-in prompt follows a GPT-5-style structured reasoning chain:

1. Reads the Reddit post + rules.
2. Determines if promotion is allowed.
3. Fetches product data from Google Sheets.
4. Writes a short, human comment (<255 chars).
5. Avoids buzzwords and fake enthusiasm.

📊 Workflow Evaluations

The workflow includes automatic evaluation nodes to track:

| Metric | Description |
|--------|--------------|
| contains link | Checks if the comment includes a URL |
| contains dash | Detects format breaks |
| Tools Used | Logs which AI tools were used in reasoning |
| executionTime | Monitors average latency |

💡 Why This Workflow Has Value

| Value | Explanation |
|--------|--------------|
| Saves time | Automates Reddit marketing without manual engagement. |
| Feels human | AI comments use a fast-typing, casual tone (e.g., "u," "ur," "idk"). |
| Follows rules | Respects subreddits where promo is banned. |
| Data-driven | Logs performance across 10 test cases for validation. |
| Monetizable | Can promote Gumroad, YouTube, or SaaS products safely. |

⚙️ Example Use Case

> "I used this automation to pull $1.4k by replying to Reddit posts about blog automation.
> Each comment felt natural and directed users to my n8n workflow."
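The evaluation metrics tracked by the workflow can be sketched as a small checker. This is an illustrative recreation, not the template's actual node code: the 255-character limit comes from the prompt rules, and the dash heuristic here (em dash or spaced hyphen) is an assumption about what "contains dash" detects:

```python
import re

MAX_LEN = 255  # the prompt targets comments under 255 characters

def evaluate_comment(comment: str) -> dict:
    """Recreate the workflow's evaluation checks on a generated comment."""
    return {
        "contains_link": bool(re.search(r"https?://\S+", comment)),
        # Heuristic for format breaks: em dash or spaced hyphen.
        "contains_dash": ("\u2014" in comment) or (" - " in comment),
        "within_length": len(comment) <= MAX_LEN,
    }

result = evaluate_comment(
    "ngl i automated my whole blog with this, writeup here: https://example.com"
)
print(result)
```

Logging a dict like this per comment is what populates the "Reddit Evaluations" sheet described above.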
by Guilherme Campos
This n8n workflow automates the process of creating high-quality, scroll-stopping LinkedIn posts based on live research, AI insight generation, and Google Sheets storage. Instead of relying on recycled AI tips or boring summaries, this system combines real-time trend discovery via Perplexity, structured idea shaping with GPT-4, and content generation tailored to a bold, human LinkedIn voice. The workflow saves each post idea (with image prompt, tone, and summary) to a Google Sheet, sends you a Telegram alert, and even formats your content for direct publishing. Perfect for solopreneurs, startup marketers, or anyone who posts regularly on LinkedIn and wants to sound original, not robotic.

Who's it for

- Content creators and solopreneurs building an audience on LinkedIn
- Startup teams, PMs, and tech marketers looking to scale thought leadership
- Anyone tired of generic AI-generated posts and craving structured, edgy output

How it works

1. A daily trigger at 6 AM starts the workflow.
2. Pulls recent post history from Google Sheets to avoid repeated ideas.
3. Perplexity AI scans the web and generates 3 structured post ideas (including tone, hook, visual prompt, and summary).
4. GPT-4 refines each into a bold, human-style LinkedIn post, following detailed brand voice rules.
5. Saves everything to Google Sheets (idea, content, image prompt, post status).
6. Sends a Telegram notification to alert you that new ideas are ready.

How to set up

1. Connect your Perplexity, OpenAI, Google Sheets, and Telegram credentials.
2. Point to your preferred Google Sheet and sheet tab for storing post data.
3. Adjust the schedule trigger if you want more or fewer ideas per week.
4. (Optional) Tweak the content style prompt to match your personal tone or niche.

Requirements

- Perplexity API account
- OpenAI API access (GPT-4 or GPT-4o-mini)
- Telegram bot connected to your account
- Google Sheets document with appropriate column headers

How to customize the workflow

- Change the research sources or prompt tone (e.g., more tactical, more spicy, more philosophical)
- Add an image generation tool to turn prompts into visuals for each post
- Filter or tag ideas based on type (trend, tip, story, etc.)
- Post automatically via the LinkedIn API or a Buffer integration
by Khairul Muhtadin
Auto Repost Job with RAG is a workflow designed to automatically extract, process, and publish job listings from monitored sources using Google Drive, OpenAI, Supabase, and WordPress. This integration streamlines job reposting by intelligently extracting relevant job data, mapping categories and types accurately, managing media assets, and publishing posts seamlessly.

💡 Why Use Auto Repost Job with RAG?

- Automated Publishing: Slash manual entry time by automating job post extraction and publication, freeing hours every week.
- Error-Resistant Workflow: Avoid incomplete job posts with smart validation checks that ensure all necessary fields are ready before publishing.
- Consistent Content Quality: Maintain formatting, SEO, and style consistency backed by AI-driven article regeneration that adheres strictly to your guidelines.
- Competitive Edge: Get fresh jobs live faster than your competitors without lifting more than a finger—because robots don't take coffee breaks!

⚡ Perfect For

- Recruiters & HR Teams: Accelerate your job posting funnel with error-free automation.
- Content Managers: Keep your job boards fresh with AI-enriched, standardized listings.
- Digital Marketers: Automate content flows to boost SEO and engagement without the headache.

🔧 How It Works

- ⏱ Trigger: Job link inputs via Telegram.
- 📎 Process: Auto-download of job documents; data extraction using Jina AI and OpenAI's GPT-4 model to parse content and metadata.
- 🤖 Smart Logic: An AI agent regenerates articles based on strict RAG dataset rules; category and job type IDs are mapped to match the WordPress taxonomy; fallback attempts use default images for missing logos.
- 💌 Output: Job posts are formatted and published to WordPress; success or failure updates are sent back via Telegram notifications.
- 🗂 Storage: Uses a Supabase vector store for document embedding and retrieval related to formatting rules and job data.

🔐 Quick Setup

1. Import the provided JSON workflow into your n8n instance.
2. Add credentials: Google Drive OAuth, OpenAI API, Supabase API, Telegram API, WordPress API.
3. Customize: set your Google Drive folder ID, WordPress endpoints, and Telegram chat IDs.
4. Update: confirm default logo URLs and fallback settings as needed.
5. Test: submit a new job link via Telegram or add a file to the watched Drive folder.

🧩 You'll Need

- An active n8n instance
- Google Drive account with OAuth2 credentials
- OpenAI API access for GPT-4 processing
- Supabase account configured for vector storage
- WordPress API credentials for job listing publishing
- Telegram bot for notifications and job link inputs

🛠️ Level Up Ideas

- Integrate Slack, Gmail, or Teams notifications for team visibility
- Add a sentiment analysis step to prioritize certain jobs
- Automate social media posting of new job listings for wider reach

Made by: Khmuhtadin
Tags: automation, job-posting, AI, OpenAI, Google Drive, WordPress
Category: content automation

Need custom work? Contact me
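The category/type mapping and default-logo fallback described under Smart Logic can be sketched as follows. All term IDs, names, and URLs below are hypothetical placeholders; real IDs come from your WordPress site's own taxonomy endpoints:

```python
# Hypothetical mapping from AI-extracted labels to WordPress term IDs;
# fetch the real IDs from your site's /wp-json/wp/v2 taxonomy routes.
CATEGORY_IDS = {"engineering": 12, "marketing": 15, "design": 18}
JOB_TYPE_IDS = {"full-time": 3, "part-time": 4, "contract": 5}
DEFAULT_LOGO_URL = "https://example.com/default-logo.png"  # fallback asset

def map_job_terms(job: dict) -> dict:
    """Map extracted job fields to WordPress taxonomy IDs, with fallbacks
    for unknown categories/types and missing company logos."""
    return {
        "category_id": CATEGORY_IDS.get(job.get("category", "").lower(),
                                        CATEGORY_IDS["engineering"]),
        "job_type_id": JOB_TYPE_IDS.get(job.get("type", "").lower(),
                                        JOB_TYPE_IDS["full-time"]),
        "logo_url": job.get("logo_url") or DEFAULT_LOGO_URL,
    }

print(map_job_terms({"category": "Marketing", "type": "Contract", "logo_url": None}))
```

The fallback defaults here are an assumption; the template lets you confirm default logo URLs and fallback settings during setup.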
by Rahul Joshi
Description

Transform Figma design files into detailed QA test cases with AI-driven analysis and structured export to Google Sheets. This workflow helps QA and product teams streamline design validation, test coverage, and documentation — all without manual effort. 🎨🤖📋

What This Template Does

- Step 1: Trigger manually and input your Figma file ID. 🎯
- Step 2: Fetches the full Figma design data (layers, frames, components) via the API. 🧩
- Step 3: Sends the structured design JSON to GPT-4o-mini for intelligent test case generation. 🧠
- Step 4: AI analyzes UI components, user flows, and accessibility aspects to generate 5-10 test cases. ✅
- Step 5: Parses and formats the results into a clean structure.
- Step 6: Exports the test cases directly to Google Sheets for QA tracking and reporting. 📊

Key Benefits

✅ Saves 2-3 hours per design by automating test case creation
✅ Ensures consistent, comprehensive QA documentation
✅ Uses AI to detect UX, accessibility, and functional coverage gaps
✅ Centralizes output in Google Sheets for easy collaboration

Features

- Figma API integration for design parsing
- GPT-4o-mini model for structured test generation
- Automated Google Sheets export
- Dynamic file ID and output schema mapping
- Built-in error handling for large design files

Requirements

- Figma Personal Access Token
- OpenAI API key (GPT-4o-mini)
- Google Sheets OAuth2 credentials

Target Audience

- QA and Test Automation Engineers
- Product & Design Teams
- Startups and Agencies validating Figma prototypes

Setup Instructions

1. Connect your Figma token as HTTP Header Auth (X-Figma-Token).
2. Add your OpenAI API key in n8n credentials (model: gpt-4o-mini).
3. Configure Google Sheets OAuth2 and select your sheet.
4. Input the Figma file ID from the design URL.
5. Run once manually, verify the output, then enable for regular use.
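The design data fetched in Step 2 arrives as a nested node tree (Figma's GET file endpoint returns a `document` node whose children carry `id`, `name`, and `type`). A minimal sketch of pulling out the frames and components the AI will write test cases for, using a toy stand-in for the API response:

```python
def collect_nodes(node: dict, wanted_types=("FRAME", "COMPONENT"), found=None):
    """Recursively walk a Figma file's node tree and collect the frames
    and components worth generating test cases for."""
    if found is None:
        found = []
    if node.get("type") in wanted_types:
        found.append({"id": node.get("id"), "name": node.get("name"),
                      "type": node["type"]})
    for child in node.get("children", []):
        collect_nodes(child, wanted_types, found)
    return found

# Minimal stand-in for the JSON returned by GET /v1/files/{file_id}
figma_file = {
    "document": {
        "id": "0:0", "type": "DOCUMENT", "children": [
            {"id": "1:1", "name": "Login Page", "type": "FRAME", "children": [
                {"id": "1:2", "name": "Submit Button", "type": "COMPONENT",
                 "children": []},
            ]},
        ],
    }
}
print(collect_nodes(figma_file["document"]))
```

Pre-filtering the tree like this is one way to keep the JSON sent to GPT-4o-mini small for large design files; the template's own handling may differ.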
by Yasser Sami
Human-in-the-Loop LinkedIn Post Generator (Telegram + AI)

This n8n template demonstrates how to build a human-in-the-loop AI workflow that helps you create professional LinkedIn posts via Telegram. The agent searches the web, drafts content, asks for your approval, and refines it based on your feedback — ensuring every post sounds polished and on-brand.

Who's it for

- Content creators and marketers who want to save time drafting LinkedIn posts.
- SaaS founders or solopreneurs who regularly share updates or insights.
- Anyone who wants an AI writing assistant with human control in the loop.

How it works / What it does

1. Trigger: The workflow starts when you send a message to the Telegram bot asking it to write a LinkedIn post (e.g., "Write a LinkedIn post about AI in marketing").
2. Research: The AI agent uses the Tavily tool to search the web and gather context for your topic.
3. Drafting: An AI model (OpenAI or Gemini) creates a professional LinkedIn post based on the findings.
4. Human-in-the-loop: The bot sends the draft to you in Telegram and asks: "Good to go?"
   - If you approve → the post is saved to a Google Sheet, ready to publish.
   - If you disapprove and give feedback → the feedback is sent to a second AI agent that revises and improves the post. The improved draft is sent back to you for final approval.
5. Finalization: Once approved, the post is appended to a Google Sheet — your ready-to-post content library.

This workflow combines AI creativity with human oversight to produce polished, authentic LinkedIn content every time.

How to set up

1. Import this template into your n8n account.
2. Connect your Telegram bot (via the Telegram Trigger and Send Message nodes).
3. Connect your Google Sheets account to store approved posts.
4. Set up your AI model provider (OpenAI or Gemini) and Tavily API key for web search.
5. Activate the workflow and start chatting with your AI writing assistant on Telegram!

Requirements

- n8n account
- Telegram bot token
- OpenAI or Google Gemini account (for text generation)
- Tavily API key (for web search)
- Google Sheets account (for saving approved posts)

How to customize the workflow

- Post Tone: Adjust AI prompts to match your personal voice (professional, storytelling, inspirational, etc.).
- Approval Logic: Modify the approval step to allow multiple revision loops or add a "draft-only" mode.
- Storage Options: Instead of Google Sheets, save approved posts to Notion, Airtable, or your CMS.
- Multi-platform: Extend the same logic to X (Twitter) or Threads by changing the final output destination.
- Branding: Add your brand guidelines or preferred hashtags to the AI prompts for consistent style.

This template helps you write better LinkedIn posts faster — keeping you in full control while AI does the heavy lifting.
by Don Jayamaha Jr
Instantly fetch live Gate.io Spot Market data directly in Telegram! This workflow integrates the Gate.io REST v4 API with GPT-4.1-mini-powered AI and Telegram, giving traders real-time access to price action, order books, candlesticks, and trade data. Perfect for crypto traders, analysts, and DeFi builders who need fast and reliable exchange insights.

⚙️ How It Works

1. A Telegram bot listens for user queries (e.g., "BTC_USDT").
2. The workflow securely processes the request, authenticates the user, and attaches a sessionId.
3. The Gate AI Agent orchestrates data retrieval via the Gate.io Spot Market API, including:
   - ✅ Latest Price & 24h Stats (/spot/tickers)
   - ✅ Order Book Depth (with best bid/ask snapshots)
   - ✅ Klines (candlesticks) for OHLCV data
   - ✅ Recent Trades (up to 100 latest trades)
4. Data is optionally cleaned using Calculator (for spreads, midpoints, % changes) and Think (for formatting).
5. An AI-powered formatter (GPT-4.1-mini) structures the results into Telegram-friendly reports.
6. The final Gate.io Spot insights are sent back instantly as HTML-formatted Telegram messages.

💡 What You Can Do with This Agent

This AI-driven Telegram bot enables you to:

- ✅ Track real-time spot prices for any Gate.io pair
- ✅ Monitor order book depth (liquidity snapshots)
- ✅ View recent trades for activity insights
- ✅ Analyze candlesticks across multiple intervals
- ✅ Compare bid/ask spreads with calculated metrics
- ✅ Get clean, structured data without raw JSON clutter

🛠️ Setup Steps

1. Create a Telegram Bot: use @BotFather on Telegram to create a bot and obtain an API token.
2. Configure Telegram API Credentials in n8n: add your bot token under Telegram API credentials, then replace the placeholder Telegram ID in the Authentication node with your own.
3. Import & Deploy Workflow: load Gate AI Agent v1.02.json into n8n, configure your OpenAI API key and your Gate.io API key, then save and activate the workflow.
4. Run & Test: send a query (e.g., "BTC_USDT") to your Telegram bot and receive instant Gate.io market insights formatted for easy reading.

📺 Setup Video Tutorial

Watch the full setup guide on YouTube:

⚡ Unlock real-time Gate.io Spot Market insights directly in Telegram — fast, clean, and reliable.

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
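The Calculator step's spread and midpoint math is simple enough to show directly. A minimal sketch, applied to a best bid/ask snapshot (the example prices are made up):

```python
def book_metrics(best_bid: float, best_ask: float) -> dict:
    """Derive spread, midpoint, and spread % from a top-of-book snapshot,
    as the workflow's Calculator step does before formatting."""
    spread = best_ask - best_bid
    midpoint = (best_ask + best_bid) / 2
    return {
        "spread": round(spread, 8),
        "midpoint": round(midpoint, 8),
        "spread_pct": round(spread / midpoint * 100, 6),
    }

print(book_metrics(best_bid=64999.5, best_ask=65000.5))
```

These derived fields are what turn a raw JSON order-book snapshot into the "calculated metrics" the agent reports alongside prices.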
by Dayong Huang
How it works

This template creates a fully automated Twitter content system that discovers trending topics, analyzes why they're trending using AI, and posts intelligent commentary about them. The workflow uses MCP (Model Context Protocol) with the twitter154 MCP server from MCPHub to connect with Twitter APIs, and leverages OpenAI GPT models to generate brand-safe, engaging content about current trends.

Key Features:

- 🔍 Smart Trend Discovery: Automatically finds US trending topics with engagement scoring
- 🤖 AI-Powered Analysis: Uses GPT to explain "why it's trending" in 30-60 words
- 📊 Duplicate Prevention: A MySQL database tracks posted trends with 3-day cooldowns
- 🛡️ Brand Safety: Filters out NSFW content and low-quality hashtags
- ⚡ Rate Limiting: Built-in delays to respect API limits
- 🐦 Powered by twitter154: Uses the robust "Old Bird" MCP server for comprehensive Twitter data access

Set up steps

Setup time: ~10 minutes

Prerequisites:

- OpenAI API key for GPT models
- Twitter API access for posting
- MySQL database for trend tracking
- MCP server access: twitter154 from aigeon-ai via MCPHub

Configuration:

1. Set up the MCP integration with the twitter154 server endpoint: https://api.mcphub.com/mcp/aigeon-ai-twitter154
2. Configure credentials for the OpenAI, Twitter, and MySQL connections
3. Set up authentication for the twitter154 MCP server (Header Auth required)
4. Create the MySQL table for the keyword registry (schema provided in the workflow)
5. Test the workflow with a manual execution before enabling automation
6. Set a schedule for automatic trend discovery (recommended: every 2-4 hours)

MCP Server Features Used:

- Search Tweets: Core functionality for trend analysis
- Get Trends Near Location: Discovers trending topics by geographic region
- AI Tools: Leverages sentiment analysis and topic classification capabilities

Customization Options:

- Modify the trend scoring criteria in the AI agent prompts
- Adjust cooldown periods in the database queries
- Change the target locale from US to other regions (WOEID configuration)
- Customize tweet formatting and content style
- Configure different MCP server endpoints if needed

Perfect for: Social media managers, content creators, and businesses wanting to stay current with trending topics while maintaining consistent, intelligent posting schedules.

Powered by: The twitter154 MCP server ("The Old Bird") provides robust access to Twitter data, including tweets, user information, trends, and AI-powered text analysis tools.
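The 3-day cooldown check behind Duplicate Prevention can be sketched as follows. SQLite stands in for the MySQL keyword registry here, and the table and column names are illustrative rather than the template's exact schema:

```python
import sqlite3
from datetime import datetime, timedelta

# SQLite stand-in for the MySQL keyword registry described above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE posted_trends (keyword TEXT PRIMARY KEY, posted_at TEXT)")

COOLDOWN = timedelta(days=3)

def on_cooldown(keyword: str, now: datetime) -> bool:
    """True if this trend was posted within the cooldown window."""
    row = db.execute("SELECT posted_at FROM posted_trends WHERE keyword = ?",
                     (keyword,)).fetchone()
    return row is not None and now - datetime.fromisoformat(row[0]) < COOLDOWN

def record_post(keyword: str, now: datetime) -> None:
    db.execute("INSERT OR REPLACE INTO posted_trends VALUES (?, ?)",
               (keyword, now.isoformat()))

now = datetime(2025, 1, 10)
record_post("#AI", now)
print(on_cooldown("#AI", now + timedelta(days=1)))  # inside 3-day window: True
print(on_cooldown("#AI", now + timedelta(days=4)))  # cooldown expired: False
```

Adjusting `COOLDOWN` corresponds to the "adjust cooldown periods in the database queries" customization option.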
by Artem Boiko
Estimate material prices and total costs for grouped BIM/CAD elements using an LLM-driven analysis. The workflow accepts an existing XLSX (from your model) or, if it is missing, can trigger a local RvtExporter.exe to generate one. It enriches each element group with quantities, pricing, and confidence, and produces a multi-sheet Excel report plus a polished HTML executive report.

What it does

- Reads grouped element data (from XLSX or extracted via RvtExporter.exe).
- Builds enhanced prompts with clear rules (volumes/areas are already aggregated per group).
- Calls your selected LLM (OpenAI/Anthropic/etc.) to identify materials, pick the pricing unit, and estimate the price per unit and total cost.
- Parses the AI output, adds per-group KPIs (cost %, rank), and aggregates project totals (by material, by category).
- Exports a multi-sheet XLSX and an HTML executive report (charts, KPIs, top groups).

Prerequisites

- LLM credentials for your chosen provider (e.g., OpenAI, Anthropic). Enable exactly one chat node and connect credentials.
- Windows host only if you want to auto-extract from .rvt/.ifc via RvtExporter.exe. If you already have an XLSX, Windows is not required.
- Optional: internet access on the LLM side for price lookups (model/tooling dependent).

How to use

1. Import this JSON into n8n.
2. Open the Setup node(s) and set:
   - project_file — path to your .rvt/.ifc or to an existing grouped *_rvt.xlsx
   - path_to_converter — C:\\DDC_Converter_Revit\\datadrivenlibs\\RvtExporter.exe (optional)
   - country — used to guide price sources/standards (e.g., Germany)
3. In the canvas, enable one LLM node (e.g., OpenAI or Anthropic) and connect credentials; keep the others disabled.
4. Execute the workflow (Manual Trigger). It will detect or build the XLSX, run the analysis, then write the Excel file and open the HTML report.

Outputs

- Excel: Price_Estimation_Report_YYYY-MM-DD.xlsx with sheets: Summary, Detailed Elements, Material Summary, Top 10 Groups
- HTML: executive report with charts (project totals, top materials, top groups)
- Per-group fields include: Material (EU/DE/US), Quantity & Unit, Price per Unit (EUR), Total Cost (EUR), Assumptions, Confidence

Notes & tips

- Quantities in the input are already aggregated per group — do not multiply by element count.
- If you prefer XLSX-only extraction, run your converter with a -no-collada flag upstream.
- Keep ASCII-safe paths and ensure write permissions on the output folder.

Categories

Data Extraction · Files & Storage · ETL · CAD/BIM · Cost Estimation

Tags

cad-bim, price-estimation, cost, revit, ifc, xlsx, html-report, llm, materials, qto

Author

DataDrivenConstruction.io
info@datadrivenconstruction.io

Consulting and Training

We work with leading construction, engineering, and consulting agencies and technology firms around the world to help them implement open data principles, automate CAD/BIM processing, and build robust ETL pipelines. If you would like to test this solution with your own data, or are interested in adapting the workflow to real project tasks, feel free to contact us.

Docs & Issues: Full Readme on GitHub
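The per-group KPIs (cost %, rank) added after the LLM call amount to a simple aggregation over the parsed estimates. A minimal sketch with made-up groups and costs:

```python
def add_kpis(groups):
    """Add the per-group KPIs described above: each group's share of the
    project total and its rank by total cost."""
    total = sum(g["total_cost"] for g in groups)
    ranked = sorted(groups, key=lambda g: g["total_cost"], reverse=True)
    for rank, g in enumerate(ranked, start=1):
        g["cost_pct"] = round(g["total_cost"] / total * 100, 2)
        g["rank"] = rank
    return ranked, total

# Illustrative groups; real rows come from the grouped XLSX + LLM estimates.
groups = [
    {"group": "Concrete walls", "total_cost": 120000.0},
    {"group": "Steel beams", "total_cost": 60000.0},
    {"group": "Glazing", "total_cost": 20000.0},
]
ranked, total = add_kpis(groups)
print(total)                  # 200000.0
print(ranked[0]["cost_pct"])  # 60.0
```

Sorting once and keeping the rank on each row is also what feeds the "Top 10 Groups" sheet and the HTML report's top-groups chart.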
by Shelly-Ann Davy
Automate Bug Reports: GitHub Issues → AI Analysis → Jira Tickets with Slack & Discord Alerts

Automatically convert GitHub issues into analyzed Jira tickets with AI-powered severity detection, developer assignment, and instant team alerts.

Overview

This workflow captures GitHub issues in real time, analyzes them with GPT-4o for severity and categorization, creates enriched Jira tickets, assigns the right developers, and notifies your team across Slack and Discord—all automatically.

Features

- AI-Powered Triage: GPT-4o analyzes bug severity, category, and root cause, and generates reproduction steps
- Smart Assignment: Automatically assigns developers based on mentioned files and issue context
- Two-Way Sync: Posts Jira ticket links back to GitHub issues
- Multi-Channel Alerts: Rich notifications in Slack and Discord with action buttons
- Time Savings: Eliminates 15-30 minutes of manual triage per bug
- Customizable Routing: Easy developer mapping and priority rules

What Gets Created

Jira Ticket:
- Original GitHub issue details with reporter info
- AI severity assessment and categorization
- Reproduction steps and root cause analysis
- Estimated completion time
- Automatic labeling and priority assignment

GitHub Comment:
- Jira ticket link
- AI analysis summary
- Assigned developer and estimated time

Team Notifications:
- Severity badges and quick-access buttons
- Developer assignment and root cause summary
- Color-coded priority indicators

Use Cases

- Development teams managing 10+ bugs per week
- Open source projects handling community reports
- DevOps teams tracking infrastructure issues
- QA teams coordinating with developers
- Product teams monitoring user-reported bugs

Setup Requirements

Required:
- GitHub repository with admin access
- Jira Software workspace
- OpenAI API key (GPT-4o access)
- Slack workspace OR Discord server

Customization Needed:
- Update developer email mappings in the "Parse GPT Response & Map Data" node
- Replace YOUR_JIRA_PROJECT_KEY with your project key
- Update the Slack channel name (default: dev-alerts)
- Replace YOUR_DISCORD_WEBHOOK_URL with your webhook
- Change your-company.atlassian.net to your Jira URL

Setup Time: 15-20 minutes

Configuration Steps

1. Import the workflow JSON into n8n
2. Add credentials: GitHub OAuth2, Jira API, OpenAI API, Slack, Discord
3. Configure the GitHub webhook in your repository settings
4. Customize developer mappings and project settings
5. Test with a sample GitHub issue
6. Activate the workflow

Expected Results

- 90% faster bug triage (20 min → 2 min per issue)
- 100% consistency in bug analysis
- Zero missed notifications
- Better developer allocation
- Improved bug documentation

Tags

GitHub, Jira, AI, GPT-4, Bug Tracking, DevOps, Automation, Slack, Discord, Issue Management, Development, Project Management, OpenAI, Webhook, Team Collaboration
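The file-based developer assignment can be sketched as a prefix lookup. The path prefixes and emails below are hypothetical placeholders for what you would configure in the "Parse GPT Response & Map Data" node:

```python
# Hypothetical file-path -> developer routing; replace with your team's
# mappings when customizing the workflow.
DEVELOPER_MAP = {
    "frontend/": "alice@example.com",
    "api/": "bob@example.com",
    "infra/": "carol@example.com",
}
DEFAULT_ASSIGNEE = "triage@example.com"

def assign_developer(mentioned_files):
    """Pick an assignee from the first mentioned file path that matches
    a known code area; fall back to a triage inbox otherwise."""
    for path in mentioned_files:
        for prefix, dev in DEVELOPER_MAP.items():
            if path.startswith(prefix):
                return dev
    return DEFAULT_ASSIGNEE

print(assign_developer(["api/routes/users.py"]))  # routes to the API owner
print(assign_developer(["docs/README.md"]))       # falls back to triage
```

A first-match rule keeps routing predictable; issues touching multiple areas go to whichever area's file is mentioned first.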
by John
How it works

1. User Signup & Verification: The workflow starts when a user signs up. It generates a verification code and sends it via SMS using Twilio.
2. Code Validation: The user replies with the code. The workflow checks the code and, if valid, creates a session for the user.
3. Conversational AI: Incoming SMS messages are analyzed by ChatGPT for sentiment, intent, and urgency. The workflow stores the conversation context and generates smart, AI-powered replies.
4. Escalation Handling: If the AI detects urgency or frustration, the workflow escalates the session—alerting your team and sending a supportive SMS to the user.

Set up steps

Estimated setup time: 10-20 minutes for most users.

What you'll need:
- A free n8n account (self-hosted or cloud)
- A free Twilio account (for SMS)
- An OpenAI API key (for AI)
- A PostgreSQL database (Supabase, Neon, or local)

Setup process:
1. Import this workflow into n8n.
2. Add your Twilio and OpenAI credentials as environment variables or n8n credentials.
3. Update the webhook URLs in your Twilio console (for incoming SMS).
4. (Optional) Adjust the sticky notes in the workflow for detailed, step-by-step guidance.
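The verification-code generation in step 1 can be sketched with Python's standard `secrets` module. This is an illustration of the idea, not the workflow's actual node code; the 6-digit length is an assumption:

```python
import secrets

def generate_verification_code(length: int = 6) -> str:
    """Generate a numeric one-time code using a cryptographically
    secure random source (never use random.randint for this)."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

code = generate_verification_code()
print(code)  # e.g. a 6-digit string such as "482017"
```

The generated code would then be sent via Twilio and compared against the user's SMS reply in the code-validation step.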
by Don Jayamaha Jr
Instantly access real-time Binance Spot Market data in Telegram! This workflow connects the Binance REST API with Telegram and optional GPT-4.1-mini formatting, delivering structured insights such as latest prices, 24h stats, order book depth, trades, and candlesticks directly into chat.

🔎 How It Works

1. A Telegram Trigger listens for incoming user requests.
2. User Authentication validates the Telegram ID to restrict access.
3. A Session ID is generated from chat.id to manage session memory.
4. The Binance AI Agent executes HTTP calls to the Binance public API:
   - Latest Price (Ticker) → /api/v3/ticker/price?symbol=BTCUSDT
   - 24h Statistics → /api/v3/ticker/24hr?symbol=BTCUSDT
   - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
   - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
   - Candlestick Data (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
   - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
5. Utility tools refine the outputs:
   - Calculator → computes spreads, midpoints, averages, and % changes.
   - Think → extracts and reformats JSON into human-readable fields.
   - Simple Memory → saves the symbol, sessionId, and user context.
6. A Message Splitter chunks outputs longer than 4000 characters for Telegram.
7. The final structured reports are sent back to Telegram.

✅ What You Can Do with This Agent

- Get real-time Binance Spot prices with 24h stats.
- Fetch order book depth and liquidity snapshots.
- View best bid/ask quotes.
- Retrieve candlestick OHLCV data across timeframes.
- Check recent trades (up to 100).
- Calculate spreads, mid-prices, and % changes automatically.
- Receive clean, structured messages instead of raw JSON.

🛠️ Setup Steps

1. Create a Telegram Bot: use @BotFather and save the bot token.
2. Configure in n8n:
   - Import Binance AI Agent v1.02.json.
   - Update the User Authentication node with your Telegram ID.
   - Add Telegram credentials (bot token).
   - Add your OpenAI API key.
   - (Optional) Add a Binance API key.
3. Deploy & Test:
   - Activate the workflow in n8n.
   - Send BTCUSDT to your bot and instantly receive Binance Spot Market insights inside Telegram.

📤 Output Rules

- Group outputs by Price, 24h Stats, Order Book, Candles, Trades.
- Respect Telegram's 4000-char message limit (auto-split enabled).
- Only structured summaries — no raw JSON.

📺 Setup Video Tutorial

Watch the full setup guide on YouTube:

⚡ Unlock Binance Spot Market insights instantly in Telegram — clean, fast, and API-key free.

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
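The Message Splitter's 4000-character chunking can be sketched like this. It is an illustrative implementation that prefers newline boundaries so report sections stay intact; the template's exact node logic may differ:

```python
TELEGRAM_LIMIT = 4000  # the workflow chunks replies at 4000 characters

def split_message(text: str, limit: int = TELEGRAM_LIMIT):
    """Split a long report on newline boundaries where possible so each
    chunk fits within Telegram's message size limit."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        # A single oversized line is hard-split into limit-sized pieces.
        while len(line) > limit:
            chunks.append(line[:limit])
            line = line[limit:]
        current += line
    if current:
        chunks.append(current)
    return chunks

report = "PRICE: 65000\n" * 800  # roughly 10,400 characters
parts = split_message(report)
print(len(parts))                          # 3
print(all(len(p) <= 4000 for p in parts))  # True
```

Splitting on whole lines keeps the grouped sections (Price, 24h Stats, Order Book, and so on) readable across consecutive Telegram messages.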