by NODA shuichi
Description: Your personal AI Book Curator that reads reviews, recommends books, and supports affiliate links. 📚🤖 This advanced workflow acts as a complete "Reading Assistant Application" with monetization features. It takes a book title via a form, researches it using Google APIs, and employs an OpenAI Agent to generate a summary and recommendations.

**Why use this template?**
- **Monetization Support:** Just enter your Amazon Affiliate Tag in the config node, and all email links will automatically include your tag (see the sketch below).
- **Organized & Scalable:** The workflow is clearly grouped into 4 sections (Input, Enrichment, AI, Delivery) with sticky notes for easy navigation.

**How it works:**
1. Input: The user submits a book title (e.g., "Atomic Habits").
2. Research: The workflow fetches book metadata and searches for real-world reviews.
3. Analyze: GPT-4o explains why the book is interesting and suggests 3 related reads.
4. Deliver: Generates a polished HTML email with purchase links and logs the request to Google Sheets.

**Setup Requirements:**
- Google Sheets: Create the headers date, book_title, author, ai_comment, user_email.
- Credentials: OpenAI, Google Custom Search, Gmail, Google Sheets.
- Config: Open the "1. Input & Config" section to enter API keys and IDs.
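To illustrate the monetization step, here is a minimal n8n Code-node sketch that appends an affiliate tag to Amazon links before the email is built. The node name `Config`, the `affiliateTag` key, and the `links` field are illustrative assumptions; adapt them to the actual config node and item structure in your copy of the workflow.

```javascript
// Append the Amazon affiliate tag to every Amazon URL on each item.
// Assumes a config node named "Config" exposes `affiliateTag` and each
// item carries an array of purchase links under `json.links`.
const affiliateTag = $('Config').first().json.affiliateTag; // e.g. "mytag-20"

for (const item of $input.all()) {
  item.json.links = (item.json.links || []).map((url) => {
    if (!url.includes('amazon.')) return url; // leave non-Amazon links untouched
    const u = new URL(url);
    u.searchParams.set('tag', affiliateTag);  // Amazon reads the "tag" query param
    return u.toString();
  });
}

return $input.all();
```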
by Artem Boiko
Estimate material price and total cost for grouped BIM/CAD elements using an LLM-driven analysis. The workflow accepts an existing XLSX (from your model) or, if missing, can trigger a local RvtExporter.exe to generate one. It enriches each element group with quantities, pricing, and confidence, and produces a multi-sheet Excel report plus a polished HTML executive report.

**What it does**
- **Reads grouped element data** (from XLSX or extracted via RvtExporter.exe).
- Builds enhanced prompts with clear rules (volumes/areas are already aggregated per group).
- Calls your selected LLM (OpenAI/Anthropic/etc.) to identify materials, pick the pricing unit, and estimate price per unit and total cost.
- Parses the AI output, adds per-group KPIs (cost %, rank), and aggregates **project totals** (by material, by category); see the sketch below.
- Exports a multi-sheet XLSX and an HTML executive report (charts, KPIs, top groups).

**Prerequisites**
- **LLM credentials** for your chosen provider (e.g., OpenAI, Anthropic). Enable exactly one chat node and connect credentials.
- **Windows host** only if you want to auto-extract from .rvt/.ifc via RvtExporter.exe. If you already have an XLSX, Windows is **not required**.
- Optional: Internet access on the LLM side for price lookups (model/tooling dependent).

**How to use**
1. Import this JSON into n8n.
2. Open the Setup node(s) and set:
   - project_file → path to your .rvt/.ifc or to an existing grouped *_rvt.xlsx
   - path_to_converter → C:\DDC_Converter_Revit\datadrivenlibs\RvtExporter.exe (optional)
   - country → used to guide price sources/standards (e.g., Germany)
3. In the canvas, enable one LLM node (e.g., OpenAI or Anthropic) and connect credentials; keep the others disabled.
4. Execute the workflow (Manual Trigger). It will detect/build the XLSX, run the analysis, then write the Excel file and open the HTML report.

**Outputs**
- **Excel**: Price_Estimation_Report_YYYY-MM-DD.xlsx with sheets: Summary, Detailed Elements, Material Summary, Top 10 Groups.
- **HTML**: executive report with charts (project totals, top materials, top groups).
- Per-group fields include: Material (EU/DE/US), Quantity & Unit, Price per Unit (EUR), Total Cost (EUR), Assumptions, Confidence.

**Notes & tips**
- Quantities in the input are already aggregated per group; do not multiply by element count.
- If you prefer XLSX-only extraction, run your converter with a -no-collada flag upstream.
- Keep ASCII-safe paths and ensure write permissions to the output folder.

Categories: Data Extraction · Files & Storage · ETL · CAD/BIM · Cost Estimation
Tags: cad-bim, price-estimation, cost, revit, ifc, xlsx, html-report, llm, materials, qto
Author: DataDrivenConstruction.io (info@datadrivenconstruction.io)

Consulting and Training: We work with leading construction, engineering, and consulting agencies and technology firms around the world to help them implement open-data principles, automate CAD/BIM processing, and build robust ETL pipelines. If you would like to test this solution with your own data, or are interested in adapting the workflow to real project tasks, feel free to contact us.
Docs & Issues: Full Readme on GitHub
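As a sketch of the per-group KPI step, here is an n8n Code-node snippet that computes the cost share and rank described above. The `total_cost` field name is an assumption; the real workflow's parsing step may name it differently.

```javascript
// Compute per-group KPIs after the AI output has been parsed.
// Assumes each item has json.total_cost (EUR) set by the parsing step.
const items = $input.all();

// Project total across all groups
const projectTotal = items.reduce((sum, i) => sum + (i.json.total_cost || 0), 0);

// Rank groups by total cost (1 = most expensive) and add the cost share in %
items
  .slice()
  .sort((a, b) => (b.json.total_cost || 0) - (a.json.total_cost || 0))
  .forEach((item, idx) => {
    item.json.cost_rank = idx + 1;
    item.json.cost_percent = projectTotal
      ? +(((item.json.total_cost || 0) / projectTotal) * 100).toFixed(2)
      : 0;
  });

return items;
```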
by Dayong Huang
**How it works**
This template creates a fully automated Twitter content system that discovers trending topics, analyzes why they're trending using AI, and posts intelligent commentary about them. The workflow uses MCP (Model Context Protocol) with the twitter154 MCP server from MCPHub to connect with Twitter APIs and leverages OpenAI GPT models to generate brand-safe, engaging content about current trends.

**Key Features:**
- 🔍 Smart Trend Discovery: Automatically finds US trending topics with engagement scoring
- 🤖 AI-Powered Analysis: Uses GPT to explain "why it's trending" in 30-60 words
- 🔁 Duplicate Prevention: A MySQL database tracks posted trends with 3-day cooldowns (see the sketch below)
- 🛡️ Brand Safety: Filters out NSFW content and low-quality hashtags
- ⚡ Rate Limiting: Built-in delays to respect API limits
- 🐦 Powered by twitter154: Uses the robust "Old Bird" MCP server for comprehensive Twitter data access

**Set up steps**
Setup time: ~10 minutes

Prerequisites:
- OpenAI API key for GPT models
- Twitter API access for posting
- MySQL database for trend tracking
- MCP server access: twitter154 from aigeon-ai via MCPHub

Configuration:
1. Set up the MCP integration with the twitter154 server endpoint: https://api.mcphub.com/mcp/aigeon-ai-twitter154
2. Configure credentials for the OpenAI, Twitter, and MySQL connections
3. Set up authentication for the twitter154 MCP server (Header Auth required)
4. Create the MySQL table for the keyword registry (schema provided in the workflow)
5. Test the workflow with a manual execution before enabling automation
6. Set a schedule for automatic trend discovery (recommended: every 2-4 hours)

**MCP Server Features Used:**
- Search Tweets: Core functionality for trend analysis
- Get Trends Near Location: Discovers trending topics by geographic region
- AI Tools: Leverages sentiment analysis and topic classification capabilities

**Customization Options:**
- Modify the trend scoring criteria in the AI agent prompts
- Adjust cooldown periods in the database queries
- Change the target locale from US to other regions (WOEID configuration)
- Customize tweet formatting and content style
- Configure different MCP server endpoints if needed

**Perfect for:** Social media managers, content creators, and businesses wanting to stay current with trending topics while maintaining consistent, intelligent posting schedules.

**Powered by:** The twitter154 MCP server ("The Old Bird") provides robust access to Twitter data, including tweets, user information, trends, and AI-powered text analysis tools.
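A minimal sketch of the 3-day cooldown check, written as an n8n Code node. The node name `Get Posted Trends` and the `keyword`/`posted_at`/`trend_name` field names are assumptions for illustration; match them to your MySQL table and trend items.

```javascript
// Filter out trends that were already posted within the last 3 days.
// Assumes a prior MySQL node ("Get Posted Trends") returned rows with
// `keyword` and `posted_at` columns from the keyword registry table.
const COOLDOWN_DAYS = 3;
const cutoff = Date.now() - COOLDOWN_DAYS * 24 * 60 * 60 * 1000;

const recentlyPosted = new Set(
  $('Get Posted Trends')
    .all()
    .filter((row) => new Date(row.json.posted_at).getTime() > cutoff)
    .map((row) => row.json.keyword.toLowerCase())
);

// Keep only trends not seen within the cooldown window
return $input.all().filter(
  (item) => !recentlyPosted.has(item.json.trend_name.toLowerCase())
);
```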
by Khairul Muhtadin
Auto repost job with RAG is a workflow designed to automatically extract, process, and publish job listings from monitored sources using Google Drive, OpenAI, Supabase, and WordPress. This integration streamlines job reposting by intelligently extracting relevant job data, mapping categories and types accurately, managing media assets, and publishing posts seamlessly.

💡 **Why Use Auto repost job with RAG?**
- Automated Publishing: Slash manual entry time by automating job post extraction and publication, freeing hours every week.
- Error-Resistant Workflow: Avoid incomplete job posts with smart validation checks that ensure all necessary fields are ready before publishing.
- Consistent Content Quality: Maintain formatting, SEO, and style consistency backed by AI-driven article regeneration adhering strictly to your guidelines.
- Competitive Edge: Get fresh jobs live faster than your competitors without lifting more than a finger, because robots don't take coffee breaks!

⚡ **Perfect For**
- Recruiters & HR Teams: Accelerate your job posting funnel with error-free automation.
- Content Managers: Keep your job boards fresh with AI-enriched, standardized listings.
- Digital Marketers: Automate content flows to boost SEO and engagement without the headache.

🔧 **How It Works**
1. ⏱ Trigger: Job link inputs arrive via Telegram.
2. 🔄 Process: Job documents are auto-downloaded, and data is extracted using Jina AI and OpenAI's GPT-4 model to parse content and metadata.
3. 🤖 Smart Logic: An AI agent regenerates articles based on strict RAG dataset rules; category and job type IDs are mapped to match the WordPress taxonomy (see the sketch below); fallback attempts use default images for missing logos.
4. 📝 Output: Job posts are formatted and published to WordPress; success or failure updates are sent back via Telegram notifications.
5. 📁 Storage: Uses a Supabase vector store for document embedding and retrieval of formatting rules and job data.

🚀 **Quick Setup**
1. Import the provided JSON workflow into your n8n instance.
2. Add credentials: Google Drive OAuth, OpenAI API, Supabase API, Telegram API, WordPress API.
3. Customize: Set your Google Drive folder ID, WordPress endpoints, and Telegram chat IDs.
4. Update: Confirm default logo URLs and fallback settings as needed.
5. Test: Submit a new job link via Telegram or add a file to the watched Drive folder.

🧩 **You'll Need**
- An active n8n instance
- A Google Drive account with OAuth2 credentials
- OpenAI API access for GPT-4 processing
- A Supabase account configured for vector storage
- WordPress API credentials for job listing publishing
- A Telegram bot for notifications and job link inputs

🛠️ **Level Up Ideas**
- Integrate Slack, Gmail, or Teams notifications for team visibility
- Add a sentiment analysis step to prioritize certain jobs
- Automate social media posting of new job listings for wider reach

Made by: Khmuhtadin
Tags: automation, job-posting, AI, OpenAI, Google Drive, WordPress
Category: content automation
Need custom work? Contact me
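A sketch of the taxonomy-mapping step, assuming an n8n Code node sits between extraction and publishing. The ID values are placeholders: look up the real term IDs in your WordPress admin or via GET /wp-json/wp/v2/categories, and adjust the field names to your extracted data.

```javascript
// Map extracted category/type names to WordPress term IDs before publishing.
// CATEGORY_MAP / JOB_TYPE_MAP values below are placeholders, not real IDs.
const CATEGORY_MAP = { engineering: 12, marketing: 15, design: 18 };
const JOB_TYPE_MAP = { 'full-time': 3, 'part-time': 4, contract: 5 };
const DEFAULT_CATEGORY = 1; // falls back to "Uncategorized"

for (const item of $input.all()) {
  const cat = (item.json.category || '').trim().toLowerCase();
  const type = (item.json.job_type || '').trim().toLowerCase();
  item.json.category_id = CATEGORY_MAP[cat] ?? DEFAULT_CATEGORY;
  item.json.job_type_id = JOB_TYPE_MAP[type] ?? null; // null flags unknown types
}

return $input.all();
```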
by Shinji Watanabe
**Who's it for**
Teams that care about space-weather impact (SRE/infra, satellite ops, aviation, power utilities, researchers) or anyone who wants timely, readable alerts when NASA publishes significant solar events.

**How it works / What it does**
Every 30 minutes a Cron trigger runs, the NASA DONKI node fetches the past 24 hours of space-weather notifications, and a code step de-duplicates, labels event types, and assigns a severity (CRITICAL / HIGH / OTHER); a sketch of that step follows below. A Switch then routes items:
- **CRITICAL/HIGH** → an LLM ("AI Agent") produces a concise Japanese alert → Slack posts it with local time and a source link.
- **OTHER** → an LLM creates a short summary for record-keeping → a small merge step prepares fields → Google Sheets appends a new row.

Sticky notes on the canvas explain the schedule, data source, and overall flow.

**How to set up**
1. Add credentials for Slack, Google Sheets, and OpenAI (or a compatible LLM).
2. Open the Slack nodes and select your workspace and target channel.
3. Select your Google Sheet and worksheet for logging.
4. (Optional) Adjust the Cron interval and the NASA lookback window.
5. Test with a manual execution, then activate.

**Requirements**
- A Slack bot with permission to post to the chosen channel
- A Google account with access to the target Sheet
- OpenAI (or API-compatible) credentials for the LLM nodes
- Internet access to NASA DONKI (no API key required)

**How to customize the workflow**
- Tweak the severity rules inside the Analyze & Prioritize code node.
- Edit the prompt tone/length in each AI Agent node.
- Change the Slack formatting or mention style (@channel vs none).
- Add filters (e.g., alert only on CME/FLR) or extend the logging fields in the merge step.
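Here is a simplified sketch of what the de-duplication and severity step could look like; the real "Analyze & Prioritize" node may use different rules, and the heuristic below is an assumption. The `messageID`/`messageType`/`messageBody` fields are real DONKI notification fields.

```javascript
// De-duplicate DONKI notifications and assign a severity label.
const seen = new Set();
const out = [];

for (const item of $input.all()) {
  const n = item.json;
  if (seen.has(n.messageID)) continue; // skip duplicates within the batch
  seen.add(n.messageID);

  // Crude severity heuristic by event type; tune it to your needs
  let severity = 'OTHER';
  if (n.messageType === 'CME' || /X-class/i.test(n.messageBody || '')) {
    severity = 'CRITICAL';
  } else if (n.messageType === 'FLR' || n.messageType === 'GST') {
    severity = 'HIGH';
  }

  out.push({ json: { ...n, severity } });
}

return out;
```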
by Toshiki Hirao
Managing contracts manually is time-consuming and prone to human error, especially when documents need to be shared, tracked, and stored across different tools. This workflow automates the entire process by capturing contract PDF and Word files uploaded to Slack, extracting key information with GPT, and organizing the data into a structured format inside Google Sheets. Essential fields such as client, service provider, contract value, and important dates are automatically parsed and logged, eliminating repetitive manual entry. Once the data is saved, a confirmation message is posted back to Slack so your team can quickly verify that everything has been recorded accurately.

**Who's it for**
This workflow is ideal for operations teams, legal departments, or growing businesses that manage multiple contracts and want to maintain accuracy without spending hours on administration. By integrating Slack, GPT, and Google Sheets, you gain a simple but powerful contract management system that reduces risk, improves visibility, and keeps everyone aligned. Instead of scattered files and manual spreadsheets, you have a single automated pipeline that ensures your contract data is always up to date and accessible.

**How it works**
1. The workflow is triggered when a contract in PDF or Word format is shared in the designated Slack channel.
2. The uploaded file is automatically retrieved for processing, and its content is extracted and converted into plain text. If the file is not in PDF or Word format, an error message is sent.
3. GPT interprets the extracted text and structures the essential fields (e.g., Client, Service Provider, Effective Date, Expiration Date, Signature Date, Contract Value); a sketch of a validation step for this output follows below.
4. The structured contract information is appended as a new row in the contract tracker spreadsheet on Google Sheets.
5. A summary of the saved data is posted back to Slack for quick validation.

**How to set up**
1. Import this workflow into your n8n instance.
2. Authenticate your Slack account and select the target channel for contract submissions.
3. Link your Google account and specify the spreadsheet where the contract data will be stored. In this template, the required columns are Client, Service Provider, Effective Date, Expiration Date, Signature Date, and Contract Value.
4. Adjust the GPT parsing prompt to match the specific fields that your organization requires.
5. Upload a sample contract in PDF or Word format to Slack and verify that the extracted data is correctly recorded in Google Sheets.

**Requirements**
- An active n8n instance in the cloud
- A Slack account with permission to upload files and send messages
- A Google Sheets account with edit access to the target spreadsheet
- A GPT integration (e.g., OpenAI) to enable AI-powered text parsing

**How to customize the workflow**
You can modify this workflow to fit your organization's unique contract needs. For example, you may update the GPT parsing prompt to capture additional fields, change the target Google Sheets structure, or integrate notifications into other tools. You have full flexibility to expand or simplify the steps so the workflow matches your team's processes and compliance requirements.
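As an illustration of how the GPT output can be checked before the Google Sheets append, here is a minimal Code-node sketch. The field names mirror the template's required columns; the date and number normalization is an illustrative assumption, not part of the original template.

```javascript
// Validate and normalize the GPT-extracted contract fields.
const REQUIRED = ['Client', 'Service Provider', 'Effective Date',
                  'Expiration Date', 'Signature Date', 'Contract Value'];

const out = [];
for (const item of $input.all()) {
  const data = item.json;
  const missing = REQUIRED.filter((f) => !data[f]);
  if (missing.length) {
    // Route incomplete extractions to an error branch instead of the sheet
    out.push({ json: { ...data, error: `Missing fields: ${missing.join(', ')}` } });
    continue;
  }
  out.push({
    json: {
      ...data,
      // Normalize to an ISO date and a plain number for consistent rows
      'Effective Date': new Date(data['Effective Date']).toISOString().slice(0, 10),
      'Contract Value': Number(String(data['Contract Value']).replace(/[^0-9.]/g, '')),
    },
  });
}
return out;
```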
by rana tamure
This n8n workflow automates the creation of high-quality, SEO-optimized blog posts using AI. It pulls keyword data from Google Sheets, conducts research via Perplexity AI, generates structured content (title, introduction, key takeaways, body, conclusion, and FAQs) with OpenAI and Anthropic models, assembles the post, performs final edits, converts it to HTML, and publishes directly to WordPress. Ideal for content marketers, bloggers, or agencies looking to scale content production while maintaining relevance and engagement.

**Key Features**
- Keyword-Driven Generation: Fetches primary keywords, search intent, and related terms from a Google Sheets spreadsheet to inform content strategy.
- AI Research & Structuring: Uses Perplexity for in-depth topic research and OpenAI/Anthropic for semantic analysis, outlines, and full content drafting.
- Modular Content Creation: Generates sections like introductions, key takeaways, outlines, body, conclusions, and FAQs with tailored prompts for tone, style, and SEO.
- Assembly & Editing: Combines sections into a cohesive Markdown post, adds internal/external links, and applies final refinements for readability and flow.
- Publishing Automation: Converts Markdown to styled HTML and posts drafts to WordPress.
- Customization Points: Easily adjust AI prompts, research depth, or output formats via Code and Set nodes.

**Requirements**
- Credentials: OpenAI API (for GPT models), Perplexity API (for research), Google Sheets OAuth2 (for keyword input), WordPress API (for publishing).
- Setup: Configure your Google Sheets with columns like "keyword", "search intent", "related keyword", etc. Ensure the sheet is shared with your Google account.
- Dependencies: No additional packages needed; relies on n8n's built-in nodes for AI, HTTP, and data processing.

**How It Works**
1. Trigger & Input: Start manually or on a schedule; pulls keyword data from Google Sheets.
2. Research Phase: Uses Perplexity to gather topic insights and citations from reputable sources.
3. Content Generation: AI nodes create the title, structure, intro, takeaways, outline, body, conclusion, and FAQs based on the research and SEO guidelines.
4. Assembly & Refinement: Merges the sections, embeds links, edits for polish, and converts to HTML (a sketch of the merge step follows below).
5. Output: Publishes as a WordPress draft or outputs the final HTML for manual use.

**Benefits**
- Time Savings: Automate 80-90% of content creation, reducing manual writing from hours to minutes.
- SEO Optimization: Incorporates primary/related keywords naturally, aligns with search intent, and includes semantic structures for better rankings.
- Scalability: Process multiple keywords in batches; perfect for content calendars or high-volume blogging.
- Quality Assurance: Built-in editing ensures engaging, error-free content with real-world examples and data-backed insights.
- Versatility: Adaptable to any niche (e.g., marketing, tech, finance) by tweaking prompts or sheets.

**Potential Customizations**
- Add more AI models (e.g., via custom nodes) for varied tones.
- Integrate image generation or social sharing for full content pipelines.
- Filter sheets for specific topics or add notifications on completion.
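A minimal sketch of the assembly step, assuming a Code node that merges the generated sections into one Markdown post before HTML conversion. The section field names (`title`, `introduction`, etc.) are assumptions; match them to the Set/Code nodes that hold each generated part in your copy.

```javascript
// Assemble the generated sections into one Markdown post.
const s = $input.first().json;

const post = [
  `# ${s.title}`,
  s.introduction,
  '## Key Takeaways',
  s.keyTakeaways,
  s.body,
  '## Conclusion',
  s.conclusion,
  '## FAQs',
  s.faqs,
].join('\n\n');

// Downstream: Markdown-to-HTML conversion, then the WordPress draft node
return [{ json: { markdown: post } }];
```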
by yusan25c
**How It Works**
This template is an n8n workflow that integrates with Jira to provide automated replies. When a ticket is assigned to a user, the workflow analyzes the ticket content, retrieves relevant knowledge from a vector database, and generates a response. By continuously enriching the knowledge base, the system improves response quality in Jira.

**Prerequisites**
- A Jira account with API access
- A Pinecone account and credentials (API key and environment settings)
- An AI provider credential (e.g., OpenAI API key)

**Setup Instructions**
1. Jira Credentials: Create Jira credentials in n8n (API token and email). In the Jira node, select the registered Jira account ID.
2. Vector Database Setup (Pinecone): Register your Pinecone credentials (API key and environment variables) in n8n. Ensure that your knowledge base is indexed in Pinecone.
3. AI Assistant Node: Configure the OpenAI (or other LLM) node with your API key. Provide a system prompt that explains how to respond to Jira tickets using retrieved knowledge.
4. Workflow Execution: The workflow runs only via the Scheduled Trigger node at defined intervals. When Jira tickets are assigned, their summary, description, and latest comments are retrieved. These details are passed to the AI assistant, which queries Pinecone and generates a response. The generated response is then posted as a Jira comment.

**Step by Step**
1. Scheduled Trigger: The workflow is executed at regular intervals using the Scheduled Trigger node.
2. Jira Trigger (Issue Assigned): Retrieves the summary, description, and latest comments of assigned tickets; a sketch of how this context can be composed follows below.
3. AI Assistant: Sends the ticket details to the AI assistant, which searches and summarizes relevant knowledge from Pinecone.
4. Response Generation / Ticket Update: The AI generates a response and automatically posts it as a Jira comment. (Optionally, the workflow can update the ticket status or mention the assignee.)

**Notes**
- Keep your Pinecone knowledge base updated to improve accuracy.
- You can customize the AI assistant's behavior by adjusting the system prompt.
- Configure the Scheduled Trigger frequency carefully to avoid API rate limits.

**Further Reference**
For a detailed walkthrough (in Japanese), see this article:
👉 Automating Jira responses with n8n, AI, and Pinecone (Qiita)
You can find the template file on GitHub here:
👉 Template File on GitHub
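To make the context-building step concrete, here is a Code-node sketch that composes the ticket details passed to the AI assistant. It assumes a prior Jira node returned standard issue fields (`fields.summary`, `fields.description`, `fields.comment.comments`); the three-comment limit is an illustrative choice.

```javascript
// Compose the ticket context for the AI assistant from a Jira issue item.
const issue = $input.first().json;

const comments = (issue.fields.comment?.comments || [])
  .slice(-3) // latest three comments only, to keep the prompt small
  .map((c) => `${c.author?.displayName || 'unknown'}: ${c.body}`)
  .join('\n');

const context = [
  `Summary: ${issue.fields.summary}`,
  `Description: ${issue.fields.description || '(none)'}`,
  `Latest comments:\n${comments || '(none)'}`,
].join('\n\n');

return [{ json: { issueKey: issue.key, context } }];
```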
by Peter Zendzian
This n8n template demonstrates how to build an intelligent entity research system that automatically discovers, researches, and creates comprehensive profiles for business entities, concepts, and terms.

Use cases are many: try automating glossary creation for technical documentation, building standardized definition databases for compliance teams, researching industry terminology for content creation, or developing training materials with consistent entity explanations!

**Good to know**
- Each entity research run typically costs $0.08-$0.34, depending on the complexity and sources required. The workflow includes smart duplicate detection to minimize unnecessary API calls.
- The workflow requires multiple AI services and a vector database, so setup time may be longer than for simpler templates.
- Entity definitions are stored locally in your Qdrant database and can be reused across multiple projects.

**How it works**
- The workflow checks your existing knowledge base first to avoid duplicate research on entities you've already processed (see the sketch below).
- If the entity is new, an AI research agent intelligently combines your vector database, Wikipedia, and live web research to gather comprehensive information.
- The system creates structured entity profiles with definitions, categories, examples, common misconceptions, and related entities: perfect for business documentation.
- AI-powered validation ensures all entity profiles are complete, accurate, and suitable for business use before storage.
- Each researched entity is stored in your Qdrant vector database, creating a growing knowledge base that improves research efficiency over time.
- The workflow includes multiple stages of duplicate prevention to avoid unnecessary processing and API costs.

**How to use**
- The manual trigger node is used as an example, but feel free to replace it with other triggers such as form submissions, content management systems, or automated content pipelines.
- You can research multiple related entities in sequence, and the system will automatically identify connections and relationships between them.
- Provide topic and audience context to get tailored explanations suitable for your specific business needs.

**Requirements**
- An OpenAI API account for o4-mini (entity research and validation)
- A Qdrant vector database instance (local or cloud)
- Ollama with the nomic-embed-text model for embeddings
- The "Automate Web Research with GPT-4, Claude & Apify for Content Analysis and Insights" workflow (for live web research capabilities)
- An Anthropic API account for Claude Sonnet 4 (used by the web research workflow)
- An Apify account for web scraping (used by the web research workflow)

**Customizing this workflow**
Entity research automation can be adapted for many specialized domains. Try focusing on specific industries like legal terminology (targeting official legal sources), medical concepts (emphasizing clinical accuracy), or financial terms (prioritizing regulatory definitions). You can also customize the validation criteria to match your organization's specific quality standards.
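A sketch of how the duplicate check could branch, assuming a prior Qdrant vector-search node returns similarity scores. The node name `Qdrant Search`, the `score`/`entity` field names, and the 0.92 threshold are all illustrative assumptions to tune against your data.

```javascript
// Decide whether an entity already exists in the knowledge base.
const SIMILARITY_THRESHOLD = 0.92;

const matches = $('Qdrant Search').all();
const best = matches.reduce(
  (max, m) => Math.max(max, m.json.score ?? 0),
  0
);

// A downstream IF node can branch on `isDuplicate` to skip research entirely
return [{
  json: {
    entity: $input.first().json.entity,
    isDuplicate: best >= SIMILARITY_THRESHOLD,
    bestScore: best,
  },
}];
```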
by Don Jayamaha Jr
Instantly access live Bybit Spot Market data in Telegram! This workflow integrates the Bybit REST v5 API with Telegram and optional GPT-4.1-mini formatting, delivering real-time crypto market insights such as latest prices, order books, trades, and candlesticks, all presented in clean, structured Telegram messages.

🚀 **How It Works**
1. A Telegram Trigger node listens for incoming user requests.
2. User Authentication checks the Telegram ID against an allowlist.
3. A Session ID is created from chat.id for lightweight memory across interactions.
4. The Bybit AI Agent orchestrates multiple API requests via HTTP nodes:
   - Latest Price & 24h Stats (/v5/market/tickers?category=spot&symbol=BTCUSDT)
   - Order Book Depth (/v5/market/orderbook?category=spot&symbol=BTCUSDT&limit=50)
   - Best Bid/Ask Snapshot (from the order book's top levels)
   - Candlestick Data (Klines) (/v5/market/kline?category=spot&symbol=BTCUSDT&interval=15&limit=200)
   - Recent Trades (/v5/market/recent-trade?category=spot&symbol=BTCUSDT&limit=100)
5. Utility nodes process and format the response:
   - Calculator → computes spreads, mid-prices, % changes.
   - Think → transforms JSON into human-readable reports.
   - Simple Memory → stores symbol, sessionId, and previous inputs.
6. A Message Splitter ensures responses over 4000 characters are broken into chunks (see the sketch below).
7. Final results are sent back to Telegram in a structured, readable format.

✅ **What You Can Do with This Agent**
- Get real-time Bybit prices & 24h statistics.
- Retrieve spot order book depth and liquidity snapshots.
- Analyze candlesticks (OHLCV) across multiple timeframes.
- View recent trades for market activity.
- Monitor bid/ask spreads & mid-prices with calculated values.
- Receive Telegram-ready reports, cleanly formatted and auto-split when long.

🛠️ **Setup Steps**
1. Create a Telegram bot: use @BotFather to create a bot and get a token.
2. Configure in n8n:
   - Import Bybit AI Agent v1.02.json.
   - Update the User Authentication node with your Telegram ID.
   - Add your Telegram API credentials (bot token).
   - Add your OpenAI API key.
   - (Optional) Add a Bybit API key if you want AI-enhanced formatting.
3. Deploy and test:
   - Activate the workflow in n8n.
   - Send a message like BTCUSDT to your bot.
   - Instantly receive Bybit Spot data inside Telegram.

📺 **Setup Video Tutorial**
Watch the full setup guide on YouTube.

⚡ Unlock Bybit Spot Market insights in Telegram: fast, structured, and API-key free.

🧾 **Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha on LinkedIn
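A minimal sketch of the Message Splitter logic in an n8n Code node, using the 4000-character budget described above (Telegram itself rejects messages over 4096 characters). The `report` field name is an assumption; match it to the output of your formatting step.

```javascript
// Split long reports into Telegram-sized chunks.
const LIMIT = 4000;
const text = $input.first().json.report;

const chunks = [];
let rest = text;
while (rest.length > LIMIT) {
  // Prefer to break at a newline so formatting stays intact
  let cut = rest.lastIndexOf('\n', LIMIT);
  if (cut <= 0) cut = LIMIT;
  chunks.push(rest.slice(0, cut));
  rest = rest.slice(cut).replace(/^\n/, '');
}
if (rest) chunks.push(rest);

// One item per chunk: the Telegram node then sends them in order
return chunks.map((chunk, i) => ({ json: { chunk, index: i } }));
```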
by Snehasish Konger
**How it works:**
This template turns rows in a Google Sheet into polished newsletter drafts in Notion using an AI writing agent. You click Execute workflow in n8n. It fetches all rows marked N8n Status = Pending, generates a draft from Newsletter Title and About the Newsletter, creates a Notion page, writes the full draft, then flips the sheet row to Done before moving to the next one.

**Before you start (use your own credentials):**
Create and select your credentials in n8n:
- Google Sheets (OAuth2 or Service Account) with access to the target spreadsheet.
- Notion (Internal Integration) with access to the target database.
- OpenAI (API key) for the Chat Model.

Replace any placeholders in the nodes:
- Spreadsheet ID/URL and sheet/tab name (e.g., newsletter).
- Notion Database ID / Parent and any page or block IDs used by the HTTP Request nodes.
- OpenAI model name, if you prefer a different model.

Give the Notion integration access to the database (Share → Invite the integration). Do not hard-code secrets in nodes; store them in n8n Credentials.

**Step-by-step:**
1. Manual Trigger: Start the run with When clicking "Execute workflow".
2. Fetch pending input (Google Sheets → Get row(s) in sheet): Read the newsletter tab and pull only rows where N8n Status = Pending.
3. Iterate (Split In Batches → Loop Over Items): Process one sheet row at a time for stable memory use and pacing.
4. Generate the newsletter (AI Agent + OpenAI Chat Model): The AI Agent loads the "System Role Instructions" that define style, sections, and format, then passes Newsletter Title and About the Newsletter to the OpenAI Chat Model to produce the draft.
5. Create a Notion page (Notion → Create Page): Create a page in your Newsletter Automation database with the page title set from Newsletter Title.
6. Prepare long content for Notion (Code): Split the AI output into ~1,800-character chunks and wrap them as Notion paragraph blocks to avoid payload limits (see the sketch below).
7. Write content blocks to Notion (HTTP Request → UpdateNotionBlock): Send a PATCH request to append all generated blocks so the full draft appears on the page.
8. Mark the sheet row as done (Google Sheets → Update row in sheet): Update N8n Status = Done for the processed Newsletter Title.
9. Continue the loop: Return to Split In Batches for the next pending row until none remain.

**Tools integration:**
- Google Sheets → input queue and status tracking (Pending → Done)
- OpenAI → LLM that writes the draft from the provided fields
- Notion → destination database for each draft page
- n8n Code + HTTP Request → chunking and Notion API block updates

Want auto-runs? Add a Cron trigger before step 2 and keep the flow unchanged.
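A sketch of the chunking step from point 6, written as an n8n Code node. Notion caps rich_text content at 2,000 characters per block, so the ~1,800-character budget leaves headroom; the `output` field name is an assumption for where the AI Agent's draft lands.

```javascript
// Chunk the AI draft into ~1,800-character Notion paragraph blocks.
const CHUNK_SIZE = 1800;
const draft = $input.first().json.output;

const children = [];
for (let i = 0; i < draft.length; i += CHUNK_SIZE) {
  children.push({
    object: 'block',
    type: 'paragraph',
    paragraph: {
      rich_text: [{ type: 'text', text: { content: draft.slice(i, i + CHUNK_SIZE) } }],
    },
  });
}

// Request body for PATCH https://api.notion.com/v1/blocks/{page_id}/children
return [{ json: { children } }];
```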
by Don Jayamaha Jr
Instantly fetch live Gate.io Spot Market data directly in Telegram! This workflow integrates the Gate.io REST v4 API with GPT-4.1-mini-powered AI and Telegram, giving traders real-time access to price action, order books, candlesticks, and trade data. Perfect for crypto traders, analysts, and DeFi builders who need fast and reliable exchange insights.

⚙️ **How It Works**
1. A Telegram bot listens for user queries (e.g., "BTC_USDT").
2. The workflow securely processes the request, authenticates the user, and attaches a sessionId.
3. The Gate AI Agent orchestrates data retrieval via the Gate.io Spot Market API, including:
   - ✅ Latest Price & 24h Stats (/spot/tickers)
   - ✅ Order Book Depth (with best bid/ask snapshots)
   - ✅ Klines (candlesticks) for OHLCV data
   - ✅ Recent Trades (up to 100 latest trades)
4. Data is optionally cleaned using Calculator (for spreads, midpoints, % changes; see the sketch below) and Think (for formatting).
5. An AI-powered formatter (GPT-4.1-mini) structures the results into Telegram-friendly reports.
6. The final Gate.io Spot insights are sent back instantly in HTML-formatted Telegram messages.

💡 **What You Can Do with This Agent**
This AI-driven Telegram bot enables you to:
- ✅ Track real-time spot prices for any Gate.io pair
- ✅ Monitor order book depth (liquidity snapshots)
- ✅ View recent trades for activity insights
- ✅ Analyze candlesticks across multiple intervals
- ✅ Compare bid/ask spreads with calculated metrics
- ✅ Get clean, structured data without raw JSON clutter

🛠️ **Setup Steps**
1. Create a Telegram bot: use @BotFather on Telegram to create a bot and obtain an API token.
2. Configure Telegram API credentials in n8n: add your bot token under Telegram API credentials, and replace the placeholder Telegram ID in the Authentication node with your own.
3. Import & deploy the workflow:
   - Load Gate AI Agent v1.02.json into n8n.
   - Configure your OpenAI API key.
   - Configure your Gate.io API key.
   - Save and activate the workflow.
4. Run & test: send a query (e.g., "BTC_USDT") to your Telegram bot and receive instant Gate.io market insights formatted for easy reading.

📺 **Setup Video Tutorial**
Watch the full setup guide on YouTube.

⚡ Unlock real-time Gate.io Spot Market insights directly in Telegram: fast, clean, and reliable.

🧾 **Licensing & Attribution**
© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha on LinkedIn
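To illustrate the Calculator step, here is a Code-node sketch that derives the spread and mid-price from the top of the order book. It assumes a prior HTTP node fetched Gate.io's /spot/order_book endpoint, which returns bids and asks as [price, amount] string pairs.

```javascript
// Compute spread and mid-price from the best bid/ask levels.
const book = $input.first().json;

const bestBid = parseFloat(book.bids[0][0]);
const bestAsk = parseFloat(book.asks[0][0]);

const spread = bestAsk - bestBid;
const midPrice = (bestAsk + bestBid) / 2;
const spreadPct = (spread / midPrice) * 100;

return [{
  json: {
    bestBid,
    bestAsk,
    midPrice: +midPrice.toFixed(8),
    spreadPct: +spreadPct.toFixed(4), // e.g. 0.0123 (%)
  },
}];
```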