by tbphp
## Overview

This n8n template monitors specified GitHub repositories. When a new release is published, it automatically fetches the information, uses AI (Google Gemini by default) to summarize and translate it into Chinese, and sends a formatted notification to a designated Slack channel.

**Core Features:**

- **Automated Monitoring**: Checks for updates on a predefined schedule.
- **Intelligent Processing**: Uses AI to extract key information and translate it.
- **Error Handling**: Sends an error notification if fetching RSS for a single repository fails, without affecting the others.
- **Duplicate Prevention**: Remembers the last processed release ID using Redis to ensure only new content is pushed (see the sketch at the end of this section).

## Prerequisites

- **Slack**: Configure your Slack app credentials in n8n.
- **Redis**: Have an available Redis service and configure its credentials in n8n.
- **AI Provider (Gemini)**: Configure credentials for Google Gemini (or your chosen AI model) in n8n.

## Configuration Instructions

After importing the template, you need to modify the following key nodes:

1. **Cron Trigger**: Adjust the Rule setting to change the update check frequency (default is `0 */10 9-23 * * *`, checking every 10 minutes between 9 AM and 11 PM daily).
2. **GitHub Config** (Repository List - Code Node): Edit the JavaScript array within this node's code area to modify or add the repositories you want to follow. Each repository object needs a `name` (custom display name) and `github` (format: `owner/repo`). Example:

```javascript
[
  {
    "name": "n8n",          // Custom display name
    "github": "n8n-io/n8n"  // GitHub path
  },
  {
    "name": "LobeChat",
    "github": "lobehub/lobe-chat"
  }
  // ... add more repositories
]
```

3. **Redis and Redis2** (Redis Connection): Select your configured Redis credentials in both nodes.
4. **Gemini** (AI Model): Select your configured Google Gemini credentials. (Optional) Replace with a different supported AI model node and select its credentials.
5. **Information Extractor** (AI Processing & Translation): Main configuration: review the System Prompt. By default, it asks the AI to extract information and translate it into Chinese. Modify this prompt if you need a different language or summary style.
6. **Send Message and Send Error** (Slack Notifications): Select your configured Slack credentials in both Slack nodes and set the target Channel ID for notifications.

## Workflow Overview

1. **Start**: Cron Trigger initiates the workflow on schedule.
2. **Load Config**: GitHub Config provides the list of repositories to monitor.
3. **Loop**: The Loop node iterates through each repository.
4. **Fetch & Check**: The RSS node attempts to fetch the repository's releases feed, and If No Error checks for success:
   - Failure: Send Error posts an error to Slack and skips this repository.
   - Success: Continues.
5. **Check for New Release**: The Redis node retrieves the last recorded Release ID for this repository, and the If New node compares the latest Release ID with the recorded ID:
   - Different IDs (new release): Proceeds to processing.
   - Same ID (already processed): Skips this repository.
6. **Process & Notify** (only for new releases): Information Extractor (with Gemini) extracts, summarizes, and translates the content. The Code node formats the information into Slack Block Kit. Send Message sends the formatted message to Slack. The Redis2 node stores the current Release ID in Redis.
7. **End**: The workflow finishes after processing all repositories.

## Conclusion

Once configured, this template automates GitHub release monitoring, uses AI to distill key information, and delivers it efficiently to your Slack workspace.
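For reference, here is a minimal sketch of the duplicate-prevention check in n8n Code-node style. The node references and property names (`value`, `id`, `guid`) are illustrative assumptions, not the template's exact fields.

```javascript
// Minimal sketch of the "If New" comparison (n8n Code node).
// Assumes the RSS feed item and the Redis lookup are available as inputs;
// property names below are illustrative, not the template's exact fields.
const latest = $('RSS').first().json;            // newest release from the feed
const storedId = $('Redis').first().json.value;  // last processed release ID

// Use the release's unique identifier (the feed entry id/guid).
const latestId = latest.id ?? latest.guid;

if (latestId === storedId) {
  return []; // already processed, skip this repository
}

// New release: pass it on; a later Redis "set" node records latestId.
return [{ json: { ...latest, latestId } }];
```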
by Jakkrapat Ampring
## Main Use Case

This workflow enables automated, AI-assisted replies to users messaging a LINE Official Account, while storing and referencing chat history from Google Sheets to maintain context. It is ideal for businesses or support teams that want to provide smart, personalized customer interactions using AI with memory.

## How It Works (Step-by-Step)

1. **Connect to the LINE Official Account's API**: A Webhook listens for incoming messages from users on LINE. When a message is received, it triggers the workflow.
2. **Prepare the Data**: An Edit Fields module structures the incoming data (e.g., extracts the user ID and message content). This ensures the data is clean and usable downstream.
3. **Retrieve Chat History**: The user's previous conversations are fetched from a Google Sheet, so the AI has memory and can continue conversations contextually.
4. **Prepare the Prompt**: The retrieved chat history is combined with the new message to form a complete prompt for the AI. Example format: "User previously said X. Now they said Y. How should we respond?"
5. **AI Agent (Google Gemini)**: The formatted prompt is passed to an AI Agent (Google Gemini Chat Model), which generates a response based on the message plus history. Components used: Chat Model, Memory, Tool, and Output Parser for accurate replies.
6. **Split & Clean History**: The conversation history is split into smaller chunks for cleaning and storage, keeping the Google Sheet readable and manageable over time.
7. **Save Chat History**: The cleaned new message and AI reply are saved to Google Sheets, updating the chat history for future context.
8. **Send Reply to LINE**: The AI-generated reply is sent back to the user via a POST HTTP Request to the LINE Messaging API (see the sketch below).

## How to Set Up

Prerequisites:

- LINE Official Account
- Google Sheet to store chat history
- Google Gemini API, or another AI agent with context memory
- Automation platform (e.g., n8n)

Step-by-step:

1. **Create a Webhook on LINE**: Set the webhook URL to your automation service and enable webhook events.
2. **Design Your Google Sheet**: Create a sheet with columns: User ID, Timestamp, Message, AI Reply.
3. **Set Up Modules in the Automation Platform**:
   - Webhook: receives user messages.
   - Edit Fields: extracts the user ID and message.
   - Google Sheets Read: fetches message history.
   - Prompt Composer: formats the prompt using past history + the new message.
   - AI Agent: connects to Google Gemini for smart replies.
   - Split & Clean: cleans and chunks history as needed.
   - Google Sheets Write: saves the updated conversation.
   - HTTP Request: sends the reply to LINE via the Messaging API.
4. **Test Your Workflow**: Send a message from LINE and watch the full loop: receive → process → AI → store → reply.
5. **Deploy & Monitor**: Ensure error handling is in place (e.g., for blank messages or failed API calls), and regularly check your Google Sheets for storage limits. If limits are reached, expand or archive the history rows.

## 📦 Benefits

- Maintains context in conversations
- Personalized, AI-driven responses
- Easy history tracking via Google Sheets
- Fully automated and scalable
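For reference, here is a minimal sketch of the final "Send Reply to LINE" step as plain JavaScript. The endpoint and body shape follow LINE's public Messaging API reply endpoint; the channel access token variable and the sources of `replyToken` and `aiReply` are placeholders.

```javascript
// Minimal sketch of the HTTP Request step that replies via the
// LINE Messaging API. CHANNEL_ACCESS_TOKEN and the inputs are placeholders.
const CHANNEL_ACCESS_TOKEN = process.env.LINE_CHANNEL_ACCESS_TOKEN;

async function replyToLine(replyToken, aiReply) {
  const res = await fetch('https://api.line.me/v2/bot/message/reply', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${CHANNEL_ACCESS_TOKEN}`,
    },
    body: JSON.stringify({
      replyToken, // taken from the incoming webhook event
      messages: [{ type: 'text', text: aiReply }],
    }),
  });
  if (!res.ok) throw new Error(`LINE reply failed: ${res.status}`);
}
```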
by Preston Zeller
## How It Works

This workflow automates the entire property lead generation process in a few simple steps:

1. **Property Search**: Connects to BatchData's Property Search API with customizable parameters (location, property type, value range, equity percentage, etc.).
2. **Lead Filtering & Scoring**: Processes results to identify the most promising leads based on criteria like absentee ownership, years owned, equity percentage, and tax status. Each property receives a lead score to prioritize follow-up (see the sketch at the end of this section).
3. **Skip Tracing**: Automatically retrieves owner contact information (phone, email, mailing address) for each qualified property.
4. **Data Formatting**: Structures all property and owner data into a clean, organized format ready for your systems.
5. **Multi-Channel Output**:
   - Generates an Excel spreadsheet with all lead details
   - Pushes leads directly to your CRM (configurable for HubSpot, Salesforce, etc.)
   - Sends a summary email with the spreadsheet attached

The workflow can run on a daily schedule or be triggered manually as needed. All parameters are easily configurable through dedicated nodes, requiring no coding knowledge.

## Who's It For

This workflow is perfect for:

- **Real Estate Investors** looking to find off-market properties with motivated sellers
- **Real Estate Agents** who want to generate listing leads from distressed or high-equity properties
- **Investment Companies** that need regular lead flow for acquisitions
- **Real Estate Marketers** who run targeted campaigns to property owners
- **Wholesalers** seeking to build a pipeline of potential deals
- **Property Service Providers** (roof repair, renovation contractors, etc.) who target specific property types

Anyone who needs reliable, consistent lead generation for real estate without the manual work of searching, filtering, and organizing property data will benefit from this automation.

## About BatchData

BatchData is a comprehensive property data provider that offers access to nationwide property information, owner details, and skip tracing services. Key features include:

- **Extensive Database**: Covers 150+ million properties across all 50 states
- **Rich Property Data**: Includes ownership information, tax records, sales history, valuation estimates, equity positions, and more
- **Skip Tracing Services**: Provides owner contact information including phone numbers, email addresses, and mailing addresses
- **Distressed Property Indicators**: Flags for pre-foreclosure, tax delinquency, vacancy, and other motivation factors
- **RESTful API**: Professional API for programmatic access to all property data services
- **Regular Updates**: Continuously refreshed data for accurate information

BatchData's services are designed for real estate professionals who need reliable property and owner information to power their marketing and acquisition strategies. Their API-first approach makes it ideal for workflow automation tools like n8n.
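To make the Lead Filtering & Scoring step concrete, here is a hedged sketch in n8n Code-node style. The criteria come from the description above, but the weights, thresholds, and field names are illustrative assumptions, not BatchData's actual response schema.

```javascript
// Illustrative lead-scoring sketch (n8n Code node).
// Field names and weights are assumptions for demonstration; adapt them
// to the actual BatchData response schema.
function scoreLead(property) {
  let score = 0;
  if (property.absenteeOwner) score += 30;        // owner lives elsewhere
  if (property.equityPercent >= 50) score += 25;  // high equity
  if (property.yearsOwned >= 10) score += 20;     // long-held property
  if (property.taxDelinquent) score += 25;        // distress indicator
  return score;
}

// Keep only promising leads, highest score first.
const leads = items
  .map((item) => ({ ...item.json, leadScore: scoreLead(item.json) }))
  .filter((lead) => lead.leadScore >= 50)
  .sort((a, b) => b.leadScore - a.leadScore);

return leads.map((lead) => ({ json: lead }));
```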
by Dataki
This workflow allows you to easily evaluate and compare the outputs of two language models (LLMs) before choosing one for production. In the chat interface, both model outputs are shown side by side. Their responses are also logged into a Google Sheet, where they can be evaluated manually or automatically using a more advanced model.

## Use Case

You're developing an AI agent, and since LLMs are non-deterministic, you want to determine which one performs best for your specific use case. This template is designed to help you compare them effectively.

## How It Works

1. The user sends a message to the chat interface.
2. The input is duplicated and sent to two different LLMs.
3. Each model processes the same prompt independently, using its own memory context.
4. Their answers, along with the user input and previous context, are logged to Google Sheets (a sketch of the logged row appears at the end of this section).
5. You can review, compare, and evaluate the model outputs manually (or automate it later).
6. In the chat, both responses are also shown one after the other for direct comparison.

## How To Use It

1. Copy this Google Sheets template (File > Make a Copy).
2. Set up your System Prompt and Tools in the AI Agent node to suit your use case.
3. Start chatting! Each message will trigger both models and log their responses to the spreadsheet.

Note: This version is set up for two models. If you want to compare more, you'll need to extend the workflow logic and update the sheet.

## About Models

You can use OpenRouter or Vertex AI to test models across providers. If you're using a node for a specific provider, like OpenAI, you can compare different models from that provider (e.g., gpt-4.1 vs gpt-4.1-mini).

## Evaluation in Google Sheets

This is ideal for teams, allowing non-technical stakeholders (not just data scientists) to evaluate responses based on real-world needs. Advanced users can automate this evaluation using a more capable model (like o3 from OpenAI), but note that this will increase token usage and cost.

## Token Considerations

Since each input is processed by two different models, the workflow will consume more tokens overall. Keep an eye on usage, especially if working with longer prompts or running multiple evaluations, as this can impact cost.
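As a loose illustration of the logging step, here is a sketch of the row each chat turn might append to the sheet. The column names are assumptions; match them to your copy of the template spreadsheet.

```javascript
// Illustrative sketch of the row logged to Google Sheets for each turn.
// Column names are assumptions; align them with the template sheet's headers.
function buildLogRow(sessionId, userInput, responseA, responseB) {
  return {
    timestamp: new Date().toISOString(),
    sessionId,            // keeps each chat's memory context separate
    userInput,            // the prompt both models received
    modelA_response: responseA,
    modelB_response: responseB,
    evaluation: '',       // filled in manually (or by a stronger model) later
  };
}
```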
by Alfonso Corretti
Gmail to Vector Embeddings with PGVector and Ollama

## Who is this for?

Everyone! Did you dream of asking an AI "what hotel did I stay in for holidays last summer?" or "what were my marks last semester like?" Dream no more: vector similarity searches and this workflow are the foundations to make it possible (as long as the information appears in your e-mails 😅).

## 100% local

This workflow is designed to use locally-hosted open source software: Ollama as the LLM provider, nomic-embed-text as the embeddings model, and pgvector as the vector database engine, on top of Postgres.

## But... how?!

First, specify the date you created your Gmail account, then manually run the workflow to bulk-read all your e-mail in monthly batches. Your database is now populated! From there, it's the task of other workflows to query the vector database. Activate the workflow so that new e-mail is continuously added by the Gmail Trigger upon receipt.

## Structured AND Vectorized

This workflow stores your e-mail activity in two ways:

- In a structured table
- In a vector embeddings table

The information in both can be correlated by Gmail's message id, which is stored in the vectors table as the metadata property emails_metadata.id. That way, consumers can benefit from both worlds! ✨ Vector similarity searches enable semantic queries, while structured queries retrieve more factual data like the message id, its date, or who it came from. (A sketch of such a correlated lookup appears at the end of this section.)

## Other useful templates

My template Chat with Your Email History using Telegram, Mistral and Pgvector for RAG is a ready-made solution that consumes this workflow. You may also pair this workflow with my other template, Email Assistant: Convert Natural Language to SQL Queries with Phi4-mini and PostgreSQL, to enable RAG workflows that use both the structured and vectorized databases.

## Customizations

The e-mail provider could presumably be changed, but then you'd have to identify an alternative id field; Message-ID would be a more standard option. There are a few opinionated choices as to what metadata to store, but those shouldn't need adjustments.
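Here is a hedged sketch of the combined lookup described above: a vector-search hit is resolved back to its structured row via the Gmail message id. The `emails` table and its columns are illustrative assumptions; only the emails_metadata.id metadata property comes from the workflow itself.

```javascript
// Hedged sketch: correlating a vector-search hit back to the structured
// e-mail row via the Gmail message id stored as metadata (emails_metadata.id).
// Table and column names are illustrative; match them to your own schema.
import pg from 'pg';

const client = new pg.Client({ connectionString: process.env.PG_URL });
await client.connect();

// Suppose a similarity search over the vectors table returned this metadata:
const hit = { metadata: { emails_metadata: { id: '18f2c3a9b7d4e001' } } };

// Fetch the factual fields from the structured table for the same message.
const { rows } = await client.query(
  'SELECT message_id, received_at, from_address, subject FROM emails WHERE message_id = $1',
  [hit.metadata.emails_metadata.id],
);
console.log(rows[0]); // date, sender, subject for the semantically matched e-mail

await client.end();
```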
by Łukasz
## Who is it for?

This is an automation for support project managers that helps not only keep developers informed but also automatically keep clients in the loop, which is especially useful if you are managing an SLA-like agreement. It is a simple incident management board built on a free Kanban board and extended in functionality via n8n.

## How It Works

The script has two entry points.

The first is the incident form. When incident details are provided, the automation gets the incident definitions from the database and pushes both pieces of information to the AI. The AI compares the definitions with the client request, refines the incident priority, and pushes it into the NocoDB database.

The second is a schedule trigger, which is responsible for regular notifications on task status. If a task is not picked up or delivered in the proper time, emails or Slack messages are sent both to the client and to the responsible developer. (A sketch of this overdue check appears at the end of this section.)

## How to set up?

1. Clone the automation.
2. Create two NocoDB tables (samples below): one with definitions and a second that serves as the Kanban board (mind the column naming!).
3. Set up the email and Slack connections.
4. You should be ready to go.

## Different incident naming

If your incident level naming is different, you need to update a few nodes and a few columns in NocoDB. This is because incident naming must be unified across the automation flow, the incident definitions, and the NocoDB select-field columns. Make sure the following are the same:

- NocoDB: Incident definitions, column "Title"
- NocoDB: Tasks table, single select fields "expected category" and "assigned category"
- n8n: Incident Form "Incident Desired Category"

## NocoDB Tables

Incident definitions table:

| Title | Definition | Response time | Resolution time | Default assignee |
|---|---|---|---|---|
| single line text | text | number | number | email |

Tasks table:

| email | message | expected category | internal notes | assigned category | status | expected response | expected resolution | assignee | assignee slack |
|---|---|---|---|---|---|---|---|---|---|
| email | text | single select | text | single select | single select | date and time | date and time | email | slack username |

## Use the Kanban board

Simply set up a Kanban view and stack by the "status" field.

## What's More?

That's actually it. I hope this automation helps make your support line much more streamlined! There is more you could do with it, depending on your needs. For example, you could add an Email trigger to handle incoming support requests (but remember to adjust the nodes accordingly). You could also build a different notification schema to suit your needs (for example, a delay of a day or two before notifying the client that a task is past due).

Visit my profile for other automations for businesses. And if you are looking for dedicated software development, do not hesitate to reach out!
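For illustration, here is a sketch of the schedule trigger's overdue check in n8n Code-node style. The column names follow the Tasks table above, while the status values ("Open", "Done") are assumptions you should map to your own Kanban stacks.

```javascript
// Hedged sketch of the schedule-trigger check (n8n Code node).
// Column names follow the Tasks table above; status values are assumptions.
const now = new Date();

const overdue = items.filter((item) => {
  const task = item.json;
  const notPickedUp =
    task.status === 'Open' && now > new Date(task['expected response']);
  const notDelivered =
    task.status !== 'Done' && now > new Date(task['expected resolution']);
  return notPickedUp || notDelivered;
});

// Downstream nodes e-mail the client (task.email) and ping the developer
// (task['assignee slack']) for each overdue task.
return overdue;
```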
by Aitor | 1Node
Talk to Your Apps: Building a Personal Assistant MCP Server with Google Gemini

Wouldn't it be cool to just tell your computer or phone to "schedule a meeting with Sarah next Tuesday at 3 PM" or "find John Doe's email address" and have it actually do it? That's the dream of a personal assistant! With n8n and the power of MCP and AI models like Google Gemini, you can build something pretty close to that. We've put together a workflow that shows how you can use a natural language chat interface to interact with your other apps, like your CRM, email, and calendar.

## What You Need to Get Started

Before you dive in, you'll need a few things:

- **n8n**: An n8n instance (either cloud or self-hosted) to build and run your workflow.
- **Google Gemini Access**: Access to the Google Gemini model via an API key.
- **Credentials for Your Apps**: API keys or login details for the specific CRM, Email, and Calendar services you want to connect (like Google Sheets for CRM, Gmail, Google Calendar, etc., depending on your chosen nodes).
- **A Chat Interface**: A way to send messages to n8n to trigger the workflow (e.g., via a chat app node or webhook).

## How it Works (In Simple Terms)

Imagine this workflow as a helpful assistant who sits between you and your computer.

**Step 1: You Talk, the AI Agent Listens.** It all starts when you send a message through your connected chat interface. Think of this as speaking directly to your assistant.

**Step 2: The Assistant's Brain (Google Gemini).** Your message goes straight to the assistant's "brain": a smart AI model like Google Gemini. In this template we use the latest Gemini 2.5 Pro, but that is totally up to you; experiment and track which model fits the kind of tasks you will pass to the agent. The brain's job is to understand exactly what you're asking for. Are you asking to create something? To find information? To update something? The brain also uses a "memory" so it can remember what you've talked about recently, making the conversation feel more natural. We use the default context window: the past 5 interactions.

**Step 3: The Assistant Decides What Tool to Use.** Once the brain understands your request, the assistant figures out the best way to help you. It looks at the request and thinks, "Okay, to do this, I need to use one of my tools."

**Step 4: The Assistant's Toolbox (MCP & Your Apps).** Here's where the "MCP" part comes in. Think of MCP (Model Context Protocol) as the assistant's special toolbox. Inside are connections to all the different apps and services you use: your CRM for contacts, your email service, and your calendar. The MCP system acts like a manager for these tools, making them available to the assistant whenever they're needed.

**Step 5: Using the Right Tool for the Job.** Based on what you asked for, the assistant picks the correct tool from the toolbox. If you asked to find a contact, it grabs the "Get Contact" node from the CRM section. If you wanted to schedule a meeting, it picks the "Create Event" node from the Calendar section. If you asked to draft an email, it uses the "Draft Email" node.

**Step 6: The Tool Takes Action.** Now the node (or set of nodes) gets to work, performing the action you requested within the specific app. The CRM tool finds or adds the contact. The Email tool drafts the message. The Calendar tool creates the event.

**Step 7: Task Completed!**
And just like that, your request is handled automatically, all because you simply told your assistant what you wanted in plain language.

## Why This is Awesome

This kind of workflow shows the power of combining AI with automation platforms like n8n. You can move beyond clicking buttons and filling out forms and instead interact with your digital life using natural conversation. n8n makes it possible to visually build these complex connections between your chat, the AI brain, and all your different apps.

## Taking it Further (Possible Enhancements)

This is just the start! You could enhance this personal assistant by:

- Connecting more apps and services (task managers, project tools, etc.).
- Adding capabilities to search the web or internal documents.
- Implementing more sophisticated memory or context handling.
- Getting a notification, such as in Slack or Microsoft Teams, when the AI agent finishes each task.
- Allowing the assistant to ask clarifying questions if needed.
- Building a more robust prompt for the AI agent.

## Ready to Automate Your Workflow?

Imagine the dozens of hours your team could save weekly by automating repetitive tasks through a simple, natural language interface. Need help? Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
by Juan Sanchez
🧾 Personal Invoice Processor

This n8n workflow automates the extraction and organization of personal invoices in Colombia received via Gmail. It includes the following key steps:

🔁 Flow Summary

1. **Email Trigger**: Polls Gmail every 30 minutes for emails with .zip attachments (assumed to contain invoices). Expects ZIP files following DIAN standards.
2. **ZIP File Handling**: Extracts all files and filters only PDF and XML files for processing.
3. **Data Extraction & Processing**: Uses a LangChain Agent + OpenAI (GPT-4o-mini) to extract:
   - Tipo de documento (Factura / Nota Crédito)
   - Número de factura
   - Fecha de emisión (YYYY-MM-DD)
   - NIT emisor y receptor (sin dígito de verificación)
   - Razón social del emisor
   - Subtotal, IVA, Total
   - CUFE
   - Resumen de compra (max 20 words, formatted sentence)
4. **Validation**: Ensures Total = Subtotal + IVA using a calculator node.
5. **Storage**:
   - Uploads the original PDF to Google Drive.
   - Renames the file to YYYY-MM-DD-NUMERO_FACTURA.pdf.
   - Inserts or updates invoice details in Google Sheets using a unique Key (NIT_Emisor + Numero_Factura) to prevent duplication (see the sketch below).

> ⚙️ Designed for personal use with minimal latency tolerance and high automation reliability.
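As a sketch of the storage step's key, filename, and validation logic described above; the function and property names are illustrative, not the workflow's actual node code.

```javascript
// Hedged sketch of the storage step's key and filename logic.
// Property names are illustrative; map them to the extractor's output fields.
function buildStorageFields(invoice) {
  // Unique key prevents duplicate rows in Google Sheets.
  const key = `${invoice.nitEmisor}_${invoice.numeroFactura}`;

  // Drive filename: YYYY-MM-DD-NUMERO_FACTURA.pdf
  const fileName = `${invoice.fechaEmision}-${invoice.numeroFactura}.pdf`;

  // Validation mirrors the calculator node: Total = Subtotal + IVA.
  const isValid =
    Math.abs(invoice.subtotal + invoice.iva - invoice.total) < 0.01;

  return { key, fileName, isValid };
}

// Example:
// buildStorageFields({ nitEmisor: '900123456', numeroFactura: 'FE1234',
//   fechaEmision: '2024-05-10', subtotal: 100000, iva: 19000, total: 119000 })
// → { key: '900123456_FE1234', fileName: '2024-05-10-FE1234.pdf', isValid: true }
```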
by ainabler
Overall Description & Potential

<< What Does This Flow Do? >>

Overall, this workflow is an intelligent sales outreach automation engine that transforms raw leads from a form or a list into highly personalized, ready-to-send introductory email drafts. The process: it starts by fetching data, enriches it with in-depth AI research to uncover "pain points," and then uses those research findings to craft an email that is relevant to the solutions you offer.

This system solves a key problem in sales: the lack of time to conduct in-depth research on every single lead. By automating the research and drafting stages, the sales team can focus on higher-value activities, like engaging with "warm" prospects and handling negotiations. Using Google Sheets as the main dashboard allows the team to monitor the entire process, from lead entry, research status, and email drafts all the way to the send link, within a single, familiar interface. (A sketch of such a send link appears at the end of this section.)

<< Potential Future Enhancements >>

This workflow has a very strong foundation and can be developed into an even more sophisticated system:

- **Full Automation (Zero-Touch)**: Instead of generating a manual-click link, the output from the AI Agent can be piped directly into a Gmail or Microsoft 365 Email node to send emails automatically. A Wait node could add a delay of a few minutes or hours after the draft is created, preventing instant sending.
- **Automated Follow-up Sequences**: The workflow can be extended to manage follow-up emails. By using a webhook to track email opens or replies, you could build logic like: "If the intro email is not replied to within 3 days, trigger the AI Agent again to generate follow-up email #1 based on a different template, and then send it."
- **AI-Powered Lead Scoring**: After the research stage, the AI could be given the additional task of scoring leads (e.g., 1-10 or High/Medium/Low priority) based on how well the target company's profile matches your ideal customer profile (ICP). This helps the sales team prioritize the most promising leads.
- **Full CRM Integration**: Instead of Google Sheets, the workflow could connect directly to HubSpot, Salesforce, or Pipedrive. It would pull new leads from the CRM, perform the research, draft the email, and log all activities (research results, sent emails) back to the contact's timeline in the CRM automatically.
- **Multi-Channel Outreach**: Beyond email, the AI could be instructed to draft personalized LinkedIn connection request messages or WhatsApp messages. The workflow could then use the appropriate APIs to send these messages, expanding your outreach beyond email.
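To illustrate the manual-click send link the flow writes to Google Sheets, here is a hedged sketch using Gmail's standard compose URL; the lead and draft field names are assumptions.

```javascript
// Hedged sketch: building the one-click "send link" stored in Google Sheets.
// Field names (lead.email, draft.subject, draft.body) are illustrative.
function buildSendLink(lead, draft) {
  const params = new URLSearchParams({
    view: 'cm', // Gmail "compose message" view
    fs: '1',
    to: lead.email,
    su: draft.subject,
    body: draft.body,
  });
  // Opens a pre-filled Gmail compose window for a human to review and send.
  return `https://mail.google.com/mail/?${params.toString()}`;
}
```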
by Lukas Kunhardt
Intelligently Segment PDFs by Table of Contents

This workflow empowers you to automatically process PDF documents, intelligently identify or generate a hierarchical Table of Contents (ToC), and then segment the entire document's content based on those ToC headings. It effectively breaks down a large PDF into its constituent sections, each paired with its corresponding heading and hierarchical level.

## Why It's Useful

Unlock the true structure of your PDFs for granular access and advanced processing:

- **AI Agent Tool**: A key use case is to provide this workflow as a tool to an AI agent. The agent can then use the segmented output to "read" and navigate to specific sections of a document to answer questions, extract information, or perform tasks with much greater accuracy and efficiency.
- **Targeted Content Extraction**: Programmatically pull out specific chapters or subsections for focused analysis, summarization, reporting, or repurposing content.
- **Enhanced RAG Systems**: Improve your Retrieval Augmented Generation (RAG) pipelines by feeding them well-defined, contextually relevant document sections instead of entire, monolithic PDFs. This leads to more precise AI-generated responses.
- **Modular Document Processing**: Process different parts of a document with distinct logic in subsequent n8n workflows by acting on individual sections.
- **Data Preparation**: Seamlessly convert lengthy PDFs into a structured format where each section (including its heading, level, and content in multiple formats) becomes a distinct, manageable item.

## How It Works

1. **Ingestion & Advanced Parsing**: The workflow ingests a PDF (via a provided URL, or a pre-set one for manual runs). It then uses Chunkr.ai to perform Optical Character Recognition (OCR) and parse the document into detailed structural elements, extracting text, HTML, and Markdown for each segment.
2. **AI-Powered Table of Contents Generation**: A Google Gemini AI model analyzes the initial pages of the document (where a ToC often resides) along with section headers extracted by Chunkr as a fallback. This allows it to construct an accurate, hierarchical Table of Contents in a structured JSON format, even if the PDF lacks an explicit ToC or has a poorly formatted one.
3. **Precise Content Segmentation**: Custom code then maps the AI-generated ToC headings to their corresponding content within the parsed document from Chunkr, intelligently determining the precise start and end of each section (a sketch of this idea appears at the end of this section).
4. **Structured & Flexible Output**: The primary output provides each identified section as an individual n8n item, including the heading text, its hierarchical level (e.g., 1, 1.1, 2), and the full content of that section in Text, HTML, and Markdown formats. Optionally, the workflow can also reconstruct the entire document into a single, navigable HTML file or a clean Markdown file.

## What You Need

To run this workflow, you'll need:

- **Input PDF**: When triggered by another workflow, a URL pointing to the PDF document. When triggered manually, the workflow uses a pre-configured sample PDF from Google Drive for demonstration (this can be customized).
- **Chunkr.ai API Key**: Required for the initial parsing and OCR of the PDF document. Insert this into the relevant HTTP Request nodes.
- **Google Gemini API Credentials**: Necessary for the AI model to intelligently generate the Table of Contents. Configure this in the Google Gemini Chat Model nodes.

## Outputs

The workflow primarily generates Individual Document Sections: a series of n8n items, each representing a distinct section of the PDF and containing:

- heading: The text of the section heading.
- headingLevel: The hierarchical level of the heading (e.g., 1 for H1, 2 for H2).
- sectionText: The plain text content of the section.
- sectionHTML: The HTML content of the section.
- sectionMarkdown: The Markdown content of the section.

Alternatively, you can configure the workflow to output a Full Reconstructed Document: a single HTML file, or a single Markdown file, representing the entire processed document.

This workflow is ideal for anyone looking to deconstruct PDFs into meaningful, manageable parts for advanced automation, AI integration, or detailed content analysis.
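The workflow's custom segmentation code is not shown in this description, but its core idea can be sketched as follows; the shapes of `toc` and `segments` are simplified assumptions about the AI-generated ToC and the Chunkr parse output.

```javascript
// Hedged sketch of the content-segmentation idea: map each ToC heading to
// its span of parsed segments, ending each section where the next begins.
function segmentByToc(toc, segments) {
  // toc: [{ heading: '1. Intro', level: 1 }, ...] in document order
  // segments: [{ text, html, markdown }, ...] in document order
  const sections = [];
  let cursor = 0;

  for (let i = 0; i < toc.length; i++) {
    // Find where this heading occurs among the parsed segments.
    const start = segments.findIndex(
      (s, idx) => idx >= cursor && s.text.includes(toc[i].heading),
    );
    if (start === -1) continue; // heading not matched; skip it

    // The section ends where the next matched heading begins.
    const next = i + 1 < toc.length
      ? segments.findIndex(
          (s, idx) => idx > start && s.text.includes(toc[i + 1].heading),
        )
      : segments.length;
    const end = next === -1 ? segments.length : next;

    sections.push({
      heading: toc[i].heading,
      headingLevel: toc[i].level,
      sectionText: segments.slice(start, end).map((s) => s.text).join('\n'),
    });
    cursor = end;
  }
  return sections;
}
```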
by Don Jayamaha Jr
A powerful sub-agent that collects real-time market structure data from Binance for any trading pair, including price, volume, order book depth, and candlestick snapshots across multiple timeframes (15m, 1h, 4h, 1d).

🎥 Watch Tutorial:

🎯 Purpose

This workflow powers the Quant AI system with:

- ✅ Real-time price feed (/ticker/price)
- ✅ 24-hour stats (OHLC, % change, volume via /ticker/24hr)
- ✅ Live order book depth (/depth)
- ✅ Latest candlestick data (/klines) for all major intervals

All outputs are parsed and formatted using GPT and returned to the parent agent (e.g., the Financial Analyst Tool) as a Telegram-optimized summary.

⚙️ Workflow Architecture

| Node | Role |
| --- | --- |
| 🔗 Execute Workflow Trigger | Accepts input from the parent workflow |
| 🧠 Simple Memory | Stores session + symbol info |
| 🤖 Binance SM Market Agent | Parses the prompt, routes tool calls |
| 💡 OpenAI Chat Model (gpt-4o-mini) | Converts raw data into a clean, readable format for Telegram |
| 🌐 getCurrentPrice | Gets the latest price |
| 🌐 get24hrStats | Gets OHLC/volume over the past 24 hours |
| 🌐 getOrderBook | Gets the top 100 bids and asks |
| 🌐 getKlines | Gets the latest 15m, 1h, 4h, and 1d candles |

📥 Input Requirements

This workflow is not called directly by the user. Instead, it is triggered by another workflow, such as:

```json
{
  "message": "BTCUSDT",
  "sessionId": "539847013"
}
```

📤 Telegram Output Example

📊 BTCUSDT Market Overview
💰 Price: $63,220
📈 24h Change: +2.3% | Volume: 45,210 BTC
📉 Order Book
• Top Bid: $63,190
• Top Ask: $63,230
🕰️ Latest Candles
• 15m: O: $63,000 | C: $63,220 | Vol: 320 BTC
• 1h : O: $62,700 | C: $63,300 | Vol: 980 BTC
• 4h : O: $61,800 | C: $63,500 | Vol: 2,410 BTC
• 1d : O: $59,200 | C: $63,220 | Vol: 7,850 BTC

✅ Use Cases

| Scenario | Output Provided |
| --- | --- |
| "Show current BTC price and trend" | Price, 24h stats, candles, and order book in one message |
| "Candles for SOL" | 15m, 1h, 4h, 1d candlesticks for SOLUSDT |
| Triggered by the Quant AI system | Clean Telegram-ready summary with all structure tools merged |

🧩 Toolchain Breakdown

| Tool Name | Endpoint | Purpose |
| --- | --- | --- |
| getCurrentPrice | /api/v3/ticker/price | Latest trade price |
| get24hrStats | /api/v3/ticker/24hr | 24h OHLC, % change, volume |
| getOrderBook | /api/v3/depth | Top 100 bids and asks |
| getKlines | /api/v3/klines | 1-candle snapshot across 4 TFs |

🚀 Installation Steps

1. Import the JSON into your n8n instance.
2. Connect your OpenAI credentials for the Chat Model node.
3. No Binance API key needed; these are public endpoints.
4. Trigger this tool only via:
   - Binance SM Financial Analyst Tool
   - Binance Spot Market Quant AI Agent

🔐 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade structure are IP-protected. No unauthorized rebranding permitted.

🔗 For support: Don Jayamaha – LinkedIn
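For reference, the four tool calls map onto Binance's public REST endpoints listed in the toolchain table above. A minimal standalone sketch (outside n8n) could look like this; no API key is required for these endpoints.

```javascript
// Sketch of the four public Binance calls behind this tool.
const BASE = 'https://api.binance.com/api/v3';

async function fetchMarketStructure(symbol) {
  const [price, stats, depth, klines15m] = await Promise.all([
    fetch(`${BASE}/ticker/price?symbol=${symbol}`).then((r) => r.json()),
    fetch(`${BASE}/ticker/24hr?symbol=${symbol}`).then((r) => r.json()),
    fetch(`${BASE}/depth?symbol=${symbol}&limit=100`).then((r) => r.json()),
    fetch(`${BASE}/klines?symbol=${symbol}&interval=15m&limit=1`).then((r) => r.json()),
  ]);
  return { price, stats, depth, klines15m };
}

// Example: fetchMarketStructure('BTCUSDT')
// → { price: { symbol, price }, stats: { ... }, depth: { bids, asks }, ... }
```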
by Don Jayamaha Jr
A professional-grade AI automation system for spot market trading insights on Binance. It analyzes multi-timeframe technical indicators, live price/order data, and crypto sentiment, then delivers fully formatted Telegram-style trading reports.

🎥 Watch Tutorial:

🧩 Required Workflows

You must install and activate all of the following workflows for the system to function correctly:

| ✅ Workflow Name | 📌 Function Description |
| --- | --- |
| Binance Spot Market Quant AI Agent | Final AI orchestrator. Parses the user prompt and generates Telegram-ready reports. |
| Binance SM Financial Analyst Tool | Calls indicator tools and price/order data tools. Synthesizes structured inputs. |
| Binance SM News and Sentiment Analyst Webhook Tool | Analyzes crypto sentiment; returns a summary and headlines via POST webhook. |
| Binance SM Price/24hrStats/OrderBook/Kline Tool | Pulls price, order book, 24h stats, and OHLCV klines for 15m–1d. |
| Binance SM 15min Indicators Tool | Calculates 15m RSI, MACD, BBANDS, ADX, and SMA/EMA from Binance kline data. |
| Binance SM 1hour Indicators Tool | Same as above, but for the 1h timeframe. |
| Binance SM 4hour Indicators Tool | Same as above, but for the 4h timeframe. |
| Binance SM 1day Indicators Tool | Same as above, but for the 1d timeframe. |
| Binance SM Indicators Webhook Tool | Technical backend. Handles all webhook logic for each timeframe tool. |

⚙️ Installation Instructions

Step 1: Import Workflows

1. Open your n8n Editor UI.
2. Import each workflow JSON file one by one.
3. Activate them, or ensure they're called via Execute Workflow.

Step 2: Set Credentials

- **OpenAI API Key** (GPT-4o recommended)
- **Binance endpoints** are public (no auth required)

Step 3: Configure Webhook Endpoints

Deploy the Binance SM Indicators Webhook Tool and ensure the following paths are reachable (a sketch of how these might be called appears at the end of this section):

- /webhook/15m
- /webhook/1h
- /webhook/4h
- /webhook/1d

Step 4: Telegram Integration

1. Create a Telegram bot using @BotFather.
2. Add your Telegram API token to n8n credentials.
3. Replace the Telegram ID placeholder with your own.

Step 5: Final Trigger

Trigger the Binance Spot Market Quant AI Agent manually or from Telegram. The agent:

1. Extracts the trading pair (e.g., BTCUSDT)
2. Calls all tools for market data and sentiment
3. Generates a clean, HTML-formatted Telegram report

💬 Telegram Report Output Format

BTCUSDT Market Report

Spot Strategy
• Action: Buy
• Entry: $63,800 | SL: $61,200 | TP: $66,500
• Rationale:
  MACD Crossover (1h)
  RSI Rebound from Oversold (15m)
  Sentiment: Bullish

Leverage Strategy
• Position: Long 3x
• Entry: $63,800
• SL/TP zones same as above

News Sentiment: Slightly Bullish
• "Bitcoin rallies as ETF inflows surge" – CoinDesk
• "Whales accumulate BTC at key support" – NewsBTC

🧠 System Overview

[Telegram Trigger]
→ [Session + Auth Logic]
→ [Binance Spot Market Quant AI Agent]
→ [Financial Analyst Tool + News Tool]
→ [All Technical Indicator Tools (15m, 1h, 4h, 1d)]
→ [OrderBook/Price/Kline Fetcher]
→ [GPT-4o Reasoning]
→ [Split & Send Message to Telegram]

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company. The architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

🔗 For support: LinkedIn – Don Jayamaha
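As a rough illustration of Step 3, here is how one of the timeframe webhooks might be invoked from another workflow or script. The paths come from the list above, but the payload shape is an assumption, since the actual contract is defined inside the Binance SM Indicators Webhook Tool.

```javascript
// Hedged sketch: invoking a timeframe indicator webhook.
// The base URL and payload shape are assumptions; adapt to your deployment.
const N8N_BASE = process.env.N8N_BASE_URL; // e.g. https://your-n8n-host

async function getIndicators(timeframe, symbol) {
  // timeframe: one of '15m' | '1h' | '4h' | '1d'
  const res = await fetch(`${N8N_BASE}/webhook/${timeframe}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ symbol }),
  });
  return res.json(); // RSI, MACD, BBANDS, ADX, SMA/EMA for that timeframe
}
```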