by ainabler
What Does This Flow Do? This workflow is an intelligent sales outreach automation engine that transforms raw leads from a form or a list into highly personalized, ready-to-send introductory email drafts. It starts by fetching lead data, enriches it with in-depth AI research to uncover "pain points," and then uses those research findings to craft an email that is relevant to the solutions you offer.

This system solves a key problem in sales: the lack of time to conduct in-depth research on every single lead. By automating the research and drafting stages, the sales team can focus on higher-value activities, like engaging with "warm" prospects and handling negotiations. Using Google Sheets as the main dashboard allows the team to monitor the entire process (lead entry, research status, email drafts, and the send link) within a single, familiar interface.

Potential Future Enhancements This workflow has a very strong foundation and can be further developed into an even more sophisticated system:

- Full Automation (Zero-Touch): Instead of generating a manual-click link, the output from the AI Agent can be piped directly into a Gmail or Microsoft 365 Email node to send emails automatically. A Wait node could add a delay of a few minutes or hours after the draft is created, preventing instant sending.
- Automated Follow-up Sequences: The workflow can be extended to manage follow-up emails. By using a webhook to track email opens or replies, you could build logic like: "If the intro email is not replied to within 3 days, trigger the AI Agent again to generate follow-up email #1 based on a different template, and then send it." (See the sketch at the end of this section.)
- AI-Powered Lead Scoring: After the research stage, the AI could be given the additional task of scoring leads (e.g., 1-10 or High/Medium/Low priority) based on how well the target company's profile matches your ideal customer profile (ICP). This helps the sales team prioritize the most promising leads.
- Full CRM Integration: Instead of Google Sheets, the workflow could connect directly to HubSpot, Salesforce, or Pipedrive: pull new leads from the CRM, perform the research, draft the email, and log all activities (research results, sent emails) back to the contact's timeline automatically.
- Multi-Channel Outreach: Beyond email, the AI could be instructed to draft personalized LinkedIn connection request messages or WhatsApp messages, which the workflow could then send via the appropriate APIs, expanding your outreach beyond just email.
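To illustrate the follow-up gate described above, here is a minimal sketch of the decision logic you might put in an n8n Code node. The field names (`sentAt`, `replied`) are hypothetical; adapt them to whatever your tracking webhook actually records.

```typescript
// Hypothetical follow-up gate: pass a lead through only if the intro email
// was sent more than 3 days ago and no reply has been tracked yet.
const THREE_DAYS_MS = 3 * 24 * 60 * 60 * 1000;

interface Lead {
  email: string;
  sentAt: string;   // ISO timestamp written when the intro email was sent
  replied: boolean; // flipped to true by the reply-tracking webhook
}

function needsFollowUp(lead: Lead, now: Date = new Date()): boolean {
  const sent = new Date(lead.sentAt).getTime();
  return !lead.replied && now.getTime() - sent > THREE_DAYS_MS;
}

// Example: only leads that pass the gate are handed back to the AI Agent.
const leads: Lead[] = [
  { email: "a@example.com", sentAt: "2025-01-01T09:00:00Z", replied: false },
  { email: "b@example.com", sentAt: "2025-01-06T09:00:00Z", replied: true },
];
console.log(leads.filter((l) => needsFollowUp(l, new Date("2025-01-07T09:00:00Z"))));
```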
by Omar Akoudad
This n8n workflow helps eCommerce businesses (especially in the Cash on Delivery space) send real-time order events to the Meta (Facebook) Conversions API, ensuring accurate event tracking and better ad attribution.

Features
- Webhook Listener: Accepts incoming order data (name, phone, IP, user-agent, etc.) via HTTP POST/GET.
- Data Normalization: Cleans and formats first_name, last_name, phone, and event_time according to Facebook's strict specs.
- Data Hashing: Securely hashes sensitive user data (SHA-256), as required by Meta.
- Full Custom Data Support: Pass order value, currency, and more.

Ideal For:
- Shopify, WooCommerce, and custom stores (Laravel, Node, etc.)
- Businesses using Meta Ads that need high-quality server-side tracking
- Teams without access to full dev resources, but using n8n for automation

How It Works:
1. Receive Order from your store via Webhook or API.
2. Format & Normalize fields to match Facebook's expected structure.
3. Hash Sensitive Fields using SHA-256 (name, phone, email). (See the sketch at the end of this section.)
4. Send to Facebook via the Conversions API endpoint.

Requirements:
- A Meta Business Manager account with Conversions API access
- Your Access Token and Pixel ID set up in n8n credentials
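For reference, here is a minimal sketch of the normalize-then-hash step, following Meta's published rules (trim and lowercase text fields, reduce phone numbers to digits with country code, then SHA-256). This is a standalone illustration, not the exact Code node from the template.

```typescript
import { createHash } from "node:crypto";

// Meta requires user data to be normalized, then SHA-256 hashed (hex).
const sha256 = (value: string): string =>
  createHash("sha256").update(value).digest("hex");

// Names and emails: trim and lowercase before hashing.
const normalizeText = (value: string): string => value.trim().toLowerCase();

// Phones: digits only, including country code
// (e.g. "+1 (555) 010-9999" -> "15550109999").
const normalizePhone = (value: string): string => value.replace(/\D/g, "");

const userData = {
  fn: sha256(normalizeText("Omar ")),             // first_name
  ln: sha256(normalizeText("Akoudad")),           // last_name
  ph: sha256(normalizePhone("+212 600-000000")),  // phone
  em: sha256(normalizeText("Buyer@Example.com")), // email
};

console.log(userData);
```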
by tanaypant
This workflow is the third of three. You can find the other workflows here:
- Incident Response Workflow - Part 1
- Incident Response Workflow - Part 2
- Incident Response Workflow - Part 3

We have the following nodes in the workflow:
- Webhook node: This trigger node listens for the event fired when the Resolve button is clicked.
- PagerDuty node: This node changes the status of the incident report from Acknowledged to Resolved in PagerDuty.
- Jira Software node: This node moves the incident issue to Done.
- Mattermost node: This node publishes a message in the auxiliary channel mentioning that the incident has been marked as resolved in PagerDuty and Jira.
- Mattermost node: This node publishes a message in the specified Incidents channel that the incident has been resolved by the on-call team.
by Friedemann Schuetz
Update 19-04-2025
- Switched from OpenAI to the Claude 3.7 Sonnet module
- Added the Think Tool

The update delivers significantly better results. This is particularly noticeable with longer meetings!

What this workflow does This workflow retrieves the Zoom meeting data from the last 24 hours. The transcript of the last meeting is then retrieved and processed, a summary is created using AI and sent to all participants by email. AI is then used to create tasks and follow-up appointments based on the content of the meeting. Important: You need a Zoom Workspace Pro account and must have activated Cloud Recording/Transcripts!

This workflow has the following sequence:
1. Manual trigger (can be replaced by a scheduled trigger or a webhook)
2. Retrieval of Zoom meeting data (see the sketch at the end of this section)
3. Filter the events of the last 24 hours
4. Retrieval of transcripts and extraction of the text
5. Creation of a meeting summary, formatted as HTML and sent by email
6. Creation of tasks and a follow-up call (if discussed in the meeting) in ClickUp/Outlook (can be replaced by Gmail, Airtable, and so forth) via sub-workflow

Requirements:
- Zoom Workspace (via API and HTTP Request): Documentation
- Microsoft Outlook: Documentation
- ClickUp: Documentation
- AI API access (e.g. via OpenAI, Anthropic, Google or Ollama)
- SMTP access data (for sending the mail)

You must set up the individual sub-workflows as separate workflows. Then set the "Execute workflow trigger" here, and select the corresponding sub-workflow in the AI Agent Tools. You can select the number of domains yourself. If the data queries are not required, simply delete the corresponding tool (e.g. "Analytics_Domain_5"). Feel free to contact me via LinkedIn if you have any questions!
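For orientation, a minimal sketch of the "retrieve Zoom meeting data" step, assuming a Server-to-Server OAuth access token. The endpoint and the TRANSCRIPT file type come from Zoom's Cloud Recording API; the token variable is a placeholder, and the from/to parameters are date-granular, so the workflow still filters to the exact 24-hour window afterwards.

```typescript
// List cloud recordings for the authenticated user over the last day,
// then pick out transcript files (file_type === "TRANSCRIPT").
const token = process.env.ZOOM_ACCESS_TOKEN!; // placeholder: obtain via Zoom OAuth

const to = new Date();
const from = new Date(to.getTime() - 24 * 60 * 60 * 1000);
const fmt = (d: Date) => d.toISOString().slice(0, 10); // yyyy-mm-dd

const res = await fetch(
  `https://api.zoom.us/v2/users/me/recordings?from=${fmt(from)}&to=${fmt(to)}`,
  { headers: { Authorization: `Bearer ${token}` } },
);
const { meetings = [] } = await res.json();

for (const meeting of meetings) {
  const transcripts = (meeting.recording_files ?? []).filter(
    (f: { file_type: string }) => f.file_type === "TRANSCRIPT",
  );
  console.log(
    meeting.topic,
    transcripts.map((f: { download_url: string }) => f.download_url),
  );
}
```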
by Sascha
Having a seamless flow of customer data between your online store and your marketing platform is essential. By keeping your systems synchronized, you can ensure that your marketing campaigns are accurately targeted and effective. The integration between Shopify, a leading e-commerce platform, and Mautic, an open-source marketing automation system, is not available out-of-the-box. However, with an n8n workflow you can bridge this gap.

This template will help you:
- enhance accuracy in marketing lists by ensuring that subscription changes in Shopify are instantly updated in Mautic
- improve compliance with data protection laws by respecting users' subscription preferences across platforms
- achieve integration without the need for additional plugins or software, minimizing complexity and potential points of failure

This template will demonstrate the following concepts in n8n:
- working with Shopify in n8n
- control flow with the IF node
- using Webhooks
- validating Webhooks with the Crypto node (see the sketch at the end of this section)
- using the GraphQL node to call the Shopify Admin API

The template consists of two parts:
1. Sync Email Subscriptions from Shopify to Mautic
2. Sync Email Subscriptions from Mautic to Shopify

How to get started?
- Create a custom app in Shopify to get the credentials needed to connect n8n to Shopify. This is needed for the Shopify Trigger.
- Create Shopify Access Token API credentials in n8n for the Shopify Trigger node.
- Create Header Auth credentials: use X-Shopify-Access-Token as the name and the Access Token from the Shopify app you created as the value. The Header Auth is necessary for the GraphQL nodes.
- Enable the Mautic API under Configuration/API Settings. After the settings are saved, you will have an additional entry in your settings menu to create API credentials for n8n.
- Create Mautic credentials in n8n.

Please make sure to read the notes in the template. For a detailed explanation please check the corresponding video: https://youtu.be/x63rrh_yJzI
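For context on the webhook-validation step: Shopify signs each webhook with an X-Shopify-Hmac-Sha256 header, a base64-encoded HMAC-SHA256 of the raw request body keyed with your app's shared secret. The template does this with n8n's Crypto and IF nodes rather than code, but here is a minimal standalone sketch of the same check:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Returns true if the webhook body was genuinely signed by Shopify.
function isValidShopifyWebhook(
  rawBody: string,     // the raw, unparsed request body
  hmacHeader: string,  // value of the X-Shopify-Hmac-Sha256 header
  sharedSecret: string // your Shopify app's API secret
): boolean {
  const digest = createHmac("sha256", sharedSecret)
    .update(rawBody, "utf8")
    .digest("base64");
  const a = Buffer.from(digest);
  const b = Buffer.from(hmacHeader);
  // timingSafeEqual requires equal-length buffers, so check length first.
  return a.length === b.length && timingSafeEqual(a, b);
}

// Example usage inside a webhook handler:
// if (!isValidShopifyWebhook(body, headers["x-shopify-hmac-sha256"], secret)) {
//   respondWith(401);
// }
```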
by Raquel Giugliano
This workflow automates currency rate uploads into SAP Business One via the Service Layer, using flexible input sources such as JSON (API), SQL Server, Google Sheets, or manual values. It leverages logic branching, AI validation, and logging for complete control and traceability.

⚙️ HOW IT WORKS:

🔹 1. Receive Data via Webhook
The workflow listens on the endpoint /formulario-datos via HTTP POST. The request body should include origen: one of JSON, SQL, GoogleSheets, or Manual. Depending on the value, the flow branches accordingly.

🔹 2. Authenticate with SAP Business One
A POST request is sent to SAP B1's Login endpoint. A session cookie (B1SESSION) is retrieved and used in all subsequent API calls. (A sketch combining this login with the rate upload from step 5 appears at the end of this section.)

🔹 3. Switch by Origin
The flow branches into four processing paths based on origen:
- JSON: The payload is normalized using OpenAI to extract an array of rates. Each rate is sent to SAP individually after parsing.
- SQL: The SQL query provided in the payload is executed on a connected Microsoft SQL Server. The results are checked by AI to validate the date format. If valid, rates are sent to SAP.
- GoogleSheets: Rates are pulled from a connected spreadsheet. Each entry is sent to SAP in sequence.
- Manual: Uses currency, rate, and rateDate directly from the webhook payload and sends the result directly to SAP.

🔹 4. AI-Powered Enhancements (Optional but enabled)
- Normalize JSON: Uses OpenAI (LangChain node) to convert any messy structure into a uniform array under the key rate.
- Date Formatting: Another OpenAI call ensures RateDate is in yyyyMMdd format (required by SAP), converting from ISO, timestamp, or other formats.

🔹 5. Send to SAP Business One (Service Layer)
All paths send a POST request to /SBOBobService_SetCurrencyRate with a payload such as:
{
  "Currency": "USD",
  "Rate": "0.92",
  "RateDate": "20250612"
}

🔹 6. Log Results
All success/failure results are appended to a Google Sheets log (LOGS_N8N). The log includes method, URL, sent payload, status code, and message.

🛠 SETUP STEPS:

1️⃣ Create Required Credentials
Go to Credentials > + New Credential and configure:
- SAP Business One (Service Layer): Type: HTTP Request Auth or Token; Base URL: https://<your-host>:50000/b1s/v1/; provide Username, Password, and CompanyDB via variables or fields
- Google Sheets: OAuth2 connection to a Google account with access
- Microsoft SQL Server: SQL login credentials and host
- OpenAI: API key with access to models like GPT-4o

2️⃣ Environment Variables (Recommended)
Set these variables in n8n → Settings → Variables:
SAP_URL=https://<host>:50000/b1s/v1/
SAP_USER=your_username
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB

3️⃣ Prepare Google Sheets
- Sheet 1: RATE (for loading the data). Columns: Currency, Rate, RateDate
- Sheet 2: LOGS_N8N (for saving the logs, success or failed). Columns: workflow, method, url, json, status_code, message

4️⃣ Activate and Test
Deploy the webhook and grab the URL.

✅ BONUS
- Built-in AI assistance for input validation and structure
- Logs all results for compliance and audit
- Flexible integration paths: perfect for hybrid or transitional systems
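A minimal sketch of steps 2 and 5 outside n8n, assuming the environment variables listed above. The Login and SBOBobService_SetCurrencyRate endpoints are the standard SAP B1 Service Layer calls named in the template; cookie handling and error handling are simplified for brevity.

```typescript
// Authenticate against the SAP B1 Service Layer, then upload one rate.
const base = process.env.SAP_URL!; // e.g. https://<host>:50000/b1s/v1/

const login = await fetch(`${base}Login`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    CompanyDB: process.env.SAP_COMPANY_DB,
    UserName: process.env.SAP_USER,
    Password: process.env.SAP_PASSWORD,
  }),
});
if (!login.ok) throw new Error(`Login failed: ${login.status}`);

// The B1SESSION cookie must accompany every subsequent call.
// (Simplified: a real client should parse Set-Cookie properly.)
const cookie = login.headers.get("set-cookie") ?? "";

const res = await fetch(`${base}SBOBobService_SetCurrencyRate`, {
  method: "POST",
  headers: { "Content-Type": "application/json", Cookie: cookie },
  body: JSON.stringify({ Currency: "USD", Rate: "0.92", RateDate: "20250612" }),
});
console.log(res.ok ? "Rate uploaded" : `Upload failed: ${res.status}`);
```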
by Dataki
This is the first version of a template for a RAG/GenAI App using WordPress content. As creating, sharing, and improving templates brings me joy 😄, feel free to reach out on LinkedIn if you have any ideas to enhance this template!

How It Works This template includes three workflows:
- Workflow 1: Generate embeddings for your WordPress posts and pages, then store them in the Supabase vector store.
- Workflow 2: Handle upserts for WordPress content when edits are made.
- Workflow 3: Enable chat functionality by performing Retrieval-Augmented Generation (RAG) on the embedded documents.

Why use this template? This template can be applied to various use cases:
- Build a GenAI application that requires embedded documents from your website's content.
- Embed or create a chatbot page on your website to enhance user experience as visitors search for information.
- Gain insights into the types of questions visitors are asking on your website.
- Simplify content management by asking the AI for related content ideas or checking if similar content already exists. Useful for internal linking.

Prerequisites
- Access to Supabase for storing embeddings.
- Basic knowledge of Postgres and pgvector.
- A WordPress website with content to be embedded.
- An OpenAI API key.
- Ensure that your n8n workflow, Supabase instance, and WordPress website are set to the same timezone (or use GMT) for consistency.

Workflow 1: Initial Embedding This workflow retrieves your WordPress pages and posts, generates embeddings from the content, and stores them in Supabase using pgvector.

Step 0: Create Supabase tables Nodes:
- Postgres - Create Documents Table: This table is structured to support OpenAI embedding models with 1536 dimensions
- Postgres - Create Workflow Execution History Table

These two nodes create tables in Supabase: the documents table, which stores embeddings of your website content, and the n8n_website_embedding_histories table, which logs workflow executions for efficient management of upserts. This table tracks the workflow execution ID and execution timestamp.

Step 1: Retrieve and Merge WordPress Pages and Posts Nodes:
- WordPress - Get All Posts
- WordPress - Get All Pages
- Merge WordPress Posts and Pages

These three nodes retrieve all content and metadata from your posts and pages and merge them. Important: apply filters to avoid generating embeddings for all site content.

Step 2: Set Fields, Apply Filter, and Transform HTML to Markdown Nodes:
- Set Fields
- Filter - Only Published & Unprotected Content
- HTML to Markdown

These three nodes prepare the content for embedding by: setting up the necessary fields for content embeddings and document metadata; filtering to include only published and unprotected content (protected=false), ensuring private or unpublished content is excluded from your GenAI application; and converting HTML to Markdown, which enhances performance and relevance in Retrieval-Augmented Generation (RAG) by optimizing document embeddings.

Step 3: Generate Embeddings, Store Documents in Supabase, and Log Workflow Execution Nodes:
- Supabase Vector Store, with sub-nodes: Embeddings OpenAI, Default Data Loader, Token Splitter
- Aggregate
- Supabase - Store Workflow Execution

This step involves generating embeddings for the content and storing it in Supabase, followed by logging the workflow execution details. Generate Embeddings: The Embeddings OpenAI node generates vector embeddings for the content. Load Data: The Default Data Loader prepares the content for embedding storage.
The metadata stored includes the content title, publication date, modification date, URL, and ID, which is essential for managing upserts. ⚠️ Important Note: Be cautious not to store any sensitive information in metadata fields, as this information will be accessible to the AI and may appear in user-facing answers. Token Management: The Token Splitter ensures that content is segmented into manageable sizes to comply with token limits. Aggregate: Ensure the last node is run only for 1 item. Store Execution Details: The Supabase - Store Workflow Execution node saves the workflow execution ID and timestamp, enabling tracking of when each content update was processed.

This setup ensures that content embeddings are stored in Supabase for use in downstream applications, while workflow execution details are logged for consistency and version tracking. This workflow should be executed only once for the initial embedding. Workflow 2, described below, will handle all future upserts, ensuring that new or updated content is embedded as needed.

Workflow 2: Handle document upserts Content on a website follows a lifecycle: it may be updated, new content might be added, or, at times, content may be deleted. In this first version of the template, the upsert workflow manages newly added content and updated content.

Step 1: Retrieve WordPress Content with Regular CRON Nodes:
- CRON - Every 30 Seconds
- Postgres - Get Last Workflow Execution
- WordPress - Get Posts Modified After Last Workflow Execution
- WordPress - Get Pages Modified After Last Workflow Execution
- Merge Retrieved WordPress Posts and Pages

A CRON job (set to run every 30 seconds in this template, but you can adjust it as needed) initiates the workflow. A Postgres SQL query on the n8n_website_embedding_histories table retrieves the timestamp of the latest workflow execution. Next, the HTTP nodes use the WordPress API (update the example URL in the template with your own website's URL and add your WordPress credentials) to request all posts and pages modified after the last workflow execution date. This process captures both newly added and recently updated content. The retrieved content is then merged for further processing.

Step 2: Set fields, use filter Nodes:
- Set Fields2
- Filter - Only published and unprotected content

The same as Step 2 in Workflow 1, except that HTML to Markdown is applied in a later step.

Step 3: Loop Over Items to Identify and Route Updated vs. Newly Added Content Here, I initially aimed to use 'update documents' instead of the delete + insert approach, but encountered challenges, especially with updating both content and metadata columns together. Any help or suggestions are welcome! :) Nodes:
- Loop Over Items
- Postgres - Filter on Existing Documents
- Switch
- Route existing_documents (if documents with matching IDs are found in metadata): Supabase - Delete Row if Document Exists removes any existing entry for the document, preparing for an update; Aggregate2 aggregates documents on Supabase by ID to ensure that Set Fields3 is executed only once for each piece of WordPress content, avoiding duplicate execution; Set Fields3 sets the fields required for embedding updates.
- Route new_documents (if no matching documents are found with IDs in metadata): Set Fields4 configures fields for embedding newly added content.

In this step, a loop processes each item, directing it based on whether the document already exists.
The Aggregate2 node acts as a control to ensure Set Fields3 runs only once per WordPress content item, effectively avoiding duplicate execution and optimizing the update process.

Step 4: HTML to Markdown, Supabase Vector Store, Update Workflow Execution Table The HTML to Markdown node mirrors Workflow 1 - Step 2. Refer to that section for a detailed explanation of how HTML content is converted to Markdown for improved embedding performance and relevance. Following this, the content is stored in the Supabase vector store to manage embeddings efficiently. Lastly, the workflow execution table is updated. These nodes mirror the Workflow 1 - Step 3 nodes.

Workflow 3: An example of a GenAI App with WordPress Content: a Chatbot to embed on your website

Step 1: Retrieve Supabase Documents, Aggregate, and Set Fields After a Chat Input Nodes:
- When Chat Message Received
- Supabase - Retrieve Documents from Chat Input
- Embeddings OpenAI1
- Aggregate Documents
- Set Fields

When a user sends a message to the chat, the prompt (user question) is sent to the Supabase vector store retriever. The RPC function match_documents (created in Workflow 1 - Step 0) retrieves documents relevant to the user's question, enabling a more accurate and relevant response. In this step:
- The Supabase vector store retriever fetches documents that match the user's question, including metadata.
- The Aggregate Documents node consolidates the retrieved data.
- Finally, Set Fields organizes the data to create a more readable input for the AI agent.

Directly using the AI agent without these nodes would prevent metadata from being sent to the language model (LLM), but metadata is essential for enhancing the context and accuracy of the AI's response. By including metadata, the AI's answers can reference relevant document details, making the interaction more informative.

Step 2: Call AI Agent, Respond to User, and Store Chat Conversation History Nodes:
- AI Agent, with sub-nodes: OpenAI Chat Model, Postgres Chat Memories
- Respond to Webhook

This step involves calling the AI agent to generate an answer, responding to the user, and storing the conversation history. The model used is gpt-4o-mini, chosen for its cost-efficiency.
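For readers curious what the retrieval step amounts to outside n8n, here is a minimal supabase-js sketch, assuming a match_documents function along the lines of Supabase's pgvector guide. The exact parameter names depend on how you created the function in Workflow 1 - Step 0, and the embedding model is an assumption (any 1536-dimension OpenAI model matches the documents table described above).

```typescript
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const openai = new OpenAI();

// 1. Embed the user's question with a 1536-dimension model, matching
//    the documents table created in Workflow 1 - Step 0.
const question = "How do I reset my password?";
const emb = await openai.embeddings.create({
  model: "text-embedding-3-small", // assumption: any 1536-dim model works
  input: question,
});
const queryEmbedding = emb.data[0].embedding;

// 2. Ask Postgres for the closest documents via the match_documents RPC.
//    Parameter names are assumptions; align them with your SQL function.
const { data: documents, error } = await supabase.rpc("match_documents", {
  query_embedding: queryEmbedding,
  match_count: 5,
});
if (error) throw error;

// 3. Each row carries content plus metadata (title, URL, ...) for the agent.
console.log(documents);
```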
by Don Jayamaha Jr
A professional-grade AI automation system for spot market trading insights on Binance. It analyzes multi-timeframe technical indicators, live price/order data, and crypto sentiment, then delivers fully formatted Telegram-style trading reports.

🎥 Watch Tutorial:

🧩 Required Workflows You must install and activate all of the following workflows for the system to function correctly:

| ✅ Workflow Name | 📌 Function Description |
| --- | --- |
| Binance Spot Market Quant AI Agent | Final AI orchestrator. Parses user prompt and generates Telegram-ready reports. |
| Binance SM Financial Analyst Tool | Calls indicator tools and price/order data tools. Synthesizes structured inputs. |
| Binance SM News and Sentiment Analyst Webhook Tool | Analyzes crypto sentiment, gives summary and headlines via POST webhook. |
| Binance SM Price/24hrStats/OrderBook/Kline Tool | Pulls price, order book, 24h stats, and OHLCV klines for 15m–1d. |
| Binance SM 15min Indicators Tool | Calculates 15m RSI, MACD, BBANDS, ADX, SMA/EMA from Binance kline data. |
| Binance SM 1hour Indicators Tool | Same as above but for the 1h timeframe. |
| Binance SM 4hour Indicators Tool | Same as above but for the 4h timeframe. |
| Binance SM 1day Indicators Tool | Same as above but for the 1d timeframe. |
| Binance SM Indicators Webhook Tool | Technical backend. Handles all webhook logic for each timeframe tool. |

⚙️ Installation Instructions

Step 1: Import Workflows
- Open your n8n Editor UI
- Import each workflow JSON file one by one
- Activate them or ensure they're called via Execute Workflow

Step 2: Set Credentials
- OpenAI API Key (GPT-4o recommended)
- Binance endpoints are public (no auth required; a sketch of one such call appears at the end of this section)

Step 3: Configure Webhook Endpoints
- Deploy Binance SM Indicators Webhook Tool
- Ensure the following paths are reachable: /webhook/15m, /webhook/1h, /webhook/4h, /webhook/1d

Step 4: Telegram Integration
- Create a Telegram bot using @BotFather
- Add your Telegram API token to n8n credentials
- Replace the Telegram ID placeholder with your own

Step 5: Final Trigger
Trigger the Binance Spot Market Quant AI Agent manually or from Telegram. The agent:
- Extracts the trading pair (e.g. BTCUSDT)
- Calls all tools for market data and sentiment
- Generates a clean, HTML-formatted Telegram report

💬 Telegram Report Output Format

BTCUSDT Market Report
Spot Strategy
• Action: Buy
• Entry: $63,800 | SL: $61,200 | TP: $66,500
• Rationale: MACD Crossover (1h), RSI Rebound from Oversold (15m), Sentiment: Bullish

Leverage Strategy
• Position: Long 3x
• Entry: $63,800
• SL/TP zones same as above

News Sentiment: Slightly Bullish
• "Bitcoin rallies as ETF inflows surge" – CoinDesk
• "Whales accumulate BTC at key support" – NewsBTC

🧠 System Overview
[Telegram Trigger] → [Session + Auth Logic] → [Binance Spot Market Quant AI Agent] → [Financial Analyst Tool + News Tool] → [All Technical Indicator Tools (15m, 1h, 4h, 1d)] → [OrderBook/Price/Kline Fetcher] → [GPT-4o Reasoning] → [Split & Send Message to Telegram]

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.
🔗 For support: LinkedIn – Don Jayamaha
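To illustrate the kind of public Binance endpoint the indicator tools rely on, here is a minimal sketch fetching 15m OHLCV klines. The endpoint and response shape are Binance's public REST market-data API; no credentials are required.

```typescript
// Fetch the most recent 15-minute candles for a symbol from Binance's
// public REST API (no API key needed for market data).
async function fetchKlines(symbol: string, interval = "15m", limit = 50) {
  const url = `https://api.binance.com/api/v3/klines?symbol=${symbol}&interval=${interval}&limit=${limit}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Binance API error: ${res.status}`);
  // Each kline is an array: [openTime, open, high, low, close, volume, closeTime, ...]
  const klines: (string | number)[][] = await res.json();
  return klines.map((k) => ({
    openTime: new Date(k[0] as number),
    open: Number(k[1]),
    high: Number(k[2]),
    low: Number(k[3]),
    close: Number(k[4]),
    volume: Number(k[5]),
  }));
}

console.log(await fetchKlines("BTCUSDT"));
```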
by Jay Hartley
Disclaimer This template only works on n8n local instances!

How it Works This workflow allows you to receive webhooks from the public web and have your local workflow catch them, without any remote proxy. It is very useful for running quick tests without exposing your dev server. All you have to do is activate the workflow and use the public address as defined below.

Set up steps If you use the default key-value storage, there are only three steps:
1. Install the @horka.tv/n8n-nodes-storage-kv community node
2. Put your n8n workflow address in Local Webhook Address
3. Activate the workflow and, from Executions, note down your public webhook token from the inputs to Get Latest Requests

You can now use https://webhook.site/[YOUR TOKEN] as a webhook destination, to receive webhook requests from the public web.
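Under the hood, this pattern works because webhook.site exposes the requests it has captured through a small REST API that your local instance can poll. A hedged sketch of that polling step (endpoint per webhook.site's documentation; verify the response shape against their current docs):

```typescript
// Poll webhook.site for requests captured against your public token,
// then hand each one to the local workflow for processing.
const token = process.env.WEBHOOK_SITE_TOKEN!; // the [YOUR TOKEN] part of the public URL

const res = await fetch(`https://webhook.site/token/${token}/requests`);
if (!res.ok) throw new Error(`webhook.site error: ${res.status}`);

const { data: requests } = await res.json();
for (const request of requests) {
  // Each captured request includes method, headers, query, and raw content.
  console.log(request.method, request.created_at, request.content);
}
```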
by Tom
This workflow shows a low-code approach to creating an HTML table based on Google Sheets data. It's similar to this workflow, but allows fully customizing the HTML output. To run the workflow:
1. Make sure you have a Google Sheet with a header row and some data in it.
2. Grab your sheet ID and add it to the Google Sheets node.
3. Activate the workflow or execute it manually.
4. Visit the URL provided by the webhook node in your browser (production URL if the workflow is active, test URL if the workflow is executed manually). A sketch of the kind of table-building code you can customize follows below.
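For a sense of what "fully customizing the HTML output" can look like, here is a minimal sketch of turning sheet rows into a table. The row shape is hypothetical; the template's own node configuration may differ.

```typescript
// Build an HTML table from an array of uniform row objects,
// using the keys of the first row as the header.
type Row = Record<string, string | number>;

function toHtmlTable(rows: Row[]): string {
  if (rows.length === 0) return "<p>No data</p>";
  const headers = Object.keys(rows[0]);
  const thead = `<tr>${headers.map((h) => `<th>${h}</th>`).join("")}</tr>`;
  const tbody = rows
    .map((r) => `<tr>${headers.map((h) => `<td>${r[h] ?? ""}</td>`).join("")}</tr>`)
    .join("\n");
  return `<table border="1">\n${thead}\n${tbody}\n</table>`;
}

// Example with hypothetical sheet data:
console.log(
  toHtmlTable([
    { Name: "Ada", Score: 98 },
    { Name: "Grace", Score: 95 },
  ]),
);
```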
by Angel Menendez
CallForge - AI-Powered Product Insights Processor from Sales Calls

Automate product feedback extraction from AI-analyzed sales calls and store structured insights in Notion for data-driven product decisions.

🎯 Who is This For? This workflow is designed for:
✅ Product managers tracking customer feedback and feature requests.
✅ Engineering teams identifying usability issues and AI/ML-related mentions.
✅ Customer success teams monitoring product pain points from real sales conversations.

It streamlines product intelligence gathering, ensuring customer insights are structured, categorized, and easily accessible in Notion for better decision-making.

🔍 What Problem Does This Workflow Solve? Product teams often struggle to capture, categorize, and act on valuable feedback from sales calls. With CallForge, you can:
✔ Automatically extract and categorize product feedback from AI-analyzed sales calls.
✔ Track AI/ML-related mentions to gauge customer demand for AI-driven features.
✔ Identify feature requests and pain points for product development prioritization.
✔ Store structured feedback in Notion, reducing manual tracking and increasing visibility across teams.

This workflow eliminates manual feedback tracking, allowing product teams to focus on innovation and customer needs.

📌 Key Features & Workflow Steps

🎙️ AI-Powered Product Feedback Processing This workflow processes AI-generated sales call insights and organizes them in Notion databases:
1. Triggers when AI sales call data is received.
2. Detects product-related feedback (feature requests, bug reports, usability issues).
3. Extracts key product insights, categorizing feedback based on customer needs.
4. Identifies AI/ML-related mentions, tracking customer interest in AI-driven solutions.
5. Aggregates feedback and categorizes it by sentiment (positive, neutral, negative).
6. Logs insights in Notion, making them accessible for product planning discussions.

📊 Notion Database Integration
- Product Feedback → Logs feature requests, usability issues, and bug reports.
- AI Use Cases → Tracks AI-related discussions and customer interest in machine learning solutions.

🛠 How to Set Up This Workflow
1. Prepare Your AI Call Analysis Data. Ensure AI-generated sales call insights are available. Compatible with Gong, Fireflies.ai, Otter.ai, and other AI transcription tools.
2. Connect Your Notion Database. Set up Notion databases for:
🔹 Product Feedback (logs feature requests and bug reports).
🔹 AI Use Cases (tracks AI/ML mentions and customer demand).
(A minimal sketch of writing to Notion via its API appears at the end of this section.)
3. Configure n8n API Integrations. Connect your Notion API key in n8n under "Notion API Credentials." Set up webhook triggers to receive AI-generated sales insights. Test the workflow using a sample AI sales call analysis.

🔧 How to Customize This Workflow
💡 Modify Notion Data Structure – Adjust fields to align with your product team's workflow.
💡 Refine AI Data Processing Rules – Customize how feature requests and pain points are categorized.
💡 Integrate with Slack or Email – Notify teams when recurring product issues emerge.
💡 Expand with Project Management Tools – Sync insights with Jira, Trello, or Asana to create product tickets automatically.

⚙️ Key Nodes Used in This Workflow
🔹 If Nodes – Detect if product feedback, AI mentions, or feature requests exist in AI data.
🔹 Notion Nodes – Create and update structured feedback entries in Notion.
🔹 Split Out & Aggregate Nodes – Process multiple insights and consolidate AI-generated data.
🔹 Wait Nodes – Ensure smooth sequencing of API calls and database updates.
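As a reference point for the Notion integration (which the template handles with Notion nodes rather than raw HTTP), here is a minimal sketch of logging one feedback entry via Notion's REST API. The database ID and property names (Title, Sentiment) are hypothetical; match them to your own Product Feedback database schema.

```typescript
// Create one feedback entry in a Notion database via the public REST API.
const NOTION_TOKEN = process.env.NOTION_TOKEN!;      // your integration token
const DATABASE_ID = process.env.NOTION_DATABASE_ID!; // hypothetical Product Feedback DB

const res = await fetch("https://api.notion.com/v1/pages", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${NOTION_TOKEN}`,
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    parent: { database_id: DATABASE_ID },
    properties: {
      // Property names are assumptions; align them with your database.
      Title: { title: [{ text: { content: "Feature request: bulk export" } }] },
      Sentiment: { select: { name: "Positive" } },
    },
  }),
});
console.log(res.ok ? "Logged to Notion" : `Notion error: ${res.status}`);
```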
🚀 Why Use This Workflow?
✔ Eliminates manual sales call review for product teams.
✔ Provides structured, AI-driven insights for feature planning and prioritization.
✔ Tracks AI/ML mentions to assess demand for AI-powered solutions.
✔ Improves product development strategies by leveraging real customer insights.
✔ Scalable for teams using n8n Cloud or self-hosted deployments.

This workflow empowers product teams by transforming sales call data into actionable intelligence, optimizing feature planning, bug tracking, and AI/ML strategy. 🚀
by Don Jayamaha Jr
This workflow powers the Binance Spot Market Quant AI Agent, acting as the Financial Market Analyst. It fuses real-time market structure data (price, volume, klines) with technical indicators across multiple timeframes (15m, 1h, 4h, 1d) and returns a structured trading outlook, perfect for intraday and swing traders who want actionable analysis in Telegram.

🔗 Requires the following sub-workflows to function:
• Binance SM 15min Indicators Tool
• Binance SM 1hour Indicators Tool
• Binance SM 4hour Indicators Tool
• Binance SM 1day Indicators Tool
• Binance SM Price/24hStats/Kline Tool

⚙️ How It Works
1. Triggered via webhook (typically by the Quant AI Agent).
2. Extracts the user's symbol and timeframe from the input (e.g., "DOGE outlook today"); a sketch of this step appears at the end of this section.
3. Calls all linked sub-workflows to retrieve indicators and live price data.
4. Merges the data and formats a clean trading report using GPT-4o-mini.
5. Returns an HTML-formatted message suitable for Telegram delivery.

📥 Sample Input
{
  "message": "SOLUSDT",
  "sessionId": "654321123"
}

✅ Telegram Output Format
📊 SOLUSDT Market Snapshot
💰 Price: $156.75
📉 24h Stats: High $160.10 | Low $149.00 | Volume: 1.1M SOL
🧪 4h Indicators:
• RSI: 58.2 (Neutral-Bullish)
• MACD: Crossover Up
• BB: Squeezing Near Upper Band
• ADX: 25.7 (Rising Trend)
📈 Resistance: $163
📉 Support: $148

🔍 Use Cases
| Scenario | Outcome |
| --- | --- |
| User asks for "BTC outlook" | Returns 1h + 4h + 1d indicators + live price + key levels |
| Telegram bot prompt: "DOGE now" | Returns short-term 15m + 1h analysis snapshot |
| Strategy trigger inside n8n | Enables other workflows to consume structured signal data |

🎥 Watch Tutorial:

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or redistribution permitted.
🔗 For support: LinkedIn – Don Jayamaha
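To make step 2 concrete, here is a hedged sketch of extracting a trading pair from free-form user text. The normalization rule (append USDT when only a base asset like "DOGE" is given) is an assumption about sensible agent behavior, not the template's exact prompt logic, which is handled by the LLM.

```typescript
// Pull a Binance-style trading pair out of a free-form message such as
// "DOGE outlook today" or "check BTCUSDT on the 4h".
function extractSymbol(message: string): string | null {
  // Look for an all-caps token of 2-12 letters in the original text
  // (users typically write tickers in caps: DOGE, BTCUSDT, SOL).
  const match = message.match(/\b[A-Z]{2,12}\b/);
  if (!match) return null;
  const token = match[0];
  // Assumption: bare base assets are quoted against USDT by default.
  return token.endsWith("USDT") ? token : `${token}USDT`;
}

console.log(extractSymbol("DOGE outlook today")); // "DOGEUSDT"
console.log(extractSymbol("SOLUSDT"));            // "SOLUSDT"
```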