by Don Jayamaha Jr
A sentiment intelligence sub-agent for the Binance Spot Market Quant AI Agent. It aggregates crypto news from major sources, filters by token keyword (e.g., BTC, ETH), and produces a Telegram-ready summary including market sentiment and top headlines—powered by GPT-4o.

🎥 Live Demo:

🛠️ Workflow Function

This tool performs the following steps:

| 🔧 Step | 📌 Description |
| --- | --- |
| Webhook Input | Accepts `{ "message": "symbol" }` via HTTP POST |
| Crypto Keyword Extractor | GPT model extracts the valid crypto symbol (e.g., "SOL", "DOGE", "ETH") |
| RSS News Aggregators | Pulls the latest headlines from 9+ crypto sources (CoinDesk, Cointelegraph, etc.) |
| Merge & Filter Articles | Keeps only articles containing the specified token |
| Prompt Builder | Creates the GPT prompt from the filtered headlines |
| GPT-4o Summarizer | Summarizes the news into a 3-part response: Summary, Sentiment, Headline Links |
| Telegram Formatter | Converts the GPT output into a Telegram-friendly message |
| Response Handler | Returns the formatted message to the caller via the webhook |

📥 Webhook Trigger Format

```json
{ "message": "ETH" }
```

This triggers a full execution of the workflow and returns output like:

📣 ETH Sentiment: Neutral
• BlackRock’s tokenized fund expands to Ethereum mainnet (CoinDesk)
• Ethereum fees remain high, analysts call for L2 migration (NewsBTC)
• Vitalik warns about centralized risks in staking (Cointelegraph)

📚 Installation Guide

1. Import & Enable
   - Load the .json into your n8n Editor
   - Enable the webhook trigger in the top-right corner
   - Ensure it is reachable via POST /webhook/custom-path
2. Required Credentials
   - **OpenAI API Key** (GPT-4o capable)
   - No API keys required for the RSS feeds
3. Connect to Quant Agent
   - Add an HTTP Request node in your main AI agent
   - Point it to this workflow's webhook with body `{ "message": "symbol" }` (see the example call after this section)
   - Capture the response to include in your Telegram output

🔍 Real Use Cases

| Scenario | Result |
| --- | --- |
| BTC sentiment before a key event | Returns 8–12 filtered articles with bullish/neutral/bearish tone |
| Daily pulse for altcoins like DOGE | Shows relevant headlines, helpful for intraday trading setups |
| Telegram chatbot integration | Enables users to query sentiment via /sentiment ETH |
| Macro context for Quant AI outputs | Adds emotional/news context to technical trade decisions |

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company
Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding or resale permitted.

🔗 For support: LinkedIn – Don Jayamaha
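A minimal sketch of how an external agent could call this workflow's webhook. The instance URL and webhook path are placeholders (assumptions, not part of the template); adjust them to match your n8n deployment.

```typescript
// Hypothetical caller for the sentiment webhook; URL and path are placeholders.
const WEBHOOK_URL = "https://your-n8n.example.com/webhook/custom-path";

async function getSentiment(symbol: string): Promise<string> {
  // The workflow expects { "message": "<symbol>" } via HTTP POST.
  const res = await fetch(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: symbol }),
  });
  if (!res.ok) throw new Error(`Webhook call failed: ${res.status}`);
  // The response is the Telegram-ready sentiment summary produced by GPT-4o.
  return res.text();
}

getSentiment("ETH").then((summary) => console.log(summary));
```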
by Don Jayamaha Jr
This workflow acts as a central API gateway for all technical indicator agents in the Binance Spot Market Quant AI system. It listens for incoming webhook requests and dynamically routes them to the correct timeframe-based indicator tool (15m, 1h, 4h, 1d). Designed to power multi-timeframe analysis at scale.

🎥 Watch Tutorial:

🎯 What It Does
- Accepts requests via webhook with a token symbol and timeframe
- Forwards requests to the correct internal technical indicator tool
- Returns a clean JSON payload with RSI, MACD, BBANDS, EMA, SMA, and ADX
- Can be used directly or as a microservice by other agents

🛠️ Input Format

Webhook endpoint: POST /webhook/indicators

Body format:

```json
{ "symbol": "DOGEUSDT", "timeframe": "15m" }
```

🔄 Routing Logic

| Timeframe | Routed To |
| --- | --- |
| 15m | Binance SM 15min Indicators Tool |
| 1h | Binance SM 1hour Indicators Tool |
| 4h | Binance SM 4hour Indicators Tool |
| 1d | Binance SM 1day Indicators Tool |

🔎 Use Cases

| Use Case | Description |
| --- | --- |
| 🔗 Used by Binance Financial Analyst Tool | Automatically triggers all indicator tools in parallel |
| 🤖 Integrated in Binance Quant AI System | Supports reasoning, signal generation, and summaries |
| ⚙️ Can be called independently for raw data access | Useful for dashboards or advanced analytics |

📤 Output Example

```json
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m",
  "rsi": 56.7,
  "macd": "Bearish Crossover",
  "bbands": "Stable",
  "ema": "Price above EMA",
  "adx": 19.4
}
```

✅ Prerequisites

Make sure all the following workflows are installed and operational:
- Binance SM 15min Indicators Tool
- Binance SM 1hour Indicators Tool
- Binance SM 4hour Indicators Tool
- Binance SM 1day Indicators Tool
- OpenAI credentials (for any agent using LLM formatting)

🧾 Licensing & Attribution

© 2025 Treasurium Capital Limited Company
All architectural routing logic and endpoint structuring is IP-protected. No unauthorized rebranding or resale permitted.

🔗 Need help? Connect on LinkedIn – Don Jayamaha
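As a usage sketch, the gateway can be called from any HTTP client as shown below. The base URL is a placeholder, and the response interface simply mirrors the output example above; treat both as assumptions to adapt to your setup.

```typescript
// Hypothetical client for the indicator gateway; the base URL is a placeholder.
const GATEWAY_URL = "https://your-n8n.example.com/webhook/indicators";

interface IndicatorPayload {
  symbol: string;
  timeframe: "15m" | "1h" | "4h" | "1d"; // the four supported routes
  rsi?: number;
  macd?: string;
  bbands?: string;
  ema?: string;
  adx?: number;
}

async function fetchIndicators(symbol: string, timeframe: IndicatorPayload["timeframe"]) {
  const res = await fetch(GATEWAY_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ symbol, timeframe }),
  });
  return (await res.json()) as IndicatorPayload;
}

fetchIndicators("DOGEUSDT", "15m").then((data) => console.log(data.rsi, data.macd));
```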
by Samir Saci
Tags: AI Agent, Supply Chain, Logistics, Circular Economy, Route Planning, Transportation, GPS API

Context

Hi! I’m Samir — a Supply Chain Engineer and Data Scientist based in Paris, and founder of LogiGreen Consulting. I help logistics teams reduce operational workload and errors by combining AI automation, route optimization APIs, and workflow automation.

This workflow is part of a circular economy project, where stores return reusable packaging (bins, crates, containers) to a central warehouse.

> Let's build circular economies with AI-powered automation using n8n!

📬 For business inquiries, you can find me on LinkedIn

Who is this template for?

This workflow is designed for logistics teams participating in circular economy loops. Imagine your transportation company receives a pickup request: the two AI Agent nodes connected to the OpenRouteService API process the information and reply with a detailed route plan. The results include the driving time and the optimal sequence of stops generated by the multi-stop optimization endpoint of the API.

How does it work?

This workflow automates the end-to-end processing of multi-stop pickup requests for reusable packaging:
- 📨 A Gmail Trigger listens for collection request emails
- 🧠 An AI Agent parses the email into structured data (store ID, address, date)
- 📍 Each stop is geocoded into GPS coordinates
- 🗺️ OpenRouteService optimizes the stop sequence using truck-specific routing (see the sketch after this section)
- 📄 A second AI Agent formats a confirmation email in HTML with the ordered pickup plan
- 📧 The reply is sent back with all details, including duration and route

Steps:
- 💌 Trigger on a new Gmail message
- 🧠 Extract data using an AI Agent (e.g., stores, addresses, times)
- 📑 Store raw and processed data in Google Sheets
- 📍 Enrich with GPS coordinates
- 🚚 Optimize the route using OpenRouteService (truck profile)
- 📄 Format the confirmation using an AI Agent
- 📬 Send the reply to the requester with route and timing

What do I need to get started?

You’ll need:
- A Gmail account to receive collection requests
- A Google Sheet to store and review data
- A free OpenRouteService API key
- Access to OpenAI for the AI Agent nodes
- Sample pickup request emails to test

Next Steps

🗒️ Use the sticky notes inside the n8n canvas to:
- Plug in your Gmail and OpenRouteService credentials
- Try it with a sample store collection email
- Validate the confirmation format and route accuracy

This template was built using n8n v1.93.0
Submitted: June 7, 2025
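For reference, here is a minimal sketch of the kind of multi-stop optimization request the workflow sends to OpenRouteService. The field names follow the public optimization endpoint as I understand it, and the API key, coordinates, and [longitude, latitude] ordering are assumptions; verify them against the OpenRouteService documentation before relying on this.

```typescript
// Hedged sketch: calling the OpenRouteService optimization endpoint with a truck profile.
// API key and coordinates are placeholders; [lon, lat] order is assumed.
const ORS_KEY = "YOUR_OPENROUTESERVICE_KEY";

async function optimizePickups(stops: [number, number][], depot: [number, number]) {
  const body = {
    jobs: stops.map((location, i) => ({ id: i + 1, location })), // one job per store pickup
    vehicles: [{ id: 1, profile: "driving-hgv", start: depot, end: depot }], // truck-specific routing
  };
  const res = await fetch("https://api.openrouteservice.org/optimization", {
    method: "POST",
    headers: { Authorization: ORS_KEY, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  // The response includes the optimized stop sequence and total driving duration.
  return res.json();
}
```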
by Raquel Giugliano
++HOW IT WORKS:++

This workflow automates the processing of invoices sent via Telegram. It extracts the data using LlamaIndex OCR, logs it in Google Sheets, and optionally pushes the structured data to SAP Business One.

🔹 1. Receive Invoice via Telegram
- A user sends a PDF of an invoice through Telegram
- A Telegram Trigger node listens for incoming messages and captures the file and metadata
- The document is downloaded and prepared for OCR

🔹 2. OCR with LlamaIndex
- The file is uploaded to the LlamaIndex OCR API
- The workflow polls the API until the processing status returns SUCCESS
- Once ready, the parsed content is fetched in Markdown format

🔹 3. Data Extraction via LLM (editable)
- The Markdown content is sent to a language model (LLM) using LangChain
- A Structured Output Parser transforms the result into a clean, structured, editable JSON

🔹 4. Save to Google Sheets
- The structured JSON is split into Header (main invoice metadata) and Detail (individual line items)
- Each part is stored in a dedicated tab within a connected Google Sheets file

🔹 5. Ask for SAP Confirmation
- The bot replies to the user via Telegram: "Do you want to send the data to SAP?"
- If the user clicks "Yes", the next automation path is triggered

🔹 6. Push Data to SAP B1
- A connection is made to SAP Business One's Service Layer API
- Header and detail data are fetched from Google Sheets
- The invoice structure is rebuilt as required by SAP (DocumentLines, CardCode, etc.); see the hedged payload sketch after this section
- A POST request creates the Purchase Invoice in SAP
- A confirmation message with the created DocEntry is sent back to the user on Telegram

++SET UP STEPS:++

Follow these steps to properly configure the workflow before execution:

1️⃣ Create Required Credentials
Go to Credentials > + New Credential and create the following:
- Telegram API (set your bot token; get it from BotFather)
- Google Sheets
- OpenAI

2️⃣ Set Up Environment Variables (Optional but Recommended)
- LLAMAINDEX_API_KEY
- SAP_USER
- SAP_PASSWORD
- SAP_COMPANY_DB
- SAP_URL

3️⃣ Prepare Google Sheets
Ensure your Google Spreadsheet has the following:
- ➤ Sheet 1: Header
- ➤ Sheet 2: Details (contains columns for invoice lines)
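A hedged sketch of the Purchase Invoice payload the final step posts to the Service Layer. The field names CardCode, DocumentLines, and DocEntry come from the description above; the host, session handling, and line-item fields (ItemCode, Quantity, UnitPrice) are illustrative assumptions, so check them against your SAP B1 schema before reusing this.

```typescript
// Hedged sketch of the SAP B1 Service Layer call; host, session handling, and
// line fields (ItemCode, Quantity, UnitPrice) are illustrative assumptions.
const SAP_URL = "https://your-sap-host:50000/b1s/v1";

async function createPurchaseInvoice(sessionId: string) {
  const invoice = {
    CardCode: "V10000",            // supplier code taken from the invoice header data
    DocDate: "2025-01-31",
    DocumentLines: [
      { ItemCode: "A00001", Quantity: 5, UnitPrice: 12.5 }, // one entry per invoice line
    ],
  };
  const res = await fetch(`${SAP_URL}/PurchaseInvoices`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Cookie: `B1SESSION=${sessionId}`, // session obtained from the Login endpoint
    },
    body: JSON.stringify(invoice),
  });
  const created = await res.json();
  return created.DocEntry; // DocEntry is reported back to the user on Telegram
}
```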
by Joseph LePage
🤖 This n8n workflow creates an intelligent Telegram bot that processes multiple types of messages and provides automated responses using AI capabilities. The bot serves as a personal assistant that can handle text, voice messages, and images through a sophisticated processing pipeline.

Core Components

Message Reception and Validation 📥
- 🔄 Implements webhook-based message reception for real-time processing
- 🔐 Features a robust user validation system that verifies sender credentials
- 🔀 Supports both testing and production webhook endpoints for development flexibility

Message Processing Pipeline ⚡
- 🔄 Uses a smart router to detect and categorize incoming message types
- 📝 Processes three main message formats:
  - 💬 Text messages
  - 🎤 Voice recordings
  - 📸 Images with captions

AI Integration 🧠
- 🤖 Leverages OpenAI's GPT-4 for message classification and processing
- 🗣️ Incorporates voice transcription capabilities for audio messages
- 👁️ Features image analysis using the GPT-4 Vision API for processing visual content

Technical Architecture

Webhook Management 🔌
- 🌐 Maintains separate endpoints for testing and production environments
- 📊 Implements automatic webhook status monitoring
- ⚡ Provides real-time webhook configuration updates

Error Handling ⚠️
- 🔍 Features comprehensive error detection and reporting
- 🔄 Implements fallback mechanisms for unprocessable messages
- 💬 Provides user feedback for failed operations

Message Classification System 📋
- 🏷️ Categorizes incoming messages into tasks and general conversation
- 🔀 Implements separate processing paths for different message types
- 🧩 Maintains context awareness across message processing

Security Features

User Authentication 🔒
- ✅ Validates user credentials against predefined parameters
- 👤 Implements first name, last name, and user ID verification (see the sketch after this section)
- 🚫 Restricts access to authorized users only

Response System

Intelligent Responses 💡
- 🤖 Generates contextual responses based on message classification
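A minimal sketch of the kind of sender check the user-validation step performs, assuming the standard Telegram update shape (`message.from`). The allow-list values are hypothetical and stand in for whatever parameters you predefine in the workflow.

```typescript
// Hedged sketch of the user-validation logic; the allowed values are placeholders.
interface TelegramUser { id: number; first_name?: string; last_name?: string; }

const ALLOWED = { id: 123456789, first_name: "Jane", last_name: "Doe" }; // hypothetical allow-list

function isAuthorized(from: TelegramUser): boolean {
  // All three fields must match before the message continues down the pipeline.
  return (
    from.id === ALLOWED.id &&
    from.first_name === ALLOWED.first_name &&
    from.last_name === ALLOWED.last_name
  );
}
```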
by Ysqander
Extract data from any PDF or image invoice dropped in Google Drive directly into Google Sheets – powered by AI OCR. Free, fully modifiable n8n workflow. Optional add-ons for pro features.

🚀 What this template does

Stop typing invoice data by hand. Drop a PDF or phone-snapshot into your Invoices Inbox folder in Google Drive and this n8n workflow will:
- Auto-OCR the document with the Mistral OCR API
- Match any fields you list in Row 1 of your Google Sheet (totally schema-agnostic; see the sketch after this section)
- Append a clean, structured row – every time

Works with both PDFs and images, in any language supported by Mistral. Template JSON included, ready to import into self-hosted or n8n Cloud.

👀 Who’s this for?
- Freelancers & agencies processing client invoices
- Small finance teams on Google Workspace
- Anyone self-hosting n8n who wants an AI OCR flow without glue-code

No coding skills required – but flow tweaking is possible for power users.

🛠 Upcoming PRO Add-Ons

I am also working on PRO add-ons for this template:
- Add-On #1 – Error Handling & Alerts (ships Jul 2025): flags missing fields, branches to Email/Slack notification; prevents silent failures
- Add-On #2 – Auto-Currency Converter (ships Jul 2025): detects the invoice currency symbol/code → converts Total into your base currency via a free FX API
- Add-On #3 – VAT / GST Breakdown (ships Jul 2025): extracts VAT number, net, tax rate, tax amount, gross – ready for EU/UK/AU filings

To pre-order these, please see: https://ysqander.gumroad.com/l/N8N-AI-Workflow-Invoice-Data-Extraction-LITE
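A hedged sketch of the schema-agnostic matching idea: whatever headers you put in Row 1 of the sheet act as keys to pick values out of the OCR/LLM result. The header names and the extracted object below are hypothetical, not the template's exact output.

```typescript
// Hedged sketch: build a sheet row from whatever headers exist in Row 1.
// Headers and the extracted object are illustrative, not the template's exact fields.
function buildRow(headers: string[], extracted: Record<string, string>): string[] {
  // For each header, take the matching extracted field, or leave the cell empty.
  return headers.map((h) => extracted[h] ?? "");
}

const headers = ["Invoice Number", "Supplier", "Date", "Total"]; // Row 1 of your sheet
const extracted = { "Invoice Number": "INV-042", Supplier: "Acme", Date: "2025-06-01", Total: "199.00" };
console.log(buildRow(headers, extracted)); // ["INV-042", "Acme", "2025-06-01", "199.00"]
```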
by Dataki
Important Notes

Check Legal Regulations: This workflow involves scraping, so ensure you comply with the legal regulations in your country before getting started. Better safe than sorry!

Workflow Description

😮‍💨 Tired of struggling with XPath, CSS selectors, or DOM specificity when scraping? This AI-powered solution is here to simplify your workflow! With a vision-based AI Agent, you can extract data effortlessly without worrying about how the DOM is structured.

This workflow leverages a vision-based AI Agent, integrated with Google Sheets, ScrapingBee, and the Gemini-1.5-Pro model, to extract structured data from webpages. The AI Agent primarily uses screenshots for data extraction but switches to HTML scraping when necessary, ensuring high accuracy.

Key Features:
- **Google Sheets Integration**: Manage URLs to scrape and store structured results.
- **ScrapingBee**: Capture full-page screenshots and retrieve HTML data for fallback extraction.
- **AI-Powered Data Parsing**: Use Gemini-1.5-Pro for vision-based scraping and a Structured Output Parser to format extracted data into JSON.
- **Token Efficiency**: HTML is converted to Markdown to optimize processing costs.

This template is designed for e-commerce scraping but can be customized for various use cases.
by Ezema Kingsley Chibuzo
🧠 What It Does

This n8n workflow collects leads from Google Maps, scrapes their websites via direct HTTP requests, and extracts valid email addresses — all while mimicking real user behavior to improve scraping reliability. It rotates User-Agent headers, introduces randomized delays, and refines URLs by removing only query parameters and fragments to preserve valid page paths (like social media links). The workflow blends Apify actors, raw HTTP requests, HTML-to-Markdown conversion, and smart email extraction to deliver clean, actionable lead data — ready to be sent to Airtable, Google Sheets, or any CRM. Perfect for lean, scalable B2B lead generation using n8n’s native logic and no external scrapers.

💡 Why this workflow

Most lead scrapers rely on heavy tools or APIs like Firecrawl. This workflow:
- Uses lightweight HTTP requests (with randomized User-Agents) to scrape websites.
- Adds natural wait times to avoid rate limits and IP bans.
- Avoids full-page crawlers, yet still pulls emails effectively.
- Works great for freelancers, marketers, or teams targeting niche B2B leads.
- Is designed for stealth and resilience.

👤 Who it’s for
- Lead generation freelancers or consultants.
- B2B marketers looking to extract real contact info.
- Small businesses doing targeted outreach.
- Developers who want a fast, low-footprint scraper.
- Anyone who wants email + website leads from Google Maps.

⚙️ How It Works

1. 📥 Form Submission (Lead Input)
A Form Trigger collects:
- Keyword
- Location
- No. of Leads (defaults to 10)
This makes the workflow dynamic and user-friendly — ready for multiple use cases and teams.

2. 📊 Scrape Business Info (via Apify)
- Apify’s Google Maps Actor searches for matching businesses.
- The Dataset node fetches all relevant business details.
- A Set node parses key fields like name, phone, website, and category.
- A Limit node ensures the workflow only processes the desired number of leads.

3. 🔁 First Loop – Visit & Scrape Website
Each business website is processed in a loop (see the sketch after this section):
- A Code node cleans the website URL by removing only query parameters/fragments — keeping full paths like /contact.
- An HTTP Request node fetches the raw HTML of the site, using randomized User-Agent headers (5 variants) to mimic real devices and browsers. This makes requests appear more human and reduces the risk of detection or blocking.
- The HTML is converted to Markdown using the Markdown node, making it easier to scan for text patterns.
- A Wait node introduces a random delay between 2 and 7 seconds, which helps avoid triggering rate limits and reduces the likelihood of being flagged as a bot.
- A Merge node combines the scraped Markdown + lead info for use in the second loop.

4. 🔁 Second Loop – Extract Emails
In this second loop, the Markdown data is processed:
- A Code node applies a regex to extract the first valid email address. If no email is found, "N/A" is returned.
- A brief 1-second Wait node simulates realistic browsing time.
- Another Merge node attaches the email result to the original lead data.

5. ✅ Filter, Clean & Store
- A Filter node removes all entries with "N/A" or invalid email results.
- A Set node ensures only the required fields (like website, email, and company name) are passed forward.
- The clean leads are saved to Airtable (or, optionally, Google Sheets) using an upsert-style insert to avoid duplicates.

🛡️ Anti-Flagging Design

This workflow is optimized for stealth:
- No scraping tools or headless browsers (like Puppeteer or Firecrawl).
- Direct HTTP requests with rotating User-Agents.
- Randomized wait intervals (2–7 s).
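A hedged sketch of the Code-node logic described in the two loops above. The User-Agent strings and the email regex are illustrative, not the template's exact values.

```typescript
// Hedged sketch of the loop logic: clean the URL, pick a random User-Agent,
// wait 2–7 seconds, and pull the first email out of the Markdown text.
const USER_AGENTS = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
  "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/124.0",
]; // illustrative variants; the template rotates five

function cleanUrl(raw: string): string {
  const u = new URL(raw);
  return `${u.origin}${u.pathname}`; // drops query parameters and fragments, keeps paths like /contact
}

function randomUserAgent(): string {
  return USER_AGENTS[Math.floor(Math.random() * USER_AGENTS.length)];
}

function randomDelayMs(): number {
  return 2000 + Math.floor(Math.random() * 5000); // 2–7 s, as in the Wait node
}

function extractFirstEmail(markdown: string): string {
  const match = markdown.match(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/);
  return match ? match[0] : "N/A"; // "N/A" rows are filtered out later
}
```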
- Only non-intrusive parsing — no automation footprints.

🛠 How to Set It Up
1. Open n8n (Cloud or self-hosted).
2. Install the Apify node: search for "Apify" and click Install. Do this before importing your file.
3. Import the provided .json file into your n8n editor.
4. Set up the required credentials:
   - 🔑 **Apify API Key** (used for Google Maps scraping)
   - 🔑 **Airtable API Key** (or connect Google Sheets instead)
5. Recommended: prepare your Airtable base or Google Sheet with fields like Email, Website, Phone, and Company Name.
6. Review the Set node if you'd like to collect more fields from Apify (e.g., Ratings, Categories, etc.).

🔁 Customization Tips
- The Apify scraper returns rich business data. By default, this workflow collects name, phone, and website — but you can add more in the "Grab Desired Fields" node.
- Need safer scraping at scale? Swap the HTTP Request for Firecrawl’s Single URL scraper (or any headless service like Browserless, Oxylabs, Bright Data, or ScrapingBee) — they handle rendering and IP rotation.
- Want to extract from internal pages (like /contact or /about)? Use Firecrawl’s async crawl mode — just note it takes longer.
- For speed and efficiency, this built-in HTTP + Markdown setup is usually the fastest way to grab emails.
by Jeyson Orozco
Description

This template creates a nightly backup of all n8n workflows and saves them to a Google Drive folder. Each night, the previous night's backups are moved to an “n8n_old” folder and renamed with the corresponding date. Backups older than a specified age are automatically deleted (the default retention is 30 days; you can remove this feature if you don't want backups to be deleted).

Prerequisites
- Google Drive account and credentials: get them from the following link. Link
- n8n version from v1.63.4 to 1.70.1 or higher
- n8n API key: guide at the following link. Link
- Destination folders for backups: “n8n_old” and “n8n_backups” (if they don't exist, create them)

Configuration
- Update all Google Drive nodes with your credentials.
- Edit the Schedule Trigger node with the desired time to run the backup.
- If you want to automatically purge old backups: edit the “PURGE DAYS” node to specify the age of the backups you want to delete, then enable the “PURGE DAYS” node and the 3 subsequent nodes.
- Enable the workflow to run on the specified schedule.

Last updated January 2025
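For orientation, a hedged sketch of the underlying n8n API call the backup step relies on. The instance URL and key are placeholders, and the header name follows the public n8n API as I understand it; confirm against your n8n version's API docs.

```typescript
// Hedged sketch: fetch all workflows from the n8n public API before saving them to Drive.
const N8N_URL = "https://your-n8n.example.com";
const N8N_API_KEY = "YOUR_N8N_API_KEY";

async function listWorkflows() {
  const res = await fetch(`${N8N_URL}/api/v1/workflows`, {
    headers: { "X-N8N-API-KEY": N8N_API_KEY },
  });
  const { data } = await res.json();
  // Each entry can then be written to Google Drive as its own JSON backup file.
  return data as Array<{ id: string; name: string }>;
}
```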
by Raquel Giugliano
This minimal utility workflow connects to the SAP Business One Service Layer API to verify login credentials and return the session ID. It's ideal for testing access or using as a sub-workflow to retrieve the B1SESSION token for other operations.

++⚙️ HOW IT WORKS:++

🔹 1. Trigger Manually
- The workflow is initiated using a Manual Trigger.
- Ideal for testing or debugging credentials before automation.

🔹 2. Set SAP Login Data
The Set Login Data node defines four key input variables:
- sap_url: Base URL of the SAP B1 Service Layer (e.g. https://sap-server:50000/b1s/v1/)
- sap_username: SAP B1 username
- sap_password: SAP B1 password
- sap_companydb: SAP B1 Company DB name

🔹 3. Connect to SAP
An HTTP Request node performs a POST to the Login endpoint. The body is structured as:

```json
{
  "UserName": "your_sap_username",
  "Password": "your_sap_password",
  "CompanyDB": "your_sap_companydb"
}
```

If successful, the response contains a SessionId, which is essential for authenticated requests.

🔹 4. Return Session or Error
The response is branched:
- On success → the sessionID is extracted and returned.
- On failure → the error message and status code are stored separately.

++🛠 SETUP STEPS:++

1️⃣ Create SAP Service Layer Credentials
Although this workflow uses manual inputs (via Set), it's best to define your connection details as environment variables for reuse:

SAP_URL=https://your-sap-host:50000/b1s/v1/
SAP_USER=your_sapuser
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB

Alternatively, update the Set Login Data node directly with your values.

2️⃣ Run the Workflow
- Click "Execute Workflow" in n8n.
- Watch the response from SAP:
  - If successful: sessionID will be available in the Success node.
  - If failed: statusCode and errorMessage will be available in the Failed node.

++✅ USE CASES:++
- 🔄 Reusable Login Module: export this as a reusable sub-workflow for other SAP-integrated flows.
- 🔐 Credential Testing Tool: validate new environments, test credentials before deployment.
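A minimal sketch of the same login call outside n8n, useful for sanity-checking credentials. The host and credentials are placeholders, and self-signed certificates may need extra TLS handling depending on your runtime.

```typescript
// Hedged sketch of the Service Layer login; all values are placeholders.
const SAP_URL = "https://your-sap-host:50000/b1s/v1";

async function loginToSap(): Promise<string> {
  const res = await fetch(`${SAP_URL}/Login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      UserName: "your_sap_username",
      Password: "your_sap_password",
      CompanyDB: "your_sap_companydb",
    }),
  });
  if (!res.ok) throw new Error(`Login failed with status ${res.status}`);
  const { SessionId } = await res.json();
  return SessionId; // reuse as the B1SESSION cookie in subsequent requests
}
```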
by Harshil Agrawal
This workflow stores Typeform form responses in Airtable. The workflow also sends the response to a channel on Slack. You will have to configure the Set node if your form uses different fields.
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases.

1. Uploading (image) datasets to Qdrant
2. Set up meta-variables for anomaly detection in Qdrant
3. Anomaly detection tool
4. KNN classifier tool

For anomaly detection
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly with respect to the uploaded dataset.

For KNN (k nearest neighbours) classification
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both
You'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] Setting Up Cluster (Class) Centres & Cluster (Class) Threshold Scores for Anomaly Detection

A preparatory workflow to set cluster centres and cluster threshold scores so anomalies can be detected based on these thresholds. Here, we're using two approaches to set up these centres: the "distance matrix approach" and the "multimodal embedding model approach". A hedged sketch of the centre/threshold computation follows this section.
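A hedged sketch of one way to compute a class centre and its threshold from that class's embeddings. The mean-vector-plus-max-distance rule used here is an illustrative choice under the "embedding model approach", not necessarily the exact scoring implemented in the workflow.

```typescript
// Hedged sketch: centre = mean embedding of a class; threshold = largest distance
// from the centre to any member. A new image farther than every class threshold is an anomaly.
function mean(vectors: number[][]): number[] {
  const dim = vectors[0].length;
  const centre = new Array(dim).fill(0);
  for (const v of vectors) for (let i = 0; i < dim; i++) centre[i] += v[i] / vectors.length;
  return centre;
}

function euclidean(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
}

function classCentreAndThreshold(embeddings: number[][]) {
  const centre = mean(embeddings);
  const threshold = Math.max(...embeddings.map((e) => euclidean(e, centre)));
  return { centre, threshold };
}
```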