by Jonathan
Task: Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why: You can prototype or replace a backend process with a single workflow
Main use cases: Replace backend logic with a workflow
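For illustration, a minimal sketch of calling such an endpoint, assuming a webhook path of /webhook/echo and a Respond to Webhook node that returns JSON (the path and response shape are hypothetical):

```javascript
// Minimal sketch: call an n8n Webhook endpoint and read the JSON response.
// The path "/webhook/echo" and the response shape are hypothetical examples.
const response = await fetch("https://your-n8n-instance/webhook/echo", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Ada", plan: "pro" }),
});

// The Respond to Webhook node defines what comes back; here we assume JSON.
const data = await response.json();
console.log(data);
```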
by Don Jayamaha Jr
This workflow acts as a central API gateway for all technical indicator agents in the Binance Spot Market Quant AI system. It listens for incoming webhook requests and dynamically routes them to the correct timeframe-based indicator tool (15m, 1h, 4h, 1d). Designed to power multi-timeframe analysis at scale.

What It Does
- Accepts requests via webhook with a token symbol and timeframe
- Forwards requests to the correct internal technical indicator tool
- Returns a clean JSON payload with RSI, MACD, BBANDS, EMA, SMA, and ADX
- Can be used directly or as a microservice by other agents

Input Format
Webhook endpoint: POST /webhook/indicators
Body format:
{ "symbol": "DOGEUSDT", "timeframe": "15m" }

Routing Logic
| Timeframe | Routed To |
| --------- | -------------------------------- |
| 15m | Binance SM 15min Indicators Tool |
| 1h | Binance SM 1hour Indicators Tool |
| 4h | Binance SM 4hour Indicators Tool |
| 1d | Binance SM 1day Indicators Tool |

Use Cases
| Use Case | Description |
| ----------------------------------------------- | ------------------------------------------------------ |
| Used by Binance Financial Analyst Tool | Automatically triggers all indicator tools in parallel |
| Integrated in Binance Quant AI System | Supports reasoning, signal generation, and summaries |
| Can be called independently for raw data access | Useful for dashboards or advanced analytics |

Output Example
{ "symbol": "DOGEUSDT", "timeframe": "15m", "rsi": 56.7, "macd": "Bearish Crossover", "bbands": "Stable", "ema": "Price above EMA", "adx": 19.4 }

Prerequisites
Make sure all the following workflows are installed and operational:
- Binance SM 15min Indicators Tool
- Binance SM 1hour Indicators Tool
- Binance SM 4hour Indicators Tool
- Binance SM 1day Indicators Tool
- OpenAI credentials (for any agent using LLM formatting)

Licensing & Attribution
© 2025 Treasurium Capital Limited Company. All architectural routing logic and endpoint structuring is IP-protected. No unauthorized rebranding or resale permitted.
Need help? Connect with Don Jayamaha on LinkedIn.
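Conceptually, the routing step is a lookup from the timeframe field to the matching indicator tool. A minimal Code-node style sketch of that idea, assuming the webhook body shown above (the actual template dispatches via n8n's own nodes, so this is only an illustration):

```javascript
// Illustration only: map the requested timeframe to the indicator tool that should handle it.
// Tool names come from the routing table above; dispatch in the real template is done by n8n nodes.
const routes = {
  "15m": "Binance SM 15min Indicators Tool",
  "1h": "Binance SM 1hour Indicators Tool",
  "4h": "Binance SM 4hour Indicators Tool",
  "1d": "Binance SM 1day Indicators Tool",
};

const { symbol, timeframe } = $json; // e.g. { symbol: "DOGEUSDT", timeframe: "15m" }
const target = routes[timeframe];

if (!target) {
  throw new Error(`Unsupported timeframe: ${timeframe}`);
}

return [{ json: { symbol, timeframe, target } }];
```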
by Raquel Giugliano
++HOW IT WORKS:++
This workflow automates the processing of invoices sent via Telegram. It extracts the data using LlamaIndex OCR, logs it in Google Sheets, and optionally pushes the structured data to SAP Business One.

1. Receive Invoice via Telegram:
A user sends a PDF of an invoice through Telegram.
A Telegram Trigger node listens for incoming messages and captures the file and metadata.
The document is downloaded and prepared for OCR.

2. OCR with LlamaIndex:
The file is uploaded to the LlamaIndex OCR API.
The workflow polls the API until the processing status returns SUCCESS.
Once ready, the parsed content is fetched in Markdown format.

3. Data Extraction via LLM (editable):
The Markdown content is sent to a language model (LLM) using LangChain.
A Structured Output Parser transforms the result into clean, structured, editable JSON.

4. Save to Google Sheets:
The structured JSON is split into:
- Header (main invoice metadata)
- Detail (individual line items)
Each part is stored in a dedicated tab within a connected Google Sheets file.

5. Ask for SAP Confirmation:
The bot replies to the user via Telegram: "Do you want to send the data to SAP?"
If the user clicks "Yes", the next automation path is triggered.

6. Push Data to SAP B1:
A connection is made to SAP Business One's Service Layer API.
Header and detail data are fetched from Google Sheets.
The invoice structure is rebuilt as required by SAP (DocumentLines, CardCode, etc.); a sketch of this payload is shown after the setup steps.
A POST request creates the Purchase Invoice in SAP.
A confirmation message with the created DocEntry is sent back to the user on Telegram.

++SET UP STEPS:++
Follow these steps to properly configure the workflow before execution:

1. Create Required Credentials:
Go to Credentials > + New Credential and create the following:
- Telegram API (set your bot token; get it from BotFather)
- Google Sheets
- OpenAI

2. Set Up Environment Variables (optional but recommended):
- LLAMAINDEX_API_KEY
- SAP_USER
- SAP_PASSWORD
- SAP_COMPANY_DB
- SAP_URL

3. Prepare Google Sheets:
Ensure your Google Spreadsheet has the following:
- Sheet 1: Header
- Sheet 2: Details (contains columns for invoice lines)
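For orientation, a minimal sketch of the kind of body the workflow would POST to the Service Layer's PurchaseInvoices endpoint, assuming a simple two-line invoice (all values are placeholders; the real mapping is built from the Google Sheets Header and Detail tabs):

```javascript
// Hypothetical example of a Purchase Invoice payload for the SAP B1 Service Layer.
// Field values are placeholders; the workflow assembles them from the Header and
// Detail tabs in Google Sheets.
const purchaseInvoice = {
  CardCode: "V10000",        // supplier code (placeholder)
  DocDate: "2025-01-15",     // posting date from the invoice header
  DocumentLines: [
    { ItemCode: "A00001", Quantity: 2, UnitPrice: 150.0 },
    { ItemCode: "A00002", Quantity: 1, UnitPrice: 80.5 },
  ],
};

// POSTed to <SAP_URL>/PurchaseInvoices with the B1SESSION cookie obtained at login.
```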
by Joseph LePage
This n8n workflow creates an intelligent Telegram bot that processes multiple types of messages and provides automated responses using AI capabilities. The bot serves as a personal assistant that can handle text, voice messages, and images through a sophisticated processing pipeline.

Core Components

Message Reception and Validation
- Implements webhook-based message reception for real-time processing
- Features a robust user validation system that verifies sender credentials
- Supports both testing and production webhook endpoints for development flexibility

Message Processing Pipeline
- Uses a smart router to detect and categorize incoming message types
- Processes three main message formats: text messages, voice recordings, and images with captions

AI Integration
- Leverages OpenAI's GPT-4 for message classification and processing
- Incorporates voice transcription capabilities for audio messages
- Features image analysis using GPT-4 Vision API for processing visual content

Technical Architecture

Webhook Management
- Maintains separate endpoints for testing and production environments
- Implements automatic webhook status monitoring
- Provides real-time webhook configuration updates

Error Handling
- Features comprehensive error detection and reporting
- Implements fallback mechanisms for unprocessable messages
- Provides user feedback for failed operations

Message Classification System
- Categorizes incoming messages into tasks and general conversation
- Implements separate processing paths for different message types
- Maintains context awareness across message processing

Security Features

User Authentication
- Validates user credentials against predefined parameters
- Implements first name, last name, and user ID verification
- Restricts access to authorized users only

Response System

Intelligent Responses
- Generates contextual responses based on message classification
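As an illustration of the user-validation step, a minimal Code-node style sketch that compares the Telegram sender against predefined values (the allowed values are placeholders, and the template's own node may implement this check differently):

```javascript
// Illustration only: verify that the Telegram sender matches predefined credentials.
// The allowed values below are placeholders.
const ALLOWED = { id: 123456789, first_name: "Jane", last_name: "Doe" };

const from = $json.message.from; // standard Telegram update structure
const authorized =
  from.id === ALLOWED.id &&
  from.first_name === ALLOWED.first_name &&
  from.last_name === ALLOWED.last_name;

return [{ json: { authorized } }];
```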
by Dataki
Important Notes:
Check Legal Regulations: This workflow involves scraping, so ensure you comply with the legal regulations in your country before getting started. Better safe than sorry!

Workflow Description:
Tired of struggling with XPath, CSS selectors, or DOM specificity when scraping? This AI-powered solution is here to simplify your workflow! With a vision-based AI Agent, you can extract data effortlessly without worrying about how the DOM is structured.
This workflow leverages a vision-based AI Agent, integrated with Google Sheets, ScrapingBee, and the Gemini-1.5-Pro model, to extract structured data from webpages. The AI Agent primarily uses screenshots for data extraction but switches to HTML scraping when necessary, ensuring high accuracy.

Key Features:
- Google Sheets Integration: Manage URLs to scrape and store structured results.
- ScrapingBee: Capture full-page screenshots and retrieve HTML data for fallback extraction.
- AI-Powered Data Parsing: Use Gemini-1.5-Pro for vision-based scraping and a Structured Output Parser to format extracted data into JSON.
- Token Efficiency: HTML is converted to Markdown to optimize processing costs.

This template is designed for e-commerce scraping but can be customized for various use cases.
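To make the Structured Output Parser step concrete, a hypothetical example of the JSON shape it might produce for an e-commerce page (the field names are illustrative assumptions, not the template's fixed schema):

```javascript
// Hypothetical output of the Structured Output Parser for one product page.
// Field names are illustrative; adapt the parser's schema to your own use case.
const extracted = {
  url: "https://example.com/product/123",
  title: "Wireless Mouse",
  price: 24.99,
  currency: "USD",
  inStock: true,
};
```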
by Ezema Kingsley Chibuzo
What It Does
This n8n workflow collects leads from Google Maps, scrapes their websites via direct HTTP requests, and extracts valid email addresses, all while mimicking real user behavior to improve scraping reliability. It rotates User-Agent headers, introduces randomized delays, and refines URLs by removing only query parameters and fragments to preserve valid page paths (like social media links). The workflow blends Apify actors, raw HTTP requests, HTML-to-Markdown conversion, and smart email extraction to deliver clean, actionable lead data, ready to be sent to Airtable, Google Sheets, or any CRM. Perfect for lean, scalable B2B lead generation using n8n's native logic and no external scrapers.

Why This Workflow
Most lead scrapers rely on heavy tools or APIs like Firecrawl. This workflow:
- Uses lightweight HTTP requests (with randomized user-agents) to scrape websites.
- Adds natural wait times to avoid rate limits and IP bans.
- Avoids full-page crawlers, yet still pulls emails effectively.
- Works great for freelancers, marketers, or teams targeting niche B2B leads.
- Is designed for stealth and resilience.

Who It's For
- Lead generation freelancers or consultants.
- B2B marketers looking to extract real contact info.
- Small businesses doing targeted outreach.
- Developers who want a fast, low-footprint scraper.
- Anyone who wants email + website leads from Google Maps.

How It Works

1. Form Submission (Lead Input)
A Form Trigger collects:
- Keyword
- Location
- No. of Leads (defaults to 10)
This makes the workflow dynamic and user-friendly, ready for multiple use cases and teams.

2. Scrape Business Info (via Apify)
Apify's Google Maps Actor searches for matching businesses.
The Dataset node fetches all relevant business details.
A Set node parses key fields like name, phone, website, and category.
A Limit node ensures the workflow only processes the desired number of leads.

3. First Loop: Visit & Scrape Website
Each business website is processed in a loop.
A Code node cleans the website URL by removing only query parameters/fragments, keeping full paths like /contact (a sketch of this URL-cleaning and email-extraction logic appears after the setup notes below).
An HTTP Request node fetches the raw HTML of the site, using randomized User-Agent headers (5 variants) to mimic real devices and browsers. This makes requests appear more human and reduces the risk of detection or blocking.
The HTML is converted to Markdown using the Markdown node, making it easier to scan for text patterns.
A Wait node introduces a random delay of 2 to 7 seconds, which helps avoid triggering rate limits and reduces the likelihood of being flagged as a bot.
A Merge node combines the scraped Markdown and lead info for use in the second loop.

4. Second Loop: Extract Emails
In this second loop, the Markdown data is processed.
A Code node applies a regex to extract the first valid email address. If no email is found, "N/A" is returned.
A brief 1-second Wait node simulates realistic browsing time.
Another Merge node attaches the email result to the original lead data.

5. Filter, Clean & Store
A Filter node removes all entries with "N/A" or invalid email results.
A Set node ensures only required fields (like website, email, and company name) are passed forward.
The clean leads are saved to Airtable (or optionally, Google Sheets) using an upsert-style insert to avoid duplicates.

Anti-Flagging Design
This workflow is optimized for stealth:
- No scraping tools or headless browsers (like Puppeteer or Firecrawl).
- Direct HTTP requests with rotating User-Agents.
- Randomized wait intervals (2-7s).
- Only non-intrusive parsing; no automation footprints.

How to Set It Up
1. Open n8n (Cloud or Self-Hosted).
2. Install the Apify node: search for Apify and click Install. Do this before importing your file.
3. Import the provided .json file into your n8n editor.
4. Set up the required credentials:
   - Apify API Key (used for Google Maps scraping)
   - Airtable API Key (or connect Google Sheets instead)
5. Recommended: prepare your Airtable base or Google Sheet with fields like Email, Website, Phone, Company Name.
6. Review the Set node if you'd like to collect more fields from Apify (e.g., Ratings, Categories, etc.).

Customization Tips
The Apify scraper returns rich business data. By default, this workflow collects name, phone, and website, but you can add more in the "Grab Desired Fields" node.
Need safer scraping at scale? Swap the HTTP Request for Firecrawl's Single URL scraper (or any headless service like Browserless, Oxylabs, Bright Data, or ScrapingBee); they handle rendering and IP rotation.
Want to extract from internal pages (like /contact or /about)? Use Firecrawl's async crawl mode; just note it takes longer.
For speed and efficiency, this built-in HTTP + Markdown setup is usually the fastest way to grab emails.
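As referenced in steps 3 and 4 above, a minimal Code-node style sketch of the URL cleaning and email extraction, assuming the website URL and Markdown text arrive on the incoming item (property names are assumptions):

```javascript
// Minimal sketch of the two Code-node steps described above.
// Property names (website, markdown) are assumptions about the incoming item.

// 1. Clean the URL: drop query parameters and fragments, keep the path.
const url = new URL($json.website);
const cleanUrl = url.origin + url.pathname; // e.g. https://example.com/contact

// 2. Extract the first valid email address from the page's Markdown.
const emailMatch = ($json.markdown || "").match(
  /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/
);
const email = emailMatch ? emailMatch[0] : "N/A";

return [{ json: { ...$json, website: cleanUrl, email } }];
```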
by Jeyson Orozco
Description
This template creates a nightly backup of all n8n workflows and saves them to a Google Drive folder. Each night, the previous night's backups are moved to an "n8n_old" folder and renamed with the corresponding date. Backups older than a specified age are automatically deleted (the default is 30 days; remove this step if you don't want backups to be deleted).

Prerequisites
- Google Drive account and credentials
- n8n version 1.63.4 to 1.70.1 or higher
- n8n API key
- Destination folders for backups: "n8n_backups" and "n8n_old" (create them if they don't exist)

Configuration
- Update all Google Drive nodes with your credentials.
- Edit the Schedule Trigger node with the desired time to run the backup.
- If you want to automatically purge old backups, edit the "PURGE DAYS" node to specify the age of the backups you want to delete, then enable the "PURGE DAYS" node and the 3 subsequent nodes.
- Enable the workflow to run on the specified schedule.

Last updated January 2025
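A minimal sketch of the date handling involved, assuming the purge age is 30 days and a simple dated naming scheme (both the constant and the filename pattern are illustrative, not the template's exact values):

```javascript
// Illustrative sketch of the date handling described above; names and patterns
// are not the template's exact node fields.
const PURGE_DAYS = 30;

// Name for last night's backup when it is moved into the n8n_old folder.
const yesterday = new Date(Date.now() - 24 * 60 * 60 * 1000);
const datedName = `n8n_backup_${yesterday.toISOString().slice(0, 10)}`; // e.g. n8n_backup_2025-01-14

// Anything created before this cutoff is old enough to be purged.
const cutoff = new Date(Date.now() - PURGE_DAYS * 24 * 60 * 60 * 1000);
console.log(datedName, cutoff.toISOString());
```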
by Raquel Giugliano
This minimal utility workflow connects to the SAP Business One Service Layer API to verify login credentials and return the session ID. It's ideal for testing access or using as a sub-workflow to retrieve the B1SESSION token for other operations.

++HOW IT WORKS:++

1. Trigger Manually
The workflow is initiated using a Manual Trigger. Ideal for testing or debugging credentials before automation.

2. Set SAP Login Data
The Set Login Data node defines four key input variables:
- sap_url: Base URL of the SAP B1 Service Layer (e.g. https://sap-server:50000/b1s/v1/)
- sap_username: SAP B1 username
- sap_password: SAP B1 password
- sap_companydb: SAP B1 Company DB name

3. Connect to SAP
An HTTP Request node performs a POST to the Login endpoint. The body is structured as:
{ "UserName": "your_sap_username", "Password": "your_sap_password", "CompanyDB": "your_sap_companydb" }
If successful, the response contains a SessionId, which is essential for authenticated requests.

4. Return Session or Error
The response is branched:
- On success, the SessionId is extracted and returned.
- On failure, the error message and status code are stored separately.

++SETUP STEPS:++

1. Create SAP Service Layer Credentials
Although this workflow uses manual inputs (via Set), it's best to define your connection details as environment variables for reuse:
SAP_URL=https://your-sap-host:50000/b1s/v1/
SAP_USER=your_sapuser
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB
Alternatively, update the Set Login Data node directly with your values.

2. Run the Workflow
Click "Execute Workflow" in n8n and watch the response from SAP:
- If successful, sessionID will be available in the Success node.
- If it fails, statusCode and errorMessage will be available in the Failed node.

++USE CASES:++
- Reusable Login Module: export this as a reusable sub-workflow for other SAP-integrated flows.
- Credential Testing Tool: validate new environments and test credentials before deployment.
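For context, a minimal sketch of the same login exchange outside n8n, and of how the returned SessionId is typically reused as the B1SESSION cookie on later calls (host, credentials, and the follow-up request are placeholders; self-signed certificates may need extra TLS configuration):

```javascript
// Minimal sketch of the SAP B1 Service Layer login described above.
// Host, credentials and company DB are placeholders.
const base = "https://your-sap-host:50000/b1s/v1";

const loginRes = await fetch(`${base}/Login`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    UserName: "your_sapuser",
    Password: "your_password",
    CompanyDB: "your_companyDB",
  }),
});
const { SessionId } = await loginRes.json();

// Subsequent authenticated calls send the session back as the B1SESSION cookie.
const items = await fetch(`${base}/Items?$top=1`, {
  headers: { Cookie: `B1SESSION=${SessionId}` },
});
console.log(await items.json());
```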
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents
Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n".
This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:
- Uploading (image) datasets to Qdrant
- Setting up meta-variables for anomaly detection in Qdrant
- Anomaly detection tool
- KNN classifier tool

For anomaly detection
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly relative to the uploaded dataset.

For KNN (k nearest neighbours) classification
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both
You'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the API connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] Setting Up Cluster (Class) Centres & Cluster (Class) Threshold Scores for Anomaly Detection
Preparatory workflow to set cluster centres and cluster threshold scores so anomalies can be detected based on these thresholds. Here, we're using two approaches to set up these centres: the "distance matrix approach" and the "multimodal embedding model approach".
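As a rough illustration of what a cluster centre and threshold score mean here, a small sketch that computes a class centre as the mean of its embeddings and a threshold as the largest member distance from that centre (one simple formulation, not necessarily the exact method used by the distance-matrix or multimodal-embedding approaches):

```javascript
// Simple illustration: centre = mean embedding of a class, threshold = max distance
// of any class member from that centre. The actual workflow may compute these differently.
function clusterCentre(embeddings) {
  const dim = embeddings[0].length;
  const centre = new Array(dim).fill(0);
  for (const v of embeddings) {
    for (let i = 0; i < dim; i++) centre[i] += v[i] / embeddings.length;
  }
  return centre;
}

function euclidean(a, b) {
  return Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));
}

function thresholdScore(embeddings, centre) {
  return Math.max(...embeddings.map((v) => euclidean(v, centre)));
}

// A new image whose distance to every class centre exceeds that class's threshold
// can be flagged as an anomaly.
```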
by Dick
Send a simple JSON array via HTTP POST and get an Excel file back. The default filename is Export.xlsx; by adding the optional query parameter ?filename=xyz you can specify the filename. NOTE: do not forget to change the webhook path!
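A minimal sketch of calling such a webhook, assuming it is reachable at /webhook/json-to-excel (the path is a placeholder; use your own webhook path):

```javascript
// Minimal sketch: POST a JSON array and save the returned Excel file.
// The webhook path is a placeholder; replace it with your own.
import { writeFile } from "node:fs/promises";

const rows = [
  { name: "Alice", amount: 120 },
  { name: "Bob", amount: 75 },
];

const res = await fetch(
  "https://your-n8n-instance/webhook/json-to-excel?filename=sales",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(rows),
  }
);

await writeFile("sales.xlsx", Buffer.from(await res.arrayBuffer()));
```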
by tanaypant
This is Workflow 1 in the blog tutorial Database activity monitoring and alerting.

Prerequisites
- A Postgres database set up and its credentials.
- Basic knowledge of JavaScript and SQL.

Nodes
- Cron node: starts the workflow every minute.
- Function node: generates sensor data (sensor id (preset), a randomly generated value, timestamp, and notification flag (preset as false)).
- Postgres node: inserts the data into a Postgres database.

You can create the table for this workflow with the following SQL statement:
CREATE TABLE n8n (id SERIAL, sensor_id VARCHAR, value INT, time_stamp TIMESTAMP, notification BOOLEAN);
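A minimal sketch of what the Function node described above might return, matching the columns of the table (the sensor id value is a placeholder):

```javascript
// Sketch of the Function node: one item with a preset sensor id, a random value,
// the current timestamp, and notification preset to false.
return [
  {
    json: {
      sensor_id: "sensor-1",                  // preset placeholder
      value: Math.floor(Math.random() * 100), // randomly generated reading
      time_stamp: new Date().toISOString(),
      notification: false,
    },
  },
];
```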
by Harshil Agrawal
This workflow demonstrates how to use the HTTP Request node to upload binary files as multipart/form-data. The example updates the Twitter profile banner.
HTTP Request node: fetches an image from Unsplash. Replace this node with any other node that fetches the image file.
HTTP Request1 node: uploads the Twitter profile banner. The Twitter API requires OAuth 1.0 authentication; follow the Twitter documentation to learn how to configure the authentication.
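As a generic illustration of a multipart/form-data upload (not the exact Twitter call; take the endpoint, field name, and OAuth 1.0 signing from the Twitter documentation), a small sketch in Node.js:

```javascript
// Generic multipart/form-data upload sketch (Node 18+). This is not the exact
// Twitter endpoint or field name; consult the Twitter docs for those details.
import { readFile } from "node:fs/promises";

const form = new FormData();
form.append(
  "file", // field name expected by the target API (assumption)
  new Blob([await readFile("banner.png")], { type: "image/png" }),
  "banner.png"
);

// fetch sets the multipart boundary header automatically when given FormData.
await fetch("https://api.example.com/upload", { method: "POST", body: form });
```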