by Simon Mayerhofer
This workflow allows you to batch update/insert Airtable rows in groups of 10, significantly reducing the number of API calls and increasing performance.

🚀 How It Works

Copy the 3 Nodes
Copy the three nodes inside the red note box into your workflow.

Set Your Fields
In the Set Fields node, define the fields you want to update.
➤ Only include fields that match column names in your Airtable table.
➤ Make sure the field names are spelled exactly as they appear in Airtable.
➤ Make sure the field types are set correctly: number columns in Airtable need a number-type field in the Set node.

Configure the Airtable Batch Node
Enter your Airtable Base ID – the part starting with app... in the URL: airtable.com/app8pqOLeka1Cglwg/tblnXZOdy8VtkAAJD/...
Enter your Airtable Table ID – the part starting with tbl... in the URL: airtable.com/app8pqOLeka1Cglwg/tblXXZOdy8VtkAAJD/...

Set Matching Fields (fieldsToMergeOn)
Provide a string array that tells Airtable how to match existing rows. Examples:
Match by one field (e.g. TaskID): {{["TaskID"]}}
Match by multiple fields (e.g. firstname and lastname): {{["firstname", "lastname"]}}

Choose the Mode (mode field)
Available options:
upsert: Update if a record exists, otherwise insert a new one
insert: Always insert as new records
update: Only update existing records (you must provide a field named id)
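As an illustration of what the batching step does under the hood, here is a minimal Code-node style sketch, not the template's exact implementation; the mode, merge fields, and field values are assumptions for illustration:

```javascript
// Hypothetical sketch: chunk incoming items into groups of 10 and build
// Airtable upsert request bodies (Airtable's records endpoint accepts at
// most 10 records per call, which is why the workflow batches in tens).
const mode = 'upsert';                      // 'upsert' | 'insert' | 'update'
const fieldsToMergeOn = ['TaskID'];         // how existing rows are matched

const records = $input.all().map(item => ({ fields: item.json }));

const batches = [];
for (let i = 0; i < records.length; i += 10) {
  const body = { records: records.slice(i, i + 10) };
  if (mode === 'upsert') {
    body.performUpsert = { fieldsToMergeOn };
  }
  batches.push({ json: body });
}

// Each batch is then sent as one request to
// https://api.airtable.com/v0/<BASE_ID>/<TABLE_ID>
return batches;
```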
by Jan Oberhauser
Simple API which queries the received country code via GraphQL and returns it. Example URL: https://n8n.example.com/webhook/1/webhook/webhook?code=DE
Receives the country code from an incoming HTTP request
Reads data via GraphQL
Converts the data to JSON
Constructs the return string
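The GraphQL lookup looks roughly like the following sketch; the public countries API endpoint and the exact fields are assumptions, the real values live in the workflow's GraphQL node:

```javascript
// Hypothetical sketch of the country lookup the workflow performs.
const code = 'DE'; // taken from the ?code= query parameter of the webhook

const response = await fetch('https://countries.trevorblades.com/', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    query: 'query ($code: ID!) { country(code: $code) { name native capital } }',
    variables: { code },
  }),
});

const { data } = await response.json();
// Construct the return string, e.g. "DE: Germany (Deutschland), capital Berlin"
return [{
  json: { text: `${code}: ${data.country.name} (${data.country.native}), capital ${data.country.capital}` },
}];
```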
by jason
This workflow will gather data every minute from the GitHub (https://github.com), Docker (https://www.docker.com/), npm (https://www.npmjs.com/) and Product Hunt (https://www.producthunt.com/) website APIs and display select information on a Smashing (https://smashing.github.io/) dashboard. For convenience, the dashboard piece can be downloaded as a Docker container (https://hub.docker.com/r/tephlon/n8n_dashboard) and installed into your Docker environment.
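For a sense of the kind of polling involved, here is a minimal sketch; the endpoints shown are public APIs queried for the n8n project and are assumptions, the template may query different repositories, packages, and additional fields:

```javascript
// Hypothetical sketch of a once-a-minute poll feeding dashboard widgets.
const [github, npm] = await Promise.all([
  fetch('https://api.github.com/repos/n8n-io/n8n').then(r => r.json()),
  fetch('https://api.npmjs.org/downloads/point/last-week/n8n').then(r => r.json()),
]);

return [{
  json: {
    github_stars: github.stargazers_count,  // shown as a dashboard widget
    npm_weekly_downloads: npm.downloads,    // shown as a dashboard widget
  },
}];
```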
by Harshil Agrawal
This workflow allows you to receive updates when a customer is subscribed to a list in GetResponse and add them to a base in Airtable. GetResponse Trigger node: This node triggers the workflow when a customer is added to a list. Based on your use case, you can select a different event. Set node: The Set node is used here to ensure that only the data we set in this node gets passed on to the next nodes in the workflow. For this workflow, we set the name and email of the customer. Airtable node: The data from the Set node is added to a table in Airtable. Based on your use case, you may want to add the information about the customer to a CRM instead of a table in Airtable. Replace the Airtable node with the node of the CRM where you want to add the data.
by Wolfgang Renner
🧠 Business Card Scanner – Automate Contact Extraction

This workflow automates the process of extracting contact details from business cards (PDF or image) and saving them directly into an n8n Data Table. No more manual data entry — just upload a card and let AI do the rest.

⚙️ How It Works
Upload the business card via a web form (PDF or image).
The uploaded file is converted to Base64 for processing.
The Base64 data is sent to the Mistral OCR API, which extracts text from the image.
The OCR output is parsed into JSON.
An AI Agent (OpenAI GPT-4o-mini) interprets the extracted text and converts it into structured business card information (e.g., name, company, email, phone).
The Structured Output Parser validates and aligns the data with a predefined schema.
The workflow upserts (inserts or updates) the contact details into an n8n Data Table named business_cards, using the email address as the unique identifier.

✅ Result: Seamless digitization of business cards into structured, searchable contact data.

🧩 Prerequisites
Before importing the workflow, make sure you have the following:
n8n instance with access to the Data Table feature
OpenAI Platform account and API key (configured in n8n)
Mistral AI account and API key (configured in n8n)

🛠️ Setup Steps
Import the Workflow – Download and import the JSON file into your n8n instance.
Create a Data Table – Name it business_cards (or adjust the workflow accordingly) and add the following fields: firstname, name, company, jobdescription, phone, mobil, email, street, postcode, place, web.
Configure API Credentials – Mistral OCR API → add your API key under HTTP Bearer Auth. OpenAI API → add your API key under OpenAI Credentials. Model: gpt-4o-mini (recommended for speed and low cost).
Activate the Web Form Trigger – Enable the trigger node to make the business card upload form accessible via a public URL.
Test the Workflow – Upload a sample business card and confirm that the extracted contact data automatically appears in your Data Table.

💡 Example JSON Output
{
  "firstname": "Anna",
  "name": "Müller",
  "company": "NextGen Tech GmbH",
  "jobdescription": "Head of Marketing",
  "email": "anna.mueller@nextgen.tech",
  "phone": "+49 821 1234567",
  "mobil": "+49 170 9876543",
  "street": "Schillerstraße 12",
  "postcode": "86150",
  "place": "Augsburg",
  "web": "https://nextgen.tech"
}
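A minimal sketch of the schema the Structured Output Parser could use; the field names mirror the Data Table columns above, but the exact schema in the template may differ:

```javascript
// Hypothetical JSON Schema for the Structured Output Parser, expressed as a
// JS object. Every property maps 1:1 to a business_cards Data Table column.
const businessCardSchema = {
  type: 'object',
  properties: {
    firstname:      { type: 'string' },
    name:           { type: 'string' },
    company:        { type: 'string' },
    jobdescription: { type: 'string' },
    phone:          { type: 'string' },
    mobil:          { type: 'string' },
    email:          { type: 'string' },
    street:         { type: 'string' },
    postcode:       { type: 'string' },
    place:          { type: 'string' },
    web:            { type: 'string' },
  },
  required: ['email'], // email is the unique key used for the upsert
};
```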
by Jonathan
Task: Create a simple API endpoint using the Webhook and Respond to Webhook nodes
Why: You can prototype or replace a backend process with a single workflow
Main use cases:
Replace backend logic with a workflow
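Once the workflow is active, a client can call it like this sketch; the path "my-endpoint" and the response shape are placeholders that depend on how you configure the Webhook and Respond to Webhook nodes:

```javascript
// Hypothetical client call against the workflow's webhook endpoint.
const res = await fetch('https://your-n8n-instance/webhook/my-endpoint?name=Ada');
const body = await res.json();
console.log(body); // e.g. { "greeting": "Hello Ada" } — whatever Respond to Webhook returns
```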
by Don Jayamaha Jr
This workflow acts as a central API gateway for all technical indicator agents in the Binance Spot Market Quant AI system. It listens for incoming webhook requests and dynamically routes them to the correct timeframe-based indicator tool (15m, 1h, 4h, 1d). Designed to power multi-timeframe analysis at scale.

🎥 Watch Tutorial:

🎯 What It Does
Accepts requests via webhook with a token symbol and timeframe
Forwards requests to the correct internal technical indicator tool
Returns a clean JSON payload with RSI, MACD, BBANDS, EMA, SMA, and ADX
Can be used directly or as a microservice by other agents

🛠️ Input Format
Webhook endpoint: POST /webhook/indicators
Body format:
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m"
}

🔄 Routing Logic
| Timeframe | Routed To                        |
| --------- | -------------------------------- |
| 15m       | Binance SM 15min Indicators Tool |
| 1h        | Binance SM 1hour Indicators Tool |
| 4h        | Binance SM 4hour Indicators Tool |
| 1d        | Binance SM 1day Indicators Tool  |

🔎 Use Cases
| Use Case                                           | Description                                            |
| -------------------------------------------------- | ------------------------------------------------------ |
| 🔗 Used by Binance Financial Analyst Tool          | Automatically triggers all indicator tools in parallel  |
| 🤖 Integrated in Binance Quant AI System           | Supports reasoning, signal generation, and summaries    |
| ⚙️ Can be called independently for raw data access | Useful for dashboards or advanced analytics             |

📤 Output Example
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m",
  "rsi": 56.7,
  "macd": "Bearish Crossover",
  "bbands": "Stable",
  "ema": "Price above EMA",
  "adx": 19.4
}

✅ Prerequisites
Make sure all of the following workflows are installed and operational:
Binance SM 15min Indicators Tool
Binance SM 1hour Indicators Tool
Binance SM 4hour Indicators Tool
Binance SM 1day Indicators Tool
OpenAI credentials (for any agent using LLM formatting)

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company
All architectural routing logic and endpoint structuring is IP-protected. No unauthorized rebranding or resale permitted.
🔗 Need help? Connect on LinkedIn – Don Jayamaha
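The routing decision itself is simple; the sketch below shows the equivalent logic in code form (in the template this is handled by a Switch/If node rather than a Code node; the tool names mirror the routing table above):

```javascript
// Hypothetical sketch of the gateway's timeframe routing.
const { symbol, timeframe } = $json;

const routes = {
  '15m': 'Binance SM 15min Indicators Tool',
  '1h':  'Binance SM 1hour Indicators Tool',
  '4h':  'Binance SM 4hour Indicators Tool',
  '1d':  'Binance SM 1day Indicators Tool',
};

if (!routes[timeframe]) {
  throw new Error(`Unsupported timeframe "${timeframe}" – expected 15m, 1h, 4h or 1d`);
}

return [{ json: { symbol, timeframe, route: routes[timeframe] } }];
```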
by Ezema Kingsley Chibuzo
🧠 What It Does
This n8n workflow collects leads from Google Maps, scrapes their websites via direct HTTP requests, and extracts valid email addresses — all while mimicking real user behavior to improve scraping reliability. It rotates User-Agent headers, introduces randomized delays, and refines URLs by removing only query parameters and fragments to preserve valid page paths (like social media links). The workflow blends Apify actors, raw HTTP requests, HTML-to-Markdown conversion, and smart email extraction to deliver clean, actionable lead data — ready to be sent to Airtable, Google Sheets, or any CRM. Perfect for lean, scalable B2B lead generation using n8n's native logic and no external scrapers.

💡 Why this workflow
Most lead scrapers rely on heavy tools or APIs like Firecrawl. This workflow:
Uses lightweight HTTP requests (with randomized User-Agents) to scrape websites.
Adds natural wait times to avoid rate limits and IP bans.
Avoids full-page crawlers, yet still pulls emails effectively.
Works great for freelancers, marketers, or teams targeting niche B2B leads.
Is designed for stealth and resilience.

👤 Who it's for
Lead generation freelancers or consultants.
B2B marketers looking to extract real contact info.
Small businesses doing targeted outreach.
Developers who want a fast, low-footprint scraper.
Anyone who wants email + website leads from Google Maps.

⚙️ How It Works
1. 📥 Form Submission (Lead Input)
A Form Trigger collects: Keyword, Location, No. of Leads (defaults to 10).
This makes the workflow dynamic and user-friendly — ready for multiple use cases and teams.

2. 📊 Scrape Business Info (via Apify)
Apify's Google Maps Actor searches for matching businesses.
The Dataset node fetches all relevant business details.
A Set node parses key fields like name, phone, website, and category.
A Limit node ensures the workflow only processes the desired number of leads.

3. 🔁 First Loop – Visit & Scrape Website
Each business website is processed in a loop.
A Code node cleans the website URL by removing only query parameters/fragments — keeping full paths like /contact.
An HTTP Request node fetches the raw HTML of the site, using randomized User-Agent headers (5 variants) to mimic real devices and browsers. This makes requests appear more human and reduces the risk of detection or blocking.
HTML is converted to Markdown using the Markdown node, making it easier to scan for text patterns.
A Wait node introduces a random delay of 2–7 seconds, which helps avoid triggering rate limits and reduces the likelihood of being flagged as a bot.
A Merge node combines the scraped Markdown + lead info for use in the second loop.

4. 🔁 Second Loop – Extract Emails
In this second loop, the Markdown data is processed.
A Code node applies a regex to extract the first valid email address; if no email is found, "N/A" is returned (see the Code-node sketch at the end of this description).
A brief 1-second Wait node simulates realistic browsing time.
Another Merge node attaches the email result to the original lead data.

5. ✅ Filter, Clean & Store
A Filter node removes all entries with "N/A" or invalid email results.
A Set node ensures only required fields (like website, email, and company name) are passed forward.
The clean leads are saved to Airtable (or optionally, Google Sheets) using an upsert-style insert to avoid duplicates.

🛡️ Anti-Flagging Design
This workflow is optimized for stealth:
No scraping tools or headless browsers (like Puppeteer or Firecrawl).
Direct HTTP requests with rotating User-Agents.
Randomized wait intervals (2–7 s).
Only non-intrusive parsing — no automation footprints.

🛠 How to Set It Up
Open n8n (Cloud or Self-Hosted).
Install the Apify node: search for "Apify" and click Install. Do this before importing your file.
Import the provided .json file into your n8n editor.
Set up the required credentials:
🔑 Apify API Key (used for Google Maps scraping)
🔑 Airtable API Key (or connect Google Sheets instead)
Recommended: prepare your Airtable base or Google Sheet with fields like Email, Website, Phone, Company Name.
Review the Set node if you'd like to collect more fields from Apify (e.g., Ratings, Categories, etc.).

🔁 Customization Tips
The Apify scraper returns rich business data. By default, this workflow collects name, phone, and website — but you can add more in the "Grab Desired Fields" node.
Need safer scraping at scale? Swap the HTTP Request for Firecrawl's Single URL scraper (or any headless service like Browserless, Oxylabs, Bright Data, or ScrapingBee) — they handle rendering and IP rotation.
Want to extract from internal pages (like /contact or /about)? Use Firecrawl's async crawl mode — just note it takes longer.
For speed and efficiency, this built-in HTTP + Markdown setup is usually the fastest way to grab emails.
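The URL cleaning and email extraction described in steps 3 and 4 could look like this minimal Code-node sketch; the field names (website, markdown) and helper names are assumptions, not the template's exact code:

```javascript
// Hypothetical sketch of the two helpers described above.

// 1. Strip only query parameters and fragments, keep the path (/contact etc.)
function cleanUrl(raw) {
  const url = new URL(raw);
  url.search = '';
  url.hash = '';
  return url.toString();
}

// 2. Pull the first plausible email address out of the converted Markdown
function extractEmail(markdown) {
  const match = markdown.match(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/);
  return match ? match[0] : 'N/A';
}

return $input.all().map(item => ({
  json: {
    ...item.json,
    website: cleanUrl(item.json.website),
    email: extractEmail(item.json.markdown ?? ''),
  },
}));
```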
by Jeyson Orozco
Description
This template creates a nightly backup of all n8n workflows and saves them to a Google Drive folder. Each night, the previous night's backups are moved to an "n8n_old" folder and renamed with the corresponding date. Backups older than a specified age are automatically deleted (the purge threshold is set to 30 days; remove this feature if you don't want backups to be deleted).

Prerequisites
Google Drive account and credentials – get them from the following link. Link
n8n version from v1.63.4 to 1.70.1 or higher
n8n API key – guide at the following link. Link
Destination folders for backups: "n8n_old" and "n8n_backups" (if they don't exist, create them)

Configuration
Update all Google Drive nodes with your credentials.
Edit the Schedule Trigger node with the desired time to run the backup.
If you want to automatically purge old backups:
Edit the "PURGE DAYS" node to specify the age of the backups you want to delete.
Enable the "PURGE DAYS" node and the 3 subsequent nodes.
Enable the workflow to run on the specified schedule.

Last updated January 2025
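The purge step boils down to a date comparison; here is a minimal sketch of that check (in the template this lives in the "PURGE DAYS" node and the nodes that follow it; names here are placeholders):

```javascript
// Hypothetical sketch: select backups older than PURGE_DAYS for deletion.
const PURGE_DAYS = 30;
const cutoff = new Date(Date.now() - PURGE_DAYS * 24 * 60 * 60 * 1000);

return $input.all().filter(item => {
  const createdTime = new Date(item.json.createdTime); // Google Drive file metadata
  return createdTime < cutoff;                         // older than cutoff → delete
});
```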
by Raquel Giugliano
This minimal utility workflow connects to the SAP Business One Service Layer API to verify login credentials and return the session ID. It's ideal for testing access or for use as a sub-workflow to retrieve the B1SESSION token for other operations.

⚙️ HOW IT WORKS

🔹 1. Trigger Manually
The workflow is initiated using a Manual Trigger. Ideal for testing or debugging credentials before automation.

🔹 2. Set SAP Login Data
The Set Login Data node defines four key input variables:
sap_url: Base URL of the SAP B1 Service Layer (e.g. https://sap-server:50000/b1s/v1/)
sap_username: SAP B1 username
sap_password: SAP B1 password
sap_companydb: SAP B1 company database name

🔹 3. Connect to SAP
An HTTP Request node performs a POST to the Login endpoint. The body is structured as:
{
  "UserName": "your_sap_username",
  "Password": "your_sap_password",
  "CompanyDB": "your_sap_companydb"
}
If successful, the response contains a SessionId, which is essential for authenticated requests.

🔹 4. Return Session or Error
The response is branched:
On success → the sessionID is extracted and returned.
On failure → the error message and status code are stored separately.

🛠 SETUP STEPS

1️⃣ Create SAP Service Layer Credentials
Although this workflow uses manual inputs (via Set), it's best to define your connection details as environment variables for reuse:
SAP_URL=https://your-sap-host:50000/b1s/v1/
SAP_USER=your_sapuser
SAP_PASSWORD=your_password
SAP_COMPANY_DB=your_companyDB
Alternatively, update the Set Login Data node directly with your values.

2️⃣ Run the Workflow
Click "Execute Workflow" in n8n and watch the response from SAP:
If successful: sessionID will be available in the Success node.
If failed: statusCode and errorMessage will be available in the Failed node.

✅ USE CASES
🔄 Reusable Login Module – Export this as a reusable sub-workflow for other SAP-integrated flows.
🔐 Credential Testing Tool – Validate new environments and test credentials before deployment.
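Once you have the SessionId, follow-up Service Layer calls typically pass it as a B1SESSION cookie. A minimal sketch, assuming the /Items resource and a hard-coded base URL purely for illustration:

```javascript
// Hypothetical sketch of reusing the SessionId returned by the Login call.
const sessionId = $json.SessionId;                 // from the Login response
const sapUrl = 'https://sap-server:50000/b1s/v1/'; // same base URL as sap_url

const response = await fetch(`${sapUrl}Items?$top=5`, {
  headers: {
    Cookie: `B1SESSION=${sessionId}`,
    'Content-Type': 'application/json',
  },
});

return [{ json: await response.json() }];
```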
by Harshil Agrawal
This workflow stores form responses from Typeform in Airtable. The workflow also sends the response to a channel on Slack. You will have to configure the Set node if your form uses different fields.
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images and, hence, many production use cases:
Uploading (image) datasets to Qdrant
Setting up meta-variables for anomaly detection in Qdrant
Anomaly detection tool
KNN classifier tool

For anomaly detection
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up cluster (class) centres and cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly with respect to the uploaded dataset.

For KNN (k nearest neighbours) classification
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both
You'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API and Google Cloud Storage.

[This workflow] Setting Up Cluster (Class) Centres & Cluster (Class) Threshold Scores for Anomaly Detection
Preparatory workflow to set cluster centres and cluster threshold scores so anomalies can be detected based on these thresholds. Here, we're using two approaches to set up these centres: the "distance matrix approach" and the "multimodal embedding model approach".
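Conceptually, a cluster centre is the mean of a class's image embeddings and the threshold is the largest (or a high-percentile) distance from that centre; an image that lands farther away is later flagged as an anomaly. A minimal sketch of that idea, with illustrative function and variable names rather than the template's actual nodes:

```javascript
// Hypothetical sketch of computing per-class centres and threshold scores.
function mean(vectors) {
  const dim = vectors[0].length;
  const centre = new Array(dim).fill(0);
  for (const v of vectors) for (let i = 0; i < dim; i++) centre[i] += v[i] / vectors.length;
  return centre;
}

function cosineDistance(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// embeddingsByClass: { className: number[][] } fetched from Qdrant
function clusterStats(embeddingsByClass) {
  return Object.entries(embeddingsByClass).map(([label, vectors]) => {
    const centre = mean(vectors);
    const threshold = Math.max(...vectors.map(v => cosineDistance(v, centre)));
    return { label, centre, threshold };
  });
}
```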