by Angel Menendez
This workflow is triggered by a parent workflow initiated via a Slack shortcut. Upon activation, it collects input from a modal window in Slack and initiates a vulnerability scan using the Qualys API.

Key Features
- **Trigger:** Launched by a parent workflow through a Slack shortcut with modal input.
- **API Integration:** Utilizes the Qualys API for vulnerability scanning.
- **Data Conversion:** Converts XML scan results to JSON for further processing.
- **Loop Mechanism:** Continuously checks the scan status until completion (see the sketch below).
- **Slack Notifications:** Posts the scan summary and detailed results to a specified Slack channel.

Workflow Nodes
- Start VM Scan in Qualys: Initiates the scan with the specified parameters.
- Convert XML to JSON: Converts the scan results from XML format to JSON.
- Fetch Scan Results: Retrieves scan results from Qualys.
- Check if Scan Finished: Verifies whether the scan is complete.
- Loop Mechanism: Handles the repeated checking of the scan status.
- Slack Notifications: Posts updates and results to Slack.

Relevant Links
- Qualys API Documentation
- Qualys Platform Documentation
- Parent workflow link
- Link to Report Generator Subworkflow
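A minimal sketch of the status check that drives the loop, written as an n8n "Run Once for Each Item" Code node. The JSON path and state values are assumptions about the converted Qualys scan-list payload, not something this template guarantees:

```javascript
// The path below (RESPONSE.SCAN_LIST.SCAN.STATUS.STATE) is an assumption about
// the Qualys response after the XML-to-JSON conversion; adjust it to your payload.
const scan = $json.SCAN_LIST_OUTPUT?.RESPONSE?.SCAN_LIST?.SCAN;
const state = Array.isArray(scan) ? scan[0]?.STATUS?.STATE : scan?.STATUS?.STATE;

// Treat these states as terminal; anything else loops back to "Fetch Scan Results".
const finished = ['Finished', 'Error', 'Canceled'].includes(state);
return { json: { ...$json, finished } };
```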
by Luciano Gutierrez
Google Calendar AI Agent with Dynamic Scheduling

Version: 1.0.0
n8n Version: 1.88.0+
Author: Koresolucoes
License: MIT

Description
An AI-powered workflow to automate Google Calendar operations using dynamic parameters and MCP (Model Control Plane) integration. Enables event creation, availability checks, updates, and deletions with timezone-aware scheduling [[1]][[2]][[8]].

Key Features:
- 📅 Full Calendar CRUD: Create, read, update, and delete events in Google Calendar.
- ⏰ Availability Checks: Verify time slots using the AVALIABILITY_CALENDAR node with timezone support (e.g., America/Sao_Paulo).
- 🤖 AI-Driven Parameters: Use $fromAI() to inject dynamic values like Start_Time, End_Time, and Description [[3]][[4]].
- 🔗 MCP Integration: Connects to an MCP server for centralized AI agent control [[5]][[6]].

Use Cases
- Automated Scheduling: Book appointments based on AI-recommended time slots.
- Meeting Coordination: Sync calendar events with CRM/task management systems.
- Resource Management: Check room/equipment availability before event creation.

Instructions
1. Import Template: Go to n8n > Templates > Import from File and upload this workflow.
2. Configure Credentials: Add Google Calendar OAuth2 credentials under Settings > Credentials. Ensure the calendar ID matches your target (e.g., the ODONTOLOGIA group calendar).
3. Set Up Dynamic Parameters: Use $fromAI('Parameter_Name') in nodes like CREATE_CALENDAR to inject AI-generated values (e.g., event descriptions); see the expression sketch below.
4. Activate & Test: Enable the workflow and send test requests to the webhook path /mcp/:tool/calendar.

Tags: Google Calendar, Automation, MCP, AI Agent, Scheduling, CRUD

License
This template is licensed under the MIT License.

Notes:
- Extend multi-tenancy by adding :userId to the webhook path (e.g., /mcp/:userId/calendar) [[7]].
- For timezone accuracy, always specify options.timezone in availability checks [[8]].
- Refer to n8n's Google Calendar docs for advanced field mappings.
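A small sketch of how the dynamic parameters can be wired into the calendar node's fields. Only the $fromAI('Parameter_Name') form comes from the template; the field labels and the optional description/type arguments shown here are illustrative:

```
Start:       {{ $fromAI('Start_Time', 'Event start in ISO 8601', 'string') }}
End:         {{ $fromAI('End_Time', 'Event end in ISO 8601', 'string') }}
Description: {{ $fromAI('Description', 'Short event summary', 'string') }}
Timezone:    America/Sao_Paulo   (set via options.timezone for availability checks)
```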
by Wolfgang Renner
🧠 Business Card Scanner – Automate Contact Extraction

This workflow automates the process of extracting contact details from business cards (PDF or image) and saving them directly into an n8n Data Table. No more manual data entry — just upload a card and let AI do the rest.

⚙️ How It Works
1. Upload the business card via a web form (PDF or image).
2. The uploaded file is converted to Base64 for processing.
3. The Base64 data is sent to the Mistral OCR API, which extracts text from the image.
4. The OCR output is parsed into JSON.
5. An AI Agent (OpenAI GPT-4o-mini) interprets the extracted text and converts it into structured business card information (e.g., name, company, email, phone).
6. The Structured Output Parser validates and aligns the data with a predefined schema (see the schema sketch below).
7. The workflow upserts (inserts or updates) the contact details into an n8n Data Table named business_cards, using the email address as the unique identifier.

✅ Result: Seamless digitization of business cards into structured, searchable contact data.

🧩 Prerequisites
Before importing the workflow, make sure you have the following:
- n8n instance with access to the Data Table feature
- OpenAI Platform account and API key (configured in n8n)
- Mistral AI account and API key (configured in n8n)

🛠️ Setup Steps
1. Import the Workflow: Download and import the JSON file into your n8n instance.
2. Create a Data Table: Name it business_cards (or adjust the workflow accordingly) and add the following fields: firstname, name, company, jobdescription, phone, mobil, email, street, postcode, place, web.
3. Configure API Credentials:
   - Mistral OCR API → add your API key under HTTP Bearer Auth.
   - OpenAI API → add your API key under OpenAI Credentials. Model: gpt-4o-mini (recommended for speed and low cost).
4. Activate the Web Form Trigger: Enable the trigger node to make the business card upload form accessible via a public URL.
5. Test the Workflow: Upload a sample business card and confirm that the extracted contact data automatically appears in your Data Table.

💡 Example JSON Output
```json
{
  "firstname": "Anna",
  "name": "Müller",
  "company": "NextGen Tech GmbH",
  "jobdescription": "Head of Marketing",
  "email": "anna.mueller@nextgen.tech",
  "phone": "+49 821 1234567",
  "mobil": "+49 170 9876543",
  "street": "Schillerstraße 12",
  "postcode": "86150",
  "place": "Augsburg",
  "web": "https://nextgen.tech"
}
```
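A minimal JSON Schema sketch for the Structured Output Parser, assuming it is configured with a manual schema. The field list mirrors the Data Table columns above; marking email as required reflects its role as the upsert key:

```json
{
  "type": "object",
  "properties": {
    "firstname":      { "type": "string" },
    "name":           { "type": "string" },
    "company":        { "type": "string" },
    "jobdescription": { "type": "string" },
    "phone":          { "type": "string" },
    "mobil":          { "type": "string" },
    "email":          { "type": "string" },
    "street":         { "type": "string" },
    "postcode":       { "type": "string" },
    "place":          { "type": "string" },
    "web":            { "type": "string" }
  },
  "required": ["email"]
}
```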
by Miquel Colomer
This workflow is useful if you have lots of tasks running daily. The MySQL node (or whichever database n8n uses to store its data: it could be MongoDB, Postgres, ...) removes old entries from the execution_entity table, which holds the history of executed workflows. If a task runs every minute, it creates 1,440 rows per day (60 minutes x 24 hours) for that task alone, so the table grows quickly. The SQL query deletes entries older than 30 days, using the stoppedAt column as the reference for the date calculation. You only have to set up the MySQL connection properly and configure the Cron trigger to run once per day at a low-traffic hour.
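The cleanup query itself could look like the following minimal sketch, assuming MySQL syntax and the execution_entity / stoppedAt names mentioned above; adjust the interval to your retention policy:

```sql
-- Remove execution history older than 30 days, based on when each execution stopped.
DELETE FROM execution_entity
WHERE stoppedAt < DATE_SUB(NOW(), INTERVAL 30 DAY);
```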
by Simon Mayerhofer
This workflow allows you to batch update/insert Airtable rows in groups of 10, significantly reducing the number of API calls and increasing performance.

🚀 How It Works
1. Copy the 3 Nodes: Copy the three nodes inside the red note box into your workflow.
2. Set Your Fields: In the Set Fields node, define the fields you want to update.
   ➤ Only include fields that match column names in your Airtable table.
   ➤ Make sure the field names are spelled exactly as they appear in Airtable.
   ➤ Make sure the field types are set correctly: number columns in Airtable need a number-typed field, and so on.
3. Configure the Airtable Batch Node:
   - Enter your Airtable Base ID (the app... part of the URL): airtable.com/app8pqOLeka1Cglwg/tblXXZOdy8VtkAAJD/...
   - Enter your Airtable Table ID (the tbl... part of the URL): airtable.com/app8pqOLeka1Cglwg/tblXXZOdy8VtkAAJD/...
4. Set Matching Fields (fieldsToMergeOn): Provide a string array that tells Airtable how to match existing rows. Examples:
   - Match by one field (e.g. TaskID): {{["TaskID"]}}
   - Match by multiple fields (e.g. firstname and lastname): {{["firstname", "lastname"]}}
5. Choose the Mode (mode field). Available options:
   - upsert: Update if a record exists, otherwise insert a new one
   - insert: Always insert as new records
   - update: Only update existing records (you must provide a field named id)
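For context, Airtable's records endpoint accepts up to 10 records per request, which is why batching in groups of 10 works. A rough sketch of the request body for an upsert (PATCH to https://api.airtable.com/v0/&lt;Base ID&gt;/&lt;Table ID&gt;); the field names are illustrative, and the batch node may shape its request slightly differently:

```json
{
  "performUpsert": { "fieldsToMergeOn": ["TaskID"] },
  "records": [
    { "fields": { "TaskID": "T-001", "Status": "Done" } },
    { "fields": { "TaskID": "T-002", "Status": "Open" } }
  ]
}
```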
by Jan Oberhauser
A simple API which looks up the received country code via GraphQL and returns the result.

Example URL: https://n8n.example.com/webhook/1/webhook/webhook?code=DE

- Receives the country code from an incoming HTTP request
- Reads data via GraphQL
- Converts the data to JSON
- Constructs the return string
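The GraphQL step could issue a query along these lines; the endpoint and the available fields are assumptions, since the workflow only specifies that the code from the query string is looked up via GraphQL:

```graphql
# Illustrative query; the actual schema depends on the GraphQL endpoint you configure.
query ($code: ID!) {
  country(code: $code) {
    name
    capital
    currency
  }
}
```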
by jason
This workflow gathers data every minute from the GitHub (https://github.com), Docker (https://www.docker.com/), npm (https://www.npmjs.com/) and Product Hunt (https://www.producthunt.com/) APIs and displays selected information on a Smashing (https://smashing.github.io/) dashboard. For convenience, the dashboard piece can be downloaded as a Docker container (https://hub.docker.com/r/tephlon/n8n_dashboard) and installed into your Docker environment.
by Harshil Agrawal
This workflow allows you to receive updates when a customer is subscribed to a list in GetResponse and add them to a base in Airtable.

- GetResponse Trigger node: This node triggers the workflow when a customer is added to a list. Based on your use case, you can select a different event.
- Set node: The Set node is used here to ensure that only the data we set in this node gets passed on to the next nodes in the workflow. For this workflow, we set the name and email of the customer.
- Airtable node: The data from the Set node is added to a table in Airtable. Based on your use case, you may want to add the information about the customer to a CRM instead of a table in Airtable. Replace the Airtable node with the node of the CRM where you want to add the data.
by Jonathan
Task: Create a simple API endpoint using the Webhook and Respond to Webhook nodes.

Why: You can prototype or replace a backend process with a single workflow.

Main use cases:
- Replace backend logic with a workflow
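A quick way to exercise such an endpoint once the workflow is active; the host, path, and query parameter below are placeholders for whatever your Webhook node is configured with:

```bash
# Use the production URL shown on your Webhook node; path and parameter are placeholders.
curl -s "https://your-n8n-instance/webhook/my-endpoint?name=Ada"
# The Respond to Webhook node returns whatever payload the workflow builds for this request.
```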
by Don Jayamaha Jr
This workflow acts as a central API gateway for all technical indicator agents in the Binance Spot Market Quant AI system. It listens for incoming webhook requests and dynamically routes them to the correct timeframe-based indicator tool (15m, 1h, 4h, 1d). Designed to power multi-timeframe analysis at scale.

🎥 Watch Tutorial:

🎯 What It Does
- Accepts requests via webhook with a token symbol and timeframe
- Forwards requests to the correct internal technical indicator tool
- Returns a clean JSON payload with RSI, MACD, BBANDS, EMA, SMA, and ADX
- Can be used directly or as a microservice by other agents

🛠️ Input Format
Webhook endpoint: POST /webhook/indicators
Body format:
```json
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m"
}
```

🔄 Routing Logic

| Timeframe | Routed To                        |
| --------- | -------------------------------- |
| 15m       | Binance SM 15min Indicators Tool |
| 1h        | Binance SM 1hour Indicators Tool |
| 4h        | Binance SM 4hour Indicators Tool |
| 1d        | Binance SM 1day Indicators Tool  |

🔎 Use Cases

| Use Case                                            | Description                                             |
| --------------------------------------------------- | ------------------------------------------------------- |
| 🔗 Used by Binance Financial Analyst Tool            | Automatically triggers all indicator tools in parallel  |
| 🤖 Integrated in Binance Quant AI System             | Supports reasoning, signal generation, and summaries    |
| ⚙️ Can be called independently for raw data access   | Useful for dashboards or advanced analytics             |

📤 Output Example
```json
{
  "symbol": "DOGEUSDT",
  "timeframe": "15m",
  "rsi": 56.7,
  "macd": "Bearish Crossover",
  "bbands": "Stable",
  "ema": "Price above EMA",
  "adx": 19.4
}
```

✅ Prerequisites
Make sure all the following workflows are installed and operational:
- Binance SM 15min Indicators Tool
- Binance SM 1hour Indicators Tool
- Binance SM 4hour Indicators Tool
- Binance SM 1day Indicators Tool
- OpenAI credentials (for any agent using LLM formatting)

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. All architectural routing logic and endpoint structuring is IP-protected. No unauthorized rebranding or resale permitted.

🔗 Need help? Connect on LinkedIn – Don Jayamaha
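Calling the gateway from another workflow or an external client looks roughly like this; the host is a placeholder, while the endpoint and body follow the input format above:

```bash
curl -s -X POST "https://your-n8n-instance/webhook/indicators" \
  -H "Content-Type: application/json" \
  -d '{"symbol": "DOGEUSDT", "timeframe": "15m"}'
```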
by Ezema Kingsley Chibuzo
🧠 What It Does
This n8n workflow collects leads from Google Maps, scrapes their websites via direct HTTP requests, and extracts valid email addresses — all while mimicking real user behavior to improve scraping reliability. It rotates User-Agent headers, introduces randomized delays, and refines URLs by removing only query parameters and fragments to preserve valid page paths (like social media links). The workflow blends Apify actors, raw HTTP requests, HTML-to-Markdown conversion, and smart email extraction to deliver clean, actionable lead data — ready to be sent to Airtable, Google Sheets, or any CRM. Perfect for lean, scalable B2B lead generation using n8n's native logic and no external scrapers.

💡 Why This Workflow
Most lead scrapers rely on heavy tools or APIs like Firecrawl. This workflow:
- Uses lightweight HTTP requests (with randomized User-Agents) to scrape websites.
- Adds natural wait times to avoid rate limits and IP bans.
- Avoids full-page crawlers, yet still pulls emails effectively.
- Works great for freelancers, marketers, or teams targeting niche B2B leads.
- Is designed for stealth and resilience.

👤 Who It's For
- Lead generation freelancers or consultants.
- B2B marketers looking to extract real contact info.
- Small businesses doing targeted outreach.
- Developers who want a fast, low-footprint scraper.
- Anyone who wants email + website leads from Google Maps.

⚙️ How It Works

1. 📥 Form Submission (Lead Input)
A Form Trigger collects:
- Keyword
- Location
- No. of Leads (defaults to 10)
This makes the workflow dynamic and user-friendly — ready for multiple use cases and teams.

2. 📊 Scrape Business Info (via Apify)
- Apify's Google Maps Actor searches for matching businesses.
- The Dataset node fetches all relevant business details.
- A Set node parses key fields like name, phone, website, and category.
- A Limit node ensures the workflow only processes the desired number of leads.

3. 🔁 First Loop – Visit & Scrape Website
Each business website is processed in a loop.
- A Code node cleans the website URL by removing only query parameters/fragments, keeping full paths like /contact.
- An HTTP Request node fetches the raw HTML of the site, using randomized User-Agent headers (5 variants) to mimic real devices and browsers. This makes requests appear more human and reduces the risk of detection or blocking.
- The HTML is converted to Markdown using the Markdown node, making it easier to scan for text patterns.
- A Wait node introduces a random delay between 2 and 7 seconds, which helps avoid triggering rate limits and reduces the likelihood of being flagged as a bot.
- A Merge node combines the scraped Markdown + lead info for use in the second loop.

4. 🔁 Second Loop – Extract Emails
In this second loop, the Markdown data is processed.
- A Code node applies a regex to extract the first valid email address (see the sketch at the end of this description). If no email is found, "N/A" is returned.
- A brief 1-second Wait node simulates realistic browsing time.
- Another Merge node attaches the email result to the original lead data.

5. ✅ Filter, Clean & Store
- A Filter node removes all entries with "N/A" or invalid email results.
- A Set node ensures only the required fields (like website, email, and company name) are passed forward.
- The clean leads are saved to Airtable (or optionally Google Sheets) using an upsert-style insert to avoid duplicates.

🛡️ Anti-Flagging Design
This workflow is optimized for stealth:
- No scraping tools or headless browsers (like Puppeteer or Firecrawl).
- Direct HTTP requests with rotating User-Agents.
- Randomized wait intervals (2-7s).
- Only non-intrusive parsing — no automation footprints.

🛠 How to Set It Up
1. Open n8n (Cloud or self-hosted).
2. Install the Apify node: search for Apify and click Install. Do this before importing the file.
3. Import the provided .json file into your n8n editor.
4. Set up the required credentials:
   - 🔑 Apify API key (used for Google Maps scraping)
   - 🔑 Airtable API key (or connect Google Sheets instead)
5. Recommended: Prepare your Airtable base or Google Sheet with fields like Email, Website, Phone, Company Name.
6. Review the Set node if you'd like to collect more fields from Apify (e.g., Ratings, Categories, etc.).

🔁 Customization Tips
- The Apify scraper returns rich business data. By default, this workflow collects name, phone, and website — but you can add more in the "Grab Desired Fields" node.
- Need safer scraping at scale? Swap the HTTP Request for Firecrawl's Single URL scraper (or any headless service like Browserless, Oxylabs, Bright Data, or ScrapingBee) — they handle rendering and IP rotation.
- Want to extract from internal pages (like /contact or /about)? Use Firecrawl's async crawl mode — just note it takes longer.
- For speed and efficiency, this built-in HTTP + Markdown setup is usually the fastest way to grab emails.
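A minimal sketch of the email-extraction Code node described in the second loop, written for "Run Once for All Items" mode. The regex and the markdown field name are illustrative (the template does not publish its exact pattern), but the behavior matches the description: return the first valid address, or "N/A" when none is found:

```javascript
// Scan each item's scraped Markdown for the first email-like string.
const EMAIL_REGEX = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/;

return $input.all().map(item => {
  const text = item.json.markdown || '';   // field name is an assumption
  const match = text.match(EMAIL_REGEX);
  return { json: { ...item.json, email: match ? match[0] : 'N/A' } };
});
```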
by Jeyson Orozco
Description
This template creates a nightly backup of all n8n workflows and saves them to a Google Drive folder. Each night, the previous night's backups are moved to an "n8n_old" folder and renamed with the corresponding date. Backups older than a specified age are automatically deleted (this is preconfigured for 30 days; remove the purge nodes if you don't want backups to be deleted).

Prerequisites
- Google Drive account and credentials. Get them from the following link: Link
- n8n version 1.63.4 to 1.70.1 or higher
- n8n API key. Guide at the following link: Link
- Destination folders for backups: "n8n_old" and "n8n_backups" (if they don't exist, create them)

Configuration
- Update all Google Drive nodes with your credentials.
- Edit the Schedule Trigger node with the desired time to run the backup.
- If you want to automatically purge old backups, edit the "PURGE DAYS" node to specify the age of the backups you want to delete, then enable the "PURGE DAYS" node and the 3 subsequent nodes.
- Enable the workflow to run on the specified schedule.

Last updated January 2025
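One way the dated rename in the "move to n8n_old" step can be expressed is sketched below, using n8n's built-in $now (Luxon) helper; the field it belongs in and the $json.name reference are placeholders that depend on how the Google Drive update node in this template is configured:

```
New file name: {{ $json.name }}_{{ $now.toFormat('yyyy-LL-dd') }}.json
```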