by Ehsan
**How it works**

This template creates a fully automated, hands-off pipeline for processing financial documents. It's perfect for small businesses, freelancers, or operations teams who want to stop manually entering invoice and receipt data.

When you drop one or more new image files into a specific Google Drive folder, this workflow automatically:

1. **Triggers** and downloads the new file.
2. **Performs OCR** on the file using a local AI model (Nanonets-OCR-s) to extract all the raw text.
3. **Cleans & structures** the raw text using a second local AI model (command-r7b). This step turns messy text into a clean, predictable JSON object.
4. **Saves the structured data** (like InvoiceNumber, TotalAmount, IssueDate, etc.) to a new record in your Airtable base.
5. **Moves the original file** to a "Done" or "Failed" folder to keep your inbox clean and organized.

**Requirements**

- **Google Drive account:** For triggering the workflow and storing files.
- **Airtable account:** To store the final, structured data.
- **Ollama (local AI):** This template is designed to run locally for free. You must have Ollama running and pull two models from your terminal: `ollama pull benhaotang/Nanonets-OCR-s:F16` and `ollama pull command-r7b:7b-12-2024-q8_0`

**How to set up**

Setup should take about 10-15 minutes. The workflow contains 7 sticky notes that will guide you step by step.

1. **Airtable:** Use the link in the main sticky note ([1]) to duplicate the Airtable base to your own account.
2. **Ollama:** Make sure you have pulled the two required models listed above.
3. **Credentials:** You will need to add three credentials in n8n:
   - Your Google Drive (OAuth2) credentials.
   - Your Airtable (Personal Access Token) credentials.
   - Your Ollama credentials. (To do this, create an "OpenAI API" credential, set the Base URL to your local server, e.g., http://localhost:11434/v1, and use ollama as the API key.)
4. **Follow the notes:** Click through the workflow and follow the numbered sticky notes ([1] to [6]) to connect your credentials and select your folders for each node.
**How to customize the workflow**

- **Use cloud AI:** This template is flexible! You can easily swap the local Ollama models for a cloud provider (like OpenAI's GPT-4o or Anthropic's Claude 3). Just change the credentials and model name in the two AI nodes (OpenAI Chat Model and Data Cleaner).
- **Add more fields:** To extract more data (e.g., SupplierVATNumber), simply add the new field to the prompt in the Data Cleaner node and map it in the AirTable - Create a record1 node.
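To illustrate what the Data Cleaner step is expected to emit, here is a minimal Python sketch. The field names (InvoiceNumber, TotalAmount, IssueDate) come from the template description above; the validator function itself is a hypothetical helper you could adapt in a Code node to guard against malformed LLM output, not part of the template.

```python
import json

# Hypothetical example of the kind of JSON object the Data Cleaner step
# produces, plus a small validator to catch missing fields early.
REQUIRED_FIELDS = ["InvoiceNumber", "TotalAmount", "IssueDate"]

def parse_cleaned_invoice(raw: str) -> dict:
    """Parse the model's output and check that the required fields exist."""
    data = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"Data Cleaner output missing fields: {missing}")
    return data

sample = '{"InvoiceNumber": "INV-0042", "TotalAmount": "199.00", "IssueDate": "2024-05-01"}'
invoice = parse_cleaned_invoice(sample)
print(invoice["InvoiceNumber"])
```

If you add fields such as SupplierVATNumber to the prompt, extending `REQUIRED_FIELDS` keeps the Airtable mapping and the validation in sync.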
by Robert Breen
This workflow fetches live financial data from SerpApi and generates a daily market recap using OpenAI.

**⚙️ Setup Instructions**

**1️⃣ Set Up SerpApi Connection**
- Create a free account at SerpApi
- Copy your API key from the SerpApi dashboard
- In n8n → Credentials → New → SerpApi
- Paste your API key → Save
- In the workflow, select your SerpApi credential in the Finance Search node

**2️⃣ Set Up OpenAI Connection**
- Go to OpenAI Platform
- Navigate to OpenAI Billing
- Add funds to your billing account
- Copy your API key into the OpenAI credentials in n8n

**🧠 How it works**
- **SerpApi Finance Search** → pulls market data (example: S&P 500, ticker ^GSPC)
- **OpenAI Model** → summarizes it into a daily report with a paragraph plus key bullet points

**📬 Contact**
Need help customizing (e.g., pulling multiple tickers, exporting to Google Sheets, or sending Slack/Email updates)?
📧 robert@ynteractive.com
🔗 Robert Breen
🌐 ynteractive.com
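Under the hood, the Finance Search node boils down to a single query-string request. This is an illustrative sketch assuming SerpApi's google_finance engine; the n8n node assembles the request and handles authentication for you, so you never need to build the URL yourself.

```python
from urllib.parse import urlencode

def build_finance_search_url(ticker: str, api_key: str) -> str:
    """Build a SerpApi Google Finance request URL for one ticker."""
    params = {
        "engine": "google_finance",  # SerpApi's Google Finance engine
        "q": ticker,                 # e.g. ^GSPC for the S&P 500
        "api_key": api_key,
    }
    return "https://serpapi.com/search.json?" + urlencode(params)

url = build_finance_search_url("^GSPC", "YOUR_API_KEY")
print(url)
```

Swapping the ticker (or looping over several) is how you would extend this to a multi-ticker recap.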
by Robert Breen
**🏡 AI-Powered Real Estate Lead Qualifier (n8n Workflow)**

**Description**

This n8n workflow automates lead engagement and qualification for real estate buyers. When someone submits a form on your real estate website, the system instantly responds via SMS, starting a conversation powered by an AI agent. The AI asks pre-qualifying questions (like budget, location, and timeline), logs the entire conversation, and then summarizes and sends the lead info to a real estate agent. Chat history is stored in a PostgreSQL database (via Supabase) and tied to each buyer's phone number.

**Key Features**

- 📩 **Instant SMS response:** Follows up immediately after form submission.
- 🤖 **AI chat-based qualification:** A conversational agent gathers buyer needs and preferences.
- 🗂 **Supabase chat memory:** Stores chat history tied to the buyer's phone number to maintain context.
- 📊 **Lead summary & agent handoff:** Summarizes the conversation and logs it to Google Sheets for the agent.
- 🔁 **Customizable AI questions:** Easily edit the questions asked by the AI to suit your process.

**🤝 Connect with Me**

I'm Robert Breen, founder of Ynteractive — a consulting firm that helps businesses automate operations using n8n, AI agents, and custom workflows. I've helped clients build everything from intelligent chatbots to complex sales automations, and I'm always excited to collaborate or support new projects. If you found this workflow helpful or want to talk through an idea, I'd love to hear from you.

**Links**
- 🌐 Website: https://www.ynteractive.com
- 📺 YouTube: @ynteractivetraining
- 💼 LinkedIn: https://www.linkedin.com/in/robert-breen
- 📬 Email: rbreen@ynteractive.com
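The chat-memory pattern, history keyed by the buyer's phone number, can be sketched as follows. The workflow itself uses PostgreSQL via Supabase; SQLite is used here only so the example is self-contained, and the table and column names are illustrative rather than the workflow's actual schema.

```python
import sqlite3

# Minimal sketch of phone-number-keyed chat memory (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE chat_memory (
        phone_number TEXT,
        role         TEXT,   -- 'agent' or 'buyer'
        message      TEXT,
        created_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")

def log_message(phone: str, role: str, message: str) -> None:
    conn.execute(
        "INSERT INTO chat_memory (phone_number, role, message) VALUES (?, ?, ?)",
        (phone, role, message),
    )

def get_history(phone: str) -> list:
    # All prior turns for this buyer, oldest first, so the AI keeps context.
    cur = conn.execute(
        "SELECT role, message FROM chat_memory WHERE phone_number = ? ORDER BY rowid",
        (phone,),
    )
    return cur.fetchall()

log_message("+15551234567", "agent", "What's your budget and preferred area?")
log_message("+15551234567", "buyer", "Around $450k, near downtown.")
history = get_history("+15551234567")
print(history)
```

Because every row carries the phone number, the same buyer can resume the SMS conversation days later and the agent still sees the full thread.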
by vinci-king-01
**How it works**

This workflow automatically discovers and analyzes backlinks for any website, providing comprehensive SEO insights and competitive intelligence using AI-powered analysis.

**Key Steps**

1. **Website Input** - Accepts target URLs via webhook or manual input for backlink analysis.
2. **Backlink Discovery** - Scrapes and crawls the web to find all backlinks pointing to the target website.
3. **AI-Powered Analysis** - Uses GPT-4 to analyze backlink quality, relevance, and SEO impact.
4. **Data Processing & Categorization** - Cleans, validates, and automatically categorizes backlinks by type, authority, and relevance.
5. **Database Storage** - Saves processed backlink data to a PostgreSQL database for ongoing analysis and reporting.
6. **API Response** - Returns a structured summary with backlink counts, domain authority scores, and SEO insights.

**Set up steps**

Setup time: 8-12 minutes

1. **Configure OpenAI credentials** - Add your OpenAI API key for AI-powered backlink analysis.
2. **Set up your PostgreSQL database** - Connect your PostgreSQL database and ensure it has a backlinks table with the required column structure.
3. **Configure the webhook endpoint** - The workflow provides an /analyze-backlinks endpoint for URL submissions.
4. **Customize analysis parameters** - Modify the AI prompt to include your preferred SEO metrics and analysis criteria.
5. **Test the workflow** - Submit a sample website URL to verify the backlink discovery and analysis process.
**Features**

- **Comprehensive backlink discovery:** Finds all backlinks pointing to target websites
- **AI-powered analysis:** GPT-4 analyzes backlink quality, relevance, and SEO impact
- **Automatic categorization:** Backlinks categorized by type (dofollow/nofollow), authority level, and relevance
- **Data validation:** Cleans and validates backlink data with error handling
- **Database storage:** PostgreSQL integration for data persistence and historical tracking
- **API responses:** Clean JSON responses with backlink summaries and SEO insights
- **Competitive intelligence:** Analyzes competitor backlink profiles and identifies link-building opportunities
- **Authority scoring:** Calculates domain authority and page authority metrics for each backlink
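The categorization step described above can be sketched in a few lines. The field names and authority thresholds here are assumptions for illustration, not the workflow's exact logic, which lives in its AI prompt and processing nodes.

```python
def categorize_backlink(link: dict) -> dict:
    """Bucket one backlink by link type and authority level (illustrative)."""
    rel = link.get("rel", "")
    link_type = "nofollow" if "nofollow" in rel else "dofollow"
    da = link.get("domain_authority", 0)
    if da >= 60:
        authority = "high"
    elif da >= 30:
        authority = "medium"
    else:
        authority = "low"
    return {**link, "type": link_type, "authority": authority}

sample = {"source": "https://blog.example.net/post", "rel": "nofollow", "domain_authority": 72}
result = categorize_backlink(sample)
print(result)
```

Storing the derived `type` and `authority` columns alongside each row in the backlinks table is what makes the historical tracking and competitor comparisons queryable later.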
by Arkadiusz
**Workflow Description**

Turn a simple text idea into production-ready icons in seconds. With this workflow, you input a subject (e.g., "Copy", "Banana", "Slack Mute"), select a style (Flat, 3D, Cartoon, etc.), and off it goes. Here's what happens:

1. A form trigger collects your icon subject, style, and optional background.
2. The workflow uses an LLM to construct an optimised prompt.
3. An image-generation model (OpenAI image API) renders a transparent-background, 400×400 px PNG icon.
4. The icon is automatically uploaded to Google Drive, and both a download link and thumbnail are generated.
5. A styled completion card displays the result and gives you a "One More Time" option.

Perfect for designers, developers, no-code creators, UI builders and even home-automation geeks (yes, you can integrate it with Home Assistant or Stream Deck!). It saves you the manual icon-hunt grind and gives consistent visual output across style variants.

**🔧 Setup Requirements**

- n8n instance (self-hosted or cloud)
- OpenAI API access (image generation enabled)
- Google Drive credentials (write access to a folder)
- (Optional) Modify to integrate Slack, Teams or other file-storage destinations

**✅ Highlights & Benefits**

- Fully automated prompt creation → consistent icon quality
- Transparent-background PNGs, size-ready for UI use
- Saves icons to Drive and gives an immediate link/thumbnail
- Minimal setup, high value for creative/automation workflows
- Easily extendable (add extra sizes, style presets, share via chat/bot)

**⚠️ Notes & Best Practices**

- Check your OpenAI image quota and costs: image generation may incur usage charges.
- Confirm Google Drive folder permissions to avoid upload failures.
- If you want a different resolution or format (e.g., SVG), clone the image node and adjust its parameters.
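Step 2 above, where the LLM constructs an optimised prompt, can be approximated with plain string assembly. The exact wording the workflow's prompt builder produces is not shown in the template, so this is a hypothetical stand-in that combines the three form inputs:

```python
def build_icon_prompt(subject: str, style: str, background: str = "transparent") -> str:
    """Assemble an image-generation prompt from the form fields (illustrative)."""
    return (
        f"A {style.lower()} style icon of '{subject}', centered, "
        f"{background} background, clean edges, 400x400 px, "
        "suitable for UI use as a PNG with transparency."
    )

prompt = build_icon_prompt("Slack Mute", "Flat")
print(prompt)
```

Keeping the style and size constraints in a fixed template is what gives the "consistent visual output across style variants" the workflow promises.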
by Kean
**Text-to-Image Generator with OpenAI**

**What It Is**

This is an automated text-to-image generation system that converts simple subject descriptions into AI-generated photos using OpenAI's image generation technology.

**How It Works**

The system runs through a streamlined workflow:

1. You input a subject or brief description into a designated note field.
2. The system automatically expands your simple subject into a detailed, comprehensive prompt.
3. The enhanced prompt is sent to OpenAI's image generator.
4. Once the image is created, it is automatically saved to your Google Drive for easy access and storage.
5. You receive a notification in Slack to view it.
by Balakrishnan C
**Personal AI Assistant on Telegram**

**Who It's For**

This workflow is designed for developers, founders, community managers, and automation enthusiasts who want to bring a personal AI assistant directly into their Telegram chat. It lets users interact naturally—either through text or voice messages—and get instant, AI-powered replies without switching apps or opening dashboards.

**⚡ What It Does / How It Works**

- 📥 **Message trigger:** A Telegram Trigger node listens for incoming messages.
- 🧭 **Smart routing:** A Switch node decides if the user sent a text or voice message.
- 🗣️ **Voice to text:** If it's voice, the workflow uses OpenAI Whisper transcription to convert it into text.
- 🧠 **AI processing:** The text is passed to an AI agent powered by GPT-4.1-mini to understand intent and craft a response.
- 💬 **Reply:** The bot sends a clean, structured, and polite answer back to the user on Telegram.
- 🧠 **Memory:** A buffer memory node keeps short-term conversation history for a more contextual, human-like experience.

**🧰 How to Set It Up**

1. **Telegram integration**
   - Create a bot via @BotFather on Telegram: https://telegram.me/BotFather
   - Add your Telegram API key to n8n credentials.
   - Connect the Telegram Trigger and Send a Message nodes.
2. **OpenAI setup**
   - Get your API key from https://platform.openai.com/api-keys
   - Configure the OpenAI Chat Model and Transcribe a Recording nodes with GPT-4.1-mini.
3. **Workflow logic**
   - Use the Switch node to detect the message type (text or voice).
   - Route voice messages through transcription before sending them to the AI agent.
   - Add Simple Memory to maintain short conversational context.
4. **Go live**
   - Activate the workflow.
   - Send a message or voice note to your bot.
   - Get instant replies from your personal AI assistant. 🚀

**🛡️ Requirements**

- n8n (self-hosted or cloud)
- Telegram Bot API key
- OpenAI API key (for GPT-4.1-mini)
- Basic understanding of n8n nodes and connections

**🌟 Why Use This Workflow**

- ✅ Hands-free experience: just talk or type.
- 🧠 AI-powered responses: natural language understanding with GPT.
- ⚡ Real-time interaction: fast replies via Telegram.
- 🔁 Memory-aware conversations: feels more human.
- 🧩 Modular design: easily extend to other AI tools or platforms.
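The Switch node's text-versus-voice decision can be sketched as a small routing function. The dict shape mirrors Telegram's Bot API message object, where a voice note arrives under a `voice` key and plain text under `text`:

```python
def route_message(message: dict) -> str:
    """Decide which branch a Telegram message should take (illustrative)."""
    if "voice" in message:
        return "voice"   # send to Whisper transcription first
    if "text" in message:
        return "text"    # go straight to the AI agent
    return "unsupported"

print(route_message({"text": "hello"}))
print(route_message({"voice": {"file_id": "abc", "duration": 3}}))
```

Adding an `unsupported` branch like this is a cheap way to reply gracefully to stickers or photos instead of failing silently.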
by Robert Breen
Create a Fall 2025 course schedule for each student based on what they've already completed, catalog prerequisites, and term availability (Fall/Both). Reads students from Google Sheets → asks an AI agent to select exactly 5 courses (target 15–17 credits, no duplicates, prereqs enforced) → appends each student's schedule to a schedule tab.

**🧠 Summary**

- **Trigger:** Manual — "When clicking 'Execute workflow'"
- **I/O:** Google Sheets in → OpenAI decisioning → Google Sheets out
- **Ideal for:** Registrars, advisors, degree-planning prototypes

**✅ What this template does**

- **Reads:** StudentID, Name, Program, Year, CompletedCourses (pipe-separated CourseIDs) from **Sheet1**
- **Decides:** The AI **Scheduling Agent** chooses 5 courses per student following catalog rules and prerequisites
- **Writes:** Appends StudentID + Schedule strings to the **schedule** worksheet
- **Credits target:** 15–17 total per term
- **Catalog rules** (enforced in the agent prompt):
  - Use Fall or Both courses for Fall 2025
  - Enforce AND prereqs (e.g., CS-102|CS-103 means both)
  - Priority: Major Core → Major Elective → Gen Ed (include Gen Ed if needed)
  - No duplicates or already-completed courses
  - Prefer 200-level progression when prereqs allow

**⚙️ Setup (only 2 steps)**

1. **Connect Google Sheets (OAuth2)**
   - In n8n → Credentials → New → Google Sheets (OAuth2), sign in and grant access
   - In the Google Sheets nodes, select your spreadsheet and these tabs: Sheet1 (input students) and schedule (output)
   > Example spreadsheet (replace with your own):
   > - Input: .../edit#gid=0
   > - Output: .../edit#gid=572766543
2. **Connect OpenAI (API key)**
   - In n8n → Credentials → New → OpenAI API, paste your key
   - In the OpenAI Chat Model node, select that credential and a chat model (e.g., gpt-4o)

**📥 Required input (Sheet1)**

- **Columns:** StudentID, Name, Program, Year, CompletedCourses
- **CompletedCourses:** pipe-separated IDs (e.g., GEN-101|GEN-103|CS-101)
- **Program** names should match those referenced in the embedded catalog (e.g., *Computer Science BS*, *Business Administration BBA*, etc.)
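The AND-prerequisite rule (a prereq string like CS-102|CS-103 means the student needs both courses) can be sketched as a set comparison against the pipe-separated CompletedCourses column. This is an illustrative check, not code from the template, which enforces the rule in the agent prompt:

```python
def meets_prereqs(completed: str, prereq: str) -> bool:
    """True when every AND-prereq appears in the student's completed list."""
    done = set(completed.split("|")) if completed else set()
    required = set(prereq.split("|")) if prereq else set()
    return required <= done  # subset test: all required courses completed

print(meets_prereqs("GEN-101|GEN-103|CS-101|CS-102|CS-103", "CS-102|CS-103"))
print(meets_prereqs("GEN-101|CS-101", "CS-102|CS-103"))
```

A deterministic check like this also makes a useful post-processing guard: run it over the agent's output to catch any prereq violation before appending rows to the schedule tab.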
**📤 Output (schedule tab)**

- **Columns:** StudentID, Schedule → a selected course string (written one row per course after splitting)

**🧩 Nodes in this template**

*Manual Trigger* → *Get Student Data (Google Sheets)* → *Scheduling Agent (OpenAI)* → *Split Schedule* → *Set Fields* → *Clear sheet* → *Append Schedule (Google Sheets)*

**🛠 Configuration tips**

- If you rename tabs, update both the Get Student Data and Append Schedule nodes
- Keep CompletedCourses consistent (use | as the delimiter)
- To store rationale as well, add a column to the output and map it from the agent's JSON

**🧪 Test quickly**

1. Add 2–3 sample student rows with realistic CompletedCourses
2. Run the workflow and verify:
   - 5 course rows per student in schedule
   - Course IDs respect prereqs and Fall/Both availability
   - Credits sum to ~15–17

**🧯 Troubleshooting**

- **Sheets OAuth error:** Reconnect "Google Sheets (OAuth2)" and re-select the spreadsheet and tabs
- **Empty schedules:** Ensure CompletedCourses uses | and that programs/courses align with the provided catalog names
- **Prereq violations:** Check that students actually have all AND-prereqs in CompletedCourses
- **OpenAI errors (401/429):** Verify API key, billing, and rate limits → retry with lower concurrency

**🔒 Privacy & data handling**

- Student rows are sent to OpenAI for decisioning. Remove or mask any fields you don't want shared.
- Google Sheets retains input/output. Use spreadsheet sharing controls to limit access.

**💸 Cost & performance**

- **OpenAI:** Billed per token; cost scales with student count and prompt size
- **Google Sheets:** Free within normal usage limits
- **Runtime:** Typically seconds to a minute depending on rows and rate limits

**🧱 Limitations & assumptions**

- Works for Fall 2025 only (as written); for Spring, update the availability rules in the agent prompt
- Assumes the catalog in the agent system message is your source of truth
- Assumes Program names match catalog variants (case and spacing matter for clarity)

**🧩 Customization ideas**

- Add a Max Credits column to cap term credits per student
- Include Rationale in the sheet for advisor review
- Add a "Gen Ed needed?" flag per student to force at least one Gen Ed selection
- Export to PDF or email the schedules to advisors/students

**🧾 Version & maintenance**

- **n8n version:** Tested on recent n8n Cloud builds (2025)
- **Community nodes:** Not required
- **Maintenance:** Update the embedded catalog and offerings each term; keep prerequisites accurate

**🗂 Tags & category**

- **Category:** Education / Student Information Systems
- **Tags:** scheduling, registrar, google-sheets, openai, prerequisites, degree-planning, catalog, fall-term

**🗒 Changelog**

- **v1.0.0** — Initial release: Sheets in/out, Fall 2025 catalog rules, prereq enforcement, 5-course selection, credits target

**📬 Contact**

Need help customizing this (e.g., cohort logic, program-specific rules, adding rationale to the sheet, or emailing PDFs)?
📧 rbreen@ynteractive.com
🔗 Robert Breen — https://www.linkedin.com/in/robert-breen-29429625/
🌐 ynteractive.com — https://ynteractive.com
by Cheng Siong Chin
**How It Works**

This workflow automates enterprise policy compliance monitoring using AI agents to ensure organizational adherence to regulatory and internal policies. Designed for compliance officers, legal teams, and risk managers, it solves the challenge of manually reviewing vast policy documents and execution logs for violations. The system fetches policy records on a schedule, routes them to specialized AI agents (OpenAI for compliance assessment and escalation logic), validates outputs, and logs all actions for audit trails. Email notifications alert stakeholders when violations occur. By automating detection and escalation, organizations reduce compliance risks, accelerate response times, and maintain comprehensive audit documentation—critical for regulated industries like finance, healthcare, and manufacturing.

**Setup Steps**

1. Connect the Schedule Trigger (set monitoring frequency: hourly/daily)
2. Configure the Fetch Policy Records node with your policy database/API credentials
3. Add your OpenAI API key to the Compliance Agent and Escalation Logic nodes
4. Connect the Email node with SMTP credentials for alert notifications
5. Link the Final Execution Log to your audit storage system
6. Test the workflow with sample policy violations to verify the routing logic

**Prerequisites**

OpenAI API account with GPT-4 access; policy database/API access

**Use Cases**

Financial services regulatory compliance (KYC/AML), healthcare HIPAA monitoring

**Customization**

Modify the AI prompts for industry-specific regulations; adjust routing thresholds for violation severity

**Benefits**

Reduces compliance review time by up to 90% and closes human oversight gaps
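The severity-threshold routing mentioned under Customization can be sketched as follows. The levels and cutoffs are assumptions chosen for illustration; the workflow's actual thresholds live in its escalation-logic prompt and routing nodes.

```python
def route_violation(severity_score: float) -> str:
    """Map a violation severity score in [0, 1] to an action (illustrative)."""
    if severity_score >= 0.8:
        return "escalate"      # email alert to stakeholders
    if severity_score >= 0.5:
        return "review"        # queue for a compliance officer
    return "log"               # audit trail only

print(route_violation(0.9))
print(route_violation(0.2))
```

Raising or lowering the two cutoffs is the single knob that trades alert noise against missed escalations.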
by Cheng Siong Chin
**How It Works**

This workflow automates financial transaction monitoring, fraud detection, and regulatory compliance using OpenAI GPT-4 across coordinated specialist agents. It targets compliance officers, fraud analysts, and fintech operations teams managing high transaction volumes, where manual review is too slow to catch emerging fraud patterns and compliance breaches in time.

On a schedule, the system fetches pending transactions from Airtable and routes them through a Transaction Signal Agent that classifies each by risk level—High, Medium, Low, or Unclassified. A Compliance Agent then coordinates three specialist agents: Investigation, Risk Scoring, and Reporting. Airtable stores all compliance records throughout. Results merge and update transaction records directly, giving compliance teams a fully automated, audit-ready pipeline that flags fraud, scores risk, and generates regulatory reports without manual intervention.

**Setup Steps**

1. Import the workflow JSON into your n8n instance.
2. Add OpenAI API credentials.
3. Set the Schedule Trigger frequency to match your transaction processing cycle.
4. Update the Workflow Configuration node with risk thresholds and compliance rule parameters.
5. Connect Airtable credentials and configure the base/table IDs for Fetch Pending Transactions.

**Prerequisites**

n8n (cloud or self-hosted), OpenAI API key (GPT-4), Airtable account with a configured base and appropriate table schema

**Use Cases**

Compliance teams automating AML screening and suspicious-transaction flagging across high transaction volumes

**Customization**

Replace OpenAI GPT-4 with Anthropic Claude or NVIDIA NIM in any agent node

**Benefits**

Automates end-to-end fraud detection and compliance reporting, eliminating manual transaction reviews
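Since the Transaction Signal Agent is an LLM, its label needs validating before the record is updated in Airtable. A minimal sketch of that normalization step, mapping free-form agent output onto the four levels the workflow expects, might look like this (the helper is hypothetical, not part of the template):

```python
VALID_LEVELS = {"High", "Medium", "Low"}

def normalize_risk_level(agent_label: str) -> str:
    """Coerce the agent's free-form label onto High/Medium/Low/Unclassified."""
    label = agent_label.strip().title()
    return label if label in VALID_LEVELS else "Unclassified"

print(normalize_risk_level("high"))
print(normalize_risk_level("suspicious?"))
```

Defaulting anything unexpected to "Unclassified" keeps a bad model response from silently slipping a transaction past the review queue.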
by Qandil
**What it does**

A conversational AI agent that connects to WAFtester via MCP (Model Context Protocol) for interactive Web Application Firewall security testing. Type natural language requests — the agent picks the right tools, runs the tests, and explains the results.

**About WAFtester**

WAFtester is an open-source CLI for testing Web Application Firewalls. It ships 27 MCP tools, 2,800+ attack payloads across 18 categories (SQLi, XSS, SSRF, SSTI, command injection, XXE, and more), detection signatures for 26 WAF vendors and 9 CDNs, and enterprise-grade assessment with F1/MCC scoring and letter grades (A+ through F).

GitHub: github.com/waftester/waftester
Docs: Installation | Examples | Commands

**Who it's for**

- **Security engineers** running ad-hoc WAF assessments
- **Penetration testers** who want AI-assisted reconnaissance and bypass discovery
- **DevSecOps teams** validating WAF coverage before and after deployments
- **API security teams** testing OpenAPI/Swagger specs against WAF rules

**How it works**

The workflow has four nodes:

1. **Chat Trigger** — Opens an n8n chat interface where you type requests in plain English
2. **AI Agent** — Receives your message, reasons about which tools to call, and orchestrates the testing workflow
3. **OpenAI Chat Model** — Provides the LLM reasoning layer (GPT-4o recommended; swappable for Anthropic, Ollama, etc.)
4. **WAFtester MCP** — Connects to the WAFtester server via SSE and exposes all 27 tools to the agent

The agent follows a standard WAF testing workflow:

1. `detect_waf` — Fingerprint the WAF vendor and CDN protecting the target
2. `discover` — Map the attack surface (endpoints, parameters, technologies) from robots.txt, sitemaps, JavaScript, and the Wayback Machine
3. `learn` — Generate a prioritized test plan based on discovery results
4. `scan` — Fire 2,800+ attack payloads and measure detection vs. bypass rates
5. `bypass` — Systematic mutation-matrix testing to find WAF evasion techniques
6. `assess` — Generate a formal security grade with F1, precision, MCC, and false positive rate

Long-running operations (`scan`, `assess`, `bypass`, `discover`, `discover_bypasses`, `event_crawl`, `scan_spec`) run asynchronously — the agent polls for results automatically.

**Key capabilities**

| Capability | Details |
|---|---|
| WAF detection | Fingerprint 26 WAF vendors and 9 CDNs from response headers, cookies, and error pages |
| Payload scanning | 2,800+ payloads across 18 attack categories |
| Bypass discovery | Mutation matrix with 40+ tamper techniques to find WAF evasions |
| Enterprise assessment | F1 score, precision, MCC, false positive rate, and A+ through F grading |
| API spec testing | Validate, plan, and scan OpenAPI/Swagger/Postman specs |
| Headless crawling | Click-driven DOM crawling via headless browser for JS-rendered endpoints |
| Knowledge resources | 12 built-in resources covering WAF signatures, evasion techniques, OWASP mappings, and config defaults |

**Example prompts**

- "What WAF is protecting https://example.com?"
- "Scan https://example.com for SQLi and XSS"
- "Find WAF bypasses for https://example.com"
- "Run a full security assessment of https://example.com"
- "Validate my API spec at https://example.com/openapi.json"
- "Discover the attack surface of https://example.com"

**How to set up**

1. Start the WAFtester MCP server: `docker run -p 8080:8080 ghcr.io/waftester/waftester:latest mcp --http :8080`
2. Add OpenAI credentials in n8n: Settings → Credentials → New → OpenAI API
3. Select the credential in the OpenAI Chat Model node
4. Activate the workflow and open the chat interface

Alternatively, use the included docker-compose.yml to run both n8n and WAFtester together with `docker compose up -d`.
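For orientation, a minimal compose file for this pairing could look like the sketch below. The service names, the n8n image, the port mappings, and the environment variable are assumptions; the docker-compose.yml included with the template is the authoritative version.

```yaml
# Illustrative sketch only; defer to the template's included docker-compose.yml.
services:
  waftester:
    image: ghcr.io/waftester/waftester:latest
    command: mcp --http :8080
    ports:
      - "8080:8080"
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    depends_on:
      - waftester
```

With both services on the same compose network, the WAFtester MCP node can reach the server at the `waftester` service name instead of localhost.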
**Requirements**

| Requirement | Details |
|---|---|
| WAFtester MCP server | Docker image (ghcr.io/waftester/waftester:latest) or binary install for macOS, Linux, Windows |
| LLM API key | OpenAI (default), or swap the model node for Anthropic, Ollama, Azure OpenAI, or any LangChain-compatible provider |
| Authorization | Only test targets you have explicit written permission to test |

**Links**

- WAFtester website
- GitHub repository
- Installation guide
- Full examples
- Docker Hub
by José Ramón Villaverde
**Who is this workflow for?**

This workflow is designed for technical teams, automation owners, process auditors, and anyone who needs to document n8n workflows in a consistent, fast, and professional way—without manual work.

**What problem does it solve? / Use case**

Documenting n8n workflows is often slow and error-prone: you need to review nodes, configurations, connections, internal logic, and embedded code. This workflow automates that process by generating a complete technical report, structured and ready to share, ensuring documentation is:

- consistent
- easy to update
- easy to review
- accessible in Google Drive

**What this workflow does**

1. Lets you select an n8n workflow to document
2. Extracts its structure, nodes, connections, and settings
3. Normalizes the JSON to remove noise and keep what matters
4. Generates an HTML technical report using an LLM (OpenAI GPT-4.1)
5. Detects whether Code nodes exist
6. If Code nodes exist, analyzes their logic and adds an extra technical section
7. Creates a final Google Docs document inside a Google Drive folder

**High-level flow**

1. Manual workflow start
2. Target workflow selection
3. Workflow normalization (cleanup + structure)
4. Main report generation with OpenAI
5. Code node extraction
6. Condition: Code nodes exist or not
7. (Optional) technical analysis of Code nodes with OpenAI
8. Merge main report + code analysis
9. Create the final document in Google Docs

**Setup**

- **OpenAI:** Configure OpenAI credentials (API key). The workflow uses OpenAI nodes to generate the main report and to analyze Code nodes (if any).
- **Google Drive:** Configure Google Drive OAuth2 credentials with write permissions. Define a destination folder using a folder ID (e.g., YOUR_FOLDER_ID). The workflow uploads the final HTML as a Google Docs document.
- **n8n (access to the target workflow):** If your instance requires it, configure access to read internal workflows. The workflow fetches the selected workflow using an n8n Get Workflow node.
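The normalization step (stripping noise from the exported workflow JSON so the LLM sees only what matters) can be sketched like this. Which fields the template actually drops is not specified, so the key list here is an assumption for illustration:

```python
# Hypothetical normalization of an exported n8n workflow JSON.
NOISE_KEYS = {"position", "webhookId", "credentials"}  # assumed noise fields

def normalize_workflow(workflow: dict) -> dict:
    """Keep name, nodes, and connections; drop per-node noise fields."""
    nodes = [
        {k: v for k, v in node.items() if k not in NOISE_KEYS}
        for node in workflow.get("nodes", [])
    ]
    return {
        "name": workflow.get("name"),
        "nodes": nodes,
        "connections": workflow.get("connections", {}),
    }

raw = {
    "name": "Demo",
    "nodes": [{"name": "Start", "type": "n8n-nodes-base.manualTrigger",
               "position": [0, 0], "credentials": {}}],
    "connections": {},
    "pinData": {},
}
clean = normalize_workflow(raw)
print(clean)
```

Trimming canvas positions and credential references also keeps secrets and layout noise out of the prompt, which shortens it and improves the report's focus.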
How to customize this workflow Report format:** adjust the prompt in the Generate Report node to change sections, style, or level of detail Output folder:** replace YOUR_FOLDER_ID with the real Drive folder ID Documentation strategy:** generate shorter reports for small workflows split large reports if the workflow is very big Automated triggering:** replace the manual start with a webhook trigger if you want to use an external application Final outcome A system that automatically generates a professional technical Google Docs document containing: workflow description global configuration node-by-node breakdown routing logic (conditions and branches) required credentials operational requirements risks and observations additional Code node analysis (if applicable) Do you want this workflow in Spanish? 📧 jrvillaverde@virodria.es 🔗 https://www.linkedin.com/in/ramonvillaverde