by Cheng Siong Chin
How It Works
The webhook receives incoming profiles and extracts relevant demographic, financial, and credential data. The workflow then queries the programs database to identify suitable options, while the AI generates personalized recommendations based on eligibility and preferences. A formal recommendation letter is created, followed by a drafted outreach email tailored to coordinators. Parsers extract structured data from the letters and emails, a Slack summary is prepared for internal visibility, and the final response is sent to the appropriate recipients.

Setup Steps
Configure AI agents by adding OpenAI credentials and setting prompts for the Program Matcher, Letter Writer, and Email Drafter.
Connect the programs database (Airtable or PostgreSQL) and configure queries to retrieve matching program data.
Set up the webhook by defining the trigger endpoint and payload structure for incoming profiles.
Configure JSON parsers to extract relevant information from profiles, letters, and emails.
Add the Slack webhook URL and define the summary format for generated communications.

Prerequisites
OpenAI API key
Financial programs database
Slack workspace with webhook
User profile structure (income, GPA, demographics)

Use Cases
Universities automating 500+ annual applicant communications
Scholarship foundations personalizing outreach at scale

Customization
Add multilingual support for international applicants
Include PDF letter generation with signatures

Benefits
Reduces communication time from 30 minutes to 2 minutes per applicant and ensures consistent professional quality.
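The sketch below illustrates how the incoming profile might be normalized before the Program Matcher runs. It is a minimal Code-node example, not the template's actual implementation; the field names (name, email, income, gpa, demographics) and the eligibility thresholds are assumptions based on the prerequisites listed above.

```js
// Hypothetical Code-node sketch (mode: "Run Once for Each Item"):
// normalize an incoming applicant profile from the webhook payload.
// Field names and thresholds are assumptions — adjust to your form.
const profile = $json.body || $json;

const normalized = {
  applicant_name: (profile.name || '').trim() || 'Unknown',
  email: (profile.email || '').toLowerCase(),
  income: Number(profile.income) || 0,
  gpa: Number(profile.gpa) || null,
  demographics: profile.demographics || {},
};

// Simple eligibility flags the Program Matcher prompt can reference
normalized.low_income = normalized.income > 0 && normalized.income < 40000;
normalized.merit_eligible = normalized.gpa !== null && normalized.gpa >= 3.5;

return [{ json: normalized }];
```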
by Connor Provines
[Meta] Multi-Format Documentation Generator for N8N Creators (+More) One-Line Description Transform n8n workflow JSON into five ready-to-publish documentation formats including technical guides, social posts, and marketplace submissions. Detailed Description What it does: This workflow takes an exported n8n workflow JSON file and automatically generates a complete documentation package with five distinct formats: technical implementation guide, LinkedIn post, Discord community snippet, detailed use case narrative, and n8n Creator Commons submission documentation. All outputs are compiled into a single Google Doc for easy access and distribution. Who it's for: n8n creators** preparing workflows for the template library or community sharing Automation consultants** documenting client solutions across multiple channels Developer advocates** creating content about automation workflows for different audiences Teams** standardizing workflow documentation for internal knowledge bases Key Features: Parallel AI generation** - Creates all five documentation formats simultaneously using Claude, saving 2+ hours of manual writing Automatic format optimization** - Each output follows platform-specific best practices (LinkedIn character limits, Discord casual tone, n8n marketplace guidelines) Single Google Doc compilation** - All documentation consolidated with clear section separators and automatic workflow name detection JSON upload interface** - Simple form-based trigger accepts workflow exports without technical setup Smart content adaptation** - Same workflow data transformed into technical depth for developers, engaging narratives for social media, and searchable descriptions for marketplaces Ready-to-publish outputs** - No editing required—each format follows platform submission guidelines and style requirements How it works: User uploads exported n8n workflow JSON through a web form interface Five AI agents process the workflow data in parallel, each generating format-specific documentation (technical guide, LinkedIn post, Discord snippet, use case story, marketplace listing) All outputs merge into a formatted document with section headers and separators Google Docs creates a new document with auto-generated title from workflow name and timestamp Final document populates with all five documentation formats, ready for copying to respective platforms Setup Requirements Prerequisites: Anthropic API** (Claude AI) - Powers all documentation generation; requires paid API access or credits Google Docs API** - Creates and updates documentation; free with Google Workspace account n8n instance** - Cloud or self-hosted with AI agent node support (v1.0+) Estimated Setup Time: 20-25 minutes (15 minutes for API credentials, 5-10 minutes for testing with sample workflow) Installation Notes API costs**: Each workflow documentation run uses ~15,000-20,000 tokens across five parallel AI calls (approximately $0.30-0.50 per generation at current Claude pricing) Google Docs folder**: Update the folderId parameter in the "Create a document" node to your target folder—default points to a specific folder that won't exist in your Drive Testing tip**: Use a simple 3-5 node workflow for your first test to verify all AI agents complete successfully before processing complex workflows Wait node purpose**: The 5-second wait between document creation and content update prevents Google Docs API race conditions—don't remove this step Form URL**: After activation, save the form trigger URL for easy access—bookmark it or share 
with team members who need to generate documentation Customization Options Swappable integrations: Replace Google Docs with Notion, Confluence, or file system storage by swapping final nodes Switch from Claude to GPT-4, Gemini, or other LLMs by changing the language model node (may require prompt adjustments) Add Slack/email notification nodes after completion to alert when documentation is ready Adjustable parameters: Modify AI prompts in each agent node to match your documentation style preferences or add company-specific guidelines Add/remove documentation formats by duplicating or deleting agent nodes and updating merge configuration Change document formatting in the JavaScript code node (section separators, headers, metadata) Extension possibilities: Add automatic posting to LinkedIn/Discord by connecting their APIs after doc generation Create version history tracking by appending to existing docs instead of creating new ones Build approval workflow by adding human-in-the-loop steps before final document creation Generate visual diagrams by adding Mermaid chart generation from workflow structure Create multi-language versions by adding translation nodes after English generation Category Development Tags documentation n8n content-generation ai claude google-docs workflow automation-publishing Use Case Examples Marketplace contributors**: Generate complete n8n template submission packages in minutes instead of hours of manual documentation writing across multiple format requirements Agency documentation**: Automation consultancies can deliver client workflows with professional documentation suite—technical guides for client IT teams, social posts for client marketing, and narrative case studies for portfolio Internal knowledge base**: Development teams standardize workflow documentation across projects, ensuring every automation has consistent technical details, use case examples, and setup instructions for team onboarding
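For the compilation step, a Code-node sketch like the following could merge the five agent outputs into one document body with section separators. This is illustrative only — the section titles, the `output` field name, and the `workflowName` property are assumptions; adapt them to your agent nodes and merge configuration.

```js
// Illustrative compilation step: combine five agent outputs into a single
// document body with headers and separators before the Google Docs node.
const sections = [
  'Technical Implementation Guide',
  'LinkedIn Post',
  'Discord Community Snippet',
  'Use Case Narrative',
  'n8n Creator Commons Submission',
];

const parts = $input.all().map((item, i) => {
  const title = sections[i] || `Section ${i + 1}`;
  return `## ${title}\n\n${item.json.output || ''}`;
});

// Workflow name detection is assumed to come from an earlier node
const workflowName = $input.first().json.workflowName || 'Untitled Workflow';
const body = `# ${workflowName} — Documentation Package\n\n` +
  parts.join('\n\n---\n\n');

return [{ json: { documentBody: body } }];
```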
by Robert Breen
This workflow introduces beginners to one of the most fundamental concepts in n8n: looping over items. Using a simple use case—generating LinkedIn captions for content ideas—it demonstrates how to split a dataset into individual items, process them with AI, and collect the output for review or export. ✅ Key Features 🧪 Create Dummy Data**: Simulate a small dataset of content ideas. 🔁 Loop Over Items**: Process each row independently using the SplitInBatches node. 🧠 AI Caption Creation**: Automatically generate LinkedIn captions using OpenAI. 🧰 Tool Integration**: Enhance AI output with creativity-injection tools. 🧾 Final Output Set**: Collect the original idea and generated caption. 🧰 What You’ll Need ✅ An OpenAI API key ✅ The LangChain nodes enabled in your n8n instance ✅ Basic knowledge of how to trigger and run workflows in n8n 🔧 Step-by-Step Setup 1️⃣ Run Workflow Node**: Manual Trigger (Run Workflow) Purpose**: Manually start the workflow for testing or learning. 2️⃣ Create Random Data Node**: Create Random Data (Code) What it does**: Simulates incoming data with multiple content ideas. Code**: return [ { json: { row_number: 2, id: 1, Date: '2025-07-30', idea: 'n8n rises to the top', caption: '', complete: '' } }, { json: { row_number: 3, id: 2, Date: '2025-07-31', idea: 'n8n nodes', caption: '', complete: '' } }, { json: { row_number: 4, id: 3, Date: '2025-08-01', idea: 'n8n use cases for marketing', caption: '', complete: '' } } ]; 3️⃣ Loop Over Items Node**: Loop Over Items (SplitInBatches) Purpose**: Sends one record at a time to the next node. Why It Matters**: Loops in n8n are created using this node when you want to iterate over multiple items. 4️⃣ Create Captions with AI Node**: Create Captions (LangChain Agent) Prompt**: idea: {{ $json.idea }} System Message**: You are a helpful assistant creating captions for a LinkedIn post. Please create a LinkedIn caption for the idea. Model**: GPT-4o Mini or GPT-3.5 Credentials Required**: OpenAI Credential Go to: OpenAI API Keys Create a key and add it in n8n under credentials as “OpenAi account” 5️⃣ Inject Creativity (Optional) Node**: Tool: Inject Creativity (LangChain Tool) Purpose**: Demonstrates optional LangChain tools that can enhance or manipulate input/output. Why It’s Cool**: A great way to show chaining tools to AI agents. 6️⃣ Output Table Node**: Output Table (Set) Purpose**: Combines original ideas and generated captions into final structure. Fields**: idea: ={{ $('Create Random Data').item.json.idea }} output: ={{ $json.output }} 💡 Educational Value This workflow demonstrates: Creating dynamic inputs with the Code node Using SplitInBatches to simulate looping Sending dynamic prompts to an AI model Using Set to structure the output data Beginners will understand how item-level processing works in n8n and how powerful looping combined with AI can be. 📬 Need Help or Want to Customize This? Robert Breen Automation Consultant | AI Workflow Designer | n8n Expert 📧 robert@ynteractive.com 🌐 ynteractive.com 🔗 LinkedIn 🏷️ Tags n8n loops OpenAI LangChain workflow training beginner LinkedIn automation caption generator
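For reference, once the loop completes, the Output Table (Set) node yields one item per idea, pairing the original idea with the generated caption. The snippet below shows the rough shape of that result; the caption texts are invented purely for illustration.

```js
// Illustrative only: the approximate output structure after the loop,
// one item per content idea. Captions here are made-up examples.
return [
  { json: { idea: 'n8n rises to the top', output: 'Why n8n keeps climbing the automation charts… 🚀' } },
  { json: { idea: 'n8n nodes', output: 'Small nodes, big workflows. Here is how they fit together…' } },
  { json: { idea: 'n8n use cases for marketing', output: '3 marketing tasks you can automate with n8n today…' } },
];
```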
by Linearloop Team
🖥️ Automated Website Uptime Monitor with Email Alerts & GitHub Status Page Update This n8n workflow continuously monitors your website’s availability, sends email alerts when the server goes down, and automatically updates a status page (index.html) in your GitHub repository to reflect the live status. 📌 Good to Know The workflow checks your website every 2 minutes (interval configurable). If the website is down (503, bad response, or error) → it sends an email alert and updates the GitHub-hosted status page to show Down. If the website is up (200) → it updates the GitHub-hosted status page to show Up. The email notification includes an HTML-formatted alert page. You can use GitHub Pages to host the status page publicly. ℹ️ What is GitHub Pages? GitHub Pages is a free hosting service provided by GitHub that lets you publish static websites (HTML, CSS, JS) directly from a GitHub repository. You can use it to make your index.html status page publicly accessible with a URL like: ⚡ How to Set Up GitHub Pages for Your Status Page Create a new repository on GitHub (recommended name: status). Add a blank index.html file (n8n workflow will later update this file). Go to your repository → Settings → Pages. Under Source, select the branch (main or master) and folder (/root). Save changes. Your status page will now be live at: https://<USERNAME>.github.io/status ✅ Prerequisites An n8n instance (self-hosted or cloud). A GitHub account & repository (to host the status page). A Gmail account (or any email service supported by n8n – example uses Gmail). Access to the target website URL you want to monitor. ⚙️ How it Works Schedule Trigger → Runs every 2 minutes. HTTP Request → Pings your website URL. Switch Node → Evaluates the response status (200 OK vs error/503). Code Node → Generates a dynamic HTML status page (Up/Down). GitHub Repo & File → Github Repo Name Should be https://github.com/<OWNER_NAME>/status (recommended) & Must have(required) a blank file named as index.html before triggering this flow. GitHub Node → Updates/commits the index.html file in your repository. Gmail Node → Sends an email alert if the site is down. 🚀 How to Use Import the workflow JSON into your n8n instance. Configure credentials for: GitHub (Personal Access Token with repo permissions). Gmail (or your preferred email service). Replace the following: https://app.yourdomain.com/health → with your own website URL. example@gmail.com → with your email address (or distribution list). GitHub repo details → with your repository where index.html will live. Deploy the workflow. (Optional) Enable GitHub Pages on your repo to serve index.html as a live status page. 🛠 Requirements n8n v1.0+ GitHub personal access token Gmail API credentials (or SMTP/email service of your choice) 🎨 Customising this Workflow Interval** → Change schedule from 2 minutes to any desired frequency. Email Content** → Modify HTML alert template in the Gmail node. Status Page Styling** → Edit the HTML/CSS in the Code node to match your branding. Error Handling** → Extend Switch node for other status codes (e.g., 404, 500). Multiple Websites** → Duplicate HTTP Request + Switch nodes for multiple URLs. 👤 Who Can Use It? DevOps & SRE Engineers** → For automated uptime monitoring. Freelancers/Developers** → To monitor client websites. Startups & SMEs** → For a free, lightweight status page without paid tools. Educators/Students** → As a hands-on learning project with n8n. 🌟 Key Features 🔄 Automated uptime checks (configurable interval). 
📧 Email notifications on downtime. 📝 Dynamic HTML status page generation. 🌍 GitHub Pages integration for public visibility. ⚡ Lightweight & cost-effective (no paid monitoring tool needed). 🔗 Tools Integration n8n** – Orchestration & automation. GitHub** – Version control + hosting of status page. Gmail** – Email notifications. HTTP Request** – Website availability check. 📈 Example Use Cases Personal website monitoring with public status page. Monitoring SaaS apps & notifying support teams. Internal company services uptime dashboard.
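As a reference, the Code node that builds the status page could look roughly like the sketch below. The `statusCode` field name and the inline styling are assumptions — adapt them to the actual output of your HTTP Request node and to your branding.

```js
// Minimal sketch of the dynamic status-page generation step.
// The statusCode field is an assumption about the HTTP Request output.
const statusCode = $json.statusCode ?? 0;
const isUp = statusCode === 200;
const checkedAt = new Date().toISOString();

const html = `<!DOCTYPE html>
<html>
  <head><title>Service Status</title></head>
  <body style="font-family: sans-serif; text-align: center; padding-top: 4rem;">
    <h1 style="color: ${isUp ? 'green' : 'red'};">
      Service is ${isUp ? 'Up ✅' : 'Down ❌'}
    </h1>
    <p>Last checked: ${checkedAt} (HTTP ${statusCode || 'no response'})</p>
  </body>
</html>`;

// The GitHub node commits this content to index.html in the status repo.
return [{ json: { status: isUp ? 'up' : 'down', fileContent: html } }];
```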
by Jitesh Dugar
Eliminate weeks of waiting and mountains of paperwork with intelligent expense automation that processes reimbursements in 72 hours instead of 2–3 weeks — delivering 90% reduction in manual processing time. What This Workflow Does Transforms your expense reimbursement process from bureaucratic nightmare to seamless automation: 📝 Captures Expenses – Jotform intake with receipt upload and expense details ⚙️ Policy Validation – Automatically validates against company rules (categories, amount limits) 🚦 Smart Routing – Intelligent approval workflow based on expense amount: < $100 → Auto-approve instantly (compliant expenses only) $100–$500 → Manager approval via Slack notification $500+ → Finance Director approval via Slack notification 🚫 Violation Detection – Flags policy violations with clear rejection reasons 📊 Audit Trail – Complete expense history logged to Google Sheets ✉️ Automated Communication – Professional approval/rejection emails automatically sent Key Features Policy Compliance Engine – Configurable rules for expense categories and amount limits Three-Tier Approval System – Auto-approve, manager review, and director approval paths Real-Time Violation Flagging – Instant detection of non-compliant expenses Comprehensive Audit Logging – Every expense tracked with timestamps and approver details Professional Email Templates – Branded communication for every outcome Slack Integration – Real-time notifications with expense context for quick decisions Zero Manual Processing – Seamless automation from submission to reimbursement Perfect For Finance Teams – Processing 50–200+ expense reports monthly Growing Startups – Scaling operations without adding finance headcount Remote-First Companies – Distributed teams needing async approval workflows Compliance-Focused Organizations – Requiring complete audit trails and policy enforcement SMBs & Enterprises – Companies spending 10–20 hours/week on manual expense processing What You’ll Need Required Integrations Jotform – Expense submission form (free tier works) Create your form for free on Jotform using this link Google Sheets – Audit trail and expense database Gmail – Automated approval/rejection email communication Slack – Manager and Director approval notifications Optional Enhancements QuickBooks/Xero – Automatic expense posting for approved items Google Cloud Vision – OCR for automatic receipt data extraction OpenAI – AI-powered receipt parsing and merchant detection Payment APIs – Direct deposit or check issuance automation Quick Start Import Template – Copy JSON and import into n8n Create Jotform – Build form with fields: Employee name, email, ID, amount, category, merchant, date, description, receipt upload Add Credentials – Jotform, Google Sheets, Gmail, Slack Configure Google Sheet – Replace YOUR_GOOGLE_SHEET_ID with your spreadsheet ID Set Slack Channels – Update manager and director channel IDs in Slack nodes Customize Policies – Edit “Validate Policy” node with your company’s rules: Category limits (meals: $75, travel: $500, office supplies: $200, etc.) 
Auto-approve threshold (default: $100) Manager approval threshold (default: $500) Test Workflow – Submit test expenses for all scenarios (auto-approve, manager, director, rejection) Deploy & Share – Activate workflow and distribute Jotform link to employees Customization Options 1.Adjust Approval Thresholds – Modify auto-approve limits and escalation amounts 2.Add Approval Levels – Insert additional routing nodes for VP or C-suite approvals 3.Department-Based Routing – Route to different managers based on department 4.Receipt OCR Integration – Add Google Vision + OpenAI for receipt data extraction 5.Accounting System Sync – Connect QuickBooks/Xero for automatic expense posting 6.Duplicate Detection – Flag potential duplicate submissions 7.Budget Monitoring – Add monthly/quarterly budget checks 8.Multi-Currency Support – Add conversion & validation for international expenses 9.Mobile-Optimized Forms – Enhance Jotform for easy phone camera uploads 10.Custom Email Branding – Update templates with your company’s logo and styling Expected Results ⏱️ 72-hour reimbursement vs 2–3 weeks 📉 90% reduction in manual processing time 🧾 100% audit compliance with timestamps & approvers 🗂️ Zero lost receipts – all stored digitally 🧠 Instant policy enforcement – violations caught automatically 😀 Happier employees – fast and transparent reimbursement 🕒 10–15 hours saved weekly for finance teams 🏆 Use Cases 🧑💻 Technology Companies Process developer or engineering expenses (software, conferences) with auto-approval under $100. 💼 Sales Organizations Handle high-volume travel expenses — auto-approve meals under $75, route hotels/flights for approval, flag entertainment violations. 🧾 Consulting Firms Manage client reimbursables with project-based routing and full audit trails for client invoicing. 🏥 Healthcare Organizations Track medical reimbursements with department-specific approvals and compliance documentation. 🌍 Remote-First Teams Process global expenses 24/7 with async Slack approvals and instant notifications. Pro Tips Start Conservative – Begin with $50 auto-approve limit, raise later Monthly Policy Reviews – Adjust limits based on expense trends Employee Training – Include policy link in all automated emails Enhanced Slack Approvals – Use Block Kit for approve/reject buttons Receipt Quality Standards – Enforce minimum image resolution Backup Approvers – Add fallback if manager unavailable Executive Dashboard – Connect Sheets → Looker/Tableau Tax Categorization – Align with tax reporting for year-end Benchmark Data – Track average processing time & approval rates Learning Resources This workflow demonstrates: Multi-condition routing with nested IF nodes Policy enforcement using JavaScript logic Audit logging with Google Sheets append/update Async Slack approvals with messaging nodes Email automation using dynamic HTML templates Data normalization for varied Jotform inputs Error handling for invalid submissions Perfect for learning enterprise-grade n8n automation patterns 🎯 Workflow Structure Visualization 📝 Jotform Submission ↓ 🧾 Parse Form Data (Normalize fields) ↓ ⚙️ Validate Against Policy (Check rules) ↓ 🚫 Check Violations? ├─ YES → Set Rejection → Log to Sheets → 📧 Send Rejection Email └─ NO → Route Auto-Approve? ├─ YES (< $100) → ✅ Auto Approve → Log to Sheets → 📧 Send Approval Email └─ NO → Route Manager? 
├─ YES ($100-$500) → 📱 Slack Manager → Log to Sheets → ⏳ Await Approval └─ NO ($500+) → 📱 Slack Director → Log to Sheets → ⏳ Await Approval Compliance & Security Features 🧾 Complete Audit Trail – Every expense logged with timestamps 🛡️ Policy Enforcement – Non-compliant submissions blocked early 🔒 Data Privacy – PII secured via n8n credential system ☁️ Receipt Storage – SOC 2–compliant Jotform cloud 👥 Role-Based Access – Slack channel permissions enforced ⚖️ Separation of Duties – Multi-level approval reduces fraud 🚀 Advanced Features to Add 🧠 Receipt OCR with AI – Google Vision + OpenAI for merchant/amount extraction 💵 Accounting Integration – QuickBooks/Xero for GL posting 🏦 Payment Automation – ACH/direct deposit API integration 📱 Mobile App Interface – On-the-go submissions 📈 Budget Monitoring – Real-time spending alerts 📊 Expense Analytics – Automated monthly summaries 🧾 Vendor Management – Flag new vendors for approval 🚗 Mileage Calculator – IRS-compliant reimbursement 💳 Corporate Card Sync – Match credit card transactions 🌐 Per Diem Automation – Geo-based per diem calculation Ready to Transform Your Expense Process? Import this template and start processing reimbursements in hours instead of weeks. Your finance team and employees will thank you! 🎉 Questions or customization needs? The workflow includes detailed sticky notes explaining each section and decision point.
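For orientation, the "Validate Policy" logic could be sketched in a Code node as follows. The category limits, thresholds, and field names (`amount`, `category`) are example values taken from the defaults above — replace them with your company's actual rules.

```js
// Hedged sketch of policy validation and routing. Limits and field names
// are example values, not the template's exact configuration.
const CATEGORY_LIMITS = { meals: 75, travel: 500, office_supplies: 200 };
const AUTO_APPROVE_LIMIT = 100;
const MANAGER_LIMIT = 500;

const amount = Number($json.amount) || 0;
const category = ($json.category || '').toLowerCase().replace(/\s+/g, '_');

const violations = [];
if (!(category in CATEGORY_LIMITS)) {
  violations.push(`Unknown category: ${category}`);
} else if (amount > CATEGORY_LIMITS[category]) {
  violations.push(`Amount exceeds ${category} limit of $${CATEGORY_LIMITS[category]}`);
}

// Route: rejection, auto-approve, manager, or director
let route = 'rejected';
if (violations.length === 0) {
  if (amount < AUTO_APPROVE_LIMIT) route = 'auto_approve';
  else if (amount <= MANAGER_LIMIT) route = 'manager_approval';
  else route = 'director_approval';
}

return [{ json: { ...$json, violations, route } }];
```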
by Rahul Joshi
Description: Streamline your lead management process with this AI-driven n8n automation template. The workflow fetches opportunities from HighLevel (GHL), enriches them with contact details, and uses Azure OpenAI GPT-4o-mini to analyze each lead’s intent (e.g., Demo Request, Support Query, or Partnership Inquiry). It then automatically routes the lead to the right internal team via email, ensuring instant follow-up and zero delays in response time. Perfect for sales, support, and partnership teams who want to save time on manual triage and ensure every inquiry reaches the correct department within seconds. ✅ What This Template Does (Step-by-Step) ⚡ Manual or Scheduled Trigger Run the workflow manually for on-demand classification or schedule it to execute periodically. 📥 Fetch Opportunities from HighLevel Retrieves all opportunities from your GHL CRM, serving as the starting dataset for AI-powered intent detection. 👤 Fetch Detailed Contact Information Enriches each opportunity with full contact details such as name, email, and message notes. 🧠 AI-Powered Lead Classification Uses Azure OpenAI GPT-4o-mini via the LangChain AI Agent to analyze the lead’s message and determine the intent. Possible outputs include: 🎯 Demo Request 🛠️ Support Query 🤝 Partnership Inquiry 🧾 Post-Processing of AI Response JavaScript logic parses and formats the AI’s output into actionable data for conditional routing. 🔀 Intelligent Routing to Relevant Teams Demo Requests → demo@company.com Support Queries → support@company.com Partnership Inquiries → partnership@company.com Each email includes full contact info and original message context. 📧 Instant Team Notifications Sends neatly formatted emails from a centralized sender (noreply@company.com) to ensure smooth handoff and accountability. 🧠 Key Features 🤖 AI intent classification using Azure OpenAI GPT-4o-mini 🔀 Automated lead routing via email 📋 Structured data enrichment from HighLevel ⚙️ Smart conditional logic for 3 lead categories 📩 End-to-end automation from CRM intake to response 💼 Use Cases 📞 Automatically route demo requests to the sales team 🛠️ Send support-related queries directly to helpdesk 🤝 Forward partnership inquiries to business development 💡 Reduce response delays and manual triage errors 📦 Required Integrations HighLevel (GHL) – for opportunity and contact data Azure OpenAI – for AI-driven lead classification SMTP / Gmail – for team routing email notifications 🎯 Why Use This Template? ✅ Automates manual lead sorting and tagging ✅ Ensures every inquiry reaches the right team ✅ Increases response speed and lead conversion ✅ Scalable AI logic adaptable to any organization
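A rough sketch of the post-processing step is shown below. The JSON shape of the AI output, the fallback keywords, and the team addresses are assumptions — adjust them to your agent's actual response format and mailboxes.

```js
// Illustrative post-processing Code node: parse the agent's reply into an
// intent and pick the destination mailbox for routing.
const ROUTES = {
  'Demo Request': 'demo@company.com',
  'Support Query': 'support@company.com',
  'Partnership Inquiry': 'partnership@company.com',
};

let intent = 'Support Query'; // safe default if parsing fails
try {
  const parsed = JSON.parse($json.output);          // assumes the agent returns JSON text
  if (parsed.intent && ROUTES[parsed.intent]) intent = parsed.intent;
} catch (e) {
  // fall back to simple keyword matching on the raw text
  const text = String($json.output || '').toLowerCase();
  if (text.includes('demo')) intent = 'Demo Request';
  else if (text.includes('partner')) intent = 'Partnership Inquiry';
}

return [{ json: { ...$json, intent, routeTo: ROUTES[intent] } }];
```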
by Jitesh Dugar
1. Who's It For
Ad agencies needing automated lead capture. Sales teams fighting fraud and scoring leads. B2B SaaS companies nurturing prospects. Marketing pros boosting sales pipelines.

2. How It Works
Captures leads via Webhook from forms.
Validates emails with the Verifi Email node.
Checks IP for fraud using IP Lookup.
Scores leads (0-100) with a Function node.
Logs data in Google Sheets.
Alerts sales via Slack for high scores.
Sends welcome email via Gmail.
Tracks email opens for engagement.
Follows up after 24 hours if unopened.
Updates engagement scores.
Generates weekly report (leads, scores, avg.).
Emails report to sales head.
Offers: fraud-proofing, AI scoring, nurturing, reporting.

3. How to Set Up
1. Link your form to the Webhook (POST to https://[your-n8n-url]/webhook/lead-capture).
2. Install the Verifi Email node (npm install n8n-nodes-verifiemail) on self-hosted n8n.
3. Add credentials: Verifi Email, Slack, Gmail, Google Sheets.
4. Set up Set User Config (e.g., score, channel, email).
5. Adjust the Weekly Report cron (default: Mondays 00:00 IST).
6. Test with sample data (e.g., {"email": "test@example.com", "ip": "8.8.8.8"}).

Requirements
Self-hosted n8n (for Verifi Email). Credentials: Verifi Email key, Slack token, Gmail, Google Sheets. Node.js and npm for installation. Form to send data to the Webhook.

Core Features
Fraud Detection: Email and IP validation.
Lead Scoring: AI-driven quality assessment.
Automated Nurturing: Personalized emails.
Real-Time Alerts: Slack notifications.
Weekly Reporting: Performance insights.

Use Cases & Applications
Sales Teams: Streamline lead follow-ups.
Marketing: Enhance campaign tracking.
B2B SaaS: Automate prospect nurturing.
Agencies: Deliver client-ready reports.

Key Benefits
Efficiency: Automates manual tasks.
Accuracy: Reduces fraud with validation.
Scalability: Handles multiple leads.
Insight: Weekly performance data.

Customization Options
Adjust scoring in the Function node (see the scoring sketch below).
Edit email templates in Gmail.
Add attachments via the File node.
Change the cron schedule.
Integrate a CRM with HTTP Request.

Important Disclaimers
For educational use only. Validate with your risk tolerance. Seek professional advice before use. Account for market volatility.
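The scoring logic referenced above could be sketched in the Function/Code node roughly as follows. The weights and input fields (`email_valid`, `ip_risk`, `company`, `title`) are assumptions — tune them to your own lead-quality criteria.

```js
// Illustrative lead-scoring sketch (0–100). Weights and field names are
// assumptions; adjust to match your validation and IP lookup outputs.
let score = 0;

if ($json.email_valid === true) score += 40;               // verified mailbox
if (($json.ip_risk || 'high') === 'low') score += 20;      // clean IP lookup
if ($json.company) score += 20;                            // business context provided
if (/director|vp|head|founder|cxo/i.test($json.title || '')) score += 20; // senior title

const quality = score >= 70 ? 'hot' : score >= 40 ? 'warm' : 'cold';

return [{ json: { ...$json, lead_score: score, quality } }];
```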
by Jitesh Dugar
1. Who's It For Conference organizers managing 500+ attendee tech/business events. Trade show managers needing networking automation. Professional associations running industry gatherings. Startup/investor event planners for demo days and mixers. Corporate event teams organizing all-hands and offsites. Continuing education coordinators for professional development. 2. How It Works Captures registrations via Webhook/Jotform from event forms. Extracts attendee data (name, email, company, goals, interests). Profiles attendees with AI Agent (GPT-4o) for persona classification. Scores engagement, influence, connection value (0-100 each). Identifies networking objectives and ideal connections. Recommends personalized sessions with relevance scoring. Generates 5 conversation starters per attendee. Routes by type: VIP/Speaker/Sponsor → Team alert + VIP email. First-timers get buddy assignment and orientation guide. Standard attendees receive personalized confirmation. Logs all data to Google Sheets with scores and personas. Tracks: registration ID, persona, scores, goals, dietary needs. Offers: AI profiling, smart routing, personalized emails, analytics. 3. How to Set Up 1. Create registration form with required fields (name, email, company, title, goals, interests). 2. Import workflow JSON to n8n via Workflows → Import. 3. Add credentials: OpenAI API, Gmail OAuth2, Google Sheets. 4. Configure Webhook Trigger or Jotform Trigger node. 5. Copy webhook URL and add to form platform (POST method). 6. Customize AI Agent prompt with your event details (name, dates, sessions). 7. Update email templates with branding and event information. 8. Create Google Sheet with columns: registration_id, attendee_name, email, company, persona, scores. 9. Set team alert email in "Alert Event Team (VIP)" node. 10. Test with sample registration to verify flow. 11. Activate workflow and monitor executions. Requirements n8n instance (cloud or self-hosted). Credentials: OpenAI API key, Gmail OAuth2, Google Sheets access. Event registration form (Jotform, Typeform, Google Forms, etc.). Google Sheet for attendee database. Email account for sending confirmations and alerts. Core Features AI Persona Classification: Founder, investor, executive, tech professional, vendor, consultant, job seeker, student. Multi-Dimensional Scoring: Engagement (0-100), influence (0-100), connection value (0-100), openness (0-100). Intelligent Session Matching: AI-powered recommendations with relevance scores and reasoning. Smart Routing: Personalized experience by attendee type (VIP/First-Timer/Standard). Conversation Starters: 5 personalized ice-breakers per attendee. Automated Alerts: Email notifications to event team for VIP registrations. Database Logging: Complete attendee profiles stored in Google Sheets. Welcome Automation: Personalized emails with event details and tips. Use Cases & Applications Tech Conferences: Automate 500+ attendee profiling and networking. Trade Shows: Match exhibitors with qualified prospects. Professional Events: Connect members based on complementary goals. Investor Meetups: Pair founders with relevant investors. Corporate Events: Facilitate internal networking and team building. Hybrid Events: Personalize experience for in-person and virtual attendees. Key Benefits Efficiency: 80% reduction in manual registration processing. Personalization: 100% customized experience at scale. Networking ROI: 3x more meaningful connections vs random networking. Attendee Satisfaction: 90% satisfaction with personalized agendas. 
Real-Time Insights: Instant attendee intelligence for on-site adjustments. Revenue Impact: Higher ticket sales, sponsor retention, lower refunds. Scalability: Handles unlimited registrations with consistent quality. Data-Driven: Measurable networking outcomes and ROI tracking. Customization Options Adjust AI scoring criteria in AI Agent prompt. Edit email templates with your branding and messaging. Add custom attendee fields (company size, budget, timeline). Modify persona classifications for your industry. Change routing logic for different attendee segments. Integrate CRM via HTTP Request node (HubSpot, Salesforce). Add post-event follow-up sequences. Build networking matchmaking based on compatibility scores. Create custom reports with additional metrics. Add SMS notifications via Twilio integration. Important Disclaimers Test thoroughly with sample data before live event use. Verify AI profiling accuracy aligns with your event needs. Ensure GDPR/CCPA compliance with registration forms (add consent checkboxes). Monitor OpenAI API costs based on registration volume (~$0.10-0.15 per attendee). Protect attendee privacy - use secure credentials and access controls. Review and moderate AI-generated content for appropriateness. Backup attendee data regularly from Google Sheets. Set up error notifications to catch workflow failures. Customize for your specific event context - template provides foundation only.
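The routing step after the AI Agent could be sketched as below. The persona values, score field names, and the VIP threshold are assumptions drawn from the description above — adapt them to your event's segments.

```js
// Hedged sketch of attendee routing: decide which branch (VIP /
// first-timer / standard) a registration follows after AI profiling.
const persona = ($json.persona || '').toLowerCase();
const influence = Number($json.influence_score) || 0;
const isFirstTimer = $json.first_time_attendee === true;

let branch = 'standard';
if (['speaker', 'sponsor'].includes(persona) || influence >= 80) {
  branch = 'vip';          // triggers team alert + VIP email
} else if (isFirstTimer) {
  branch = 'first_timer';  // buddy assignment + orientation guide
}

return [{ json: { ...$json, branch } }];
```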
by Jitesh Dugar
Tired of juggling maintenance calls, lost requests, and slow vendor responses? This workflow streamlines the entire property maintenance process — from tenant request to vendor dispatch — powered by AI categorization and automated communication. Cut resolution time from 5–7 days to under 24 hours and boost tenant satisfaction by 85% with zero manual follow-up. What This Workflow Does Transforms chaotic maintenance management into seamless automation: 📝 Captures Requests – Tenants submit issues via JotForm with unit number, issue description, urgency, and photos. 🤖 AI Categorization – OpenAI (GPT-4o-mini) analyzes and classifies issues (plumbing, HVAC, electrical, etc.). ⚙️ Smart Prioritization – Flags emergencies (leak, electrical failure) and assigns priority. 📬 Vendor Routing – Routes issue to the correct contractor or vendor based on AI category. 📧 Automated Communication – Sends acknowledgment to tenant and work order to vendor via Gmail. 📊 Audit Trail Logging – Optionally logs requests in Google Sheets for performance tracking and reporting. Key Features 🧠 AI-Powered Categorization – Intelligent issue type and priority detection. 🚨 Emergency Routing – Automatically escalates critical issues. 📤 Automated Work Orders – Sends detailed emails with property and tenant info. 📈 Google Sheets Logging – Transparent audit trail for compliance and analytics. 🔄 End-to-End Automation – From form submission to vendor dispatch in seconds. 💬 Sticky Notes Included – Every section annotated for easy understanding. Perfect For Property management companies Real estate agencies and facility teams Smart building operators Co-living and rental startups Maintenance coordinators managing 50–200+ requests monthly What You’ll Need Required Integrations: JotForm – Maintenance request form Create your form for free on JotForm using this link OpenAI (GPT-4o-mini) – Categorization and prioritization Gmail – Automated email notifications (Optional) Google Sheets – Logging and performance tracking Quick Start Import Template – Copy JSON into n8n and import. Create JotForm – Include fields: Tenant name, email, unit number, issue description, urgency, photo upload. Add Credentials – Configure JotForm, Gmail, and OpenAI credentials. Set Vendor Emails – Update “Send to Contractor” Gmail node with vendor email IDs. Test Workflow – Submit sample maintenance requests for AI categorization and routing. Activate Workflow – Go live and let your tenants submit maintenance issues. Expected Results ⏱️ 24-hour average resolution time (vs 5–7 days). 😀 85% higher tenant satisfaction with instant communication. 📉 Zero lost requests – every issue logged automatically. 🧠 AI-driven prioritization ensures critical issues handled first. 🕒 10+ hours saved weekly for property managers. Pro Tips 🧾 Add Google Sheets logging for a complete audit trail. 🔔 Include keywords like “leak,” “no power,” or “urgent” in AI prompts for faster emergency detection. 🧰 Expand vendor list dynamically using a Google Sheet lookup. 🧑🔧 Add follow-up automation to verify task completion from vendors. 📊 Create dashboards for monthly maintenance insights. Learning Resources This workflow demonstrates: AI categorization using OpenAI’s Chat Model (GPT-4o-mini) Multi-path routing logic (emergency vs. normal) Automated communication via Gmail Optional data logging in Google Sheets Annotated workflow with Sticky Notes for learning clarity
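A minimal sketch of the vendor-routing logic might look like this. The vendor addresses, category names, and emergency keywords are placeholders — replace them with your own contractor list and the categories your AI prompt returns.

```js
// Illustrative routing of an AI-categorized maintenance request to a vendor.
const VENDORS = {
  plumbing: 'plumber@vendors.example.com',
  hvac: 'hvac@vendors.example.com',
  electrical: 'electrician@vendors.example.com',
  general: 'handyman@vendors.example.com',
};

const category = ($json.category || 'general').toLowerCase();
const isEmergency = $json.priority === 'emergency' ||
  /leak|no power|gas|flood/i.test($json.issue_description || '');

return [{
  json: {
    ...$json,
    vendorEmail: VENDORS[category] || VENDORS.general,
    subject: `${isEmergency ? '🚨 EMERGENCY — ' : ''}Work order: ${category} issue, unit ${$json.unit_number || ''}`,
  },
}];
```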
by Oneclick AI Squad
This automated n8n workflow distributes school notices to stakeholders (students, parents, and staff) via WhatsApp, email, and other channels. It streamlines the process of scheduling, validating, and sending notices while updating distribution status. System Architecture Notice Distribution Pipeline**: Daily Notice Check - 9 AM: Triggers the workflow daily at 9 AM via Cron. Read Notices getAll worksheet: Retrieves notice data from a spreadsheet. Validation Flow**: Validate Notice Data: Validates and formats notice data. Distribution Flow**: Process Notice Distribution: Prepares notices for multiple channels. Prepare Email Content: Generates personalized email content. Send Email Notice: Delivers emails to recipients. Prepare WhatsApp Content: Formats notices for WhatsApp. Send WhatsApp Notice: Sends notices via WhatsApp Business API. Status Update**: Update Notice Status: Updates the distribution status in the spreadsheet. Implementation Guide Import Workflow**: Import the JSON file into n8n. Configure Cron Node**: Set to trigger daily at 9 AM (e.g., 0 9 * * *). Set Up Credentials**: Configure SMTP and WhatsApp Business API credentials. Prepare Spreadsheet**: Create a Google Sheet with notice_id, recipient_name, email, phone, notice_text, distribution_date, and status columns. Test Workflow**: Run manually to verify notice distribution and status updates. Adjust Thresholds**: Modify validation rules or content formatting as needed. Technical Dependencies Cron Service**: For scheduling the workflow. Google Sheets API**: For reading and updating notice data. SMTP Service**: For email notifications (e.g., Gmail, Outlook). WhatsApp Business API**: For sending WhatsApp messages. n8n**: For workflow automation and integration. Database & Sheet Structure Notice Tracking Sheet** (e.g., Notices): Columns: notice_id, recipient_name, email, phone, notice_text, distribution_date, status Example: | notice_id | recipient_name | email | phone | notice_text | distribution_date | status | |-----------|----------------|-------------------|-------------|------------------------------|-------------------|-----------| | 001 | John Doe | john@example.com | +1234567890 | School closed tomorrow | 2025-08-07 | Pending | | 002 | Jane Smith | jane@example.com | +0987654321 | Parent-teacher meeting | 2025-08-08 | Sent | Customization Possibilities Adjust Cron Schedule**: Change to hourly or weekly as needed. Add Channels**: Integrate additional notification channels (e.g., Slack, SMS). Customize Content**: Modify email and WhatsApp message templates. Enhance Validation**: Add rules for data validation (e.g., email format). Dashboard Integration**: Connect to a dashboard tool for real-time status tracking. Notes The workflow assumes a Google Sheet as the data source. Replace spreadsheet_id and range with your actual values. Ensure WhatsApp Business API is properly set up with a verified phone number and token. Test the workflow with a small dataset to confirm delivery and status updates.
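The "Validate Notice Data" step could be sketched as a Code node like the one below. The column names follow the sheet structure above; the email and phone checks are simple illustrative assumptions.

```js
// Minimal validation sketch: keep only rows that are still pending and
// have at least one deliverable contact (email or phone).
const valid = [];
for (const item of $input.all()) {
  const n = item.json;
  const emailOk = /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(n.email || '');
  const phoneOk = /^\+\d{7,15}$/.test(n.phone || '');
  if (n.status === 'Pending' && n.notice_text && (emailOk || phoneOk)) {
    valid.push({ json: { ...n, emailOk, phoneOk } });
  }
}
return valid;
```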
by Khairul Muhtadin
This workflow auto-ingests Google Drive documents, parses them with LlamaIndex, and stores Azure OpenAI embeddings in an in-memory vector store—cutting manual update time from ~30 minutes to under 2 minutes per doc.

Why Use This Workflow?
Cost Reduction: Eliminates the monthly cloud fee you would otherwise pay just to store your knowledge base.

Ideal For
Knowledge Managers / Documentation Teams: Automatically keep product docs and SOPs in sync when source files change on Google Drive.
Support Teams: Ensure the searchable KB is always up-to-date after doc edits, speeding agent onboarding and resolution time.
Developer / AI Teams: Populate an in-memory vector store for experiments, rapid prototyping, or local RAG demos.

How It Works
Trigger: Google Drive Trigger watches a specific document or folder for updates.
Data Collection: The updated file is downloaded from Google Drive.
Processing: The file is uploaded to LlamaIndex cloud via an HTTP Request to create a parsing job.
Intelligence Layer: The workflow polls the LlamaIndex job status (Wait + Monitor loop). If the parsing status equals SUCCESS, the result is retrieved as markdown.
Output & Delivery: Parsed markdown is loaded into LangChain's Default Data Loader, passed to Azure OpenAI embeddings (deployment "3small"), then inserted into an in-memory vector store.
Storage & Logging: The vector store holds embeddings in memory (good for prototyping). Optionally persist to an external vector DB for production.

Setup Guide

Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Execute and import the workflow — use the n8n instance |
| Google Drive OAuth2 | Essential | Watch and download documents from Google Drive |
| LlamaIndex Cloud API | Essential | Parse and convert documents to structured markdown |
| Azure OpenAI Account | Essential | Generate embeddings (deployment configured to model name "3small") |
| Persistent Vector DB (e.g., Pinecone) | Optional | Persist embeddings for production-scale search |

Installation Steps
Import the workflow JSON into your n8n instance: open your n8n instance and import the file.
Configure credentials:
Azure OpenAI: Provide the endpoint and API key and set the deployment name.
LlamaIndex API: Create an HTTP Header Auth credential in n8n. Header Name: Authorization. Header Value: Bearer YOUR_API_KEY.
Google Drive OAuth2: Create OAuth 2.0 credentials in Google Cloud Console, enable the Drive API, and configure the Google Drive OAuth2 credential in n8n.
Update environment-specific values: Replace the workflow's Google Drive fileId with the file or folder ID you want to watch (do not commit public IDs).
Customize settings:
Polling interval (Wait node): adjust for faster or slower job status checks.
Target file or folder: toggled on the Google Drive Trigger node.
Embedding model: change the Azure OpenAI deployment if needed.
Test execution: Save changes and trigger a sample file update on Drive. Verify each node runs and the vector store receives embeddings.
Technical Details

Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Knowledge Base Updated Trigger (Google Drive Trigger) | Triggers on file/folder changes | Set trigger type to specific file or folder; configure OAuth2 credential |
| Download Knowledge Document (Google Drive) | Downloads file binary | Operation: download; ensure OAuth2 credential is selected |
| Parse Document via LlamaIndex (HTTP Request) | Uploads file to LlamaIndex parsing endpoint | POST multipart/form-data to /parsing/upload; use HTTP Header Auth credential |
| Monitor Document Processing (HTTP Request) | Polls parsing job status | GET /parsing/job/{{jobId}}; check status field |
| Check Parsing Completion (If) | Branches on job status | Condition: {{$json.status}} equals SUCCESS |
| Retrieve Parsed Content (HTTP Request) | Fetches parsed markdown result | GET /parsing/job/{{jobId}}/result/markdown |
| Default Data Loader (LangChain) | Loads parsed markdown into document format | Use as document source for embeddings |
| Embeddings Azure OpenAI | Generates embeddings for documents | Credentials: Azure OpenAI; Model/Deployment: 3small |
| Insert Data to Store (vectorStoreInMemory) | Stores documents + embeddings | Use memory store for prototyping; switch to DB for persistence |

Workflow Logic
On Drive change, the file binary is downloaded and sent to LlamaIndex. The workflow then enters a monitor loop: Monitor Document Processing fetches the job status and the If node checks it. If the status is not SUCCESS, the Wait node delays before re-checking. When parsing completes, the workflow retrieves the markdown, loads the documents, creates embeddings via Azure OpenAI, and inserts the data into an in-memory vector store (a standalone sketch of this loop follows the customization options below).

Customization Options
Basic Adjustments:
Poll Delay: Set the Wait node (default: every minute) to balance speed vs. API quota.
Target Scope: Switch the trigger from a single file to a folder to auto-handle many docs.
Embedding Model: Swap the Azure deployment for a different model name as needed.
Advanced Enhancements:
Persistent Vector DB Integration: Replace vectorStoreInMemory with Pinecone or Milvus for production search.
Notification: Add Slack or email nodes to notify when parsing completes or fails.
Summarization: Add an LLM summarization step to generate chunk-level summaries.
Scaling option: Batch uploads and chunking to reduce embedding calls; use a queue (Redis or n8n queue patterns) and horizontal workers for high throughput.
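Outside n8n, the same monitor loop can be expressed in a few lines of JavaScript, which may help when debugging the LlamaIndex side. The endpoint paths follow the Core Nodes table above; the base URL, the response shapes, and the environment variable name are assumptions — verify them against the LlamaIndex Cloud documentation.

```js
// Standalone sketch of the poll-until-SUCCESS loop the workflow implements
// with HTTP Request + Wait + If nodes. Base URL and response shapes assumed.
const BASE = 'https://api.cloud.llamaindex.ai/api/parsing'; // assumed base URL
const headers = { Authorization: `Bearer ${process.env.LLAMA_CLOUD_API_KEY}` };

async function waitForParsedMarkdown(jobId, pollMs = 60_000, maxTries = 20) {
  for (let i = 0; i < maxTries; i++) {
    const res = await fetch(`${BASE}/job/${jobId}`, { headers });
    const { status } = await res.json();
    if (status === 'SUCCESS') {
      const md = await fetch(`${BASE}/job/${jobId}/result/markdown`, { headers });
      return (await md.json()).markdown;           // response shape assumed
    }
    await new Promise(r => setTimeout(r, pollMs)); // same role as the Wait node
  }
  throw new Error(`Parsing job ${jobId} did not finish in time`);
}
```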
Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|----------------------|-------------------|
| Execution time (per doc) | ~10s–2min (depends on file size & LlamaIndex processing) | Chunk large docs; run embeddings in batches |
| API calls (per doc) | 3–8 (upload, poll(s), retrieve, embedding calls) | Increase poll interval; consolidate requests |
| Error handling | Retries via Wait loop and If checks | Add exponential backoff, failure notifications, and retry limits |

Troubleshooting

| Problem | Cause | Solution |
|---------|-------|----------|
| Authentication errors | Invalid/missing credentials | Reconfigure n8n credentials; do not paste API keys directly into nodes |
| File not found | Incorrect fileId or permissions | Verify the Drive fileId and OAuth scopes; share the file with the service account if needed |
| Parsing stuck in PENDING | LlamaIndex processing delay or rate limit | Increase the Wait node interval, monitor the LlamaIndex dashboard, add retry limits |
| Embedding failures | Model/deployment mismatch or quota limits | Confirm the Azure deployment name (3small) and subscription quotas |

Created by: khmuhtadin
Category: Knowledge Management
Tags: google-drive, llamaindex, azure-openai, embeddings, knowledge-base, vector-store

Need custom workflows? Contact us
by Frederik Duchi
This n8n template demonstrates how to automatically process feedback on tasks and procedures using an AI agent. Employees provide feedback after completing a task, which the AI then analyzes to suggest improvements to the underlying procedures. Improvements can update how a single task is executed, or split or merge tasks within a procedure. Management then reviews the suggestions and decides whether to implement them. This makes it easy to close the loop between execution, feedback, and continuous process improvement.

Use cases are many:
Marketing (improve the process of approving advertising content)
Finance (optimize the process of expense reimbursement)
Operations (refine the process of equipment maintenance)

Good to know
The automation is based on the Baserow template for handling Standard Operating Procedures. However, it can also be implemented in other databases.
Baserow authentication is done through a database token. Check the documentation on how to create such a token.
Tasks are inserted using the HTTP Request node instead of a dedicated Baserow node. This supports batch import instead of importing records one by one.

Requirements
Baserow account (cloud or self-hosted)
The Baserow template for handling Standard Operating Procedures, or a similar database with the following tables and fields:
Procedures table with general procedure information such as the name or description.
Procedures steps table with all the steps associated with a procedure.
Tasks table that contains the actual tasks based on the procedure steps. It must have a field to capture feedback and a boolean field to indicate whether the feedback has been processed, so the same feedback is not used repeatedly.
Improvement suggestions table to store the suggestions made by the AI agent.

How it works
Set table and field ids: Stores the ids of the involved Baserow database and tables, together with the information needed to make requests to the Baserow API.
Feedback processing agent: The prompt contains a short instruction to check the feedback and suggest improvements to the procedures. The system message is much more extensive, providing as much detail and guidance to the agent as possible. It contains the following sections:
Role: giving the agent a clear professional perspective
Goals: allowing the agent to focus on clarity, efficiency and actionable improvements
Instructions: guiding the agent through a step-by-step flow
Output: showing the agent the expected format and details
Notes: setting guardrails so the agent makes justified and practical suggestions
The agent uses the following nodes:
OpenAI Chat Model (Model): the template uses the gpt-4.1 model from OpenAI by default, but you can replace this with any LLM.
current_procedures (Tool): provides information about all available procedures to the agent.
current_procedure steps (Tool): provides information about every step in the procedures to the agent.
tasks_feedback (Tool): provides the employees' feedback to the agent.
Required output schema (Output parser): forces the agent to use a JSON schema that matches the Improvement suggestions table structure for the output. This makes it easy to add the suggestions to the database in the next step.
Create improvement suggestions: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to insert multiple records at once into the Improvement suggestions table. The inserted records are the output generated by the AI agent. Check the Baserow API documentation for further details.
Get non-processed feedback: Gets all records from the Tasks table that contain feedback but are not yet marked as processed.
Set feedback to processed: Updates the boolean field for each task to true to indicate that the feedback has been processed.
Aggregate records for input: Aggregates the data from the previous nodes as an array in a property named items. This matches perfectly with the Baserow API for inserting new records in batch.
Update tasks to processed feedback: Calls the API endpoint /api/database/rows/table/{table_id}/batch/ to update multiple records at once in the Tasks table. The updated records will have their processed field set to true. Check the Baserow API documentation for further details.

How to use
The Manual Trigger node is provided as an example, but you can replace it with other triggers such as a webhook.
The included Baserow SOP template works perfectly as a base schema to try out this workflow. Set the corresponding ids in the Configure settings and ids node.
Check that the field names for the filters in the tasks_feedback tool node match the ones in your Tasks table.
Check that the field names for the filters in the Get non-processed feedback node match the ones in your Tasks table.
Check that the property name in the Set feedback to processed node matches the one in your Tasks table.

Customising this workflow
You can add a new workflow that updates the procedures based on the acceptance or rejection by management.
There is a lot of customization possible in the system prompt. For example: change the goal to prioritize security, cost savings or customer experience.
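To illustrate the batch update, the "Aggregate records for input" step could produce a body like the one sketched below, which the HTTP Request node then sends to the batch endpoint. The field name `processed` and the `user_field_names=true` query parameter are assumptions — match them to your Tasks table configuration.

```js
// Hedged sketch: build the `items` array the Baserow batch endpoint expects,
// marking each task's processed flag as true.
const items = $input.all().map(item => ({
  id: item.json.id,     // Baserow row id of the task
  processed: true,      // field name assumed — use your table's field name
}));

// This object becomes the JSON body of the HTTP Request node that calls
// /api/database/rows/table/{table_id}/batch/?user_field_names=true
return [{ json: { items } }];
```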