by Guillaume Duvernay
Never worry about losing your n8n workflows again. This template provides a powerful, automated backup system that gives you the peace of mind of version control without the complexity of Git. On a schedule you define, it intelligently scans your n8n instance for new workflow versions and saves them as downloadable snapshots in a clean, organized Airtable base.

But it's more than just a backup. This workflow uses AI to automatically generate a concise summary of what each workflow does and even documents the changes between versions. The result is a fully searchable, self-documenting library of all your automations, making it the perfect "single source of truth" for your team or personal projects.

Who is this for?

- **Self-hosted n8n users:** This is an essential insurance policy to protect your critical automations from server issues or data loss.
- **n8n developers & freelancers:** Maintain a complete version history for client projects, allowing you to easily review changes and restore previous versions.
- **Teams using n8n:** Create a central, browseable, and documented repository of all team workflows, making collaboration and handovers seamless.
- **Any n8n user who values their work:** Protect your time and effort with an easy-to-use, "set it and forget it" backup solution.

What problem does this solve?

- **Prevents catastrophic data loss:** Provides a simple, automated way to back up your most critical assets: your workflows.
- **Creates "no-code" version control:** Offers the benefits of version history (like Git) in a user-friendly Airtable interface, allowing you to browse and download any previous snapshot.
- **Automates documentation:** Who has time to document every change? The AI summary and changelog features mean you always have up-to-date documentation, even if you forget to write it yourself.
- **Improves workflow discovery:** Your Airtable base becomes a searchable, browseable library of all your workflows and their purposes, complete with AI-generated summaries.
How it works

- **Scheduled check:** On a recurring schedule (e.g., daily), the workflow fetches a list of all workflows from your n8n instance.
- **Detect new versions:** It compares the current version ID of each workflow with the snapshot IDs already saved in your Airtable base. It only proceeds with new, unsaved versions.
- **Generate AI documentation:** For each new snapshot, the workflow performs two smart actions:
  - **AI Changelog:** It compares the new workflow JSON with the previously saved version and uses AI to generate a one-sentence summary of what's changed.
  - **AI Summary:** It periodically re-analyzes the entire workflow to generate a fresh, high-level summary of its purpose, ensuring the main description stays up-to-date.
- **Store in Airtable:** It saves everything neatly in the provided two-table Airtable base:
  - A Workflows table holds the main record and the AI summary.
  - A linked Snapshots table stores the version-specific details, the AI changelog, and the actual .json backup file as an attachment.

Setup

1. **Duplicate the Airtable base:** Before you start, click here to duplicate the Airtable base template into your own Airtable account.
2. **Configure the workflow:**
   - Connect your n8n API credentials to the n8n nodes.
   - Connect your Airtable credentials and map the nodes to the base you just duplicated.
   - Connect your AI provider credentials to the OpenAI Chat Model nodes.
   - **Important:** In the Store workflow file into Airtable (HTTP Request) node, you must replace <AIRTABLE-BASE-ID> in the URL with your own base ID (it starts with app...).
3. **Set your schedule:** Configure the Schedule Trigger to your desired frequency (daily is a good start).
4. **Activate the workflow.** Your automated, AI-powered backup system is now live!

Taking it further

- **Add notifications:** Add a **Slack** or **Email** node at the end of the workflow to send a summary of which workflows were backed up during each run.
- **Use different storage:** While designed for Airtable, you could adapt the logic to store the JSON files in **Google Drive** or **Dropbox** and the metadata in **Google Sheets** or **Notion**.
- **Optimize AI costs:** The **Check workflow status** (Code) node is set to regenerate the main AI summary for the first few snapshots and then every 5th snapshot. You can edit the code in this node to change this frequency and manage your token consumption.
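The "first few snapshots, then every 5th" logic described above can be expressed in a few lines. This is a hedged sketch of what the Check workflow status Code node might do, not the template's actual code; the window size and interval are assumptions you would adjust in the node itself.

```javascript
// Hypothetical sketch of the summary-regeneration check in the
// "Check workflow status" Code node. Thresholds are illustrative.
function shouldRegenerateSummary(snapshotCount, initialWindow = 3, interval = 5) {
  // Always regenerate for the first few snapshots of a new workflow...
  if (snapshotCount <= initialWindow) return true;
  // ...then only every Nth snapshot, to limit token spend.
  return snapshotCount % interval === 0;
}
```

Lowering `interval` keeps summaries fresher at the cost of more AI calls; raising it does the opposite.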
by Rully Saputra
Who's it for

This workflow is perfect for IT departments, helpdesk teams, or internal service units that manage incoming support requests through Jotform. It automates ticket handling, classification, and response, saving time and ensuring consistent communication.

How it works

When a new IT service request is submitted through Jotform, this workflow automatically triggers in n8n. The submitted details (name, department, category, comments, etc.) are structured and analyzed using Google Gemini AI to summarize and classify the issue's priority level (P0–P2).

- **P0 (High):** Urgent issues that send an immediate Telegram alert.
- **P1 (Medium) / P2 (Low):** Logged in Google Sheets for tracking and reporting.

After classification, the workflow sends a confirmation email to the requester via Gmail, providing a summary of their submission and current status.

How to set up

1. Connect your Jotform account to the Jotform Trigger node.
2. Add your Google Sheets, Gmail, and (optionally) Telegram credentials.
3. Map your Jotform fields in the "Set" node (Full Name, Department, Category, etc.).
4. Test by submitting a form response.

Requirements

- Jotform account and published IT request form
- Google Sheets account
- Gmail account (for replies)
- Optional: Telegram bot for real-time alerts
- n8n account (cloud or self-hosted)

How to customize the workflow

- Adjust AI classification logic in the Priority Classifier node.
- Modify email templates for tone or format.
- Add filters or additional routing for different departments.
- Extend to integrate with your internal ticketing or Slack systems.
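The P0/P1/P2 branching after the classifier can be sketched as a small routing function. This is an illustrative version only; the field name `priority` and the route labels are assumptions about the payload, not the template's actual node code.

```javascript
// Hedged sketch of the routing step after the Gemini priority classifier.
function routeTicket(ticket) {
  switch (ticket.priority) {
    case 'P0':
      return { channel: 'telegram', log: true }; // urgent: immediate alert
    case 'P1':
    case 'P2':
      return { channel: 'sheets', log: true };   // tracked in Google Sheets
    default:
      return { channel: 'review', log: true };   // unclassified: manual review
  }
}
```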
by Madame AI
Auto-reply to Telegram messages using BrowserAct & Google Gemini

This workflow acts as a smart, 24/7 personal assistant for your Telegram chats. It runs on a schedule to monitor your message history, uses AI to decide if a reply is necessary, drafts a personalized response, and sends it back to the user, all while handling delivery verification and potential CAPTCHA challenges via BrowserAct.

Target Audience

Community managers, busy professionals, and customer support teams who need to manage Telegram communications efficiently.

How it works

- **Scheduled Check:** Every 15 minutes, the workflow triggers BrowserAct to fetch the latest chat history.
- **Analysis:** An AI Agent (using Google Gemini) reviews the conversation. It determines whether the last message requires a response (e.g., a question) or the chat is idle.
- **Drafting:** If a reply is needed, the AI drafts a personalized message that includes the user's name and a standard footer disclaimer.
- **Formatting:** A Code node cleans up the text to ensure proper line breaks and formatting for Telegram.
- **Delivery:** BrowserAct executes the task to send the drafted reply. The workflow loops to check the task status, ensuring the message is delivered successfully.

How to set up

1. **Configure Credentials:** Connect your BrowserAct and Google Gemini accounts in n8n.
2. **Prepare BrowserAct:** Ensure the Telegram Personal Assistant template is saved in your BrowserAct account.
3. **Set Schedule:** The default trigger interval is 15 minutes. Adjust the Schedule Trigger node if you need a different frequency.
4. **Activate:** Turn on the workflow to start monitoring your chats.

Requirements

- **BrowserAct** account with the **Telegram Personal Assistant** template.
- **Google Gemini** account.
- **Telegram** account (accessed via BrowserAct).

How to customize the workflow

- **Change AI Persona:** Modify the system prompt in the Chatting & Answering AI agent to change the tone from "Professional Support" to "Casual Assistant" or "Sales Representative."
- **Adjust Frequency:** Change the Schedule Trigger interval to run every 5 minutes for faster responses or hourly for less urgency.
- **Add Notification:** Add a Slack or Email node after the delivery step to get notified whenever the bot sends a reply.

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- Workflow Guidance and Showcase Video
- Telegram Personal Assistant: Auto-Read Chats & Auto-Reply them with n8n
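The Formatting step above is a Code node that normalizes line breaks before the reply is sent. The template's exact cleanup rules are not published, so the following is a plausible sketch under that assumption:

```javascript
// Hypothetical version of the formatting Code node: normalize line
// breaks so the drafted reply renders cleanly in Telegram.
function formatForTelegram(draft) {
  return draft
    .replace(/\r\n/g, '\n')     // normalize Windows line endings
    .replace(/\\n/g, '\n')      // turn literal "\n" from the LLM into real breaks
    .replace(/\n{3,}/g, '\n\n') // collapse runs of blank lines
    .trim();
}
```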
by Avkash Kakdiya
How it works

This workflow captures webinar feedback through a webhook and normalizes the submitted data for processing. It stores raw feedback in Google Sheets, uses an AI model to understand sentiment and intent, and generates a personalized response. A professional HTML thank-you email is sent automatically to each attendee. All replies and delivery details are logged back into the spreadsheet for tracking.

Step-by-step

**Receive webinar feedback**
- Feedback Webhook – Accepts feedback submissions from a webinar form in real time.
- ID Generation – Creates a human-readable, unique feedback ID for tracking.
- Normalize Feedback – Cleans and standardizes incoming fields like name, email, rating, and comments.

**Store and enrich feedback**
- Store Partial – Saves the raw feedback data into Google Sheets.
- Common Resources – Attaches shared webinar resources such as recordings and slides.

**Analyze feedback with AI**
- Message a model – Evaluates sentiment, engagement level, and intent using an AI model.
- Parse AI Response – Extracts structured insights like segment, reply text, and next steps.

**Generate and send follow-up**
- Merge – Combines feedback data, AI response, and resources.
- Build Email HTML – Creates a clean, professional HTML email tailored to each attendee.
- Send AI Thank You Email – Sends the personalized follow-up via Gmail.

**Log final outcome**
- Store Feedback – Updates Google Sheets with the sent email content, timestamp, and status.

Why use this?

- Save time by automating webinar feedback follow-ups end to end.
- Ensure every attendee receives a thoughtful, personalized response.
- Maintain a complete feedback and communication log in one place.
- Improve engagement without sounding promotional or generic.
- Scale post-webinar communication without manual effort.
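The ID Generation step produces a human-readable tracking ID. The template doesn't publish its format, so this is only one plausible shape (prefix + date + random suffix), labeled as an assumption:

```javascript
// Illustrative sketch of the "ID Generation" node. The FB-YYYYMMDD-XXXX
// format is an assumption, not the template's documented scheme.
function makeFeedbackId(date = new Date()) {
  const stamp = date.toISOString().slice(0, 10).replace(/-/g, ''); // YYYYMMDD
  const suffix = Math.random().toString(36).slice(2, 6).toUpperCase();
  return `FB-${stamp}-${suffix}`;
}
```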
by OwenLee
🤯 Problem of Traditional Bookkeeping

- 🔀 **Context switch kills the habit:** Because bookkeeping lives outside the apps you use every day, you postpone it → forget to log.
- 🧱 **High input friction:** You're forced to fill rigid fields (amount/category/date/notes…), which is slow and discouraging for quick capture.
- 🎙️💸 **Weak or pricey natural-language options:** A few tools support voice/chat, but they're often expensive, and the experience is hit-or-miss.
- 🔒📦 **Limited data ownership:** Records live on third-party servers, so privacy and control are diluted.

📲 How This Workflow Fixes It

- 💬 **Put the capture back where you already are:** Log expenses directly inside Telegram (or other channels) in a familiar chat, with no new app to learn.
- ⚡ **Ultra-low-friction, unstructured input:** Send text, a voice note, or a receipt photo. The flow extracts amount · item · date, supports multiple languages and relative dates, and can split multiple expenses from one message.
- 🗂️📝 **Your data, your sheet:** Final records are written to your own Google Sheet (columnar fields or a JSON column). You keep full control.

🔗 Demo Google Sheet: click me

👥 Who Is This For

- 😤 Anyone fed up with traditional bookkeeping but curious about an AI-assisted, chat-based way to log expenses.
- 🤖 People who tried AI bookkeeping apps but found them pricey, inflexible, or clunky.
- 💵 Bookkeeping beginners who want frictionless capture first, with simple review and categorization later.

🧩 How It Works

1. 💬 Captures expenses from Telegram (text, voice note, or receipt photo).
2. 🔎 Normalizes inputs into raw text (uses Gemini to transcribe voice and extract text from images).
3. 🧠 Parses amount · item · date with an LLM expense parser.
4. 📊 Appends tidy rows to Google Sheets.
5. 🔔 Sends a Telegram confirmation summarizing exactly what was recorded.

🛠️ How to Set Up

1. 🔑 **Connect credentials (once):** TELEGRAM_BOT_TOKEN, LLM_API_KEY, GOOGLE_SHEETS_OAUTH
2. 🚀 **Quick Start**
   - **Setup:** Create a Google Sheet to store **Log Expense** data and configure it in n8n.
   - **Telegram:** Fill in and verify the **Telegram chatId**. Remember to enable the workflow!

🧰 How to Customize the Workflow

- 📝 **Other user interaction channels:** Add Gmail, Slack, or a website Webhook to accept email/command/form submissions that map into the same parser.
- 🌍 **Currency:** Extract and store currency in its own column (e.g., MYR, USD); keep the amount numeric only (no symbols).
- 🔎 Use higher-accuracy OCR / STT to reduce errors.

📩 Help

Contact: owenlzyxg@gmail.com
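Before a parsed expense is appended to the sheet, it helps to validate the LLM's output (step 3 above) so the amount stays numeric and malformed rows are dropped. This is a sketch of such a validation step; the field names `item`, `amount`, and `date` are assumptions about the parser's output shape:

```javascript
// Hedged sketch of a validation step for the LLM expense parser output.
function validateExpense(row) {
  const amount = Number(row.amount);
  // Reject rows with no item or a non-positive / non-numeric amount.
  if (!row.item || Number.isNaN(amount) || amount <= 0) return null;
  return {
    item: String(row.item).trim(),
    amount, // numeric only: currency symbols belong in their own column
    date: row.date || new Date().toISOString().slice(0, 10), // default: today
  };
}
```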
by Cheng Siong Chin
How It Works

This workflow automates credit operations onboarding by running KYC verification, credit bureau checks, identity validation, and sanctions screening through a single AI-powered agent. Built for credit operations teams, compliance officers, and fintech platforms, it eliminates manual eligibility reviews that are slow and error-prone.

Triggered via webhook, the Credit Operations Agent orchestrates all verification tools simultaneously, then routes customers by eligibility status: eligible, ineligible, pending documentation, or compliance escalation. Each path prepares structured data stored in Airtable, triggers appropriate follow-up actions (email, Slack alerts), and logs a full audit trail. A final formatted response is returned to the originating system, closing the loop end-to-end with no manual handoffs.

Setup Steps

1. Set the webhook URL and connect the Credit Operations webhook node to your intake system.
2. Add your OpenAI API key to the OpenAI Chat Model node.
3. Configure KYC, Credit Bureau, Identity, and Sanctions tool credentials.
4. Add Gmail OAuth2 and a Slack bot token for the notification nodes.
5. Connect your Airtable API key; set base/table IDs for the eligible and ineligible customer stores.

Prerequisites

- KYC & Credit Bureau API credentials
- Sanctions screening API access
- Gmail OAuth2 and Slack bot token
- Airtable API key

Use Cases

Fintech platforms automating loan application eligibility screening.

Customisation

Add extra verification tools (e.g., biometric or document OCR APIs).

Benefits

Eliminates manual KYC and sanctions review bottlenecks.
by Oneclick AI Squad
This enterprise-grade n8n workflow automates influencer contract compliance for Instagram campaigns, from deadline tracking to breach detection, using Claude AI, the Instagram API, and smart reminders. It ensures timely deliverables by monitoring posts, sending proactive alerts, and escalating breaches, all with zero manual oversight.

Key Features

- **Daily contract scanning** from Google Sheets
- **Deadline calculations** for approaching or passed dates
- **Automated reminders** via Slack for nearing deadlines
- **Post verification** using the Instagram API and **Claude 3.5 Sonnet** for hashtag/description matching
- **Breach detection engine** identifies non-compliant influencers
- **Escalation notifications** to managers on breaches
- **Audit logs** stored in Google Sheets for compliance records
- **Scalable triggers:** daily schedule or webhook for new contracts

Workflow Process

| Step | Node | Description |
| ---- | --- | --- |
| 1 | Schedule Trigger | Runs daily at 9 AM or via webhook (/influencer-compliance) |
| 2 | Get Contracts | Loads influencer contracts from Contracts sheet |
| 3 | Loop Over Contracts | Processes each contract individually |
| 4 | Calculate Deadline Status | Computes days to deadline, flags approaching/passed |
| 5 | IF Approaching Deadline | Branches if within 3 days |
| 6 | Send Reminder (Slack) | Notifies influencer/team of upcoming deadline |
| 7 | IF Deadline Passed | Branches if deadline has passed |
| 8 | Get Influencer Posts | Fetches recent posts from influencer's Instagram |
| 9 | Check Post Compliance (Claude AI) | Uses AI to verify if post matches contract requirements |
| 10 | IF Breach Detected | Branches if non-compliant |
| 11 | Notify Breach (Slack) | Alerts manager of contract breach |
| 12 | Log Breach | Records breach details in Logs sheet |
| 13 | End (Compliant Path) | Terminates compliant branches |

Setup Instructions

1. **Import Workflow:** Open n8n → Workflows → Import from Clipboard, then paste the JSON workflow.
2. **Configure Credentials:**

| Integration | Details |
| --- | --- |
| Google Sheets | Service account with spreadsheet access |
| Instagram API | Business access token for fetching media |
| Claude AI | Anthropic API key for claude-3-5-sonnet-20241022 |
| Slack | Webhook or OAuth app |

3. **Update Spreadsheet IDs:** Ensure your Google Sheets include the Contracts and Logs sheets.
4. **Set Triggers:**
   - **Webhook:** /webhook/influencer-compliance (for real-time updates)
   - **Schedule:** daily at 9:00 AM
5. **Run a Test:** Use manual execution to confirm reminder and breach notifications, post compliance checks, and log entries in the sheet.

Google Sheets Structure

Contracts

| campaignId | influencerName | influencerUserId | deadline | requiredHashtag | campaignDescription |
|------------|----------------|------------------|------------|-----------------|---------------------|
| CAMP-001 | InfluencerX | 1234567890 | 2023-12-31 | #BrandPromo | Promote new product |

Logs

| campaignId | influencerName | action | timestamp | details |
|------------|----------------|-----------------|--------------------|----------------------|
| CAMP-001 | InfluencerX | Breach Detected | 2023-10-01T12:00:00Z | Failed to post by deadline |

System Requirements

| Requirement | Version/Access |
| --- | --- |
| n8n | v1.50+ (AI and API integrations supported) |
| Claude AI API | claude-3-5-sonnet-20241022 |
| Instagram Graph API | Business account access token |
| Google Sheets API | https://www.googleapis.com/auth/spreadsheets |
| Slack Webhook | Required for notifications |

Optional Enhancements

- Add email reminders to influencers directly
- Integrate Asana/Trello for task creation on breaches
- Use advanced AI prompts for deeper post analysis (e.g., sentiment)
- Connect payment APIs (Stripe) to withhold payments on breaches
- Enable multi-platform checks (e.g., TikTok/YouTube)
- Add a reporting dashboard via Google Data Studio
- Implement auto-contract updates from CRM systems

Result: A single automated system that tracks, reminds, and enforces influencer deliverables, with full AI verification and zero manual work.

Get in touch with us for custom n8n automation!
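The Calculate Deadline Status node (step 4) computes days to deadline and flags the approaching/passed branches. A minimal sketch of that logic, assuming the 3-day reminder window described above (the field names are illustrative, not the node's actual code):

```javascript
// Hedged sketch of the "Calculate Deadline Status" logic.
function deadlineStatus(deadline, now = new Date()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysLeft = Math.ceil((new Date(deadline) - now) / msPerDay);
  return {
    daysLeft,
    approaching: daysLeft >= 0 && daysLeft <= 3, // feeds IF Approaching Deadline
    passed: daysLeft < 0,                        // feeds IF Deadline Passed
  };
}
```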
by Anatoly
AI-Powered Voice Appointment Booking with ElevenLabs

Transform your appointment scheduling with this production-ready workflow that connects ElevenLabs voice AI to your Google Calendar. Patients can call and book appointments naturally through conversation, while the system handles real-time availability checking, calendar management, and automated confirmations, completely hands-free.

🎯 What It Does:

- **Voice-Activated Booking:** Patients speak naturally to book appointments via the ElevenLabs AI agent
- **Real-Time Availability:** Instantly checks calendar conflicts and returns available time slots
- **Smart Scheduling:** Automatically generates available 30-minute slots across business hours for the next 30 days
- **Calendar Sync:** Creates Google Calendar events with all appointment details, locations, and attendees
- **Email Confirmations:** Sends personalized HTML confirmation emails with appointment details and preparation instructions
- **Dual Functionality:** Handles both "check availability" queries and "book appointment" requests through a single webhook

💼 Perfect For:

Medical/dental clinics • Beauty salons • Spas • Consulting services • Legal offices • Fitness studios • Therapy practices • Any appointment-based business seeking 24/7 automated scheduling

Real conversation example:

Voice version: https://drive.google.com/file/d/1CjhyjICTtLioA737SOkpbgcdLdUpVCr3/view

Text version:

++AI Voice Assistant++ (00:00) Hello, I'm Ella from Evergreen Clinic. How can I assist you today?
++Patient++ (00:05) Hello Ella, I wanted to book an appointment for Botox treatment.
++AI Voice Assistant++ (00:11) I'd be happy to help you schedule a Botox appointment. Our Botox treatments take about 20 minutes and are performed by Dr. Sava or our nurse injector Sarah. Would you prefer our Zurich or Bern location?
++Patient++ (00:24) I prefer Zurich.
++AI Voice Assistant++ (00:28) great Zurich it is. Do you have a specific date in mind for your Botox appointment?
++Patient++ (00:32) Yeah, October the 7th and let's say 5 p.m.
++AI Voice Assistant++ (00:47) I have the 5 p.m. slot available on October 7th in Zurich. May I have your first name to proceed with the booking?
++Patient++ (00:54) Yes, my first name Anatoly, surname Sava.
++AI Voice Assistant++ (01:02) Thank you, Anatoly Sava. Could you please provide your email address for the appointment confirmation? Please spell it out slowly.
++Patient++ (01:08) [Your Email]
++AI Voice Assistant++ (01:19) Thank you. Just to confirm, your email is [Your Email] correct?
++Patient++ (01:29) Yes.
++AI Voice Assistant++ (01:34) Perfect. Lastly, may I have your phone number to complete the booking?
++Patient++ (01:39) My phone number is [Your Number]
++AI Voice Assistant++ (01:51) Thank you for the details, Anatoly. I will now book your Botox appointment in Zurich on October 7th at 5 p.m. Perfect. I've booked your Botox appointment at our Zurich clinic on October 7th at 5 p.m. A confirmation email will be sent to [Your Email]. Thank you for choosing Evergreen Clinic. If you have any questions before your appointment, please don't hesitate to call us back. Have a wonderful day.
++Patient++ (02:19) Perfect. You too.
++AI Voice Assistant++ (02:22) Thank you, take care and have a great day ahead.

🔧 Technical Features:

- Intelligent routing logic distinguishes between availability checks and booking requests
- Custom JavaScript algorithm calculates conflict-free time slots
- Timezone-aware date/time handling (customizable for any timezone)
- Configurable business hours, appointment duration, and look-ahead period
- Natural language date formatting for voice responses
- Webhook-based architecture for seamless ElevenLabs integration

📋 Requirements:

- ElevenLabs account (for voice agent)
- Google Calendar
- Gmail address for confirmations

⚡ Quick Setup:

**Step 1: n8n Configuration**

Connect your Google Calendar, customize business hours in the code node, and update the clinic details in the email template.
Includes pinned test data for easy testing before going live.

**Step 2: ElevenLabs Agent Setup**

1. Create a new Conversational AI agent in ElevenLabs.
2. Configure two tools in ElevenLabs with the webhook URL pointing to this workflow:
   - check_availability(date, time, appointmentType, location)
   - book_appointment(fullName, email, phone, date, time, appointmentType, location)
3. Create a detailed system prompt defining your agent's identity, personality, tone of voice, and conversation behavior.

This is a chunk of a system prompt (the full version is too long):

IDENTITY
You are Ella, an appointment scheduling voice assistant for Evergreen Clinic, which specializes in plastic surgery procedures. Your primary purpose is to efficiently schedule, confirm, reschedule, or cancel appointments while providing accurate information about services from your knowledge files only.

VOICE & STYLE
- Use a friendly, professional yet warm tone suitable for a plastic surgery clinic
- Keep all your responses short and simple. Use casual language; phrases like "Umm..", "Well...", and "I mean" are preferred.
- NEVER interrupt a patient if they start talking
- This is a voice conversation, so keep your responses short, like in a real conversation. Don't ramble for too long.
- Keep responses concise (8-15 words per sentence ideal)
- Use natural speech patterns with occasional phrases like "I understand" and "Of course..."
- Insert brief pauses between information chunks to sound natural
- Show empathy and sensitivity appropriate for plastic surgery clients
- Never rush the caller, but keep the conversation focused
- Maintain a reassuring tone when discussing procedures
- Zürich — pronounce /ˈtsyːʁɪç/. In plain terms: start with "ts", the ü is like French u in tu (rounded front, not "oo"), stress the first syllable, and the final "ch" is the soft German sound as in ich (not "k"). Avoid saying "ZOO-rich."
- Bern — pronounce /bɛrn/. Short e like in bet, single r (not American "urn"), no extra vowel at the end.
Avoid saying "BURN."

Full ElevenLabs setup guide: https://youtu.be/kbiWe93nFMk?si=_bQYIoq1cjXKO5kl

🎁 What You Get:

Complete end-to-end automation with detailed sticky note documentation, a live conversation example, a video tutorial, a system prompt template, customization guides, troubleshooting tips, and security best practices.
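The "custom JavaScript algorithm" that calculates conflict-free time slots can be approximated as follows. This is a minimal sketch under stated assumptions (9:00-17:00 business hours, 30-minute slots, an event shape of `{ start, end }`), not the template's actual code node:

```javascript
// Hedged sketch of conflict-free 30-minute slot generation for one day.
function freeSlots(dayStartIso, events, openHour = 9, closeHour = 17) {
  const slots = [];
  const day = new Date(dayStartIso);
  for (let h = openHour; h < closeHour; h++) {
    for (const m of [0, 30]) {
      const start = new Date(day);
      start.setHours(h, m, 0, 0);
      const end = new Date(start.getTime() + 30 * 60 * 1000);
      // Standard interval-overlap test against existing calendar events.
      const busy = events.some(
        (e) => start < new Date(e.end) && end > new Date(e.start)
      );
      if (!busy) slots.push(start.toTimeString().slice(0, 5)); // "HH:MM"
    }
  }
  return slots;
}
```

The real workflow extends this idea across a 30-day look-ahead and applies timezone handling before responding to the voice agent.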
by Cheng Siong Chin
How It Works

This workflow automates end-to-end sustainability lifecycle management for corporate sustainability teams, ESG governance officers, and circular economy programme leads. It addresses the challenge of coordinating fragmented sustainability inputs (scheduled monitoring, platform data feeds, and manual initiative submissions) into a single governed, auditable reporting pipeline.

Data enters from three sources: a lifecycle monitor, an external sustainability data API, and an initiative submission form. Inputs are merged and passed to a Sustainability Orchestrator with shared memory, which delegates to three specialist agents: a Circular Economy Agent (metrics calculation and circular output scoring), a Sustainability Governance Agent (governance evaluation and Slack-based approvals), and a Documentation Agent (ESG document creation). The orchestrator then routes actions for documentation tracking, lifecycle analytics, and governance approvals, before combining results, aggregating metrics, and sending a stakeholder report via Gmail and a summary notification via Slack.

Setup Steps

1. Import the workflow; configure the lifecycle monitor trigger interval and the external sustainability API endpoint URL.
2. Add AI model credentials to the Sustainability Orchestrator and Documentation Agent.
3. Connect Slack credentials to the Approval Request Tool and Send Summary Notification nodes.
4. Link Gmail credentials to the Send Stakeholder Report node.
5. Configure Google Sheets credentials; set sheet IDs for ESG Documentation, Lifecycle Analytics, and Governance.
6. Set scoring thresholds in the Metrics Calculator node.

Prerequisites

- OpenAI API key (or compatible LLM)
- Slack workspace with bot credentials
- Gmail account with OAuth credentials
- Google Sheets with tracking tabs pre-created

Use Cases

Enterprises automating circular economy programme scoring across product lines.

Customisation

Swap the Circular Economy scoring thresholds to align with GRI, Ellen MacArthur, or regional frameworks.

Benefits

Triple-source ingestion eliminates sustainability data blind spots.
by Pratyush Kumar Jha
Smart Resume Screener — JD ↔ Resume AI Match & Sheet Logger

Smart Resume Screener ingests a candidate resume and a job description link, extracts clean text from both, runs an LLM-powered screening agent to produce a structured assessment (strengths, weaknesses, risk/reward, justification, and a 0–10 fit score), extracts contact details, and appends a single, validated row to a Google Sheet for tracking.

How It Works (Step-by-Step)

**1. Trigger — On Form Submission**

A public form webhook sends:
- Binary resume file (PDF / DOCX)
- Job Description (JD) URL or text

**2. Extract & Fetch Content**

- **Resume Extraction node:** Converts the uploaded binary resume into plain text (data.resume).
- **HTTP Request node:** Fetches the JD HTML/text from the provided link.
- **Job Description Extractor (LLM-driven):** Parses the fetched content into structured JD fields: requirements, responsibilities, skills, seniority, etc.

**3. Prepare and Aggregate**

- **Set Resume node:** Normalizes the resume into a clean JSON object.
- **Merge/Aggregate node:** Builds a single payload containing:

{ "resume": "...", "job_description": "...", "meta": "..." }

**4. AI Evaluation**

- **Recruiter Agent (LangChain node, powered by Google Gemini):** Receives the aggregated payload and returns a strict JSON-formatted screening report including:
  - candidate_strengths
  - candidate_weaknesses
  - risk
  - reward
  - overall_fit_rating (0–10 numeric)
  - justification
- **Structured Output Parser:** Enforces the JSON schema and ensures predictable downstream data.

**5. Identity Extraction & Logging**

- **Contact Info Extractor:** Extracts name and email.
- **Append to Google Sheets:** Writes date, name, email, strengths, weaknesses, risk, reward, justification, and overall fit score.

**6. (Optional) Notifications / Follow-Ups**

- Add Slack / Email / Webhook nodes
- Trigger alerts for high-fit candidates

Quick Setup Guide

👉 Demo & Setup Video
👉 Sheet Template
👉 Course

Nodes of Interest You Can Edit

- **Trigger — On Form Submission:** Change the webhook URL; modify accepted form fields; add metadata capture (job_id, source).
- **Resume Extraction (Extract from File):** Enable OCR fallback; adjust encoding/charset handling; replace with a third-party resume parser.
- **HTTP Request (Fetch Job Description):** Configure timeouts; add a retry policy; set headers; restrict allowed domains.
- **Job Description Extractor (Information Extractor1):** Modify the extractor prompt/schema; add fields like must_have and nice_to_have.
- **Set Resume (Prepare Resume):** Strip headers/footers; normalize dates; split resume sections.
- **Merge / Aggregate:** Modify the payload structure; add context fields (job_id, recruiter_notes, source_platform).
- **Recruiter Agent (LangChain Agent):** Edit system/user prompts; adjust model temperature; modify token limits; switch LLM provider.
- **Structured Output Parser:** Update the JSON schema; add fields like experience_years, certifications, notice_period.
- **Contact Info Extractor:** Add phone, LinkedIn, location.
- **Append to Google Sheets:** Modify column mapping; add fields like workflow_run_id, resume_link.

What You'll Need (Credentials)

- Google Sheets API credentials (OAuth or Service Account)
- Google Drive / Storage credentials (if resumes are stored there)
- LLM provider credentials (e.g., Google Gemini API key/service account)
- (Optional) OCR / Vision API credentials for scanned PDFs
- (Optional) Email / Slack / Teams webhook or SMTP credentials
- Access to public JD URLs (or credentials if behind authentication)

Recommended Settings & Best Practices

- **LLM temperature:** 0.0–0.3 for consistent output
- **Max tokens:** 800–1200 for justification (with enforced limits)
- **Strict JSON schema:** Fail fast on invalid structure
- **Retries & timeouts:** ~10s HTTP timeout, 2 retries with exponential backoff
- **Rate limiting:** Protect LLM quotas
- **Deduplication:** Check existing email or resume hash
- **Least privilege:** Scope the Google service account to the target sheet only
- **PII handling:** Limit exposed fields; encrypt sensitive data if needed
- **Schema versioning:** Add a schema_version column
- **Error logging:** Use a Catch node with workflow_run_id
- **Human review gate:** Route borderline scores (6–7) for manual review

Customization Ideas

- Conditional alerts (overall_fit_rating >= 8)
- Multi-model scoring (Gemini + alternative model)
- Automated outreach emails
- ATS integration (Greenhouse, Lever, etc.)
- JD template library
- Multi-language resume routing
- Skill-level mapping (e.g., python: 4/5)
- Candidate scoring dashboard
- Resume storage with secure links

Troubleshooting — Quick Tips

- **Resume extraction issues:** Validate the binary input; enable OCR for scanned PDFs; check encoding and file type.
- **JD fetch failure:** Validate URL reachability; add headers (User-Agent); increase the timeout; provide auth if needed.
- **LLM JSON errors:** Lower the temperature (0–0.2); enforce a strict-JSON prompt; add a retry with a "fix-json" prompt; inspect the raw LLM output.
- **Google Sheets append fails:** Check credential expiry; confirm the sheet ID and gid; validate the column mapping; monitor the API quota.
- **Duplicate rows:** Add email-based dedupe logic; hash resume content.
- **PII exposure:** Audit sheet sharing settings; use restricted service accounts.

Tags: recruiting, resume-parser, ai-screening, langchain, google-gemini, google-sheets, n8n, ats-integration, pii-sensitive, automation
by Neloy Barman
Self-Hosted This workflow provides a complete end-to-end system for automatically managing your inbox by reading incoming questions, matching them to approved guidelines, and sending consistent, 24/7 replies. By combining local AI processing with an automated retrieval-augmented generation (RAG) pipeline, it ensures fast resolution times without compromising data privacy or incurring ongoing AI API costs. Who is this for? This is designed for University Admissions, Student Support Teams, Customer Service Staff, or professionals in any industry who are overwhelmed by their inboxes and spend countless hours answering repetitive questions. It is particularly useful for any organization looking to automate routine FAQs across various fields, maintaining personalized, human-like, and threaded email conversations while keeping data completely in-house. 🛠️ Tech Stack n8n**: For workflow orchestration of both the ingestion pipeline and response automation. Docker & Docker Compose**: For containerizing and orchestrating the n8n and Qdrant services locally. Google Drive**: To host and trigger updates from the approved FAQ knowledge base. Gmail**: For real-time incoming email triggers and threaded outbound replies. Qdrant**: For self-hosted vector database storage and similarity matching. LM Studio**: To host the local AI models via an OpenAI-compatible API for two primary tasks: Embedding Generation: Uses the mxbai-embed-large-v1 model to convert FAQ data and incoming questions into high-dimensional vectors for semantic matching. Response Generation: Uses the llama-3.2-3b-instruct model to process the retrieved context and craft a polite, personalized HTML email reply. ✨ How it works Knowledge Base Ingestion: The workflow automatically detects updates to a specific FAQ JSON file in Google Drive, converts the Q&A pairs into vector embeddings using the local mxbai model, and stores them in Qdrant. 
Email Trigger: The resolution pipeline kicks off instantly when a new incoming email arrives via the Gmail trigger.
Semantic Search: The incoming question is converted to an embedding using the mxbai-embed-large-v1 model and checked against the Qdrant database to retrieve the top 3 most relevant FAQ answers, enforcing a minimum 0.7 similarity threshold for quality control.
LLM Response Generation: The OpenAI node (pointing to LM Studio) processes the retrieved context and the student's email using the llama-3.2-3b-instruct model to craft a polite, personalized HTML email response.
Threaded Reply: The Gmail node sends the generated response directly back into the original email thread, exactly like a human would.
📋 Requirements
Docker** and **Docker Compose** installed to run n8n and Qdrant locally.
LM Studio** running a local server on port 1234.
mxbai-embed-large-v1** (GGUF) and **llama-3.2-3b-instruct** (GGUF) models loaded in LM Studio.
Google Cloud Console** account with the Gmail and Google Drive APIs enabled.
A properly formatted FAQ JSON file hosted in Google Drive.
🚀 How to set up
Prepare your Local AI: Open LM Studio and download both the embedding and LLM models. Start the Local Server on port 1234. Note your machine's local IP address (e.g., 192.168.1.50).
Spin up Services: Clone the repository and configure the .env file with your QDRANT_COLLECTION name. Run docker compose up -d to start the n8n and Qdrant containers.
Import the Workflow: Open n8n at http://localhost:5678 and import the provided JSON workflow file.
Link Services: Update the Google Drive nodes with the File ID of your FAQ JSON document. Update the embedding and AI nodes with your local IP address in the Base URL.
Test and Activate: Execute the ingestion pipeline manually to populate Qdrant, toggle the workflow to Active, and send a test email to your connected Gmail address to verify the automated reply.
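The semantic search step above (top 3 matches, minimum 0.7 similarity) amounts to a filter-and-slice over the scored hits the vector store returns. A minimal sketch, assuming each hit carries a `score` and a `payload` holding the FAQ question and answer:

```javascript
// Sketch: keep only confident FAQ matches, assuming hits shaped
// like { score: 0.83, payload: { question, answer } } (assumed shape).
function topRelevant(hits, { threshold = 0.7, limit = 3 } = {}) {
  return hits
    .filter((hit) => hit.score >= threshold) // quality gate
    .sort((a, b) => b.score - a.score)       // best match first
    .slice(0, limit);                        // top-k for the prompt
}

// The surviving answers become the context handed to the LLM node.
function toContext(hits) {
  return hits
    .map((h) => `Q: ${h.payload.question}\nA: ${h.payload.answer}`)
    .join('\n\n');
}
```

When nothing clears the threshold, the context is empty, which is a natural place to route the email to a human instead of auto-replying.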
🔑 Credential Setup
To run this workflow, you must configure the following credentials in n8n:
Google (Gmail & Drive)**: Create new Gmail OAuth2 API and Google Drive OAuth2 API credentials. Enter your Client ID and Client Secret obtained from the Google Cloud Console (the same credentials can be used for both).
Qdrant API**: Create a new Qdrant API credential. Set the REST URL to http://host.docker.internal:6333 and leave the API key blank for the self-hosted Docker setup.
OpenAI API (Local)**: Create a new OpenAI API credential for connecting to LM Studio. API Key: enter any placeholder text (e.g., lm-studio). Base URL: set this to your machine's local IP address (e.g., http://<LM_STUDIO_IP>:1234/v1) so n8n can reach the local AI server from within the Docker network.
⚙️ How to customize
Refine Response Tone**: Update the System Message in the AI node to change the personality, signature, or formatting rules of the generated email reply.
Switch to Cloud AI**: If you prefer not to host models locally, swap out the local LM Studio connection for external APIs like OpenAI (GPT-4o), Anthropic (Claude), or Cohere for both embeddings and text generation.
Change Embedding Models**: While the workflow uses a local model by default, you can easily swap the embedding nodes to use alternative models like OpenAI (text-embedding-3-small) or Google Gemini (text-embedding-004) if desired.
Adjust Similarity Threshold**: Modify the semantic search threshold (default 0.7) in the Qdrant node to be stricter or more lenient depending on your knowledge base accuracy.
Alternative Triggers & Channels**: Replace the Gmail nodes with Outlook / Microsoft 365, Zendesk, Intercom, or Slack to resolve queries across different communication platforms.
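Because LM Studio exposes an OpenAI-compatible API, the OpenAI node configured above is effectively POSTing a standard chat-completions body to `http://<LM_STUDIO_IP>:1234/v1/chat/completions`. A sketch of that payload; the system prompt shown is illustrative, not the one shipped in the workflow:

```javascript
// Sketch: the chat-completions payload sent to LM Studio. The model
// name must match the model loaded in LM Studio; the system message
// here is an assumption standing in for the workflow's real prompt.
function buildReplyRequest(context, studentEmail) {
  return {
    model: 'llama-3.2-3b-instruct',
    messages: [
      {
        role: 'system',
        content:
          'Answer the student politely in HTML, using only the FAQ context provided.',
      },
      {
        role: 'user',
        content: `FAQ context:\n${context}\n\nStudent email:\n${studentEmail}`,
      },
    ],
    temperature: 0.3, // low temperature keeps replies consistent
  };
}
```

Swapping to a cloud provider, as suggested above, mostly means changing the `model` value and the Base URL; the payload shape stays the same.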
by Amina Doszhan
Hi, I’m Amina. I built this workflow to remove the daily pain of Meta Ads reporting. If you manage multiple ad accounts, you know how time-consuming it is to open Ads Manager, export campaign data, clean spreadsheets, and send updates to clients. This automation pulls campaign performance directly from the Meta Ads API, writes structured data into Google Sheets (per client), generates a performance summary, and sends a Telegram alert when the report is updated. It transforms manual reporting into a fully automated monitoring system.
What it does
Reads a list of clients/ad accounts from a Google Sheets “client register”
Fetches campaign-level performance data from the Meta Ads API
Splits the campaign results into clean, row-ready records
Appends each campaign as a new row in the client’s Google Sheets report
Calculates aggregated performance metrics
Applies campaign diagnostics logic
Sends a Telegram notification with the summary and alerts
Runs automatically on a schedule (daily/weekly)
How it works
Schedule Trigger starts the workflow on a defined schedule.
Google Sheets (Get rows) loads your client register (one row per client). Use this template structure for the client register: 👉 Client Register Template. The register should include: ad_account_id, access_token, report_sheet_url.
Loop Over Items processes each client individually.
Code (ctx) prepares the current client context (account ID, token, report sheet URL).
HTTP Request calls the Meta Ads Insights endpoint and retrieves campaign-level metrics.
IF checks the response (skip if no data).
Merge combines the client context with the API response.
Code (Split campaigns) converts the campaigns array into individual items (one per campaign) and formats metrics for reporting.
Code (Extract spreadsheetId) extracts the spreadsheet ID from the report URL.
Google Sheets (Append row) writes each campaign row into the client’s report sheet.
Example report structure: 👉 Campaign Report Template
Code (Summary & Status Logic) aggregates totals and applies campaign diagnostics.
Telegram sends a structured performance summary including: total metrics, campaign-level highlights, status alerts, and a direct link to the report.
Data captured (campaign level)
Ad account ID
Report date (date_start)
Campaign name
Spend
Impressions
Clicks
CTR
CPM
CPC
Date start / Date stop
Summary Generation Logic
After writing campaign rows to Google Sheets, the workflow generates a performance overview using a JavaScript node.
Aggregated calculations:
Total Spend** = sum of all campaign spend values
Total Impressions** = sum of impressions
Total Clicks** = sum of clicks
CTR** = (Total Clicks / Total Impressions) × 100
CPC** = Total Spend / Total Clicks
The workflow then formats the totals into a readable performance summary, lists campaign highlights, applies status diagnostics, appends the Google Sheets report link, and sends everything via Telegram. This provides both structured spreadsheet reporting and a quick executive snapshot.
Status Logic (Campaign Diagnostics)
Each campaign is automatically evaluated against CTR and CPC thresholds.
Status Rules
Weak Creative (❌): If CTR < 1.5% → low engagement; the ad creative may need improvement.
Expensive Click (⚠): If CPC > 0.5 → high cost per click; audience targeting or bidding strategy may need optimization.
Good Candidate to Scale (🔥): If CTR ≥ 2% AND CPC ≤ 0.5 → strong performance; the campaign may be suitable for scaling.
OK (✅): If none of the above conditions are met → performance is within an acceptable range.
The status appears directly in the Telegram notification, allowing marketers to quickly identify which campaigns need attention without logging into Ads Manager.
How to set up
Estimated setup time: 10–20 minutes.
Create a Google Sheets client register with: ad_account_id, access_token, report_sheet_url.
Connect Google Sheets credentials in n8n.
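The Code-node logic described above (extracting the spreadsheet ID, aggregating totals, and applying the status rules) can be sketched as plain JavaScript. The campaign field names (`spend`, `impressions`, `clicks`, `ctr`, `cpc`) are assumptions based on the metrics listed:

```javascript
// Sketch: pull the spreadsheet ID out of a report_sheet_url value,
// e.g. https://docs.google.com/spreadsheets/d/<ID>/edit#gid=0
function extractSpreadsheetId(url) {
  const match = /\/spreadsheets\/d\/([a-zA-Z0-9_-]+)/.exec(url || '');
  if (!match) throw new Error(`Not a Google Sheets URL: ${url}`);
  return match[1];
}

// Sketch: totals across campaign rows (field names assumed).
function summarize(campaigns) {
  const spend = campaigns.reduce((s, c) => s + c.spend, 0);
  const impressions = campaigns.reduce((s, c) => s + c.impressions, 0);
  const clicks = campaigns.reduce((s, c) => s + c.clicks, 0);
  return {
    spend,
    impressions,
    clicks,
    ctr: impressions ? (clicks / impressions) * 100 : 0, // percent
    cpc: clicks ? spend / clicks : 0,
  };
}

// Sketch: per-campaign status, checked in the order the rules appear.
function diagnose({ ctr, cpc }) {
  if (ctr < 1.5) return '❌ Weak Creative';
  if (cpc > 0.5) return '⚠ Expensive Click';
  if (ctr >= 2 && cpc <= 0.5) return '🔥 Good Candidate to Scale';
  return '✅ OK';
}
```

A campaign with a CTR between 1.5% and 2% at a CPC at or below 0.5 falls through to ✅ OK, matching the "none of the above" rule.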
Add your Meta Ads API access token (do not hardcode API keys in nodes).
Connect your Telegram bot and set the destination chat ID.
Adjust the schedule (daily/weekly) and run a test execution.
Requirements
Meta Ads API access token (Facebook Graph API)
Google Sheets credentials
Telegram bot token + chat ID
How to customize
Add additional metrics (e.g., conversions, purchases, ROAS) by extending the Meta API fields.
Modify the CTR/CPC thresholds in the Status Logic section.
Change the report structure in the “Split campaigns” step.
Switch notifications from Telegram to Slack or Email.
Add filters (e.g., only active campaigns, only spend > X).
Benefits
Fully automated reporting
Multi-client support
Dynamic spreadsheet handling
Built-in KPI calculations
Automated campaign diagnostics
Instant performance alerts
Scalable, agency-ready structure
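The filter customization above (only active campaigns, only spend above a threshold) is a one-line pre-filter before the append step. A sketch; the `status` field name and the 1.0 spend floor are assumptions to adjust:

```javascript
// Sketch: drop campaigns that shouldn't reach the report. Both the
// `status` field name and the default spend floor are assumptions.
function reportable(campaigns, { minSpend = 1.0 } = {}) {
  return campaigns.filter(
    (c) => c.status === 'ACTIVE' && c.spend > minSpend
  );
}
```

Placed in the "Split campaigns" Code node, this keeps paused and near-zero-spend campaigns out of both the spreadsheet and the Telegram summary.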