by Gofive
# Template: Create an AI Knowledge Base Chatbot with Google Drive and OpenAI GPT (Venio/Salesbear)

## 📋 Template Overview
This comprehensive n8n workflow template creates an intelligent AI chatbot that automatically transforms your Google Drive documents into a searchable knowledge base. The chatbot uses OpenAI's GPT models to provide accurate, context-aware responses based exclusively on your uploaded documents, making it well suited to customer support, internal documentation, and knowledge management systems.

## 🎯 What This Template Does

### Automated Knowledge Processing
- **Real-time Document Monitoring**: Automatically detects when files are added or updated in your designated Google Drive folder
- **Intelligent Document Processing**: Converts PDFs, text files, and other documents into searchable vector embeddings
- **Smart Text Chunking**: Breaks large documents into optimally sized chunks for better AI comprehension
- **Vector Storage**: Creates a searchable knowledge base that the AI can query for relevant information

### AI-Powered Chat Interface
- **Webhook Integration**: Receives questions via HTTP requests from any external platform (Venio/Salesbear)
- **Contextual Responses**: Maintains conversation history for natural, flowing interactions
- **Source-Grounded Answers**: Provides responses based strictly on your document content, preventing hallucinations
- **Multi-platform Support**: Works with any chat platform that can send HTTP requests

## 🔧 Pre-conditions and Requirements

### Required API Accounts and Permissions

1. **Google Drive API Access**
   - Google Cloud Platform account
   - Google Drive API enabled
   - OAuth2 credentials configured
   - Read access to your target Google Drive folder
2. **OpenAI API Account**
   - Active OpenAI account with API access
   - Sufficient API credits for embeddings and chat completions
   - API key with appropriate permissions
3. **n8n Instance**
   - n8n cloud account or self-hosted instance
   - Webhook functionality enabled
   - Ability to install community nodes (LangChain nodes)
4. **Target Chat Platform (Optional)**
   - API credentials for your chosen chat platform
   - Webhook capability or API endpoints for message sending

### Required Permissions
- **Google Drive**: Read access to folder contents and file downloads
- **OpenAI**: API access for the text-embedding-ada-002 and gpt-4o-mini models
- **External Platform**: API access for sending/receiving messages (if integrating with existing chat systems)

## 🚀 Detailed Workflow Operation

### Phase 1: Knowledge Base Creation
1. **File Monitoring**: Two trigger nodes continuously monitor your Google Drive folder for new files or updates
2. **Document Discovery**: When changes are detected, the workflow searches for and identifies the modified files
3. **Content Extraction**: Downloads the file content from Google Drive
4. **Text Processing**: Uses LangChain's document loader to extract text from various file formats
5. **Intelligent Chunking**: Splits documents into overlapping chunks (configurable size) for optimal AI processing
6. **Vector Generation**: Creates embeddings using OpenAI's text-embedding-ada-002 model
7. **Storage**: Stores vectors in an in-memory vector store for instant retrieval

### Phase 2: Chat Interaction
1. **Question Reception**: The webhook receives user questions in JSON format
2. **Data Extraction**: Parses the incoming data to extract chat content and session information
3. **AI Processing**: The AI Agent analyzes the question and determines the relevant context
4. **Knowledge Retrieval**: Searches the vector store for the most relevant document sections
5. **Response Generation**: OpenAI generates a response based on the found content and conversation history
6. **Authentication**: Validates the request using token-based authentication
7. **Response Delivery**: Sends the answer back to the originating platform

## 📚 Usage Instructions After Setup

### Adding Documents to Your Knowledge Base
- **Upload Files**: Simply drag and drop documents into your configured Google Drive folder
- **Supported Formats**: PDF, TXT, DOC, DOCX, and other text-based formats
- **Automatic Processing**: The workflow automatically detects and processes new files within minutes
- **Updates**: Modify existing files, and the knowledge base updates automatically

### Integrating with Your Chat Platform
Use the generated webhook URL to send questions:

```
POST https://your-n8n-domain/webhook/your-custom-path
Content-Type: application/json

{
  "body": {
    "Data": {
      "ChatMessage": {
        "Content": "What are your business hours?",
        "RoomId": "user-123-session",
        "Platform": "web",
        "User": {
          "CompanyId": "company-456"
        }
      }
    }
  }
}
```

**Response Format**: The chatbot returns structured responses that your platform can display.

### Testing Your Chatbot
1. **Initial Test**: Send a simple question about content you know exists in your documents
2. **Context Testing**: Ask follow-up questions to test conversation memory
3. **Edge Cases**: Try questions about topics not in your documents to verify appropriate fallback responses
4. **Performance**: Monitor response times and accuracy

## 🎨 Customization Options

### System Message Customization
Modify the AI Agent's system message to match your brand and use case:

```
You are a [YOUR_BRAND] customer support specialist. You provide helpful,
accurate information based on our documentation. Always maintain a [TONE]
tone and [SPECIFIC_GUIDELINES].
```

### Response Behavior Customization
- **Tone and Voice**: Adjust from professional to casual, formal to friendly
- **Response Length**: Configure for brief answers or detailed explanations
- **Fallback Messages**: Customize what the bot says when it can't find relevant information
- **Language Support**: Adapt for different languages or technical terminologies

### Technical Configuration Options

**Document Processing**
- **Chunk Size**: Adjust from 1000 to 4000 characters based on your document complexity
- **Overlap**: Modify the overlap percentage for better context preservation
- **File Types**: Add support for additional document formats

**AI Model Configuration**
- **Model Selection**: Switch between gpt-4o-mini (cost-effective) and gpt-4 (higher quality)
- **Temperature**: Adjust creativity vs. factual accuracy (0.0 to 1.0)
- **Max Tokens**: Control response length limits

**Memory and Context**
- **Conversation Window**: Adjust how many previous messages to remember
- **Session Management**: Configure session timeout and user identification
- **Context Retrieval**: Tune how many document chunks to consider per query

### Integration Customization

**Authentication Methods**
- **Token-based**: Default implementation with bearer tokens
- **API Key**: Simple API key validation
- **OAuth**: Full OAuth2 implementation for secure access
- **Custom Headers**: Validate specific headers or signatures

**Response Formatting**
- **JSON Structure**: Customize the response format for your platform
- **Markdown Support**: Enable rich text formatting in responses
- **Error Handling**: Define custom error messages and codes

## 🎯 Specific Use Case Examples

### Customer Support Chatbot
- **Scenario**: E-commerce company with product documentation, return policies, and FAQ documents
- **Setup**: Upload product manuals, policy documents, and common questions to Google Drive
- **Customization**: Professional tone, concise answers, escalation triggers for complex issues
- **Integration**: Website chat widget, mobile app, or customer portal

### Internal HR Knowledge Base
- **Scenario**: Company HR department with employee handbook, policies, and procedures
- **Setup**: Upload HR policies, benefits information, and procedural documents
- **Customization**: Friendly but professional tone, detailed policy explanations
- **Integration**: Internal Slack bot, employee portal, or HR ticketing system

### Technical Documentation Assistant
- **Scenario**: Software company with API documentation, user guides, and troubleshooting docs
- **Setup**: Upload API docs, user manuals, and technical specifications
- **Customization**: Technical tone, code examples, step-by-step instructions
- **Integration**: Developer portal, support ticket system, or documentation website

### Educational Content Helper
- **Scenario**: Educational institution with course materials, policies, and student resources
- **Setup**: Upload syllabi, course content, academic policies, and student guides
- **Customization**: Helpful and encouraging tone, detailed explanations
- **Integration**: Learning management system, student portal, or mobile app

### Healthcare Information Assistant
- **Scenario**: Medical practice with patient information, procedures, and policy documents
- **Setup**: Upload patient guidelines, procedure explanations, and practice policies
- **Customization**: Compassionate tone, clear medical explanations, disclaimer messaging
- **Integration**: Patient portal, appointment system, or mobile health app

## 🔧 Advanced Customization Examples

### Multi-Language Support
```javascript
// In the Edit Fields node, detect the language and route accordingly
const language = $json.body.Data.ChatMessage.Language || 'en';
const systemMessage = {
  'en': 'You are a helpful customer support assistant...',
  'es': 'Eres un asistente de soporte al cliente útil...',
  'fr': 'Vous êtes un assistant de support client utile...'
};
```

### Department-Specific Routing
```javascript
// Route questions to different knowledge bases based on department
const department = $json.body.Data.ChatMessage.Department;
const vectorStoreKey = `vector_store_${department}`;
```

### Advanced Analytics Integration
```javascript
// Track conversation metrics
const analytics = {
  userId: $json.body.Data.ChatMessage.User.Id,
  timestamp: new Date().toISOString(),
  question: $json.body.Data.ChatMessage.Content,
  response: $json.response,
  responseTime: $json.processingTime
};
```

## 📊 Performance Optimization Tips

### Document Management
- **Optimal File Size**: Keep documents under 10MB for faster processing
- **Clear Structure**: Use headers and sections for better chunking
- **Regular Updates**: Remove outdated documents to maintain accuracy
- **Logical Organization**: Group related documents in subfolders

### Response Quality
- **System Message Refinement**: Regularly update based on user feedback
- **Context Tuning**: Adjust chunk size and overlap for your specific content
- **Testing Framework**: Implement systematic testing for response accuracy
- **User Feedback Loop**: Collect and analyze user satisfaction data

### Cost Management
- **Model Selection**: Use gpt-4o-mini for cost-effective responses
- **Caching Strategy**: Implement response caching for frequently asked questions
- **Usage Monitoring**: Track API usage and set up alerts
- **Batch Processing**: Process multiple documents efficiently

## 🛡️ Security and Compliance

### Data Protection
- **Document Security**: Ensure sensitive documents are properly secured
- **Access Control**: Implement proper authentication and authorization
- **Data Retention**: Configure appropriate data retention policies
- **Audit Logging**: Track all interactions for compliance

### Privacy Considerations
- **User Data**: Minimize collection and storage of personal information
- **Session Management**: Implement secure session handling
- **Compliance**: Ensure adherence to relevant privacy regulations
- **Encryption**: Use HTTPS for all communications

## 🚀 Deployment and Scaling

### Production Readiness
- **Environment Variables**: Use environment variables for sensitive configurations
- **Error Handling**: Implement comprehensive error handling and logging
- **Monitoring**: Set up monitoring for workflow health and performance
- **Backup Strategy**: Ensure document and configuration backups

### Scaling Considerations
- **Load Testing**: Test with expected user volumes
- **Rate Limiting**: Implement appropriate rate limiting
- **Database Scaling**: Consider an external vector database for large-scale deployments
- **Multi-Instance**: Configure for multiple n8n instances if needed

## 📈 Success Metrics and KPIs

### Quantitative Metrics
- **Response Accuracy**: Percentage of correct answers
- **Response Time**: Average time from question to answer
- **User Satisfaction**: Rating scores and feedback
- **Usage Volume**: Questions per day/week/month
- **Cost Efficiency**: Cost per interaction

### Qualitative Metrics
- **User Feedback**: Qualitative feedback on response quality
- **Use Case Coverage**: Percentage of user needs addressed
- **Knowledge Gaps**: Identification of missing information
- **Conversation Quality**: Natural flow and context understanding
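The overlapping chunking described in Phase 1 can be sketched in a few lines. This is a minimal character-based illustration with assumed default sizes; the template itself uses LangChain's text splitter inside n8n, which also handles separators and token limits.

```javascript
// Minimal sketch of overlapping text chunking. Sizes are illustrative
// defaults, not the node's exact configuration.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // each new chunk repeats `overlap` chars
  }
  return chunks;
}

const chunks = chunkText('a'.repeat(2500), 1000, 200);
// Every chunk except the last is exactly chunkSize characters long,
// and consecutive chunks share `overlap` characters of context.
```

The shared overlap is what preserves context across chunk boundaries, which is why the "Overlap" setting above trades storage cost against retrieval quality.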
by Alexey from Mingles.ai
# AI Image Generator & Editor with GPT-4 Vision - Complete Workflow Template

## Description
Transform text prompts into stunning images or edit existing visuals using OpenAI's latest image model through an intuitive web form interface. This comprehensive n8n automation provides three powerful image generation modes:

### 🎨 Text-to-Image Generation
Simply enter a descriptive prompt and generate high-quality images from scratch using OpenAI's gpt-image-1 model. Perfect for creating original artwork, concepts, or visual content.

### 🖼️ Image-to-Image Editing
Upload an existing image file and transform it based on your text prompt. The AI analyzes your input image and applies modifications while maintaining the original structure and context.

### 🔗 URL-Based Image Editing
Provide a direct URL to any online image and edit it with AI. Great for quick modifications of web images or collaborative workflows.

## Key Features

### Smart Input Processing
- **Flexible Form Interface**: User-friendly web form with authentication
- **Multiple Input Methods**: File upload, URL input, or text-only generation
- **Quality Control**: Selectable quality levels (low, medium, high)
- **Format Support**: Accepts PNG, JPG, and JPEG formats

### Advanced AI Integration
- **Latest Model**: Uses gpt-image-1 for superior results
- **Intelligent Switching**: Automatically detects the input type and routes accordingly
- **Context-Aware Editing**: Maintains image coherence during modifications
- **Customizable Parameters**: Control size (1024x1024), quality, and generation settings

### Dual Storage Options
- **Google Drive Integration**: Automatic upload with public sharing permissions
- **ImgBB Hosting**: Alternative cloud storage for instant public URLs
- **File Management**: Organized storage with timestamp-based naming

### Instant Telegram Delivery
- **Real-time Notifications**: Results sent directly to your Telegram chat
- **Rich Media Messages**: Includes the generated image with prompt details
- **Quick Access Links**: Direct links to view and download results
- **Markdown Formatting**: Clean, professional message presentation

## Technical Workflow
1. **Form Submission** → User submits a prompt and optional image
2. **Smart Routing** → System detects the input type (text/file/URL)
3. **AI Processing** → OpenAI generates or edits the image based on the mode
4. **Binary Conversion** → Converts the base64 response to a downloadable file
5. **Cloud Upload** → Stores the result in Google Drive or ImgBB with public access
6. **Telegram Delivery** → Sends the result with viewing links and metadata

## Perfect For
- **Content Creators**: Generate unique visuals for social media and marketing
- **Designers**: Quick concept development and image variations
- **Developers**: Automated image processing for applications
- **Teams**: Collaborative image editing and sharing workflows
- **Personal Use**: Transform ideas into visual content effortlessly

## Setup Requirements
- **OpenAI API Key**: Access to the gpt-image-1 model
- **Google Drive API** (optional): For Google Drive storage
- **ImgBB API Key** (optional): For alternative image hosting
- **Telegram Bot**: For result delivery
- **Basic Auth Credentials**: For form security

## What You Get
✅ Complete image generation and editing pipeline
✅ Secure web form with authentication
✅ Dual cloud storage options
✅ Instant Telegram notifications
✅ Professional result formatting
✅ Flexible input methods
✅ Quality control settings
✅ Automated file management

Start creating AI-powered images in minutes with this production-ready template!

Tags: #AI #ImageGeneration #OpenAI #GPT4 #ImageEditing #Telegram #GoogleDrive #Automation #ComputerVision #ContentCreation
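The "Binary Conversion" step of the workflow above can be illustrated in plain Node.js: the image model returns the picture as a base64 string, which must be decoded into a binary buffer and given a timestamp-based filename before the cloud upload. This is a minimal sketch; the helper name and the exact response field the real node reads are assumptions.

```javascript
// Decode a base64 image payload into a binary buffer and pick a
// timestamp-based filename (names here are illustrative, not the
// exact n8n node configuration).
function toBinaryFile(b64, prefix = 'ai-image') {
  const buffer = Buffer.from(b64, 'base64');
  const fileName = `${prefix}-${Date.now()}.png`;
  return { buffer, fileName };
}

const sample = Buffer.from('hello').toString('base64'); // "aGVsbG8="
const file = toBinaryFile(sample);
// file.buffer decodes back to the original bytes
```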
by Diego Alejandro Parrás
# AI Job Application Tracker and Interview Prep Assistant

**Categories:** AI, Productivity, Career

Transform your job search from chaos to clarity. This workflow automatically tracks every application, researches companies, and prepares you for interviews with AI-generated materials, all saved to a visual Notion pipeline.

## Benefits
- **Never lose track of applications**: Every job gets logged automatically with status tracking
- **Walk into interviews prepared**: AI generates likely questions, talking points, and company insights
- **Save 2-3 hours per application**: Research and prep that took hours now happens in seconds
- **Automated follow-up reminders**: Get notified when it's time to send that follow-up email

## How It Works
1. Submit an application via the form (paste the job URL) or forward your confirmation email
2. AI extracts the job details: company, role, requirements, salary, location
3. Interview prep is generated: likely questions, suggested talking points, questions to ask
4. Everything saves to Notion in a visual pipeline with follow-up dates
5. Daily reminders: a Slack notification for applications needing follow-up

## Required Setup

### Notion Database Structure
Create a Notion database with these properties:

| Property Name | Type | Purpose |
|---------------|------|---------|
| Company | Title | Company name |
| Role | Text | Job title |
| Status | Select | Applied, Interviewing, Offer, Rejected, Ghosted |
| Applied Date | Date | When you applied |
| Salary Range | Text | Compensation info |
| Job URL | URL | Link to posting |
| Location | Text | City/Remote |
| Interview Prep | Text | AI-generated prep materials |
| Follow Up Date | Date | When to follow up |
| Requirements | Text | Key job requirements |
| Notes | Text | Your personal notes |

### Credentials Needed
- **OpenAI API**: For job extraction and interview prep generation
- **Notion**: Connected to your job tracker database
- **Gmail** (optional): For email forwarding and confirmations
- **Slack** (optional): For follow-up reminders

## Use Cases
- **Active job seekers**: Track 20+ applications without spreadsheet chaos
- **Career changers**: Get AI help understanding new industry requirements
- **Recent graduates**: Build interview confidence with generated prep materials
- **Passive searchers**: Keep a running list with minimal effort

## Set Up Steps
1. Import the workflow into your n8n instance
2. Create the Notion database with the structure above (or duplicate the template)
3. Connect OpenAI credentials (an API key with GPT-5 access is recommended)
4. Connect Notion credentials and select your job tracker database
5. Configure the Gmail trigger (optional): set a filter for forwarded job emails
6. Set up the Slack webhook (optional): choose a channel for reminders
7. Test with a sample job posting: paste a LinkedIn or company careers page URL

## Customization Tips
- Edit the interview prep prompt to mention your background/experience
- Adjust the follow-up reminder interval (default: 7 days)
- Add additional research sources (LinkedIn, Crunchbase) for richer company data
- Connect to a calendar to block interview prep time automatically

## Technical Notes
- Uses Jina AI Reader for web scraping (free tier available)
- GPT-5-mini is recommended for cost efficiency
- Notion text fields are limited to 2000 characters (the full prep is saved)
- The daily check runs at 9 AM (configurable)

**Difficulty Level:** Intermediate
**Estimated Setup Time:** 30-45 minutes
**Monthly Operating Cost:** ~$2-5 (based on 50 applications/month with GPT-5-mini)
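The follow-up date logic above can be sketched as a small helper: add the reminder interval (default 7 days, matching the template's default) to the applied date and format it for Notion's Date property. A minimal sketch under those assumptions:

```javascript
// Compute the follow-up date for a new application. The 7-day default
// matches the template's default reminder interval; the YYYY-MM-DD
// output matches Notion's Date property format.
function followUpDate(appliedDate, intervalDays = 7) {
  const d = new Date(appliedDate); // ISO date strings parse as UTC midnight
  d.setUTCDate(d.getUTCDate() + intervalDays);
  return d.toISOString().slice(0, 10);
}

followUpDate('2024-03-01'); // → '2024-03-08'
```

Using the UTC accessors keeps the result stable regardless of the server's timezone, which matters for a daily 9 AM reminder check.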
by Cheng Siong Chin
## How It Works
This workflow automates regulatory compliance monitoring and policy violation detection for enterprises managing complex governance requirements. Designed for compliance officers, legal teams, and risk management departments, it addresses the challenge of continuous policy adherence across organizational activities while reducing manual audit overhead.

The system initiates on a schedule, triggering compliance checks across operational data. Solar compliance data generation simulates policy document collection from various business units. Claude AI performs comprehensive policy validation against regulatory frameworks, while parallel NVIDIA governance models analyze specific compliance dimensions through structured outputs. The workflow routes findings by compliance status: violations trigger immediate escalation emails to compliance teams with detailed Slack notifications, warnings generate supervisor alerts with tracking mechanisms, and compliant activities proceed to standard documentation. All execution paths merge for consolidated audit trail creation, logging enforcement actions and generating governance reports for regulatory submissions.

## Setup Steps
1. Configure the Schedule Compliance Check node with the monitoring frequency
2. Add Claude AI credentials in the Workflow Configuration and Policy Validation nodes
3. Set up NVIDIA API keys for the governance output parser and agent modules in the respective nodes
4. Connect Gmail authentication for compliance team alerts and configure recipient distribution lists
5. Integrate Slack workspace credentials and specify the compliance channel webhooks

## Prerequisites
Claude API access, NVIDIA API credentials, Gmail/Google Workspace account

## Use Cases
Financial services regulatory compliance (SOX, GDPR), healthcare HIPAA monitoring

## Customization
Add industry-specific regulatory frameworks; integrate document management systems

## Benefits
Reduces compliance audit time by 70%; ensures consistent policy application across departments
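The three-way routing by compliance status described above can be sketched as a small function. The status values and notification channel names are assumptions about the validator's structured output, not the workflow's exact field names:

```javascript
// Sketch of the status routing: violations escalate with email plus
// Slack, warnings alert a supervisor, and everything else proceeds to
// standard audit logging. Status strings are illustrative assumptions.
function routeFinding(finding) {
  switch (finding.status) {
    case 'violation':
      return { action: 'escalate', notify: ['compliance-email', 'slack'] };
    case 'warning':
      return { action: 'supervisor-alert', notify: ['email'] };
    default:
      return { action: 'log', notify: [] };
  }
}
```

Keeping the fallback branch as "log" mirrors the workflow's design: every path, including the compliant one, still reaches the consolidated audit trail.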
by Cheng Siong Chin
## How It Works
This workflow automates competitive intelligence gathering and market analysis for businesses that need real-time insights on competitors, industry trends, and market positioning. Designed for marketing teams, strategy analysts, and business development professionals, it solves the time-intensive challenge of manually monitoring competitor activities across multiple channels.

The system schedules regular data collection, fetches competitor information from various sources, employs multiple AI agents (OpenAI for analysis, sentiment evaluation, and report generation) to process the data, validates outputs through structured parsing, and delivers comprehensive reports via email. By automating data aggregation, sentiment analysis, and insight generation, organizations gain actionable intelligence faster, identify market opportunities proactively, and maintain competitive advantage through continuous monitoring, which is essential in dynamic markets where timing determines success.

## Setup Steps
1. Connect the Schedule Trigger (set the monitoring frequency: daily/weekly)
2. Configure the Fetch Data node with competitor website URLs/APIs
3. Add OpenAI API keys to all AI agent nodes
4. Link Google Sheets credentials for storing historical analysis data
5. Configure the Gmail node with SMTP credentials for report distribution
6. Set up Slack/Discord webhooks for instant critical alert notifications

## Prerequisites
OpenAI API account (GPT-4 recommended), competitor data sources/APIs

## Use Cases
SaaS competitor feature tracking, retail pricing intelligence

## Customization
Modify the AI prompts for industry-specific metrics; adjust sentiment thresholds for alert triggers

## Benefits
Reduces research time by 85%; provides 24/7 competitor monitoring; eliminates manual data aggregation
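The adjustable sentiment threshold mentioned under Customization can be sketched like this. The score range and the -0.5 cutoff are illustrative assumptions (scores on a -1 to 1 scale), not the workflow's actual configuration:

```javascript
// Sketch of threshold-based alerting on sentiment scores. Items at or
// below the threshold would feed the instant Slack/Discord alerts.
function needsAlert(items, threshold = -0.5) {
  return items.filter((i) => i.sentiment <= threshold);
}

needsAlert([
  { competitor: 'A', sentiment: 0.2 },
  { competitor: 'B', sentiment: -0.7 },
]); // only competitor B crosses the default threshold
```

Raising or lowering the threshold is the single knob that trades alert noise against the risk of missing a genuinely negative market signal.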
by Cheng Siong Chin
## How It Works
This workflow automates end-to-end AI-driven content moderation for platforms managing user-generated content, including marketplaces, communities, and enterprise systems. It is designed for product, trust & safety, and governance teams seeking scalable, policy-aligned enforcement without subjective scoring.

The workflow validates structured review, goal, and feedback data using a Performance Signal Agent that standardizes moderation signals and removes ambiguity. A Governance Agent then orchestrates policy enforcement, eligibility checks, escalation logic, and audit preparation. Content enters via webhook, is classified, validated, and routed by action type (approve, flag, escalate). Enforcement logic determines whether to store clean content, flag violations, or trigger escalation emails and team notifications. All actions are logged for traceability and compliance.

This template solves inconsistent moderation decisions, a lack of structured governance controls, and manual escalation overhead by embedding deterministic checkpoints, structured outputs, and audit-ready logging into a single automated pipeline.

## Setup Steps
1. Connect OpenAI API credentials for the AI agents.
2. Configure Google Sheets or a database for logging.
3. Connect Gmail for escalation emails.
4. Define moderation policies and routing rules.
5. Activate the webhook and test with sample content.

## Prerequisites
n8n account, OpenAI API key, Google Sheets or DB access, Gmail credentials, defined moderation policies

## Use Cases
Marketplace listing moderation, enterprise HR review screening

## Customization
Adjust policy rules; add risk scoring; integrate Slack instead of Gmail

## Benefits
Improves moderation accuracy; reduces manual review; enforces governance consistency
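The deterministic checkpoint plus action-type routing described above can be sketched as follows. The schema (an `action` field with `approve`/`flag`/`escalate` values and an optional `reason`) is an assumption about the agent's structured output:

```javascript
// Validate the agent's structured output before routing, then route by
// action type. An output that fails validation is escalated rather than
// guessed at, which keeps the pipeline deterministic.
const VALID_ACTIONS = ['approve', 'flag', 'escalate'];

function routeContent(result) {
  if (!VALID_ACTIONS.includes(result.action)) {
    return { route: 'escalate', reason: 'invalid-action' };
  }
  return { route: result.action, reason: result.reason || 'policy' };
}
```

Treating malformed output as an escalation (instead of a silent approval) is the checkpoint behavior that prevents ambiguous model responses from slipping through enforcement.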
by Rajeet Nair
## Overview
This workflow implements a complete Retrieval-Augmented Generation (RAG) knowledge assistant with built-in document ingestion, conversational AI, and automated analytics using n8n, OpenAI, and Pinecone. The system allows users to upload documents, automatically convert them into embeddings, query the knowledge base through a chat interface, and receive daily reports about chatbot performance and document usage.

Instead of manually searching through documentation, users can ask questions in natural language and receive answers grounded in the uploaded files. The workflow retrieves the most relevant document chunks from a vector database and provides them to the language model as context, ensuring accurate, source-based responses.

In addition to answering questions, the workflow records all chat interactions and generates daily usage analytics. These reports summarize chatbot activity, highlight the most referenced documents, and identify failed lookups where information could not be found.

This architecture is useful for teams building internal knowledge assistants, documentation chatbots, AI support tools, or searchable company knowledge bases powered by Retrieval-Augmented Generation.

## How It Works
1. **Document Upload Interface**: Users upload PDF, CSV, or JSON files through a form trigger. These documents become part of the knowledge base used by the chatbot.
2. **Document Processing**: Uploaded files are loaded and converted into text. The text is split into smaller chunks to improve embedding quality and retrieval accuracy.
3. **Embedding Generation**: Each text chunk is converted into vector embeddings using the OpenAI Embeddings node.
4. **Vector Database Storage**: The embeddings are stored in a Pinecone vector database, creating a searchable semantic index of the uploaded documents.
5. **Chat Interface**: Users interact with the knowledge base through a chat interface. Each message becomes a query sent to the RAG system.
6. **RAG Retrieval**: The workflow retrieves the most relevant document chunks from Pinecone and provides them to the language model as context.
7. **AI Response Generation**: The chatbot generates an answer using only the retrieved document information, ensuring responses remain grounded in the knowledge base.
8. **Chat Logging**: User questions, AI responses, timestamps, and referenced documents are logged, enabling monitoring and analytics of chatbot usage.
9. **Daily Analytics Workflow**: A scheduled trigger runs every morning and retrieves chat logs from the previous 24 hours.
10. **Report Generation**: Usage statistics are calculated, including total questions asked, failed document lookups, most referenced documents, and overall success rate.
11. **Email Summary**: A formatted HTML report is generated and sent via email to provide a daily overview of chatbot activity and knowledge base performance.

## Setup Instructions
1. **Configure Pinecone**: Create a Pinecone index for storing document embeddings and enter the index name in the Workflow Configuration node.
2. **Add OpenAI Credentials**: Configure credentials for the OpenAI Chat Model and OpenAI Embeddings nodes.
3. **Configure Data Tables**: Create the following n8n Data Tables: `form_responses` and `chat_logs`.
4. **Set Workflow Parameters**: In the Workflow Configuration node, configure the Pinecone namespace, chunk size, chunk overlap, and retrieval depth (top-K).
5. **Configure Email Notifications**: Add Gmail credentials to send the daily summary reports.
6. **Deploy the Workflow**: Share the document upload form with users and enable the chat interface for question answering.

## Use Cases
- **Internal Knowledge Assistant**: Allow employees to search internal documentation using natural language questions.
- **Customer Support Knowledge Base**: Provide instant answers from support manuals, product documentation, or help center articles.
- **Documentation Search Engine**: Turn large document collections into an AI-powered searchable knowledge system.
- **AI Helpdesk Assistant**: Enable support teams to quickly retrieve answers from company knowledge repositories.
- **Knowledge Base Analytics**: Monitor chatbot usage, identify missing documentation, and understand which files are most valuable to users.

## Requirements
- n8n with LangChain nodes enabled
- OpenAI API credentials
- Pinecone account and index
- Gmail credentials for sending reports
- n8n Data Tables: `form_responses`, `chat_logs`
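The daily report computation described above can be sketched as a reduction over the previous day's chat logs. The log field names (`documentsFound`, `referencedDocs`) are assumptions about the `chat_logs` table schema, not its exact columns:

```javascript
// Sketch of the daily analytics step: total questions, failed lookups,
// document reference ranking, and overall success rate.
function dailyStats(logs) {
  const total = logs.length;
  const failed = logs.filter((l) => !l.documentsFound).length;

  // Count how often each document was referenced in answers.
  const refs = {};
  for (const l of logs) {
    for (const doc of l.referencedDocs || []) {
      refs[doc] = (refs[doc] || 0) + 1;
    }
  }
  const mostReferenced = Object.entries(refs)
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);

  const successRate = total ? (total - failed) / total : 0;
  return { total, failed, mostReferenced, successRate };
}
```

The failed-lookup count is what surfaces missing documentation: questions the vector store could not answer point directly at gaps in the uploaded knowledge base.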
by Safa Khan
This n8n template demonstrates how to automatically process invoice attachments from email using OCR and AI. When an invoice is received in Gmail, the workflow extracts structured invoice data and stores it in Airtable while preventing duplicates.

This automation is useful for freelancers, agencies, and finance teams who receive invoices by email and want to maintain a structured invoice database without manual data entry.

## Who's it for
This workflow is designed for:
- Freelancers managing client invoices
- Agencies handling multiple invoice emails
- Finance teams automating invoice intake
- Automation consultants building accounting workflows

## How it works
1. When a new email with an attachment arrives in Gmail, the workflow checks whether the subject contains the word "invoice".
2. The attachment is uploaded to an image-hosting service to generate a public URL.
3. The invoice file is then processed using OCR to extract its text content.
4. An AI agent analyzes the extracted text and converts it into structured invoice fields such as invoice number, sender, recipient, dates, description, and amount.
5. Before saving, Airtable is searched to ensure the invoice does not already exist.
6. Valid invoices are stored automatically, while invalid data triggers an error notification email.

## How to set up
1. Connect Gmail credentials and enable attachment download.
2. Connect Airtable credentials and create an invoice table.
3. Add your imgbb API key in the HTTP Request node.
4. Connect your OCR provider (Mistral).
5. Connect your OpenAI API credentials.
6. Activate the workflow and send a test invoice email.

## Requirements
- Gmail account
- Airtable account
- OpenAI API key
- Mistral OCR API key
- imgbb API key
- n8n instance (cloud or self-hosted)

## How to customize the workflow
You can customize this workflow by:
- Modifying the AI prompt used for invoice extraction
- Adding new invoice fields in Airtable
- Changing the validation rules
- Supporting additional invoice formats (PDF, PNG, JPG)
- Adding integrations with accounting tools like QuickBooks or Stripe
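The duplicate-prevention step described above can be sketched as a comparison of the extracted invoice number against existing Airtable records. The field name `Invoice Number` and the record shape are assumptions about the Airtable table, not its required schema:

```javascript
// Duplicate guard sketch: normalize the extracted invoice number and
// check it against existing Airtable records before creating a new row.
function isDuplicate(existingRecords, extracted) {
  const key = (extracted.invoiceNumber || '').trim().toLowerCase();
  if (!key) return false; // nothing usable to compare; let validation handle it
  return existingRecords.some(
    (r) => ((r.fields || {})['Invoice Number'] || '').trim().toLowerCase() === key
  );
}
```

Normalizing case and whitespace matters here because OCR output is noisy: "INV-001" and "inv-001 " should count as the same invoice.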
by Tejasv Makkar
## 🚀 Overview
This n8n workflow automatically generates professional API documentation from C header (`.h`) files using AI. It scans a Google Drive folder for header files, extracts the source code, sends it to GPT-4o for structured analysis, and generates a polished HTML documentation page. The finished documentation is uploaded back to Google Drive and a completion email is sent. This workflow is ideal for embedded systems teams, firmware engineers, and SDK developers who want an automated documentation pipeline.

## ✨ Key Features
- ⚡ Fully automated documentation generation
- 📁 Reads `.h` files directly from Google Drive
- 🤖 Uses AI to analyze C APIs and extract documentation
- 📑 Generates clean HTML documentation
- 📊 Documents functions, types, enums, and constants
- 🔁 Processes files one by one for reliability
- ☁️ Saves generated documentation back to Google Drive
- 📧 Sends a completion email notification

## 🧠 What the AI Extracts
The workflow automatically identifies and documents:
- 📘 Overview of the header file
- 🔧 Functions: signatures, parameters, return values, usage examples
- 🧩 Enumerations
- 🧱 Data types & structures
- 🔢 Constants / macros
- 📝 Developer notes

## 🖥 Generated Documentation
The output is a clean, developer-friendly HTML documentation page including:
- 🧭 Sidebar navigation
- 📌 Function cards
- 📊 Parameter tables
- 💻 Code examples
- 🎨 Professional developer layout

Perfect for: developer portals, SDK documentation, internal engineering documentation, embedded system libraries.

## ⚙️ Workflow Architecture
| Step | Node | Purpose |
|------|------|---------|
| 1 | ▶️ Manual Trigger | Starts the workflow |
| 2 | 📂 Get all files | Reads files from Google Drive |
| 3 | 🔎 Filter .h files | Keeps only header files |
| 4 | 🔁 Split in Batches | Processes files sequentially |
| 5 | ⬇️ Download file | Downloads the header file |
| 6 | 📖 Extract text | Extracts code content |
| 7 | 🤖 AI Extraction | AI extracts API structure |
| 8 | 🧹 Parse JSON | Cleans AI output |
| 9 | 🎨 Generate HTML | Builds documentation page |
| 10 | ☁️ Upload to Drive | Saves documentation |
| 11 | 📧 Email notification | Sends completion email |

## 🔧 Requirements
To run this workflow you need:
- 🔹 Google Drive OAuth2 credentials
- 🔹 OpenAI API credentials
- 🔹 Gmail credentials

## 🛠 Setup Guide
### 1️⃣ Configure Google Drive
Create two folders: a source folder and an output folder. Update the folder IDs in the **Get all files from folder** and **Save documentation to Google Drive** nodes.

### 2️⃣ Configure OpenAI
Add an OpenAI credential in n8n. Model used: GPT-4o. The model analyzes C header files and returns structured API documentation.

### 3️⃣ Configure Gmail
Add a Gmail OAuth credential and update the recipient address inside the **📧 Email notification** node.

## ▶️ Run the Workflow
Click **Execute Workflow**. The workflow will:
1️⃣ Scan the Google Drive folder
2️⃣ Process each `.h` file
3️⃣ Generate HTML documentation
4️⃣ Upload the documentation to Drive
5️⃣ Send a completion email

## 🖼 Documentation Preview

## 💡 Use Cases
- 🔧 Embedded firmware documentation
- 📦 SDK documentation generation
- 🧑‍💻 Developer portal automation
- 📚 C library documentation
- ⚙️ Continuous documentation pipelines

## 🔮 Future Improvements
This workflow can be extended with several enhancements.

### 📄 PDF Documentation Export
Add a step to convert the generated HTML documentation into PDF files using tools such as Puppeteer, HTML-to-PDF services, or n8n community PDF nodes. This allows teams to distribute documentation as downloadable reports.
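The "🧹 Parse JSON" step exists because LLMs often wrap their JSON replies in markdown code fences or stray whitespace. A minimal sketch of that cleanup, shown in Python for illustration (in n8n this logic would live in a Code node; the exact fence-stripping behavior of the template's node is an assumption):

```python
import json
import re

def parse_ai_output(raw: str) -> dict:
    """Strip markdown code fences from an AI reply and parse the JSON inside."""
    # Remove a leading ```json (or bare ```) fence and a trailing ``` fence.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    return json.loads(cleaned)

raw_reply = """```json
{"overview": "GPIO driver API", "functions": [{"name": "gpio_init"}]}
```"""
doc = parse_ai_output(raw_reply)
print(doc["overview"])  # -> GPIO driver API
```

Plain JSON without fences passes through unchanged, so the same node handles both reply styles.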
### 🔐 Local AI for Security (Ollama / Open-Source Models)
Instead of using the OpenAI node, the workflow can be modified to run fully locally using AI models such as:
- **Ollama**
- **Open-source LLMs (Llama, Mistral, CodeLlama)**

These models can run on your own server, which provides:
- 🔒 Better data privacy
- 🏢 No external API calls
- ⚡ Faster responses on local infrastructure
- 🛡 Increased security for proprietary source code

This can be implemented in n8n using:
- **HTTP Request node → Ollama API**
- Local AI inference servers
- Private LLM deployments

### 📚 Multi-Language Documentation
The workflow could also support additional languages such as `.c`, `.cpp`, `.hpp`, `.rs`, and `.go`.
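The Ollama swap mostly amounts to pointing an HTTP Request node at Ollama's `/api/generate` endpoint. A hedged sketch of what that request looks like (the model name and prompt are placeholders, and a default local install on port 11434 is assumed; the code only builds the request, it does not call a server):

```python
import json
import urllib.request

def build_ollama_request(header_source: str, model: str = "codellama") -> urllib.request.Request:
    """Build the POST request an n8n HTTP Request node would send to Ollama."""
    payload = {
        "model": model,  # placeholder: any locally pulled model works
        "prompt": "Extract structured API documentation as JSON from this C header:\n"
                  + header_source,
        "stream": False,  # request one JSON response instead of a token stream
    }
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_ollama_request("int gpio_init(void);")
print(json.loads(req.data)["model"])  # -> codellama
```

With `"stream": False`, Ollama returns the generated text in the `response` field of a single JSON object, which the downstream Parse JSON step can consume the same way it consumes the OpenAI output.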
by Bernhard Zindel
## Summarize Google Alerts with Gemini
Turn your noisy Google Alerts folder into a concise, AI-curated executive briefing. This workflow replaces dozens of individual notification emails with a single, structured daily digest.

### How it works
- **Ingest:** Fetches unread Google Alerts emails from your Gmail inbox.
- **Clean:** Extracts article links, scrapes the website content, and strips away ads and clutter to ensure high-quality AI processing.
- **Analyze:** Uses Google Gemini to summarize each article into a concise 2-4 sentence overview.
- **Deliver:** Compiles a professional HTML email report sorted by topic, sends it to you, and automatically marks the original alerts as read.

### Set up steps
- **Connect Gmail:** Authenticate your Gmail account to allow reading alerts and sending the digest.
- **Connect Gemini:** Add your Google Gemini API key.
- **Configure Recipient:** Update the **Send Email Digest** node with your desired destination email address.
- **Schedule:** (Optional) Replace the Manual Trigger with a **Schedule Trigger** (e.g., every morning at 7 AM) to fully automate the process.
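One detail the "Clean" step has to handle: Google Alerts emails typically don't link articles directly, but wrap each link in a `google.com/url` redirect that carries the real destination in a `url` query parameter. A minimal extraction sketch (the redirect format is an assumption about current Google Alerts emails, and the function name is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def extract_article_url(alert_link: str):
    """Pull the real article URL out of a Google Alerts redirect link."""
    parsed = urlparse(alert_link)
    if parsed.netloc.endswith("google.com") and parsed.path == "/url":
        target = parse_qs(parsed.query).get("url")
        return target[0] if target else None
    return alert_link  # already a direct link, pass through unchanged

link = "https://www.google.com/url?rct=j&url=https://example.com/story&ct=ga"
print(extract_article_url(link))  # -> https://example.com/story
```

Unwrapping the redirect before scraping keeps the scraper pointed at the publisher's page rather than at Google's tracking hop.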
by Cheng Siong Chin
## How It Works
Automates daily learner engagement monitoring, progress analysis, and personalized feedback delivery for training programs. Target audience: learning and development teams, corporate training managers, and online education platforms scaling instructor workload. Problem solved: manual progress tracking consumes instructor time, and AI analysis identifies struggling learners early for intervention. The workflow runs daily checks on learner activity, retrieves course data and progress, analyzes engagement with OpenAI models, evaluates quiz scores, generates performance summaries, sends progress reports to learners, emails instructors about at-risk cases, generates learning paths, and triggers manager notifications.

## Setup Steps
1. Configure the daily schedule trigger.
2. Connect learning management system (LMS) APIs.
3. Set OpenAI keys for progress analysis.
4. Enable Gmail for multi-recipient notifications.
5. Map learner risk thresholds and escalation rules.

## Prerequisites
LMS platform credentials, OpenAI API key, learner database, email service for notifications, manager contact lists.

## Use Cases
Corporate onboarding programs tracking employee progress; online learning platforms identifying struggling students.

## Customization
Adjust the AI analysis criteria for your curriculum. Integrate Slack for instructor alerts.

## Benefits
Reduces instructor workload by 70% and identifies at-risk learners two weeks early.
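The "map learner risk thresholds and escalation rules" step can be pictured as a small rule table applied to raw LMS metrics before anything reaches the AI. A sketch with made-up cutoffs (the thresholds and field names are assumptions; adapt them to your LMS data):

```python
def classify_learner(avg_quiz_score: float, days_since_active: int) -> dict:
    """Map raw LMS metrics to a risk level and escalation decisions."""
    # Illustrative thresholds only; tune these to your program.
    if avg_quiz_score < 50 or days_since_active > 14:
        risk = "at_risk"
    elif avg_quiz_score < 70 or days_since_active > 7:
        risk = "watch"
    else:
        risk = "on_track"
    return {
        "risk": risk,
        "notify_instructor": risk == "at_risk",  # triggers the instructor email branch
        "notify_manager": risk == "at_risk" and days_since_active > 21,
    }

print(classify_learner(45, 10))
# {'risk': 'at_risk', 'notify_instructor': True, 'notify_manager': False}
```

Keeping the thresholds in one place like this makes the escalation rules easy to audit and adjust without touching the AI prompts.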
by Mychel Garzon
## Reduce MTTR with context-aware AI severity analysis and automated SLA enforcement
Know that feeling when a "low priority" ticket turns into a production fire? Or when your on-call rotation starts showing signs of serious burnout from alert overload? This workflow handles that problem. Two AI agents do the triage work: checking severity, validating against runbooks, triggering the right response.

### What This Workflow Does
An incident comes in through a webhook, and a two-agent analysis kicks off:

**Agent 1 (Incident Analyzer)** checks the report against your Google Sheets runbook database. It looks for matching known issues, evaluates risk signals, and assigns a confidence-scored severity (P1/P2/P3). Finally stops you from trusting "CRITICAL URGENT!!!" subject lines.

**Agent 2 (Response Planner)** builds the action plan: what to do first, who needs to know, investigation steps, post-incident tasks. Like having your most experienced engineer review every single ticket.

Then routing happens:
- **P1 incidents** → PagerDuty goes off + war room gets created + 15-min SLA timer starts
- **P2 incidents** → Gmail alert + you've got 1 hour to acknowledge
- **P3 incidents** → Standard email notification

Nobody responds in time? The workflow auto-escalates to management. Everything logs to Google Sheets for the inevitable post-mortem.
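The routing rules above are a straight severity-to-action mapping. A sketch of that dispatch table (the channel lists and minutes mirror the description; the function itself is illustrative, not the workflow's actual code):

```python
# Per-severity routing, as described: P1 pages + war room + 15 min,
# P2 emails with a 60-minute timer, P3 is notification only.
ROUTING = {
    "P1": {"channels": ["pagerduty", "war_room", "slack"], "sla_minutes": 15},
    "P2": {"channels": ["gmail"], "sla_minutes": 60},
    "P3": {"channels": ["email"], "sla_minutes": None},  # no acknowledgement timer
}

def route_incident(severity: str) -> dict:
    """Return the notification targets and SLA timer for a triaged incident."""
    plan = ROUTING.get(severity, ROUTING["P3"])  # unknown severities fall back to P3
    return {"severity": severity, **plan}

print(route_incident("P2"))  # {'severity': 'P2', 'channels': ['gmail'], 'sla_minutes': 60}
```

Because the triage agents only ever emit a severity label, all the operational policy lives in one table that the team can change without retraining or re-prompting anything.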
### What Makes This Different
| Feature | This Workflow | Typical AI Triage |
|---------|--------------|-------------------|
| Architecture | Two specialized agents (analyze + coordinate) | Single generic prompt |
| Reliability | Multi-LLM fallback (Gemini → Groq) | Single model, fails if down |
| SLA Enforcement | Auto-waits, checks, escalates autonomously | Sends alert, then done |
| Learning | Feedback webhook improves accuracy over time | Static prompts forever |
| Knowledge Source | Your runbooks (Google Sheets) | Generic templates |
| War Room Creation | Automatic for P1 incidents | Manual |
| Audit Trail | Every decision logged to Sheets | Often missing |

### How It Actually Works: Real Example
Scenario: your monitoring system detects database errors. The webhook receives this messy alert:

`{ "title": "DB Connection Pool Exhausted", "description": "user-service reporting 503 errors", "severity": "P3", "service": "user-service" }`

**Agent 1 (Incident Analyzer) reasoning:**
- Checks the Google Sheets runbook → finds entry: "Connection pool exhaustion typically P2 if customer-facing"
- Scans the description for risk signals → detects "503 errors" = customer impact
- Cross-references the service name → confirms user-service is customer-facing
- Decision: override P3 → P2 (confidence score: 0.87)
- Reasoning logged: "Customer-facing service returning errors, matches known high-impact pattern from runbook"

**Agent 2 (Response Coordinator) builds the plan:**
- **Immediate actions:** "Check active DB connections via monitoring dashboard, restart service if pool usage >90%, verify connection pool configuration"
- **Escalation tier:** "team" (not manager-level yet)
- **SLA target:** 60 minutes
- **War room needed:** No (P2 doesn't require it)
- **Recommended assignee:** "Database team" (pulled from runbook escalation contact)
- **Notification channels:** #incidents (not #incidents-critical)

**What happens next (autonomously):**
- Slack alert posted to #incidents with full context
- 60-minute SLA timer starts automatically
- Workflow waits, then checks the Google Sheets "Acknowledged By" column
- If still empty after 60 min → escalates to #engineering-leads with an "SLA BREACH" tag
- Everything logged to both the Incidents and AI_Audit_Log sheets

**Human feedback loop (optional but powerful):** The on-call engineer reviews the decision and submits:

`POST /incident-feedback { "incidentId": "INC-20260324-143022-a7f3", "feedback": "Correct severity upgrade - good catch", "correctSeverity": "P2" }`

→ This correction gets logged to AI_Audit_Log. Over time, Agent 1 learns which patterns justify severity overrides.

### Key Benefits
- **Stop manual triage:** What took your on-call engineer 5-10 minutes now takes 3 seconds. Agent 1 checks the runbook, Agent 2 builds the response plan.
- **Severity validation = fewer false alarms:** The workflow cross-checks reported severity against runbook patterns and risk signals. That "P1 URGENT" email from marketing? Gets downgraded to P3 automatically.
- **SLAs enforce themselves:** P1 gets 15 minutes. P2 gets 60. Timers run autonomously. If nobody acknowledges, management gets paged. No more "I forgot to check Slack."
- **Uses YOUR runbooks, not generic templates:** Agent 1 pulls context from your Google Sheets runbook database: known issues, escalation contacts, SLA targets. It knows your systems.
- **Multi-LLM fallback = 99.9% uptime:** Primary: Gemini 2.0. Fallback: Groq. Each agent retries 3x at 5-second intervals. Basically always works.
- **Self-improving feedback loop:** Engineers can submit corrections via the /incident-feedback webhook. The workflow logs every decision plus human feedback to AI_Audit_Log, so you can track accuracy over time and identify patterns where the AI needs tuning.
- **Complete audit trail:** Every incident, every AI decision, every escalation, all in Google Sheets. Perfect for post-mortems and compliance.
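The SLA enforcement boils down to a wait-then-check: after the timer expires, look at the "Acknowledged By" cell and escalate only if it is still empty. A sketch of that decision (sheet access is replaced by a plain string for illustration, and mapping P1 breaches to #management-escalation versus other breaches to #engineering-leads is an assumption based on the channel list in the setup section):

```python
def check_sla(acknowledged_by: str, severity: str) -> dict:
    """Decide whether the post-timer escalation branch should fire."""
    breached = acknowledged_by.strip() == ""  # empty cell = nobody acknowledged
    escalate_to = None
    if breached:
        # Assumed mapping: P1 breaches page management, others go to eng leads.
        escalate_to = "#management-escalation" if severity == "P1" else "#engineering-leads"
    return {"sla_breach": breached, "escalate_to": escalate_to}

print(check_sla("", "P2"))  # {'sla_breach': True, 'escalate_to': '#engineering-leads'}
```

In the workflow itself this check sits after a Wait node, so the branch runs exactly once per incident when the SLA window closes.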
### Required APIs & Credentials
- **Google Gemini API** (main LLM, free tier is fine)
- **Groq API** (backup LLM, also has a free tier)
- **Google Sheets** (stores runbooks and the audit trail)
- **Gmail** (handles P2/P3 notifications)
- **Slack OAuth2 API** (creates war rooms)
- **PagerDuty** (P1 alerts; optional, you can just use Slack/Gmail)

### Setup Complexity
This is not a 5-minute setup. You'll need:

**Google Sheets structure:**
- 3 tabs: Runbooks, Incidents, AI_Audit_Log
- Pre-populated runbook data (services, known issues, escalation contacts)

**Slack configuration:**
- 4 channels: #incidents-critical, #incidents, #management-escalation, #engineering-leads
- Slack OAuth2 with bot permissions

Estimated setup time: 30-45 minutes. Quick start option: begin with just Slack + Google Sheets, and add PagerDuty later.

### Who This Is For
- DevOps engineers done being the human incident router
- SRE teams drowning in alert fatigue
- IT ops managers who need real accountability
- Security analysts triaging at high volume
- Platform engineers trying to automate the boring stuff