by Cheng Siong Chin
How It Works
This workflow automates regulatory compliance monitoring and policy violation detection for enterprises managing complex governance requirements. Designed for compliance officers, legal teams, and risk management departments, it addresses the challenge of continuous policy adherence across organizational activities while reducing manual audit overhead.

The system initiates on schedule, triggering compliance checks across operational data. Solar compliance data generation simulates policy document collection from various business units. Claude AI performs comprehensive policy validation against regulatory frameworks, while parallel NVIDIA governance models analyze specific compliance dimensions through structured outputs. The workflow routes findings by compliance status: violations trigger immediate escalation emails to compliance teams with detailed Slack notifications, warnings generate supervisor alerts with tracking mechanisms, and compliant activities proceed to standard documentation. All execution paths merge for consolidated audit trail creation, logging enforcement actions and generating governance reports for regulatory submissions.

Setup Steps
1. Configure the Schedule Compliance Check node with your monitoring frequency.
2. Add Claude AI credentials in the Workflow Configuration and Policy Validation nodes.
3. Set up NVIDIA API keys for the governance output parser and agent modules in their respective nodes.
4. Connect Gmail authentication for compliance team alerts and configure recipient distribution lists.
5. Integrate Slack workspace credentials and specify compliance channel webhooks.

Prerequisites
Claude API access, NVIDIA API credentials, Gmail/Google Workspace account

Use Cases
Financial services regulatory compliance (SOX, GDPR), healthcare HIPAA monitoring

Customization
Add industry-specific regulatory frameworks, integrate document management systems

Benefits
Reduces compliance audit time by 70%, ensures consistent policy application across departments
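The status-based routing described above can be sketched as a small function. In the actual template this is a Switch node; the `status` field name, path labels, and channel lists here are illustrative assumptions, not the template's exact configuration.

```javascript
// Sketch of the compliance-status routing step (field names are assumptions).
// In n8n this is typically a Switch node; shown as a function so the three
// execution paths described above are explicit.
function routeComplianceFinding(finding) {
  switch (finding.status) {
    case 'violation':
      // Immediate escalation: email compliance team + detailed Slack alert
      return { path: 'escalate', channels: ['gmail', 'slack'] };
    case 'warning':
      // Supervisor alert with a tracking mechanism
      return { path: 'supervisor-alert', channels: ['gmail'] };
    default:
      // Compliant activity proceeds to standard documentation
      return { path: 'document', channels: [] };
  }
}
```

All three paths later merge into the consolidated audit-trail step, so each branch returns the same object shape.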
by Cheng Siong Chin
How It Works
This workflow automates end-to-end AI-driven content moderation for platforms managing user-generated content, including marketplaces, communities, and enterprise systems. It is designed for product, trust & safety, and governance teams seeking scalable, policy-aligned enforcement without subjective scoring.

The workflow validates structured review, goal, and feedback data using a Performance Signal Agent that standardizes moderation signals and removes ambiguity. A Governance Agent then orchestrates policy enforcement, eligibility checks, escalation logic, and audit preparation. Content enters via webhook, is classified, validated, and routed by action type (approve, flag, escalate). Enforcement logic determines whether to store clean content, flag violations, or trigger escalation emails and team notifications. All actions are logged for traceability and compliance.

This template solves inconsistent moderation decisions, lack of structured governance controls, and manual escalation overhead by embedding deterministic checkpoints, structured outputs, and audit-ready logging into a single automated pipeline.

Setup Steps
1. Connect OpenAI API credentials for the AI agents.
2. Configure Google Sheets or a database for logging.
3. Connect Gmail for escalation emails.
4. Define moderation policies and routing rules.
5. Activate the webhook and test with sample content.

Prerequisites
n8n account, OpenAI API key, Google Sheets or DB access, Gmail credentials, defined moderation policies.

Use Cases
Marketplace listing moderation, enterprise HR review screening

Customization
Adjust policy rules, add risk scoring, integrate Slack instead of Gmail

Benefits
Improves moderation accuracy, reduces manual review, enforces governance consistency
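A minimal sketch of the signal standardization the Performance Signal Agent performs, assuming the agent emits an `action` field limited to the three routed action types. The field names and the fallback-to-escalate choice are assumptions for illustration.

```javascript
// Sketch of the Performance Signal Agent's standardization step: normalize
// the AI's action label and remove ambiguity before routing. Field names
// (action, reason) are assumptions about the structured output.
const VALID_ACTIONS = ['approve', 'flag', 'escalate'];

function normalizeSignal(raw) {
  const action = String(raw.action || '').toLowerCase().trim();
  if (!VALID_ACTIONS.includes(action)) {
    // Ambiguous or unrecognized actions are escalated rather than guessed at,
    // which keeps the downstream routing deterministic.
    return { action: 'escalate', reason: `unrecognized action: ${raw.action}` };
  }
  return { action, reason: raw.reason || '' };
}
```

Normalizing to a closed set of actions is what lets the downstream Switch route content without subjective scoring.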
by Cheng Siong Chin
How It Works
This workflow automates competitive intelligence gathering and market analysis for businesses needing real-time insights on competitors, industry trends, and market positioning. Designed for marketing teams, strategy analysts, and business development professionals, it solves the time-intensive challenge of manually monitoring competitor activities across multiple channels.

The system schedules regular data collection, fetches competitor information from various sources, employs multiple AI agents (OpenAI for analysis, sentiment evaluation, and report generation) to process data, validates outputs through structured parsing, and delivers comprehensive reports via email. By automating data aggregation, sentiment analysis, and insight generation, organizations gain actionable intelligence faster, identify market opportunities proactively, and maintain competitive advantage through continuous monitoring, which is essential for dynamic markets where timing determines success.

Setup Steps
1. Connect the Schedule Trigger (set monitoring frequency: daily/weekly).
2. Configure the Fetch Data node with competitor website URLs/APIs.
3. Add OpenAI API keys to all AI agent nodes.
4. Link Google Sheets credentials for storing historical analysis data.
5. Configure the Gmail node with SMTP credentials for report distribution.
6. Set up Slack/Discord webhooks for instant critical alert notifications.

Prerequisites
OpenAI API account (GPT-4 recommended), competitor data sources/APIs

Use Cases
SaaS competitor feature tracking, retail pricing intelligence

Customization
Modify AI prompts for industry-specific metrics, adjust sentiment thresholds for alert triggers

Benefits
Reduces research time by 85%, provides 24/7 competitor monitoring, eliminates manual data aggregation
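The "sentiment thresholds for alert triggers" customization could look like this in an n8n Code node. The `sentimentScore` field name and the -0.5 default threshold are illustrative assumptions.

```javascript
// Sketch of the alert-threshold check for the Slack/Discord webhook path.
// Fires an instant alert only for strongly negative competitor signals;
// threshold and field name are assumptions to tune for your use case.
function shouldAlert(analysis, { negativeThreshold = -0.5 } = {}) {
  // Scores are assumed to range from -1 (very negative) to +1 (very positive).
  return analysis.sentimentScore <= negativeThreshold;
}
```

Raising or lowering `negativeThreshold` is the simplest way to control alert volume without touching the AI prompts.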
by Rajeet Nair
Overview
This workflow implements a complete Retrieval-Augmented Generation (RAG) knowledge assistant with built-in document ingestion, conversational AI, and automated analytics using n8n, OpenAI, and Pinecone. The system allows users to upload documents, automatically convert them into embeddings, query the knowledge base through a chat interface, and receive daily reports about chatbot performance and document usage.

Instead of manually searching through documentation, users can ask questions in natural language and receive answers grounded in the uploaded files. The workflow retrieves the most relevant document chunks from a vector database and provides them to the language model as context, ensuring accurate and source-based responses.

In addition to answering questions, the workflow records all chat interactions and generates daily usage analytics. These reports summarize chatbot activity, highlight the most referenced documents, and identify failed lookups where information could not be found. This architecture is useful for teams building internal knowledge assistants, documentation chatbots, AI support tools, or searchable company knowledge bases powered by Retrieval-Augmented Generation.

How It Works
1. Document Upload Interface – Users upload PDF, CSV, or JSON files through a form trigger. These documents become part of the knowledge base used by the chatbot.
2. Document Processing – Uploaded files are loaded and converted into text. The text is split into smaller chunks to improve embedding quality and retrieval accuracy.
3. Embedding Generation – Each text chunk is converted into vector embeddings using the OpenAI Embeddings node.
4. Vector Database Storage – The embeddings are stored in a Pinecone vector database, creating a searchable semantic index of the uploaded documents.
5. Chat Interface – Users interact with the knowledge base through a chat interface. Each message becomes a query sent to the RAG system.
6. RAG Retrieval – The workflow retrieves the most relevant document chunks from Pinecone. These chunks are provided to the language model as context.
7. AI Response Generation – The chatbot generates an answer using only the retrieved document information, ensuring responses remain grounded in the knowledge base.
8. Chat Logging – User questions, AI responses, timestamps, and referenced documents are logged, enabling monitoring and analytics of chatbot usage.
9. Daily Analytics Workflow – A scheduled trigger runs every morning and retrieves chat logs from the previous 24 hours.
10. Report Generation – Usage statistics are calculated, including total questions asked, failed document lookups, most referenced documents, and overall success rate.
11. Email Summary – A formatted HTML report is generated and sent via email to provide a daily overview of chatbot activity and knowledge base performance.

Setup Instructions
1. Configure Pinecone – Create a Pinecone index for storing document embeddings and enter the index name in the Workflow Configuration node.
2. Add OpenAI Credentials – Configure credentials for the OpenAI Chat Model and OpenAI Embeddings nodes.
3. Configure Data Tables – Create the following n8n Data Tables: form_responses and chat_logs.
4. Set Workflow Parameters – In the Workflow Configuration node, configure the Pinecone namespace, chunk size, chunk overlap, and retrieval depth (top-K).
5. Configure Email Notifications – Add Gmail credentials to send daily summary reports.
6. Deploy the Workflow – Share the document upload form with users and enable the chat interface for question answering.

Use Cases
- Internal Knowledge Assistant – Allow employees to search internal documentation using natural language questions.
- Customer Support Knowledge Base – Provide instant answers from support manuals, product documentation, or help center articles.
- Documentation Search Engine – Turn large document collections into an AI-powered searchable knowledge system.
- AI Helpdesk Assistant – Enable support teams to quickly retrieve answers from company knowledge repositories.
- Knowledge Base Analytics – Monitor chatbot usage, identify missing documentation, and understand which files are most valuable to users.

Requirements
- n8n with LangChain nodes enabled
- OpenAI API credentials
- Pinecone account and index
- Gmail credentials for sending reports
- n8n Data Tables: form_responses and chat_logs
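The daily report statistics described under Report Generation can be computed with a short Code-node function over the `chat_logs` rows. The row shape used here (`success`, `matchedDocs`) is an assumption; map it to your actual columns.

```javascript
// Sketch of the daily analytics computation over chat_logs rows.
// Assumed row shape: { question, success: boolean, matchedDocs: string[] }.
function dailyStats(rows) {
  const total = rows.length;
  const failed = rows.filter(r => !r.success).length;

  // Count how often each document was referenced across all answers
  const docCounts = {};
  for (const r of rows) {
    for (const doc of r.matchedDocs || []) {
      docCounts[doc] = (docCounts[doc] || 0) + 1;
    }
  }
  const mostReferenced = Object.entries(docCounts)
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);

  return {
    totalQuestions: total,
    failedLookups: failed,
    successRate: total ? (total - failed) / total : 0,
    mostReferenced,
  };
}
```

The `failedLookups` count is what surfaces missing documentation: questions the retriever could not ground in any uploaded file.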
by Tejasv Makkar
🚀 Overview
This n8n workflow automatically generates professional API documentation from C header (.h) files using AI. It scans a Google Drive folder for header files, extracts the source code, sends it to GPT-4o for structured analysis, and generates a beautiful HTML documentation page. The final documentation is uploaded back to Google Drive and a completion email is sent. This workflow is ideal for embedded systems teams, firmware engineers, and SDK developers who want an automated documentation pipeline.

✨ Key Features
⚡ Fully automated documentation generation
📁 Reads .h files directly from Google Drive
🤖 Uses AI to analyze C APIs and extract documentation
📑 Generates clean HTML documentation
📊 Documents functions, types, enums, and constants
🔁 Processes files one-by-one for reliability
☁️ Saves generated documentation back to Google Drive
📧 Sends a completion email notification

🧠 What the AI Extracts
The workflow automatically identifies and documents:
📘 Overview of the header file
🔧 Functions: signatures, parameters, return values, usage examples
🧩 Enumerations
🧱 Data Types & Structures
🔢 Constants / Macros
📝 Developer Notes

🖥 Generated Documentation
The output is a clean, developer-friendly HTML documentation page including:
🧭 Sidebar navigation
📌 Function cards
📊 Parameter tables
💻 Code examples
🎨 Professional developer layout

Perfect for: developer portals, SDK documentation, internal engineering documentation, embedded system libraries

⚙️ Workflow Architecture

| Step | Node | Purpose |
|-----|-----|--------|
| 1 | ▶️ Manual Trigger | Starts the workflow |
| 2 | 📂 Get all files | Reads files from Google Drive |
| 3 | 🔎 Filter .h files | Keeps only header files |
| 4 | 🔁 Split in Batches | Processes files sequentially |
| 5 | ⬇️ Download file | Downloads the header file |
| 6 | 📖 Extract text | Extracts code content |
| 7 | 🤖 AI Extraction | AI extracts API structure |
| 8 | 🧹 Parse JSON | Cleans AI output |
| 9 | 🎨 Generate HTML | Builds documentation page |
| 10 | ☁️ Upload to Drive | Saves documentation |
| 11 | 📧 Email notification | Sends completion email |

🔧 Requirements
To run this workflow you need:
🔹 Google Drive OAuth2 credentials
🔹 OpenAI API credentials
🔹 Gmail credentials

🛠 Setup Guide
1️⃣ Configure Google Drive
Create two folders: a source folder and an output folder. Update the folder IDs in the nodes "Get all files from folder" and "Save documentation to Google Drive".

2️⃣ Configure OpenAI
Add an OpenAI credential in n8n. Model used: GPT-4o. The model analyzes C header files and returns structured API documentation.

3️⃣ Configure Gmail
Add a Gmail OAuth credential. Update the recipient address inside the 📧 Email notification node.

▶️ Run the Workflow
Click Execute Workflow. The workflow will:
1️⃣ Scan the Google Drive folder
2️⃣ Process each .h file
3️⃣ Generate HTML documentation
4️⃣ Upload documentation to Drive
5️⃣ Send a completion email

🖼 Documentation Preview

💡 Use Cases
🔧 Embedded firmware documentation
📦 SDK documentation generation
🧑💻 Developer portal automation
📚 C library documentation
⚙️ Continuous documentation pipelines

🔮 Future Improvements
This workflow can be extended with several enhancements:

📄 PDF Documentation Export
Add a step to convert the generated HTML documentation into PDF files using tools such as Puppeteer, HTML-to-PDF services, or n8n community PDF nodes. This allows teams to distribute documentation as downloadable reports.
🔐 Local AI for Security (Ollama / Open-Source Models)
Instead of using the OpenAI node, the workflow can be modified to run fully locally using AI models such as:
- **Ollama**
- **Open-source LLMs (Llama, Mistral, CodeLlama)**

These models can run on your own server, which provides:
🔒 Better data privacy
🏢 No external API calls
⚡ Faster responses on local infrastructure
🛡 Increased security for proprietary source code

This can be implemented in n8n using:
- **HTTP Request node → Ollama API**
- Local AI inference servers
- Private LLM deployments

📚 Multi-Language Documentation
The workflow could also support additional languages such as: .c, .cpp, .hpp, .rs, .go
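The 🧹 Parse JSON step in the architecture table cleans the AI output before the HTML generator runs. A common cleanup, sketched here as an assumption about what that node does, is stripping the markdown code fences models often wrap JSON in:

```javascript
// Sketch of a Parse JSON cleanup step: models frequently return JSON wrapped
// in markdown fences, which breaks a naive JSON.parse. This is an assumed
// failure mode, not the template's exact node code.
function parseModelJson(text) {
  const cleaned = text
    .replace(/^```(?:json)?\s*/i, '')  // strip a leading fence like "```json"
    .replace(/\s*```\s*$/, '')         // strip the trailing fence
    .trim();
  return JSON.parse(cleaned);
}
```

Plain JSON passes through unchanged, so the same step works whether or not the model added fences.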
by Bernhard Zindel
Summarize Google Alerts with Gemini
Turn your noisy Google Alerts folder into a concise, AI-curated executive briefing. This workflow replaces dozens of individual notification emails with a single, structured daily digest.

How it works
- **Ingest:** Fetches unread Google Alerts emails from your Gmail inbox.
- **Clean:** Extracts article links, scrapes the website content, and strips away ads and clutter to ensure high-quality AI processing.
- **Analyze:** Uses Google Gemini to summarize each article into a concise 2-4 sentence overview.
- **Deliver:** Compiles a professional HTML email report sorted by topic, sends it to you, and automatically marks the original alerts as read.

Set up steps
- **Connect Gmail:** Authenticate your Gmail account to allow reading alerts and sending the digest.
- **Connect Gemini:** Add your Google Gemini API key.
- **Configure Recipient:** Update the **Send Email Digest** node with your desired destination email address.
- **Schedule:** (Optional) Replace the Manual Trigger with a **Schedule Trigger** (e.g., every morning at 7 AM) to fully automate the process.
by Cheng Siong Chin
How It Works
Automates daily learner engagement monitoring, progress analysis, and personalized feedback delivery for training programs. Target audience: learning and development teams, corporate training managers, and online education platforms scaling instructor workload. Problem solved: manual progress tracking consumes instructor time; AI analysis identifies struggling learners early for intervention.

The workflow runs daily checks on learner activity, retrieves course data and progress, analyzes engagement with OpenAI models, evaluates quiz scores, generates performance summaries, sends progress reports to learners, emails instructors on at-risk cases, generates learning paths, and triggers manager notifications.

Setup Steps
1. Configure the daily schedule trigger.
2. Connect learning management system (LMS) APIs.
3. Set OpenAI keys for progress analysis.
4. Enable Gmail for multi-recipient notifications.
5. Map learner risk thresholds and escalation rules.

Prerequisites
LMS platform credentials, OpenAI API key, learner database, email service for notifications, manager contact lists.

Use Cases
Corporate onboarding programs tracking employee progress, online learning platforms identifying struggling students

Customization
Adjust AI analysis criteria for your curriculum. Integrate Slack for instructor alerts.

Benefits
Reduces instructor workload by 70%, identifies at-risk learners 2 weeks early
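The learner risk thresholds mapped in setup step 5 might look like this in a Code node; the specific cutoffs below are placeholder assumptions to adapt to your curriculum and escalation rules.

```javascript
// Sketch of learner risk classification. Field names (quizAvg, daysInactive)
// and the cutoff values are illustrative assumptions, not template defaults.
function classifyLearner({ quizAvg, daysInactive }) {
  if (quizAvg < 50 || daysInactive > 7) return 'at-risk';  // email instructor, notify manager
  if (quizAvg < 70 || daysInactive > 3) return 'watch';    // track and tailor learning path
  return 'on-track';                                       // standard progress report
}
```

The 'at-risk' branch is what drives early intervention: those learners get flagged to instructors before a failing grade appears.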
by Avkash Kakdiya
How it works
This workflow runs on a schedule to monitor HubSpot deals with upcoming contract expiry dates. It filters deals that are 30, 60, or 90 days away from expiration and processes each one individually. Based on the remaining days, it sends personalized email reminders to contacts via Gmail. It also notifies account managers in Slack and creates follow-up tasks in ClickUp for tracking.

Step-by-step

**Schedule and filter expiring deals**
- Schedule Trigger – Runs the workflow at defined intervals.
- Get all deals – Fetches deals and contract expiry data from HubSpot.
- Filter Deals – Calculates days left and keeps only 30, 60, or 90-day expiries.

**Process deals and fetch contacts**
- Loop Over Deals – Iterates through each filtered deal.
- Fetch Associated Contact With Deal – Retrieves linked contact IDs via API.
- Get Contact Details – Pulls contact email and basic info from HubSpot.

**Route and send reminder emails**
- Switch – Routes deals based on days left (30, 60, 90).
- 30 day mail – Sends urgent renewal reminder via Gmail.
- 60 day mail – Sends friendly renewal notification email.
- 90 day mail – Sends early awareness email.
- Merge – Combines all email paths into a single output.

**Notify team and create follow-ups**
- Notify Account Manager – Sends Slack alert with deal and contact details.
- Create Follow-up Task – Creates a ClickUp task for renewal tracking.

Why use this?
- Prevent missed renewals with automated tracking and alerts
- Improve customer retention through timely communication
- Reduce manual CRM monitoring and follow-ups
- Keep teams aligned with Slack notifications and task creation
- Scale contract management without increasing workload
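The Filter Deals calculation (days left, keeping only 30/60/90-day expiries) can be sketched as follows; the `contractExpiryDate` property name is an assumption standing in for your actual HubSpot deal field.

```javascript
// Sketch of the Filter Deals step: compute whole days until contract expiry
// and keep only deals exactly 30, 60, or 90 days out. The expiry property
// name is an assumption; substitute your HubSpot field.
function daysUntilExpiry(expiryIso, now = new Date()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.ceil((new Date(expiryIso) - now) / msPerDay);
}

function keepExpiringDeals(deals, now = new Date()) {
  return deals.filter(d =>
    [30, 60, 90].includes(daysUntilExpiry(d.contractExpiryDate, now)));
}
```

Because the filter matches exact day counts, running the schedule once per day means each deal is surfaced exactly once per reminder window.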
by Mychel Garzon
Reduce MTTR with context-aware AI severity analysis and automated SLA enforcement

Know that feeling when a "low priority" ticket turns into a production fire? Or when your on-call rotation starts showing signs of serious burnout from alert overload? This workflow handles that problem. Two AI agents do the triage work: checking severity, validating against runbooks, triggering the right response.

What This Workflow Does
An incident comes in through a webhook, and a two-agent analysis kicks off:

**Agent 1 (Incident Analyzer)** checks the report against your Google Sheets runbook database. It looks for matching known issues, evaluates risk signals, and assigns a confidence-scored severity (P1/P2/P3). Finally stops you from trusting "CRITICAL URGENT!!!" subject lines.

**Agent 2 (Response Planner)** builds the action plan: what to do first, who needs to know, investigation steps, post-incident tasks. Like having your most experienced engineer review every single ticket.

Then routing happens:
- **P1 incidents** → PagerDuty goes off + war room gets created + 15-min SLA timer starts
- **P2 incidents** → Gmail alert + you've got 1 hour to acknowledge
- **P3 incidents** → Standard email notification

Nobody responds in time? Auto-escalates to management. Everything logs to Google Sheets for the inevitable post-mortem.
What Makes This Different

| Feature | This Workflow | Typical AI Triage |
|---------|--------------|-------------------|
| Architecture | Two specialized agents (analyze + coordinate) | Single generic prompt |
| Reliability | Multi-LLM fallback (Gemini → Groq) | Single model, fails if down |
| SLA Enforcement | Auto-waits, checks, escalates autonomously | Sends alert, then done |
| Learning | Feedback webhook improves accuracy over time | Static prompts forever |
| Knowledge Source | Your runbooks (Google Sheets) | Generic templates |
| War Room Creation | Automatic for P1 incidents | Manual |
| Audit Trail | Every decision logged to Sheets | Often missing |

How It Actually Works: Real Example

Scenario: Your monitoring system detects database errors. The webhook receives this messy alert:

```json
{
  "title": "DB Connection Pool Exhausted",
  "description": "user-service reporting 503 errors",
  "severity": "P3",
  "service": "user-service"
}
```

Agent 1 (Incident Analyzer) reasoning:
- Checks Google Sheets runbook → finds entry: "Connection pool exhaustion typically P2 if customer-facing"
- Scans description for risk signals → detects "503 errors" = customer impact
- Cross-references service name → confirms user-service is customer-facing
- Decision: Override P3 → P2 (confidence score: 0.87)
- Reasoning logged: "Customer-facing service returning errors, matches known high-impact pattern from runbook"

Agent 2 (Response Coordinator) builds the plan:
- **Immediate actions:** "Check active DB connections via monitoring dashboard, restart service if pool usage >90%, verify connection pool configuration"
- **Escalation tier:** "team" (not manager-level yet)
- **SLA target:** 60 minutes
- **War room needed:** No (P2 doesn't require it)
- **Recommended assignee:** "Database team" (pulled from runbook escalation contact)
- **Notification channels:** #incidents (not #incidents-critical)

What happens next (autonomously):
1. Slack alert posted to #incidents with full context
2. 60-minute SLA timer starts automatically
3. Workflow waits, then checks the Google Sheets "Acknowledged By" column
4. If still empty after 60 min → escalates to #engineering-leads with "SLA BREACH" tag
5. Everything logged to both the Incidents and AI_Audit_Log sheets

Human feedback loop (optional but powerful):
The on-call engineer reviews the decision and submits:

```json
POST /incident-feedback
{
  "incidentId": "INC-20260324-143022-a7f3",
  "feedback": "Correct severity upgrade - good catch",
  "correctSeverity": "P2"
}
```

→ This correction gets logged to AI_Audit_Log. Over time, Agent 1 learns which patterns justify severity overrides.

Key Benefits
- **Stop manual triage:** What took your on-call engineer 5-10 minutes now takes 3 seconds. Agent 1 checks the runbook, Agent 2 builds the response plan.
- **Severity validation = fewer false alarms:** The workflow cross-checks reported severity against runbook patterns and risk signals. That "P1 URGENT" email from marketing? Gets downgraded to P3 automatically.
- **SLAs enforce themselves:** P1 gets 15 minutes. P2 gets 60. Timers run autonomously. If nobody acknowledges, management gets paged. No more "I forgot to check Slack."
- **Uses YOUR runbooks, not generic templates:** Agent 1 pulls context from your Google Sheets runbook database: known issues, escalation contacts, SLA targets. It knows your systems.
- **Multi-LLM fallback = 99.9% uptime:** Primary: Gemini 2.0. Fallback: Groq. Each agent retries 3x with 5-sec intervals. Basically always works.
- **Self-improving feedback loop:** Engineers can submit corrections via the /incident-feedback webhook. The workflow logs every decision plus human feedback to AI_Audit_Log. Track accuracy over time, identify patterns where the AI needs tuning.
- **Complete audit trail:** Every incident, every AI decision, every escalation, all in Google Sheets. Perfect for post-mortems and compliance.
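The autonomous SLA check in the example above (wait, read "Acknowledged By", escalate on breach) can be sketched like this. The severity-to-channel mapping is an assumption based on the channels listed in the setup section, not the template's exact routing.

```javascript
// Sketch of the post-wait SLA check. The sheet column name matches the
// description; the channel mapping per severity is an assumption.
function slaCheck(incident) {
  const acknowledged = Boolean(incident['Acknowledged By']);
  if (acknowledged) return { escalate: false };
  return {
    escalate: true,
    // P1 breaches go to management; lower severities to engineering leads
    channel: incident.severity === 'P1' ? '#management-escalation' : '#engineering-leads',
    tag: 'SLA BREACH',
  };
}
```

Keeping the acknowledgment state in the sheet is what lets the workflow enforce SLAs without any human-in-the-loop step: the wait node resumes, reads the cell, and acts.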
Required APIs & Credentials
- **Google Gemini API** (main LLM, free tier is fine)
- **Groq API** (backup LLM, also has free tier)
- **Google Sheets** (stores runbooks and audit trail)
- **Gmail** (handles P2/P3 notifications)
- **Slack OAuth2 API** (creates war rooms)
- **PagerDuty** (P1 alerts; optional, you can just use Slack/Gmail)

Setup Complexity
This is not a 5-minute setup. You'll need:

Google Sheets structure:
- 3 tabs: Runbooks, Incidents, AI_Audit_Log
- Pre-populated runbook data (services, known issues, escalation contacts)

Slack configuration:
- 4 channels: #incidents-critical, #incidents, #management-escalation, #engineering-leads
- Slack OAuth2 with bot permissions

Estimated setup time: 30-45 minutes
Quick start option: Begin with just Slack + Google Sheets. Add PagerDuty later.

Who This Is For
- DevOps engineers done being the human incident router
- SRE teams drowning in alert fatigue
- IT ops managers who need real accountability
- Security analysts triaging at high volume
- Platform engineers trying to automate the boring stuff
by DigiMetaLab
How It Works
1. Trigger: The workflow starts automatically when a new file (PDF, DOCX, or TXT) is uploaded to a specific Google Drive folder for client briefs.
2. Configuration: The workflow sets up key variables, such as the folder for storing reports, the account manager’s email, the tracking Google Sheet, and the error notification email.
3. File Type Check & Text Extraction: It checks the file type and extracts the text using the appropriate method for PDF, DOCX, or TXT files.
4. Extraction Validation: If text extraction fails or the file is empty, an error notification is sent to the designated email.
5. AI Analysis: The extracted text is analyzed using Groq AI (Llama 3 model) to summarize the brief and extract client needs, goals, challenges, and more.
6. Industry Research: The workflow performs additional AI-powered research on the client’s industry and project type, using Wikipedia and Google Search tools.
7. Report Generation: The analysis and research are combined into a comprehensive, formatted report.
8. Google Doc Creation: The report is saved as a new Google Doc in a specified folder.
9. Logging: Key details are logged in a Google Sheet for tracking and record-keeping.
10. Notification: The account manager receives an email with highlights and a link to the full report.
11. Error Handling: If any step fails (e.g., text extraction), an error email is sent with troubleshooting advice.

Setup Steps
1. Google Drive Folders: Create a folder for incoming client briefs and a folder for storing generated client summary reports.
2. Google Sheet: Create a Google Sheet with a sheet/tab named “Brief Analysis Log” for tracking analysis results.
3. Google Cloud Project: Set up a Google Cloud project and enable APIs for Google Drive, Google Docs, Google Sheets, and Gmail. Create OAuth2 credentials for n8n and connect them in your n8n instance.
4. Groq AI Credentials: Obtain API credentials for Groq AI and add them to n8n.
5. SerpAPI (Optional, for Google Search): If using Google Search in research, get a SerpAPI key and add it to n8n.
6. n8n Workflow Configuration: In the “Workflow Configuration” node, set the following variables:
   - clientSummariesFolderId: Google Drive folder ID for reports.
   - accountManagerEmail: Email address to notify.
   - trackingSheetId: Google Sheet ID for logging.
   - errorNotificationEmail: Email for error alerts.
7. Connect All Required Credentials: Make sure all Google and AI nodes have the correct credentials selected in n8n.
8. Test the Workflow: Upload a sample client brief to the monitored Google Drive folder. Check that the workflow runs, generates a report, logs the result, and sends the notification email.
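The file type check in step 3 can be sketched as an extension-based router; the branch names below are illustrative, and unknown types fall through to the error-notification path described in step 4.

```javascript
// Sketch of the file-type routing step: pick an extraction method per
// extension (PDF, DOCX, TXT per the description); branch names are assumptions.
function extractionRoute(fileName) {
  const ext = fileName.split('.').pop().toLowerCase();
  if (ext === 'pdf') return 'extract-pdf-text';
  if (ext === 'docx') return 'extract-docx-text';
  if (ext === 'txt') return 'read-plain-text';
  return 'notify-error'; // unsupported type → error email path
}
```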
by Yassin Zehar
Description
This workflow turns scattered user feedback into a structured product backlog pipeline. It collects feedback from three channels (Telegram bot, Google Form/Sheets, and Gmail), normalizes it, and sends it to an AI model that:
- Classifies the feedback (bug, feature request, question, etc.)
- Extracts sentiment and pain level
- Estimates business impact and implementation effort
- Generates a short summary

Then a custom RICE-style priority score is computed, a Jira ticket is created automatically, a Notion page is generated for documentation, and a monthly product report is sent by email to stakeholders. It helps product and support teams move from “random feedback in multiple tools” to a repeatable, data-driven product intake process with zero manual triage.

Context
In most teams, feedback is:
- spread across emails, forms, and chat messages
- manually copy-pasted into Jira (when someone remembers)
- hard to prioritize objectively
- nearly impossible to review at the end of the month

This workflow solves that by:
- Centralizing feedback from Telegram, Google Forms/Sheets, and Gmail
- Automatically normalizing all inputs into the same JSON structure
- Using AI to categorize, tag, summarize, and score each request
- Calculating a RICE-based priority adapted to your tiers (free / pro / enterprise)
- Creating a Jira issue with all the context and acceptance criteria
- Generating a Notion page for each feedback+ticket pair
- Sending a monthly “Product Intelligence Report” by email with insights and recommendations

The result: less manual work, better prioritization, and a clear story of what users are asking for.
Target Users
This template is designed for:
- Product Managers and Product Owners
- SaaS teams with multiple feedback channels
- Support / CS teams that need a structured escalation path
- Project Managers who want objective, data-driven prioritization
- Any team that wants “feedback → backlog” automation without building a custom platform

Technical Requirements
You’ll need:
- Google Sheets credential
- Gmail credential
- Telegram Bot + Chat ID
- Google Form connected to a Google Sheet
- Jira credential (Jira Cloud)
- Notion credential
- OpenAI / Anthropic credential for the AI analysis node
- An existing Jira project where tickets will be created
- A Notion database or parent page where feedback pages will be stored

Workflow Steps
The workflow is organized into four main sections:

1) Triggers (Multi-channel Intake)
- Telegram Trigger – Listens for new messages sent to your bot
- Google Form / Sheet Trigger – Listens for new form responses / rows
- Gmail Trigger – Listens for new emails matching your filter (e.g. [Feedback] in subject)

All three paths send their payloads into a “Data Normalizer” node that outputs a unified structure.

2) Request Treated and Enriched (AI Analysis)
- Instant Reply (Telegram only) – Sends a quick “Thanks, we’re analysing your feedback” message
- User Enrichment – Enriches the user tier based on a mapping
- Message a Model (AI) – Classifies the feedback, extracts tags, scores sentiment, pain, business impact, and effort, and generates a short summary and acceptance criteria
- JSON Parse / Merge – Merges the AI output back into the original feedback object

3) Priority Calculation & Jira Ticket Creation
- Priority Calculator – Applies a RICE-style formula using pain level, business impact, implementation effort, and user tier weight; assigns an internal priority (P0 / P1 / P2 / P3) and maps it to a Jira priority (Highest / High / Medium / Low)
- Create Jira Issue – Creates a ticket with a summary from the AI, a description including the raw feedback, AI analysis, and RICE breakdown, labels based on tags, and priority based on the calculator
- **Post-processing** – prepares a clean payload for notifications & logging
- **IF (Source = Telegram)** – sends a rich Telegram message back to the user with the Jira key + URL, category, priority, RICE score, tags, and estimated handling time
- **Append to Google Sheet (Analytics Log)** – logs each feedback entry with source, user, category, sentiment, RICE score, priority, Jira key, and Jira URL
- **Create Notion Page** – creates a documentation page linking the feedback, the Jira ticket, the AI analysis, and the acceptance criteria

### 4) Monthly Reporting (Product Intelligence Report)

- **Monthly Trigger** – runs once a month
- **Query Google Sheet** – fetches all feedback logs for the previous month
- **Aggregate Monthly Stats** – computes feedback volume; breakdowns by category / sentiment / source / tier / priority; average RICE, pain, and impact; and the top P0/P1 issues and top feature requests
- **Message a Model (AI)** – generates a written "Product Intelligence Report" with an executive summary, key insights & trends, top pain points, and strategic recommendations
- **Parse Response** – extracts structured insights plus a short summary
- **Create Notion Report Page** – with metrics, charts-ready tables, insights, and recommendations
- **Append Monthly Log to Google Sheet** – stores high-level stats for historical tracking
- **Send Email** – sends a formatted HTML report to stakeholders with key metrics, top issues, recommendations, and a link to the full Notion report

## Key Features

- Multi-channel intake: Telegram + Google Forms/Sheets + Gmail
- AI-powered triage: automatic category, sentiment, tags, and summary
- RICE-style priority scoring with tier weighting
- Automatic Jira ticket creation with full context
- Notion documentation for each feedback item and for monthly reports
- Google Sheets analytics log for exploration and dashboards
- Monthly "Product Intelligence Report" sent automatically by email
- Designed to be adaptable: plug in your own labels, tiers, and scoring rules

## Expected Output

When the workflow is running, you can expect:

- A Jira issue created automatically for each relevant piece of feedback
- A confirmation email
- A Telegram confirmation message when the feedback comes from Telegram
- A Google Sheet filled with normalized feedback and scoring data
- A Notion page per feedback/ticket with the AI analysis and acceptance criteria
- Every month:
  - a Notion "Monthly Product Intelligence Report" page
  - a summary email with key metrics and insights for your stakeholders

## How It Works

1. **Trigger** – listens to Telegram / Google Forms / Gmail
2. **Normalize** – converts all inputs to a unified feedback format
3. **Enrich with AI** – category, sentiment, pain, impact, effort, tags, summary
4. **Score** – computes a RICE-style priority and maps it to a Jira priority
5. **Create Ticket** – opens a Jira issue + Notion page + logs to Google Sheets
6. **Notify** – sends a Telegram confirmation (if the source is Telegram)
7. **Report** – once a month, aggregates everything and sends a Product Intelligence Report

## Tutorial Video

Watch the YouTube tutorial video.

## About Me

I'm Yassin, a Project & Product Manager scaling tech products with data-driven project management.

📬 Feel free to connect with me on LinkedIn
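As a supplement to the workflow above, the "Data Normalizer" step can be sketched as a plain function (in n8n, a Code node). The field names and payload shapes below are assumptions about the unified structure, not the template's exact schema:

```javascript
// Hypothetical data normalizer: maps Telegram / Google Form / Gmail
// payloads into one unified feedback object. Field names and payload
// shapes are illustrative assumptions, not the template's exact schema.
function normalizeFeedback(source, payload) {
  switch (source) {
    case 'telegram':
      return {
        source,
        user: payload.message.from.username,
        text: payload.message.text,
        receivedAt: new Date(payload.message.date * 1000).toISOString(),
      };
    case 'form':
      return {
        source,
        user: payload.email,
        text: payload.feedback,
        receivedAt: payload.timestamp,
      };
    case 'gmail':
      return {
        source,
        user: payload.from,
        text: `${payload.subject}\n${payload.textPlain}`,
        receivedAt: payload.date,
      };
    default:
      throw new Error(`Unknown source: ${source}`);
  }
}
```

Downstream nodes (AI enrichment, priority calculation, logging) then only ever deal with the `source` / `user` / `text` / `receivedAt` shape, regardless of which trigger fired.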
by Codez & AI
# AI Invoice Processor for QuickBooks – Email to Bill with PDF Attachment

Automatically processes vendor invoices received by email, creates QuickBooks bills with full details, and attaches the original PDF.

## Who is this for?

- Small/medium businesses using QuickBooks Online
- Bookkeepers processing 20+ invoices/month
- Accounting firms managing multiple clients
- Anyone tired of manually entering invoice data into QuickBooks

## What it does

1. Monitors Gmail for new emails with PDF attachments (every 15 minutes)
2. Extracts text from the PDF using n8n's built-in PDF parser
3. **AI classification** – determines whether the PDF is actually an invoice (skips receipts, contracts, etc.)
4. **AI data extraction** – pulls structured data: vendor name, invoice number, amount, currency, dates, and line items
5. **Vendor lookup** – searches QuickBooks for the vendor by name
6. Creates a bill in QuickBooks with all extracted data (amount, description, dates)
7. Attaches the original PDF to the bill for reference
8. Sends a confirmation email back to the sender with the bill details

## Error handling

- **Not an invoice?** Silently skipped – no noise
- **AI can't extract valid data?** Email sent to the AP team with error details
- **Vendor not found in QuickBooks?** Email sent to the AP team with the vendor name and action steps

## Setup (5 minutes)

### Prerequisites

- Gmail account (OAuth2)
- OpenAI API key
- QuickBooks Online account (OAuth2)

### Steps

1. Import the workflow into your n8n instance
2. Connect credentials: Gmail OAuth2, OpenAI API, QuickBooks OAuth2
3. Edit the **Config** node with your values:
   - `realmId` – your QuickBooks Company ID (Settings → Account)
   - `apTeamEmail` – where error notifications go
   - `defaultExpenseAccountId` – your QB expense account ID (see below)
4. Activate the workflow

### How to find your Expense Account ID

1. Log in to QuickBooks Online
2. Go to Settings (gear icon) → Chart of Accounts
3. Find an expense account (e.g. "Office Supplies", "Professional Services")
4. Hover → click **View register** (or **Run report**)
5. Look at the URL for `accountId=XX` or `account=XX`
6. That number is your `defaultExpenseAccountId`

### Sandbox vs Production

If using a QuickBooks Sandbox, update the **Upload PDF to Bill** node URL from:

`https://quickbooks.api.intuit.com/v3/company/...`

to:

`https://sandbox-quickbooks.api.intuit.com/v3/company/...`

## Technical details

### AI extraction schema

The AI extracts these fields from each invoice PDF:

| Field | Type | Example |
|-------|------|---------|
| is_invoice | boolean | true |
| vendor_name | string | "Acme Corp" |
| invoice_number | string | "INV-2024-001" |
| amount | number | 1500.00 |
| currency | string | "USD" |
| due_date | string | "2024-12-31" |
| txn_date | string | "2024-12-01" |
| line_items | array | [{description, amount, quantity}] |

### Binary data flow

The PDF binary data is lost after the AI extraction step (LangChain nodes don't preserve binary). The attachment pipeline solves this by referencing the binary from the Config node via `$('Config').item.binary.attachment_0` – a named reference that works regardless of the connection path.

### Force Inline Binary (n8n v2 quirk)

n8n v2 stores binary data as database streams. QuickBooks' `/upload` API requires `Content-Length` in multipart uploads, which streams can't provide. A Code node converts binary streams to inline base64 before upload.

### Nodes used

- Gmail Trigger (polling)
- Extract from File (PDF)
- Information Extractor (LangChain + OpenAI)
- QuickBooks Online (vendor search, bill creation)
- HTTP Request (PDF upload to the bill)
- Gmail (confirmation & error emails)
- Code nodes (data transformation)
- IF nodes (routing logic)

## Limitations

- **Single line item per bill** – the native QuickBooks node supports only one line item, so all extracted line items are combined into the description field along with the invoice number.
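The stream-to-base64 conversion described under "Force Inline Binary" can be sketched as a standalone function. This is an illustrative version under the assumption that the binary arrives as a readable stream; in an actual n8n Code node you would obtain the buffer through n8n's binary-data helpers rather than a raw stream:

```javascript
// Sketch of the "Force Inline Binary" idea: collect a readable stream
// into an in-memory Buffer, then encode it as base64 so a multipart
// upload can send an exact Content-Length. Illustrative, not the
// template's actual Code-node body.
async function streamToInlineBase64(stream) {
  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  const buffer = Buffer.concat(chunks);
  return {
    data: buffer.toString('base64'), // inline base64 payload
    contentLength: buffer.length,    // known byte length for the upload
  };
}
```

Because the full buffer is held in memory, its length is known up front, which is exactly what the QuickBooks `/upload` endpoint's `Content-Length` requirement needs.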