by Stephan Koning
# VEXA: AI-Powered Meeting Intelligence

I'll be honest, I built this because I was getting lazy in meetings and missing key details. I started with a simple VEXA integration for transcripts, then added AI to pull out summaries and tasks. But that just solved part of the problem. The real breakthrough came when we integrated Mem0, creating a persistent memory of every conversation. Now, you can stop taking notes and actually focus on the person you're talking to, knowing a system is tracking everything that matters. This is the playbook for how we built it.

## How It Works

This isn't just one workflow; it's a two-part system designed to manage the entire meeting lifecycle from start to finish.

- **Bot Management:** It starts when you flick a switch in your CRM (Baserow). A command deploys or removes an AI bot from Google Meet. No fluff—it's there when you need it, gone when you don't. The workflow uses a quick "digital sticky note" in Redis to remember who the meeting is with and instantly updates the status in your Baserow table.
- **AI Analysis & Memory:** Once the meeting ends, VEXA sends the transcript over. Using the client ID (thank god for Redis), we feed the conversation to an AI model (OpenAI). It doesn't just summarize; it extracts actionable next steps and potential risks. All this structured data is then logged into a memory layer (Mem0), creating a permanent, searchable record of every client conversation.

## Setup Steps: Your Action Plan

This is designed for rapid deployment. Here's what you do:

- **Register Webhook:** Run the manual trigger in the workflow once. This sends your n8n webhook URL to VEXA, telling it where to dump transcripts after a call.
- **Connect Your CRM:** Copy the vexa-start webhook URL from n8n. Paste it into your Baserow automation so it triggers when you set the "Send Bot" field to Start_Bot.
- **Integrate Your Tools:** Plug your VEXA, Mem0, Redis, and OpenAI API credentials into n8n.
- **Use the Baserow Template:** I've created a free Baserow template to act as your control panel. Grab it here: https://baserow.io/public/grid/t5kYjovKEHjNix2-6Rijk99y4SDeyQY4rmQISciC14w. It has all the fields you need to command the bot.

## Requirements

- An active n8n instance or cloud account.
- Accounts for VEXA.ai, Mem0.ai, Baserow, and OpenAI.
- A Redis database.
- Your Baserow table must have these fields: Meeting Link, Bot Name, Send Bot, and Status.

## Next Steps: Getting More ROI

This workflow is the foundation. The real value comes from what you build on top of it.

- **Automate Follow-ups:** Use the AI-identified next steps to automatically trigger follow-up emails or create tasks in your project management tool.
- **Create a Unified Client Memory:** Connect your email and other communication platforms. Use Mem0 to parse and store every engagement, building a complete, holistic view of every client relationship.
- **Build a Headless CRM:** Combine these workflows to build a fully AI-powered system that handles everything from lead capture to client management without any manual data entry.

Copy the workflow and stop taking notes.
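The Redis "digital sticky note" from the How It Works section boils down to a key-value mapping with an expiry: store the meeting-to-client link when the bot is deployed, look it up when the transcript arrives. A minimal sketch, with illustrative key names (`vexa:meeting:<id>`) and an in-memory Map standing in for Redis; the actual workflow uses n8n's Redis node (a SET with an expiry, then a GET):

```javascript
// In-memory stand-in for Redis, so the sketch runs without a server.
const store = new Map();

function rememberClient(meetingId, clientId, ttlSeconds = 86400) {
  // Redis equivalent: SET vexa:meeting:<meetingId> <clientId> EX <ttlSeconds>
  store.set(`vexa:meeting:${meetingId}`, {
    clientId,
    expiresAt: Date.now() + ttlSeconds * 1000,
  });
}

function lookupClient(meetingId) {
  // Redis equivalent: GET vexa:meeting:<meetingId>
  const entry = store.get(`vexa:meeting:${meetingId}`);
  if (!entry || entry.expiresAt < Date.now()) return null;
  return entry.clientId;
}
```

The expiry matters: a sticky note that never expires would slowly accumulate stale meeting records.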
by Cheng Siong Chin
## How It Works

A scheduled process aggregates content from eight distinct data sources and standardizes all inputs into a unified format. AI models perform sentiment scoring, detect conspiracy or misinformation signals, and generate trend analyses across domains. An MCDN routing model prioritizes and channels insights to the appropriate workflows. Dashboards visualize real-time analytics, trigger KPIs based on thresholds, and compile comprehensive market-intelligence reports for stakeholders.

## Setup Steps

- **Data Sources:** Connect news APIs, social media platforms, academic databases, code repositories, and documentation feeds.
- **AI Analysis:** Configure OpenAI models for sentiment analysis, conspiracy detection, and trend scoring.
- **Dashboards:** Integrate analytics platforms and enable automated email or reporting outputs.
- **Storage:** Configure a database for historical records, trend archives, and competitive-intelligence storage.

## Prerequisites

Multi-source API credentials; OpenAI API key; dashboard platform access; email service; code repository access; academic database credentials

## Use Cases

Competitive intelligence monitoring; market trend analysis; technology landscape tracking; product strategy research; misinformation filtering

## Customization

Adjust sentiment thresholds; add/remove data sources; modify analysis rules; extend AI models

## Benefits

Reduces research time by 80%; consolidates market intelligence; improves decision accuracy
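The routing-and-threshold idea above can be pictured as a small dispatch function. This is a sketch only: the field names (`sentiment` in [-1, 1], `misinfoScore` in [0, 1]), the thresholds, and the channel names are all assumptions, not the template's actual MCDN model:

```javascript
// Route one analyzed item to a downstream channel based on AI scores.
// Misinformation signals take priority over sentiment routing.
function routeInsight(item, opts = { negativeThreshold: -0.4, positiveThreshold: 0.4 }) {
  if (item.misinfoScore >= 0.7) return "misinformation-review";
  if (item.sentiment <= opts.negativeThreshold) return "risk-alerts";
  if (item.sentiment >= opts.positiveThreshold) return "opportunity-digest";
  return "archive"; // neutral items are stored but not surfaced
}
```

Tuning `negativeThreshold` and `positiveThreshold` is the "Adjust sentiment thresholds" customization mentioned above.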
by totoma
## Use Cases

Receive a newsletter featuring curated, contributor-friendly issues from your favorite repositories. By regularly reviewing active issues and new releases, you'll naturally develop stronger habits around open source contribution as your brain starts recognizing these projects as important.

## How It Works

1. Collects the latest issues, comments, and recent commits using the GitHub API.
2. Uses an AI model to select up to three beginner-friendly issues worth contributing to.
3. Summarizes each issue—with contribution guidance and relevance insights—using Deepwiki MCP.
4. Converts the summaries into HTML and delivers them as an email newsletter.

## Requirements

- GitHub Personal Access Token
- OpenRouter API Key
- Google App Password
- Make sure your target open-source project is indexed at https://deepwiki.com/{owner}/{repo} (e.g. https://deepwiki.com/vercel/next.js)

## How to Use

1. Update the “Load repo info” node with your target repository’s owner and name (e.g. owner: vercel, repo: next.js).
2. Add your GitHub Personal Access Token to the credentials of the “Get Issues from GitHub” node.
3. Connect your OpenRouter API key to all models linked to the Agent node.
4. Add your Google App Password to the “Send Email” node credentials.
5. Enter the same email address (associated with the Google App Password) in both the “to email” and “from email” fields — the newsletter will be sent to this address.

## Customization

- Adjust the maximum number of contributor-friendly issues retrieved in the “Get Top Fit Issues” node.
- Improve results by tuning the models connected to the Agent node.
- Refine the criteria for “contributor-friendliness” within the “IssueRank Agent” node.

## Cron Setup

Replace the manual trigger with a Schedule Trigger node or another scheduling-capable node. If you don't have an n8n Cloud account, use this alternative setup: fork the repository and follow the setup instructions.

## Troubleshooting

If there is an issue with the AI model’s response, modify the ai_model setting. (If you want to use a free model, search for models containing “free” and choose one of them.)
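In the template, an AI agent picks the beginner-friendly issues, but the underlying idea can be sketched as a heuristic ranking over issue metadata. The scoring weights and field names below are assumptions for illustration, not the workflow's actual schema:

```javascript
// Rank issues by a simple contributor-friendliness heuristic:
// prefer "good first issue"/"help wanted" labels and low-traffic threads.
function rankIssues(issues, maxResults = 3) {
  const score = (issue) => {
    let s = 0;
    const labels = issue.labels.map((l) => l.toLowerCase());
    if (labels.includes("good first issue")) s += 3;
    if (labels.includes("help wanted")) s += 2;
    if (issue.comments < 5) s += 1; // quiet issues are easier to claim
    return s;
  };
  return issues
    .map((i) => ({ ...i, fitScore: score(i) }))
    .sort((a, b) => b.fitScore - a.fitScore)
    .slice(0, maxResults);
}
```

The `maxResults` parameter corresponds to the cap adjustable in the "Get Top Fit Issues" node.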
by Connor Provines
# [Meta] Multi-Format Documentation Generator for N8N Creators (+More)

## One-Line Description

Transform n8n workflow JSON into five ready-to-publish documentation formats including technical guides, social posts, and marketplace submissions.

## Detailed Description

**What it does:** This workflow takes an exported n8n workflow JSON file and automatically generates a complete documentation package with five distinct formats: technical implementation guide, LinkedIn post, Discord community snippet, detailed use case narrative, and n8n Creator Commons submission documentation. All outputs are compiled into a single Google Doc for easy access and distribution.

**Who it's for:**

- **n8n creators** preparing workflows for the template library or community sharing
- **Automation consultants** documenting client solutions across multiple channels
- **Developer advocates** creating content about automation workflows for different audiences
- **Teams** standardizing workflow documentation for internal knowledge bases

**Key Features:**

- **Parallel AI generation** - Creates all five documentation formats simultaneously using Claude, saving 2+ hours of manual writing
- **Automatic format optimization** - Each output follows platform-specific best practices (LinkedIn character limits, Discord casual tone, n8n marketplace guidelines)
- **Single Google Doc compilation** - All documentation consolidated with clear section separators and automatic workflow name detection
- **JSON upload interface** - Simple form-based trigger accepts workflow exports without technical setup
- **Smart content adaptation** - Same workflow data transformed into technical depth for developers, engaging narratives for social media, and searchable descriptions for marketplaces
- **Ready-to-publish outputs** - No editing required—each format follows platform submission guidelines and style requirements

**How it works:**

1. User uploads exported n8n workflow JSON through a web form interface
2. Five AI agents process the workflow data in parallel, each generating format-specific documentation (technical guide, LinkedIn post, Discord snippet, use case story, marketplace listing)
3. All outputs merge into a formatted document with section headers and separators
4. Google Docs creates a new document with auto-generated title from workflow name and timestamp
5. Final document populates with all five documentation formats, ready for copying to respective platforms

## Setup Requirements

**Prerequisites:**

- **Anthropic API** (Claude AI) - Powers all documentation generation; requires paid API access or credits
- **Google Docs API** - Creates and updates documentation; free with Google Workspace account
- **n8n instance** - Cloud or self-hosted with AI agent node support (v1.0+)

**Estimated Setup Time:** 20-25 minutes (15 minutes for API credentials, 5-10 minutes for testing with sample workflow)

## Installation Notes

- **API costs**: Each workflow documentation run uses ~15,000-20,000 tokens across five parallel AI calls (approximately $0.30-0.50 per generation at current Claude pricing)
- **Google Docs folder**: Update the folderId parameter in the "Create a document" node to your target folder—default points to a specific folder that won't exist in your Drive
- **Testing tip**: Use a simple 3-5 node workflow for your first test to verify all AI agents complete successfully before processing complex workflows
- **Wait node purpose**: The 5-second wait between document creation and content update prevents Google Docs API race conditions—don't remove this step
- **Form URL**: After activation, save the form trigger URL for easy access—bookmark it or share with team members who need to generate documentation

## Customization Options

**Swappable integrations:**

- Replace Google Docs with Notion, Confluence, or file system storage by swapping final nodes
- Switch from Claude to GPT-4, Gemini, or other LLMs by changing the language model node (may require prompt adjustments)
- Add Slack/email notification nodes after completion to alert when documentation is ready

**Adjustable parameters:**

- Modify AI prompts in each agent node to match your documentation style preferences or add company-specific guidelines
- Add/remove documentation formats by duplicating or deleting agent nodes and updating merge configuration
- Change document formatting in the JavaScript code node (section separators, headers, metadata)

**Extension possibilities:**

- Add automatic posting to LinkedIn/Discord by connecting their APIs after doc generation
- Create version history tracking by appending to existing docs instead of creating new ones
- Build approval workflow by adding human-in-the-loop steps before final document creation
- Generate visual diagrams by adding Mermaid chart generation from workflow structure
- Create multi-language versions by adding translation nodes after English generation

## Category

Development

## Tags

documentation n8n content-generation ai claude google-docs workflow automation-publishing

## Use Case Examples

- **Marketplace contributors**: Generate complete n8n template submission packages in minutes instead of hours of manual documentation writing across multiple format requirements
- **Agency documentation**: Automation consultancies can deliver client workflows with professional documentation suite—technical guides for client IT teams, social posts for client marketing, and narrative case studies for portfolio
- **Internal knowledge base**: Development teams standardize workflow documentation across projects, ensuring every automation has consistent technical details, use case examples, and setup instructions for team onboarding
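The merge-and-format step (the JavaScript code node mentioned under Customization Options) might look roughly like this. The separator style and header format are assumptions for illustration, not the node's exact contents:

```javascript
// Join the five generated formats into one document body with
// a section header per format and a visual separator between them.
function compileDoc(workflowName, sections) {
  const sep = "\n\n" + "=".repeat(40) + "\n\n";
  const title = `${workflowName} - Documentation Package`;
  const body = sections
    .map((s) => `## ${s.title}\n\n${s.content.trim()}`)
    .join(sep);
  return `${title}${sep}${body}`;
}
```

Changing `sep` or the `##` header prefix here is exactly the "Change document formatting" customization described above.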
by Divyansh Chauhan
# 🪄 Prompt To Video (MagicHour API) with Music & YouTube

Automate AI video creation, background music, YouTube uploads, and result logging — all from a single text prompt.

## ⚡ Overview

This n8n template turns a text prompt into a complete AI-generated video using the MagicHour API, adds background music, generates YouTube metadata, uploads to YouTube, and logs results in Google Sheets — all in one flow. Perfect for creators, marketers, and startups producing YouTube content at scale — from daily AI Shorts to explainers or marketing clips.

## 🧩 Use Cases

- 🎥 Daily AI-generated Shorts
- 🧠 Product explainers
- 🚀 Marketing & brand automation
- 🔁 Repurpose blog posts into videos
- 💡 AI storytelling or creative projects

## ⚙️ How It Works

1. Trigger when a new row is added to Google Sheets or via Chat input.
2. Gemini parses and normalizes the text prompt.
3. MagicHour API generates the AI video.
4. Poll until the render completes.
5. (Optional) Mix background audio using MediaFX.
6. Gemini generates YouTube title, description, and tags.
7. Upload the video to YouTube with metadata.
8. Log YouTube URL, metadata, and download link back to Google Sheets.

## 🧰 Requirements

| Service | Purpose |
|---------|---------|
| MagicHour API Key | Text-to-video generation |
| Gemini API Key | Prompt parsing & metadata |
| YouTube OAuth2 | Video uploads |
| Google Sheets OAuth2 | Trigger & logging |
| (Optional) MediaFX Node | Audio mixing |

## 🗂️ Google Sheets Setup

| Column | Description |
|--------|-------------|
| Prompt | Text prompt for video |
| Background Music URL | (Optional) Royalty-free track |
| Status | Tracks flow progress |
| YouTube URL | Auto-filled after upload |
| Metadata | Title, tags, and description JSON |
| Date Generated | (Optional) Auto-filled with video creation date |

## 📅 100 Daily Prompts Automation

You can scale this workflow to generate one video per day from a batch of 100 prompts in Google Sheets.

**Setup Steps**

1. Add 100 prompts to your Google Sheet — one per row.
2. Set the Status column for each to Pending.
3. Use a Cron Trigger in n8n to run the workflow once daily (e.g., at 9 AM).
4. Each run picks one Pending prompt, generates a video, uploads to YouTube, then marks it as Done.
5. Continues daily until all 100 prompts are processed.

**Example Cron Expression**

`0 9 * * *` → Runs the automation every day at 9:00 AM.

**Node Sequence**

[Schedule Trigger (Daily)] → [Get Pending Prompt from Sheets] → [Gemini Prompt Parser] → [MagicHour Video Generation] → [Optional: MediaFX Audio Mix] → [Gemini Metadata Generator] → [YouTube Upload] → [Update Row in Sheets]

💡 **Optional Enhancements:**

- Add a notification node (Slack, Discord, or Email) after each upload.
- Add a counter check to stop after 100 videos.
- Add a “Paused” column to skip specific rows.

## 🧠 Gemini Integration

Gemini handles:

- JSON parsing for MagicHour requests
- Metadata generation (title, description, tags)
- Optional creative rewriting of prompts

## 🎧 Audio Mixing (Optional)

Install the MediaFX community node (Settings → Community Nodes → n8n-nodes-mediafx). Use it to blend background music automatically into videos.

## 🪶 Error Handling

- Avoid “Continue on Fail” in key nodes
- Use IF branches for MagicHour API errors
- Add retry/timeout logic for polling steps

## 🧱 Node Naming Tips

Rename generic nodes for clarity:

- Merge → Merge Video & Audio
- If → Check Video Completion
- HTTP Request → MagicHour API Request

## 🚀 How to Use

1. Add MagicHour, Gemini, YouTube, and Sheets credentials.
2. Replace background music with your own track.
3. Use the Google Sheets trigger or a daily cron for automation.
4. Videos are created, uploaded, and logged — hands-free.

## ⚠️ Disclaimer

This template uses community nodes (MediaFX). Install and enable them manually. MagicHour API usage may incur costs based on video duration and quality.

## 🌐 SEO Keywords

MagicHour API, n8n workflow, AI video generator, automated YouTube upload, Gemini metadata, AI Shorts, MediaFX, Google Sheets automation, AI marketing, content automation.
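The "pick one Pending prompt" step from the 100 Daily Prompts section can be sketched as a small code-node function. The `Prompt` and `Status` column names come from the sheet setup above; the function name and return shape are illustrative:

```javascript
// Select the first sheet row still waiting to be processed.
function pickNextPending(rows) {
  const index = rows.findIndex((r) => r.Status === "Pending");
  if (index === -1) return null; // all prompts are Done (or Paused)
  // Return the row position too, so the workflow can mark it Done later.
  return { rowIndex: index, prompt: rows[index].Prompt };
}
```

A `null` result is the natural stopping condition once all 100 prompts have been processed.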
by Abdullah Alshiekh
## What Problem Does It Solve?

Customers often ask product questions or prices in comments. Businesses waste time replying manually, leading to delays. Some comments only need a short thank-you reply, while others need a detailed private response.

This workflow solves these by:

- Replying with a friendly public comment.
- Sending a private message with details when needed.
- Handling compliments, complaints, and unclear comments in a consistent way.

## How to Configure It

**Facebook Setup**

- Connect your Facebook Page credentials in n8n.
- Add the webhook URL from this workflow to your Facebook App/Webhook settings.

**AI Setup**

- Add your Google Gemini API key (or swap for OpenAI/Claude).
- The included prompt is generic — you can edit it to match your brand tone.

**Optional Logging**

If you want to track processed messages, connect a Notion database or another CRM.

## How It Works

1. Webhook catches new Facebook comments.
2. AI Agent analyzes the comment and categorizes it (question, compliment, complaint, unclear, spam).
3. Replying:
   - For questions/requests → public reply + private message with full details.
   - For compliments → short thank-you reply.
   - For complaints → apology reply + private message for clarification.
   - For unclear comments → ask politely if they need help.
   - For spam/offensive → ignored (no reply).
4. Replies and messages are sent instantly via the Facebook Graph API.

## Customization Ideas

- Change the AI prompt to match your brand voice.
- Add forwarding to Slack/Email if a human should review certain replies.
- Log conversations in Notion, Google Sheets, or a CRM for reporting.
- Expand to Instagram or WhatsApp with small adjustments.

If you need any help, Get In Touch.
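The routing after the AI Agent's categorization can be sketched as a simple lookup. The category names and reply behaviors come straight from the How It Works section; the function itself is illustrative, not the workflow's actual node logic:

```javascript
// Map an AI-assigned comment category to the actions the workflow takes.
function planReply(category) {
  const plans = {
    question:   { publicReply: true,  privateMessage: true  },
    compliment: { publicReply: true,  privateMessage: false },
    complaint:  { publicReply: true,  privateMessage: true  },
    unclear:    { publicReply: true,  privateMessage: false },
    spam:       { publicReply: false, privateMessage: false },
  };
  // Unknown categories are treated like spam: no reply is sent.
  return plans[category] || { publicReply: false, privateMessage: false };
}
```

Defaulting unknown categories to "no reply" is the safe choice when the model returns something unexpected.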
by Yulia
This template presents a multi-agent system in which a coordinating agent manages specialized sub-agents: an AI agent for RAG and document summarization, and an email agent. Each agent operates in its own domain, working collaboratively under the management of the primary agent. In addition to the two sub-agents, the coordinator agent queries the latest news by calling the HTTPS Request Tool.

💡 This template is an extended version of the initial workflow on how to Build a RAG Agent with n8n, Qdrant & OpenAI. The RAG sub-agent can use the same Qdrant collection. You can import this example collection (n8n-rag-2437367325990310-2025-11-04-10-41-54.snapshot) of 3 documents into the free Qdrant cloud or self-hosted account, rather than creating it from scratch.

## 🔗 Example files for RAG

The template uses the following example files in the Google Docs format:

- German Data Protection law: Bundesdatenschutzgesetz (BDSG)
- Computer Security Incident Handling Guide (NIST.SP.800-61r2)
- Berkshire Hathaway letter to shareholders from 2024

## 🚀 How to Get Started

1. Copy or import the template to your n8n instance.
2. Create your Google Drive credentials via the Google Cloud Console and add them to the "Get Document" node. A detailed walk-through can be found in the n8n docs.
3. Create your Gmail credentials via the Google Cloud Console and add them to the Gmail nodes.
4. Create a Qdrant API key and add it to the "Search Documents" node credentials. The API key will be displayed after you have logged into Qdrant and created a Cluster.
5. Create or activate your OpenAI API key.
6. Create or activate your OpenRouter API key.
7. Create or activate your News API key.

## 💬 Chat with the main Agent to query document data, search the latest news, or perform email actions

1️⃣ Ask the agent about specific information, facts, quotes, or details that are stored in the uploaded documents. E.g. What should be documented during incident response?

2️⃣ Ask the agent about recent news and current information from web sources. E.g. What does BDSG say about data breaches and are there any recent cases?

3️⃣ Ask the agent to summarize the document or information related to the documents and email it to you. E.g. I need a short summary of the Berkshire Hathaway letter, please send it to my email [user@example.com].

4️⃣ Ask the agent to update you on your recent emails. E.g. I’d like to know the content of the latest email from [username].

5️⃣ Ask the agent to create a draft of the email. E.g. Please create an email draft of the [document] summary.

## 🌟 Adapt this template for your own use case

- **Enterprise workflows** - Google Docs processing with automated communications
- **Research teams** - Document analysis with automatic report distribution
- **Customer success** - Intelligent document search with follow-up email automation
- **Content operations** - Document summarization with stakeholder notifications
- **Compliance workflows** - Policy queries with automated alert systems

⚠️ The current multi-agent architecture comes with certain trade-offs: the sequential nature of agent hand-offs can increase latency compared to single calls, and the full conversation history is not shared between all sub-agents.

💻 📞 Get in touch if you want to customise this workflow or have any questions.
by Rahul Joshi
Automate user consent collection with a seamless workflow that captures form submissions, stores them securely, and sends professional AI-generated confirmation emails 📧🤖. This template streamlines compliance by logging every consent action directly into Google Sheets while also notifying your internal team instantly through Slack. With built-in Azure OpenAI email generation, every user receives a personalized, secure, trust-building confirmation without manual intervention. Perfect for DPDP/GDPR-aligned consent management systems.

## What This Template Does

- Receives user consent submissions via a Webhook trigger 🚀
- Extracts name, email, version, and timestamp for structured processing 🔍
- Saves or updates the record in Google Sheets for audit and compliance tracking 📄
- Generates a responsive HTML thank-you email using Azure OpenAI 🤖
- Formats the output into a clean subject + email body via a Code node 🧩
- Sends the user a confirmation email via SMTP 📧
- Converts HTML into a Slack-friendly message for internal alerts 🔔
- Posts the formatted notification to your Slack channel for instant visibility 💬

## Key Benefits

- ✅ Fully automated consent logging—no manual tracking required
- ✅ AI-generated HTML emails ensure professional, consistent communication
- ✅ Real-time Slack alerts keep your team informed instantly
- ✅ Compliant with DPDP/GDPR consent tracking best practices
- ✅ Easy to integrate into any website or mobile app via webhook
- ✅ Ensures audit-ready records with accurate timestamps and version history

## Features

- Webhook trigger for instant consent capture
- Google Sheets integration for centralized data storage
- Azure OpenAI-powered HTML email generation
- SMTP email delivery with dynamic fields
- Slack API integration for real-time notifications
- Custom JS transformations for email + Slack formatting
- Automatic timestamp insertion for compliance

## Requirements

- Google Sheets OAuth2 credentials
- Azure OpenAI API key
- SMTP email credentials (e.g., Gmail, Outlook, SendGrid)
- Slack API credentials
- A consent form or preference center that can send POST requests

## Target Audience

- SaaS founders needing user consent management
- EdTech, HealthTech, FinTech, and compliance-heavy platforms
- Data Protection & Privacy teams (DPDP/GDPR compliance)
- Automation consultants building consent or preference centers
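The Code node that formats the AI output into a clean subject plus email body might look roughly like this. The expected model output shape (a `Subject:` line followed by the HTML body) is an assumption; the actual node depends on how the Azure OpenAI prompt is written:

```javascript
// Split AI output of the form "Subject: ...\n\n<html body>" into the
// two fields the SMTP node needs. Falls back to a default subject if
// the model did not follow the format.
function splitEmail(aiOutput) {
  const match = aiOutput.match(/^Subject:\s*(.+?)\n+([\s\S]+)$/);
  if (!match) return { subject: "Thank you for your consent", body: aiOutput };
  return { subject: match[1].trim(), body: match[2].trim() };
}
```

The fallback branch keeps the workflow from failing when the model occasionally returns the body without a subject line.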
by Yassin Zehar
## Description

This workflow transforms raw SaaS metrics into a fully automated Product Health Monitoring & Incident Management system. It checks key revenue and usage metrics every day (such as churn MRR and feature adoption), detects anomalies using a statistical baseline, and automatically creates structured incidents when something unusual happens.

When an anomaly is found, the workflow logs it into a central incident database, alerts the product team on Slack and by email, enriches the incident with context and AI-generated root-cause analysis, and produces a daily health report for leadership. It helps teams move from passive dashboard monitoring to a proactive, automated system that surfaces real issues with clear explanations and recommended next steps.

## Context

Most SaaS teams struggle with consistent product health monitoring:

- Metrics live in dashboards that people rarely check proactively
- Spikes in churn or drops in usage are noticed days later
- There is no unified system to track, investigate, and report on incidents
- Post-mortems rely on memory rather than structured data
- Leadership often receives anecdotal updates instead of reliable daily reporting

This workflow solves that by:

- Tracking core health metrics daily (revenue and usage)
- Detecting anomalies based on recent baselines, not arbitrary thresholds
- Logging all incidents in a consistent format
- Notifying teams only when action is needed
- Generating automated root-cause insights using AI + underlying database context
- Producing a daily “Product Health Report” for decision-makers

The result: faster detection, clearer understanding, and better communication across product, growth, and leadership teams.
## Target Users

This template is ideal for:

- Product Managers & Product Owners
- SaaS founders and early-stage teams
- Growth, Analytics, and Revenue Ops teams
- PMO / Operations teams managing product performance
- Any organization wanting a lightweight incident monitoring system without building internal tooling

## Technical Requirements

You will need:

- A Postgres / Supabase database containing your product metrics
- Slack credentials for alerts
- Gmail credentials for email notifications
- (Optional) Notion credentials for incident documentation and daily reports
- An OpenAI / Anthropic API key for AI-based root cause analysis

## Workflow Steps

The workflow is structured into four main sections:

**1) Daily Revenue Health**

Runs once per day, retrieves recent revenue metrics, identifies unusual spikes in churn MRR, and creates incidents when needed. If an anomaly is detected, a Slack alert and email notification are sent immediately.

**2) Daily Usage Health**

Monitors feature usage metrics to detect sudden drops in adoption or engagement. Incidents are logged with severity, context, and alerts to the product team.

**3) Root Cause & Summary**

For every open incident, the workflow:

- Collects additional context from the database (e.g., churn by country or plan)
- Uses AI to generate a clear root cause hypothesis and suggested next steps
- Sends a summarized report to Slack and email
- Updates the incident status accordingly

**4) Daily Product Health Report**

Every morning, the workflow compiles all incidents from the previous day into:

- A daily summary email for leadership
- A Notion page for documentation and historical tracking

This ensures stakeholders have clear visibility into product performance trends.
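The "statistical baseline" check behind sections 1 and 2 can be sketched as a z-score against a recent window of values. The threshold of 2.5 and the return-field names are assumptions for illustration, not the template's exact logic:

```javascript
// Compare today's metric to the mean/std of a recent history window.
function detectAnomaly(history, latest, zThreshold = 2.5) {
  const mean = history.reduce((sum, v) => sum + v, 0) / history.length;
  const variance =
    history.reduce((sum, v) => sum + (v - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance);
  // How many standard deviations away is the latest value?
  const z = std === 0 ? 0 : (latest - mean) / std;
  return { baseline: mean, z, isAnomaly: Math.abs(z) >= zThreshold };
}
```

Because the baseline is recomputed from recent history each run, the detector adapts to the metric's normal level instead of relying on a fixed, arbitrary threshold, which is exactly the point made in the Context section.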
## Key Features

- Automated anomaly detection across revenue and usage metrics
- Centralized incident logging with metadata and raw context
- Severity scoring based on deviation from historical baselines
- Slack and email alerts for fast response
- AI-generated root cause analysis with recommended actions
- Daily product health reporting for leadership and PM teams
- Optional Notion integration for incident documentation
- System logging for observability and auditability
- Fully modular: you can add more metrics, alert channels, or analysis steps easily

## Expected Output

When running, the workflow will generate:

- Structured incident records in your database
- Slack alerts for revenue or usage anomalies
- Email notifications with severity, baseline vs actual, and context
- AI-generated root cause summaries
- A daily health report summarizing all incidents
- (Optional) Notion pages for both incidents and daily reports
- System logs recording successful executions

## Tutorial video

Watch the YouTube tutorial video.

## About me

I’m Yassin, a Project & Product Manager scaling tech products with data-driven project management.

📬 Feel free to connect with me on LinkedIn.
by Muhammad Anas Farooq
# n8n Gmail AI Auto-Labeler

> An intelligent n8n workflow that automatically classifies and labels Gmail emails using Google Gemini AI, keeping your inbox organized with zero manual effort.

This workflow uses AI-powered classification to analyze email content, learn from sender patterns, and automatically apply appropriate labels while archiving processed emails.

## How It Works

1. **Trigger:** The workflow runs automatically every minute to check for new unread emails (or manually for bulk processing).
2. **Check for Existing Labels:** Before processing, it verifies if the email already has an AI-assigned label to avoid duplicate processing.
3. **AI Classification:** If unlabeled, the AI agent analyzes the email using:
   - **Sender History Tool** - Fetches up to 10 previous emails from the same sender to identify patterns
   - **80% Majority Rule** - If 80%+ of the sender's past emails have the same label, strongly prefers that category
   - **Label Examples Tool** - When uncertain, compares the email with existing examples from suspected categories
4. **Smart Decision:** The AI returns a structured JSON response: `{ "label": "Category Name" }`, or `"None"` if no category fits.
5. **Apply & Archive:**
   - **Label Applied** → The workflow adds the appropriate Gmail label to the thread.
   - **Auto-Archive** → Removes the email from INBOX (archives it) to maintain zero-inbox.
6. **Loop:** Processes the next email in the batch, ensuring all new emails are classified.

## Requirements

- **Gmail OAuth2 Credentials** - Connected Gmail account with API access.
- **Google Gemini API Key** - Get it here. Free tier: 15 requests/minute.
- **Gmail Labels** - Must be created in Gmail exactly as listed:
  - Meetings
  - Income
  - Inquiries
  - Notify / Verify
  - Expenses
  - Orders / Deliveries
  - Trash Likely

## How to Use

1. **Import the Workflow:** Copy the provided JSON file. In your n8n instance → click Import Workflow → select the JSON file.
2. **Create Gmail Labels:** Open Gmail → Settings → Labels → Create new labels. Use the exact names listed above (case-sensitive).
3. **Get Your Label IDs:** In the workflow, click the "When clicking 'Execute workflow'" manual trigger. Execute the "Get Labels Info" node only. Copy each label's ID (format: Label_1234567890123456789).
4. **Update Code Nodes with Your Label IDs:**

   Node 1: "Check Label Existence"

   ```javascript
   const labelMap = {
     "Label_YOUR_ID_HERE": "Meetings",
     "Label_YOUR_ID_HERE": "Inquiries",
     "Label_YOUR_ID_HERE": "Notify / Verify",
     "Label_YOUR_ID_HERE": "Expenses",
     "Label_YOUR_ID_HERE": "Orders / Deliveries",
     "Label_YOUR_ID_HERE": "Trash Likely"
   };
   ```

   Node 2: "Convert Label to Label ID"

   ```javascript
   const labelToId = {
     "Meetings": "Label_YOUR_ID_HERE",
     "Inquiries": "Label_YOUR_ID_HERE",
     "Notify / Verify": "Label_YOUR_ID_HERE",
     "Expenses": "Label_YOUR_ID_HERE",
     "Orders / Deliveries": "Label_YOUR_ID_HERE",
     "Trash Likely": "Label_YOUR_ID_HERE"
   };
   ```

5. **Set Up Credentials:** Gmail OAuth2 → authorize your Gmail account in n8n. Google Gemini API → add your API key in n8n credentials.
6. **Test the Workflow:** Send yourself test emails with clear content (e.g., invoice, meeting invite). Use the manual trigger to process them. Verify labels are applied correctly.
7. **Activate for Auto Mode:** Toggle the workflow to Active. New unread emails will be processed automatically every minute.

## Notes

- **Dual Execution Modes**:
  - Auto Mode - Gmail Trigger polls the inbox every minute for unread emails (real-time processing).
  - Manual Mode - Use the manual trigger to bulk process existing emails (adjust the limit in the "Get many messages" node).
- **AI Learning from Patterns**: The workflow applies an 80% majority rule - if 80% or more of a sender's historical emails share the same label, the AI strongly prefers that category for new emails from that sender. This creates intelligent sender-based routing over time.
- **Skip Already Labeled Emails**: The "Check Label Existence" node prevents re-processing emails that already have an AI-assigned label, ensuring efficient execution and avoiding duplicate work.
Structured AI Output**: Uses a Structured Output Parser to ensure the AI always returns valid JSON: { "label": "Category" }. If uncertain, returns { "label": "None" } and the email stays in inbox. Background Archiving**: After labeling, emails are automatically removed from INBOX (archived). Maintains a zero-inbox workflow while preserving emails under their labels. Rate Limits**: Google Gemini free tier: 15 requests/minute. Adjust polling frequency if hitting limits. Example Behavior Minute 1**: New invoice email arrives → AI fetches sender history → 85% were labeled "Expenses" → applies "Expenses" label → archives email. Minute 2**: Meeting invite arrives → No sender history → AI analyzes content (Zoom link, time) → applies "Meetings" label → archives email. Minute 3**: Promotional email arrives → AI compares with "Trash Likely" examples → applies label → archives email. Minute 4**: Already-labeled email detected → skipped silently. Label Categories | Label | Description | |-------|-------------| | Meetings | Calendar invites, Zoom/Meet links, appointments, scheduled events | | Expenses | Bills, invoices, receipts, payment reminders, subscription renewals | | Income | Payments received, payouts, deposits, earnings notifications | | Notify / Verify | Verification codes, login alerts, 2FA codes, account notifications | | Orders / Deliveries | Order confirmations, shipping updates, tracking numbers, deliveries | | Inquiries | Business outreach, sales proposals, partnerships, cold emails | | Trash Likely | Spam, newsletters, promotions, marketing blasts, ads | > If no category fits clearly, the email returns "None" and remains in the inbox. Customization Change Polling Frequency**: Edit the "Gmail Trigger" node → pollTimes → mode (e.g., every5Minutes). Adjust Email Limit**: Modify limit: 10 in "Get many messages" node for manual bulk processing. Add Custom Labels**: Create in Gmail → Get ID → Update both Code nodes + AI system prompt. 
- **Modify the 80% Rule**: Edit the AI agent's system message to adjust the majority threshold.
- **Increase Tool Limits**: Change `limit: 10` in the Gmail tool nodes to fetch more historical data.

Author: Muhammad Anas Farooq
by Linearloop Team
🖥️ Automated Website Uptime Monitor with Email Alerts & GitHub Status Page Update

This n8n workflow continuously monitors your website's availability, sends email alerts when the server goes down, and automatically updates a status page (`index.html`) in your GitHub repository to reflect the live status.

📌 Good to Know

- The workflow checks your website every 2 minutes (interval configurable).
- If the website is down (503, bad response, or error), it sends an email alert and updates the GitHub-hosted status page to show **Down**.
- If the website is up (200), it updates the GitHub-hosted status page to show **Up**.
- The email notification includes an HTML-formatted alert page.
- You can use GitHub Pages to host the status page publicly.

ℹ️ What is GitHub Pages?

GitHub Pages is a free hosting service provided by GitHub that lets you publish static websites (HTML, CSS, JS) directly from a GitHub repository. You can use it to make your `index.html` status page publicly accessible at a URL like `https://<USERNAME>.github.io/status`.

⚡ How to Set Up GitHub Pages for Your Status Page

1. Create a new repository on GitHub (recommended name: `status`).
2. Add a blank `index.html` file (the n8n workflow will later update this file).
3. Go to your repository → Settings → Pages.
4. Under Source, select the branch (`main` or `master`) and the `/root` folder.
5. Save the changes. Your status page will now be live at `https://<USERNAME>.github.io/status`.

✅ Prerequisites

- An n8n instance (self-hosted or cloud).
- A GitHub account & repository (to host the status page).
- A Gmail account (or any email service supported by n8n; the example uses Gmail).
- Access to the target website URL you want to monitor.

⚙️ How it Works

- **Schedule Trigger** → Runs every 2 minutes.
- **HTTP Request** → Pings your website URL.
- **Switch Node** → Evaluates the response status (200 OK vs. error/503).
- **Code Node** → Generates a dynamic HTML status page (Up/Down).
- **GitHub Repo & File** → The repository should be `https://github.com/<OWNER_NAME>/status` (recommended) and must contain a blank file named `index.html` before the flow is triggered.
- **GitHub Node** → Updates/commits the `index.html` file in your repository.
- **Gmail Node** → Sends an email alert if the site is down.

🚀 How to Use

1. Import the workflow JSON into your n8n instance.
2. Configure credentials for GitHub (Personal Access Token with repo permissions) and Gmail (or your preferred email service).
3. Replace the following: `https://app.yourdomain.com/health` → your own website URL; `example@gmail.com` → your email address (or distribution list); GitHub repo details → the repository where `index.html` will live.
4. Deploy the workflow.
5. (Optional) Enable GitHub Pages on your repo to serve `index.html` as a live status page.

🛠 Requirements

- n8n v1.0+
- GitHub personal access token
- Gmail API credentials (or SMTP/email service of your choice)

🎨 Customising this Workflow

- **Interval** → Change the schedule from 2 minutes to any desired frequency.
- **Email Content** → Modify the HTML alert template in the Gmail node.
- **Status Page Styling** → Edit the HTML/CSS in the Code node to match your branding.
- **Error Handling** → Extend the Switch node for other status codes (e.g., 404, 500).
- **Multiple Websites** → Duplicate the HTTP Request + Switch nodes for multiple URLs.

👤 Who Can Use It?

- **DevOps & SRE Engineers** → Automated uptime monitoring.
- **Freelancers/Developers** → Monitoring client websites.
- **Startups & SMEs** → A free, lightweight status page without paid tools.
- **Educators/Students** → A hands-on learning project with n8n.

🌟 Key Features

- 🔄 Automated uptime checks (configurable interval).
- 📧 Email notifications on downtime.
- 📝 Dynamic HTML status page generation.
- 🌍 GitHub Pages integration for public visibility.
- ⚡ Lightweight & cost-effective (no paid monitoring tool needed).

🔗 Tools Integration

- **n8n** – Orchestration & automation.
- **GitHub** – Version control + hosting of the status page.
- **Gmail** – Email notifications.
- **HTTP Request** – Website availability check.

📈 Example Use Cases

- Personal website monitoring with a public status page.
- Monitoring SaaS apps & notifying support teams.
- Internal company services uptime dashboard.
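The Code node's dynamic HTML generation can be sketched roughly as follows. This is an illustrative version, not the template's exact code; the function name, markup, and colors are assumptions, and only the Up/Down behavior matches the description:

```javascript
// Sketch: build the index.html content for the status page
// from the HTTP check result (isUp) and the current time.
function buildStatusPage(isUp) {
  const status = isUp ? "Up" : "Down";
  const color = isUp ? "#2ecc71" : "#e74c3c"; // green for Up, red for Down
  const checkedAt = new Date().toISOString();
  return [
    "<!DOCTYPE html>",
    "<html><head><title>Service Status</title></head>",
    '<body style="font-family: sans-serif; text-align: center;">',
    `<h1 style="color: ${color};">Service is ${status}</h1>`,
    `<p>Last checked: ${checkedAt}</p>`,
    "</body></html>",
  ].join("\n");
}
```

The GitHub node then commits the returned string over the existing `index.html`, so GitHub Pages serves the latest status on the next request.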
by Rahul Joshi
Description

Automatically ingests new employee data, extracts relevant signals, scores attrition risk, and notifies HR/managers with structured insights and recommended actions. Built on Azure OpenAI Chat with a Structured Output Parser and true/false routing for escalation.

What This Template Does

- **Trigger for new data**: Starts when a new profile, survey, or report file is added.
- **Download & extract**: Retrieves the file and converts PDFs/text into analyzable content.
- **Analyze signals**: Uses Azure OpenAI Chat to interpret sentiment, workload, performance notes, feedback, and changes (role, compensation, manager, location).
- **Structured parsing**: Maps the output to fields such as risk_score, risk_level, key_drivers, recommended_interventions, and escalation_required.
- **Logic routing**: Applies thresholds (e.g., risk_score ≥ 0.7) and flags cases for urgent follow-up.
- **Email alerts**: Drafts and sends tailored notifications to HR/managers with action steps.

Key Signals Considered

- **Sentiment & language**: Negative tone, burnout cues, disengagement in feedback.
- **Activity trends**: Drop in participation, delayed responses, meeting absenteeism.
- **Performance & goals**: Recent rating changes, missed OKRs, quality issues.
- **Role & compensation**: Lateral moves, pay gaps vs. market, stalled progression.
- **Managerial context**: Team churn, conflict mentions, low recognition frequency.

Features

- **Azure OpenAI Chat**: Interprets unstructured text into consistent risk fields.
- **Structured Output Parser**: Guarantees a schema for downstream decisions.
- **Conditional Logic (true/false)**: Threshold checks for escalation.
- **Memory**: Maintains context across multiple files per employee for trend-aware scoring.
- **Calculate avg span**: Computes tenure or recency metrics used in risk scoring.
- **Email Composer & Sender**: Generates and dispatches HR-ready alerts.

Requirements

- An n8n instance with access to employee data sources (Drive, inbox, HR folder).
- Extract From PDF configured for clean text output.
- Azure OpenAI credentials (e.g., GPT-4o-mini) connected to the Chat Model.
- An email service (Gmail/SMTP) set up in n8n Credentials.
- A parser schema aligned to your People Analytics fields (risk_score, drivers, actions).
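The true/false routing described above can be sketched as a small predicate over the parsed fields. The field names (`risk_score`, `escalation_required`) come from the template's schema; the function name is illustrative and the 0.7 cutoff is the example threshold from the description:

```javascript
// Sketch: decide escalation from the Structured Output Parser's fields.
// Escalate when the AI explicitly flags the case, or when the numeric
// risk score crosses the configured threshold (0.7 in the example).
function shouldEscalate(result, threshold = 0.7) {
  return result.escalation_required === true
    || result.risk_score >= threshold;
}
```

In the workflow this maps to the true branch (urgent HR/manager alert) versus the false branch (routine logging), which is why the parser's schema guarantee matters: the predicate only works if both fields are always present and typed consistently.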