by IranServer.com
Monitor VPS security with AI analysis via SSH and Telegram alerts

This n8n template automatically monitors your VPS for suspicious processes and network connections using AI analysis. It connects to your server via SSH, analyzes running processes, and sends Telegram alerts when potential security threats are detected.

**Who's it for**
- System administrators managing VPS/dedicated servers
- DevOps teams monitoring production environments
- Security-conscious users who want automated threat detection
- Anyone running services on Linux servers who wants proactive monitoring

**How it works**
The workflow runs on a schedule and performs the following steps:
1. **SSH Connection**: Connects to your VPS via SSH and executes system commands to gather process and network information
2. **Data Collection**: Runs `ps aux --sort=-%cpu,-%mem && ss -tulpn` to capture running processes sorted by CPU/memory usage and active network connections
3. **AI Analysis**: Uses OpenAI's language model to analyze the collected data for suspicious patterns, malware signatures, unusual network connections, or abnormal resource usage
4. **Structured Output**: Parses AI responses into structured data identifying malicious and suspicious activities with explanations
5. **Alert System**: Sends immediate Telegram notifications when malicious processes are detected

**Requirements**
- **SSH access** to your VPS with valid credentials
- **OpenAI API key** for AI analysis (uses the GPT-4 mini model)
- **Telegram bot** and chat ID for receiving alerts
- Linux-based VPS or server to monitor

**How to set up**
1. **Configure SSH credentials**: Set up the SSH connection to your VPS in the "Execute a command" node
2. **Add OpenAI API key**: Configure your OpenAI credentials in the "OpenAI Chat Model" node
3. **Set up the Telegram bot**:
   - Create a Telegram bot and get the API token
   - Get your Telegram chat ID
   - Update `admin_telegram_id` in the "Edit Fields" node with your chat ID
   - Configure Telegram credentials in the "Send a text message" node
4. **Adjust the schedule**: Modify the "Schedule Trigger" to set your preferred monitoring frequency
5. **Test the workflow**: Run a manual execution to ensure all connections work properly

**How to customize the workflow**
- **Change monitoring frequency**: Adjust the schedule trigger interval (hourly, daily, etc.)
- **Modify analysis criteria**: Update the AI prompt in "Basic LLM Chain" to focus on specific security concerns
- **Add more commands**: Extend the SSH command to include additional system information such as disk usage, log entries, or specific service status
- **Multiple servers**: Duplicate the SSH execution nodes to monitor multiple VPS instances
- **Different alert channels**: Replace or supplement Telegram with email, Slack, or Discord notifications
- **Custom filtering**: Add conditions to filter out known safe processes or focus on specific suspicious patterns

**Good to know**
- The AI model analyzes both running processes and network connections for comprehensive monitoring
- Each analysis request costs approximately $0.001–0.01 USD depending on system activity
- The workflow only sends alerts when malicious or suspicious activity is detected, reducing notification noise
- SSH commands require appropriate permissions on the target server
- Consider running this workflow from a secure, always-on n8n instance for continuous monitoring
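The raw output of the Data Collection step can optionally be pre-parsed into structured rows before it reaches the AI node, which tends to make the analysis cheaper and more reliable. A minimal Code-node sketch (not part of the template; field names are illustrative) might look like this:

```javascript
// Hypothetical pre-parser for the raw `ps aux` output captured over SSH.
// Assumes the default 11-column `ps aux` header; COMMAND may contain
// spaces, so only the first 10 columns are split.
function parsePsAux(stdout) {
  const lines = stdout.trim().split("\n");
  return lines.slice(1).map((line) => {
    const cols = line.trim().split(/\s+/);
    return {
      user: cols[0],
      pid: Number(cols[1]),
      cpu: parseFloat(cols[2]),
      mem: parseFloat(cols[3]),
      command: cols.slice(10).join(" "),
    };
  });
}

const sample =
  "USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\n" +
  "root 1337 97.0 1.2 123456 7890 ? Ssl 10:00 9:59 ./xmrig --url pool.example.com";
const processes = parsePsAux(sample);
```

Feeding the AI a JSON array like this, instead of the raw terminal text, also makes it easy to drop known-safe processes before analysis.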
by Julian Reich
This n8n template demonstrates how to automatically convert voice messages from Telegram into structured, searchable notes in Google Docs using AI transcription and intelligent tagging.

Use cases are many: try capturing ideas on the go while walking, recording meeting insights hands-free, creating voice journals, or building a personal knowledge base from spoken thoughts!

**Good to know**
- OpenAI Whisper transcription costs approximately $0.006 per minute of audio
- ChatGPT tagging adds roughly $0.001–0.003 per message depending on length
- The workflow supports both German and English voice recognition
- Text messages are also supported: they bypass transcription and go directly to AI tagging
- Perfect companion: combine with the **Weekly AI Review** workflow for automated weekly summaries of all your notes!

**How it works**
1. Telegram receives your voice message or text and triggers the workflow
2. An IF node detects whether you sent audio or text content
3. For voice messages: Telegram downloads the audio file and OpenAI Whisper transcribes it to text
4. For text messages: the content is passed directly to the next step
5. ChatGPT analyzes the content and generates up to 3 relevant keywords (Work, Ideas, Private, Health, etc.)
6. A function node formats everything with Swiss timestamps, message-type indicators, and a clean structure
7. The formatted entry is automatically inserted into your Google Doc with date, keywords, and full content
8. Telegram sends you a confirmation with the transcribed/original text so you can verify accuracy

**How to use**
- Simply send a voice message or text to your Telegram bot; the workflow handles everything automatically
- Manual execution can be used for testing, but in production the workflow runs on every message
- Voice messages work best with clear speech in quiet environments for optimal transcription

**Requirements**
- Telegram bot token and configured webhook
- OpenAI API account for Whisper transcription and ChatGPT tagging
- Google Docs API access for document writing
- A dedicated Google Doc where all notes will be collected

**Customising this workflow**
- Adjust the AI prompt to use different tagging categories relevant to your workflow (e.g., project names, priorities, emotions)
- Add multiple Google Docs for different contexts (work vs. private notes)
- Include additional processing such as sentiment analysis or automatic task extraction
- Connect to other apps like Notion, Obsidian, or your preferred note-taking system
- And don't forget to also implement the complementary **Weekly AI Review** workflow!
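The formatting step described above can be sketched as a small Code-node function. This is a minimal illustration, not the template's exact code; the input field names (`text`, `keywords`, `isVoice`) are assumptions:

```javascript
// Hypothetical sketch of the formatting function node: builds the note
// entry from the (transcribed) text and the AI keywords, using a
// Swiss-style timestamp in the Europe/Zurich time zone.
function formatEntry({ text, keywords, isVoice }, now = new Date()) {
  const timestamp = new Intl.DateTimeFormat("de-CH", {
    day: "2-digit", month: "2-digit", year: "numeric",
    hour: "2-digit", minute: "2-digit",
    timeZone: "Europe/Zurich",
  }).format(now); // e.g. "05.02.2025, 14:30"
  const typeIndicator = isVoice ? "🎤 Voice" : "⌨️ Text";
  return `${timestamp} | ${typeIndicator} | ${keywords.join(", ")}\n${text}\n`;
}

const entry = formatEntry(
  { text: "Call the dentist tomorrow", keywords: ["Private", "Health"], isVoice: true },
  new Date("2025-02-05T13:30:00Z")
);
```

The returned string is what gets appended to the Google Doc, so adjusting the layout of a note is a one-line change here.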
by Matt Chong
**Who is this for?**
If you're overwhelmed with incoming emails but only want to be notified about the essentials, this workflow is for you. Perfect for busy professionals who want a short AI summary of new emails delivered directly to Slack.

**What does it solve?**
Reading every email wastes time. This workflow filters out the noise by:
- Automatically summarizing each unread Gmail email using AI
- Sending you just the sender and a short summary in Slack
- Helping you stay focused without missing key information

**How it works**
Every minute, the workflow checks Gmail for unread emails. When it finds one, it:
1. Extracts the email content
2. Sends it to OpenAI's GPT model for a 250-character summary
3. Delivers the message directly to Slack

**How to set up**
1. Connect your accounts: Gmail (OAuth2), OpenAI (API key or connected account), Slack (OAuth2)
2. Edit the Slack node: choose the Slack user/channel to send alerts to
3. Optional: adjust the AI prompt in the "AI Agent" node to modify the summary style
4. Optional: change the polling frequency in the Gmail Trigger node

**How to customize this workflow to your needs**
- Edit the AI prompt to highlight urgency, include specific keywords, or extend/reduce the summary length
- Modify the Slack message format (add emojis, tags, or links)
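As a rough sketch of the final step, the Slack alert can be assembled from the sender and the AI summary while enforcing the 250-character cap (the function and field names here are illustrative, not the template's exact schema):

```javascript
// Hypothetical sketch: build the Slack message from sender + AI summary,
// hard-capping the summary at 250 characters in case the model runs long.
function buildSlackAlert(sender, summary, maxLen = 250) {
  const clipped =
    summary.length > maxLen ? summary.slice(0, maxLen - 1) + "…" : summary;
  return `📧 *${sender}*\n${clipped}`;
}

const msg = buildSlackAlert(
  "alice@example.com",
  "Quarterly report attached; review by Friday."
);
```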
by Matt Chong
**Who is this for?**
If you're going on vacation or away from work and want Gmail to respond to emails intelligently while you're out, this workflow is for you. It's perfect for freelancers, professionals, and teams who want a smarter, more personal out-of-office reply powered by AI.

**What does it solve?**
No more generic autoresponders or missed urgent emails. This AI-powered workflow:
- Writes short, polite, and personalized replies while you're away
- Skips replying to newsletters, bots, or spam
- Helps senders move forward by offering an alternate contact
- Works around your specific time zone and schedule

**How it works**
1. The workflow runs on a schedule (e.g., every 15 minutes).
2. It checks whether you are currently out of office (based on your defined start and end dates).
3. If you are, it looks for unread Gmail messages.
4. For each email, it uses AI to decide if a reply is needed. If yes, it generates a short, friendly out-of-office reply using your settings, sends the reply, and labels the email to avoid duplicate replies.

**How to set up**
1. In the Set node: define your out-of-office start and end times in ISO 8601 format (e.g., `2025-08-19T07:00:00+02:00`), set your timezone (e.g., `Europe/Madrid`), and add your backup contact's name and email.
2. In the Gmail nodes: connect your Gmail account using OAuth2 credentials, and replace the label ID in the final Gmail node with your own label (e.g., "Auto-Replied").
3. In the Schedule Trigger node: set how often the workflow should check for new emails (e.g., every 15 minutes).

**How to customize this workflow to your needs**
- Adjust the prompt in the AI Agent node to change the tone or add more rules.
- Switch to a different timezone or update the return dates as needed.

This workflow keeps you professional even while you're offline, and saves you from coming back to an email mess.
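The out-of-office check in step 2 boils down to comparing "now" against the two ISO 8601 values from the Set node. A minimal sketch (variable names are illustrative):

```javascript
// Hypothetical sketch of the out-of-office window check. ISO 8601 strings
// with an explicit offset (e.g. +02:00) parse to an absolute instant, so
// the comparison works regardless of where the n8n server runs.
function isOutOfOffice(startIso, endIso, now = new Date()) {
  const start = new Date(startIso);
  const end = new Date(endIso);
  return now >= start && now <= end;
}

const away = isOutOfOffice(
  "2025-08-19T07:00:00+02:00",
  "2025-08-29T18:00:00+02:00",
  new Date("2025-08-25T12:00:00Z")
);
```

Because the offset is embedded in the string, changing your timezone only means updating the `+02:00` suffix, not the comparison logic.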
by Automate With Marc
AI Agent MCP for Email & News Research

Build a chat-first, MCP-powered research and outreach agent. This workflow lets you ask questions in an n8n chat; the agent then researches news (via Tavily + Perplexity through an MCP server) and drafts emails (via Gmail through a separate MCP server). It uses OpenAI for reasoning and short-term memory for coherent, multi-turn conversations.

Watch build-along videos for workflows like these at: www.youtube.com/@automatewithmarc

**What this template does**
- Chat-native trigger: start a conversation and ask for research or an email draft.
- MCP client tools: the agent talks to two MCP servers, one for email work and one for news research.
- News research stack: uses Tavily (search) and Perplexity (LLM retrieval/answers) behind a News MCP server.
- Email stack: uses the Gmail Tool to generate and send messages via an Email MCP server.
- Reasoning + memory: OpenAI Chat Model + Simple Memory for context-aware, multi-step outputs.

**How it works (node map)**
1. When chat message received: collects your prompt and routes it to the agent.
2. AI Agent (system prompt = "helpful email assistant"): orchestrates tools via the MCP Clients.
3. OpenAI Chat Model: reasoning/planning for research or email drafting.
4. Simple Memory: keeps recent chat context for follow-ups.
5. News MCP Server: exposes the Tavily Tool (Search) and Perplexity Tool (Ask) for up-to-date findings.
6. Email MCP Server: exposes the Gmail Tool (To, Subject, Message via AI fields) to send or draft emails.

The MCP Clients (News/Email) plug into the Agent, so a single chat prompt can research and then draft/send emails in one flow.

**Requirements**
- n8n (Cloud or self-hosted)
- OpenAI API key for the Chat Model (set on the node)
- Tavily, Perplexity, and Gmail credentials (connected on their respective tool nodes)
- Publicly reachable MCP server endpoints (provided in the MCP Client nodes)

**Setup (quick start)**
1. Import the template and open it in the editor.
2. Connect credentials on the OpenAI, Tavily, Perplexity, and Gmail tool nodes.
3. Confirm the MCP endpoints in both MCP Client nodes (News/Email) and leave the transport as httpStreamable unless you have special requirements.
4. Run the workflow. In chat, try:
   - "Find today's top stories on Kubernetes security and draft an intro email to Acme."
   - "Summarize the latest AI infra trends and email a 3-bullet update to my team."

**Inputs & outputs**
- Input: a natural-language prompt via the chat trigger.
- Tools used: News MCP (Tavily + Perplexity), Email MCP (Gmail).
- Output: a researched summary and/or a drafted/sent email, returned in the chat and executed via Gmail when requested.

**Why teams will love it**
- One prompt → research + outreach: no tab-hopping between tools.
- Up-to-date answers: pulls current info through Tavily/Perplexity.
- Email finalization: converts findings into send-ready drafts via Gmail.
- Context-aware: memory keeps threads coherent across follow-ups.

**Pro tips**
- Use clear verbs in your prompt: "Research X, then email Y with Z takeaways."
- For safer runs, point Gmail to a test inbox first (or disable sending and only create drafts).
- Add guardrails in the Agent's system message to match your voice/tone.
by Br1
**Who's it for**
This workflow is designed for developers, data engineers, and AI teams who need to migrate a Pinecone Cloud index into a Weaviate Cloud class index without recalculating the vectors (embeddings). It's especially useful if you are consolidating vector databases, moving from Pinecone to Weaviate for hybrid search, or preparing to deprecate Pinecone.

⚠️ Note: the dimensions of the two indexes must match.

**How it works**
The workflow automates migration by batching, formatting, and transferring vectors along with their metadata:
1. **Initialization**: uses Airtable to store the pagination token. The token starts with a record initialized as INIT (Name=INIT, Number=0).
2. **Pagination handling**: reads batches of vector IDs from the Pinecone index using `/vectors/list`, resuming from the last stored token.
3. **Vector fetching**: for each batch, retrieves embeddings and metadata fields from Pinecone via `/vectors/fetch`.
4. **Data transformation**: two Code nodes (Prepare Fetch Body and Format2Weaviate) structure the body of each HTTP request and map metadata into Weaviate-compatible objects.
5. **Data loading**: inserts embeddings and metadata into the target Weaviate class through its REST API.
6. **State persistence**: updates the pagination token in Airtable, ensuring the next run resumes from the correct point.
7. **Scheduling**: the workflow runs on a defined schedule (e.g., every 15 seconds) until all data has been migrated.

**How to set up**

Airtable setup:
- Create a Base (e.g., Cycle) and a Table (e.g., NextPage) with two columns:
  - Name (text): stores the pagination token.
  - Number (number): stores the row ID to update.
- Initialize the first and only row with (INIT, 0).

Source and target configuration:
- Make sure you have a Pinecone index and namespace with embeddings.
- Manually create a target Weaviate cluster and a target Weaviate class with the same vector dimensions.
- In the Parameters node of the workflow, configure the following values:

| Parameter | Description | Example Value |
|---|---|---|
| pineconeIndex | The name of your Pinecone index to read vectors from. | my-index |
| pineconeNamespace | The namespace inside the Pinecone index (leave empty if unused). | default |
| batchlimit | Number of records fetched per iteration. Higher = faster migration but heavier API calls. | 100 |
| weaviateCluster | REST endpoint of your Weaviate Cloud instance. | https://dbbqrc9itXXXXXXXXX.c0.europe-west3.gcp.weaviate.cloud |
| weaviateClass | Target class name in Weaviate where objects will be inserted. | MyClass |

Credentials:
- Configure Pinecone API credentials.
- Configure the Weaviate Bearer token.
- Configure the Airtable API key.

Activate:
- Import the workflow into n8n, update the parameters, and start the schedule trigger.

**Requirements**
- Pinecone Cloud account with a configured index and namespace.
- Weaviate Cloud cluster with a class defined and matching vector dimensions.
- Airtable account and base to store pagination state.
- n8n instance with credentials for Pinecone, Weaviate, and Airtable.

**How to customize the workflow**
- Adjust the batchlimit parameter to control performance (higher values = fewer API calls, but heavier requests).
- Adapt the Format2Weaviate Code node if you want to change or expand the metadata stored.
- Replace Airtable with another persistence store (e.g., Google Sheets, PostgreSQL) if preferred.
- Extend the workflow to send migration progress updates via Slack, email, or another channel.
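To illustrate the Format2Weaviate transformation, here is a minimal sketch of mapping a Pinecone `/vectors/fetch` response into Weaviate batch objects. This is not the template's exact code; the property names are illustrative, and since Weaviate object IDs must be UUIDs, the Pinecone ID is carried as a regular property here:

```javascript
// Hypothetical sketch of the Format2Weaviate Code node. Pinecone's
// /vectors/fetch response has the shape { vectors: { "<id>": { id,
// values, metadata } } }; each entry becomes one Weaviate batch object.
function format2Weaviate(fetchResponse, weaviateClass) {
  return Object.values(fetchResponse.vectors).map((v) => ({
    class: weaviateClass,
    vector: v.values,          // embedding copied as-is, never recalculated
    properties: {
      pineconeId: v.id,        // keep the original ID for traceability
      ...(v.metadata || {}),   // carry metadata fields across unchanged
    },
  }));
}

const sample = {
  vectors: {
    "doc-1": { id: "doc-1", values: [0.1, 0.2, 0.3], metadata: { title: "Hello" } },
  },
};
const objects = format2Weaviate(sample, "MyClass");
```

Adapting the metadata mapping (step 4 above) means editing only the `properties` object in this function.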
by WhySoSerious
**What it is**
This workflow listens for new tickets in HaloPSA via webhook, generates a professional AI-powered summary of the issue using Gemini (or another LLM), and posts it back into the ticket as a private note. It's designed for MSPs using HaloPSA who want to reduce triage time and give engineers a clear head start on each support case.

✨ **Features**
• 🔔 Webhook trigger from HaloPSA on new ticket creation
• 🚧 Optional team filter (skip Sales or other queues)
• 📦 Extracts ticket subject, details, and ID
• 🧠 Builds a structured AI prompt with MSP context (NinjaOne, M365, CIPP)
• 🤖 Processes via Gemini or another LLM
• 📑 Cleans & parses JSON output (summary, next step, troubleshooting)
• 🧱 Generates a branded HTML private note (logo + styled sections)
• 🌐 Posts the note back into HaloPSA via API

🔧 **Setup**
1. Webhook: replace WEBHOOK_PATH and paste the generated Production URL into your HaloPSA webhook.
2. Guard filter (optional): change teamName or teamId to skip tickets from specific queues.
3. Branding: replace YOUR_LOGO_URL and "Your MSP Brand" in the HTML note builder.
4. HaloPSA API: in the HTTP node, replace YOUR_HALO_DOMAIN and add your Halo API token (Bearer auth).
5. LLM credentials: set your API key in the Gemini / OpenAI node credentials section.
6. (Optional) Adjust the AI prompt with your own tools or processes.

✅ **Requirements**
• HaloPSA account with API enabled
• Gemini / OpenAI (or other LLM) API key
• SMTP (optional) if you want to extend with notifications

⚡ **Workflow overview**
`🔔 Webhook → 🚧 Guard → 📦 Extract Ticket → 🧠 Build AI Prompt → 🤖 AI Agent (Gemini) → 📑 Parse JSON → 🧱 Build HTML Note → 🌐 Post to HaloPSA`
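The "Cleans & parses JSON output" step matters because LLMs frequently wrap their JSON in Markdown code fences. A minimal sketch of that step (field names follow the note sections above but are illustrative, not the template's exact schema):

```javascript
// Hypothetical sketch of the Parse JSON step: strip any leading/trailing
// Markdown code fence from the LLM output, then parse and normalize the
// fields used to build the HTML note.
function parseLlmJson(raw) {
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, "")
    .replace(/\s*```\s*$/, "")
    .trim();
  const parsed = JSON.parse(cleaned);
  return {
    summary: parsed.summary || "",
    nextStep: parsed.nextStep || "",
    troubleshooting: parsed.troubleshooting || [],
  };
}

const note = parseLlmJson(
  '```json\n{"summary":"Printer offline","nextStep":"Check spooler","troubleshooting":["Restart spooler service"]}\n```'
);
```

Normalizing to defaults (`""`, `[]`) keeps the HTML note builder from crashing when the model omits a section.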
by Alex Huy
**How it works**
This workflow automatically curates and sends a daily AI/tech news digest by aggregating articles from premium tech publications and using AI to select the most relevant and trending stories.

🔄 **Automated News Pipeline**
1. RSS Feed Collection: fetches articles from 14 premium tech news sources (TechCrunch, MIT Tech Review, The Verge, Wired, etc.)
2. Smart Article Filtering: limits articles per source to ensure diverse coverage and prevent single-source domination
3. Data Standardization: cleans and structures article data (title, summary, link, date) for AI processing
4. AI-Powered Curation: uses Google Vertex AI to analyze articles and select the top 10 most relevant/trending stories
5. Newsletter Generation: creates a professional HTML newsletter with summaries and direct links
6. Email Delivery: automatically sends the formatted digest via Gmail

🎯 **Key Features**
- **Premium sources**: curates from 14 top-tier tech publications
- **AI quality control**: intelligent article selection and summarization
- **Balanced coverage**: prevents source bias with smart filtering
- **Professional format**: clean HTML newsletter design
- **Scheduled automation**: daily delivery at customizable times
- **Error resilience**: continues processing even if some feeds fail

**Setup Steps**

1. 🔑 **Required API Access**
- Google Cloud project with the Vertex AI API enabled
- Google service account with the AI Platform Developer role
- Gmail API enabled for email sending

2. ☁️ **Google Cloud Setup**
- Create or select a Google Cloud project
- Enable the Vertex AI API
- Create a service account with these permissions: AI Platform Developer, Service Account User
- Download the service account JSON key
- Enable the Gmail API for the same project

3. 🔐 **n8n Credentials Configuration**
Add these credentials to your n8n instance:
- Google Service Account (for Vertex AI): upload your service account JSON key and name it descriptively (e.g., "Vertex AI Service Account")
- Gmail OAuth2: use your Google account credentials and authorize Gmail API access (required scope: gmail.send)

4. ⚙️ **Workflow Configuration**
- Import the workflow into your n8n instance
- Update node configurations:
  - Google Vertex AI Model: set your Google Cloud project ID
  - Send Newsletter Email: update the recipient email address
  - Daily Newsletter Trigger: adjust the schedule time if needed
- Verify credentials are properly connected to the respective nodes

5. 📰 **RSS Sources Customization (Optional)**
The workflow includes 14 premium tech news sources: TechCrunch (AI & Startups), The Verge (AI section), MIT Technology Review, Wired (AI/Science), VentureBeat (AI), ZDNet (AI topics), AI Trends, Nature (Machine Learning), Towards Data Science, NY Times Technology, The Guardian Technology, BBC Technology, Nikkei Asia Technology

To customize sources:
- Edit the "Configure RSS Sources" node
- Add/remove RSS feed URLs as needed
- Ensure feeds are active and properly formatted

6. 🚀 **Testing & Deployment**
- Manual test: execute the workflow manually to verify the setup
- Check email: confirm the newsletter arrives with proper formatting
- Verify AI output: ensure articles are relevant and well summarized
- Schedule activation: enable the daily trigger for automated operation

💡 **Customization Options**

Newsletter timing:
- Default: 8:00 AM UTC daily
- Modify "triggerAtHour" in the Schedule Trigger node
- Add multiple daily sends if desired

Content focus:
- Adjust the AI prompt in the "AI Tech News Curator" node
- Specify different topics (e.g., focus on startups, enterprise AI, etc.)
- Change the output language or format

Email recipients:
- Update the single recipient in the Gmail node
- Or modify it to send to multiple addresses
- Integrate with mailing list services

Article limits:
- Current: max 5 articles per source
- Modify the filtering logic in the "Filter & Balance Articles" node
- Adjust the total article count in the AI prompt

🔧 **Troubleshooting**

Common issues:
- **RSS feed failures**: individual feed failures won't stop the workflow
- **AI rate limits**: Vertex AI has generous limits, but monitor usage
- **Gmail sending**: ensure the sender email is authorized in Gmail settings
- **Missing articles**: some RSS feeds may be inactive; check the source URLs

Performance tips:
- Monitor execution times during peak RSS activity
- Consider adding delays if hitting rate limits
- Archive old newsletters for reference

This workflow transforms daily news consumption from manual browsing into curated, AI-powered intelligence delivered automatically to your inbox.
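The per-source cap behind "Filter & Balance Articles" can be sketched in a few lines. This is an illustrative implementation, not the template's exact code; article field names are assumptions:

```javascript
// Hypothetical sketch of the source-balancing filter: keep at most
// maxPerSource articles from any single feed so no one publication
// dominates the digest.
function balanceArticles(articles, maxPerSource = 5) {
  const counts = {};
  return articles.filter((a) => {
    counts[a.source] = (counts[a.source] || 0) + 1;
    return counts[a.source] <= maxPerSource;
  });
}

const input = [
  ...Array.from({ length: 8 }, (_, i) => ({ source: "TechCrunch", title: `TC ${i}` })),
  { source: "Wired", title: "W 0" },
];
const balanced = balanceArticles(input);
```

Raising or lowering the "max 5 per source" rule mentioned above is then just the `maxPerSource` argument.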
by Automate With Marc
**Step-By-Step AI Stock Market Research Agent (Beginner)**

Build your own AI-powered daily stock market digest: automatically researched, summarized, and delivered straight to your inbox. This beginner-friendly n8n workflow shows how to combine OpenAI GPT-5, the Decodo scraping tool, and Gmail to produce a concise daily financial update without writing a single line of code.

🎥 Watch a full tutorial and walkthrough of how to build and customize similar workflows at: https://www.youtube.com/watch?v=DdnxVhUaQd4

**What this template does**
Every day, this agent automatically:
1. Triggers on schedule (e.g., 9 a.m. daily).
2. Uses the Decodo Tool to fetch real market headlines from Bloomberg, CNBC, Reuters, Yahoo Finance, etc.
3. Passes the information to GPT-5, which summarizes key events into a clean daily report covering:
   - Major indices (S&P 500, Nasdaq, Dow)
   - Global markets (Europe & Asia)
   - Sector trends and earnings
   - Congressional trading activity
   - Major financial and regulatory news
4. Emails the digest to you in a neat, ready-to-read HTML format.

**Why it's useful (for beginners)**
- Zero coding: everything is configured through n8n nodes.
- Hands-on AI agent logic: learn how a language-model node, memory, and a web-scraping tool work together.
- Practical use case: a real-world agent that automates market intelligence for investors, creators, or business analysts.

**Requirements**
- OpenAI API key (GPT-4/5 compatible)
- Decodo API key (for market data scraping)
- Gmail OAuth2 credential (to send the daily digest)

**Credentials to set in n8n**
- OpenAI API (Chat Model): connect your OpenAI key.
- Decodo API: paste your Decodo access key.
- Gmail OAuth2: connect your Google account and edit the "send to" email address.

**How it works (nodes overview)**
1. Schedule Trigger: starts the workflow at a preset time (default: daily).
2. AI Research Agent: acts as a stock market research assistant; uses GPT-5 via the OpenAI Chat Model; uses the Decodo Tool to fetch real-time data from trusted finance sites; applies custom system rules for concise summaries and email-ready HTML output.
3. Simple Memory: maintains short-term context for clean message passing between nodes.
4. Decodo Tool: handles all data scraping and extraction via the AI's tool calls.
5. Gmail Node: emails the final daily digest to the user (default subject: "Daily AI News Update").

**Setup (step-by-step)**
1. Import the template into n8n.
2. Open each credential node and connect your accounts.
3. In the Gmail node, replace "sendTo" with your email.
4. Adjust the Schedule Trigger (e.g., every weekday 8:30 a.m.).
5. (Optional) Edit the system prompt in the AI Research Agent to focus on different sectors (crypto, energy, tech).
6. Click Execute Workflow once to test; you'll receive an AI-curated digest in your inbox.

**Customization tips**
- 🕒 Change frequency: adjust the Schedule Trigger to run multiple times daily or weekly.
- 📰 Add sources: extend the Decodo Tool input with new URLs (e.g., Seeking Alpha, MarketWatch).
- 📈 Switch topic: modify the prompt to track crypto, commodities, or macroeconomic data.
- 💬 Alternative delivery: send the digest via Slack, Telegram, or Notion instead of Gmail.

**Troubleshooting**
- 401 errors: verify your OpenAI/Decodo credentials.
- Empty output: ensure the Decodo Tool returns valid data; inspect the agent's log.
- Email not sent: confirm the Gmail OAuth2 scope and recipient email.
- Formatting issues: keep the output in HTML mode; avoid Markdown.
by SOLOVIEVA ANNA
**Overview**
This workflow turns photos sent to a LINE bot into tiny AI-generated diary entries and saves everything neatly in Google Drive. Each time a user sends an image, the workflow creates a timestamped photo file and a matching text file with a short diary sentence, stored inside a year/month folder structure (KidsDiary/YYYY/MM). It's a simple way to keep a lightweight visual diary for kids or daily life without manual typing.

**Who this is for**
- Parents who want to archive kids' photos with a short daily comment
- People who often send photos to LINE and want them auto-organized in Drive
- Anyone who prefers a low-friction, "take a photo and forget" style of diary

**How it works**
1. Trigger: a LINE webhook receives an image message from the user.
2. Extract metadata: the workflow extracts the messageId and replyToken.
3. Download image: it calls the LINE content API to fetch the image as binary data.
4. AI diary text: OpenAI Vision generates a one-sentence, diary-style caption (about 50 Japanese characters).
5. Folder structure: a KidsDiary/YYYY/MM folder is created (or reused) in Google Drive.
6. Save files: the photo is saved as YYYY-MM-DD_HHmmss.jpg and the diary text as YYYY-MM-DD_HHmmss_diary.txt in the same folder.
7. Confirm on LINE: the bot replies to the user that the photo and diary have been saved.

**How to set up**
1. Connect your LINE Messaging API credentials in the HTTP Request nodes.
2. Connect your Google Drive credential in the Google Drive nodes and choose a root folder.
3. Make sure the webhook URL is correctly registered in the LINE Developers console.

**Customization ideas**
- Change the AI prompt to adjust the tone (e.g., more playful, more sentimental).
- Localize the diary language or add an English translation.
- Add a second branch to post the saved diary entry to Slack, Notion, or email.
- Organize Google Drive folders by child's name instead of only by date.
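The folder path and file names in steps 5–6 are pure string formatting and can be sketched as a small helper (illustrative, not the template's exact code):

```javascript
// Hypothetical sketch of the naming step: derives the Drive folder path
// (KidsDiary/YYYY/MM) and the timestamped file names from the moment the
// photo arrives, using local time.
function buildPaths(now = new Date()) {
  const pad = (n) => String(n).padStart(2, "0");
  const y = now.getFullYear();
  const m = pad(now.getMonth() + 1);
  const d = pad(now.getDate());
  const stamp =
    `${y}-${m}-${d}_${pad(now.getHours())}${pad(now.getMinutes())}${pad(now.getSeconds())}`;
  return {
    folder: `KidsDiary/${y}/${m}`,
    photoName: `${stamp}.jpg`,
    diaryName: `${stamp}_diary.txt`,
  };
}

// Feb 5, 2025, 14:30:07 local time
const paths = buildPaths(new Date(2025, 1, 5, 14, 30, 7));
```

Switching the organization to a per-child folder (last customization idea above) would mean prefixing `folder` with the child's name instead of, or in addition to, the date parts.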
by Easy8.ai
**Description**
Use this workflow to automatically sync Zoom webinar registrants into Mailchimp, filter out internal contacts, and send double opt-in confirmation emails. Ideal for keeping your newsletter audiences clean, accurate, and enriched with new leads, without manual export/import steps.

**About the workflow**
This workflow connects Zoom Webinars with Mailchimp via API to automate the onboarding of webinar attendees into your marketing audience. It retrieves registrant data from Zoom (based on Webinar ID and Occurrence ID), extracts attendee emails, filters out internal domains, checks whether the contact already exists in Mailchimp, and then creates or updates each record. New contacts receive a double opt-in confirmation email, and all newly added leads are tagged for segmentation inside Mailchimp.

**Use case**
Perfect for marketing teams running webinars who need to transfer participants into Mailchimp quickly and reliably. This automation streamlines attendee follow-up and ensures compliance with double opt-in requirements.

**How it works**
1. **Manual Trigger – Execute workflow**: the workflow starts manually. You can optionally replace the manual trigger with a Schedule Trigger if you want to automate recurring webinars.
2. **Manual Input – Set Webinar ID and Occurrence ID**: a Set node requires you to enter webinar_id and occurence_id. These define which Zoom webinar instance will be synced.
3. **Zoom API – Get Webinar Attendees**: retrieves registrants for the selected webinar occurrence using the Zoom API.
4. **Code Node – Extract Registrant Emails**: processes the Zoom API response and extracts the email addresses of all registrants.
5. **Filter Node – Filter Out Internal Emails**: removes internal/company email addresses by checking that they do not contain your domain. (This is fully configurable.)
6. **Mailchimp – Update a Member**: attempts to update the contact in Mailchimp based on their email address. This determines whether the contact already exists.
7. **IF Node – If ID Doesn't Exist**: checks whether Mailchimp returned an id during the update attempt. If it did not, the contact is treated as new and continues through the creation + confirmation path.
8. **Code Node – MD5 Hash Email**: hashes the email using MD5. Mailchimp uses this hash as the unique identifier for list members.
9. **Mailchimp – Send Double Opt-In Email**: creates the contact with "pending" status and sends a double opt-in email.
10. **Mailchimp – Add Leads Tag**: tags the contact with "Leads" immediately as part of the creation process.

**How to use**
1. Import the workflow into your n8n instance.
2. Configure credentials: a Zoom OAuth2 credential and a Mailchimp HTTP Basic Auth credential.
3. Enter webinar details: set webinar_id and occurence_id in the "Type in IDs" node.
4. Adjust internal email filtering: update the domain in the "Filter Out Internal Emails" node (e.g., change @yourcompanydomain.com).
5. Configure the Mailchimp nodes: replace LIST_ID_HERE with your Mailchimp Audience/List ID and adjust tags if needed.
6. Test the workflow: run it with a real webinar and confirm the behavior: internal emails are excluded, existing contacts are updated, new contacts receive the double opt-in email, and tags are applied correctly.

**Example use cases**
- Automated lead generation from webinar attendance
- Keeping marketing lists clean and external-only
- Recurring webinars with scheduled syncing
- Easy double opt-in compliance with no manual steps

**Requirements**
- **Zoom account** with API access
- **Mailchimp account** with API access
- **n8n instance** with correctly configured credentials

**Optional enhancements**
- Replace the manual trigger with a webhook for recurring syncs
- Auto-detect the latest webinar ID using a Zoom API call
- Add additional filters (e.g., job title, country, language)
- Add Slack/email notifications summarizing new leads
- Add error-handling paths for retrying failed API calls
by Matt Chong
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Gmail Auto-Reply with AI**
Automatically draft smart email replies using ChatGPT. Reclaim the time you spend typing the same responses again and again.

**Who is this for?**
If you're overwhelmed with emails and constantly repeating yourself in replies, this workflow is for you. Whether you're a freelancer, business owner, or team lead, it saves you time by handling email triage and drafting replies for you.

**What does it solve?**
This workflow reads your unread Gmail messages and uses AI to:
- Decide whether the email needs a response
- Automatically draft a short, polite reply when appropriate
- Skip spam, newsletters, or irrelevant emails
- Save the AI-generated reply as a Gmail draft (you can edit it before sending)

It takes email fatigue off your plate and keeps your inbox moving.

**How it works**
1. Trigger on new email: watches your Gmail inbox for unread messages.
2. AI agent review: analyzes the content to decide if a reply is needed.
3. OpenAI ChatGPT: drafts a short, polite reply (under 120 words).
4. Create Gmail draft: saves the response as a draft for you to review.
5. Label it: applies a custom label like Action so you can easily find AI-handled emails.

**How to set up?**
1. Connect credentials: Gmail (OAuth2) and OpenAI (API key).
2. Create the Gmail label: in your Gmail, create a label named Action (case-sensitive).

**How to customize this workflow to your needs**
- **Change the AI prompt**: add company tone, extra context, or different reply rules.
- **Label more intelligently**: add conditions or labels for "Newsletter," "Meeting," etc.
- **Adjust frequency**: change how often the Gmail Trigger polls your inbox.
- **Add manual review**: route drafts to a team member before sending.