by PollupAI
## Who's it for

This template is for Customer Success and Sales teams who use HubSpot. It automates the critical handoff from sales to success, ensuring every new customer gets a fast, personalized welcome. It's perfect for anyone looking to standardize their onboarding process, save time on manual tasks, and improve the new customer experience using AI.

## What it does

This workflow triggers when a deal's "Is closed won" property is set to True in HubSpot. It assigns a Customer Success Manager (CSM) by querying an n8n Data Table to find the "least busy" CSM (based on a deal count) and fetches the deal's details to find all associated contacts. It then loops through the contacts to identify the "Champion" by checking their "Buying Role" (hs_buying_role). An AI agent (in the AI: Write Welcome Email node) generates a personalized welcome email, which is converted to HTML and sent via Gmail. Finally, the workflow updates the Champion's contact record in HubSpot and increments the CSM's deal count in the Data Table to keep the assignment logic in sync.

## How to set up

1. **Create and populate the Data Table:** This template requires an n8n Data Table to manage CSM assignments. Create a Data Table named csm_assignments, add two columns, csm_id (String) and deal_count (Number), and add one row for each CSM with their HubSpot Owner ID and a starting deal_count of 0.
2. **Link the Data Table nodes:** Open the Get CSM List and Increment CSM Deal Count nodes and select the csm_assignments table you just created from the Table dropdown.
3. **Configure variables:** In the Configure Template Variables node, set your sender info (company_name, sender_name, and sender_email).
4. **Customize the AI prompt:** In the AI: Write Welcome Email node, replace the placeholder [Link to Your Video] and [Link to Your Help Doc] links with your own URLs.
5. **Check the HubSpot property:** This workflow assumes you use the "Buying Role" (hs_buying_role) contact property to identify your "Champion".
If you use a different property, you must update the HubSpot: Get Contact Details and If Role is 'Champion' nodes.

## Requirements

* Access to n8n Data Tables.
* **HubSpot (Developer API):** A credential for the Trigger: Deal Is 'Closed Won' node.
* **HubSpot (OAuth2):** A credential for all other HubSpot nodes (Get Deal Details, Get Contact Details, Assign Contact Owner).
* **AI credentials** (e.g., OpenAI): Credentials for the AI Model node (the node connected to AI: Write Welcome Email).
* **Email credentials** (e.g., Gmail): Credentials for the Gmail: Send Welcome Email node.

## How to customize the workflow

You can easily customize this workflow to send different emails based on deal properties. Add an If node after the HubSpot: Get Deal Details node to check the deal's value, product line, or region, then route the flow to different AI: Write Welcome Email nodes with unique prompts. For example, you could check the contact's 'industry' or 'company size' to send links to different, more relevant 'Getting Started' videos and documentation.
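The "least busy" assignment described above can be sketched as a plain function. In the template this logic lives in n8n Data Table nodes; the column names (csm_id, deal_count) match the table described in the setup steps, but the function itself is an illustrative assumption, not the template's actual node configuration:

```javascript
// Hypothetical sketch of the "least busy" CSM pick.
// rows mirrors the csm_assignments Data Table described above.
function pickLeastBusyCsm(rows) {
  if (!rows.length) throw new Error("csm_assignments table is empty");
  // Sort ascending by deal_count and take the first row.
  return [...rows].sort((a, b) => a.deal_count - b.deal_count)[0];
}

const csms = [
  { csm_id: "101", deal_count: 4 },
  { csm_id: "102", deal_count: 1 },
  { csm_id: "103", deal_count: 3 },
];

const assigned = pickLeastBusyCsm(csms);
console.log(assigned.csm_id); // "102"
```

After assignment, the workflow increments the chosen CSM's deal_count so the next run sees the updated totals.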
by Cheng Siong Chin
## How It Works

This workflow automates insurance claims processing by deploying specialized AI agents to analyze actuarial data, draft claim memos, and perform risk assessments. Designed for insurance adjusters, underwriters, and claims managers handling high claim volumes, it solves the bottleneck of manual claim review that delays settlements and increases operational costs.

The system ingests new claims data via scheduled triggers, then routes information to an actuarial analysis agent that calculates loss ratios and risk scores. A memo writer agent generates detailed claim summaries with recommendations, while a risk assessment agent evaluates fraud indicators and coverage implications. An orchestrator agent coordinates these specialists, ensuring consistent analysis standards. Final reports are automatically distributed via email to product teams and Slack notifications to risk management, creating transparent workflows while reducing claim processing time from days to hours with standardized, comprehensive evaluations.

## Setup Steps

1. Configure claims database API credentials in the "Fetch New Claims Data" node.
2. Input your NVIDIA API key for all OpenAI Model nodes.
3. Add your OpenAI API key in the Orchestrator Agent configuration.
4. Set up Calculator Tool parameters for premium adjustment calculations.
5. Configure Gmail credentials and recipient addresses for the product team.
6. Connect your Slack workspace and specify the risk team channel for alerts.

## Prerequisites

NVIDIA API access, OpenAI API key, claims management system API.

## Use Cases

Auto insurance claim triage, property damage assessment automation.

## Customization

Adjust risk scoring thresholds, add industry-specific analysis criteria.

## Benefits

Reduces claim processing time by 85%, ensures consistent evaluation standards.
by Atta
Stop watching long videos, start listening to concise summaries. This workflow transforms any YouTube video URL sent via Telegram into a high-quality, spoken audio summary (MP3) and a structured text overview. It acts as your personal AI research assistant, turning lengthy content into bite-sized audio files that you can consume on the go. It leverages Decodo for robust transcript extraction, OpenAI for intelligent summarization, and OpenAI text-to-speech for realistic audio generation.

## ✨ Features

* **Telegram-First Interface:** Send links and receive audio directly in your chat app.
* **Smart Validation:** Automatically checks if the link is a valid YouTube URL before processing, to save API credits.
* **Multi-Language Support:** Easily configure the output language (English, Spanish, German, etc.) via a simple Config node. The AI will translate and speak in this language.
* **Robust Error Handling:** Gracefully handles videos with no captions/transcripts by notifying the user instead of breaking the workflow.
* **Structured Data Extraction:** Uses AI to extract the Genre, Title, and Summary alongside the audio file.

## ⚙️ How it Works

1. **Trigger:** You send a YouTube URL to your Telegram Bot.
2. **Validate:** The workflow checks the URL pattern using a regex.
3. **Extract:** Decodo scrapes the video page to retrieve the full transcript JSON.
4. **Process:** A Code node flattens the complex JSON into a readable text format.
5. **Summarize:** OpenAI (gpt-4o-mini) analyzes the text and writes a script optimized for listening.
6. **Speak:** OpenAI converts the script into a high-definition MP3 file.
7. **Deliver:** The bot replies with the audio file and a formatted text summary including the genre tags and original link.

## 📥 Decodo Node Installation

The Decodo node is used in this workflow to fetch the YouTube transcript.

1. **Find the node:** Click the + button in your n8n canvas.
2. **Search:** Search for the Decodo node and select it.
3. **Credentials:** When configuring the first Decodo node, use your API key (obtained with the 80% discount coupon).
4. **Setup:** Open the Decodo (Fetch YouTube Transcript) node to ensure it is correctly targeting the YouTube service.

## 🎁 Exclusive Deal for n8n Users

To run this workflow, you require a robust scraping provider. We have secured a massive discount for Decodo users: get 80% OFF the 23k Advanced Scraping API plan.

* Coupon Code: ATTAN8N
* Sign Up Here: Claim 80% Discount on Decodo

## ➕ How to Adapt the Template

This workflow is highly flexible and can be modified for various content tasks:

* **Change AI Model:** Easily swap the **OpenAI Chat Model** node with an **OpenAI** or **Anthropic (Claude)** node without altering the core logic.
* **Create Long-Form Drafts:** Modify the AI System Prompt to generate a full 1,000-word blog post draft or a set of social media updates instead of a short audio script.
* **Change Destination:** Replace the **Telegram** nodes with **Slack**, **Microsoft Teams**, **Email (Gmail/SMTP)**, or **Discord** to deliver the audio and summary to your preferred channel.
* **Create an Archive:** Connect the successful output to a **Google Sheets** or **Airtable** node to keep a searchable archive of every video summary created.
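The "Smart Validation" step can be sketched as plain JavaScript. The exact pattern used by the template's validation node is not shown, so this regex is an assumption covering the two common YouTube URL shapes:

```javascript
// Assumed regex for YouTube URL validation (the template's actual
// pattern may differ): matches youtube.com/watch?v=ID and youtu.be/ID,
// where ID is the 11-character video identifier.
const YOUTUBE_URL = /^(https?:\/\/)?(www\.)?(youtube\.com\/watch\?v=|youtu\.be\/)[\w-]{11}/;

function isYouTubeUrl(text) {
  return YOUTUBE_URL.test(text.trim());
}

console.log(isYouTubeUrl("https://www.youtube.com/watch?v=dQw4w9WgXcQ")); // true
console.log(isYouTubeUrl("https://youtu.be/dQw4w9WgXcQ"));               // true
console.log(isYouTubeUrl("https://example.com/not-a-video"));            // false
```

Rejecting non-matching messages before calling Decodo is what saves the API credits mentioned in the features list.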
by Yasser Sami
## Customer Support AI Agent for Gmail

This n8n template demonstrates how to build an AI-powered customer support workflow that automatically handles incoming Gmail messages, classifies them, finds answers from your knowledge base, and sends a personalized reply.

## Who's it for

* SaaS founders or teams who want to automate customer support.
* Freelancers and solopreneurs who receive repetitive customer queries.
* Companies that want to reduce manual email triage and improve response times.

## How it works / What it does

1. **Trigger:** A new email arrives in Gmail.
2. **Classification:** A text classifier decides whether the email is customer-support-related. If not, it's ignored; if yes, it proceeds.
3. **AI Agent:** Queries a knowledge base (vector database with OpenAI embeddings), retrieves the most relevant answer, and drafts a reply using AI (OpenAI or Google Gemini model).
4. **Post-processing:** Labels the email in Gmail for organization and sends the reply automatically.

This ensures that your customers get timely, relevant responses without manual intervention.

## How to set up

1. Import this template into your n8n account.
2. Connect your Gmail account in the Gmail Trigger, Label, and Reply nodes.
3. Connect your AI model provider (OpenAI or Google Gemini).
4. Configure the knowledge base embeddings (upload your docs/FAQ into the vector database).
5. Activate the workflow, and your AI customer support agent is live!

## Requirements

* n8n account.
* Gmail account (with API access enabled).
* OpenAI or Google Gemini account for LLM and embeddings.
* Knowledge base data (FAQ, documentation, or past tickets).
* Google Drive account to auto-update your vector database (with API access enabled).

## How to customize the workflow

* **Knowledge Base:** Replace or expand with your own company docs, FAQs, or past conversations.
* **Classification Rules:** Train or adjust the classifier to handle more categories (e.g., Sales, Partnership, Technical Support).
* **Reply Style:** Customize AI prompts for tone: professional, casual, or friendly.
* **Labels:** Change Gmail labels to match your workflow (e.g., "Support," "Sales," "Priority").
* **Multi-language:** Add translation steps if your customers speak different languages.

This template saves you hours of manual email triage and ensures your customers always get quick, accurate responses.
by Cheng Siong Chin
## How It Works

This workflow automates enterprise budget monitoring and cost optimization using Anthropic Claude as the core AI engine across multiple specialist agents. It targets finance teams, operations managers, and CFOs managing complex multi-department budgets where manual tracking leads to delayed decisions and cost overruns.

The workflow triggers on schedule, generates metrics data, and routes it through a Cost Intelligence Agent that classifies budget status (Critical, Warning, Review, Feedback). Each path activates specialist agents (Budget Alert, Routing Recommendation, and Cost Projection) coordinated by an Optimization Coordinator. Results are routed by action type: urgent alerts fire via Slack, executive summaries deliver via email, and all optimization actions are stored. This gives finance teams real-time cost intelligence with automated escalation and audit-ready records.

## Setup Steps

1. Import the workflow JSON into your n8n instance.
2. Add Anthropic API credentials.
3. Set the Schedule Trigger frequency.
4. Update the Workflow Configuration node with budget thresholds per department or cost centre.
5. Add Slack credentials and configure the target channel in the Send Slack Alert node.
6. Set Gmail/SMTP credentials for the Send Executive Report Email node.

## Prerequisites

n8n (cloud or self-hosted), Anthropic API key (Claude), Slack workspace with bot token.

## Use Cases

Finance teams automating multi-department budget variance detection and escalation.

## Customization

Replace Anthropic Claude with OpenAI GPT-4 or NVIDIA NIM in any agent node.

## Benefits

Eliminates manual budget reviews through automated AI-driven cost classification.
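The four-way status classification (Critical, Warning, Review, Feedback) can be illustrated with a simple threshold function. The template performs this classification with an LLM agent against thresholds you set in the Workflow Configuration node; the ratio cutoffs and field names below are purely illustrative assumptions:

```javascript
// Hypothetical sketch of the Cost Intelligence Agent's routing.
// The 1.0 / 0.9 / 0.75 cutoffs are assumed examples, not the template's values.
function classifyBudget(spent, budget) {
  const ratio = spent / budget;
  if (ratio >= 1.0) return "Critical"; // over budget: fire Slack alert
  if (ratio >= 0.9) return "Warning";  // near limit: escalate
  if (ratio >= 0.75) return "Review";  // trending high: executive summary
  return "Feedback";                   // healthy: log only
}

console.log(classifyBudget(120_000, 100_000)); // "Critical"
console.log(classifyBudget(60_000, 100_000));  // "Feedback"
```

In the workflow, each returned status selects which specialist agent path (Budget Alert, Routing Recommendation, or Cost Projection) runs next.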
by Cheng Siong Chin
## How It Works

This workflow automates industrial asset health monitoring and predictive maintenance using Anthropic Claude across coordinated specialist agents. It targets facility managers, maintenance engineers, and operations teams in manufacturing, energy, and infrastructure sectors where reactive maintenance leads to costly unplanned downtime and asset failures.

On schedule, the system ingests asset health data and routes it through a Performance Evaluation Agent that coordinates three specialist agents: Maintenance Scheduling, Parts Readiness, and Lifecycle Reporting. An MCP External Data Tool enriches the analysis with real-time contextual data. Results are risk-routed: Critical assets trigger immediate Slack alerts, High-risk assets escalate via email reports, and Routine cases are logged for scheduled maintenance. All paths merge into a unified maintenance log, giving operations teams proactive, audit-ready asset intelligence before failures occur.

## Setup Steps

1. Import the workflow JSON into your n8n instance.
2. Add Anthropic API credentials.
3. Set the Schedule Trigger frequency aligned to your asset monitoring cycle.
4. Update the Workflow Configuration node with asset thresholds.
5. Configure the MCP External Data Tool with your external data source endpoint and authentication.
6. Add Slack credentials and set the target channel in the Notify Critical Alert node.
7. Set Gmail/SMTP credentials for the Email Escalation Report node.

## Prerequisites

n8n (cloud or self-hosted), Anthropic API key (Claude), Slack workspace with bot token.

## Use Cases

Facility managers automating condition-based maintenance scheduling across multiple assets.

## Customization

Replace Anthropic Claude with OpenAI GPT-4 or NVIDIA NIM in any agent node.

## Benefits

Shifts maintenance from reactive to predictive, reducing unplanned downtime significantly.
by Cheng Siong Chin
## How It Works

This workflow automates end-to-end medical claims processing using a multi-agent AI orchestration system built on OpenAI GPT-4. It targets healthcare revenue cycle teams, billing departments, and hospital administrators burdened by manual claims adjudication, coding errors, and payer denials.

The workflow triggers on a schedule, loads billing data, and routes it through an Orchestrator Agent that coordinates four specialist sub-agents: Coding Validation, Claims Submission, Denial Detection, and Payer Follow-up. Each agent independently validates, submits, or flags claims. Results are parsed, merged, and routed by risk level. Final metrics and a formatted report close the cycle, giving teams real-time visibility into claim status, denial patterns, and revenue recovery.

## Setup Steps

1. Import the workflow JSON into your n8n instance.
2. Add OpenAI API credentials.
3. Configure the Schedule Trigger with your desired processing frequency.
4. Update the Workflow Configuration node with your billing system endpoint or sample data path.
5. Set Gmail/SMTP credentials for the Escalate to Revenue Specialist email node.
6. Connect Google Sheets or database nodes with appropriate credentials and sheet IDs.
7. Test with simulated billing data before enabling live data sources.

## Prerequisites

n8n, OpenAI API key (GPT-4), and a Gmail or SMTP account.

## Use Cases

Hospital billing departments automating claims submission and denial follow-up.

## Customization

Swap OpenAI for NVIDIA NIM or Anthropic models in any agent node, and add Slack alerts alongside email escalation.

## Benefits

Reduces manual claims review by 80%+ through parallel AI agent processing.
by Jitesh Dugar
## ⚖️ HR Sovereign: AI-Powered Onboarding Hub

A high-fidelity employee onboarding engine: Intake → Role-Based Enrichment → AI Personalization → IT Provisioning.

## ⚙️ Core Sovereign Logic

* **Enrichment:** Auto-classifies Tech, Sales, and Leadership roles to drive specific logic tracks.
* **Intelligence:** Uses an **AI Agent (GPT-4)** to generate personalized welcome messaging based on job DNA.
* **Atomization:** A **Merge PDF** node assembles role-specific policies and benefits into a single high-res package.
* **Provisioning:** Dynamically generates **Jira** hardware/access tickets and **Notion** tracking dashboards.
* **Delivery:** Sends branded HTML emails via **Gmail** and announces hires on **Slack**.

## 📋 Setup & Prerequisites

1. **Intake:** Connect your HRIS (BambooHR/Workday) to the Webhook URL.
2. **Assets:** Organize Drive folders into "Technical", "Leadership", and "Standard" templates.
3. **Tracking:** Connect your Notion Onboarding Database and Jira IT Project.
4. **Metrics:** Time_to_Provision, Engagement_Score, Document_Integrity_Hash.
by Yaron Been
Analyze Reddit sentiment around competitor brands using Bright Data URL-based scraping and GPT-5.4. This workflow reads competitor post URLs from Google Sheets, scrapes Reddit posts via URL using Bright Data's Reddit Posts API, and uses GPT-5.4 to map sentiment distribution. The AI calculates positive, neutral, and negative sentiment percentages, extracts top complaints and praises, identifies vulnerability areas, and scores the competitive opportunity (0-100). High-opportunity brands trigger alerts to the strategy team.

## How it works

1. A weekly schedule trigger runs on Monday at 8 AM.
2. Reads the 'competitor_brands' sheet (columns: brand_name, url, industry).
3. Sends each URL to Bright Data for Reddit post discovery.
4. Validates the Bright Data API response.
5. GPT-5.4 analyzes sentiment distribution and identifies competitive opportunities.
6. Parses the AI output and merges it with the original brand data.
7. Filters by AI confidence (>= 0.7).
8. Brands with competitive_opportunity_score >= 70 trigger email alerts and go to 'high_opportunity_brands'. Lower-scoring brands go to 'competitor_sentiment'. Low-confidence results go to 'low_confidence_sentiment'.

## Setup

1. Create a Google Sheet with a 'competitor_brands' tab containing columns: brand_name, url, industry.
2. Create output tabs: high_opportunity_brands, competitor_sentiment, low_confidence_sentiment.
3. Configure Bright Data API credentials as HTTP Header Auth (Bearer token).
4. Connect OpenAI, Google Sheets, and Gmail OAuth2 credentials.

## Requirements

* Bright Data API account (~$0.003-0.005 per URL scrape).
* OpenAI API account (GPT-5.4 costs ~$0.003-0.008 per call).
* Google Sheets OAuth2 credentials.
* Gmail OAuth2 credentials.

## Notes

* Track competitive_opportunity_score over time to spot widening vulnerabilities.
* Use the vulnerability_areas field to inform your product positioning and messaging.
* Combine with your own product monitoring (Template 21) for a complete competitive picture.
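The confidence filter and opportunity routing described above can be sketched as a small function. The 0.7 and 70 thresholds come from the template description; the record shape is an assumption for illustration:

```javascript
// Routing logic sketch: thresholds from the template description,
// record field names assumed to match the AI output it parses.
function routeResult(result) {
  if (result.confidence < 0.7) return "low_confidence_sentiment";
  if (result.competitive_opportunity_score >= 70) return "high_opportunity_brands"; // also triggers an email alert
  return "competitor_sentiment";
}

console.log(routeResult({ confidence: 0.9, competitive_opportunity_score: 82 })); // "high_opportunity_brands"
console.log(routeResult({ confidence: 0.8, competitive_opportunity_score: 40 })); // "competitor_sentiment"
console.log(routeResult({ confidence: 0.5, competitive_opportunity_score: 95 })); // "low_confidence_sentiment"
```

Note that low confidence wins over a high opportunity score: a shaky analysis is quarantined rather than alerted on.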
by Leo Lara
## AI Meeting Task Manager - Google Meet to GoHighLevel CRM

### 📋 Template Description

Transform your meeting follow-ups from chaos to clarity! This workflow automates the entire post-meeting workflow by scanning Google Meet recordings folders, extracting action items from AI-generated meeting notes, and creating tasks directly in your GoHighLevel CRM.

### 🎯 Who is this for?

* Sales teams using GoHighLevel CRM
* Agency owners managing multiple client meetings
* Anyone who uses Google Meet with Gemini note-taking
* Professionals drowning in meeting follow-ups

### ✨ What it does

**Daily File Organization**
* Scans your Google Meet recordings folder
* Automatically sorts recordings, notes, and chat logs into organized subfolders
* Keeps your Drive clean and searchable

**AI-Powered Task Extraction**
* Reads Google Docs meeting notes (generated by Gemini)
* Identifies action items assigned to you
* Intelligently determines due dates from context (defaults to 3 business days)

**CRM Integration**
* Searches for meeting participants in GoHighLevel
* Creates properly formatted tasks with full context
* Links tasks to the correct contact record

**Beautiful Email Summaries**
* Sends a professionally designed HTML email
* Shows tasks created per contact
* Includes due dates and status updates

### 🔧 Technologies Used

* Google Drive API (file management)
* Google Docs API (content extraction)
* GoHighLevel API (contact search + task creation)
* OpenAI GPT-4 (task extraction intelligence)
* Gmail API (email delivery)

### ⚙️ Setup Requirements

* Google Cloud OAuth credentials (Drive, Docs, Gmail)
* GoHighLevel OAuth credentials
* OpenAI API key
* Create 4 Google Drive folders (source + 3 destination folders)

### 📖 Setup Instructions

1. **Create Google Drive folders:**
   * Source folder: where Google Meet saves recordings
   * Recordings folder: for video files
   * Notes folder: for Gemini notes
   * Chat folder: for meeting chat logs
2. **Configure credentials:**
   * Connect Google Drive OAuth
   * Connect Google Docs OAuth
   * Connect GoHighLevel OAuth
   * Connect Gmail OAuth
   * Add your OpenAI API key
3. **Update folder URLs:**
   * Replace the placeholder URLs in the Google Drive nodes with your folder URLs
4. **Customize:**
   * Set your email address in the Gmail tool
   * Set your GoHighLevel user ID for task assignment
   * Adjust the schedule trigger timing as needed

### 💡 Pro Tips

* Works best with Google Meet's Gemini note-taking feature
* Customize the AI prompts to match your task naming conventions
* The HTML email template is fully customizable
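The "defaults to 3 business days" rule can be sketched as a plain function. In the template the due date is inferred inside an AI prompt; this deterministic version is an illustrative assumption, not the template's actual implementation:

```javascript
// Hypothetical business-day calculator for the default due date.
// Skips Saturdays and Sundays when counting forward.
function addBusinessDays(start, days) {
  const date = new Date(start);
  let added = 0;
  while (added < days) {
    date.setDate(date.getDate() + 1);
    const dow = date.getDay(); // 0 = Sunday, 6 = Saturday
    if (dow !== 0 && dow !== 6) added++;
  }
  return date;
}

// A task extracted on Friday 2024-06-07 falls due the following Wednesday.
const due = addBusinessDays("2024-06-07T00:00:00Z", 3);
console.log(due.toISOString().slice(0, 10)); // "2024-06-12"
```

A deterministic fallback like this can also serve as a guardrail when the AI's extracted due date is missing or unparseable.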
by Felix Kemeth
## Overview

Staying up to date with fast-moving topics like AI, machine learning, research, or your specific industry can be tricky. To solve this for myself (for me, it is mostly AI and automation topics), I built and use this n8n workflow: it pulls fresh articles using NewsAPI based on my topics of interest, lets an AI agent pick the 5 most relevant ones, enriches them with a Tavily search engine, and sends a clean, readable newsletter straight to Telegram, in the language you specify.

In this post, I'll:

* Explain what the workflow does and why it's useful
* Show you how to import and configure it step by step
* Highlight the main advantages and common customisations
* Outline concrete next steps and improvements

After following this guide, you'll end up with a fully automated weekly newsletter that delivers relevant news on the topics you care about, without any manual work. This is ideal if you already run n8n and want a mostly no-code way to get a curated weekly digest in Telegram.

## What this workflow does

At a high level, this workflow:

* Runs on a schedule (weekly at 9:00 on Sundays by default)
* Automatically finds recent, relevant news via NewsAPI for your topics of interest
* Lets AI select the top 5 most relevant news items
* Uses a Tavily-powered AI agent to fact-check and enrich each article
* Aggregates the final results into a compact newsletter in the language you specify
* Sends them as a Markdown-formatted Telegram message

The result: every week you get an AI-picked, enriched mini-newsletter with the latest news based on your own interests, delivered in Telegram.

## Requirements

To run this workflow, you need:

* **NewsAPI key:** Create an account here and generate an API key; it is free.
* **Tavily API key:** You can sign up here and create an API key. They also have a generous free tier.
* **OpenAI API key:** Get one from OpenAI; we need this for the LLM agent calls.
* **Telegram bot + chat ID:** A Telegram bot (via BotFather) and the chat/channel ID where you want the newsletter.
It is also free. See for example here how to set that up.

## How it works

The exact logic of the workflow is as follows:

1. **Schedule Trigger:** Runs the workflow on a fixed interval (in this version: weekly, at 9:00 on Sundays).
2. **Set topics and language:** A Set node that defines topics (my default is AI,n8n; use a comma-separated list) and language (here I have English, but choose what you prefer). Change these to match your interests (e.g. health,fitness, macroeconomics,markets, climate,policy, or anything you care about).
3. **Call NewsAPI:** An HTTP Request node calling the NewsAPI API. It uses as arguments:
   * from: last 7 days¹
   * q: the query, built from your topics (topics like AI,n8n become AI OR n8n as expected by the API)²
   * sortBy: relevancy, so the most relevant articles appear at the top of the results returned
   Auth is handled via an httpQueryAuth credential, where your NewsAPI key is passed as a query parameter.
4. **AI Topic Selector:** An OpenAI "Message a model" node using gpt-5.1 via your OpenAI API key with the following prompt:

> You are an assistant that selects the most relevant news articles for a user.
> Instructions:
> Choose the 5 most relevant non-overlapping articles based on the user topics.
> For each article, provide: title, short summary (1–2 sentences), source name, url.
> Output the results in the language specified by the user.
> Output as a "articles" JSON array of objects, each with "title", "summary", "source" and "url".
> User topics of interest: {{ $('Set topics and language').item.json.topics }}
> Output language: {{ $('Set topics and language').item.json.language }}
> NewsAPI articles: {{ $json.articles.map( article => `Title: ${article.title} Description: ${article.description} Content: ${article.content} Source: ${article.source.name} URL: ${article.url}` ).join('\n---\n') }}

The prompt instructs the model to read your topics and language, look at all articles from the NewsAPI call (it returns a maximum of 100), select the 5 most relevant, non-overlapping articles, and output a JSON array with title, summary, source and url.

5. **Split Out:** Splits out the AI message so each article becomes its own item. Under the hood, we parse the JSON array returned by the AI into individual items, so that each article becomes its own item in n8n. This lets the downstream AI Agent node enrich each article separately.

6. **Newsletter AI Agent:** An AI Agent node with gpt-5.1 as the model, again accessed via your OpenAI API key. The agent takes the initial title, summary, source and url, uses the Tavily search tool to find 2–3 reliable, recent sources, and writes a concise 1–3 sentence article in the language you specified. The prompt for the model is shown below.

> You are a research writer that updates short news summaries into concise, factual articles.
> Input:
> Title: {{ $json["title"] }}
> Summary: {{ $json["summary"] }}
> Source: {{ $json["source"] }}
> Original URL: {{ $json["url"] }}
> Language: {{ $('Set topics and language').item.json.language }}
> Instructions:
> Use Tavily Search to gather 2–3 reliable, recent, and relevant sources on this topic.
> Update the title if a more accurate or engaging one exists.
> Write 1–3 sentences summarizing the topic, combining the original summary and information from the new sources.
> Return the original source name and url as well.
> Output (JSON):
> { "title": "final article title", "content": "concise 1–3 sentence article content", "source": "the name of the original source", "url": "the url of the original source" }
> Rules:
> Ensure the topic is relevant, informative, and timely.
> Translate the article if necessary to comply with the desired language {{ $('Set topics and language').item.json.language }}.

In particular, the prompt instructs the model to:

* Use Tavily Search to gather 2–3 reliable, recent, and relevant sources on the topic
* Update the title if a more accurate or engaging one exists
* Write 1–3 sentences summarizing the topic, combining the original summary and information from the new sources
* Reply in a pre-defined JSON format including the original source name and url

The Output Parser enforces a structured JSON output with title, content, source and url as fields. Because the model is allowed to adjust titles, you may occasionally see slightly different titles than in the original feed; if you prefer minimal changes, you can tighten the prompt to only allow small tweaks.

7. **Aggregate:** An Aggregate node collecting the output field from the agent. It combines the individual article objects back into one array to be used for messaging.

8. **Send a text message:** A Telegram "Send a text message" node that uses your Telegram bot credentials and chatId. It renders each article as title and content, plus "Source: source".

> To adjust this workflow for your needs, open the Set topics and language node to tweak topics (comma-separated, like AI,startups,LLMs or web dev,TypeScript,n8n) and switch the language to any target language, then inspect the Schedule Trigger to adjust interval and time, e.g. weekly at 07:30. These two tweaks control the content topics of your newsletter and when you will receive it.

## Why this workflow is powerful

* **End-to-end automation:** From news discovery to curated delivery, everything is automated.
* **AI-driven topic relevance:** Instead of naïvely listing every headline, the AI filters for relevance to your topics and avoids overlapping or duplicate stories.
* **Grounded in facts:** By using NewsAPI and Tavily, the newsletter stays fact-based, i.e. you get short, factual summaries grounded in multiple sources.
* **Flexibility:** A single parameter (language) lets you specify the output language, while the Schedule Trigger lets you set the frequency.
* **Low friction and mobile-first:** Using Telegram as a consumption surface provides quick, low-friction reading, with push notifications as reminders.

## Next steps

Here are concrete directions to take this workflow further:

* **RAG workflow for better topic selection:** Use a Retrieval-Augmented Generation pattern to let the model better choose topics that align with your evolving preferences. Right now, all news articles go into the prompt, which may bias the model to pick articles that appear first.
* **Prompt iteration and evaluation framework:** Systematically experiment with different selection criteria (e.g. "more technical", "more beginner-friendly"), tone and length of the newsletter.
* **Logging using n8n data tables:** Persist previous newsletters to avoid repetition and for better debugging. Using the source links provided in the newsletter, track which articles were clicked to enable 1:1 personalization.
* **Email with HTML template:** For more flexibility, send the newsletter via email.
* **Trigger based on news relevance:** Instead of (or in addition to) a fixed schedule, compute a "relevance score" or "novelty score" across articles and trigger only when the score crosses a threshold.
* **Incorporating other news APIs or RSS feeds:** Add more sources such as other news APIs and RSS feeds from blogs, newsletters, or communities.
* **Adjust for arXiv paper search and research news:** Swap NewsAPI for arXiv search or other academic sources to obtain a personal research digest newsletter.
* **Add 1:1 personalization by tracking URL clicks:** Use n8n data tables to track which URLs have been clicked. Use this information as input to future AI runs to refine the news suggestions.
* **Audio and video news:** Use audio or video models for richer news delivery.

## Wrap-up

This workflow shows how I use n8n, NewsAPI, Tavily, OpenAI, and Telegram to create a personal weekly newsletter. It's mostly no-code, easy to customize, and something I rely on myself to stay informed without spending time browsing news manually. Contact me here, visit my website, or connect with me on LinkedIn.

## Footnotes

1. We do that here with the JS expression ={{ DateTime.fromISO($json.timestamp).minus({ days: 7 }) }}
2. We do that here with the JS expression {{ $json.topics.replaceAll("," , " OR ") }}
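The two footnote expressions can be tested as plain JavaScript. n8n exposes Luxon's DateTime inside expressions; for footnote 1 a plain-Date equivalent is used below so the snippet runs without dependencies:

```javascript
// Footnote 2: build the NewsAPI query from a comma-separated topics string.
const topics = "AI,n8n";
const query = topics.replaceAll(",", " OR ");
console.log(query); // "AI OR n8n"

// Footnote 1: compute "7 days before the trigger timestamp".
// (In the workflow this is Luxon's DateTime.fromISO(...).minus({ days: 7 });
// here the same arithmetic is done with a plain Date.)
const timestamp = "2024-06-09T09:00:00.000Z";
const from = new Date(Date.parse(timestamp) - 7 * 24 * 60 * 60 * 1000);
console.log(from.toISOString()); // "2024-06-02T09:00:00.000Z"
```

These two values plug into the `q` and `from` parameters of the NewsAPI request described in step 3.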
by Lucas Peyrin
## How it works

This workflow creates a sophisticated, self-improving customer support system that automatically handles incoming emails. It's designed to answer common questions using an AI-powered knowledge base and, crucially, to learn from human experts when new or complex questions arise, continuously expanding its capabilities. Think of it like having an AI assistant with a smart memory and a human mentor.

Here's the step-by-step process:

1. **New Email Received:** The workflow is triggered whenever a new email arrives in your designated support inbox (via Gmail).
2. **Classify Request:** An AI model (Google Gemini 2.5 Flash Lite) first classifies the incoming email to ensure it's a genuine support request, filtering out irrelevant messages.
3. **Retrieve Knowledge Base:** The workflow fetches all existing Question and Answer pairs from your dedicated Google Sheet knowledge base.
4. **AI Answer Attempt:** A powerful AI model (Google Gemini 2.5 Pro) analyzes the customer's email against the entire knowledge base. It attempts to find a highly relevant answer and drafts a complete HTML email response if successful.
5. **Decision Point:** An IF node checks if the AI found a confident answer.
   * **If an answer is found:** The AI-generated HTML response is immediately sent back to the customer via Gmail.
   * **If no answer is found (human-in-the-loop):**
     * **Escalate to Human:** The customer's summarized question and original email are forwarded to a human expert (you or your team) via Gmail, requesting their assistance.
     * **Human Reply & AI Learning:** The workflow waits for the human expert's reply. Once received, another AI model (Google Gemini 2.5 Flash) processes both the original customer question and the expert's reply to distill them into a new, generic, and reusable Question/Answer pair.
     * **Update Knowledge Base:** This newly created Q&A pair is automatically added as a new row to your Google Sheet knowledge base, so the system can answer similar questions automatically in the future.
## Set up steps

Setup time: ~10-15 minutes. This workflow requires connecting your Gmail and Google Sheets accounts, and obtaining a Google AI API key. Follow these steps carefully:

1. **Connect your Gmail account:** Select the On New Email Received node. Click the Credential dropdown and select + Create New Credential to connect your Gmail account, then grant the necessary permissions. Repeat this for the Send AI Answer and Ask Human for Help nodes, selecting the credential you just created.
2. **Connect your Google Sheets account:** Select the Get Knowledge Base node. Click the Credential dropdown and select + Create New Credential to connect your Google account, then grant the necessary permissions. Repeat this for the Add to Knowledge Base node, selecting the credential you just created.
3. **Set up your Google Sheet knowledge base:** Create a new Google Sheet in your Google Drive and rename the first sheet (tab) to QA Database. In the first row of QA Database, add two column headers: Question (in cell A1) and Answer (in cell B1). Go back to the Get Knowledge Base node in n8n and, in the Document ID field, select your newly created Google Sheet. Do the same for the Add to Knowledge Base node.
4. **Get your Google AI API key (for the Gemini models):** Visit Google AI Studio at aistudio.google.com/app/apikey. Click "Create API key in new project" and copy the key. In the workflow, go to the Google Gemini 2.5 Pro node, click the Credential dropdown, and select + Create New Credential. Paste your key into the API Key field and save. Repeat this for the Google Gemini 2.5 Flash Lite and Google Gemini 2.5 Flash nodes, selecting the credential you just created.
5. **Configure the human expert email:** Select the Ask Human for Help node. In the Send To field, replace the placeholder email address with the actual email address of your human expert (e.g., your own email or a team support email).
6. **Activate the workflow:** Once all credentials and configurations are set, activate the workflow using the toggle switch at the top right of your n8n canvas.
7. **Start learning!** Send a test email to the Gmail account connected to the On New Email Received node. Observe how the AI responds, or how it escalates to your expert email and then learns from the reply. Check your Google Sheet to see new Q&A pairs being added!