by Alex Berman
Who is this for
This workflow is for skip tracers, real estate investors, debt collectors, and data brokers who need to find current contact details (phone numbers, email addresses, and physical addresses) for a list of people, using only names, phone numbers, or email addresses as inputs.

How it works
- A configuration node lets you define up to five people to search (by name, phone, or email).
- The workflow submits a skip trace job to the ScraperCity People Finder API and captures the returned run ID.
- A polling loop checks job status every 60 seconds until the scrape is marked SUCCEEDED (scrapes typically take 10–60 minutes).
- Once complete, the results CSV is downloaded, parsed, and deduplicated.
- Clean contact records are written row-by-row into a Google Sheet.

How to set up
1. Create a ScraperCity account at app.scrapercity.com and copy your API key.
2. In n8n, create a Header Auth credential named "ScraperCity API Key" with the header name Authorization and the value Bearer YOUR_KEY.
3. Connect your Google Sheets OAuth2 credential.
4. Open the Configure Search Inputs node and replace the sample names/phones/emails.
5. Set your Google Sheet ID and sheet name in the Write Results to Google Sheets node.
6. Click Execute workflow.

Requirements
- ScraperCity account (app.scrapercity.com)
- Google Sheets OAuth2 credential in n8n

How to customize the workflow
- Swap the manual trigger for a Schedule trigger to run nightly.
- Feed inputs from a Google Sheet instead of the Set node to process large lists.
- Add a Filter node after parsing to keep only records that have a verified phone number.
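The parse-and-dedupe step can be sketched as a plain function, as it might appear in an n8n Code node. The field names (`phone`, `email`, `name`) are assumptions for illustration; map them to whatever columns the ScraperCity results CSV actually contains.

```javascript
// Deduplicate parsed contact records before writing them to Google Sheets.
// A record is a duplicate when its normalized phone or email has already
// been seen. Field names are illustrative, not ScraperCity's real schema.
function dedupeContacts(records) {
  const seen = new Set();
  const out = [];
  for (const r of records) {
    const phone = (r.phone || '').replace(/\D/g, '');      // digits only
    const email = (r.email || '').trim().toLowerCase();
    const key = phone || email || (r.name || '').toLowerCase();
    if (!key || seen.has(key)) continue;
    seen.add(key);
    out.push(r);
  }
  return out;
}
```

Keying on a normalized phone first means formatting differences ("(555) 123-4567" vs "555-123-4567") don't produce duplicate sheet rows.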
by David Olusola
🎬 YouTube New Video → Auto-Post Link to Slack

This workflow automatically checks your YouTube channel's RSS feed every 30 minutes and posts a message to Slack when a new video is published. The message includes the title, a description snippet, the publish date, and a direct "Watch Now" button.

⚙️ How It Works
1. Check Every 30 Minutes: A Cron node runs on a 30-minute interval and keeps monitoring the channel RSS feed for updates.
2. Fetch YouTube RSS: The HTTP Request node retrieves the channel's RSS feed using the format https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID
3. Parse RSS & Check for New Video: A Code node extracts the video title, link, description, and published date, sorts by most recent publish date, and processes only videos published within the last 2 hours (avoids duplicate posts).
4. Format Slack Message: Builds a rich Slack message with the video title, description preview, published date, and a "🎥 Watch Now" button.
5. Post to Slack: Sends the formatted message to your chosen Slack channel (default: #general), with a custom username/icon for branding.

🛠️ Setup Steps
1. Get the YouTube channel RSS feed: Go to your channel page → View Page Source and find channel/UCxxxxxxxxxx (your channel ID). Construct the RSS feed URL https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID and replace YOUR_CHANNEL_ID_HERE in the HTTP Request node.
2. Connect Slack: Create a Slack app at api.slack.com, add the OAuth scopes chat:write and channels:read, install it to your workspace, and connect your Slack OAuth credentials in n8n.
3. Adjust timing (optional): The default is every 30 minutes; modify the Cron node if you want faster or slower checks.

📺 Example Slack Output
🎬 New Video Published!
How to Automate Your Business with n8n
📅 Published: Aug 29, 2025
Learn how to connect your apps and automate repetitive tasks using n8n…
With a clickable 🎥 Watch Now button linking directly to the video.

⚡ With this workflow, your Slack team is always up to date on new YouTube uploads, with no manual link sharing needed.
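The "new videos within the last 2 hours" check in the Code node can be sketched as a pure function over the parsed RSS items (the `published` field name is an assumption; use whatever your RSS parser emits):

```javascript
// Keep only videos published within the last `windowHours`, newest first.
// `published` is the RSS <published> timestamp as an ISO-8601 string.
function newVideos(items, windowHours = 2, now = new Date()) {
  const cutoff = now.getTime() - windowHours * 60 * 60 * 1000;
  return items
    .filter((v) => new Date(v.published).getTime() >= cutoff)
    .sort((a, b) => new Date(b.published) - new Date(a.published));
}
```

Because the window (2 hours) is wider than the polling interval (30 minutes), a video is never missed between runs; posting the same video twice is avoided as long as downstream logic only announces items it has not seen before.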
by LeeWei
⚙️ Sales Assistant Build: Automate Prospect Research and Personalized Outreach for Sales Calls

🚀 Steps to Connect

1. Google Sheets Setup
- Connect your Google account via OAuth2 in the "Review Calls", "Product List", "Testimonials Tool", "Update Sheet", and "Update Sheets 2" nodes.
- Duplicate the mock Google Sheet (ID: 1u3WMJwYGwZewW1IztY8dfbEf5yBQxVh8oH7LQp4rAk4) to your drive and update the documentId in all Google Sheets nodes to match your copy's ID.
- Ensure the sheet has tabs for "Meeting Data", "Products", and "Success Stories", populated with your data.
- Setup time: ~5 minutes.

2. OpenAI API Key
- Go to OpenAI and generate your API key.
- Paste this key into the credentials for both the "OpenAI Chat Model" and "OpenAI Chat Model1" nodes.
- Setup time: ~2 minutes.

3. Tavily API Key
- Sign up at Tavily and get your API key.
- In the "Tavily" node, replace the placeholder api_key in the JSON body with your key (e.g., "api_key": "your-tavily-key-here").
- Setup time: ~3 minutes.

How it Works
- Triggers on a new sales call booking (manual for testing).
- Pulls prospect details from Google Sheets and researches the company, tech stack, and recent updates using Tavily.
- Matches relevant products/solutions from your product list and updates the sheet.
- Generates a personalized confirmation email (subject + body) and SMS, using testimonials for relevance.
- Updates the sheet with the outreach content for easy follow-up.

Setup takes ~10–15 minutes total. All nodes are pre-configured; edit only the fields above. Detailed notes (e.g., prompt tweaks) are in sticky notes within the workflow.
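A sketch of what the Tavily node's JSON body might look like once your key is in place. The query string and the optional parameters (`search_depth`, `max_results`) are assumptions; adapt them to your research prompt:

```javascript
// Build the JSON body for Tavily's POST /search endpoint.
// The prospect fields come from the "Review Calls" Google Sheets row;
// the query wording is illustrative, not fixed by the workflow.
function tavilyBody(apiKey, prospect) {
  return JSON.stringify({
    api_key: apiKey, // your Tavily key, as referenced in the node's JSON body
    query: `${prospect.company} company overview, tech stack, recent news`,
    search_depth: 'basic',
    max_results: 5,
  });
}
```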
by Marsel Bait
🧠 How it works
This workflow lets users generate structured summaries from YouTube videos directly inside Slack using n8n, AssemblyAI, and OpenAI. When a user submits a YouTube link via a Slack slash command, the workflow extracts the video ID and validates the video duration. Videos longer than the supported limit are stopped early, with a clear message sent back to Slack. For valid videos, the workflow downloads the video audio as an MP3 file and sends it to AssemblyAI for transcription. Once the transcription is complete, the transcript is passed to an AI model to generate a structured summary. The final result includes a concise TL;DR, key takeaways, and notable quotes, which are formatted and posted back to Slack asynchronously using the original response URL.

⚙️ Features
- Triggers from a Slack slash command with a YouTube link
- Validates video length before processing (maximum 10 minutes)
- Downloads YouTube audio as MP3 using RapidAPI
- Transcribes audio using AssemblyAI
- Generates structured summaries (TL;DR, key takeaways, notable quotes)
- Posts the summarized result back to Slack asynchronously

💡 Use cases & expected outcomes
- Educational YouTube videos → Receive a clear summary instead of watching the full video
- Long-form talks or interviews → Quickly get key points and memorable quotes
- Research and learning → Extract insights from videos without manual note-taking
- Content discovery → Decide whether a video is worth watching based on its summary

In all cases, users receive a clear, structured summary of a YouTube video directly in Slack.

💡 Perfect for
- Teams sharing YouTube links and wanting quick context
- Researchers and learners reviewing long video content
- Content creators analyzing videos efficiently

🧩 Notes
- Please note that this workflow generates summaries, not full transcripts.
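The video-ID extraction step can be sketched as a single regular expression over the common YouTube URL shapes (this is a generic approach, not necessarily the exact code the workflow ships with):

```javascript
// Extract the 11-character YouTube video ID from common URL shapes
// (watch?v=..., youtu.be/..., shorts/..., embed/...). Returns null if
// the URL does not contain a recognizable video ID.
function extractVideoId(url) {
  const m = url.match(
    /(?:youtube\.com\/(?:watch\?(?:.*&)?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
  );
  return m ? m[1] : null;
}
```

Returning `null` for unrecognized links lets the workflow send the "clear message back to Slack" early, before any download or transcription cost is incurred.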
by Rahul Joshi
📘 Description
This workflow automates guest enquiry intake, intent classification, response generation, and internal routing for hospitality businesses using webhooks, GPT-4o, Gmail, and Slack. It converts raw guest enquiries into structured, actionable items while ensuring fast acknowledgements, correct team assignment, and SLA visibility.

When a guest submits an enquiry through a webhook (website form, booking page, or external system), an AI agent analyzes the message to detect intent: booking, pricing, availability, or policy. The agent generates a polite, human-like acknowledgement message and determines the most suitable internal team to handle the request. Based on the detected intent, the workflow assigns the enquiry to the correct team with a predefined SLA. If the guest prefers email communication, an automated reply is sent via Gmail. In parallel, a detailed enquiry summary, including guest details, detected intent, assigned agent, and SLA, is posted to Slack for internal visibility and follow-up. All AI responses are also logged to Slack for transparency. Any workflow failure triggers an immediate Slack alert with diagnostic details.

⚙️ What This Workflow Does (Step-by-Step)

📥 Webhook Trigger – Guest Enquiry Intake
Receives guest enquiry data (name, email, message, dates, preferences).

🧠 AI Intent Classification & Reply Generation
GPT-4o analyzes the enquiry, detects intent, and generates a polite acknowledgement.

🔍 Extract Detected Category
Parses the AI output to identify the enquiry category.

🔀 Route by Intent Category
Directs enquiries to booking, pricing, availability, or policy flows.

👥 Team Assignment with SLA
Assigns the enquiry to the correct team and sets the response SLA.

📧 Send Email Reply to Guest (Conditional)
Automatically replies if the guest prefers email contact.

💬 Post Enquiry Summary to Slack
Shares the full enquiry context, assignment, and SLA for team follow-up.

📝 Log AI Reply to Slack
Stores the generated guest reply for internal reference.
🚨 Error Handler → Slack Alert
Sends instant alerts if any node fails.

🧩 Prerequisites
- n8n webhook endpoint
- OpenAI / Azure OpenAI (GPT-4o or GPT-4o-mini) credentials
- Gmail OAuth2 credentials
- Slack API credentials

💡 Key Benefits
✔ Instant AI-driven enquiry classification
✔ Faster guest acknowledgements
✔ Correct team routing with SLA enforcement
✔ Reduced manual triage for front-desk and sales teams
✔ Full visibility of enquiries and replies in Slack
✔ Built-in error monitoring

👥 Perfect For
- Hotels and resorts
- Vacation rental operators
- Hospitality sales and front-desk teams
- Property management companies
- Businesses handling high volumes of guest enquiries
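The team-assignment-with-SLA step amounts to a small lookup keyed by the detected intent. The team names and SLA minutes below are illustrative assumptions, not values fixed by the workflow:

```javascript
// Map the AI-detected intent category to an internal team and response SLA.
// Team names and SLA values are illustrative defaults; edit to match your
// organization's routing rules.
const ROUTING = {
  booking:      { team: 'Reservations',    slaMinutes: 30 },
  pricing:      { team: 'Sales',           slaMinutes: 60 },
  availability: { team: 'Front Desk',      slaMinutes: 30 },
  policy:       { team: 'Guest Relations', slaMinutes: 120 },
};

function assignTeam(intent) {
  const key = String(intent || '').trim().toLowerCase();
  // Unknown or unparseable intents fall back to Front Desk for manual triage.
  return ROUTING[key] || { team: 'Front Desk', slaMinutes: 60 };
}
```

Normalizing the intent string before the lookup makes the routing robust to casing or whitespace variation in the model's output.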
by Shohei Sawada
This template automates overnight system health monitoring for DevOps and IT operations teams. It checks your internal services and APIs on a schedule, logs all results to Google Sheets, and sends AI-generated alert emails when something goes wrong.

Who this is for
System administrators, DevOps engineers, and IT operations teams who need lightweight, automated overnight monitoring without setting up a full-scale monitoring platform.

What's included
A single workflow that handles the full monitoring loop:
- Read Monitor Targets: Reads active URLs from Google Sheets. Add or disable targets anytime without touching the workflow.
- Health Check: Sends HTTP GET requests with a 3-second timeout to each target.
- Response Time Tracking: Measures and logs response time in milliseconds for every check.
- Full Result Logging: Appends every check result (OK and ERROR) to a Google Sheets log with timestamp, status, status code, response time, error message, and alert status.
- AI-Generated Alert: On failure, GPT-4o-mini generates a structured Japanese alert message with a situation summary, urgency level, first steps to check, and escalation guidance.
- Gmail Alert: Sends the alert email to a configured address with a clear subject line.
- Alert Status Tracking: Updates the log to mark when an alert has been sent.

Key features
- Manage all monitoring targets in Google Sheets; no workflow edits needed to add or remove URLs
- Monitors both HTTP errors (503, etc.) and connection failures (timeouts, ECONNREFUSED)
- Logs every check result for uptime analysis and audit purposes
- AI alerts in Japanese with actionable first-response guidance
- Configurable schedule via Cron expression (default: every 10 minutes, weekdays 1:00–4:59 AM)
- Timezone follows your n8n instance setting; nothing is hardcoded

Prerequisites
- Google Sheets (copy link provided in the workflow)
- Gmail account connected to n8n
- OpenAI API key
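The classification behind Full Result Logging can be sketched as a pure function that turns one check outcome into a log row. The column names mirror the fields listed above; the exact result shape (`statusCode`, `responseMs`, `error`) is an assumption about what the HTTP Request node hands over:

```javascript
// Classify one health-check result into the row appended to the log sheet.
// `result` is either { statusCode, responseMs } on an HTTP response or
// { error } for connection failures (timeout, ECONNREFUSED).
function toLogRow(url, result, now = new Date()) {
  const failed = Boolean(result.error) || result.statusCode >= 400;
  return {
    timestamp: now.toISOString(),
    url,
    status: failed ? 'ERROR' : 'OK',
    statusCode: result.statusCode ?? '',
    responseMs: result.responseMs ?? '',
    errorMessage: result.error || '',
    alertSent: false, // flipped to true after the Gmail alert goes out
  };
}
```

Treating both 4xx/5xx responses and transport-level errors as ERROR is what lets the workflow catch a 503 and an ECONNREFUSED with the same downstream alerting path.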
by Mohammed Aljer
Who’s it for
This template is for anyone who wants a quick, structured inbox review (founders, freelancers, support, operations) without reading every email.

What it does
Every 12 hours, the workflow checks Gmail for emails received in the last 12 hours, then summarizes each email and extracts the next action plus a priority level (high/normal/low). It generates a direct Gmail “Open message” link for each item (using rfc822msgid) and sends one clean HTML digest email. If there are no new emails, it sends a short “No new emails” message instead.

How to set up
1. Connect your Gmail OAuth2 credentials in these nodes: Fetch Emails, Send Summary, No Emails.
2. Connect your Gemini credentials in Google Gemini Chat Model.
3. Update the recipient email in Send Summary and No Emails (sendTo), or replace it with a single variable in a Config/Set node.
4. Click Test workflow once, then activate the workflow.

Requirements
- Gmail account + Gmail OAuth2 credentials in n8n.
- Google Gemini API access.

Recommended model
Use Gemini 2.5 Flash for fast, reliable structured output. If the model is not available in your region/account, select the closest available Gemini model.

How to customize the workflow
- Change the lookback window (12 hours → 24 hours, etc.) by editing the Date/Time calculation.
- Adjust the AI prompt rules (priority criteria, action wording, output fields).
- Modify the HTML digest design to match your branding.
- Remove or change the Inbox-only filter if you want to include other labels (Sent/Other).
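The "Open message" link can be built from each email's Message-ID header using Gmail's rfc822msgid: search operator. The URL shape below is the commonly used Gmail search deep link, and the /u/0/ account index is an assumption (first signed-in account):

```javascript
// Build a Gmail "Open message" deep link from an email's Message-ID header,
// using Gmail's rfc822msgid: search operator.
function gmailOpenLink(messageId) {
  const id = messageId.replace(/^<|>$/g, ''); // strip angle brackets if present
  return 'https://mail.google.com/mail/u/0/#search/' +
    encodeURIComponent('rfc822msgid:' + id);
}
```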
by vibin
Who it’s for
This workflow is built for Shopify store owners using Magic Checkout (Razorpay). Since Shopify’s default abandoned cart recovery doesn’t work with third-party checkouts, you’re left without an easy way to track or follow up. This workflow closes that gap by sending you automatic Telegram alerts for every abandoned cart, so you stay on top of potential sales without lifting a finger. There are sticky notes inside for each node to explain everything you need.

What it does
The workflow runs automatically every 6 hours. Here’s the flow:
1. Fetch abandoned cart data from Razorpay using the API.
2. Format the data into clean, readable text (customer info, cart details, order value).
3. Send the message directly to your Telegram bot.
Instead of logging into Shopify or Razorpay to check, you’ll get a notification straight in Telegram.

What you need
- A Telegram account with a bot (create one via BotFather).
- Your Telegram bot token and client ID.
- Your Razorpay API key from your Razorpay dashboard.

How to set it up
1. Import this workflow into your n8n workspace.
2. Create a new API profile with your Razorpay credentials.
3. Add your Telegram bot details to the Telegram nodes in n8n.
4. (Optional) Adjust the trigger time. The default is every 6 hours, but you can make it run more or less frequently.

Why it helps
This is a free, no-code workaround for a real Shopify limitation. By automating abandoned cart alerts, you save time, cut manual checks, and follow up faster with customers. Even a single recovered cart can pay off the effort. Plus, you can customize it: add more customer info, link directly to the checkout, or connect it with Slack or email tools.
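The formatting step (step 2 above) can be sketched as a small function. The field names (`name`, `contact`, `email`, `items`, `amount`) are illustrative; map them to whatever Razorpay's abandoned-cart API actually returns for your account:

```javascript
// Format one Razorpay abandoned-cart record into a readable Telegram message.
// Field names are illustrative assumptions, not Razorpay's exact schema.
function cartMessage(cart) {
  const lines = [
    '🛒 Abandoned cart',
    `Customer: ${cart.name || 'Unknown'} (${cart.contact || 'no phone'})`,
    `Email: ${cart.email || 'n/a'}`,
    `Items: ${cart.items.map((i) => `${i.qty}x ${i.title}`).join(', ')}`,
    // Razorpay amounts are in paise; divide by 100 for rupees.
    `Order value: ₹${(cart.amount / 100).toFixed(2)}`,
  ];
  return lines.join('\n');
}
```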
by Sheragim
This workflow automates the creation of a draft article for a blog.

Use Cases
- Rapidly generate blog content from simple prompts.
- Ensure content consistency and speed up time-to-publish.
- Automatically source and attach relevant featured images.
- Save your digital marketing team significant time.

Prerequisites/Requirements
- An OpenAI API key (for GPT-4o).
- A Pixabay API key (for image sourcing).
- A WordPress site URL and API credentials (username/password or application password).

Customization Options
- Adjust the AI prompt in the AI Content Generation node to change the content tone and style.
- Modify the search query in the Pixabay Query HTTP node to influence the featured image selection.
- Change the reviewer email address in the final Send Review Notification node.
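The request the Pixabay Query HTTP node makes can be sketched like this. The extra parameters (`orientation`, `per_page`) are assumptions for a featured-image use case; only `key` and `q` are strictly needed:

```javascript
// Build the Pixabay image-search URL used to source a featured image.
// `key` is your Pixabay API key; the query is derived from the post topic.
function pixabayUrl(key, topic) {
  const params = new URLSearchParams({
    key,
    q: topic, // URLSearchParams handles the URL-encoding Pixabay expects
    image_type: 'photo',
    orientation: 'horizontal',
    per_page: '3',
  });
  return `https://pixabay.com/api/?${params.toString()}`;
}
```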
by Priyanka Rana
Overview
This n8n workflow template automates your B2B marketing follow-up process. It tracks which introductory emails have received a reply, identifies leads who haven't responded within a set time, uses Gemini AI to draft a personalized, casual reminder, sends the follow-up as a reply on the original thread, and updates your lead tracker in Google Sheets. It works best when paired with a previously created workflow that sends an automated introductory email with a templatized subject.

Requirements
To use this workflow, you need the following accounts and credentials:
- Gmail account: to check for replies and send the reminder emails.
- Google Sheets account: to manage your lead tracking spreadsheet (the workflow references the sheet by its ID). The sheet needs these columns: First Name, Last Name, Email ID, Company Name, Company Information (optional), Designation (optional), Message (the main form enquiry), Location (optional), Status (auto), Intro email Date (auto), Reminder 1 needed? (auto), Reminder 1 Email Date (auto).
- Google Gemini (PaLM) API key: for the AI Agent node to generate the personalized email content.

How It Works
This automation is broken down into three main stages:

Stage 1: Check for Replies and Update Tracker
This stage excludes leads who have already replied to your introductory email and updates the status in your tracker.
- When clicking ‘Execute workflow’ (Manual Trigger): The workflow starts manually or can be scheduled.
- Get many messages (Gmail): Searches your inbox (CATEGORY_PERSONAL) for replies to your introductory email (using the search query subject: <template of your introductory email>).
- Update row in sheet (Google Sheets): For every incoming reply found, matches the lead by Email ID and sets the Reminder 1 needed? column to No.

Stage 2: Identify Who Needs a Reminder
This stage finds leads who have not yet received a reminder and checks whether the introductory email was sent over 5 days ago.
- Get row(s) in sheet (Google Sheets): Retrieves all leads from the tracker where the Reminder 1 needed? column is not set to No (i.e., they haven't replied and a reminder hasn't been logged).
- If: A condition checks whether the Intro email Date is older than 5 days (DateTime.now().minus({ days: 5 })). Only leads that meet this age criterion are passed forward.

Stage 3: Send Personalized Reminder and Final Update
For eligible leads, the AI generates a follow-up, finds the original email thread, sends the reply, and logs the action.
- AI Agent: Acts as a B2B marketing assistant to write a short, friendly first reminder email. It uses lead data (First Name, Company Name, Message) to personalize the content, referencing the original introductory email and the client's pain point. The AI is instructed to format its output into ClientEmail and ClientEmailBody using the Structured Output Parser.
- Edit Fields (Set): Maps the structured AI output to workflow fields.
- Get many messages1 (Gmail): Searches the SENT label for the original email, using the client's email address and the introductory subject line, to find the correct threadId and messageId.
- Reply to a message (Gmail): Sends the personalized body as a reply on the original thread to maintain context.
- Update row in sheet1 (Google Sheets): Updates the lead's row, setting Status to Reminder 1 Drafted, Reminder 1 needed? to Yes, and recording the current date in the Reminder 1 Email Date column.

Customization
Currently the workflow sends only the first reminder; it can be extended to add another reminder. Write to priyanka@buildmyaiflow.agency for more customizations.
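The Stage 2 eligibility check can be sketched as a single predicate over a tracker row. In the n8n If node this is expressed with Luxon as DateTime.now().minus({ days: 5 }); plain Date is used here so the sketch is self-contained:

```javascript
// Decide whether a lead is due for reminder 1: no reply recorded
// (Reminder 1 needed? is not "No") and the intro email is older than `days`.
function needsReminder(lead, days = 5, now = new Date()) {
  if (String(lead['Reminder 1 needed?']).toLowerCase() === 'no') return false;
  const sent = new Date(lead['Intro email Date']);
  return now.getTime() - sent.getTime() > days * 24 * 60 * 60 * 1000;
}
```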
by WeblineIndia
AI-Powered Fake Review Detection Workflow Using n8n & Airtable

This workflow automates the detection of potentially fake or manipulated product reviews using n8n, Airtable, OpenAI, and Slack. It fetches reviews for a given product, standardizes the data, generates a unique hash to avoid duplicates, analyzes each review using an AI model, updates the record in Airtable, and alerts the moderation team if the review appears suspicious.

Quick Implementation Steps
1. Add Airtable, OpenAI, and Slack credentials to n8n.
2. Create an Airtable base with a reviews table.
3. Connect the Webhook URL to your scraper or send sample JSON via Postman.
4. Test the workflow by passing product and review URLs.
5. Activate the workflow for continuous automated review screening.

What It Does
This workflow provides an automated pipeline to analyze product reviews and determine whether they may be fake or manipulated. It begins with a webhook that accepts product information and a scraper API URL. Using this information, the workflow fetches the associated reviews. Each review is then expanded into separate items and normalized to maintain a consistent structure. The workflow generates a hash for deduplication, preventing multiple entries of the same review. New reviews are stored in Airtable and subsequently analyzed by OpenAI. The resulting risk score, explanation, and classification are saved back into Airtable. If a review's score exceeds a predefined threshold, a structured Slack alert is sent to the moderation team. This ensures that high-risk reviews are escalated promptly while low-risk reviews are simply stored for recordkeeping.
Who’s It For
- eCommerce marketplaces monitoring review integrity
- Sellers seeking automated fraud detection for product reviews
- SaaS platforms that accept user-generated reviews
- Trust & Safety and compliance teams
- Developers looking for an automated review-quality pipeline

Requirements
- n8n (Cloud or self-hosted)
- Airtable Personal Access Token
- OpenAI API key
- Slack bot token or webhook
- Review scraper API
- Basic understanding of Airtable field setup

How It Works & How To Set Up
1. Receive Product Data: The workflow starts with the Webhook – Receive Product Payload, which accepts a list of products and their scraper URLs.
2. Extract and Process Each Product: Extract products separates the list into individual items; Process Each Product ensures each product's reviews are processed one at a time.
3. Fetch and Validate Reviews: Fetch Product Reviews calls the scraper API; IF – Has Reviews? determines whether any reviews were returned.
4. Expand and Normalize Reviews: Expand reviews[] to items splits reviews into individual items; Prepare Review Fields ensures a consistent review structure.
5. Generate Review Hash: Generate Review Hash1 produces a deterministic hash based on review text, reviewer ID, and date.
6. Airtable Deduplication Check: Search Records by Hash checks whether the review already exists; Normalize Airtable Result cleans Airtable's inconsistent empty output; Is New Review? decides if the review should be inserted or skipped.
7. Store New Reviews: Create Review Record inserts new reviews into Airtable.
8. AI-Based Fake Review Analysis: AI Fake Review Analysis sends the relevant review fields to OpenAI; Parse AI Response ensures the output is valid JSON.
9. Update Airtable With AI Results: Update Review Record stores the AI's score, classification, and reasoning.
10. Moderation Alert: Check Suspicious Score Threshold evaluates whether the fake score exceeds a defined limit; if so, Send Moderation Alert posts a detailed message to Slack.
How To Customize Nodes
- Fake score threshold: modify the threshold in Check Suspicious Score Threshold.
- Slack message format: adjust the text fields in Send Moderation Alert.
- AI prompt instructions: edit the instructions inside AI Fake Review Analysis.
- Airtable fields: update the mappings in both Create Review Record and Update Review Record.
- Additional checks: insert enrichment steps before the AI analysis, such as reviewer profile metadata, geolocation or reverse-IP checks, or keyword density analysis.

Add-ons
- Notion integration for long-term review case tracking
- Jira or Trello integration for incident management
- Automated sentiment tagging
- Weekly review-risk summary reports
- Google Sheets backup for archived reviews
- Reviewer behavior modeling (number of reviews, frequency, patterns)

Use Case Examples
- Detecting manipulated Amazon product reviews.
- Flagging repetitive or bot-like reviews for Shopify stores.
- Screening mobile app reviews for suspicious content.
- Quality-checking user reviews on multi-vendor marketplaces.
- Monitoring competitor-driven false negative or positive reviews.

There are many more scenarios where this workflow helps identify misleading product reviews.
Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No data after review fetch | Scraper API returned empty response | Validate scraper URL and structure |
| Duplicate reviews inserted | Hash mismatch | Ensure Generate Review Hash1 uses the correct fields |
| Slack alert not triggered | Bot not added to channel | Add bot to the target Slack channel |
| AI response fails to parse | Model returned non-JSON response | Strengthen "JSON only" prompt in AI analysis |
| Airtable search inconsistent | Airtable returns empty objects | Rely on Normalize Airtable Result for correction |

Need Help?
If you need assistance customizing this workflow, integrating additional systems, or designing advanced review moderation solutions, our n8n workflow development team at WeblineIndia is available to help. We offer support for:
- Workflow setup and scaling
- Custom automation logic
- AI-driven enhancements
- Integration with third-party platforms
- And much more

Feel free to contact us for guidance, implementation, or to build similar automated systems tailored to your needs.
by Paolo Ronco
This workflow automates the entire lifecycle of collecting, filtering, summarizing, and delivering the most important daily news in technology, artificial intelligence, cybersecurity, and the digital industry. It functions as a fully autonomous editorial engine, combining dozens of RSS feeds, structured data processing, and an LLM (Google Gemini) to transform a large volume of raw articles into a concise, high-value daily briefing delivered straight to your inbox.

Read: Full setup Guide

✅ 1. Scheduled Automation
The workflow begins with a Schedule Trigger, which runs at predefined intervals. Every execution generates a fresh briefing that reflects the most relevant news from the past 24 hours.

✅ 2. Massive Multi-Source RSS Collection
The workflow gathers content from over 25 curated RSS feeds covering:
- 🔐 Cybersecurity (The Hacker News, Krebs on Security, SANS, CVE feeds, Google Cloud Threat Intelligence, Cisco Talos, etc.)
- 🤖 Artificial Intelligence (Google Research, MIT News, AI News, OpenAI News)
- 💻 Technology & Digital Industry (Il Sole 24 Ore, Cybersecurity360, Graham Cluley, and more)
- ⚙️ Nvidia Ecosystem (Nvidia Newsroom, Nvidia Developer Blog, Nvidia Blog)

Each RSS feed is handled by a dedicated node, which ensures source isolation, easier debugging, and no single point of failure. The feeds are grouped using category-specific Merge nodes (Cyber1/2/3, AI, Nvidia), enabling modular scalability.

✅ 3. Unified Feed Aggregation
All category merges feed into the Merge_All node, creating a single combined dataset of articles from every source.

✅ 4. Intelligent Filtering (last 24 hours only)
The Filter node removes articles older than 24 hours (based on isoDate), invalid items, and duplicated or redundant entries. This keeps the briefing strictly relevant to the current day.

✅ 5. Chronological Sorting
The Sort – Articles by Date node orders all remaining items in descending date order, so more recent or time-sensitive news is prioritized.

✅ 6. Data Normalization (JavaScript Code)
A dedicated Code node transforms all incoming items into one clean JSON object:

```json
{
  "articles": [
    { "title": "...", "content": "...", "link": "...", "isoDate": "..." }
  ]
}
```

This standardized structure becomes the input for the LLM summarization stage.

✅ 7. AI Editorial Processing – Google Gemini
The LLM – News Summarizer node is the workflow's editorial brain. A detailed prompt instructs Gemini to behave like the editor-in-chief of a major tech newspaper, enforcing strict rules:
- Selection rules: choose only 8–10 truly important stories; ignore low-value content (minor product releases, clickbait, rumors, and so on).
- Relevance criteria: AI research and foundation models; Big Tech developments; cybersecurity incidents; regulation and digital policy; semiconductors, cloud, and infrastructure; digital rights, governance, and sovereignty.
- Deduplication: if multiple feeds report the same story, only one version is kept.
- Output format: Gemini must output a valid JSON object containing subject (the email subject line) and html (a fully structured HTML body grouped into categories). Each news item ends with a clickable HTML source link, never a plaintext URL.

This step condenses dozens of articles into a polished, editorial-grade briefing.

✅ 8. HTML Newsletter Assembly (Code Node)
The Build Final Newsletter HTML node safely parses the JSON from the LLM, cleans the output, validates the subject and html fields, and embeds the content into a modern, responsive HTML email template. The output is a single item containing the final email subject and the final HTML body, ready to be sent.

✅ 9. Automatic Email Delivery
The Send Final Digest Email (Gmail) node uses the generated subject, sends the curated HTML newsletter to the configured recipient(s), and uses a custom sender name (“n8n News”). The result is a fully automated Tech & AI Daily Briefing delivered with zero manual effort.
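The 24-hour filter, the chronological sort, and the Code-node normalization can be sketched together as one function over the merged RSS items (in n8n this would run inside the Code node over $input.all(); the `contentSnippet` fallback is an assumption about the RSS parser's output):

```javascript
// Normalize merged RSS items into the single { articles: [...] } object the
// LLM node expects, keeping only items published in the last 24 hours and
// sorting them newest first.
function buildArticles(items, now = new Date()) {
  const cutoff = now.getTime() - 24 * 60 * 60 * 1000;
  const articles = items
    .filter((i) => i.isoDate && new Date(i.isoDate).getTime() >= cutoff)
    .sort((a, b) => new Date(b.isoDate) - new Date(a.isoDate))
    .map((i) => ({
      title: i.title || '',
      content: i.contentSnippet || i.content || '',
      link: i.link || '',
      isoDate: i.isoDate,
    }));
  return { articles };
}
```

Dropping items with no isoDate handles the "invalid items" case from the Filter step: anything the feed parser could not date is excluded rather than guessed at.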
In Summary: What This Workflow Achieves
✔ Collects news from 25+ high-quality RSS sources
✔ Normalizes, filters, and sorts all items automatically
✔ Uses Google Gemini to select only the stories that truly matter
✔ Generates a coherent, readable, professional-looking HTML newsletter
✔ Sends the result via email every day

Perfect for:
- daily executive briefings
- technology and cybersecurity monitoring
- automated newsletter production
- internal knowledge distribution
- competitive intelligence workflows