by Praveena
**Idea**

The idea for this app came from wanting to build a unique gift for my niece, who gets very excited about her birthday (which I'm going to miss this year). The web app has a simple countdown (in HTML and JS), but more importantly there is an AI agent that answers specific questions and knows her preferences.

**How it works**

Questions from the app are sent via webhook to n8n, which pulls a preferences file (her likes, dislikes, personality) from PostgreSQL and runs an AI Agent that answers and responds. The current state (especially the status of the cat and any universe happenings) is stored back in PostgreSQL before the reply is returned. A sketch of the webhook call from the web app is shown at the end of this description.

**Features**

- Integrated AI chatbot via n8n webhook
- Persistent conversation history
- Minimizable chat interface
- Fallback support for offline testing
- **Where's Mittens** – a query to track her lost cat across the multiverse
- **Multiverse updates** – with the most recent update stored

**Prerequisites**

- A PostgreSQL database. Alternatively, use any other database, but adjust the n8n nodes accordingly.
- An LLM API key.

**Step-by-step instructions**

1. Import this n8n workflow.
2. Add your LLM API key. I used OpenAI GPT-4.1.
3. For the web app scaffolding you will need Node, HTML and JavaScript. I've created a mini version using Node and JS, with the web app and n8n connection settings, here: https://github.com/productiser/FiBirthdayAgent
4. Run the PostgreSQL database script (one table for memory and context storage):

```sql
CREATE TABLE fifi_world_context (
  id               TEXT PRIMARY KEY,                  -- e.g., 'agent_fifi'
  cat_location     TEXT,                              -- e.g., "Bubble Nebula"
  cat_activity     TEXT,                              -- e.g., "Playing laser tag with moon mice"
  fifi_preferences JSONB,                             -- e.g., likes/dislikes/foods/shows
  world_history    TEXT,                              -- Summary of narrative events
  last_updated     TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```

5. Modify the system prompt as per your needs.

**Built with**

- n8n (self-hosted)
- Self-hosted web app, deployed on Vercel
- Total spend: <£1 (AI costs only)
- Total time: <1 day

**Support**

Watch this video for an overview of the web app and how it looks: https://youtu.be/e7PlrTdvwoM

Contact me at info@pankstr.com or superllmuser@gmail.com for any queries. Hope you enjoy!
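For reference, this is roughly how the web app can pass a question to the n8n webhook. It is a minimal sketch only: the webhook path and the payload and response field names are assumptions and must match whatever your Webhook/Chat Trigger node actually expects.

```js
// Hypothetical client-side call from the countdown web app to the n8n webhook.
// Replace the URL and field names with the ones configured in your n8n trigger node.
async function askFifiAgent(question) {
  const response = await fetch('https://your-n8n-instance.com/webhook/fifi-agent', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }), // assumed payload shape
  });
  const data = await response.json();
  return data.answer;                   // assumed response field
}
```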
by Pavel Duchovny
Building agentic AI workflows often requires multiple moving parts: memory management, document retrieval, vector similarity, and orchestration. Until now, these pieces had to be custom-wired. With the new native n8n nodes for MongoDB Atlas, we reduce that overhead dramatically. With just a few clicks you can:

- Store and recall long-term memory from MongoDB
- Query vector embeddings stored in Atlas Vector Search
- Use these results in your LLM chains and automation logic

In this example we present ingestion and AI Agent flows centred on travel planning. The points of interest that we want the agent to know about are ingested into the vector store, and the AI Agent uses the vector store tool to fetch relevant context about those points of interest when it needs to.

**Prerequisites**

- A MongoDB Atlas project and cluster
- A valid OpenAI API key for embeddings (can be another provider)
- A Gemini API key for the LLM (can be another provider)

**How it works**

There are two main flows.

The first is the ingestion flow:

- Receives a document from a webhook and uses the MongoDB Atlas vector store to embed the document title and description into the points_of_interest collection (a sketch of the webhook payload is shown at the end of this description).
- Embeddings are stored in a field named embedding.
- The embeddings used here are OpenAI's, but any supported embedder can be used.

The second flow is an AI Agent node with chat memory stored in MongoDB Atlas and a Vector Search node as a tool:

- **Chat Message Trigger**: chatting with the AI Agent triggers the conversation store in the MongoDB Chat Memory node. When data is needed, such as a location search or details, the agent calls the "Vector Search" tool.
- **Vector Search Tool**: uses an Atlas Vector Search index created on the points_of_interest collection (index name: "vector_index"; if you change the embedding provider, make sure numDimensions matches the model):

```json
{
  "fields": [
    {
      "type": "vector",
      "path": "embedding",
      "numDimensions": 1536,
      "similarity": "cosine"
    }
  ]
}
```

**Additional resources**

- MongoDB Atlas Vector Search
- n8n Atlas Vector Search docs
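To illustrate the ingestion flow, here is a rough sketch of how a document could be posted to the ingestion webhook. The webhook URL and payload field names are assumptions; only the idea that a title and description get embedded into points_of_interest comes from the description above.

```js
// Hypothetical call to the ingestion webhook with one point of interest.
// Replace the URL and adjust field names to match your Webhook node.
async function ingestPointOfInterest() {
  const response = await fetch('https://your-n8n-instance.com/webhook/ingest-poi', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      title: 'Sagrada Familia',
      description: 'Iconic basilica in Barcelona, famous for its facades and towers.',
    }),
  });
  return response.json();
}
```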
by Yang
**Who is this for?**

This workflow is for digital marketers, small business owners, lead generation agencies, and VAs who need a scalable way to find and store local business leads using AI. It's especially useful for teams that want to enrich leads with real-time news insights and save the structured data to Airtable.

**What problem is this workflow solving?**

Manually researching local businesses and staying up to date with relevant news is time-consuming and inefficient. This automation eliminates that burden by using Dumpling AI chat agents to generate leads and context, GPT-4o to summarize, and Airtable to store everything in one place.

**What this workflow does**

This AI workflow listens for a trigger in n8n and executes the following steps:

1. Extracts local business leads using a Local Business Agent from Dumpling AI.
2. Pulls current news related to the business type or location using a News Agent from Dumpling AI.
3. Uses GPT-4o to combine both responses into a human-readable summary.
4. Extracts structured lead data like name, category, and city.
5. Saves the summary and lead data into Airtable for easy follow-up.

**Setup**

1. Create AI agents in Dumpling AI
   - Sign in at Dumpling AI and create two separate agents:
     - Local Business Agent: designed to respond with structured lists of businesses by location and category.
     - News Agent: designed to fetch relevant recent news and summaries about a specific industry or region.
   - After setting up each agent, copy the Agent Key from Dumpling AI. These keys are required in the headers of your HTTP Request nodes in n8n.
2. Trigger
   - The workflow begins with the When chat message is received trigger inside n8n, which makes it easy to test and reuse, especially during setup.
3. Get local business data from Dumpling AI
   - The first HTTP Request node sends a prompt like "List 5 top real estate companies in Atlanta with full address and services."
   - Include your Local Business Agent Key in the x-agent-key header (a request sketch is shown at the end of this description).
   - The response returns a structured list of business leads.
4. Get news context from Dumpling AI
   - The second HTTP Request node sends a prompt such as "Give me the latest news related to the real estate market in Atlanta."
   - Use your News Agent Key in the header.
   - This fetches a brief set of recent news summaries relevant to the businesses being researched.
5. Use GPT-4o to merge and summarize
   - The GPT node combines the list of businesses and the news into one coherent summary.
   - You can modify the prompt to output paragraphs, bullet points, or structured notes.
6. Save leads to Airtable
   - The Airtable node sends all structured fields into your selected base and table.
   - Be sure to connect your Airtable account and confirm the columns match exactly.

**How to customize this workflow**

- Replace the prompt inside the HTTP node to focus on different types of businesses or cities.
- Expand the GPT output to include additional lead info like websites, phone numbers, or emails if the agent includes them.
- Add a webhook trigger so the flow can be run from a chatbot, external app, or button.
- Link to HubSpot or another CRM to sync the leads automatically.
- Duplicate the process to run for multiple industries in parallel.

**Final notes**

- You must create and configure your Dumpling AI agents before running this workflow.
- The Agent Keys from Dumpling AI are required in both HTTP Request nodes.
- This flow is modular and flexible, ready for deeper CRM integrations.
- The chat trigger is great for testing, but you can add a Webhook node to automate it.
This workflow helps you launch an intelligent lead gen process that combines location-targeted business discovery, AI-generated insights, and structured CRM-friendly output, all powered by Dumpling AI and OpenAI.
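As a reference for the two HTTP Request nodes, this is roughly what a call to a Dumpling AI agent could look like. It is a minimal sketch only: the endpoint URL and the request and response field names are assumptions (the x-agent-key header is the one detail taken from the description above), so check your Dumpling AI agent settings for the exact values.

```js
// Hypothetical equivalent of the "Get Local Business Data" HTTP Request node.
// Endpoint URL and body shape are placeholders; only the x-agent-key header
// comes from the workflow description.
async function askDumplingAgent(agentKey, prompt) {
  const response = await fetch('https://app.dumplingai.com/api/agents/chat', { // placeholder URL
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-agent-key': agentKey,
    },
    body: JSON.stringify({ message: prompt }), // assumed payload shape
  });
  return response.json();
}
```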
by Airtop
Define Your ICP from Customer LinkedIn Profiles

**Use case**

This automation helps marketing and sales teams define their Ideal Customer Profile (ICP) using real LinkedIn profiles of current high-fit customers. By enriching and analyzing profile data, it generates a clear ICP definition and a scoring methodology for future targeting.

**What this automation does**

This automation analyzes LinkedIn profiles of your existing customers and produces:

- A structured ICP definition
- A scoring model to evaluate future prospects
- A Google Boolean search string to find similar prospects

Input: LinkedIn profile URLs of existing high-fit customers (e.g., https://www.linkedin.com/in/amirashkenazi/)

Output: a Google Doc containing the ICP analysis and scoring methodology

**How it works**

1. Trigger: waits for a chat message containing one or more LinkedIn profile URLs.
2. AI Agent: parses and processes the URLs.
3. Airtop Data Enrichment: uses Airtop to extract structured information from each LinkedIn profile (e.g., job title, company, experience, skills).
4. Memory: maintains state between inputs for consistent analysis.
5. LLM Analysis: uses Claude 3.7 Sonnet to synthesize the enriched data into a meaningful ICP.
6. Google Docs: automatically creates a new doc with a timestamped title and appends the ICP definition.

**Setup requirements**

- An Airtop Profile connected to LinkedIn; insert the profile name in the Airtop Tool.
- Airtop API credentials. Get them free here.
- If you choose to save the profiles to Google Docs, you will need OAuth2 credentials (or just copy the ICP definition from the chat).

**Next steps**

- **Use the ICP for scoring**: feed new LinkedIn profiles through the same Airtop enrichment and use the scoring function to evaluate fit.
- **Automate target discovery**: plug the Boolean search output into LinkedIn, Google, or People Data Labs for ICP-matching lead generation.
- **Refine continuously**: repeat the workflow as your customer base grows or segments evolve.

Read more about how to Define ICP from Customer Examples.
by Automate With Marc
✉️ Telegram Email Agent with GPT + Gmail

Category: Messaging / AI Agent
Level: Beginner-friendly
Tags: Telegram, Email Automation, AI Agent, Gmail, GPT Model

Watch the step-by-step video guide here: https://www.youtube.com/watch?v=nyI40s9QOuw&t=420s&pp=0gcJCb4JAYcqIYzv

🤖 What This Workflow Does

This workflow turns your Telegram bot into a personal email assistant powered by AI. With just a message on Telegram, users can:

- Send an email via Gmail
- Automatically generate the email content using OpenAI models
- Get confirmation or responses directly in Telegram

It's like ChatGPT meets Gmail, inside your Telegram chat.

🔧 How It Works

1. Telegram Trigger – listens for incoming messages from your bot.
2. AI Agent – processes the input using an OpenAI model and converts it into structured email content (To, Subject, Body); a sketch of that structure is shown below.
3. Memory Node – stores short-term context per user (via chat ID), so the agent can hold simple conversations.
4. Gmail Node – sends the generated email using your Gmail account.
5. Telegram Node – replies to the user confirming the output or status.

🧠 Why This is Useful

Ever wanted to send an email while on the go, without typing the whole thing out in Gmail? This is a fast, intuitive, AI-powered way to:

- Dictate or draft emails from anywhere
- Create an AI-powered virtual assistant via Telegram
- Integrate n8n's Langchain Agent with real-world productivity use cases

🪜 Setup Instructions

1. Connect your Telegram bot via BotFather and add the credentials in n8n.
2. Set up your OpenAI API key (GPT-4o-mini recommended).
3. Add your Gmail OAuth credentials.
4. Activate the workflow and start messaging your bot!
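For illustration only, this is one possible shape of the structured email content the AI Agent produces before the Gmail node sends it. The To/Subject/Body fields come from the description above; the exact key names depend on how you configure the agent's output parser.

```js
// Illustrative agent output for a Telegram message like
// "Email Sam a thank-you note and confirm tomorrow's call".
// Key names are assumptions; map them to the Gmail node's To / Subject / Message fields.
const exampleAgentOutput = {
  to: 'sam@example.com',
  subject: 'Thank you and call confirmation',
  body: 'Hi Sam,\n\nThanks again for today. Confirming our call tomorrow at 10am.\n\nBest regards',
};
```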
by Abdul Mir
Company Website Chatbot Agent

**Overview**

This workflow implements a modular website AI chatbot assistant capable of handling multiple types of customer interactions autonomously. Instead of relying on a single large agent to handle all logic and tools, the system routes user queries to specialized sub-agents, each dedicated to a specific function. By using a manager-style orchestration layer, this approach prevents overloading a single AI model with excessive context, leading to cleaner routing, faster execution, and easier scaling as your automation needs grow.

**How it works**

1. Chat trigger
   - The flow is initiated when a chat message is received via the website widget.
2. Manager agent (Ultimate Website AI Assistant)
   - The central LLM-based agent parses the message and decides which specialized sub-agent to route it to. It uses an OpenAI GPT model for natural language understanding and a lightweight memory system to preserve recent context.
3. Sub-agent routing
   - calendarAgent: handles availability checks and books meetings on connected calendars.
   - RAGAgent: searches company documentation or FAQs to provide accurate responses from your internal knowledge base.
   - ticketAgent: forwards requests to human support by generating and sending support tickets to a designated email.

**Setup instructions**

1. Embed the chatbot
   - Use a custom HTML widget or script to embed the chatbot interface on your website (see the embed sketch after this list).
   - Connect the frontend to the webhook that triggers the When chat message received node.
2. Configure your OpenAI key
   - Insert your API key in the OpenAI Chat Model node.
   - Adjust the model parameters for temperature, max tokens, etc., based on how formal or creative you want the bot to be.
3. Customize sub-agents
   - calendarAgent: connect to your Google or Outlook calendar.
   - RAGAgent: link to a vector store or document database via API or native integration.
   - ticketAgent: set the destination email and format for ticket generation (e.g. via SendGrid or SMTP).
4. Deploy in production
   - Host on n8n Cloud or your self-hosted instance.
   - Monitor usage through the Executions tab and refine prompts based on user behavior.

**Benefits**

- Modular system with dedicated logic per function
- Reduces token bloat by offloading complexity to sub-agents
- Easy to scale by adding more tools (e.g. CRM, analytics)
- Fast and responsive user experience for customers on your site
- Cleaner code structure and easier debugging
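Here is a minimal embed sketch, assuming you use n8n's @n8n/chat widget package to render the website widget (check that package's docs for the current import paths and options). The webhook URL is a placeholder for your own When chat message received node.

```js
// Minimal front-end embed sketch using n8n's chat widget package.
// Bundle this with your site's build tooling; the URL below is a placeholder.
import '@n8n/chat/style.css';
import { createChat } from '@n8n/chat';

createChat({
  webhookUrl: 'https://your-n8n-instance.com/webhook/REPLACE-ME/chat', // your Chat Trigger URL
});
```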
by Billy Christi
**Who is this for?**

This workflow is perfect for:

- Companies that manage invoices through Google Drive
- Business owners who want to minimize manual data entry and maximize accuracy
- Accounting teams and finance departments seeking to automate invoice processing

**What problem is this workflow solving?**

Processing invoices manually is time-consuming, error-prone, and inconsistent. This workflow solves those issues by:

- **Automating invoice processing** from detection to data extraction to storage
- **Improving accuracy** by using AI to extract key invoice data fields reliably
- **Reducing human workload** while maintaining compliance and consistency

**What this workflow does**

This workflow creates a fully automated invoice processing system by:

1. Monitoring a Google Drive folder for new PDF invoices in real time
2. Downloading the PDF files and extracting their content using OCR technology
3. Using AI (OpenAI) to parse and extract key invoice fields such as invoice number, date, total amount, vendor name, itemized details, tax, and category
4. Validating the extracted data to ensure compliance with a structured JSON schema (a sketch of such a schema follows below)
5. Storing the structured data in Google Sheets for easy access, review, and reporting

Key features:

- AI-powered extraction handles both text-based and scanned PDF invoices
- Provides a structured, searchable invoice database in Google Sheets
- Runs as frequently as you need, ensuring timely processing

**Setup**

1. Copy the Google Sheet template here: 👉 PDF Invoice Parser – Google Sheet Template
2. Connect your Google Drive account to the Drive Trigger and File Download nodes
3. Add your OpenAI API key in the AI Parser node
4. Link the Google Sheet in the final storage node
5. Drop a test invoice PDF into the monitored Drive folder

Required credentials:

- OpenAI API key
- Google Drive credentials
- Google Sheets credentials

**How to customize this workflow to your needs**

- Modify the polling interval (default: every minute) for higher or lower frequency.
- Integrate with your accounting software by adding nodes (e.g., QuickBooks, Xero).
- Use an alternative LLM such as Gemini or Claude.
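For orientation, the validation step can check the AI output against a schema covering the fields listed above. This is a minimal sketch assuming a JSON-Schema-style definition; the property names are illustrative and must match whatever your AI Parser node is prompted to return.

```js
// Hypothetical JSON schema for the extracted invoice fields named in the description.
const invoiceSchema = {
  type: 'object',
  required: ['invoiceNumber', 'date', 'totalAmount', 'vendorName'],
  properties: {
    invoiceNumber: { type: 'string' },
    date:          { type: 'string', format: 'date' },
    totalAmount:   { type: 'number' },
    vendorName:    { type: 'string' },
    tax:           { type: 'number' },
    category:      { type: 'string' },
    items: {                                   // itemized details
      type: 'array',
      items: {
        type: 'object',
        properties: {
          description: { type: 'string' },
          quantity:    { type: 'number' },
          unitPrice:   { type: 'number' },
        },
      },
    },
  },
};
```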
by Yang
**👤 Who is this for?**

This workflow is ideal for social media managers, personal brand strategists, ghostwriters, and founders who want to post regularly on LinkedIn without spending hours writing from scratch. It's also useful for marketing agencies and assistants looking to automate consistent post creation using curated articles as source material.

**🧩 What problem does this workflow solve?**

Manually reading multiple articles, extracting key insights, and writing a clean, professional LinkedIn post is time-consuming. This workflow automates everything: pulling topics, finding related articles, summarizing them with AI, and even generating a matching image to accompany the post. It ensures faster content turnaround, more consistency, and less manual effort.

**🔁 What this workflow does**

This workflow starts manually and retrieves one topic marked as "To do" from a Google Sheet. That topic is used as a search term for Dumpling AI's search endpoint, which scrapes and returns the top three article contents related to the topic. These articles are sent to a LangChain agent powered by GPT-4o, which analyzes and summarizes the content into a LinkedIn post in a friendly, insightful tone and also generates an image prompt for the post. After generating the post and image prompt, the data is extracted using a Set node. The prompt is sent to Dumpling AI's image generation endpoint, which returns an image URL. Finally, the post text, image prompt, image URL, and status update ("created") are saved back to the original row in Google Sheets.

**🛠️ Workflow breakdown**

1. Manual Trigger – starts the automation.
2. Google Sheets (Get Topic) – finds the first row in your content pipeline sheet where the status is "To do".
3. HTTP Request (Dumpling AI Search) – uses the topic as a search query to pull three article contents via Dumpling AI's API.
4. Set LangChain GPT Model – defines GPT-4o as the LLM for the LangChain agent.
5. LangChain Agent (Summarize & Generate) – summarizes all three articles and generates a LinkedIn post plus a related image prompt.
6. Set (Extract Data) – extracts postText and imagePrompt from the LangChain agent output (a sketch of this step follows at the end of this description).
7. HTTP Request (Dumpling Image Gen) – sends imagePrompt to Dumpling AI's image generation endpoint.
8. Update Google Sheets – writes the post, image prompt, and image URL back to the sheet and changes the row status to "created".

**⚙️ Setup instructions**

1. Dumpling AI
   - Sign up at Dumpling AI.
   - Get your API key and connect it in the HTTP Request nodes (search and image endpoints).
   - Use the /search endpoint to retrieve article content.
   - Use the /generate-image endpoint to create the image.
2. Google Sheets
   - Create a spreadsheet with the columns: topic, status, postText, imagePrompt, imageURL.
   - Add sample topics and set their status to "To do".
3. LangChain (GPT-4o)
   - Connect your OpenAI credentials to n8n and make sure GPT-4o is available in your OpenAI account.
   - Use the LangChain node to process the multi-input summarization and generate a social media caption.
4. Customize the prompt (optional)
   - Adjust the Set node to tweak the input format sent to the LangChain agent.
   - Add constraints like tone, hashtags, or emojis to fit your brand style.

**🧠 How to customize this workflow**

- Change the content source (RSS feed, Notion DB, etc.) instead of Google Sheets.
- Add a scheduler node to run this automatically every morning or weekly.
- Use Airtable instead of Google Sheets for more control and filtering.
- Send the final post to LinkedIn using the Buffer or LinkedIn API.
- Add a Telegram or Slack notification when new content is ready for approval.
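As a rough guide to step 6, here is a Code-node-style equivalent of the extraction, under the assumption that the LangChain agent was prompted to return a JSON object containing postText and imagePrompt. If your agent returns plain text instead, adjust the parsing accordingly.

```js
// Illustrative equivalent of the "Set (Extract Data)" step.
// The agent output field name and JSON shape are assumptions.
const raw = $json.output;
const parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;

return [{
  json: {
    postText: parsed.postText,
    imagePrompt: parsed.imagePrompt,
  },
}];
```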
by Lukas Kunhardt
**Who is this for?**

This template is for any website owner, digital agency, or compliance officer operating within the European Union. It's designed for users who need to comply with the upcoming European Accessibility Act (EAA) but may not have deep technical or legal expertise.

**Disclaimer**

This workflow uses an npm package called "cheerio" to work with the specified URL's HTML code. Installing packages is only possible when self-hosting (a usage sketch appears at the end of this description).

**What problem is this workflow solving? / Use case**

Starting June 28, 2025, the European Accessibility Act (EAA) mandates that most websites offering products or services in the EU must be accessible and publish a formal Accessibility Statement. Manually creating this legal document is complex, requiring both a technical site analysis and knowledge of specific legal requirements. This workflow automates the generation of a compliant first draft, saving significant time and effort.

**What this workflow does**

After you input your details (like website URL and API key) in a central configuration node, this workflow automatically:

1. Scans your live website for accessibility issues using the powerful WAVE API.
2. Processes the scan results to identify the main problem areas.
3. Instructs a Google Gemini AI agent with a specialized legal prompt based on the European Accessibility Act.
4. Generates a formal Accessibility Statement in your desired language.
5. Saves the statement as an .html file and sends it to you as an email attachment.

**Setup**

This workflow is designed for a quick setup:

1. Configure all variables: click the 'CHANGE THESE: dependencies' node. This is your central control panel. Fill in all the values, including your WAVE API key, the URL to analyze, company details, and the desired output language.
2. Set up credentials: you will need to connect your Google accounts for the workflow to run.
   - Gemini: click the 'gemini 2.5 pro' node, click the gear icon (⚙️) next to the "Credential" field, and connect your Google Gemini API credentials.
   - Gmail: click the 'Send report by email' node and connect your Gmail account to allow sending the final report.
3. Activate & execute: make sure the workflow is active in the top-right corner, then click 'Execute Workflow' to run your first analysis.

**How to customize this workflow to your needs**

This template is a great starting point for any EU country. Here's how to adapt it:

- **Localize for your country (important!)**: the generated statement contains a placeholder for the "Enforcement Procedure". You must edit the prompt in the 'Accessibility Statement Generator' node to replace this placeholder with the name and link of your specific country's official enforcement body.
- **Change the AI**: swap the Google Gemini node for any other AI model, such as OpenAI or Anthropic Claude, by replacing the node and connecting it to the agent.
- **Change the trigger**: replace the 'When clicking ‘Execute workflow’' node with a Form Trigger or Webhook Trigger to run this workflow based on external inputs, for example to offer this analysis as a service to your clients.
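Since cheerio only works on self-hosted instances with external modules enabled, here is a minimal sketch of how it might be used inside a Code node to inspect the fetched HTML. The checks shown (missing alt attributes, page title) are illustrative examples, not the template's exact logic, and the input field name is an assumption.

```js
// Illustrative cheerio usage on previously fetched HTML.
// Requires self-hosting with NODE_FUNCTION_ALLOW_EXTERNAL including "cheerio".
const cheerio = require('cheerio');

const html = $json.data;                  // assumed field holding the page HTML
const $page = cheerio.load(html);

const imagesMissingAlt = $page('img')
  .filter((i, el) => !$page(el).attr('alt')) // images without an alt attribute
  .length;

return [{
  json: {
    pageTitle: $page('title').text(),
    imagesMissingAlt,
  },
}];
```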
by Zain Ali
🧾 Generate Project Summary from Meeting Transcript

**Who's it for 🤝**

- Project managers looking to automate client meeting summaries
- Client success teams needing structured deliverables from transcripts
- Agencies and consultants who want consistent, repeatable documentation

**How it works / What it does ⚙️**

1. Trigger: a manual or webhook trigger kicks off the workflow.
2. Get meeting transcript: reads the raw transcript from a specified Google Docs file.
3. Generate summary: sends the transcript plus instructions to OpenAI (gpt-4.1-mini) to produce a structured project summary.
4. Convert to HTML: transforms the LLM-generated Markdown into styled HTML.
5. Prepare request: wraps the HTML and metadata into a multipart request body (a sketch of this step follows below).
6. Create Google Doc: uploads the new "Project Summary" document into your Drive folder.

**How to set up 🛠️**

1. Credentials
   - Google Docs & Drive OAuth2 credentials
   - OpenAI API key (gpt-4.1-mini)
2. Node configuration
   - Manual Trigger / Webhook node
   - Google Docs "Get meeting transcript" node: set documentURL
   - AI Chat Model node: select gpt-4.1-mini
   - Markdown node: enable tables & emoji
   - Google Drive "CreateGoogleDoc" node: set the target folder ID
3. Paste in your IDs
   - Update documentURL to your transcript doc
   - Update google_drive_folder_id in the Set node
4. Execute
   - Click "Execute Workflow" or call it via webhook

**Requirements 📋**

- n8n
- Google OAuth2 scopes for Docs & Drive
- An OpenAI account with GPT-4.1-mini access
- A Google Drive folder to store summaries

**How to customize ✨**

- **Output format**: edit the Markdown prompt in the ChainLlm node to adjust headings or tone
- **Timeline section**: extend the LLM prompt template with your own phase table
- **Styling**: tweak the inline CSS in the Code node (Prepare_Request) for fonts or margins
- **Trigger**: swap the Manual Trigger for an HTTP/Webhook trigger to integrate with other tools
- **Language model**: switch to a different model by changing model.value in the AI node
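For orientation, the Prepare_Request step can build a multipart/related body like the one sketched here, which the Google Drive v3 upload endpoint (uploadType=multipart) accepts and converts into a native Google Doc. This is a generic sketch of that format, not the template's exact code; the input field and document name are placeholders.

```js
// Sketch of building a multipart/related body for
// POST https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart
const boundary = 'n8n-summary-boundary';
const metadata = {
  name: 'Project Summary',
  parents: ['YOUR_GOOGLE_DRIVE_FOLDER_ID'],
  mimeType: 'application/vnd.google-apps.document', // convert the HTML into a Google Doc
};

const body =
  `--${boundary}\r\n` +
  'Content-Type: application/json; charset=UTF-8\r\n\r\n' +
  `${JSON.stringify(metadata)}\r\n` +
  `--${boundary}\r\n` +
  'Content-Type: text/html\r\n\r\n' +
  `${$json.html}\r\n` +                     // assumed field carrying the styled HTML
  `--${boundary}--`;

return [{
  json: {
    body,
    contentType: `multipart/related; boundary=${boundary}`, // set as the request Content-Type
  },
}];
```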
by Amit Mehta
**How it Works**

This workflow fetches top news headlines every 10 minutes from NewsAPI, summarizes them using OpenAI's GPT-4o model, and sends a concise email digest to a list of recipients defined in a Google Spreadsheet. It's ideal for anyone who wants to stay updated with the latest news in a short, digestible format.

**🎯 Use Case**

- Professionals who want summarized daily news
- Newsletters or internal communication updates
- Teams that require contextual summaries of the latest events

**Setup Instructions**

1. Upload the spreadsheet
   - File name: Emails
   - Column: Email, with recipient addresses
2. Configure the Google Sheets nodes
   - Connect your Google account to: Email List, Send Email
3. Add API credentials
   - NewsAPI key → for fetching top headlines
   - OpenAI API key → for summarizing headlines
   - Gmail account → for sending the email digest
4. Activate the workflow
   - Once active, the workflow runs every 10 minutes via a cron trigger
   - The summarized news is sent to the list of emails in the spreadsheet

**🔁 Workflow Logic**

1. Trigger: every 10 minutes via Cron
2. Fetch news: HTTP request to NewsAPI for top headlines (a request sketch appears at the end of this description)
3. Summarize: headlines are passed to OpenAI's GPT-4o for a 5-bullet summary
4. Read recipients: a Google Sheet is used to collect email recipients
5. Send email: the summary is formatted and sent via Gmail

**🧩 Node Descriptions**

| Node Name | Description |
|-----------|-------------|
| Cron | Triggers the workflow every 10 minutes. |
| HTTP Request - NewsAPI | Fetches top news headlines using NewsAPI. |
| Set | Formats or structures raw news data before processing. |
| AI Agent | Summarizes the news content using OpenAI into 5 bullet points. |
| Email List | Reads recipient email addresses from the 'Emails' Google Spreadsheet. |
| Send Email | Sends the email digest to all recipients using Gmail. |

**🛠️ Customization Tips**

- Modify the AI prompt for tone, length, or content type
- Send summaries to Slack, Telegram, or Notion instead of Gmail
- Adjust the cron interval for more or less frequent updates
- Change the email formatting (HTML vs plain text)

**📎 Required Files**

| File Name | Purpose |
|-----------|---------|
| Emails spreadsheet | Google Sheet containing the list of email recipients |
| daily_news.json | Main n8n workflow file to automate the daily news digest |

**🧪 Testing Tips**

- Add 1–2 test email addresses to your spreadsheet
- Temporarily change the Cron node to run every minute for testing
- Check your inbox for delivery and formatting
- Inspect the execution logs for API errors or formatting issues

**🏷 Suggested Tags & Categories**

#News #OpenAI #Automation #Email #Digest #Marketing
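For reference, the NewsAPI call behind the HTTP Request node looks roughly like this. It uses NewsAPI's documented top-headlines route; the query parameters shown (country, pageSize) are illustrative choices, so adjust them to your needs.

```js
// Illustrative fetch equivalent of the "HTTP Request - NewsAPI" node.
// Supply your own NewsAPI key and adjust country/pageSize as needed.
async function fetchTopHeadlines(apiKey) {
  const url = new URL('https://newsapi.org/v2/top-headlines');
  url.searchParams.set('country', 'us');
  url.searchParams.set('pageSize', '10');
  url.searchParams.set('apiKey', apiKey);

  const response = await fetch(url);
  const data = await response.json();
  // Keep only the fields the summarizer needs.
  return data.articles.map(a => ({ title: a.title, source: a.source.name, url: a.url }));
}
```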
by Zach @BrightWayAI
Daily Email Pulse Summary: this agent summarizes a user's daily emails into a clean, actionable summary. It uses OpenAI to analyze the content and sends a formatted "Daily Pulse" email at the end of each day.

**Main use cases**

- Keep track of open loops and next steps across all email conversations
- Identify high-potential leads and flag conversations going nowhere
- Eliminate the need to manually review your inbox at day's end
- Build a smart summary layer using AI without hallucination or noise

**How it works**

This workflow can be divided into eight core nodes, each serving a distinct purpose in helping a user stay on top of their day. The result is a curated, AI-generated summary delivered to your inbox, crafted from real message content, not guesswork.

1. Schedule Trigger (trigger node – runs daily at a set time)
   - Kicks off the workflow at a specific time each day (e.g. 6:00 PM).
   - Ensures you receive your Daily Pulse consistently, without needing to run it manually.
2. Date Transformer (Function node – define today & tomorrow range)
   - Uses JavaScript to calculate the current day's date range (a sketch of this node follows the node list):
     - today: start of day (00:00:00)
     - tomorrow: start of the next day (used as a cutoff)
   - This ensures only emails from today are analyzed, keeping the summary focused and current.
3. Get All Messages (Gmail node – fetch filtered emails)
   - Pulls in all Gmail messages with internalDate between today and tomorrow.
   - Outputs structured data: from, subject, and body text of each email.
   - This forms the raw data for the daily business pulse.
4. Aggregator (Function or Item Lists node – combine message fields)
   - Aggregates each message into a readable format:
     - From: John@example.com
     - Subject: Demo Follow-up
     - Body: Let's schedule a time this week...
   - All messages are stitched together into a single combinedText string for analysis, giving the AI model full context of the day in one unified document.
5. Email Cleanup (Function node – remove noise & normalize text)
   - Cleans the combinedText blob to remove HTML tags, marketing footers (e.g., unsubscribe links), and redundant whitespace or formatting artifacts.
   - Ensures GPT gets clean, relevant message content with no distractions.
6. Agent (OpenAI node – generate structured summary)
   - Uses a system prompt to define its role as an AI Chief of Staff.
   - Uses a user prompt that instructs it to categorize messages into sections:
     - 📝 Open Loops / Pending Follow-Up
     - 🚀 Next Steps You've Committed To
     - 🧲 Leads Worth Following Up On
     - 🛑 Conversations That Aren't Leading Anywhere
     - 🧠 Strategy Notes
     - ✅ Top 3 Tasks for Tomorrow
   - Built-in guardrails ensure the model only uses real content (no hallucination), and sections with no relevant data are omitted to keep it concise.
7. HTML Formatter (Function node – wrap Markdown in email-ready HTML)
   - Wraps the GPT-generated Markdown summary in a simple <html><body> structure.
   - Applies white-space: pre-wrap to preserve formatting and spacing.
   - The result is a clean, readable email that renders well across inboxes (especially Gmail).
8. Email Send (Email node – deliver the final pulse)
   - Sends the formatted summary to your email inbox.
   - Subject: Your Daily Business Pulse – {{today}}
   - HTML body: uses the formatted output from the previous step.

Final output: a well-organized, scannable summary of the day's communication, focused on what matters.
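Here is a minimal sketch of the Date Transformer node described in step 2, assuming a standard n8n Function/Code node. The output field names (today, tomorrow) follow the description; whether the Gmail node consumes them as ISO strings or epoch values depends on your filter setup.

```js
// Sketch of the "Date Transformer" node: compute today's date range for filtering.
const today = new Date();
today.setHours(0, 0, 0, 0);               // start of today (00:00:00)

const tomorrow = new Date(today);
tomorrow.setDate(tomorrow.getDate() + 1); // start of the next day, used as the cutoff

return [{
  json: {
    today: today.toISOString(),
    tomorrow: tomorrow.toISOString(),
  },
}];
```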
**Why It Works**

- Automates the end-of-day review ritual without effort
- Prioritizes follow-ups, action items, and time-sensitive leads
- Filters out noise and low-value conversations
- Leverages GPT without risk of hallucination or irrelevant output
- Delivers clarity, helping you focus on tomorrow's most important tasks