by David Harvey
🚨 Emergency Alerts Reporter to iMessage

This n8n template fetches real-time emergency incident alerts from PulsePoint for a specific agency and delivers them directly to any phone number via iMessage using the Blooio API. It's designed to keep users informed with clear, AI-summarized reports of emergency activity near them, automatically and reliably.

Use cases are powerful and immediate:
- Get real-time fire/medical alerts for your neighborhood.
- Use it for family, local safety groups, or even emergency response teams.
- Convert technical dispatch data into readable updates with emojis and plain English.

🧠 Good to Know
- You'll need a PulsePoint agency ID (see instructions below).
- iMessages are sent using Blooio's API (which supports Apple's iMessage and fallback RCS/SMS).
- Messages are AI-enhanced using OpenAI's o4-mini model to summarize incident reports with context and urgency.
- The workflow runs every hour, but this can be configured to match your needs.
- Each report is sent only once, thanks to persistent tracking of seen incident IDs in workflow static memory.

⚙️ How it Works
1. Trigger: a Schedule Trigger (every hour) or manual start kicks off the flow.
2. Get Alerts: a Code node fetches the latest PulsePoint incidents for a specified agency and decrypts the data.
3. Filter New Incidents: previously seen incident IDs are stored to prevent duplicate alerts (see the Code node sketch at the end of this section).
4. Merge Incidents: all new incident details are merged into a single payload.
5. Condition Check: if there are no new incidents, nothing is sent.
6. AI Summary: the incident data is passed to an AI agent for summarization with human-friendly emojis and formatting.
7. Send Message: the final summary is sent via Blooio's API to your phone using iMessage.

📝 How to Use
1. Get your PulsePoint agency ID: visit https://web.pulsepoint.org, find your agency by location or name, then inspect the API call or browser network log to get the agencyid (e.g. 19100 from a URL like ?agencyid=19100).
2. Set up Blooio for messaging: sign up at https://blooio.com, retrieve your Bearer API key from your account (pricing details are available on their pricing page), and add your key to the HTTP Request node as a Bearer token.
3. OpenAI API: create or use an existing OpenAI account, use the o4-mini model for efficient, readable summaries, and get your OpenAI API key from https://platform.openai.com/account/api-keys.
4. Add your phone number: replace +1111112222 with your actual number (international format). You can also modify the message content or prepend special tags/emojis.

✅ Requirements
- **PulsePoint agency ID** – see usage instructions above
- **OpenAI API key** – Get API Key
- **Blooio account & Bearer token** – Get Started
- **Phone number** for iMessage delivery

🔧 Customizing This Workflow
- **Change the schedule** to get alerts more or less frequently.
- **Add filters** to only get alerts for specific incident types (e.g. fires, traffic accidents).
- **Send to groups**: expand to send alerts to multiple recipients, or use Slack instead of iMessage.
- **Use different AI prompts** to get detailed, humorous, or abbreviated alerts depending on your audience.

With just a few credentials and a phone number, you'll have real-time incident alerts with human-friendly summaries at your fingertips. 🛠️ Stay informed. Stay safe.
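The dedup step relies on n8n's workflow static data. A minimal sketch of that Code node (Run Once for All Items), assuming each incident item carries an `ID` field (the real PulsePoint field name may differ); note that static data only persists for active, non-manual executions:

```js
// Keep only incidents we have not alerted on before, using workflow static memory.
// The field name "ID" is an assumption; match it to the actual PulsePoint payload.
const staticData = $getWorkflowStaticData('global');
staticData.seenIncidentIds = staticData.seenIncidentIds ?? [];

const newItems = $input.all().filter(
  item => !staticData.seenIncidentIds.includes(item.json.ID)
);

// Remember the IDs so the next scheduled run skips them.
staticData.seenIncidentIds.push(...newItems.map(item => item.json.ID));

return newItems;
```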
by Gavin
This workflow makes an HTTPS request to ConnectWise Manage through their REST API. It pulls all tickets in the "New" status (or whichever status you like) and notifies your dispatch team or personnel via Microsoft Teams whenever a new ticket comes in. Video explanation: https://youtu.be/yaSVCybSWbM
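As a rough sketch of the underlying call the HTTP Request node makes, the ConnectWise Manage REST API is commonly queried as below. The site URL, API version path, and credential names are assumptions; verify them against your own Manage instance before relying on this.

```js
// Hypothetical standalone version of the ticket query (Node 18+, global fetch).
// Manage uses Basic auth built from companyId+publicKey:privateKey plus a registered clientId header.
async function getNewTickets() {
  const site = 'https://na.myconnectwise.net';   // assumption: your Manage site
  const companyId = 'mycompany';                 // assumption
  const publicKey = 'PUBLIC_KEY';                // assumption
  const privateKey = 'PRIVATE_KEY';              // assumption
  const clientId = 'YOUR_CLIENT_ID';             // assumption: integration clientId

  const auth = Buffer.from(`${companyId}+${publicKey}:${privateKey}`).toString('base64');
  const conditions = encodeURIComponent('status/name="New"');
  const url = `${site}/v4_6_release/apis/3.0/service/tickets?conditions=${conditions}`;

  const res = await fetch(url, {
    headers: { Authorization: `Basic ${auth}`, clientId, Accept: 'application/json' },
  });
  return res.json();   // array of ticket objects to forward to Microsoft Teams
}

getNewTickets().then(tickets => console.log(`${tickets.length} new tickets`));
```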
by Aitor | 1Node
Template Description

This template creates a powerful Retrieval-Augmented Generation (RAG) AI agent workflow in n8n. It monitors a specified Google Drive folder for new PDF files, extracts their content, generates vector embeddings using Cohere, and stores these embeddings in a Milvus vector database. It then enables a RAG agent that retrieves relevant information from the Milvus database based on user queries and generates responses with OpenAI, enhanced by the retrieved context.

Functionality

The workflow automates the process of ingesting documents into a vector database for use with a RAG system.
- Watch New Files: triggers when a new file (specifically targeting PDFs) is added to a designated Google Drive folder.
- Download New: downloads the newly added file from Google Drive.
- Extract from File: extracts text content from the downloaded PDF file.
- Default Data Loader / Set Chunks: processes the extracted text, splitting it into manageable chunks for embedding (see the chunking sketch after this section).
- Embeddings Cohere: generates vector embeddings for each text chunk using the Cohere API.
- Insert into Milvus: inserts the generated vector embeddings and associated metadata into a Milvus vector database.
- When chat message received: adapt the trigger tool to fit your needs.
- RAG Agent: orchestrates the RAG process.
- Retrieve from Milvus: queries the Milvus database with the user's chat query to find the most relevant chunks.
- Memory: manages conversation history for the RAG agent to optimize cost and response speed.
- OpenAI / Cohere embeddings: uses GPT-4o for text generation.

Requirements

To use this template, you will need:
- An n8n instance (cloud or self-hosted).
- Access to a Google Drive account to monitor a folder.
- A Milvus instance, or access to a Milvus cloud service such as Zilliz.
- A Cohere API key for generating embeddings.
- An OpenAI API key for the RAG agent's text generation.

Usage

1. Set up the required credentials in n8n for Google Drive, Milvus, Cohere, and OpenAI.
2. Configure the "Watch New Files" node to point to the Google Drive folder you want to monitor for PDFs.
3. Ensure your Milvus instance is running and the target cluster is set up correctly.
4. Activate the workflow.
5. Add PDF files to the monitored Google Drive folder. The workflow will automatically process them and insert their embeddings into Milvus.
6. Interact with the RAG agent. The agent will use the data in Milvus to provide context-aware answers.

Benefits

- Automates document ingestion for RAG applications.
- Leverages Milvus for high-performance vector storage and search.
- Uses Cohere for generating high-quality text embeddings.
- Enables building a context-aware AI agent using your own documents.

Suggested improvements

- **Support for more file types**: extend the "Watch New Files" node and subsequent extraction steps to handle various document types (e.g. .docx, .txt, .csv, web pages) in addition to PDFs.
- **Error handling and notifications**: implement robust error handling for each step of the workflow (e.g. failed downloads, extraction errors, Milvus insertion failures) and add notification mechanisms (e.g. email, Slack) to alert the user.

Get in touch with us

Contact us at https://1node.ai
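The template relies on the built-in Default Data Loader and text-splitter nodes for chunking, but conceptually the "Set Chunks" step boils down to something like this Code node sketch (Run Once for All Items). The chunk size and overlap values are assumptions; tune them to your documents and embedding model.

```js
// Split extracted PDF text into overlapping chunks before embedding.
// Assumes the extracted text arrives as $json.text on the first item.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
  }
  return chunks;
}

const text = $input.first().json.text ?? '';
// Each chunk becomes one item for the embedding and Milvus insert steps.
return chunkText(text).map((chunk, i) => ({ json: { chunkIndex: i, chunk } }));
```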
by Antonio Trento
🤖 Auto-Publish SEO Blog Posts for Jekyll with AI + GitHub + Social Sharing

This workflow automates the entire process of publishing SEO-optimized blog posts (e.g. recipes) to a Jekyll site hosted on GitHub. It uses LangChain + OpenAI to write long-form Markdown articles and commits them directly to your repository. Optional steps include posting to X (Twitter) and LinkedIn.

🔧 Features
- 📅 Scheduled execution: runs daily or manually.
- 📥 CSV input: reads from a local CSV (/data/recipes.csv) with fields like title, description, keywords, and publish date.
- ✍️ AI copywriting: uses a GPT-4 model to generate a professional, structured blog post in Markdown, optimized for SEO.
- 🧪 Custom prompting: includes a detailed, structured prompt tailored for Italian food blogging and SEO rules.
- 🗂 Markdown generation: automatically builds the Jekyll front matter, generates a clean SEO-friendly slug, and saves the post to _posts/YYYY-MM-DD-title.md (see the sketch after this section).
- ✅ Commits to GitHub: auto-commits new posts using the GitHub node.
- 🧹 Post-processing: removes processed lines from the source CSV.
- 📣 Social media sharing (optional): can post the title to X (Twitter) and LinkedIn.

📁 CSV Format Example

titolo;prompt_descrizione;keyword_principale;keyword_secondarie;data_pubblicazione
Pasta alla Norma;Classic Sicilian eggplant pasta...;pasta alla norma;melanzane, ricotta salata;2025-07-04T08:00:00
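A minimal sketch of the Markdown-generation step, assuming the CSV column names shown above; the front-matter keys are illustrative and should be adapted to your Jekyll theme.

```js
// Build an SEO-friendly slug, the Jekyll front matter, and the _posts/YYYY-MM-DD-title.md
// path from one CSV row plus the article body produced by the AI node.
function slugify(title) {
  return title
    .toLowerCase()
    .normalize('NFD').replace(/[\u0300-\u036f]/g, '')   // strip accents
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/(^-|-$)/g, '');
}

function buildPost(row, body) {
  const date = row.data_pubblicazione.slice(0, 10);      // YYYY-MM-DD
  const slug = slugify(row.titolo);
  const frontMatter = [
    '---',
    `title: "${row.titolo}"`,
    `date: ${row.data_pubblicazione}`,
    `tags: [${row.keyword_principale}]`,                 // assumed front-matter key
    '---',
    '',
  ].join('\n');
  return { path: `_posts/${date}-${slug}.md`, content: frontMatter + body };
}

// Example usage with the sample row from the CSV above.
const post = buildPost(
  { titolo: 'Pasta alla Norma', data_pubblicazione: '2025-07-04T08:00:00', keyword_principale: 'pasta alla norma' },
  '# Pasta alla Norma\n\nArticle body generated by the AI node...'
);
console.log(post.path); // _posts/2025-07-04-pasta-alla-norma.md
```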
by Agent Circle
This n8n template demonstrates how to use AI to generate custom images from scratch: fully automated, prompt-driven, and ready to deploy at scale.

Use cases are many: marketing visuals, character art, digital posters, storyboards, or even daily image generation for your personal purposes.

How It Works
- The flow is triggered by a chat message in n8n or via Telegram.
- The default image size is 1080 x 1920 pixels. To use a different size, update the values in the "Fields - Set Values" node before triggering the workflow (see the sketch after this section).
- The input is parsed into a clean, structured prompt using a multi-step transformation process.
- The AI Agent sends the final prompt to Google Gemini's image model for generation (you can also integrate with OpenAI or other chat models).
- The raw image data returned by the AI Agent is run through a few Code nodes to prepare it for preview and download.
- An HTTP node then fetches the result so you can preview the image.
- You can send it back to the chat in n8n or Telegram, or save it locally to disk.

How To Use
1. Download the workflow package.
2. Import the package into your n8n interface.
3. Set up the credentials in the following nodes for tool access: "Telegram Trigger"; "AI Agent - Create Image From Prompt"; "Telegram Response" or "Save Image To Disk" (as needed).
4. Activate the "Telegram Response" or "Save Image To Disk" node to specify where you want to save your image.
5. Open the chat interface (via n8n or Telegram).
6. Type your image prompt or detailed description and send it.
7. Wait a few seconds for the process to run and finish.
8. Check the result in your chosen saving location.

Requirements
- Google Gemini account with image generation access.
- Telegram bot access and chat setup (optional).
- Connection to local storage (optional).

How To Customize
- The default image size is 1080 x 1920 pixels and the default image model is "flux". You can customize both values in the "Fields - Set Values" node. Supported image model options include "flux", "kontext", "turbo", and "gptimage".
- In the "AI Agent - Create Image From Prompt" node, you can also change the AI chat model. By default it uses Google Gemini, but you can easily replace it with OpenAI ChatGPT, Microsoft AI Copilot, or any other compatible provider.

Need Help?
Join our community on different platforms for support, inspiration and tips from others.
- Website: https://www.agentcircle.ai/
- Etsy: https://www.etsy.com/shop/AgentCircle
- Gumroad: http://agentcircle.gumroad.com/
- Discord Global: https://discord.gg/d8SkCzKwnP
- FB Page Global: https://www.facebook.com/agentcircle/
- FB Group Global: https://www.facebook.com/groups/aiagentcircle/
- X: https://x.com/agent_circle
- YouTube: https://www.youtube.com/@agentcircle
- LinkedIn: https://www.linkedin.com/company/agentcircle
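For reference, the defaults behind the "Fields - Set Values" node roughly correspond to the shape below; the key names here are assumptions, so check the imported workflow for the exact ones.

```js
// Hypothetical default values for the "Fields - Set Values" node.
return [{
  json: {
    width: 1080,      // default width in pixels
    height: 1920,     // default height in pixels
    model: 'flux',    // supported options per the description: flux, kontext, turbo, gptimage
  },
}];
```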
by Baptiste Fort
Who is it for?

This workflow is perfect for marketers, sales teams, agencies, and local businesses who want to save time by automating lead generation from Google Maps. It's ideal for real estate agencies, restaurants, service providers, and any local niche that needs a clean database of fresh contacts, including emails, websites, and phone numbers.

✅ Prerequisites

Before starting, make sure you have:
- **Apify account** → to scrape Google Maps data
- **OpenAI API key** → for GPT-4 email extraction
- **Airtable account & base** → for structured lead storage
- **Gmail account with OAuth** → to send personalized outreach emails

Your Airtable base should have these columns:

| Title | Street | Website | Phone Number | Email | URL |
|-------|--------|---------|--------------|-------|-----|
| Paris Real Estate Agency | 10 Rue de Rivoli, Paris | https://agency.fr | +33 1 23 45 67 | contact@agency.fr | maps.google.com/... |

🏡 Example Use Case

To keep things clear, we'll use real estate agencies in Paris as an example. But you can replace this with restaurants, plumbers, lawyers, or even hamster trainers (you never know).

🔄 How the workflow works

1. Scrape Google Maps leads with Apify
2. Clean & structure the data (name, phone, website)
3. Visit each website & extract emails with GPT-4
4. Save all leads into Airtable
5. Automatically send a personalized email via Gmail

This works for any industry, keyword, or location.

Step 1 – Scraping Google Maps with Apify

Start simply:
1. Open your n8n workflow and choose the trigger: "Execute Workflow" (manual trigger).
2. Add an HTTP Request node (POST method).
3. Head over to the Apify Google Maps Extractor and fill in the fields according to your needs:
   - Keyword: e.g. "real estate agency" (or restaurant, plumber...)
   - Location: "Paris, France"
   - Number of results: 50 (or more)
   - Optional: filters (with/without a website, by categories...)
4. Click Run to test the scraper.
5. Click API → select the API endpoints tab → choose "Run Actor synchronously and get dataset items".
6. Copy the URL, go back to n8n, and paste it into your HTTP Request node (URL field). Then enable:
   - Body Content Type → JSON
   - Specify Body Using JSON
7. Go back to Apify, click the JSON tab, copy everything, and paste it into the JSON field of your HTTP Request.

If you now run your workflow, you'll get a nice structured table filled with Google Maps data. Pretty magical already, but we're just getting started!

Step 2 – Cleaning Things Up (Edit Fields)

Raw data is cool, but messy. Add an Edit Fields node next, using Manual Mapping mode. Here's what you keep (copy-paste friendly):

- Title → {{ $json.title }}
- Address → {{ $json.address }}
- Website → {{ $json.website }}
- Phone → {{ $json.phone }}
- URL → {{ $json.url }}

Now you have a clean, readable table ready to use.

Step 3 – Handling Each Contact Individually (Loop Over Items)

Next, we process each contact one by one. Add the Loop Over Items node and set Batch Size to 20 or more, depending on your needs. This node is simple but crucial to avoid traffic jams in the automation.

Step 4 – Isolating Websites (Edit Fields again)

Add another Edit Fields node (Manual Mapping). This time, keep just:

- Website → {{ $json.website }}

We've isolated the websites for the next step: scraping them one by one.

Step 5 – Scraping Each Website (HTTP Request)

Now we send our little robot to visit each website automatically.
Add another HTTP Request node:
- Method: GET
- URL: {{ $json.website }} (from the previous node)

This returns the raw HTML content of each site. Yes, it's ugly, but we won't handle it manually. We'll leave the next step to AI!

Step 6 – Extracting Emails with ChatGPT

We now use OpenAI (Message a Model) to politely ask GPT to extract only relevant emails. Configure as follows:
- Model: GPT-4.1-mini or higher
- Operation: Message a Model
- Simplify Output: ON

Prompt to copy-paste:

> Look at this website content and extract only the email I can contact this business. In your output, provide only the email and nothing else. Ideally, this email should be of the business owner, so if you have 2 or more options, try for the most authoritative one. If you don't find any email, output 'Null'. Exemplary output of yours: name@examplewebsite.com
> {{ $json.data }}

ChatGPT will kindly return the perfect email address (or 'Null' if none is found). If you want a deterministic fallback without an LLM call, see the regex sketch after Step 8.

Step 7 – Neatly Store Everything in Airtable

Almost done! Add an Airtable → Create Record node and fill your Airtable fields like this:

| Airtable Field | Content | n8n Variable |
|----------------|---------|--------------|
| Title | Business name | {{ $('Edit Fields').item.json.Title }} |
| Street | Full address | {{ $('Edit Fields').item.json.Address }} |
| Website | Website URL | {{ $('Edit Fields').item.json.Website }} |
| Phone Number | Phone number | {{ $('Edit Fields').item.json.Phone }} |
| Email | Email retrieved by the AI agent | {{ $json.message.content }} |
| URL | Google Maps link | {{ $('Edit Fields').item.json.URL }} |

Now you have a tidy Airtable database filled with fresh leads, ready for action.

Step 8 – Automated Email via Gmail (The Final Touch)

To finalize the workflow, add a Gmail → Send Email node after your Airtable node. Configure it using the data pulled directly from your Airtable base (from the previous step):

- Recipient (To): the email stored in Airtable ({{ $json.fields.Email }}).
- Subject: use the company name stored in Airtable ({{ $json.fields.Title }}) to personalize the subject line.
- Body: you can include several fields directly from Airtable, such as:
  - Company name: {{ $json.fields.Title }}
  - Website URL: {{ $json.fields.Website }}
  - Phone number: {{ $json.fields["Phone Number"] }}
  - Link to the Google Maps listing: {{ $json.fields.URL }}

All of this data is available in Airtable because it was automatically inserted in the previous step (Step 7). This ensures that each email sent is fully personalized and based on clear, reliable, and structured information.
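If you want a deterministic fallback (or sanity check) alongside the GPT extraction in Step 6, a small Code node with a regex can pull addresses straight from the HTML. This sketch is not part of the original tutorial; it assumes the HTML arrives in $json.data and the node runs once per item.

```js
// Pull email addresses out of the raw HTML with a regex (Run Once for Each Item).
const html = $json.data ?? '';
const matches = html.match(/[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g) || [];

// Deduplicate and drop obvious non-contact matches (e.g. image names like logo@2x.png).
const emails = [...new Set(matches)].filter(e => !/\.(png|jpg|jpeg|gif|webp)$/i.test(e));

return { json: { email: emails[0] ?? 'Null', allEmails: emails } };
```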
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically performs weekly keyword research and competitor analysis to discover trending keywords in your industry. It saves you time by eliminating the need to manually research keywords and provides a constantly updated database of trending search terms and opportunities.

Overview

This workflow automatically researches trending keywords for any specified topic or industry using AI-powered search capabilities. It runs weekly to gather fresh keyword data, analyzes search trends, and saves the results to Google Sheets for easy access and analysis.

Tools Used
- **n8n**: the automation platform that orchestrates the workflow
- **Bright Data**: for accessing search engines and keyword data sources
- **OpenAI**: AI agent for intelligent keyword research and analysis
- **Google Sheets**: for storing and organizing keyword research data

How to Install
1. Import the workflow: download the .json file and import it into your n8n instance.
2. Configure Bright Data: add your Bright Data credentials to the MCP Client node.
3. Set up OpenAI: configure your OpenAI API credentials.
4. Configure Google Sheets: connect your Google Sheets account and set up your keyword tracking spreadsheet.
5. Customize: define your target topics or competitors for keyword research.

Use Cases
- **SEO teams**: discover new keyword opportunities and track trending search terms
- **Content marketing**: find trending topics for content creation and strategy
- **PPC teams**: identify new keywords for paid advertising campaigns
- **Competitive analysis**: monitor competitor keyword strategies and market trends

Connect with Me
- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**: https://get.brightdata.com/1tndi4600b25 (using this link supports my free workflows with a small commission)

#n8n #automation #keywordresearch #seo #brightdata #webscraping #competitoranalysis #contentmarketing #n8nworkflow #workflow #nocode #seoresearch #keywordmonitoring #searchtrends #digitalmarketing #keywordtracking #contentautomation #marketresearch #trendingkeywords #keywordanalysis #seoautomation #keyworddiscovery #searchmarketing #keyworddata #contentplanning #seotools #keywordscraping #searchinsights #markettrends #keywordstrategy
by Sk developer
Automation Flow: Image to Image Using GPT Sora

This flow automates the process of generating images from a provided prompt and reference image via the Sora GPT Image API from RapidAPI. The generated images are stored in Google Drive, and details are logged in Google Sheets.

Nodes Overview

1. On Form Submission
   - **Type**: n8n-nodes-base.formTrigger
   - **Description**: triggers when a user submits the form containing the prompt and image URL, ensuring the form fields are filled in and ready for processing.
   - Form fields: Prompt (a text description of the desired image), Image URL (the URL of the reference image), Webhook ID (unique identifier for the form submission).

2. HTTP Request to Sora GPT Image API
   - **Type**: n8n-nodes-base.httpRequest
   - **Description**: sends the prompt and image URL to the Sora GPT Image API to generate a new image based on the provided inputs.
   - API endpoint: Sora GPT Image API (via RapidAPI), method POST.
   - Body parameters: Prompt (user-provided text), Image URL (the reference image URL), Width & Height (image size is set to 1024x1024).

3. Code (Base64 Conversion)
   - **Type**: n8n-nodes-base.code
   - **Description**: processes the base64-encoded image data returned from the API, decoding and formatting the image for upload to Google Drive (see the sketch after this section).
   - Output: converts the base64 string into a binary JPEG file.

4. Upload Image to Google Drive
   - **Type**: n8n-nodes-base.googleDrive
   - **Description**: uploads the generated image to Google Drive, storing it in a designated folder.
   - Authentication: Google Service Account. The file name is set dynamically from the previous node.

5. Log Details to Google Sheets
   - **Type**: n8n-nodes-base.googleSheets
   - **Description**: logs the Prompt, Generated Image, and Generation Date into a Google Sheets document for tracking and auditing.
   - Columns mapped: Prompt (the user's input text), Image (the name of the generated image file), Generated Date (date and time of image generation).

Flow Summary

1. User submits the form: triggered when the form with the prompt and image URL is submitted.
2. Image generation: the data is sent to the Sora GPT Image API from RapidAPI to generate the image.
3. Image processing: the generated image (base64 format) is decoded and saved as a file.
4. Google Drive upload: the image is uploaded to Google Drive for storage.
5. Google Sheets logging: all relevant details (Prompt, Image, Date) are saved in Google Sheets.

Benefits
- **Automated image creation**: quickly generate images with AI from a simple prompt and reference image via RapidAPI.
- **Efficient workflow**: the entire process from form submission to image generation and storage is automated, saving time and reducing manual work.
- **Centralized storage**: generated images are stored in Google Drive, ensuring easy access and organization.
- **Audit trail**: the details of each generated image are logged in Google Sheets, making it easy to track, review, and manage past creations.
- **Scalable and reusable**: can be adapted to multiple use cases, such as creative design, marketing materials, or social media content generation.

Problems Solved
- **Manual image editing**: eliminates the need for manual image manipulation and creation, allowing automatic generation based on user inputs.
- **Disorganized file storage**: with automatic uploads to Google Drive, images are stored in a centralized and organized manner.
- **Lack of record-keeping**: logging image generation details in Google Sheets keeps a record of past creations, improving tracking and management.
- **Time-consuming processes**: the automation drastically reduces the time spent on manual tasks, letting users focus on other aspects of their work or creative processes.

This flow simplifies the creation of AI-generated images from user inputs, leveraging the Sora GPT Image API via RapidAPI, making it a powerful tool for creative, design, and marketing purposes.
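The "Code (Base64 Conversion)" node described above boils down to something like this sketch. The response field name is an assumption, and it assumes this.helpers.prepareBinaryData is available in your n8n version (recent Code nodes expose it); otherwise construct the binary object manually.

```js
// Turn the base64 string returned by the Sora GPT Image API into a binary JPEG item
// that the Google Drive node can upload (Run Once for All Items).
const json = $input.first().json;
const base64 = json.data?.base64 ?? json.image ?? '';   // assumption: adjust to the real response field
const fileName = `sora-image-${Date.now()}.jpg`;

return [{
  json: { fileName },
  binary: {
    // prepareBinaryData sets the mime type and file name on the binary payload.
    data: await this.helpers.prepareBinaryData(Buffer.from(base64, 'base64'), fileName, 'image/jpeg'),
  },
}];
```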
by Encoresky
This workflow automates the process of handling conversation transcriptions and distributing key information across your organization. Here's what it does:

1. Trigger: the workflow is initiated via a webhook that receives a transcription (e.g. from a call or meeting).
2. Summarization & extraction: using AI, the transcription is summarized and key information is extracted, such as action items, departments involved, and client details.
3. Department notifications: the relevant summarized information is automatically routed to specific departments via email based on content classification (a routing sketch appears after this section).
4. CRM sync: the summarized version is saved to the associated contact or deal in HubSpot for future reference and visibility.
5. Multi-channel alerts: the summary is also sent via WhatsApp and Slack to keep internal teams instantly informed, regardless of platform.

Use case: ideal for sales, customer service, or operations teams who manage client conversations and want to ensure seamless cross-departmental communication, documentation, and follow-up.

Apps used:
- Webhook (trigger)
- OpenAI (or other AI/NLP for summarization)
- HubSpot
- Email
- Slack
- WhatsApp (via Twilio or a third-party provider)
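As an illustration of the department-routing idea (the workflow itself may use a Switch node for this), a hypothetical Code node sketch that maps the AI-extracted department label to an email recipient could look like the following; the department names and addresses are placeholders.

```js
// Map the department label produced by the summarization step to a recipient
// (Run Once for Each Item). All names and addresses below are placeholders.
const routes = {
  sales: 'sales@example.com',
  support: 'support@example.com',
  operations: 'ops@example.com',
};

const department = ($json.department ?? '').toLowerCase();

return {
  json: {
    ...$json,
    recipient: routes[department] ?? 'dispatch@example.com',   // fallback inbox
  },
};
```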
by Automate With Marc
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

📊 Grok-4 with Perplexity Daily Portfolio Advisor

This workflow acts as your personal AI stock analyst, powered by Grok-4, Perplexity, and Google Sheets, to give you daily, tailored market insights based on your actual investment portfolio.

Every morning, the workflow:
- Fetches your current stock holdings from a connected Google Sheet
- Uses Perplexity to search and summarize the latest stock market news relevant to your portfolio
- Leverages Grok-4 to analyze how those news events impact your stocks
- Provides buy/sell/hold recommendations and AI-powered investment insights
- Emails you a clean, easy-to-read summary, perfect for busy investors

To watch the step-by-step tutorial build of this workflow, check out: https://youtu.be/OXzsh-Ba-8Y

Google Sheet template: https://docs.google.com/spreadsheets/d/1074dZk-vhwz6LML5zoiwHdxg89Z8u_mgl7wwzqf3A98/edit?usp=sharing

🧠 What's Inside:
- AI Agent: Grok-4 Stock Analyst (via xAI Grok-4)
- Tool integrations:
  - 📄 Google Sheets (portfolio input)
  - 🔍 Perplexity (news search)
  - ✍️ GPT Summary Agent (readable output)
  - 📧 Gmail (automated delivery)
- Schedule: runs daily at 10:00 AM by default (customizable)

💡 Use Cases:
- Retail investors seeking personalized news summaries
- Portfolio managers automating market analysis
- Fintech startups prototyping intelligent investment advisors
- Anyone wanting actionable stock updates without reading 10+ articles
by Rajneesh Gupta
Malicious File Detection & Threat Summary Automation using Wazuh + VirusTotal + n8n

This workflow helps SOC teams automate the detection and reporting of potentially malicious files using Wazuh alerts, VirusTotal hash validation, and integrated summary/report generation. It's ideal for analysts who want instant context and communication for file-based threats, without writing a single line of code.

What It Does

When Wazuh detects a suspicious file:
1. **Ingests the Wazuh alert**: a Webhook node captures incoming alerts containing file hashes (SHA256/MD5).
2. **Parses IOCs**: extracts relevant indicators (file hash, filename, etc.).
3. **Validates with VirusTotal**: automatically checks the file hash reputation using VirusTotal's threat intelligence API (see the lookup sketch after this section).
4. **Generates a human-readable summary**: outputs a structured file report.
5. **Routes alerts based on threat level**: sends a formatted email with the file summary using Gmail. If the file is deemed malicious or suspicious, it creates a file-related incident ticket and sends an instant Slack alert to notify the team.

Tech Stack Used
- **Wazuh**: endpoint alerting
- **VirusTotal API**: real-time hash validation
- **n8n**: orchestrates, parses, enriches, and communicates
- **Slack, Gmail, incident tool**: notification and action

Ideal Use Case

This template is designed for security teams looking to automate file threat triage, IOC validation, and alert-to-ticket escalation, with zero human delay.

Included Nodes
- **Webhook** (Wazuh)
- **Function** (IOC extraction and summary)
- **HTTP Request** (VirusTotal)
- **If / Switch** (threat level check)
- **Gmail**, **Slack**, incident creation

Tips
- Add your VirusTotal API key in the HTTP node.
- Customize the incident creation node to fit your ticketing platform (Jira, ServiceNow, etc.).
- Add logic to enrich the file alert further using WHOIS or sandbox reports if needed.
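For context, the VirusTotal step is a simple v3 "files" lookup on the hash. A standalone sketch follows (Node 18+); the threat-level thresholds are assumptions you can tune in the If/Switch node.

```js
// Look up a file hash against the VirusTotal v3 API and derive a simple verdict.
async function checkHash(hash, apiKey) {
  const res = await fetch(`https://www.virustotal.com/api/v3/files/${hash}`, {
    headers: { 'x-apikey': apiKey },
  });
  if (res.status === 404) return { hash, verdict: 'unknown' };   // hash never seen by VirusTotal

  const body = await res.json();
  const stats = body.data.attributes.last_analysis_stats;         // { malicious, suspicious, harmless, undetected, ... }
  const verdict = stats.malicious > 0 ? 'malicious'
                : stats.suspicious > 0 ? 'suspicious'
                : 'clean';
  return { hash, verdict, stats };
}

// Example: feed the verdict into the If/Switch node that drives Slack alerts and ticket creation.
checkHash('PUT_FILE_HASH_HERE', process.env.VT_API_KEY).then(console.log);
```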
by Swot.AI
This workflow automates document summarization directly from Google Drive, processes the content using Mistral AI, and delivers a clean, styled summary via Gmail. It's ideal for professionals who need quick insights from lengthy documents without manually reading through them.

✅ Key Features:
- Google Drive integration: fetches a file (PDF/DOCX) from your Drive.
- AI summarization: uses Mistral AI to extract key points efficiently.
- Styled email output: delivers a formatted, easy-to-read summary to your inbox with a timestamp.
- Error handling: built to skip corrupted files or missing credentials.

🔧 Nodes Breakdown:
1️⃣ Manual Trigger: starts the workflow manually for easy testing.
2️⃣ Google Drive node: downloads a specified file from Google Drive (supports PDF/DOCX).
3️⃣ Mistral Cloud Chat Model node: connects to Mistral AI for summarization.
4️⃣ Summarization Chain node: breaks the file into chunks, processes the content, and generates a concise summary.
5️⃣ Gmail node: sends the styled summary directly to the user's inbox, with custom formatting and the current time in the Lagos timezone (see the timestamp sketch after this section).

Extra Features:
- Dynamic time formatting: supports the Lagos timezone (easily adjustable).
- HTML styling: clean email formatting with headers, icons, and line breaks for clarity.
- Custom email sender name: branded output (e.g. "Swot.AI").
- Future expansion: can extend to WhatsApp or Slack with minor tweaks.

Use Cases:
- Legal teams summarizing contracts.
- Content creators extracting highlights from research papers.
- Business analysts getting insights from reports on the go.

Customization Tips:
- Change the timezone (Africa/Lagos) to match your preferred location.
- Add error-handling nodes for missing files or API failures.
- Swap Mistral AI with OpenAI for different summarization behavior.
- Change the "Send To" address (the email that receives the summarized text) to your preferred address.
- Change the "Sender Name" from Swot.AI to your preferred sender name.

Why Use This Workflow?

This automation saves hours of manual reading. It's perfect for personal productivity, legal analysis, content creation, or business reporting. With clean formatting and a professional email summary, your team will get instant insights in seconds!

I can make this much better and build others. If interested: Swot.ai25@gmail.com
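The Lagos-timezone timestamp mentioned above can be produced with a small Code node (Run Once for Each Item); the exact expression used inside the template's Gmail node may differ, so treat this as a sketch.

```js
// Format the current time in the Africa/Lagos timezone for the email body.
// Swap the timeZone value to match your preferred location.
const formatted = new Intl.DateTimeFormat('en-GB', {
  dateStyle: 'full',
  timeStyle: 'short',
  timeZone: 'Africa/Lagos',
}).format(new Date());

return { json: { summaryTimestamp: formatted } };   // e.g. "Friday 4 July 2025 at 09:30"
```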