by Seb
An AI inbox labelling manager with reasoning, built around the ChatGPT inbox manager in n8n. Super simple yet highly effective automation.

How it works:
• Monitors Gmail inbox → triggers workflow when a new unread email is received.
• Fetches email details including subject, body, and sender information.
• Sends email content to OpenAI → uses AI to determine the most relevant label based on predefined rules.
• AI uses a think tool → justifies why it selected that specific label.
• Retrieves Gmail label IDs → matches the AI's choice to the correct Gmail label for that email (see the sketch after this description).
• Adds the chosen label (e.g., Positive reply, Priority email) to the email automatically → optionally marks it as read/starred.
• Continues monitoring → every new email is processed automatically, keeping the inbox organized.

Set Up Steps
• Connect Gmail account to the Gmail node.
• Create OpenAI account & API key → go to OpenAI and sign up or log in. Once logged in, click Dashboard in the top menu. On the left sidebar, find API Keys and click Create new key. Copy this key — you'll need it for n8n. Check your account balance → in the top-right, click your profile icon → Your Profile → Billing. Make sure your account has funds (e.g., $5 USD is enough for testing) so the API requests can run. Do these steps through this link: https://platform.openai.com/
• Retrieve Gmail label IDs → use the Gmail "get labels" node to fetch IDs for all labels you want the AI to use.
• Use OpenAI (ChatGPT) node → set up system and user prompts with rules describing each label, and include the label IDs (important).
• Test the workflow → send example emails, check labeling, and refine the AI prompt or label rules if needed.
• Tip: Pin trigger data for testing (Gmail node "Watch Incoming Emails") → re-use the same email record to speed up testing without sending multiple emails.

About this automation
Handles multiple labels → adding new labels only requires updating the AI prompt (no extra nodes).
Scales easily → works for any number of Gmail labels without cluttering the workflow.

For a complete rundown on how to set this up, watch my YouTube tutorial linked below.
See full video tutorial here: https://www.youtube.com/watch?v=7nda4drHcWw
My LinkedIn: https://www.linkedin.com/in/seb-gardner-5b439a260/
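The label-matching step mentioned above (mapping the label name the AI returns to a Gmail label ID) could look roughly like the Code node sketch below. Node and field names here are illustrative assumptions, not the exact ones shipped with this template.

```javascript
// Minimal sketch of the label-matching step (node and field names are assumptions).
// Assumes the AI node returned { label: "Positive reply" } and the Gmail "get labels"
// node produced items like { json: { id: "Label_123", name: "Positive reply" } }.
const aiChoice = $('OpenAI').first().json.label;            // label name chosen by the AI
const gmailLabels = $('Get Labels').all().map(i => i.json); // all labels fetched from Gmail

const match = gmailLabels.find(
  l => l.name.toLowerCase() === String(aiChoice).toLowerCase()
);

if (!match) {
  throw new Error(`No Gmail label found for AI choice: ${aiChoice}`);
}

// Pass the label ID on so the Gmail "add label" node can apply it.
return [{ json: { labelId: match.id, labelName: match.name } }];
```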
by Tomohiro Goto
🧠 How it works
This workflow automatically translates messages between Japanese and English inside Slack — perfect for mixed-language teams.

In our real-world use case, our 8-person team includes Arif, an English-speaking teammate from Indonesia, while the rest mainly speak Japanese. Before using this workflow, our daily chat often included:
"Can someone translate this for Arif?"
"I don't understand what Arif wrote — can someone summarize it in Japanese?"
"I need to post this announcement in both languages, but I don't know the English phrasing."

This workflow fixes that communication gap without forcing anyone to change how they talk. Built with n8n and Google Gemini 2.5 Flash, it automatically detects the input language, translates to the opposite one, and posts the result in the same thread, keeping every channel clear and contextual.

⚙️ Features
Unified translation system with three Slack triggers:
1️⃣ Slash Command /trans – bilingual posts for announcements.
2️⃣ Mention Trigger @trans – real-time thread translation for team discussions.
3️⃣ Reaction 🇯🇵 / 🇺🇸 – personal translation view for readers.
Automatic JA ↔ EN detection and translation via Gemini 2.5 Flash
3-second instant ACK to satisfy Slack's response timeout
Shared Gemini translation core across all three modes
Clean thread replies using chat.postMessage

💼 Use Cases
**Global teams** – Keep Japanese and English speakers in sync without switching tools.
**Project coordination** – Use mentions for mixed-language stand-ups and updates.
**Announcements** – Auto-generate bilingual company posts with /trans.
**Cross-cultural communication** – Help one-language teammates follow along instantly.

💡 Perfect for
**Global companies** with bilingual or multilingual teams
**Startups** collaborating across Japan and Southeast Asia
**Developers** exploring Slack + Gemini + n8n automation patterns

🧩 Notes
You can force a specific translation direction (JA→EN or EN→JA) inside the Code node (see the sketch below).
Adjust the system prompt to match tone ("business-polite", "casual", etc.).
Add glossary replacements for consistent terminology.
If the bot doesn't respond, ensure your app includes the following scopes: app_mentions:read, chat:write, reactions:read, channels:history, and groups:history.
Always export your workflow with credentials OFF before sharing or publishing.

✨ Powered by Google Gemini 2.5 Flash × n8n × Slack API
A complete multilingual layer for your workspace — all in one workflow. 🌍
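For the Notes item about forcing a translation direction, a minimal sketch of what that Code node logic might look like is shown here. The variable names and the detection heuristic are assumptions, not the template's exact implementation.

```javascript
// Sketch of the language-detection / direction-override logic mentioned in the Notes.
// Variable and field names are illustrative.
const text = $input.first().json.text || '';

// Set to 'JA_TO_EN' or 'EN_TO_JA' to force a direction; leave null for auto-detect.
const FORCE_DIRECTION = null;

// Rough auto-detection: treat the message as Japanese if it contains
// Hiragana, Katakana, or CJK ideographs.
const looksJapanese = /[\u3040-\u30ff\u4e00-\u9faf]/.test(text);
const direction = FORCE_DIRECTION ?? (looksJapanese ? 'JA_TO_EN' : 'EN_TO_JA');

return [{
  json: {
    text,
    direction,
    targetLanguage: direction === 'JA_TO_EN' ? 'English' : 'Japanese',
  },
}];
```

The downstream Gemini prompt can then read targetLanguage instead of re-detecting the language itself, which keeps all three trigger paths consistent.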
by s3110
Title
Japanese Document Translation Quality Checker with DeepL & Google Drive to Slack

Who's it for
Localization teams, QA reviewers, and operations leads who need a fast, objective signal on Japanese document translation quality without manual checks.

What it does / How it works
This workflow watches a Google Drive folder for new Japanese documents, exports the text, translates JA→EN with DeepL, then back-translates EN→JA. It compares the original and back-translation to estimate a quality score and summarizes differences (a sketch of one such comparison follows below). A Google Docs report is generated, and a Slack message posts the score, difference count, and report link—so teams can triage quickly.

How to set up
Connect credentials for Google Drive, DeepL, and Slack.
Point the Google Drive Trigger to your "incoming JP docs" folder.
In the Workflow Configuration (Set) node, fill targetFolder (report destination) and slackChannel.
Run once, then activate and drop a test doc.

Requirements
n8n (Cloud or self-hosted), Google Drive, DeepL, and Slack credentials; two Drive folders (incoming, reports).

How to customize the workflow
Tune the diff logic (character → token/line level, normalization rules), adjust score thresholds and Slack formatting, or add reviewer routing/Jira creation for low-score cases. Always avoid hardcoded secrets; keep user-editable variables in the Set node.
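As a rough illustration of the quality-score idea, here is one possible character-level comparison in a Code node. The template's actual diff logic may differ (and can be tuned as described above); the input field names are assumptions.

```javascript
// Illustrative character-level similarity between the original Japanese text and
// the back-translated Japanese text. Treat this as a starting point for tuning
// the diff logic, not the template's exact implementation.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                     // deletion
        dp[i][j - 1] + 1,                                     // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1),   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

const original = $input.first().json.originalText || '';
const backTranslated = $input.first().json.backTranslatedText || '';

const distance = levenshtein(original, backTranslated);
const score = Math.round(
  (1 - distance / Math.max(original.length, backTranslated.length, 1)) * 100
);

return [{ json: { score, distance } }];
```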
by Davide
This workflow automates the process of transforming user-submitted photos (even bad selfies) into professional CV and LinkedIn headshots using the Nano Banana Pro AI model.

(Example images: from selfie to CV/LinkedIn headshot.)

Key Advantages
1. ✅ Fully Automated Professional Image Enhancement
From receiving a photo to delivering a polished LinkedIn-style headshot, the workflow requires zero manual intervention.
2. ✅ Seamless Telegram Integration
Users can simply send a picture via Telegram—no need to log into dashboards or upload images manually.
3. ✅ Secure Access Control
Only the authorized Telegram user can trigger the workflow, preventing unauthorized usage.
4. ✅ Reliable API Handling with Auto-Polling
The workflow includes a robust status-checking mechanism that:
Waits for the Fal.ai model to finish
Automatically retries until the result is ready
Minimizes the chance of failures or partial results
5. ✅ Flexible Input Options
You can run the workflow either:
Via Telegram
Or manually by setting the image URL if no FTP space is available
This makes it usable in multiple environments.
6. ✅ Dual Storage Output (Google Drive + FTP)
Processed images are automatically stored in:
Google Drive (organized and timestamped)
FTP (ideal for websites, CDN delivery, or automated systems)
7. ✅ Clean and Professional Output
Thanks to detailed prompt engineering, the workflow consistently produces:
Realistic headshots
Studio-style lighting
Clean backgrounds
Professional attire adjustments
Perfect for LinkedIn, CVs, or corporate profiles.
8. ✅ Modular and Easy to Customize
Each step is isolated and can be modified:
Change the prompt
Replace the storage destination
Add extra validation
Modify resolution or output formats

How It Works
The workflow supports two input methods:
Telegram Trigger Path: Users can send photos via Telegram, which are then processed through FTP upload and transformed into professional headshots.
Manual Trigger Path: Users can manually trigger the workflow with an image URL, bypassing the Telegram/FTP steps for direct processing.
The core process involves:
Receiving an input image (from Telegram or manual URL)
Sending the image to Fal.ai's Nano Banana Pro API with specific prompts for professional headshot transformation
Polling the API for completion status
Downloading the generated image and uploading it to both Google Drive and FTP storage
Using a conditional check to ensure processing is complete before downloading results

Set Up Steps
Authorization Setup:
In the "Sanitaze" node, replace the placeholder with your actual Telegram user ID
Configure the Fal.ai API key in the "Create Image" node (Header Auth: Authorization: Key YOURAPIKEY)
Set up Google Drive and FTP credentials in their respective nodes
Storage Configuration:
In the "Set FTP params" node, configure:
ftp_path: Your server directory path (e.g., /public_html/images/)
base_url: Corresponding base URL (e.g., https://website.com/images/)
Configure the Google Drive folder ID in the "Upload Image" node
Input Method Selection:
For Telegram usage: Ensure the Telegram bot is properly configured
For manual usage: Set the image URL in the "Fix Image Url" node or use the manual trigger
API Endpoints:
Ensure all Fal.ai API endpoints are correctly configured in the HTTP Request nodes for creating images, checking status, and retrieving results
File Naming:
Generated files use timestamp-based naming: yyyyLLddHHmmss-filename.ext
Output format is set to PNG with 1K resolution

The workflow handles the complete pipeline from image submission through AI processing to storage distribution, with proper error handling and status checking throughout.
Need help customizing? Contact me for consulting and support or add me on LinkedIn.
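For reference, the timestamp-based naming scheme (yyyyLLddHHmmss-filename.ext) could be produced in a Code node roughly as below. The workflow itself may build this with an n8n expression instead, and the input field name is an assumption.

```javascript
// Illustrative sketch of the yyyyLLddHHmmss-filename.ext naming scheme.
const originalName = $input.first().json.fileName || 'headshot.png'; // assumed field name

const now = new Date();
const pad = n => String(n).padStart(2, '0');
const stamp =
  now.getFullYear() +
  pad(now.getMonth() + 1) +  // "LL" in Luxon notation = zero-padded month
  pad(now.getDate()) +
  pad(now.getHours()) +
  pad(now.getMinutes()) +
  pad(now.getSeconds());

// e.g. 20240614153045-headshot.png
return [{ json: { uploadName: `${stamp}-${originalName}` } }];
```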
by Jason Krol
Using the power and ease of Telegram, send a simple text or audio message to a bot with a request to add a new Task to your Notion Tasks database.

How it works
ChatGPT is used to transcribe the audio or text message, parse it, and determine the title to add as a new Notion Task. You can optionally include a "do date" as well, and ChatGPT will include that when creating the task. Once complete you will receive a simple confirmation message back.

Minimal Setup Required
Just follow n8n's instructions on how to connect to Telegram and create your own chatBot, provide the chatID in the 2 Telegram nodes, and you're finished! A few optional settings include tweaking the ChatGPT system prompt (unnecessary) and the timezone for your Notion Task(s).
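As a hedged illustration only, the ChatGPT parsing step could be asked to return a structured result along these lines before the Notion node creates the task; the actual prompt and field names in this template may differ.

```javascript
// Illustrative only: a prompt and the structured output the parsing step could
// produce. Field names are assumptions, not the template's exact schema.
const systemPrompt = `You receive a short task request (transcribed voice or text).
Return JSON with "title" (short task name) and "doDate" (ISO date or null).
Resolve relative dates like "tomorrow" using the configured timezone.`;

const exampleOutput = {
  title: 'Book dentist appointment',
  doDate: '2024-06-14', // null when no date is mentioned
};

return [{ json: { systemPrompt, exampleOutput } }];
```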
by clearcue.ai
Who's it for
This workflow is for marketers, founders, and content strategists who want to identify business opportunities by analyzing Reddit discussions. It's ideal for B2B, SaaS, and tech professionals looking for fresh LinkedIn post ideas or trend insights.

How it works / What it does
This workflow automatically:
Fetches Reddit posts & comments based on a selected subreddit and keyword.
Extracts pain points & insights using OpenAI (ChatGPT) to identify key frustrations and trends.
Generates LinkedIn post ideas with headlines, hooks, and CTAs tailored for professional audiences.
Saves all results into Google Sheets for easy tracking, editing, and sharing.
It uses AI to turn unstructured Reddit conversations into actionable content marketing opportunities.

How to set up
Clone this workflow in your n8n instance.
Configure credentials:
Reddit OAuth2 (for fetching posts & comments)
OpenAI API key (no hardcoding—use credentials in n8n)
Google Sheets OAuth2 (for output)
Run the workflow or trigger it using the built-in Form Trigger (provide subreddit & keyword).
Check the generated Google Sheet for analyzed insights and post suggestions.

Requirements
n8n (self-hosted or cloud)
Reddit account with API credentials
OpenAI API key (GPT-4o recommended)
Google Sheets account

How to customize the workflow
Change the AI prompt to adjust tone or depth of insights.
Add filtering logic to target posts with higher engagement.
Modify the Google Sheets output schema to include custom fields (an example row is sketched below).
Extend it with Slack/Email notifications to instantly share top insights.
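As an illustration only, one analyzed row written to Google Sheets might look like the sketch below; the column names are assumptions and should be adapted to your own sheet's schema.

```javascript
// Illustrative example of one output row. Column names are assumptions.
const exampleRow = {
  subreddit: 'saas',
  keyword: 'onboarding',
  postTitle: 'Why do so many tools make onboarding painful?',
  painPoint: 'Users abandon trials because setup takes too long',
  linkedinHook: 'Your product probably loses most trial users in the first 10 minutes.',
  linkedinCta: 'What is the one onboarding step you would cut tomorrow?',
};

return [{ json: exampleRow }];
```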
by Pixcels Themes
Who's it for
This template is designed for podcasters, researchers, educators, product teams, and support teams who work with audio content and want to turn it into searchable knowledge. It is especially useful for users who need automated transcription, structured summaries, and conversational access to audio data.

What it does / How it works
This workflow starts with a public form where users upload an audio file. The audio is sent to AssemblyAI for speech-to-text processing, including speaker labels and bullet-point summarization. Once transcription is complete, the full text is converted into a document, split into chunks, and embedded using Google Gemini. The embeddings are stored in a Pinecone vector database along with metadata, making the content retrievable for future use. In parallel, the workflow logs uploaded file information into Google Sheets for tracking.

A separate chat trigger allows users to ask questions about the uploaded audio files. An AI agent retrieves relevant context from Pinecone and responds using Gemini, enabling conversational search over audio transcripts.

Requirements
AssemblyAI API credentials
Google Gemini (PaLM) API credentials
Pinecone API credentials
Google Sheets OAuth2 credentials
A Pinecone index for storing audio embeddings

How to set up
Connect AssemblyAI, Gemini, Pinecone, and Google Sheets credentials in n8n.
Configure the Pinecone index for storing transcripts.
Verify the Google Sheet has columns for file name and status.
Test by uploading an audio file through the form.
Enable the workflow for continuous use.

How to customize the workflow
Change summary style or transcript options in AssemblyAI
Adjust chunk size and overlap for better retrieval (see the sketch below)
Add email or Slack notifications after processing
Extend the chatbot to support multiple knowledge bases
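To illustrate what the chunk size and overlap parameters mean, here is a minimal chunking sketch. The workflow itself most likely uses n8n's built-in text splitter node before Pinecone; this is only to show how the two values interact.

```javascript
// Illustrative chunking with overlap; field name "text" is an assumption.
const transcript = $input.first().json.text || '';

const CHUNK_SIZE = 1000; // characters per chunk
const OVERLAP = 200;     // characters repeated between consecutive chunks

const chunks = [];
for (let start = 0; start < transcript.length; start += CHUNK_SIZE - OVERLAP) {
  chunks.push(transcript.slice(start, start + CHUNK_SIZE));
}

// One item per chunk, ready for embedding and upsert into Pinecone.
return chunks.map((chunk, index) => ({ json: { index, chunk } }));
```

Larger chunks keep more context per embedding but return coarser matches; more overlap reduces the chance that an answer is split across two chunks at the cost of extra storage.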
by Yehor EGMS
🎙️ n8n Workflow: Voice Message Transcription with Access Control

This n8n workflow enables automated transcription of voice messages in Telegram groups with built-in access control and intelligent fallback mechanisms. It's designed for teams that need to convert audio messages to text while maintaining security and handling various audio formats.

📌 Section 1: Trigger & Access Control

⚡ Receive Message (Telegram Trigger)
Purpose: Captures incoming messages from users in your Telegram group.
How it works: When a user sends a message (voice, audio, or text), the workflow is triggered and the sender's information is captured.
Benefit: Serves as the entry point for the entire transcription pipeline.

🔐 Sender Verification
Purpose: Validates whether the sender has permission to use the transcription service.
Logic:
Check sender against authorized users list
If authorized → Proceed to next step
If not authorized → Send "Access denied" message and stop workflow
Benefit: Prevents unauthorized users from consuming AI credits and accessing the service.

📌 Section 2: Message Type Detection

🎵 Audio/Voice Recognition
Purpose: Identifies the type of incoming message and audio format.
Why it's needed: Telegram handles different audio types with different statuses:
Voice notes (voice messages)
Audio files (standard audio attachments)
Text messages (no audio content)
Process:
Check if message contains audio/voice content
If no audio file detected → Send "No audio file found" message
If audio detected → Assign file ID and proceed to format detection

🧩 File Type Determination (IF Node)
Purpose: Identifies the specific audio format for proper processing.
Supported formats:
OGG (Telegram voice messages)
MPEG/MP3
MP4/M4A
Other audio formats
Logic:
If format recognized → Proceed to transcription
If format not recognized → Send "File format not recognized" message
Benefit: Ensures compatibility with transcription services by validating file types upfront.

📌 Section 3: Primary Transcription (OpenAI)

📥 File Download
Purpose: Downloads the audio file from Telegram for processing.

🤖 OpenAI Transcription
Purpose: Transcribes audio to text using OpenAI's Whisper API.
Why OpenAI: High-quality transcription with cost-effective pricing.
Process:
Send downloaded file to OpenAI transcription API
Simultaneously send notification: "Transcription started"
If successful → Assign transcribed text to variable and proceed
If error occurs → Trigger fallback mechanism
Benefit: Fast, accurate transcription with multi-language support.

📌 Section 4: Fallback Transcription (Gemini)

🛟 Gemini Backup Transcription
Purpose: Provides a safety net if OpenAI transcription fails.
Process:
Receives file only if OpenAI node returns an error
Downloads and processes the same audio file
Sends to Google Gemini for transcription
Assigns transcribed text to the same text variable
Benefit: Ensures high reliability—if one service fails, the other takes over automatically.

📌 Section 5: Message Length Handling

📏 Text Length Check (IF Node)
Purpose: Determines if the transcribed text exceeds Telegram's character limit.
Logic:
If text ≤ 4000 characters → Send directly to Telegram
If text > 4000 characters → Split into chunks
Why: Telegram has a 4,000-character limit per message.

✂️ Text Splitting (Code Node)
Purpose: Breaks long transcriptions into 4,000-character segments.
Process:
Receives text longer than 4,000 characters
Splits text into chunks of ≤4,000 characters
Maintains readability by avoiding mid-word breaks
Outputs array of text chunks
(A sketch of this Code node appears at the end of this description.)

📌 Section 6: Response Delivery

💬 Send Transcription (Telegram Node)
Purpose: Delivers the transcribed text back to the Telegram group.
Behavior:
**Short messages:** Sent as a single message
**Long messages:** Sent as multiple sequential messages
Benefit: Users receive complete transcriptions regardless of length, ensuring no content is lost.

📊 Workflow Overview Table

| Section | Node Name | Purpose |
|---------|-----------|---------|
| 1. Trigger | Receive Message | Captures incoming Telegram messages |
| 2. Access Control | Sender Verification | Validates user permissions |
| 3. Detection | Audio/Voice Recognition | Identifies message type and audio format |
| 4. Validation | File Type Check | Verifies supported audio formats |
| 5. Download | File Download | Retrieves audio file from Telegram |
| 6. Primary AI | OpenAI Transcription | Main transcription service |
| 7. Fallback AI | Gemini Transcription | Backup transcription service |
| 8. Processing | Text Length Check | Determines if splitting is needed |
| 9. Splitting | Code Node | Breaks long text into chunks |
| 10. Response | Send to Telegram | Delivers transcribed text |

🎯 Key Benefits
🔐 Secure access control: Only authorized users can trigger transcriptions
💰 Cost management: Prevents unauthorized credit consumption
🎵 Multi-format support: Handles various Telegram audio types
🛡️ High reliability: Dual-AI fallback ensures transcription success
📱 Telegram-optimized: Automatically handles message length limits
🌍 Multi-language: Both AI services support numerous languages
⚡ Real-time notifications: Users receive status updates during processing
🔄 Automatic chunking: Long transcriptions are intelligently split
🧠 Smart routing: Files are processed through the optimal path
📊 Complete delivery: No content loss regardless of transcription length

🚀 Use Cases
**Team meetings:** Transcribe voice notes from team discussions
**Client communications:** Convert client voice messages to searchable text
**Documentation:** Create text records of verbal communications
**Accessibility:** Make audio content accessible to all team members
**Multi-language teams:** Leverage AI transcription for various languages
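The text-splitting Code node described in Section 5 could be implemented roughly as follows. This is a sketch of the described behavior (chunks of at most 4,000 characters, avoiding mid-word breaks), not necessarily the exact code shipped with the template.

```javascript
// Split a long transcription into ≤4,000-character chunks, preferring to cut at
// whitespace so words are not broken. Field name "transcription" is an assumption.
const text = $input.first().json.transcription || '';
const LIMIT = 4000;

const chunks = [];
let remaining = text;
while (remaining.length > LIMIT) {
  // Find the last whitespace at or before the limit; fall back to a hard cut.
  let cut = remaining.lastIndexOf(' ', LIMIT);
  if (cut <= 0) cut = LIMIT;
  chunks.push(remaining.slice(0, cut));
  remaining = remaining.slice(cut).trimStart();
}
if (remaining.length) chunks.push(remaining);

// One item per chunk so the Telegram node sends them as sequential messages.
return chunks.map(chunk => ({ json: { chunk } }));
```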
by spencer owen
YNAB Super Budget
Ever wish that YNAB was just a little smarter when auto-categorizing your transactions? Now you can supercharge your YNAB budget with ChatGPT! No more manual categorization.

Setup
Get a YNAB API Key.
Get your YNAB Budget ID & Account ID (they are part of the URL): https://app.ynab.com/BUDGETID/accounts/ACCOUNTID

Additional information
Every transaction that this workflow modifies will be tagged with n8n and colored yellow. You can easily review all changes by selecting just that tag.

Customization
By default it pulls transactions from the last 30 days.
This workflow will post a message in a Discord channel showing which transactions it modified and what categories it chose. Discord notifications are optional.

Considerations
YNAB allows 200 API calls per hour. If you have more than 200 uncategorized transactions, consider reducing the previous_days value (see the sketch below for how previous_days maps to the API's date filter).
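As a sketch of the previous_days consideration, the value could be converted into the date filter a YNAB transactions request accepts (a since_date in YYYY-MM-DD form). Variable names here are assumptions about how this workflow stores its settings.

```javascript
// Illustrative: turn previous_days into a since_date for the YNAB transactions request.
// The field name "previous_days" is assumed from the description above.
const previousDays = $input.first().json.previous_days ?? 30;

const since = new Date();
since.setDate(since.getDate() - previousDays);
const sinceDate = since.toISOString().slice(0, 10); // YYYY-MM-DD

// Fewer days = fewer uncategorized transactions = fewer API calls per run.
return [{ json: { sinceDate } }];
```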
by Keith Uy
What it's for:
This is a base template for anyone trying to develop a Slack bot AI Agent. This base allows for multiple inputs (voice, picture, video, and text inputs) to be processed by an AI model of their choosing to get a user started. From here, the user may connect any tools that they see fit to the AI Agent for their n8n workflows.

NOTE: This build is specifically for integrating a Slack bot into a CHAT channel. If you want to allow the Slack bot to be integrated into the whole workspace, you'll need to adjust some of the nodes and bot parameters.

How it works:
Input: Slack message mentioning a bot in a chat channel
n8n Processing: Switch node determines the type:
Voice Message
Picture Message
Video Message
Text Message
(Currently uses OpenAI and Gemini to analyze voice/photo/video content, but feel free to change these nodes with other models)
AI Agent Processing: LLM of your choosing examines the message and, based on the system prompt, generates an output
Output: AI output is sent back in a Slack message

How to use:
1. Create your Slack bot and generate access token
This will be the longest part of the guide, but feel free to search YouTube for "How to install Slack AI agent" or something similar in case it's hard to follow.
Sign in to the Slack website then go to: https://api.slack.com/apps/
Click "Create App" (top right corner)
Choose "From Scratch"
Enter the desired name of the app (bot) and desired workspace
Go to the "OAuth and Permissions" tab on the left side of the webpage
Scroll to "Bot Token Scopes" and add permissions:
app_mentions:read
channels:history
channels:join
channels:read
chat:write
files:read
links:read
links:write
(Feel free to add other permissions here. These are just the ones that will be needed for the automation to work)
Next, go to "Event Subscriptions" and paste your n8n webhook URL (find the webhook URL by clicking on the Slack trigger node; there should be a dropdown for the webhook URL at the very top)
Go back to the "OAuth & Permissions" tab and install your bot to the Slack workspace (there should be a green button under "Bot User OAuth Token"). Remember where this token is for later, because you'll need it to create the n8n credentials.
Add the bot to your channel by going to your channel, then typing "@[your bot name]". There should be a message from Slack to add the bot to the channel.
Congrats for following along, you've added the bot to your channel!
2. Create Credentials in n8n
Open the Slack trigger node
Click create credential
Paste the access token (if you followed the steps above, it'll be under "OAuth & Permissions" → copy the "Bot User OAuth Token" and paste it into the n8n credential)
Save
3. Add Bot Token to HTTP Request nodes
Open the HTTP Request nodes (nodes under the "Download" note)
Scroll down and paste your bot access token under "Header Parameters". There should be a placeholder "[Your bot access token goes here]".
NOTE: Replace everything, including the square brackets.
Do NOT remove "Bearer". Only replace the placeholder.
The finalized Authorization value should be: "Bearer " + your bot access token, NOT your bot access token only. (See the example at the end of this description.)
4. Change ALL Slack nodes to your Slack Workspace and Channel
Open the nodes, change the workspace to your workspace
Change the channel to your channel
Do this for all nodes
5. Create LLM access token
(Different per LLM, but search your LLM + "API" in Google; you will have to create an account with the LLM platform)
Buy credits to use the LLM API
Generate an access token
Paste the token in the LLM node
Choose your model

Requirements:
Slack Bot Access Token
Google Gemini Access Token (for picture and video messages)
OpenAI Access Token (for voice messages)
LLM Access Token (your preference for the AI Agent)

Customizing this workflow:
To personalize the AI output, adjust the system prompt (give context or directions on the AI's role)
Add tools to the AI agent to give it more utility besides a personalized LLM (examples: calendars, databases, etc.)
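To make step 3 concrete, the finished Authorization header on the download HTTP Request nodes should follow the pattern below. This is a small illustrative snippet with an obviously fake token value.

```javascript
// Illustrative only: what the Header Parameter on the download HTTP Request nodes
// should look like once the placeholder is replaced.
const headers = {
  // Correct: keep the word "Bearer", one space, then your token (no square brackets).
  Authorization: 'Bearer xoxb-1234567890-abcdefghijklmnop',
};

// Common mistakes:
//   Authorization: 'xoxb-1234567890-abcdefghijklmnop'          // missing "Bearer "
//   Authorization: 'Bearer [Your bot access token goes here]'  // brackets left in place

return [{ json: headers }];
```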
by Omer Fayyaz
This workflow automates news aggregation and summarization by fetching relevant articles from Gnews.io and using AI to create concise, professional summaries delivered via Slack.

What Makes This Different:
**Real-Time News Aggregation** - Fetches current news articles from Gnews.io API based on user-specified topics
**AI-Powered Summarization** - Uses GPT-4.1 to intelligently select and summarize the most relevant articles
**Professional Formatting** - Generates clean, readable summaries with proper dates and article links
**Form-Based Input** - Simple web form interface for topic specification
**Automated Delivery** - Sends summarized news directly to Slack for immediate consumption
**Intelligent Filtering** - AI selects the top 15 most relevant articles from search results

Key Benefits of Automated News Summarization:
**Time Efficiency** - Transforms hours of news reading into minutes of focused summaries
**Comprehensive Coverage** - AI ensures all important financial and business developments are captured
**Professional Quality** - Generates publication-ready summaries with proper formatting
**Real-Time Updates** - Always delivers the latest news on any topic
**Centralized Access** - All news summaries delivered to one Slack channel
**Customizable Topics** - Search for news on any subject matter

Who's it for
This template is designed for business professionals, financial analysts, content creators, journalists, and anyone who needs to stay updated on specific topics without spending hours reading through multiple news sources. It's perfect for professionals who want to stay informed about industry developments, market trends, or any specific subject matter while maintaining productivity.

How it works / What it does
This workflow creates an automated news summarization system that transforms topic searches into professional news summaries. The system:
Receives topic input through a simple web form interface
Fetches news articles from Gnews.io API based on the specified topic
Maps article data to prepare for AI processing (see the sketch at the end of this description)
Uses AI to select the 15 most relevant articles related to financial advancements, tools, research, and applications
Generates professional summaries with clear, readable language and proper formatting
Includes article links and current date for complete context
Delivers summaries via Slack notification for immediate review

Key Innovation: Intelligent News Curation - Unlike basic news aggregators, this system uses AI to intelligently filter and summarize only the most relevant articles, saving time while ensuring comprehensive coverage of important developments.

How to set up
1. Configure Form Trigger
Set up n8n form trigger with "topic" field (required)
Configure form title as "News Search"
Test form submission functionality
Ensure proper data flow to subsequent nodes
2. Configure Gnews.io API
**Get your API key**: Sign up at gnews.io and obtain your API key from the dashboard
**Add API key to workflow**: In the "Get GNews articles" HTTP Request node, replace "ADD YOUR API HERE" with your actual Gnews.io API key
**Example configuration**: { "q": "{{ $json.topic }}", "lang": "en", "apikey": "your-actual-api-key-here" }
**Configure search parameters**: Ensure language is set to "en" for English articles
**Test API connectivity**: Run a test execution to verify news articles are returned correctly
3. Configure OpenAI API
Set up OpenAI API credentials in n8n
Ensure proper API access and quota limits
Configure the GPT-4.1 Model node for AI summarization
Test AI model connectivity and response quality
4. Configure Slack Integration
Set up Slack API credentials in n8n
Configure Slack channel ID for news delivery
Set up proper message formatting for news summaries
Test Slack notification delivery
5. Test the Complete Workflow
Submit test form with sample topic (e.g., "artificial intelligence")
Verify Gnews.io returns relevant articles
Check that AI generates appropriate news summaries
Confirm Slack notification contains formatted news summary

Requirements
**n8n instance** with form trigger and HTTP request capabilities
**OpenAI API** access for AI-powered news summarization
**Gnews.io API** credentials for news article fetching
**Slack workspace** with API access for news delivery
**Active internet connection** for real-time API interactions

How to customize the workflow
Modify News Search Parameters
Adjust the number of articles to summarize (currently set to 15)
Add more search depth options or date ranges
Implement language filtering for different regions
Add news source filtering or preferences
Enhance AI Capabilities
Customize AI prompts for specific industries or niches
Add support for multiple languages
Implement different summary styles (brief, detailed, bullet points)
Add content quality scoring and relevance filtering
Expand News Sources
Integrate with additional news APIs (NewsAPI, Bing News, etc.)
Add support for RSS feed integration
Implement trending topic detection
Add competitor news monitoring
Improve News Delivery
Add email notifications alongside Slack
Implement news scheduling capabilities
Add news approval workflows
Implement news performance tracking
Business Features
Add news analytics and engagement metrics
Implement A/B testing for different summary formats
Add news calendar integration
Implement team collaboration features for news sharing

Key Features
**Real-time news aggregation** - Fetches current news articles from Gnews.io API
**AI-powered summarization** - Uses GPT-4.1 to intelligently select and summarize relevant articles
**Professional formatting** - Generates clean, readable summaries with proper dates and links
**Form-based input** - Simple interface for topic specification
**Automated workflow** - End-to-end automation from topic input to news delivery
**Intelligent filtering** - AI selects the most relevant articles from search results
**Slack integration** - Centralized delivery of news summaries
**Scalable news processing** - Handles multiple topic searches efficiently

Technical Architecture Highlights
AI-Powered News Summarization
**OpenAI GPT-4.1 integration** - Advanced language model for intelligent news summarization
**Content filtering** - AI selects the 15 most relevant articles from search results
**Professional formatting** - Generates clean, readable summaries with proper structure
**Quality consistency** - Maintains professional tone and formatting standards
News API Integration
**Gnews.io API** - Comprehensive news search with article extraction
**Real-time data** - Access to current, relevant news articles
**Content mapping** - Efficiently processes article data for AI analysis
**Search optimization** - Efficient query construction for better news results
Form-Based Input System
**n8n form trigger** - Simple, user-friendly input interface for topic specification
**Data validation** - Ensures required topic field is properly filled
**Parameter extraction** - Converts form data to search parameters
**Error handling** - Graceful handling of incomplete or invalid inputs
News Delivery System
**Slack integration** - Professional news summary delivery
**Formatted output** - Clean, readable summaries with dates and article links
**Centralized access** - All news summaries delivered to one location
**Real-time delivery** - Immediate notification of news summaries

Use Cases
**Financial analysts** needing to stay updated on market developments and industry news
**Business professionals** requiring daily news summaries on specific topics
**Content creators** needing current information for articles and social media posts
**Journalists** requiring comprehensive news coverage on specific subjects
**Research teams** needing to track developments in their field of expertise
**Investment professionals** requiring real-time updates on market trends
**Academic researchers** needing to stay informed about industry developments
**Corporate communications** teams requiring news monitoring for crisis management

Business Value
**Time Efficiency** - Reduces news reading time from hours to minutes
**Cost Savings** - Eliminates need for manual news monitoring and summarization
**Comprehensive Coverage** - AI ensures all important developments are captured
**Scalability** - Handles unlimited topic searches without additional resources
**Quality Assurance** - AI ensures professional-quality summaries every time
**Real-Time Updates** - Always delivers the latest news on any topic
**Research Integration** - Incorporates current information for relevant, timely insights

This template revolutionizes news consumption by combining AI-powered summarization with real-time news aggregation, creating an automated system that delivers professional-quality news summaries on any topic from a simple form submission.
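As an illustration of the article-mapping step, a Code node could reduce the Gnews response to the fields the summarization prompt needs, roughly as below. The field names follow Gnews.io's documented response shape, so verify them against your own test execution.

```javascript
// Illustrative mapping of the Gnews response to the fields the AI prompt needs.
const articles = $input.first().json.articles || [];

return articles.map(a => ({
  json: {
    title: a.title,
    description: a.description,
    url: a.url,
    publishedAt: a.publishedAt,
    source: a.source?.name,
  },
}));
```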
by Bohdan Saranchuk
This n8n template automates your customer support workflow by connecting Gmail, OpenAI, Supabase, and Slack. It listens for new incoming emails, classifies them using AI, routes them to the appropriate Slack channel based on category (e.g., support or new requests), logs each thread to Supabase for tracking, and marks the email as read once processed.

Good to know
• The OpenAI API is used for automatic email classification, which incurs a small per-request cost. See OpenAI Pricing for up-to-date info.
• You can easily expand the categories or connect more Slack channels to fit your workflow.
• The Supabase integration ensures you don't process the same thread twice.

How it works
Gmail Trigger checks for unread emails.
Supabase Get Row verifies if the thread already exists.
If it's a new thread, the OpenAI node classifies the email into categories such as "support" or "new-request."
The Switch node routes messages to the correct Slack channel based on classification.
Supabase Create Row logs thread details (sender, subject, IDs) to your database.
Finally, the Gmail node marks the message as read to prevent duplication.

How to use
• The workflow uses a manual Gmail trigger by default, but you can adjust the polling frequency.
• Modify category names or Slack channels to match your internal setup.
• Extend the workflow to include auto-replies or ticket creation in your CRM.

Requirements
• Gmail account (with OAuth2 credentials)
• Slack workspace (with channel access)
• OpenAI account for classification
• Supabase project for storing thread data

Customizing this workflow
Use this automation to triage incoming requests, route sales leads to specific teams, or even filter internal communications. You can add nodes for auto-responses, CRM logging, or task creation in Notion or ClickUp.
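As a hedged sketch only, the classification step could use a prompt and output shape along these lines for the Switch node to route on; the template's actual prompt and category names may differ from what is shown here.

```javascript
// Illustrative only: categories match the examples above; extend as needed.
const categories = ['support', 'new-request'];

const systemPrompt = `Classify the following email into exactly one of these categories:
${categories.join(', ')}. Reply with the category name only.`;

// The Switch node can branch on this value to pick the right Slack channel.
const exampleOutput = { category: 'support' };

return [{ json: { systemPrompt, exampleOutput } }];
```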