by Milan Vasarhelyi - SmoothWork
**Video Introduction**

Want to automate your inbox or need a custom workflow? 📞 Book a Call | 💬 DM me on LinkedIn

**Workflow Overview**

This workflow creates an intelligent AI chatbot that retrieves recipes from an external API through natural conversation. When users ask for recipes, the AI agent automatically determines when to use the recipe lookup tool, fetches real-time data from the API Ninjas Recipe API, and provides helpful, conversational responses. This demonstrates the power of API-to-API integration within n8n, allowing AI agents to access external data sources on demand.

**Key Features**

- **Intelligent Tool Calling:** The AI agent automatically decides when to use the HTTP Request Tool based on user queries
- **External API Integration:** Connects to the API Ninjas Recipe API using Header Authentication for secure access
- **Conversational Memory:** Maintains context across multiple turns for natural dialogue
- **Dynamic Query Generation:** The AI model automatically generates the appropriate search query parameters based on user input

**Common Use Cases**

- Build AI assistants that need access to real-time external data
- Create chatbots with specialized knowledge from third-party APIs
- Demonstrate API-to-API integration patterns for custom automation
- Prototype AI agents with tool-calling capabilities

**Setup & Configuration**

Required credentials:

- **OpenAI API:** Sign up at OpenAI and obtain an API key for the language model. Configure this in n8n's credential manager.
- **API Ninjas:** Register at API Ninjas to get your free API key for the Recipe API (supports 400+ calls/day). This API uses Header Authentication with the header name "X-Api-Key".

**Agent Configuration:** The AI Agent includes a system message instructing it to "Always use the recipe tool if i ask you for recipe." This ensures the agent leverages the external API when appropriate.
The HTTP Request Tool is configured with the API endpoint (https://api.api-ninjas.com/v1/recipe) and set to accept query parameters automatically from the AI model. The tool description "Use the query parameter to specify the food, and it will return a recipe" helps the AI understand when and how to use it.

**Language Model:** Currently configured to use OpenAI's gpt-5-mini, but you can change this to other compatible models based on your needs and budget.

**Memory:** Uses a window buffer to maintain conversation context, enabling natural multi-turn conversations where users can ask follow-up questions.
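As a sketch, the request the HTTP Request Tool sends can be built like this. The endpoint and the "X-Api-Key" header name come from the template; the key value is a placeholder, and the query is whatever the AI model generates from the user's message.

```javascript
// Builds the request the HTTP Request Tool sends to the Recipe API.
// The "X-Api-Key" header is the Header Authentication described above.
function buildRecipeRequest(query, apiKey) {
  return {
    url: `https://api.api-ninjas.com/v1/recipe?query=${encodeURIComponent(query)}`,
    headers: { "X-Api-Key": apiKey }, // replace apiKey with your API Ninjas key
  };
}

const req = buildRecipeRequest("pad thai", "YOUR_API_KEY");
// req.url → "https://api.api-ninjas.com/v1/recipe?query=pad%20thai"
```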
by Yaron Been
Automate Financial Operations with O3 CFO & GPT-4.1-mini Finance Team

This workflow builds a virtual finance department inside n8n. At the center is a CFO Agent (O3 model) who acts as a strategic leader. When a financial request comes in, the CFO interprets it, decides the strategy, and delegates to the specialist agents (each powered by GPT-4.1-mini for cost efficiency).

🟢 Section 1 – Entry & Leadership

Nodes:
- 💬 When chat message received → Entry point for user financial requests.
- 💼 CFO Agent (O3) → Acts as the Chief Financial Officer. Interprets the request, decides the approach, and delegates tasks.
- 💡 Think Tool → Helps the CFO brainstorm and refine financial strategies.
- 🧠 OpenAI Chat Model CFO (O3) → High-level reasoning engine for strategic leadership.

✅ Beginner view: Think of this as your finance CEO’s desk — requests land here, the CFO figures out what needs to be done, and the right specialists are assigned.

📊 Section 2 – Specialist Finance Agents

Each specialist is powered by GPT-4.1-mini (fast + cost-effective).

- 📈 Financial Planning Analyst → Builds budgets, forecasts, and financial models.
- 📚 Accounting Specialist → Handles bookkeeping, tax prep, and compliance.
- 🏦 Treasury & Cash Management Specialist → Manages liquidity, banking, and cash flow.
- 📊 Financial Analyst → Runs KPI tracking, performance metrics, and variance analysis.
- 💼 Investment & Risk Analyst → Performs investment evaluations, capital allocation, and risk management.
- 🔍 Internal Audit & Controls Specialist → Checks compliance, internal controls, and audits.

✅ Beginner view: This section is your finance department — every role you’d find in a real company, automated by AI.

📋 Section 3 – Flow of Execution

1. User sends a request (e.g., “Create a financial forecast for Q1 2026”).
2. CFO Agent (O3) interprets it → “We need planning, analysis, and treasury.”
3. Delegates tasks to the relevant specialists.
4. Specialists process in parallel, generating plans, numbers, and insights.
Finally, the CFO Agent compiles and returns a comprehensive financial report.

✅ Beginner view: The CFO is the conductor, and the specialists are the musicians. Together, they produce the financial “symphony.”

📊 Summary Table

| Section | Key Roles | Model | Purpose | Beginner Benefit |
| --- | --- | --- | --- | --- |
| 🟢 Entry & Leadership | CFO Agent, Think Tool | O3 | Strategic direction | Acts like a real CFO |
| 📊 Finance Specialists | FP Analyst, Accounting, Treasury, FA, Investment, Audit | GPT-4.1-mini | Specialized tasks | Each agent = finance department role |
| 📋 Execution Flow | All connected | O3 + GPT-4.1-mini | Collaboration | Output = complete financial management |

🌟 Why This Workflow Rocks

- **Full finance department in n8n**
- **Strategic + execution separation** → O3 for CFO, GPT-4.1-mini for team
- **Cost-optimized** → Heavy lifting done by mini models
- **Scalable** → Easily add more finance roles (tax, payroll, compliance, etc.)
- **Practical outputs** → Reports, budgets, risk analyses, audit notes

👉 Example Use Case: “Generate a Q1 financial forecast with cash flow analysis and risk report.”

1. CFO reviews the request.
2. Financial Planning Analyst → Budget + Forecast.
3. Treasury Specialist → Cash flow modeling.
4. Investment Analyst → Risk review.
5. Audit Specialist → Compliance check.
6. CFO delivers a packaged financial report back to you.
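The delegate-then-compile pattern described above can be sketched in plain JavaScript. This is an illustration of the pattern, not the n8n implementation: the specialist names come from Section 2, but the `runSpecialist` function and the role keys are hypothetical stand-ins for the GPT-4.1-mini agent calls.

```javascript
// Hypothetical sketch: the CFO step selects the specialists a request
// needs, the specialist calls run in parallel, and the results are
// compiled into a single report object.
const SPECIALISTS = {
  planning: "Financial Planning Analyst",
  treasury: "Treasury & Cash Management Specialist",
  risk: "Investment & Risk Analyst",
};

async function runSpecialist(role, request) {
  // Placeholder for a GPT-4.1-mini agent call.
  return `${role}: analysis of "${request}"`;
}

async function handleRequest(request, neededRoles) {
  // Specialists process in parallel, as in Section 3.
  const sections = await Promise.all(
    neededRoles.map((key) => runSpecialist(SPECIALISTS[key], request))
  );
  return { request, sections }; // compiled by the CFO step
}
```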
by Robin
💸 HOW IT WORKS — AI TELEGRAM EXPENSE TRACKER

This workflow transforms natural Telegram messages into structured expenses using AI — without forms, manual typing, or complex inputs.

Simply send a message like:

Groceries 23€ yesterday

The workflow validates the sender, understands the intent, extracts structured data, and prepares the expense for approval before saving.

────────────────

🔄 WORKFLOW OVERVIEW

🟩 1. Secure Input Layer
Incoming Telegram messages are checked against a list of approved Chat IDs to ensure only authorized users can create expenses.

🟦 2. AI Expense Detection
An AI layer analyzes the message and decides whether it represents a real financial transaction. Non-expense messages are safely ignored to avoid noise in your data.

🟨 3. Smart Category Intelligence
Existing categories are loaded from Google Sheets and compared with the message content. If no suitable category exists, the workflow can suggest and learn new categories over time.

🟪 4. Structured Data Extraction
AI converts natural language into structured fields:
• date
• amount
• category
• description
• shared vs personal expense
Supports German and English input.

🟥 5. Human Approval & Storage
Before saving, the user confirms the extracted result directly via Telegram. After approval, the expense is appended to Google Sheets automatically.

────────────────

📋 SETUP REQUIREMENTS

Before using this workflow, make sure the following components are ready:

1️⃣ Telegram Bot
Create a Telegram bot using BotFather and connect it to the Telegram Trigger node in n8n. Detailed setup instructions can be found here.

2️⃣ LLM API Access
An API key for a Large Language Model (LLM) is required for:
• expense detection
• category matching
• structured data extraction
Add your API credentials inside the AI node configuration.

3️⃣ Google Sheets
Create two Google Sheets before importing the workflow.
• EXPENSES (required columns: date, amount, category, description, common_expense, Person)
• EXPENSE_CATEGORIES (required columns: category, description, examples)

The workflow reads existing data and appends new entries automatically.

────────────────

💡 KEY FEATURES
• AI-powered expense detection from natural language
• Self-learning category system
• Human-in-the-loop approval step
• Multi-language support (DE & EN)
• Clean Google Sheets integration
• Designed for real-life shared finance tracking

────────────────

👥 MULTI-USER SUPPORT

Built for couples, roommates, or teams. Add multiple Chat IDs in: Security — Allow Approved Chat IDs.

Each expense is automatically tagged with the sender. Shared expenses are stored as true in the common_expense column, while personal expenses default to false unless shared spending is detected. This allows easy downstream analysis, dashboards, or automation.
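A minimal sketch of the structured-extraction output, assuming the LLM returns JSON with the fields listed above. The normalization function, default values, and exact field shapes are illustrative, not the template's actual prompt or code; only the column names come from the EXPENSES sheet described above.

```javascript
// Normalizes the LLM's structured JSON into a row matching the EXPENSES
// sheet columns: date, amount, category, description, common_expense, Person.
function normalizeExpense(llmJson, senderName) {
  return {
    date: llmJson.date,                              // e.g. resolved from "yesterday"
    amount: Number(llmJson.amount),                  // 23 for "23€"
    category: llmJson.category ?? "Uncategorized",   // matched or newly suggested
    description: llmJson.description ?? "",
    common_expense: Boolean(llmJson.common_expense), // shared = true, personal = false
    Person: senderName,                              // tagged with the sender
  };
}

const row = normalizeExpense(
  { date: "2024-05-12", amount: "23", category: "Groceries", description: "Groceries", common_expense: false },
  "Robin"
);
// row.amount === 23, row.Person === "Robin"
```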
by Daiki Takayama
[Workflow Overview]

⚠️ Self-Hosted Only: This workflow uses the gotoHuman community node and requires a self-hosted n8n instance.

Who's It For

Content teams, bloggers, news websites, and marketing agencies who want to automate content creation from RSS feeds while maintaining editorial quality control. Perfect for anyone who needs to transform news articles into detailed blog posts at scale.

What It Does

This workflow automatically converts RSS feed articles into comprehensive, SEO-optimized blog posts using AI. It fetches articles from your RSS source, generates detailed content with GPT-4, sends drafts for human review via gotoHuman, and publishes approved articles to Google Docs with automatic Slack notifications to your team.

How It Works

1. Schedule Trigger runs every 6 hours to check for new RSS articles
2. RSS Read node fetches the latest articles from your feed
3. Format RSS Data extracts key information (title, keywords, description)
4. Generate Article with AI creates a structured blog post using OpenAI GPT-4
5. Structure Article Data formats the content with metadata
6. Request Human Review sends the article for approval via gotoHuman
7. Check Approval Status routes the workflow based on the review decision
8. Create Google Doc and Add Article Content publish approved articles
9. Send Slack Notification alerts your team with article details

Requirements

- **OpenAI API key** with GPT-4 access
- **Google account** for Google Docs integration
- **gotoHuman account** for human-in-the-loop approval workflow
- **Slack workspace** for team notifications
- **RSS feed URL** from your preferred source

How to Set Up

1. Configure RSS Feed: In the "RSS Read" node, replace the example URL with your RSS feed source
2. Connect OpenAI: Add your OpenAI API credentials to the "OpenAI Chat Model" node
3. Set Up Google Docs: Connect your Google account and optionally specify a folder ID for organized storage
4. Configure gotoHuman: Add your gotoHuman credentials and create a review template for article approval
5. Connect Slack:
Authenticate with Slack and select the channel for notifications.

Customize Content: Modify the AI prompt in "Generate Article with AI" to match your brand voice and article structure.

Adjust Schedule: Change the trigger frequency in "Schedule Trigger" based on your content needs.

How to Customize

- **Article Style**: Edit the AI prompt to change tone, length, or structure
- **Keywords & SEO**: Modify the "Format RSS Data" node to adjust keyword extraction logic
- **Publishing Destination**: Change from Google Docs to other platforms (WordPress, Notion, etc.)
- **Approval Workflow**: Customize the gotoHuman template to include specific review criteria
- **Notification Format**: Adjust the Slack message template to include additional metadata
- **Processing Volume**: Modify the Code node to process multiple RSS articles instead of just one
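The "Processing Volume" customization can be sketched like this. In an n8n Code node this logic would receive the node's `items` array; the field names (title, link, contentSnippet) match typical RSS Read output but should be checked against your feed.

```javascript
// Selects up to maxArticles items from the RSS Read output and keeps the
// key information (title, link, description) for the AI step.
function selectArticles(items, maxArticles = 3) {
  return items.slice(0, maxArticles).map((item) => ({
    json: {
      title: item.json.title,
      link: item.json.link,
      description: item.json.contentSnippet,
    },
  }));
}

// Inside the Code node, the last line would be:
// return selectArticles(items, 3);
```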
by Piotr Sikora
Automatically Assign Categories and Tags to Blog Posts with AI

This workflow streamlines your content organization process by automatically analyzing new blog posts in your GitHub repository and assigning appropriate categories and tags using OpenAI. It compares new posts against existing entries in a Google Sheet, updates the metadata for each new article, and records the suggested tags and categories for review — all in one automated pipeline.

Who’s It For

- **Content creators and editors** managing a static website (e.g., Astro or Next.js) who want AI-driven tagging.
- **SEO specialists** seeking consistent metadata and topic organization.
- **Developers or teams** managing a Markdown-based blog stored in GitHub who want to speed up post curation.

How It Works

1. Form Trigger – Starts the process manually with a form that initiates article analysis.
2. Get Data from Google Sheets – Retrieves existing post records to prevent duplicate analysis.
3. Compare GitHub and Google Sheets – Lists all .md or .mdx blog posts from the GitHub repository (piotr-sikora.com/src/content/blog/pl/) and identifies new posts not yet analyzed.
4. Check New Repo Files – Uses a Code node to filter only unprocessed files for AI tagging.
5. Switch Node – If there are no new posts, the workflow stops and shows a confirmation message. If new posts exist, it continues to the next step.
6. Get Post Content from GitHub – Downloads the content of each new article.
7. AI Agent (LangChain + OpenAI GPT-4.1-mini) – Reads each post’s frontmatter (--- section) and body, suggests new categories and tags based on the article’s topic, and returns a JSON object with proposed updates (Structured Output Parser).
8. Append to Google Sheets – Logs results, including the file name, existing tags and categories, and proposed tags and categories (AI suggestions).
9. Completion Message – Displays a success message confirming the categorization process has finished.

Requirements

- **GitHub account** with repository access to your website content.
- **Google Sheets connection** for storing metadata suggestions.
- **OpenAI account** (credential stored in openAiApi).

How to Set Up

1. Connect your GitHub, Google Sheets, and OpenAI credentials in n8n.
2. Update the GitHub repository path to match your project (e.g., src/content/blog/en/).
3. In Google Sheets, create columns: FileName, Categories, Proposed Categories, Tags, Proposed Tags.
4. Adjust the AI model or prompt text if you want different tagging behavior.
5. Run the workflow manually using the Form Trigger node.

How to Customize

- Swap OpenAI GPT-4.1-mini for another LLM (e.g., Claude or Gemini) via the LangChain node.
- Modify the prompt in the AI Agent to adapt categorization style or tone.
- Add a GitHub commit node if you want AI-updated metadata written back to files automatically.
- Use the Schedule Trigger node to automate this process daily.

Important Notes

- All API keys and credentials are securely stored — no hardcoded keys.
- The workflow includes multiple sticky notes explaining repository setup, file retrieval and AI tagging, and the Google Sheet data structure.
- It uses a LangChain memory buffer to improve contextual consistency across multiple analyses.

Summary

This workflow automates metadata management for blogs or documentation sites by combining GitHub content, AI categorization, and Google Sheets tracking. With it, you can easily maintain consistent tags and categories across dozens of articles — boosting SEO, readability, and editorial efficiency without manual tagging.
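The "Check New Repo Files" filtering step can be sketched as follows: keep only .md/.mdx files whose names are not already logged in the sheet. This assumes the sheet rows carry a FileName column, as described in the setup; the function name and input shapes are illustrative.

```javascript
// Filters the GitHub file list down to Markdown posts not yet present
// in the Google Sheet, so only unprocessed posts reach the AI tagging step.
function findUnprocessedPosts(repoFiles, sheetRows) {
  const processed = new Set(sheetRows.map((row) => row.FileName));
  return repoFiles.filter(
    (file) => /\.(md|mdx)$/.test(file.name) && !processed.has(file.name)
  );
}

const newPosts = findUnprocessedPosts(
  [{ name: "post-a.md" }, { name: "post-b.mdx" }, { name: "notes.txt" }],
  [{ FileName: "post-a.md" }]
);
// newPosts → [{ name: "post-b.mdx" }]
```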
by Ryan Nolan
This template and YouTube video go over 5 different implementations of evaluations within n8n:

- Categorization
- Correctness
- Tools used
- String similarity
- Helpfulness

You’ll learn when to use each type, how to set up test datasets in Google Sheets or data tables, and how to track your results over time. I also explain best practices like only changing one variable at a time, documenting your prompts and model settings, and building proper training datasets with enough examples to confidently validate your workflow.

YouTube Video: https://www.youtube.com/watch?v=-4LXYOhQ-Z0

Thank you for downloading our free n8n Evaluations template. If you enjoyed the template + tutorial, please subscribe to the YouTube channel. We are uploading weekly content on AI/n8n.

Connect With Us

Check out the links down below. If you need help with this template, want 1:1 coaching, or have an n8n project you want to build, reach out at ryannolandata@gmail.com

- Free Skool AI/n8n Group: https://www.skool.com/data-and-ai
- LinkedIn: https://www.linkedin.com/in/ryan-p-nolan/
- Twitter/X: https://x.com/RyanMattDS
- Website: https://ryanandmattdatascience.com/
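To make the string-similarity evaluation type concrete, here is one common way such a metric is computed: compare the model's answer against a reference answer and score it between 0 and 1. This Levenshtein-based scorer is an illustration of the idea, not necessarily the exact metric used in the template.

```javascript
// Edit distance between two strings (insertions, deletions, substitutions).
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                     // deletion
        dp[i][j - 1] + 1,                                     // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)    // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Normalized similarity: 1 means identical, 0 means completely different.
function similarity(expected, actual) {
  const maxLen = Math.max(expected.length, actual.length) || 1;
  return 1 - levenshtein(expected, actual) / maxLen;
}

// similarity("refund approved", "refund approved") === 1
```

In an evaluation run, you would compare each workflow output against the expected answer from your Google Sheets test dataset and track the scores over time.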
by Bhautik Trambadia
AI SQL Analytics Agent: Chat With Any PostgreSQL Database Using Natural Language

This template turns your PostgreSQL database into an AI-powered analytics assistant. Instead of exporting data or uploading huge CSV files to an LLM, this workflow lets the AI query your database directly using SQL and return clean insights instantly. Just type your table name once and start asking questions in plain English. No SQL required.
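A hypothetical sketch of the pattern behind this kind of template: the agent is prompted with your table name, generates SQL from the question, and a guard allows only read-only SELECT statements before execution. The prompt text, function names, and the guard are illustrative, not taken from the workflow itself.

```javascript
// The table name is injected once into the agent's instructions.
function buildAgentPrompt(tableName) {
  return (
    `You are a PostgreSQL analyst. Answer questions by writing SQL ` +
    `against the table "${tableName}". Return only a single SELECT statement.`
  );
}

// Crude safety check before running AI-generated SQL: permit only
// statements that start with SELECT (a common guardrail for read-only access).
function isReadOnlyQuery(sql) {
  return /^\s*select\b/i.test(sql);
}

isReadOnlyQuery("SELECT count(*) FROM orders"); // true
isReadOnlyQuery("DROP TABLE orders");           // false
```

A read-only database role on the PostgreSQL side is a stronger safeguard than any string check; the regex here only illustrates the idea.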
by Emilio Loewenstein
Description

Save hours of manual reporting with this end-to-end automation. This workflow pulls campaign performance data (demo or live), generates a clear AI-powered executive summary, and compiles everything into a polished weekly report. The report is formatted in Markdown, automatically stored in Google Docs, and instantly shared with your team via Slack — no spreadsheets, no copy-paste, no delays.

What it does

- ⏰ Runs on a schedule (e.g. every Monday morning)
- 📊 Collects performance metrics (Google Ads, Meta, TikTok, YouTube – demo data included)
- 🤖 Uses AI to summarize wins, issues, and recommendations
- 📝 Builds a structured Markdown report (totals, channel performance, top campaigns)
- 📄 Creates and updates a Google Doc with the report
- 💬 Notifies your team in Slack with topline numbers + a direct report link
- 📧 Optionally emails the report to stakeholders or clients

Why it’s valuable

- **Saves time** – no manual data aggregation
- **Standardizes reporting** – same format and quality every week
- **Adds insights** – AI highlights what matters most
- **Improves transparency** – instant access via Docs, Slack, or Email
- **Scales easily** – adapt to multiple clients or campaigns
- **Professional delivery** – branded, polished reports on autopilot

💡 Extra recommendation: Connect to a Google Docs template to give your reports a professional, branded look.
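The Markdown-building step can be sketched like this, using made-up demo metrics in the shape the template describes (totals plus per-channel performance). The function and field names are illustrative, not the workflow's actual code.

```javascript
// Builds a Markdown weekly report with a totals line and a channel table.
function buildWeeklyReport(weekLabel, channels) {
  const totalSpend = channels.reduce((sum, c) => sum + c.spend, 0);
  const lines = [
    `# Weekly Performance Report (${weekLabel})`,
    ``,
    `**Total spend:** $${totalSpend}`,
    ``,
    `| Channel | Spend | Conversions |`,
    `| --- | --- | --- |`,
    ...channels.map((c) => `| ${c.name} | $${c.spend} | ${c.conversions} |`),
  ];
  return lines.join("\n");
}

const report = buildWeeklyReport("W12", [
  { name: "Google Ads", spend: 1200, conversions: 85 },
  { name: "Meta", spend: 800, conversions: 64 },
]);
// report starts with "# Weekly Performance Report (W12)"
```

The resulting Markdown is what gets written into the Google Doc and summarized for the Slack notification.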
by Rahul Joshi
📘 Description

This workflow automates AI-driven Facebook Messenger product inquiry handling, connecting Facebook DMs with Airtable inventory and returning instant automated replies based on product availability. It runs hourly, fetches new messages, extracts the latest customer query, uses GPT-4o to identify the product and intent, merges this with the Airtable inventory dataset, performs an AI-assisted product match, and replies automatically inside the same Facebook conversation. Invalid or malformed messages are logged to Google Sheets for review.

⚙️ What This Workflow Does (Step-by-Step)

▶️ Trigger – Fetch New Facebook Messages (Every Hour)
Schedules hourly polling of new conversations from Facebook Messenger.

🟦 Fetch Facebook Conversation List (Graph API)
Retrieves conversation threads from the connected Facebook Page.

💬 Fetch Facebook Conversation Messages (Graph API)
Loads message details (content, sender, timestamp) for the selected conversation.

📩 Extract Latest Facebook Message (Code)
Sorts all messages and picks the latest one → this is the message analyzed by AI.

🔍 Validate Record Structure (IF)
Ensures the incoming message has the required fields. Valid → AI analysis. Invalid → logged to Google Sheets.

📄 Log Invalid Records to Google Sheet
Stores malformed or unprocessable messages for audit and correction.

🧠 Configure GPT-4o — Message Classification Model
Defines the AI model used to extract product details and intent from the customer’s message.

🤖 AI – Extract Product & Customer Intent
The AI identifies the product name (standardized), the customer intent (availability, pricing, inquiry), and a cleaned query, and always returns structured JSON. No inventory lookup happens here.

📦 Fetch Inventory Records from Airtable
Pulls the complete product inventory list to cross-match with the customer request.

🔁 Merge AI Output With Inventory Dataset
Combines the AI-interpreted message data with the Airtable inventory records. This prepares a unified object for product lookup.
📝 Build Combined AI + Inventory Payload (Code)
Constructs { ai: {...}, inventory: [...] } for the product-matching AI agent.

🧠 Configure GPT-4o — Product Matching Model
Sets strict rules for identifying whether the requested product exists in inventory.

🤖 AI – Match Requested Product in Inventory
The AI checks for an exact or close match to the product name, determines whether the item exists, and generates structured JSON with reply text and a confidence score.

🧹 Parse AI Product Match JSON (Code)
Ensures the AI output is valid JSON before making decisions.

🔍 Check If Product Exists (IF)
If found → sends a “product available” reply. If not → sends a “product not found” reply.

📨 Send Facebook Reply — Product Found (Graph API)
Sends a personalized Messenger reply including matched product details.

📨 Send Facebook Reply — Product Not Found (Graph API)
Replies politely, informing the customer that the product is not available.

🧩 Prerequisites

- Facebook Graph API access token
- Airtable API token
- Azure OpenAI GPT-4o credentials
- Google Sheets OAuth

💡 Key Benefits

✔ Fully automated Facebook DM handling
✔ AI-powered product identification even with typos or unclear wording
✔ Real-time product availability responses
✔ Unified Airtable-driven catalog lookup
✔ Automatic fallback for invalid messages
✔ Zero manual intervention for customer support

👥 Perfect For

- Ecommerce stores
- Catalog-based product businesses
- Teams handling large volumes of Facebook DM inquiries
- Businesses wanting instant customer replies without agents
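The "Build Combined AI + Inventory Payload" Code step can be sketched like this, using the { ai: {...}, inventory: [...] } shape described above. The field names inside `ai` are plausible examples, not the workflow's exact schema.

```javascript
// Combines the AI-extracted message data with the Airtable inventory
// records into the single payload passed to the product-matching agent.
function buildMatchPayload(aiOutput, inventoryRecords) {
  return {
    ai: {
      product: aiOutput.product, // standardized product name
      intent: aiOutput.intent,   // availability | pricing | inquiry
      query: aiOutput.query,     // cleaned customer query
    },
    inventory: inventoryRecords, // full Airtable product list
  };
}

const payload = buildMatchPayload(
  { product: "iPhone 15 Case", intent: "availability", query: "do you have iphone 15 cases?" },
  [{ name: "iPhone 15 Case", stock: 12 }]
);
// payload.ai.product === "iPhone 15 Case"
```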
by Davide
This workflow is an AI-powered text-to-speech production pipeline designed to generate highly expressive audio using ElevenLabs v3. It automates the entire process from raw text input to final audio distribution, uploading the MP3 file to Google Drive and an FTP space.

Key Advantages

1. ✅ Cinematic-quality audio output
By combining AI-driven emotional tagging with ElevenLabs v3, the workflow produces audio that feels acted, not simply read.

2. ✅ Fully automated pipeline
From raw text to hosted audio file, everything is handled automatically: no manual tagging, no manual uploads, no post-processing.

3. ✅ Multi-input flexibility
The workflow supports manual testing, chat-based usage, and API/webhook integrations, making it ideal for apps, CMSs, games, and content platforms.

4. ✅ Language-agnostic
The agent preserves the original language of the input text and applies tags accordingly, making it suitable for international projects.

5. ✅ Consistent and correct tagging
The use of Context7 ensures that all audio tags follow the official ElevenLabs v3 specifications, reducing errors and incompatibilities.

6. ✅ Scalable and production-ready
Automatic uploads to Drive and FTP make this workflow ready for large content volumes, CDN delivery, and team collaboration.

7. ✅ Perfect for storytelling and media
The workflow is especially effective for horror and cinematic storytelling, audiobooks and podcasts, games and immersive narratives, and voiceovers with emotional depth.

How it Works

Text Input & Processing: The workflow accepts text input through multiple triggers: manual execution via the "Set text" node, webhook POST requests, or chat message inputs. This text is passed to the Audio Tagger Agent.

AI-Powered Audio Tagging: The Audio Tagger Agent uses Claude Sonnet 4.5 to analyze the input text and intelligently insert ElevenLabs v3 audio tags.
The agent follows strict rules: maintaining the original meaning, adding tags for pauses, rhythm, emphasis, emotional tones, breathing, laughter, and delivery variations, while keeping the output in the original language.

Reference Validation: During tagging, the agent consults the Context7 MCP tool, which provides access to the official ElevenLabs v3 audio tags guide to ensure correct and consistent tag usage.

Text-to-Speech Conversion: The tagged text is sent to ElevenLabs' v3 (alpha) model, which converts it into speech using a specific voice with customized voice settings, including stability, similarity boost, style, speaker boost, and speed controls.

Dual Output Distribution: The generated audio file is simultaneously uploaded to two destinations: Google Drive (in a specified "Elevenlabs" folder) and an FTP server (BunnyCDN), ensuring the file is stored on both platforms.

Set Up Steps

Prerequisite Configuration:
- Configure Anthropic API credentials for Claude Sonnet access
- Set up ElevenLabs API credentials with access to v3 (alpha) models
- Configure Google Drive OAuth2 credentials with access to the target folder
- Set up FTP credentials for BunnyCDN or alternative storage
- Configure the Context7 MCP tool with appropriate authentication headers

Workflow-Specific Setup:
- In the "Set text" node, replace "YOUR TEXT" with the default text you want to process (for manual execution)
- In the "Upload to FTP" node, update the path from "/YOUR_PATH/" to your actual FTP directory structure
- Verify the Google Drive folder ID points to your intended destination folder
- Ensure the webhook path is correctly configured for external integrations
- Adjust voice parameters in the ElevenLabs node if different voice characteristics are desired

Execution Options:
- For one-time processing: use the manual trigger and set the text in the "Set text" node
- For API integration: use the webhook endpoint to receive text via POST requests
- For chat-based interaction: use the chat trigger for
conversational text input

👉 Subscribe to my new YouTube channel. Here I’ll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
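For the API-integration option, a webhook call to trigger the pipeline might look like the sketch below. The URL path and the `text` field name are hypothetical placeholders; match them to your actual n8n webhook configuration.

```javascript
// Builds the POST request that would be sent to the workflow's webhook
// entry point. The path "/webhook/tts-pipeline" is a placeholder.
function buildWebhookRequest(baseUrl, text) {
  return {
    url: `${baseUrl}/webhook/tts-pipeline`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  };
}

const req = buildWebhookRequest(
  "https://n8n.example.com",
  "It was a dark night. Someone was there."
);
// JSON.parse(req.body).text holds the text to be tagged and voiced
```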
by Atta
What it does

Instead of manually checking separate apps for your calendar, weather, and news each morning, this workflow consolidates the most important information into a single, convenient audio briefing. The "Good Morning Podcast" is designed to be a 3-minute summary of your day ahead, delivered directly to you. It's multilingual and customizable, allowing you to start your day informed and efficiently.

How it works

The workflow executes three parallel branches before merging the data to generate the final audio file.

1. Weather Summary: It starts by taking a user-provided city and fetching the current 15-hour forecast from OpenWeatherMap. It formats this information into a concise weather report.

2. Calendar Summary: It securely connects to your Google Calendar to retrieve all of today's scheduled meetings and events, then formats the schedule into a clear, readable summary.

3. News Summary: It connects to the NewsAPI to perform two tasks: it fetches the top general headlines and also searches for articles based on user-defined keywords (e.g., "AI", "automation", "space exploration"). The collected headlines are then summarized using a Google Gemini node to create a brief news digest.

4. Audio Generation and Delivery: All three text summaries (weather, calendar, and news) are merged into a single script. The workflow uses Google's Text-to-Speech (TTS) to generate the raw multi-speaker audio. A dedicated FFmpeg node then processes and converts this audio into the final MP3 format. The completed podcast is then sent directly to you via a Telegram bot.

Setup Instructions

To get this workflow running, you will need to configure credentials for each of the external services and set your initial parameters.

⚠️ Important Prerequisite

Install FFmpeg: The workflow requires the FFmpeg software package to be installed on the machine running your n8n instance (local or server). Please ensure it is installed and accessible in your system's PATH before running this workflow.
Required Credentials

- OpenWeatherMap: Sign up for a free account at OpenWeatherMap and get your API key. Add the API key to your n8n OpenWeatherMap credentials.
- Google Calendar & Google AI (Gemini/TTS): You will need Google OAuth2 credentials for the Google Calendar node. You will also need credentials for the Google AI services (Gemini and Text-to-Speech). Follow the n8n documentation to create and add these credentials.
- NewsAPI: Get a free API key from NewsAPI.org. Add the API key to your n8n NewsAPI credentials.
- Telegram: Create a new bot by talking to the BotFather in your Telegram app. Copy the Bot Token it provides and add it to your n8n Telegram credentials. Send a message to your new bot and get your Chat ID from the Telegram Trigger node or another method. You will need this for the Telegram send node.

Workflow Inputs

In the first node (or when you run the workflow manually), you must provide the following initial data:

- name: Your first name, for a personalized greeting.
- city: The city for your local weather forecast (e.g., "Amsterdam").
- language: The language for the entire podcast output (e.g., "en-US", "nl-NL", "fa-IR").
- news_keywords: A comma-separated list of topics you are interested in for the news summary (e.g., "n8n,AI,technology").

How to Adapt the Template

This workflow is highly customizable. Here are several ways you can adapt it to fit your needs:

Triggers

- **Automate It:** The default trigger is manual. Change it to a **Schedule Trigger** to have your podcast automatically generated and sent to you at the same time every morning (e.g., 7:00 AM).

Content Sources

- **Weather:** In the "User Weather Map" node, you can change the forecast type or switch the units from metric to imperial.
- **Calendar:** In the "Get Today Meetings" node, you can select a different calendar from your Google account (e.g., a shared work calendar instead of your personal one).
- **News:** In the "Get Headlines From News Sources" node, change the country or category to get different top headlines. In the "Get Links From Keywords" node, update your keywords to track different topics. In the "Aggregate Headlines" (Gemini) node, you can modify the prompt to change the tone or length of the AI-generated news summary.

Audio Generation

- **Voice & Language:** The language is a starting parameter, but you can go deeper into the Google TTS nodes (Generate Virtual Parts, etc.) to select specific voices, genders, and speaking rates to create a unique podcast host style.
- **Scripting:** Modify the Set and Merge nodes that construct the final script. You can easily change the greeting, the transition phrases between sections, or the sign-off message.

Delivery

- **Platform:** Don't use Telegram? Swap the Telegram node for a Slack node, Discord node, or even an Email node to send the MP3 file to your preferred platform.
- **Message:** Customize the text message that is sent along with the audio file in the final node.
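The script-merging step (the "Scripting" customization above) can be sketched like this: the three branch summaries are combined into one narration script before TTS. The greeting and transition phrases are illustrative placeholders, exactly the parts the Set and Merge nodes let you customize.

```javascript
// Merges the weather, calendar, and news summaries into a single
// narration script with a greeting and transitions between sections.
function buildScript(name, weather, calendar, news) {
  return [
    `Good morning, ${name}! Here is your briefing.`,
    `First, the weather. ${weather}`,
    `Next, your schedule. ${calendar}`,
    `And finally, the news. ${news}`,
    `Have a great day!`,
  ].join("\n\n");
}

const script = buildScript(
  "Atta",
  "Sunny, high of 21°C.",
  "Two meetings, starting at 10:00.",
  "Top story: n8n releases a new version."
);
// script begins with "Good morning, Atta!"
```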
by Tomohiro Goto
🧠 How it works

This workflow automatically transcribes and translates voice messages from Telegram to Slack, enabling seamless communication between Japanese and English speakers.

In our real-world use case, our distributed team often sends short voice updates on Telegram — but most discussion happens on Slack. Before this workflow, we constantly asked:

- “Can someone write a summary of that voice message?”
- “I can’t understand what was said — is there a transcript?”
- “Can we translate this audio for our English-speaking teammates?”

This workflow fixes that problem without changing anyone’s communication habits. Built with n8n, OpenAI Whisper, and GPT-4o-mini, it automatically:

1. Detects when a voice message is posted on Telegram
2. Downloads and transcribes it via Whisper
3. Translates the text with GPT-4o-mini
4. Posts the result in Slack — with flags 🇯🇵→🇺🇸 and username attribution

⚙️ Features

- 🎧 Voice-to-text transcription using OpenAI Whisper
- 🌐 Automatic JA ↔ EN detection and translation via GPT-4o-mini
- 💬 Clean Slack message formatting with flags, username, and original text
- 🔧 Easy to customize: adjust target languages, tone, or message style
- ⚡ Typical end-to-end time: under 10 seconds for short audio clips

💼 Use Cases

- **Global teams** – Send quick voice memos in Telegram and share readable translations in Slack
- **Project coordination** – Record updates while commuting and post bilingual notes automatically
- **Remote check-ins** – Replace daily written reports with spoken updates
- **Cross-language collaboration** – Let English and Japanese teammates stay perfectly synced

💡 Perfect for

- **Bilingual creators and managers** working across Japan and Southeast Asia
- **AI automation enthusiasts** who love connecting voice and chat platforms
- **Teams using Telegram for fast communication** and Slack for structured workspaces

🧩 Notes

- Requires three credentials: TELEGRAM_BOT_TOKEN, OPENAI_API_KEY_HEADER, SLACK_BOT_TOKEN_HEADER
- Slack scopes: chat:write, files:write, channels:history
- You can change
translation direction or add languages in the “Detect Language” → “Translate (OpenAI)” nodes.
- Keep audio files under 25 MB for Whisper processing.
- Always export your workflow with credentials OFF before sharing or publishing.

✨ Powered by OpenAI Whisper × GPT-4o-mini × n8n × Telegram Bot API × Slack API

A complete multilingual voice-to-text bridge — connecting speech, translation, and collaboration across platforms. 🌍
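The Slack message formatting described in Features (flags, username, translation, original text) can be sketched like this. The exact layout is customizable; this shape is only illustrative.

```javascript
// Formats the Slack post: direction flags, sender attribution, the
// translation as a quote, and the original transcript underneath.
function formatSlackMessage(username, sourceLang, translation, original) {
  const flags = sourceLang === "ja" ? "🇯🇵→🇺🇸" : "🇺🇸→🇯🇵";
  return [
    `${flags} Voice message from *${username}*`,
    `> ${translation}`,
    `_Original:_ ${original}`,
  ].join("\n");
}

const msg = formatSlackMessage("taro", "ja", "Running 10 minutes late.", "10分遅れます。");
// msg starts with "🇯🇵→🇺🇸 Voice message from *taro*"
```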