by David Olusola
## Overview
This n8n workflow automates sending a daily weather report for a specified city directly to your email inbox. It's a simple yet effective way to stay informed about the weather without checking manually.

## How It Works
The workflow operates in four main steps:

1. **Daily Schedule Trigger:** A Cron node acts as a scheduler, triggering the workflow automatically at a specific time each day (e.g., every morning at 8:00 AM).
2. **Fetch Weather Data:** An HTTP Request node queries the OpenWeatherMap API for current weather conditions in a specified city (e.g., "London"). The request includes parameters like `units=metric` (for Celsius temperatures) and your unique `appid` (API key) for authentication with OpenWeatherMap.
3. **Format Weather Report:** A Code node receives the raw JSON weather data from the OpenWeatherMap API. JavaScript inside this node extracts the city name, weather description, temperature (current and "feels like"), humidity, and wind speed, then formats this information into a human-readable text string: the actual weather report message.
4. **Send Email Report:** A Gmail node takes the formatted weather report from the Code node and uses your configured Gmail credentials to send an email to your specified recipient address. The subject line dynamically includes the city name, and the body contains the full formatted report.

## Setup Steps

### Step 1: Get Your OpenWeatherMap API Key
1. Go to the OpenWeatherMap website and sign up for a free account if you don't already have one.
2. Once logged in, navigate to the "API keys" section of your profile.
3. Copy your unique API key. You will need it in Step 4.

### Step 2: Create Gmail Credentials in n8n
1. In your n8n instance, click **Credentials** in the left sidebar, then click **New Credential**.
2. Search for and select "Gmail OAuth2 API" or "Gmail API" (OAuth2 is generally preferred).
3. Follow the on-screen instructions to connect your Gmail account. This usually involves clicking an "Authenticate with Google" button and granting n8n the necessary permissions.
4. Save the credential and note its name (e.g., "My Gmail Account").
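The Format Weather Report step can be sketched as a plain function (field paths follow OpenWeatherMap's current-weather JSON response shape; inside the actual Code node you would call it on the incoming item):

```javascript
// Sketch of the "Format Weather Report" Code node logic.
// Field paths follow OpenWeatherMap's /data/2.5/weather response shape.
function formatReport(data) {
  return [
    `Weather report for ${data.name}`,
    `Conditions: ${data.weather[0].description}`,
    `Temperature: ${data.main.temp}°C (feels like ${data.main.feels_like}°C)`,
    `Humidity: ${data.main.humidity}%`,
    `Wind speed: ${data.wind.speed} m/s`,
  ].join('\n');
}

// Inside the n8n Code node, something like:
// return [{ json: { report: formatReport($input.first().json) } }];
```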
by Fahmi Fahreza
This template sets up a weekly ETL (Extract, Transform, Load) pipeline that pulls financial data from QuickBooks Online into Google BigQuery. It not only transfers data, but also cleans, classifies, and enriches each transaction using your own business logic.

## Who It's For
- **Data Analysts & BI Developers:** need structured financial data in a warehouse to build dashboards (e.g., Looker Studio, Tableau) and run complex queries.
- **Financial Analysts & Accountants:** want to run custom SQL queries beyond QuickBooks' native capabilities.
- **Business Owners:** need a permanent, historical archive of transactions for reporting and tracking.

## What the Workflow Does
1. **Extract:** every Monday, fetches the previous week's transactions from your QuickBooks Online account.
2. **Transform:** applies custom business logic: cleans up text fields, generates stable transaction IDs, and classifies transactions (income, expense, internal transfer).
3. **Format:** prepares the cleaned data as a bulk-insert-ready SQL statement.
4. **Load:** inserts the structured, enriched data into a Google BigQuery table.

## Setup Guide
1. **Prepare BigQuery:** create a dataset (e.g., `quickbooks`) and table (e.g., `transactions`). The table schema must match the SQL query in the "Load Data to BigQuery" node.
2. **Add Credentials:** add QuickBooks Online and Google BigQuery credentials to your n8n instance.
3. **Configure Business Logic:** open the Clean & Classify Transactions node and update the JavaScript arrays: `internalTransferAccounts`, `expenseCategories`, `incomeCategories`. Ensure these match your QuickBooks Chart of Accounts exactly.
4. **Configure BigQuery Node:** open the Load Data to BigQuery node, select the correct Google Cloud project, and ensure the SQL query references the correct dataset and table.
5. **Activate the Workflow:** save and activate it. The workflow will now run weekly.

## Requirements
- A running n8n instance (Cloud or Self-Hosted)
- A QuickBooks Online account
- A Google Cloud Platform project with BigQuery enabled
- A BigQuery table with a matching schema

## Customization Options
- **Change Schedule:** modify the schedule node to run daily, monthly, or at a different time.
- **Adjust Date Range:** change the date macro in the Get Last Week's Transactions node.
- **Refine Classification Rules:** add custom logic in the Clean & Classify Transactions node to handle specific edge cases.
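A minimal sketch of the classification logic that the Clean & Classify Transactions node's arrays drive (the array entries below are invented placeholders; replace them with names from your own Chart of Accounts):

```javascript
// Sketch of the "Clean & Classify Transactions" logic.
// The three arrays are placeholders; fill them from your Chart of Accounts.
const internalTransferAccounts = ['Owner Transfers', 'Savings'];
const expenseCategories = ['Office Supplies', 'Software'];
const incomeCategories = ['Sales', 'Consulting Income'];

function classify(txn) {
  if (internalTransferAccounts.includes(txn.account)) return 'internal_transfer';
  if (expenseCategories.includes(txn.category)) return 'expense';
  if (incomeCategories.includes(txn.category)) return 'income';
  return 'unclassified';
}
```

Because the matching is exact string comparison, any mismatch with your Chart of Accounts (extra spaces, different casing) silently lands transactions in the unclassified bucket, which is why the setup guide stresses exact names.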
by Cuong Nguyen
## Who is this for?
This workflow is designed for Content Marketing Teams, Agencies, and Professional Editors who prefer writing in Google Docs but need a seamless way to publish to WordPress. Unlike generic "AI Writers" that generate content from scratch (which often fails AI detection), this workflow focuses on "Document Ops": automating the tedious work of moving, cleaning, and optimizing existing human-written content.

## Why use this workflow? (The SEO Advantage)
Most automation templates leave your SEO score at 0/100 because they fail to map RankMath metadata. This workflow starts with an immediate 65-70/100 RankMath score. By using a Gemini AI agent to analyze your content and mapping the results to RankMath's hidden API fields, it automatically passes these critical checks:

- ✅ **Focus Keyword in SEO Title:** AI automatically inserts the target keyword at the beginning.
- ✅ **Focus Keyword in Meta Description:** AI crafts a compelling description containing the keyword.
- ✅ **Focus Keyword in URL:** AI generates a clean, short, keyword-rich slug.
- ✅ **Focus Keyword at the Start:** the workflow intelligently injects a "hook" sentence containing the keyword at the very top of your post.
- ✅ **Content Length:** preserves your original long-form content.

## How it works
1. **Monitors Google Drive:** watches for new HTML/Doc files in a specific "Drafts" folder.
2. **Cleans Content:** sanitizes raw HTML from Google Docs (removing messy styles and tags).
3. **Smart Duplicate Check:** checks whether the post already exists on WordPress (via slug) to decide whether to create a new draft or update an existing one.
4. **AI Analysis (Gemini):** extracts the best Focus Keyword, SEO Title, and Meta Description from your content.
5. **RankMath Integration:** pushes these SEO values directly into RankMath's custom meta keys.
6. **Archiving:** moves processed files to a "Published" folder to keep your Drive organized.

## Critical Prerequisites (Must Read)
To allow n8n to update RankMath SEO data and prevent 401 Unauthorized errors, you MUST add a helper snippet to your WordPress site:

1. Access your WordPress files via FTP/File Manager.
2. Navigate to `wp-content/mu-plugins/` (create the `mu-plugins` folder if it doesn't exist).
3. Create a file named `n8n-rankmath-helper.php` and paste the following code:

```php
<?php
/*
Plugin Name: n8n RankMath & Auth Helper
Description: Fixes Basic Auth Header for n8n and exposes RankMath meta keys to REST API.
*/

// 1. Fix Authorization Header (solves 401 errors on Apache/LiteSpeed)
add_filter('wp_is_application_passwords_available', '__return_true');

if ( ! function_exists('aiops_enable_basic_auth') ) {
    function aiops_enable_basic_auth() {
        if ( isset( $_SERVER['HTTP_AUTHORIZATION'] ) ) {
            $auth = $_SERVER['HTTP_AUTHORIZATION'];
            if ( strpos( $auth, 'Basic ' ) === 0 ) {
                list( $username, $password ) = explode( ':', base64_decode( substr( $auth, 6 ) ) );
                $_SERVER['PHP_AUTH_USER'] = $username;
                $_SERVER['PHP_AUTH_PW']   = $password;
            }
        }
    }
    add_action('init', 'aiops_enable_basic_auth');
}

// 2. Expose RankMath meta keys to the REST API
add_action( 'rest_api_init', function () {
    $meta_keys = [
        'rank_math_title',
        'rank_math_description',
        'rank_math_focus_keyword',
        'rank_math_robots',
        'rank_math_canonical_url',
    ];
    foreach ( $meta_keys as $meta_key ) {
        register_meta( 'post', $meta_key, [
            'show_in_rest'  => true,
            'single'        => true,
            'type'          => 'string',
            'auth_callback' => function () {
                return current_user_can( 'edit_posts' );
            },
        ] );
    }
});
```

## How to set up

### 1. Configure Credentials
- **Google Drive OAuth2** (Drive scopes).
- **Google Gemini (PaLM)** API key.
- **WordPress:** connect using **Application Passwords** (Users > Profile > Application Passwords).

### 2. Global Configuration (First Node)
Open the node named CONFIG - Edit Settings Here:
- **wp_base_url:** enter your site URL (e.g., `https://your-site.com`, no trailing slash).
- **drive_published_folder_id:** enter the ID of the Google Drive folder where you want to move published files.

### 3. Trigger Setup
Open the Google Drive Trigger node and select your specific "Drafts" folder in the Folder to Watch field.

## Future Roadmap
We are actively improving this template. The upcoming V2 will feature:
- **AI Featured Image Generation:** auto-create branded thumbnails.
- **Content Illustrations:** auto-insert relevant images into the body content.

## Need Help or Want to Customize This?
Contact me for consulting and support: cuongnguyen@aiops.vn
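Once the helper plugin is active, the RankMath fields become ordinary `meta` properties on the WordPress posts REST endpoint. A sketch of the payload the workflow's update request sends (the post ID, credentials, and field values shown are placeholders, and the commented request is an assumption about the standard `wp/v2/posts` endpoint rather than this template's exact node configuration):

```javascript
// Sketch: build the REST payload that writes RankMath meta fields.
// All values here are placeholders for illustration.
function buildRankMathPayload({ focusKeyword, seoTitle, metaDescription }) {
  return {
    meta: {
      rank_math_focus_keyword: focusKeyword,
      rank_math_title: seoTitle,
      rank_math_description: metaDescription,
    },
  };
}

// The workflow would POST this (JSON) to:
//   {wp_base_url}/wp-json/wp/v2/posts/{postId}
// with an Application Passwords Basic Auth header, e.g.:
// await fetch(`${wpBaseUrl}/wp-json/wp/v2/posts/${postId}`, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     Authorization: 'Basic ' + Buffer.from(`${user}:${appPassword}`).toString('base64'),
//   },
//   body: JSON.stringify(buildRankMathPayload(seoFields)),
// });
```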
by Robert Breen
This n8n automation notifies you whenever someone books a meeting with you via Calendly. It sends a customized email via Outlook and a Slack message using details from the event.

## 📌 What This Workflow Does
1. Listens for new Calendly meeting bookings (`invitee.created`).
2. Extracts key details (name, email, reason for meeting, start time).
3. Uses an AI agent to generate both:
   - An HTML email sent to you via Outlook.
   - A Slack message sent to your chosen channel.

## ⚙️ Step-by-Step Setup Instructions

### 1. 🔗 Calendly API Setup
- In n8n, go to Credentials → Add Credential → Calendly API.
- Connect your account with a personal access token.
- In the Calendly Trigger node, set the event to `invitee.created`.

### 2. 📧 Microsoft Outlook Credential
- Add Microsoft Outlook credentials via OAuth2 in n8n → Credentials.
- Select it in the "Send a message" node.

### 3. 💬 Slack Setup
- Add Slack OAuth2 credentials.
- Select your Slack workspace and choose the channel (e.g., #leads).

### 4. 🧠 Configure the AI Agent (OpenAI)
- Provide your OpenAI API key under Credentials → OpenAI API.
- The AI Agent node is pre-configured to format a custom Slack message and a custom Outlook HTML email.

### 5. 🛠 Node Details
- **Calendly Event** (Trigger): listens for new bookings.
- **Edit Fields:** extracts values like name, email, start time, and form answers.
- **Email Generator** (AI Agent): creates the formatted email and Slack message.
- **Send a message** (Outlook): sends the formatted email to your inbox.
- **Slack Message:** sends the AI-generated Slack alert.

## 🧪 Example Output
Slack Message:
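The Edit Fields extraction can be sketched as a plain function. The property paths below are assumptions based on Calendly's `invitee.created` webhook payload shape; verify them against a real event before relying on them:

```javascript
// Sketch: pull booking details out of a Calendly invitee.created event.
// Property paths are assumptions about Calendly's webhook payload.
function extractBooking(event) {
  const invitee = event.payload;
  return {
    name: invitee.name,
    email: invitee.email,
    startTime: invitee.scheduled_event?.start_time,
    // Assume the first question/answer pair is "reason for meeting".
    reason: invitee.questions_and_answers?.[0]?.answer ?? '',
  };
}
```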
by Summer
# Website Leads to Voice Demo and Scheduling
Creator: Summer Chang

## AI Booking Agent Setup Guide

### Overview
This automation turns your website into an active booking agent. When someone fills out your form, it automatically:
1. Adds their information to Notion.
2. Researches their business from their website with AI.
3. Calls them immediately with a personalized pitch.
4. Updates Notion with the call results.

Total setup time: 30-45 minutes.

### What You Need
Before starting, create accounts and gather the following:
- **n8n account** (cloud or self-hosted)
- **Notion account** (the free plan works); duplicate my Notion template
- **OpenRouter API key** from openrouter.ai
- **Vapi account** from vapi.ai: create an AI assistant, set up a phone number, and copy your API key, Assistant ID, and Phone Number ID

### How It Works: The Complete Flow
1. A visitor fills out the form on your website.
2. The form submission creates a new record in Notion with Status = "New".
3. The Notion Trigger detects the new record (it checks every minute).
4. The main workflow executes: it fetches the lead's website, analyzes their business with AI, updates Notion with the analysis, and makes a Vapi call with a personalized intro.
5. The call takes place between your AI agent and the lead.
6. When the call ends, Vapi sends a webhook to n8n.
7. The webhook workflow executes: it fetches the call details from Vapi, generates a call summary with AI, and updates Notion with the results and recording.
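A hedged sketch of the outbound-call request body the main workflow sends to Vapi. The endpoint and field names follow Vapi's public API as I understand it, but verify them against the current Vapi documentation; the IDs and variable name here are placeholders:

```javascript
// Sketch: build a Vapi outbound-call request body.
// assistantId / phoneNumberId are placeholders from your Vapi dashboard;
// "pitch" is an assumed variable name for the AI-researched intro.
function buildVapiCall({ assistantId, phoneNumberId, leadPhone, pitch }) {
  return {
    assistantId,
    phoneNumberId,
    customer: { number: leadPhone },
    // Pass the personalized pitch as a variable the assistant can use.
    assistantOverrides: { variableValues: { pitch } },
  };
}

// The workflow would POST this (JSON) to https://api.vapi.ai/call
// with an "Authorization: Bearer <VAPI_API_KEY>" header.
```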
by Matt Chong
## Who is this for?
- Teams using Gmail and Slack who want to streamline email handling.
- Customer support, sales, and operations teams that want emails sorted by topic and priority automatically.
- Anyone tired of manually triaging customer emails.

## What does it solve?
- Stops important messages from slipping through the cracks.
- Automatically identifies the nature and urgency of incoming emails.
- Routes emails to the right Slack channel with a clear, AI-generated summary.

## How it works
1. The workflow watches for unread emails in your Gmail inbox.
2. It fetches the full email content and passes it to OpenAI for classification.
3. The AI returns structured JSON with the email's category, priority, summary, and sender.
4. Based on the AI result, it assigns a label and a Slack channel.
5. A message is sent to the right Slack channel with the details.

## How to set up
1. **Connect credentials:** Gmail (OAuth2), Slack (OAuth2), OpenAI (API key).
2. **Adjust email polling:** open the Gmail Trigger node and set how frequently it should check for new emails.
3. **Verify routing settings:** in the "Routing Map" node, update the Slack channel IDs for each category if needed.
4. **Customize AI behavior (optional):** tweak the AI Agent prompt to better match your internal categorization rules.

## How to customize this workflow to your needs
- **Add more categories:** update the AI prompt and the schema in the "Structured Output Parser."
- **Change Slack formatting:** modify the message text in the Slack node to include links, emojis, or mentions.
- **Use different routing logic:** expand the Routing Map to assign based on keywords, domains, or even sentiment.
- **Add escalation workflows:** trigger follow-up actions for high-priority or complaint emails.
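The Routing Map step can be sketched like this. The category names and Slack channel IDs below are illustrative placeholders; the real node uses whatever categories your prompt defines and the channels you configure:

```javascript
// Sketch: map an AI classification result to a Slack channel and label.
// Channel IDs and category names are placeholders.
const routingMap = {
  support:   { channel: 'C0SUPPORT00', label: 'Support' },
  sales:     { channel: 'C0SALES0000', label: 'Sales' },
  complaint: { channel: 'C0ESCALATE0', label: 'Complaint' },
};

function route(aiResult) {
  // Fall back to a catch-all channel for categories the map doesn't know.
  const target = routingMap[aiResult.category] ?? { channel: 'C0GENERAL00', label: 'Other' };
  return { ...target, priority: aiResult.priority, summary: aiResult.summary };
}
```

Adding a category is then a two-step change: extend the AI prompt/schema, and add one entry to `routingMap`.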
by Mohammad
# Telegram ticket intake and status tracking with Postgres

## Who's it for
Anyone running support requests through Telegram, email, webhooks, and so on who needs a lightweight ticketing system without paying Zendesk prices. Ideal for small teams, freelancers, or businesses that want tickets logged in a structured database (Postgres) and tracked automatically. This template uses Telegram since it's the most convenient channel.

## How it works / What it does
This workflow turns Telegram into a support desk:
- Receives new requests via a Telegram bot command.
- Creates a ticket in a Postgres database with a correlation ID, requester details, and status.
- Auto-confirms back to the requester with the ticket ID.
- Provides ticket updates (status changes, resolutions) when queried.
- Keeps data clean using dedupe keys and controlled input handling.

## How to set up
1. Create a Telegram bot using @BotFather and grab the token.
2. Connect your Postgres database to n8n and create a `tickets` table:

```sql
CREATE TABLE tickets (
  id BIGSERIAL PRIMARY KEY,
  correlation_id UUID,
  source TEXT,
  external_id TEXT,
  requester_name TEXT,
  requester_email TEXT,
  requester_phone TEXT,
  subject TEXT,
  description TEXT,
  status TEXT,
  priority TEXT,
  dedupe_key TEXT,
  chat_id TEXT,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);
```

3. Add your Telegram and Postgres credentials in n8n (via the Credentials tab, not hardcoded).
4. Import the workflow JSON and replace the placeholder credentials with yours.
5. Test by sending /new in Telegram and following the prompts.

## Requirements
- n8n (latest version recommended)
- Telegram bot token
- Postgres instance (local, Docker, or cloud)

## How to customize the workflow
- Change the database fields if you need more requester info.
- Tweak the Switch node and commands for multiple status types.
- Extend with Slack, Discord, or email nodes for broader notifications.
- Integrate with external systems (CRM, project management) by adding more branches.
by Hans Wilhelm Radam
## Description
This workflow automates personalized email outreach to a list of hospitals. It uses a chat-based interface to accept a region and a list of hospital names, looks up their specific contact details in a structured Google Sheet, and sends a tailored email via Gmail.

## Who's it for
This template is perfect for healthcare startups, medical device sales representatives, or IT consultants who need to conduct targeted outreach to hospital administrators. It's designed for anyone looking to automate a personalized, region-specific email campaign without manual data entry.

## How it works
1. **Trigger:** you provide input via a chat message. The first line is the region (e.g., LUZON), and each subsequent line is a hospital name.
2. **Parsing:** a Code node splits your message into a structured list of items for processing.
3. **Batching:** the workflow processes each hospital one by one for reliable execution.
4. **Data Lookup:** based on the region, the workflow queries the corresponding sheet in a Google Sheets document to find the hospital's specific contact details.
5. **Email Delivery:** a personalized email is sent to the hospital's email address using Gmail, pulling data from the spreadsheet to customize the message.

## How to set up
1. **Credentials:** set up n8n credentials for Google Sheets and Gmail (OAuth2 recommended).
2. **Google Sheet:** duplicate the provided template Sheet structure. Your sheet must have columns like Hospital Name and Main Email.
3. **Workflow Configuration:** replace the placeholder Google Sheet ID in the Set Configuration node with the ID of your own sheet.

## Requirements
- An n8n instance (cloud or self-hosted).
- A Google account with access to Google Sheets and Gmail.
- The provided Google Sheets template structure.

## How to customize
- **Email Template:** modify the email subject and body in the **Send Gmail Message** node. Use placeholders like `{{ $json["Your Field"] }}` to insert data from your Google Sheet.
- **Data Source:** replace the Google Sheets node with another data source (e.g., Airtable, PostgreSQL) by ensuring it outputs data in a similar JSON format.
- **Output:** instead of Gmail, use the SendBlue node to send an SMS or the Slack node to send a DM.
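The parsing step described above (first line is the region, every following non-empty line is a hospital name) is small enough to sketch as a plain function that could sit in the Code node:

```javascript
// Sketch: split the chat message into { region, hospital } items.
// First line = region; remaining non-empty lines = hospital names.
function parseOutreachMessage(text) {
  const lines = text.split('\n').map((l) => l.trim()).filter(Boolean);
  const region = lines[0];
  return lines.slice(1).map((hospital) => ({ region, hospital }));
}
```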
by Vlad Arbatov
## Summary
Every day at a set time, this workflow fetches yesterday's newsletters from Gmail, summarizes each email into concise topics with an LLM, merges all topics, renders a clean HTML digest, and emails it to your inbox.

## What this workflow does
- Triggers on a daily schedule (default 16:00, server time)
- Fetches Gmail messages since yesterday using a custom search query with optional sender filters
- Retrieves and decodes each email's HTML, subject, sender name, and date
- Prompts an LLM (GPT‑4.1‑mini) to produce a consistent JSON summary of topics per email
- Merges topics from all emails into a single list
- Renders a styled HTML email with enumerated items
- Sends the HTML digest to a specified recipient via Gmail

## Apps and credentials
- Gmail OAuth2: Gmail account (read and send)
- OpenAI: OpenAI account

## Typical use cases
- Daily/weekly newsletter rollups delivered as one email
- Curated digests from specific media or authors
- Team briefings that are easy to read and forward

## How it works (node-by-node)
- **Schedule Trigger:** fires at the configured hour (default 16:00).
- **Get many messages** (Gmail → getAll, `returnAll: true`): uses a filter like
  `=(from:@.com) OR (from:@.com) OR (from:@.com -"__") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}`
  and returns a list of message IDs from the past day.
- **Loop Over Items** (Split in Batches): iterates through each message ID.
- **Get a message** (Gmail → get): retrieves the full message/payload for the current email.
- **Get message data** (Code): extracts the HTML from Gmail's MIME parts, normalizes the sender to just the display name, formats the date as DD.MM.YYYY, and passes html, subject, from, and date forward.
- **Clean** (Code): converts DD.MM.YYYY → MM.DD (for prompt brevity) and passes html, subject, from, and date to the LLM.
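The date handling just described (format as DD.MM.YYYY, then shorten to MM.DD for the prompt) can be sketched as two plain helpers:

```javascript
// Sketch of the date handling in the "Get message data" and "Clean" nodes.
// formatDate: Gmail internalDate (ms epoch) -> "DD.MM.YYYY"
function formatDate(msEpoch) {
  const d = new Date(Number(msEpoch));
  const pad = (n) => String(n).padStart(2, '0');
  return `${pad(d.getUTCDate())}.${pad(d.getUTCMonth() + 1)}.${d.getUTCFullYear()}`;
}

// shortenDate: "DD.MM.YYYY" -> "MM.DD" (keeps the LLM prompt brief)
function shortenDate(ddmmyyyy) {
  const [dd, mm] = ddmmyyyy.split('.');
  return `${mm}.${dd}`;
}
```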
- **Message a model** (OpenAI, model: gpt‑4.1‑mini, JSON output): the prompt instructs the model to:
  - Produce JSON: `{ "topics": [ { "title", "descr", "subject", "from", "date" } ] }`
  - Split multi-news blocks into separate topics
  - Combine or ignore specific blocks for particular senders (placeholders `__`)
  - Keep the subject untranslated; other values in the `__` language
  It injects subject/from/date/html from the current email.
- **Loop Over Items** (continues): processes all emails for the time window.
- **Merge** (Code): flattens the topics arrays from all processed emails into one combined topics list.
- **Create template** (Code): builds a complete HTML email: enumerated items with title and a one-line description, the original subject and "from — date", safely escaped HTML with preserved line breaks, and inline, email-friendly styles.
- **Send a message** (Gmail → send): sends the final HTML to your recipient with a custom subject.

## Node map

| Node | Type | Purpose |
|---|---|---|
| Schedule Trigger | Trigger | Run at a specific time each day |
| Get many messages | Gmail (getAll) | Search emails since yesterday with filters |
| Loop Over Items | Split in Batches | Iterate messages one-by-one |
| Get a message | Gmail (get) | Fetch full message payload |
| Get message data | Code | Extract HTML/subject/from/date; normalize sender and date |
| Clean | Code | Reformat date and forward fields to LLM |
| Message a model | OpenAI | Summarize email into JSON topics |
| Merge | Code | Merge topics from all emails |
| Create template | Code | Render a styled HTML email digest |
| Send a message | Gmail (send) | Deliver the digest email |

## Before you start
- Connect Gmail OAuth2 in n8n (ensure it has both read and send permissions)
- Add your OpenAI API key
- Import the provided workflow JSON into n8n

## Setup instructions

### 1) Schedule
Schedule Trigger node: set your preferred hour (server time). Default is 16:00.
### 2) Gmail
Get many messages: adjust `filters.q` to your senders/labels and window. Example:
`=(from:news@publisher.com) OR (from:briefs@media.com -"promo") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}`
You can use `label:` or `category:` to narrow the scope.

Send a message:
- `sendTo` = your email
- `subject` = your subject line
- `message` = `{{ $json.htmlBody }}` (already produced by Create template)

The HTML body uses inline styles for broad email client support.

### 3) OpenAI
Message a model:
- Model: gpt‑4.1‑mini (swap to gpt‑4o‑mini or your preferred model)
- Update the prompt placeholders: `__ language` → your target language; `__ sender rules` → special cases (combine blocks, ignore sections)

## How to use
The workflow runs daily at the scheduled time, compiling a digest from yesterday's emails. You'll receive one HTML email with all topics neatly listed. Adjust the time window or filters to change what gets included.

## Customization ideas
- Time window: `after: {{ $now.minus({ days: X }) }}` and/or add `before:`
- Filter by labels: `q = label:Newsletters after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}`
- Language: set the `__` language in the LLM prompt
- Template: edit "Create template" to add a header, footer, hero section, or logo/branding
- Links: parse links from the HTML (add an HTML parser step in "Get message data")
- Subject line: make it dynamic, e.g., "Digest for {{ $now.toFormat('dd.MM.yyyy') }}"
- Sender: use a dedicated Gmail account or alias for deliverability and separation

## Limits and notes
- Gmail's size limit for outgoing emails is ~25 MB; large digests may need pruning
- LLM usage incurs cost and latency proportional to email size and count
- HTML rendering varies across clients; inline styles are used for compatibility
- The schedule uses the n8n server's timezone; adjust if your server runs in a different TZ

## Privacy and safety
- Emails are sent to OpenAI for summarization; ensure this aligns with your data policies
- Limit the Gmail search scope to only the newsletters you want processed
- Avoid including sensitive emails in the search window

## Sample output (email body)
1. **Title 1**
   One-sentence description
   Original Subject → Sender — DD.MM.YYYY
2. **Title 2**
   One-sentence description
   Original Subject → Sender — DD.MM.YYYY

## Tips and troubleshooting
- No emails found? Check `filters.q` and the time window (`after:`)
- Model returns empty JSON? Simplify the prompt or try another model
- Odd characters in output? The template escapes HTML and preserves line breaks; verify your input encoding
- Delivery issues? Use a verified sender, set a clear subject, and avoid spammy keywords

## Tags
gmail, openai, llm, newsletters, digest, summarization, email, automation

## Changelog
- v1: initial release with scheduled time window, sender filters, LLM summarization, topic merging, and HTML email template rendering
by Yehor EGMS
# 🎙️ n8n Workflow: Voice Message Transcription with Access Control

This n8n workflow enables automated transcription of voice messages in Telegram groups with built-in access control and intelligent fallback mechanisms. It's designed for teams that need to convert audio messages to text while maintaining security and handling various audio formats.

## 📌 Section 1: Trigger & Access Control

### ⚡ Receive Message (Telegram Trigger)
- **Purpose:** captures incoming messages from users in your Telegram group.
- **How it works:** when a user sends a message (voice, audio, or text), the workflow is triggered and the sender's information is captured.
- **Benefit:** serves as the entry point for the entire transcription pipeline.

### 🔐 Sender Verification
- **Purpose:** validates whether the sender has permission to use the transcription service.
- **Logic:** check the sender against the authorized users list. If authorized, proceed to the next step; if not, send an "Access denied" message and stop the workflow.
- **Benefit:** prevents unauthorized users from consuming AI credits and accessing the service.

## 📌 Section 2: Message Type Detection

### 🎵 Audio/Voice Recognition
- **Purpose:** identifies the type of incoming message and audio format.
- **Why it's needed:** Telegram handles different audio types with different statuses: voice notes (voice messages), audio files (standard audio attachments), and text messages (no audio content).
- **Process:** check whether the message contains audio/voice content. If no audio file is detected, send a "No audio file found" message; if audio is detected, assign the file ID and proceed to format detection.

### 🧩 File Type Determination (IF Node)
- **Purpose:** identifies the specific audio format for proper processing.
- **Supported formats:** OGG (Telegram voice messages), MPEG/MP3, MP4/M4A, and other audio formats.
- **Logic:** if the format is recognized, proceed to transcription; if not, send a "File format not recognized" message.
- **Benefit:** ensures compatibility with transcription services by validating file types upfront.

## 📌 Section 3: Primary Transcription (OpenAI)

### 📥 File Download
- **Purpose:** downloads the audio file from Telegram for processing.

### 🤖 OpenAI Transcription
- **Purpose:** transcribes audio to text using OpenAI's Whisper API.
- **Why OpenAI:** high-quality transcription with cost-effective pricing.
- **Process:** send the downloaded file to the OpenAI transcription API and simultaneously send a "Transcription started" notification. If successful, assign the transcribed text to a variable and proceed; if an error occurs, trigger the fallback mechanism.
- **Benefit:** fast, accurate transcription with multi-language support.

## 📌 Section 4: Fallback Transcription (Gemini)

### 🛟 Gemini Backup Transcription
- **Purpose:** provides a safety net if OpenAI transcription fails.
- **Process:** receives the file only if the OpenAI node returns an error, downloads and processes the same audio file, sends it to Google Gemini for transcription, and assigns the result to the same text variable.
- **Benefit:** ensures high reliability; if one service fails, the other takes over automatically.

## 📌 Section 5: Message Length Handling

### 📏 Text Length Check (IF Node)
- **Purpose:** determines whether the transcribed text exceeds Telegram's character limit.
- **Logic:** if the text is ≤ 4,000 characters, send it directly to Telegram; if it is longer, split it into chunks.
- **Why:** Telegram has a 4,000-character limit per message.

### ✂️ Text Splitting (Code Node)
- **Purpose:** breaks long transcriptions into segments of at most 4,000 characters.
- **Process:** receives text longer than 4,000 characters, splits it into chunks of ≤ 4,000 characters while maintaining readability by avoiding mid-word breaks, and outputs an array of text chunks.

## 📌 Section 6: Response Delivery

### 💬 Send Transcription (Telegram Node)
- **Purpose:** delivers the transcribed text back to the Telegram group.
- **Behavior:** short messages are sent as a single message; long messages are sent as multiple sequential messages.
- **Benefit:** users receive complete transcriptions regardless of length, ensuring no content is lost.
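The text-splitting Code node described in Section 5 could look roughly like this (a sketch, assuming simple space-based word boundaries; the chunk limit matches the 4,000-character figure above):

```javascript
// Sketch of the "Text Splitting" Code node: cut long transcripts into
// chunks of at most `limit` characters, breaking on spaces when possible.
function splitText(text, limit = 4000) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    // Prefer the last space inside the window to avoid mid-word breaks.
    let cut = rest.lastIndexOf(' ', limit);
    if (cut <= 0) cut = limit; // no space found: hard cut
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).trimStart();
  }
  if (rest) chunks.push(rest);
  return chunks;
}
```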
## 📊 Workflow Overview Table

| Section | Node Name | Purpose |
|---------|-----------|---------|
| 1. Trigger | Receive Message | Captures incoming Telegram messages |
| 2. Access Control | Sender Verification | Validates user permissions |
| 3. Detection | Audio/Voice Recognition | Identifies message type and audio format |
| 4. Validation | File Type Check | Verifies supported audio formats |
| 5. Download | File Download | Retrieves audio file from Telegram |
| 6. Primary AI | OpenAI Transcription | Main transcription service |
| 7. Fallback AI | Gemini Transcription | Backup transcription service |
| 8. Processing | Text Length Check | Determines if splitting is needed |
| 9. Splitting | Code Node | Breaks long text into chunks |
| 10. Response | Send to Telegram | Delivers transcribed text |

## 🎯 Key Benefits
- 🔐 **Secure access control:** only authorized users can trigger transcriptions
- 💰 **Cost management:** prevents unauthorized credit consumption
- 🎵 **Multi-format support:** handles various Telegram audio types
- 🛡️ **High reliability:** dual-AI fallback ensures transcription success
- 📱 **Telegram-optimized:** automatically handles message length limits
- 🌍 **Multi-language:** both AI services support numerous languages
- ⚡ **Real-time notifications:** users receive status updates during processing
- 🔄 **Automatic chunking:** long transcriptions are intelligently split
- 🧠 **Smart routing:** files are processed through the optimal path
- 📊 **Complete delivery:** no content loss regardless of transcription length

## 🚀 Use Cases
- **Team meetings:** transcribe voice notes from team discussions
- **Client communications:** convert client voice messages to searchable text
- **Documentation:** create text records of verbal communications
- **Accessibility:** make audio content accessible to all team members
- **Multi-language teams:** leverage AI transcription for various languages
by vinci-king-01
## How it works
This workflow automatically monitors supplier health and supply chain risks, providing real-time alerts and daily reports to procurement teams.

### Key Steps
1. **Daily Risk Check:** runs the workflow every morning at 9:00 AM to assess supplier health.
2. **Multi-Source Data Collection:** scrapes supplier websites, investor relations pages, and industry news for risk indicators.
3. **AI-Powered Risk Analysis:** uses ScrapeGraphAI to extract and analyze financial status, operational issues, and regulatory problems.
4. **Risk Scoring Engine:** calculates comprehensive risk scores (1-10) based on multiple factors, including financial health, operational disruptions, and news sentiment.
5. **Alternative Supplier Discovery:** automatically searches for backup suppliers when high-risk situations are detected.
6. **Smart Alert System:** routes notifications based on risk level: immediate alerts for high-risk suppliers, daily summaries for normal operations.
7. **Multi-Channel Notifications:** sends alerts via Slack and detailed reports via email to procurement teams.

## Set up steps
Setup time: 10-15 minutes

1. **Configure ScrapeGraphAI credentials:** add your ScrapeGraphAI API key for web scraping capabilities.
2. **Set up Slack integration:** connect your Slack workspace and configure the #procurement-alerts and #supply-chain-updates channels.
3. **Configure email settings:** set up email credentials for detailed reports to procurement teams.
4. **Customize supplier URLs:** update the supplier website URLs to monitor your specific suppliers.
5. **Adjust risk thresholds:** modify the risk scoring parameters based on your industry and risk tolerance.
6. **Set notification preferences:** configure alert conditions and message formatting for your team's needs.
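Purely as an illustration of a weighted 1-10 risk score over the factor families the description names (the weights and factor names below are invented, not the template's actual scoring parameters, which live in the workflow's risk-threshold settings):

```javascript
// Illustrative weighted risk score (1-10). Weights and factor names
// are placeholders, not the template's real parameters.
const weights = { financial: 0.4, operational: 0.35, sentiment: 0.25 };

function riskScore(factors) {
  // Each factor is expected on a 1-10 scale; missing factors default to 5.
  const raw = Object.entries(weights)
    .reduce((sum, [key, w]) => sum + w * (factors[key] ?? 5), 0);
  // Round to one decimal and clamp to the 1-10 range.
  return Math.min(10, Math.max(1, Math.round(raw * 10) / 10));
}
```

Adjusting the thresholds then means tuning the weights (and the cutoff at which the alert branch treats a supplier as "high risk").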
by Davide
This workflow automates the process of extracting structured, usable information from unstructured email messages across multiple platforms. It connects directly to Gmail, Outlook, and IMAP accounts, retrieves incoming emails, and sends their content to an AI-powered parsing agent built on OpenAI GPT models. The AI agent analyzes each email, identifies the relevant details, and returns a clean JSON structure containing key fields:

- **From:** sender's email address
- **To:** recipient's email address
- **Subject:** email subject line
- **Summary:** short AI-generated summary of the email body

The extracted information is then automatically inserted into an n8n Data Table, creating a structured database of email metadata and summaries ready for indexing, reporting, or integration with other tools.

## Key Benefits
- ✅ **Full Automation:** eliminates manual reading and data entry from incoming emails.
- ✅ **Multi-Source Integration:** handles data from different email providers seamlessly.
- ✅ **AI-Driven Accuracy:** uses advanced language models to interpret complex or unformatted content.
- ✅ **Structured Storage:** creates a standardized, query-ready dataset from previously unstructured text.
- ✅ **Time Efficiency:** processes emails in real time, improving productivity and response speed.
- ✅ **Scalability:** easily extendable to handle additional sources or extract more data fields.

## How it works
This workflow automates the transformation of unstructured email data into a structured, queryable format. It operates through a series of connected steps:

1. **Email Triggering:** the workflow is initiated by one of three email triggers (Gmail, Microsoft Outlook, or a generic IMAP account), which constantly monitor for new incoming emails.
2. **AI-Powered Parsing & Structuring:** when a new email is detected, its raw, unstructured content is passed to a central "Parsing Agent." This agent uses a specified OpenAI language model to intelligently analyze the email text.
3. **Data Extraction & Standardization:** following a predefined system prompt, the AI agent extracts key information from the email, such as the sender, recipient, subject, and a generated summary. It then forces the output into a strict JSON structure using a "Structured Output Parser" node, ensuring data consistency.
4. **Data Storage:** finally, the clean, structured data (the from, to, subject, and summarize fields) is inserted as a new row into the specified n8n Data Table, creating a searchable and reportable database of email information.

## Set up steps
1. **Prepare the Data Table:** create a new Data Table within n8n. Define string columns with the following names: From, To, Subject, and Summary.
2. **Configure Email Credentials:** set up the credential connections for the email services you wish to use (Gmail OAuth2, Microsoft Outlook OAuth2, and/or IMAP). Ensure the accounts have the necessary permissions to read emails.
3. **Configure AI Model Credentials:** set up the OpenAI API credential with a valid API key. The configured model can be changed in the respective nodes if needed.
4. **Connect the Nodes:** the workflow canvas is already wired correctly. Visually confirm that the email triggers are connected to the "Parsing Agent," which is connected to the "Insert row" (Data Table) node. Also ensure the "OpenAI Chat Model" and "Structured Output Parser" are connected to the "Parsing Agent" as its AI model and output parser, respectively.
5. **Activate the Workflow:** save the workflow and toggle the "Active" switch to ON. The triggers will begin polling for new emails according to their schedule (e.g., every minute), and the automation will start processing incoming messages.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
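A sketch of the shape the Structured Output Parser enforces, as a small validation function. Field names mirror the ones the description mentions (`from`, `to`, `subject`, `summarize`); the template's actual schema may name them differently:

```javascript
// Sketch: check that the parsing agent's output matches the expected shape.
// Field names follow the workflow description; the real schema may differ.
const requiredFields = ['from', 'to', 'subject', 'summarize'];

function isValidParsedEmail(output) {
  return requiredFields.every(
    (f) => typeof output[f] === 'string' && output[f].length > 0
  );
}
```

A guard like this between the agent and the "Insert row" node would keep malformed model outputs from landing in the Data Table.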