by Automate With Marc
Step-By-Step AI Stock Market Research Agent (Beginner)

Build your own AI-powered daily stock market digest — automatically researched, summarized, and delivered straight to your inbox. This beginner-friendly n8n workflow shows how to combine OpenAI GPT-5, the Decodo scraping tool, and Gmail to produce a concise daily financial update without writing a single line of code.

🎥 Watch a full tutorial and walkthrough on how to build and customize similar workflows at: https://www.youtube.com/watch?v=DdnxVhUaQd4

## What this template does

Every day, this agent automatically:

- Triggers on schedule (e.g., 9 a.m. daily).
- Uses the Decodo Tool to fetch real market headlines from Bloomberg, CNBC, Reuters, Yahoo Finance, etc.
- Passes the information to GPT-5, which summarizes key events into a clean daily report covering:
  - Major indices (S&P 500, Nasdaq, Dow)
  - Global markets (Europe & Asia)
  - Sector trends and earnings
  - Congressional trading activity
  - Major financial and regulatory news
- Emails the digest to you in a neat, ready-to-read HTML format.

## Why it's useful (for beginners)

- Zero coding: everything is configured through n8n nodes.
- Hands-on AI Agent logic: learn how a language-model node, memory, and a web-scraping tool work together.
- Practical use case: a real-world agent that automates market intelligence for investors, creators, or business analysts.

## Requirements

- OpenAI API Key (GPT-4/5 compatible)
- Decodo API Key (for market data scraping)
- Gmail OAuth2 Credential (to send the daily digest)

## Credentials to set in n8n

- OpenAI API (Chat Model) → Connect your OpenAI key.
- Decodo API → Paste your Decodo access key.
- Gmail OAuth2 → Connect your Google account and edit the "send to" email address.

## How it works (nodes overview)

- **Schedule Trigger** — Starts the workflow at a preset time (default: daily).
- **AI Research Agent** — Acts as a Stock Market Research Assistant. Uses GPT-5 via the OpenAI Chat Model and the Decodo Tool to fetch real-time data from trusted finance sites. Applies custom system rules for concise summaries and email-ready HTML output.
- **Simple Memory** — Maintains short-term context for clean message passing between nodes.
- **Decodo Tool** — Handles all data scraping and extraction via the AI's tool calls.
- **Gmail Node** — Emails the final daily digest to the user (default subject: "Daily AI News Update").

## Setup (step-by-step)

1. Import the template into n8n.
2. Open each credential node → connect your accounts.
3. In the Gmail node, replace "sendTo" with your email.
4. Adjust the Schedule Trigger → e.g., every weekday at 8:30 a.m.
5. (Optional) Edit the system prompt in the AI Research Agent to focus on different sectors (crypto, energy, tech).
6. Click Execute Workflow once to test — you'll receive an AI-curated digest in your inbox.

## Customization tips

- 🕒 Change frequency: adjust the Schedule Trigger to run multiple times daily or weekly.
- 📰 Add sources: extend the Decodo Tool input with new URLs (e.g., Seeking Alpha, MarketWatch).
- 📈 Switch topic: modify the prompt to track crypto, commodities, or macroeconomic data.
- 💬 Alternative delivery: send the digest via Slack, Telegram, or Notion instead of Gmail.

## Troubleshooting

- 401 errors: verify OpenAI/Decodo credentials.
- Empty output: ensure the Decodo Tool returns valid data; inspect the agent's log.
- Email not sent: confirm the Gmail OAuth2 scope and recipient email.
- Formatting issues: keep the output in HTML mode; avoid Markdown.
by Incrementors
Paste any customer interview or testimonial video URL into a simple form — and the workflow handles everything from there. WayinVideo AI scans the full video and automatically cuts the most impactful moments into up to 5 vertical clips. Each clip is downloaded and saved directly to your Google Drive folder, named and ready to use.

Built for marketing teams, agencies, and founders who collect customer stories and need polished clips for ads, websites, and social media — without video editing.

## What This Workflow Does

- **Form-based input** — Collects the video URL, client name, industry, and intended usage through a hosted form anyone on your team can fill out
- **AI clip detection** — Submits the testimonial video to WayinVideo, which automatically finds the highest-impact moments and creates export-ready clips
- **Vertical video output** — Generates clips in 9:16 ratio with AI reframing enabled, making them ready for Instagram Reels, TikTok, and YouTube Shorts with no editing
- **Caption generation** — Adds captions to every clip automatically using the original language and a styled caption template
- **Batch file download** — Downloads each clip as an individual video file directly from the WayinVideo export link
- **Auto-named Drive upload** — Saves every clip to your Google Drive folder using the AI-generated clip title as the filename — no manual renaming needed

## Setup Requirements

Tools you'll need:

- Active n8n instance (self-hosted or n8n Cloud)
- WayinVideo account with API access (wayin.ai)
- Google account with Google Drive OAuth2 access

Estimated setup time: 5–10 minutes

## Step-by-Step Setup

### 1. Get Your WayinVideo API Key

WayinVideo is the AI engine that finds and extracts the best testimonial moments from your video.

1. Go to WayinVideo and log in or create an account
2. Navigate to your Dashboard → API section
3. Copy your Bearer token
4. Open the "2. WayinVideo — Submit Clipping Task" node in n8n
5. Find the Authorization header value and replace YOUR_WAYIN_API_KEY_HERE with your token
6. Open the "4. WayinVideo — Get Clip Results" node and replace the same placeholder there

> ⚠️ This API key appears in two nodes — node 2 and node 4. You must replace it in both. If you only update one, the result-fetching step will fail with an authentication error.

### 2. Connect Google Drive (OAuth2)

1. In n8n, go to Credentials → Add Credential → Google Drive OAuth2 API
2. Follow the Google authentication flow to grant access
3. Open the "7. Google Drive — Upload Clip" node
4. Select your newly created credential from the dropdown

### 3. Set Your Google Drive Folder ID

1. Open Google Drive in your browser and navigate to the folder where you want clips saved
2. Look at the URL bar — copy the string of characters that appears after /folders/
3. Open the "7. Google Drive — Upload Clip" node in n8n
4. Replace YOUR_GOOGLE_DRIVE_FOLDER_ID with the ID you just copied
5. Update YOUR_FOLDER_NAME with a recognisable label for your own reference

### 4. Activate the Workflow

1. Save the workflow in n8n
2. Toggle it Active using the switch at the top of the editor
3. Open the form URL shown in the "1. Form — Testimonial Video + Details" trigger node settings
4. Submit a test testimonial video URL to confirm everything runs correctly

## How It Works (Step by Step)

### Step 1 — Form: Collect Testimonial Details

The workflow starts when someone fills out a hosted n8n form. You provide four pieces of information: the testimonial video URL, the client or customer name, the industry or niche, and where the clips will be used (such as ads, a website, or social media). All this data is passed automatically to the next step.

### Step 2 — Submit Clipping Task to WayinVideo

The workflow sends your video URL to the WayinVideo API. Along with the URL, it sends settings including the client name as the project label, a target clip length of 30–60 seconds, a maximum of 5 clips, HD 720p resolution, captions turned on, and vertical 9:16 reframing enabled. WayinVideo processes these settings and returns a Job ID used to retrieve results later.
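As an illustration of the Step 2 request settings, here is a minimal sketch of how the payload could be assembled in an n8n Code node. The field names mirror the values quoted in this template ("limit", "ratio", "target_duration", "enable_ai_reframe"); any other names, and the exact WayinVideo schema, are assumptions to verify against the API docs in your dashboard.

```javascript
// Hypothetical sketch of the clipping-task payload sent to WayinVideo in Step 2.
// Field names follow the values quoted later in this template; verify them
// against the official WayinVideo API documentation before relying on this.
function buildClippingPayload(form) {
  return {
    video_url: form.videoUrl,          // from the n8n form trigger
    project_name: form.clientName,     // client name used as the project label
    limit: 5,                          // maximum number of clips
    target_duration: 'DURATION_30_60', // 30-60 second clips
    ratio: 'RATIO_9_16',               // vertical output
    enable_ai_reframe: true,           // AI reframing for vertical video
    enable_caption: true,              // captions on (field name assumed)
  };
}

const payload = buildClippingPayload({
  videoUrl: 'https://example.com/testimonial.mp4',
  clientName: 'Acme Corp',
});
```

An HTTP Request node would then POST this payload with your `Authorization: Bearer …` header and receive the Job ID back.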
### Step 3 — Wait 90 Seconds

The workflow pauses for 90 seconds. This gives WayinVideo time to analyse the video and generate clips before the next step attempts to retrieve them. Skipping or shortening this wait on longer videos may result in an empty response.

> ⚠️ If your testimonial videos are longer than 20–30 minutes, 90 seconds may not be enough processing time. Increase the wait duration in "3. Wait — 90 Seconds" to 180 or 240 seconds to account for longer videos.

### Step 4 — Fetch Clip Results

Using the Job ID from Step 2, the workflow calls the WayinVideo results endpoint to retrieve the completed clips. If WayinVideo has finished processing, the response contains the full list of clips with titles, export links, engagement scores, descriptions, and timestamps.

### Step 5 — Extract Each Clip

A code step reads the clips array from the API response and splits it into individual items — one per clip. For each clip, it extracts the title, export link, score, tags, description, and start and end timestamps. Each clip then flows through the remaining steps independently.

### Step 6 — Download the Clip File

For each clip, the workflow fetches the export link and downloads the video file as a binary attachment. This happens one clip at a time, in sequence.

### Step 7 — Upload to Google Drive

Each downloaded clip file is uploaded to your configured Google Drive folder. The filename is set automatically using the AI-generated clip title from Step 5. Your Drive folder stays organised without any manual work.
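The Step 5 "Extract Each Clip" logic can be sketched as an n8n Code node. This is a hedged approximation: the field names in the WayinVideo response (`clips`, `export_link`, `start_time`, `end_time`) are assumptions based on the description above, so check them against a real response.

```javascript
// Sketch of the "Extract Each Clip" Code node (Step 5).
// Assumes the results response has a `clips` array with the fields
// described above; adjust names to match the real WayinVideo payload.
function extractClips(response) {
  const clips = response.clips || [];
  return clips.map((clip) => ({
    json: {
      title: clip.title,
      exportLink: clip.export_link, // field name assumed
      score: clip.score,
      tags: clip.tags,
      description: clip.description,
      start: clip.start_time,       // field name assumed
      end: clip.end_time,           // field name assumed
    },
  }));
}

// Inside an n8n Code node this would typically end with:
// return extractClips($input.first().json);
```

Returning one `{ json: … }` object per clip is what makes each clip flow through the download and upload steps independently.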
## Key Features

- ✅ **Zero editing required** — WayinVideo selects the best moments automatically using AI engagement scoring, no timeline scrubbing needed
- ✅ **Vertical-ready output** — AI reframing and 9:16 ratio are enabled by default, making every clip mobile and social media ready straight from Drive
- ✅ **Auto-captioned clips** — Captions are added to every clip in the original language using a styled caption template — no subtitle software required
- ✅ **Form-based submission** — Any team member can submit a testimonial video without accessing n8n or any other tool
- ✅ **AI-generated filenames** — Each clip is named using the WayinVideo-generated title, so your Drive folder is always easy to browse
- ✅ **Client-labelled projects** — The client name from the form is sent to WayinVideo as the project label, keeping clip jobs organised in your WayinVideo dashboard
- ✅ **Batch clip processing** — All clips from a single video are downloaded and uploaded in one workflow run — no manual repeat steps

## Customisation Options

- **Increase the clip limit** — In the "2. WayinVideo — Submit Clipping Task" node, change "limit": 5 to a higher number to extract more clips per video — useful for longer testimonial recordings.
- **Switch to landscape output** — In the same Submit node, change "ratio": "RATIO_9_16" to "RATIO_16_9" and set "enable_ai_reframe" to false to generate widescreen clips for YouTube, website embeds, or sales decks.
- **Change clip length** — Modify "target_duration" in node 2 from "DURATION_30_60" to "DURATION_15_30" for shorter clips suited to Instagram Stories or TikTok, or "DURATION_60_90" for longer testimonial segments.
- **Organise clips by client in Drive** — Add a Google Drive node before the upload step to create a new subfolder named after the client — pulling the client name directly from the form — and point the upload to that folder instead of a single shared folder.
- **Notify your team when clips are ready** — Add a Gmail or Slack node after the "7. Google Drive — Upload Clip" step to send a message with the clip title and Drive link each time a new clip is saved.
- **Add a retry loop for long videos** — Replace the single 90-second wait with a loop — using a Wait node, an HTTP Request poll, and an IF node — so the workflow keeps checking every 60 seconds until all clips are confirmed ready, regardless of video length.

## Troubleshooting

**API key not working / WayinVideo returns an authentication error:**

- Confirm you have replaced YOUR_WAYIN_API_KEY_HERE in both node 2 and node 4 — not just one
- Check that the Authorization header value starts with Bearer followed by your key, with no extra spaces or line breaks
- Verify your WayinVideo account is active and has available processing credits

**Clip files downloading but not saving correctly to Drive:**

- Check the n8n execution logs for binary data errors in the "6. HTTP — Download Clip File" step — some WayinVideo export links have a short expiry window
- If the export link has expired, rerun the workflow with the same video URL to generate a fresh set of export links

## Support

Need help setting this up or want a custom version built for your team or agency?

📧 Email: info@incrementors.com
🌐 Website: https://www.incrementors.com/contact-us/
by David Olusola
📧 Auto-Send AI Follow-Up Emails to Zoom Attendees

This workflow automatically emails personalized follow-ups to every Zoom meeting participant once the meeting ends.

## ⚙️ How It Works

1. Zoom Webhook → Captures the meeting.ended event + participant list.
2. Normalize Data → Extracts names, emails, and the transcript (if available).
3. AI (GPT-4) → Drafts short, professional follow-up emails.
4. Gmail → Sends a thank-you + recap email to each participant.

## 🛠️ Setup Steps

1. **Zoom App** — Enable the meeting.ended event, include participant email/name in the webhook payload, and paste the workflow webhook URL.
2. **Gmail** — Connect Gmail OAuth in n8n. Emails are sent automatically per participant.
3. **OpenAI** — Add your OpenAI API key. Uses GPT-4 for personalized drafting.

## 📊 Example Output

Email Subject: Follow-Up: Marketing Strategy Session

Email Body:

> Hi Sarah,
>
> Thank you for joining our Marketing Strategy Session today. Key points we discussed:
>
> - Campaign launch next Monday
> - Budget allocation approved
> - Need design assets by Thursday
>
> Next steps: I'll follow up with the creative team and share the updated timeline.
>
> Best,
> David

⚡ With this workflow, every attendee feels valued and aligned after each meeting.
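The "Normalize Data" step could look roughly like the sketch below. The payload shape is an assumption: Zoom's meeting.ended webhook only carries participant details if your Zoom app is configured to include them, so treat every field name here as hypothetical and adjust to your actual payload.

```javascript
// Hedged sketch of the "Normalize Data" step. Assumes a meeting.ended
// webhook payload with a participants array; real payloads vary by
// Zoom app configuration, so the field names below are assumptions.
function normalizeZoomPayload(body) {
  const meeting = (body.payload && body.payload.object) || {};
  const participants = meeting.participants || [];
  return participants
    .filter((p) => p.email) // skip entries without an email address
    .map((p) => ({
      name: p.user_name || 'there',
      email: p.email,
      topic: meeting.topic || 'our meeting',
    }));
}
```

Each returned item (name, email, meeting topic) then feeds one GPT-4 drafting call and one Gmail send, so every participant gets an individually addressed email.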
by Matt Chong
## Who is this for?

- Teams using Gmail and Slack who want to streamline email handling.
- Customer support, sales, and operations teams that want emails sorted by topic and priority automatically.
- Anyone tired of manually triaging customer emails.

## What does it solve?

- Stops important messages from slipping through the cracks.
- Automatically identifies the nature and urgency of incoming emails.
- Routes emails to the right Slack channel with a clear, AI-generated summary.

## How it works

1. The workflow watches for unread emails in your Gmail inbox.
2. It fetches the full email content and passes it to OpenAI for classification.
3. The AI returns structured JSON with the email's category, priority, summary, and sender.
4. Based on the AI result, it assigns a label and a Slack channel.
5. A message is sent to the right Slack channel with the details.

## How to set up?

1. Connect credentials: Gmail (OAuth2), Slack (OAuth2), OpenAI (API key).
2. Adjust email polling: open the Gmail Trigger node and set how frequently it should check for new emails.
3. Verify routing settings: in the "Routing Map" node, update the Slack channel IDs for each category if needed.
4. Customize AI behavior (optional): tweak the AI Agent prompt to better match your internal categorization rules.

## How to customize this workflow to your needs

- **Add more categories:** Update the AI prompt and the schema in the "Structured Output Parser."
- **Change Slack formatting:** Modify the message text in the Slack node to include links, emojis, or mentions.
- **Use different routing logic:** Expand the Routing Map to assign based on keywords, domains, or even sentiment.
- **Add escalation workflows:** Trigger follow-up actions for high-priority or complaint emails.
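As a sketch of how the "Routing Map" step could translate the AI's structured JSON into a Slack message, assuming example category names and channel IDs (both are illustrative placeholders, not values from the actual template):

```javascript
// Hypothetical routing map: AI category -> Slack channel ID.
// Category names and channel IDs below are illustrative placeholders.
const ROUTING_MAP = {
  support:   'C0SUPPORT01',
  sales:     'C0SALES0001',
  billing:   'C0BILLING01',
  complaint: 'C0ESCALATE1',
};

function routeEmail(classification) {
  // Fall back to a catch-all triage channel for unknown categories.
  const channel = ROUTING_MAP[classification.category] || 'C0TRIAGE001';
  return {
    channel,
    text: `*${classification.priority.toUpperCase()}* | ${classification.sender}\n${classification.summary}`,
  };
}
```

Expanding the routing logic (keywords, domains, sentiment) then just means replacing the map lookup with richer conditions.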
by Mohammad
Telegram ticket intake and status tracking with Postgres

## Who's it for

Anyone running support requests through Telegram, email, webhooks, and so on who needs a lightweight ticketing system without paying Zendesk prices. Ideal for small teams, freelancers, or businesses that want tickets logged in a structured database (Postgres) and tracked automatically. This template uses Telegram since it's the most convenient channel.

## How it works / What it does

This workflow turns Telegram into a support desk:

- Receives new requests via a Telegram bot command.
- Creates a ticket in a Postgres database with a correlation ID, requester details, and status.
- Auto-confirms back to the requester with the ticket ID.
- Provides ticket updates (status changes, resolutions) when queried.
- Keeps data clean using dedupe keys and controlled input handling.

## How to set up

1. Create a Telegram bot using @BotFather and grab the token.
2. Connect your Postgres database to n8n and create a tickets table:

```sql
CREATE TABLE tickets (
  id BIGSERIAL PRIMARY KEY,
  correlation_id UUID,
  source TEXT,
  external_id TEXT,
  requester_name TEXT,
  requester_email TEXT,
  requester_phone TEXT,
  subject TEXT,
  description TEXT,
  status TEXT,
  priority TEXT,
  dedupe_key TEXT,
  chat_id TEXT,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);
```

3. Add your Telegram and Postgres credentials in n8n (via the Credentials tab, not hardcoded).
4. Import the workflow JSON and replace the placeholder credentials with yours.
5. Test by sending /new in Telegram and follow the prompts.

## Requirements

- n8n (latest version recommended)
- Telegram bot token
- Postgres instance (local, Docker, or cloud)

## How to customize the workflow

- Change database fields if you need more requester info.
- Tweak the Switch node and commands for multiple status types.
- Extend with Slack, Discord, or email nodes for broader notifications.
- Integrate with external systems (CRM, project management) by adding more branches.
by Hans Wilhelm Radam
## Description

This workflow automates personalized email outreach to a list of hospitals. It uses a chat-based interface to accept a region and a list of hospital names, looks up their specific contact details from a structured Google Sheet, and sends a tailored email via Gmail.

## Who's it for

This template is perfect for healthcare startups, medical device sales representatives, or IT consultants who need to conduct targeted outreach to hospital administrators. It's designed for anyone looking to automate a personalized, region-specific email campaign without manual data entry.

## How it works

1. **Trigger:** You provide input via a chat message. The first line is the region (e.g., LUZON), and each subsequent line is a hospital name.
2. **Parsing:** A Code node splits your message into a structured list of items for processing.
3. **Batching:** The workflow processes each hospital one by one for reliable execution.
4. **Data Lookup:** Based on the region, the workflow queries the corresponding sheet in a Google Sheets document to find the hospital's specific contact details.
5. **Email Delivery:** A personalized email is sent to the hospital's email address using Gmail, pulling data from the spreadsheet to customize the message.

## How to set up

1. **Credentials:** Set up n8n credentials for Google Sheets and Gmail (OAuth2 recommended).
2. **Google Sheet:** Duplicate the provided template Sheet structure. Your sheet must have columns like Hospital Name and Main Email.
3. **Workflow Configuration:** Replace the placeholder Google Sheet ID in the Set Configuration node with the ID of your own sheet.

## Requirements

- An n8n instance (cloud or self-hosted).
- A Google account with access to Google Sheets and Gmail.
- The provided Google Sheets template structure.

## How to customize

- **Email Template:** Modify the email subject and body in the **Send Gmail Message** node. Use placeholders like {{ $json["Your Field"] }} to insert data from your Google Sheet.
- **Data Source:** Replace the Google Sheets node with another data source (e.g., Airtable, PostgreSQL) by ensuring it outputs data in a similar JSON format.
- **Output:** Instead of Gmail, use the SendBlue node to send an SMS or the Slack node to send a DM.
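The parsing step described above (first line = region, every following line = one hospital name) could look roughly like this in an n8n Code node. This is a sketch, not the template's exact code:

```javascript
// Sketch of the chat-message parsing Code node: the first line is the
// region, each following non-empty line is one hospital name.
function parseChatInput(message) {
  const lines = message
    .split('\n')
    .map((l) => l.trim())
    .filter((l) => l.length > 0);
  const region = lines[0] || '';
  return lines.slice(1).map((hospital) => ({
    json: { region, hospital },
  }));
}
```

Emitting one `{ json: { region, hospital } }` item per line is what lets the downstream batching node process each hospital individually.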
by Vlad Arbatov
## Summary

Every day at a set time, this workflow fetches yesterday's newsletters from Gmail, summarizes each email into concise topics with an LLM, merges all topics, renders a clean HTML digest, and emails it to your inbox.

## What this workflow does

- Triggers on a daily schedule (default 16:00, server time)
- Fetches Gmail messages since yesterday using a custom search query with optional sender filters
- Retrieves and decodes each email's HTML, subject, sender name, and date
- Prompts an LLM (GPT-4.1-mini) to produce a consistent JSON summary of topics per email
- Merges topics from all emails into a single list
- Renders a styled HTML email with enumerated items
- Sends the HTML digest to a specified recipient via Gmail

## Apps and credentials

- Gmail OAuth2: Gmail account (read and send)
- OpenAI: OpenAI account

## Typical use cases

- Daily/weekly newsletter rollups delivered as one email
- Curated digests from specific media or authors
- Team briefings that are easy to read and forward

## How it works (node-by-node)

1. **Schedule Trigger** — Fires at the configured hour (default 16:00).
2. **Get many messages** (Gmail → getAll, returnAll: true) — Uses a filter like `=(from:@.com) OR (from:@.com) OR (from:@.com -"__") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}` and returns a list of message IDs from the past day.
3. **Loop Over Items** (Split in Batches) — Iterates through each message ID.
4. **Get a message** (Gmail → get) — Retrieves the full message/payload for the current email.
5. **Get message data** (Code) — Extracts HTML from Gmail's MIME parts, normalizes the sender to just the display name, formats the date as DD.MM.YYYY, and passes html, subject, from, date forward.
6. **Clean** (Code) — Converts DD.MM.YYYY → MM.DD (for prompt brevity) and passes html, subject, from, date to the LLM.
7. **Message a model** (OpenAI, model: gpt-4.1-mini, JSON output) — The prompt instructs the model to produce JSON of the form `{ "topics": [ { "title", "descr", "subject", "from", "date" } ] }`, split multi-news blocks into separate topics, combine or ignore specific blocks for particular senders (placeholders __), and keep the subject untranslated while other values use the __ language. It injects subject/from/date/html from the current email.
8. **Loop Over Items** (continues) — Processes all emails for the time window.
9. **Merge** (Code) — Flattens the topics arrays from all processed emails into one combined topics list.
10. **Create template** (Code) — Builds a complete HTML email: enumerated items with title and a one-line description, the original subject and "from — date", safely escaped HTML with preserved line breaks, and inline, email-friendly styles.
11. **Send a message** (Gmail → send) — Sends the final HTML to your recipient with a custom subject.

## Node map

| Node | Type | Purpose |
|---|---|---|
| Schedule Trigger | Trigger | Run at a specific time each day |
| Get many messages | Gmail (getAll) | Search emails since yesterday with filters |
| Loop Over Items | Split in Batches | Iterate messages one-by-one |
| Get a message | Gmail (get) | Fetch full message payload |
| Get message data | Code | Extract HTML/subject/from/date; normalize sender and date |
| Clean | Code | Reformat date and forward fields to LLM |
| Message a model | OpenAI | Summarize email into JSON topics |
| Merge | Code | Merge topics from all emails |
| Create template | Code | Render a styled HTML email digest |
| Send a message | Gmail (send) | Deliver the digest email |

## Before you start

- Connect Gmail OAuth2 in n8n (ensure it has both read and send permissions)
- Add your OpenAI API key
- Import the provided workflow JSON into n8n

## Setup instructions

1) Schedule — In the Schedule Trigger node, set your preferred hour (server time). Default is 16:00.
2) Gmail

Get many messages: adjust filters.q to your senders/labels and window. Example:

`=(from:news@publisher.com) OR (from:briefs@media.com -"promo") after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}`

You can use label: or category: to narrow the scope.

Send a message:
- sendTo = your email
- subject = your subject line
- message = set to {{ $json.htmlBody }} (already produced by Create template)

The HTML body uses inline styles for broad email client support.

3) OpenAI

Message a model:
- Model: gpt-4.1-mini (swap to gpt-4o-mini or your preferred model)
- Update the prompt placeholders: __ language → your target language; __ sender rules → special cases (combine blocks, ignore sections)

## How to use

The workflow runs daily at the scheduled time, compiling a digest from yesterday's emails. You'll receive one HTML email with all topics neatly listed. Adjust the time window or filters to change what gets included.

## Customization ideas

- Time window control: after: {{ $now.minus({ days: X }) }} and/or add before:
- Filter by labels: q = label:Newsletters after:{{ $now.minus({ days: 1 }).toFormat('yyyy/MM/dd') }}
- Language: set the __ language in the LLM prompt
- Template: edit "Create template" to add a header, footer, hero section, or logo/branding
- Links: include links parsed from HTML (add an HTML parser step in "Get message data")
- Subject line: make it dynamic, e.g., "Digest for {{ $now.toFormat('dd.MM.yyyy') }}"
- Sender: use a dedicated Gmail account or alias for deliverability and separation

## Limits and notes

- Gmail's size limit for outgoing emails is ~25 MB; large digests may need pruning
- LLM usage incurs cost and latency proportional to email size and count
- HTML rendering varies across clients; inline styles are used for compatibility
- The schedule uses the n8n server's timezone; adjust if your server runs in a different TZ

## Privacy and safety

- Emails are sent to OpenAI for summarization — ensure this aligns with your data policies
- Limit the Gmail search scope to only the newsletters you want processed
- Avoid including sensitive emails in the search window

## Sample output (email body)

1. Title 1 — One-sentence description
   Original Subject → Sender — DD.MM.YYYY
2. Title 2 — One-sentence description
   Original Subject → Sender — DD.MM.YYYY

## Tips and troubleshooting

- No emails found? Check filters.q and the time window (after:)
- Model returns empty JSON? Simplify the prompt or try another model
- Odd characters in the output? The template escapes HTML and preserves line breaks; verify your input encoding
- Delivery issues? Use a verified sender, set a clear subject, and avoid spammy keywords

## Tags

gmail, openai, llm, newsletters, digest, summarization, email, automation

## Changelog

v1: Initial release with a scheduled time window, sender filters, LLM summarization, topic merging, and HTML email template rendering
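As a sketch of the Merge and "Create template" Code nodes this workflow describes, assuming the `{ "topics": [...] }` JSON shape the LLM is prompted to return (the HTML markup here is illustrative, not the template's exact output):

```javascript
// Sketch of the Merge and template-rendering steps. Assumes each LLM
// result item carries { topics: [{ title, descr, subject, from, date }] }.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;');
}

// Merge: flatten the topics arrays from all processed emails.
function mergeTopics(items) {
  return items.flatMap((item) => item.topics || []);
}

// Create template: enumerated items with inline, email-friendly styles.
function renderDigest(topics) {
  const rows = topics
    .map(
      (t, i) =>
        `<p style="margin:0 0 12px"><b>${i + 1}. ${escapeHtml(t.title)}</b><br>` +
        `${escapeHtml(t.descr)}<br>` +
        `<i>${escapeHtml(t.subject)} → ${escapeHtml(t.from)} — ${escapeHtml(t.date)}</i></p>`
    )
    .join('\n');
  return `<div style="font-family:Arial,sans-serif">${rows}</div>`;
}
```

Escaping the LLM-provided strings before embedding them is what keeps stray `<` or `&` characters in newsletter titles from breaking the digest's HTML.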
by Yehor EGMS
🎙️ n8n Workflow: Voice Message Transcription with Access Control

This n8n workflow enables automated transcription of voice messages in Telegram groups with built-in access control and intelligent fallback mechanisms. It's designed for teams that need to convert audio messages to text while maintaining security and handling various audio formats.

## 📌 Section 1: Trigger & Access Control

### ⚡ Receive Message (Telegram Trigger)

- **Purpose:** Captures incoming messages from users in your Telegram group.
- **How it works:** When a user sends a message (voice, audio, or text), the workflow is triggered and the sender's information is captured.
- **Benefit:** Serves as the entry point for the entire transcription pipeline.

### 🔐 Sender Verification

- **Purpose:** Validates whether the sender has permission to use the transcription service.
- **Logic:** Check the sender against the authorized users list. If authorized → proceed to the next step. If not authorized → send an "Access denied" message and stop the workflow.
- **Benefit:** Prevents unauthorized users from consuming AI credits and accessing the service.

## 📌 Section 2: Message Type Detection

### 🎵 Audio/Voice Recognition

- **Purpose:** Identifies the type of incoming message and audio format.
- **Why it's needed:** Telegram handles different audio types with different statuses: voice notes (voice messages), audio files (standard audio attachments), and text messages (no audio content).
- **Process:** Check if the message contains audio/voice content. If no audio file is detected → send a "No audio file found" message. If audio is detected → assign the file ID and proceed to format detection.

### 🧩 File Type Determination (IF Node)

- **Purpose:** Identifies the specific audio format for proper processing.
- **Supported formats:** OGG (Telegram voice messages), MPEG/MP3, MP4/M4A, and other audio formats.
- **Logic:** If the format is recognized → proceed to transcription. If not → send a "File format not recognized" message.
- **Benefit:** Ensures compatibility with transcription services by validating file types upfront.
## 📌 Section 3: Primary Transcription (OpenAI)

### 📥 File Download

- **Purpose:** Downloads the audio file from Telegram for processing.

### 🤖 OpenAI Transcription

- **Purpose:** Transcribes audio to text using OpenAI's Whisper API.
- **Why OpenAI:** High-quality transcription with cost-effective pricing.
- **Process:** Send the downloaded file to the OpenAI transcription API and simultaneously send a notification: "Transcription started". If successful → assign the transcribed text to a variable and proceed. If an error occurs → trigger the fallback mechanism.
- **Benefit:** Fast, accurate transcription with multi-language support.

## 📌 Section 4: Fallback Transcription (Gemini)

### 🛟 Gemini Backup Transcription

- **Purpose:** Provides a safety net if OpenAI transcription fails.
- **Process:** Receives the file only if the OpenAI node returns an error, downloads and processes the same audio file, sends it to Google Gemini for transcription, and assigns the transcribed text to the same text variable.
- **Benefit:** Ensures high reliability — if one service fails, the other takes over automatically.

## 📌 Section 5: Message Length Handling

### 📏 Text Length Check (IF Node)

- **Purpose:** Determines if the transcribed text exceeds Telegram's character limit.
- **Logic:** If the text is ≤ 4,000 characters → send it directly to Telegram. If it is > 4,000 characters → split it into chunks.
- **Why:** Telegram enforces a per-message character limit of roughly 4,000 characters.

### ✂️ Text Splitting (Code Node)

- **Purpose:** Breaks long transcriptions into segments of up to 4,000 characters.
- **Process:** Receives text longer than 4,000 characters, splits it into chunks of ≤ 4,000 characters while avoiding mid-word breaks for readability, and outputs an array of text chunks.

## 📌 Section 6: Response Delivery

### 💬 Send Transcription (Telegram Node)

- **Purpose:** Delivers the transcribed text back to the Telegram group.
- **Behavior:** Short messages are sent as a single message; long messages are sent as multiple sequential messages.
- **Benefit:** Users receive complete transcriptions regardless of length, ensuring no content is lost.
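The Text Splitting Code node can be sketched like this. It is a hedged approximation that cuts at the last space before the limit, which is one way to avoid mid-word breaks; the template's exact algorithm may differ:

```javascript
// Sketch of the Text Splitting Code node: break a long transcription
// into chunks of at most `limit` characters, preferring to split at
// the last space so words are not cut in half.
function splitText(text, limit = 4000) {
  const chunks = [];
  let rest = text;
  while (rest.length > limit) {
    let cut = rest.lastIndexOf(' ', limit);
    if (cut <= 0) cut = limit; // no space found: hard split
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut).trimStart();
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```

Each chunk then becomes one sequential Telegram message in the delivery step.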
## 📊 Workflow Overview Table

| Section | Node Name | Purpose |
|---------|-----------|---------|
| 1. Trigger | Receive Message | Captures incoming Telegram messages |
| 2. Access Control | Sender Verification | Validates user permissions |
| 3. Detection | Audio/Voice Recognition | Identifies message type and audio format |
| 4. Validation | File Type Check | Verifies supported audio formats |
| 5. Download | File Download | Retrieves audio file from Telegram |
| 6. Primary AI | OpenAI Transcription | Main transcription service |
| 7. Fallback AI | Gemini Transcription | Backup transcription service |
| 8. Processing | Text Length Check | Determines if splitting is needed |
| 9. Splitting | Code Node | Breaks long text into chunks |
| 10. Response | Send to Telegram | Delivers transcribed text |

## 🎯 Key Benefits

- 🔐 Secure access control: only authorized users can trigger transcriptions
- 💰 Cost management: prevents unauthorized credit consumption
- 🎵 Multi-format support: handles various Telegram audio types
- 🛡️ High reliability: dual-AI fallback ensures transcription success
- 📱 Telegram-optimized: automatically handles message length limits
- 🌍 Multi-language: both AI services support numerous languages
- ⚡ Real-time notifications: users receive status updates during processing
- 🔄 Automatic chunking: long transcriptions are intelligently split
- 🧠 Smart routing: files are processed through the optimal path
- 📊 Complete delivery: no content loss regardless of transcription length

## 🚀 Use Cases

- **Team meetings:** Transcribe voice notes from team discussions
- **Client communications:** Convert client voice messages to searchable text
- **Documentation:** Create text records of verbal communications
- **Accessibility:** Make audio content accessible to all team members
- **Multi-language teams:** Leverage AI transcription for various languages
by Davide
This workflow automates the process of extracting structured, usable information from unstructured email messages across multiple platforms. It connects directly to Gmail, Outlook, and IMAP accounts, retrieves incoming emails, and sends their content to an AI-powered parsing agent built on OpenAI GPT models.

The AI agent analyzes each email, identifies relevant details, and returns a clean JSON structure containing key fields:

- **From** – sender’s email address
- **To** – recipient’s email address
- **Subject** – email subject line
- **Summary** – short AI-generated summary of the email body

The extracted information is then automatically inserted into an n8n Data Table, creating a structured database of email metadata and summaries ready for indexing, reporting, or integration with other tools.

## Key Benefits

- ✅ **Full Automation:** Eliminates manual reading and data entry from incoming emails.
- ✅ **Multi-Source Integration:** Handles data from different email providers seamlessly.
- ✅ **AI-Driven Accuracy:** Uses advanced language models to interpret complex or unformatted content.
- ✅ **Structured Storage:** Creates a standardized, query-ready dataset from previously unstructured text.
- ✅ **Time Efficiency:** Processes emails in real time, improving productivity and response speed.
- ✅ **Scalability:** Easily extendable to handle additional sources or extract more data fields.

## How it works

This workflow transforms unstructured email data into a structured, queryable format through a series of connected steps:

1. **Email Triggering:** The workflow is initiated by one of three email triggers (Gmail, Microsoft Outlook, or a generic IMAP account), which constantly monitor for new incoming emails.
2. **AI-Powered Parsing & Structuring:** When a new email is detected, its raw, unstructured content is passed to a central "Parsing Agent." This agent uses a specified OpenAI language model to intelligently analyze the email text.
3. **Data Extraction & Standardization:** Following a predefined system prompt, the AI agent extracts key information from the email, such as the sender, recipient, subject, and a generated summary. It then forces the output into a strict JSON structure using a "Structured Output Parser" node, ensuring data consistency.
4. **Data Storage:** Finally, the clean, structured data (the from, to, subject, and summary fields) is inserted as a new row into a specified n8n Data Table, creating a searchable and reportable database of email information.

## Set up steps

To implement this workflow, follow these configuration steps:

1. **Prepare the Data Table:** Create a new Data Table within n8n. Define columns with the following names, all of string type: From, To, Subject, and Summary.
2. **Configure Email Credentials:** Set up the credential connections for the email services you wish to use (Gmail OAuth2, Microsoft Outlook OAuth2, and/or IMAP). Ensure the accounts have the necessary permissions to read emails.
3. **Configure AI Model Credentials:** Set up the OpenAI API credential with a valid API key. The model the workflow uses can be changed in the respective nodes if needed.
4. **Connect the Nodes:** The workflow canvas is already correctly wired. Visually confirm that the email triggers are connected to the "Parsing Agent," which is connected to the "Insert row" (Data Table) node. Also, ensure the "OpenAI Chat Model" and "Structured Output Parser" are connected to the "Parsing Agent" as its AI model and output parser, respectively.
5. **Activate the Workflow:** Save the workflow and toggle the "Active" switch to ON. The triggers will begin polling for new emails according to their schedule (e.g., every minute), and the automation will start processing incoming messages.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
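A minimal sketch of the JSON shape the Structured Output Parser is expected to enforce — the lowercase field names and the validator are illustrative assumptions matching the Data Table columns, not the template's exact schema:

```javascript
// Example of a parsed-email record as the agent might return it
// (field names are assumptions mirroring the From/To/Subject/Summary columns).
const example = {
  from: "alice@example.com",
  to: "sales@example.com",
  subject: "Pricing question",
  summary: "Alice asks for an updated quote on the enterprise plan."
};

// Illustrative validation: every required field must be a non-empty string,
// which is what the Structured Output Parser guarantees downstream nodes.
function isValidParsedEmail(obj) {
  const fields = ["from", "to", "subject", "summary"];
  return fields.every(f => typeof obj[f] === "string" && obj[f].length > 0);
}
```

Because every row arrives in this fixed shape, the "Insert row" node can map fields to Data Table columns one-to-one without conditional logic.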
by Yang
## Who’s it for

This template is perfect for digital agencies, SDRs, lead generators, or outreach teams that want to automatically convert LinkedIn company profiles into high-quality cold emails. If you spend too much time researching and writing outreach messages, this workflow does all the heavy lifting for you.

## What it does

Once a LinkedIn company profile URL is submitted via a web form, the workflow:

1. Scrapes detailed company data using Dumpling AI
2. Enriches the contact (email, name, country) using Dropcontact
3. Sends company data and contact info to GPT-4, which generates:
   - A personalized subject line (max 8 words)
   - A short HTML cold email (4–6 sentences)
4. Sends the cold email via Gmail
5. Logs the lead details to Airtable for tracking

All AI-generated content follows strict formatting and tone guidelines, ensuring it's professional, human, and clean.

## How it works

1. **Form Trigger:** Collects the LinkedIn URL
2. **Dumpling AI:** Extracts company name, description, size, location, website, etc.
3. **Dropcontact:** Finds the contact's email and name based on enriched company details
4. **GPT-4:** Writes a structured cold email and subject line in JSON format
5. **Gmail:** Sends the personalized email to a fixed recipient
6. **Airtable:** Logs the lead into a specified base/table for follow-up

## Requirements

- ✅ Dumpling AI API key (stored in HTTP header credentials)
- ✅ Dropcontact API key
- ✅ OpenAI GPT-4 credentials
- ✅ Gmail account (OAuth2)
- ✅ Airtable base & table set up with at least these fields:
  - Name
  - LinkedIn Company URL
  - People
  - website

## How to customize

- Modify the GPT prompt to reflect your brand tone or service offering
- Replace Gmail with Slack, Outlook, or another communication tool
- Add a “review and approve” step before sending emails
- Add logic to avoid duplicates (e.g., check Airtable first)

> This workflow lets you go from LinkedIn profile to inbox-ready cold email in less than a minute—with full AI support.
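Since GPT-4 returns the subject line and email body as JSON, a downstream node has to parse that output before handing it to Gmail. A hypothetical sketch — the `subject` and `body_html` field names are assumptions, not the template's exact schema:

```javascript
// Illustrative sketch: parse the model's JSON output and enforce the
// 8-word subject rule from the prompt (field names are assumptions).
function parseColdEmail(raw) {
  const data = JSON.parse(raw);
  const words = data.subject.trim().split(/\s+/);
  if (words.length > 8) {
    // Truncate rather than fail if the model overshoots the limit
    data.subject = words.slice(0, 8).join(" ");
  }
  return data;
}
```

Guarding the constraint in code means an occasional over-long subject from the model never reaches the recipient's inbox.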
by Wessel Bulte
## Description

This workflow is a practical, “dirty” solution for real-world scenarios where frontline workers keep using Excel in their daily processes. Instead of forcing change, we take their spreadsheets as-is, clean and normalize the data, generate embeddings, and store everything in Supabase. The benefit: frontline staff continue with their familiar tools, while data analysts gain clean, structured, and vectorized data ready for analysis or RAG-style AI applications.

## How it works

1. **Frontline workers continue with Excel** – no disruption to their daily routines.
2. **Upload & trigger** – The workflow runs when a new Excel sheet is ready.
3. **Read Excel rows** – Data is pulled from the specified workbook and worksheet.
4. **Clean & normalize** – HTML is stripped, Excel dates are fixed, and text fields are standardized.
5. **Batch & switch** – Rows are split and routed into Question/Answer processing paths.
6. **Generate embeddings** – Cleaned Questions and Answers are converted into vectors via OpenAI.
7. **Merge enriched records** – Original business data is combined with embeddings.
8. **Write into Supabase** – Data lands in a structured table (`excel_records`) with vector and FTS indexes.

## Why it’s “dirty but useful”

- **No disruption** – frontline workers don’t need to change how they work.
- **Analyst-ready data** – Supabase holds clean, queryable data for dashboards, reporting, or AI pipelines.
- **Bridge between old and new** – Excel remains the input, but the backend becomes modern and scalable.
- **Incremental modernization** – paves the way for future workflow upgrades without blocking current work.

## Outcome

Frontline workers keep their Excel-based workflows, while the data immediately becomes structured, searchable, and vectorized in Supabase — enabling AI-powered search, reporting, and retrieval-augmented generation.

## Required setup

- **Supabase account** – Create a project and enable the pgvector extension.
- **OpenAI API key** – Required for generating embeddings (`text-embedding-3-small`).
- **Microsoft Excel credentials** – Needed to connect to your workbook and worksheet.

## Need help?

🔗 LinkedIn – Wessel Bulte
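The clean-and-normalize step is where most of the "dirty" work happens: Excel stores dates as serial day counts, and pasted cells often carry HTML. A minimal sketch of helpers such a Code node might use — the function names are illustrative, not the template's actual code:

```javascript
// Illustrative cleanup helpers (hypothetical names, not the template's code).

// Excel counts dates as days since its 1900 epoch; serial 25569
// corresponds to the Unix epoch (1970-01-01), so subtracting it
// converts a serial to milliseconds since epoch.
function excelSerialToISODate(serial) {
  const ms = Math.round((serial - 25569) * 86400 * 1000);
  return new Date(ms).toISOString().slice(0, 10);
}

// Strip tags and collapse whitespace so text fields embed cleanly.
function stripHtml(text) {
  return text.replace(/<[^>]*>/g, " ").replace(/\s+/g, " ").trim();
}
```

Normalizing before embedding matters: markup and inconsistent date formats would otherwise leak into the vectors and degrade retrieval quality.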
by Mohammad Abubakar
This n8n template automatically scans 10 subreddits every Monday, filters ~1000 posts for genuine frustration signals, and delivers a structured startup opportunity report to your inbox — powered by Groq AI.

Perfect for indie hackers, product builders, and founders who want to stay on top of what people are actually begging someone to build — without spending hours manually browsing Reddit.

## Good to know

- Uses Reddit's public JSON API — no Reddit account or API key required
- Groq's free tier is generous enough to run this weekly at zero cost
- Each run analyzes up to 1000 posts and completes in under 60 seconds

## How it works

1. A Schedule Trigger fires every Monday at 8 AM to kick off the workflow
2. A Code node defines 10 target subreddits (entrepreneur, SaaS, freelance, startups, and more)
3. An HTTP Request node fetches the 100 newest posts from each subreddit using Reddit's public JSON endpoint
4. A Code node filters all posts against 27 frustration-signal keywords like "why doesn't X exist", "sick of manually", "wish there was a tool for this"
5. An Aggregate node merges all matched posts from all 10 subreddits into a single dataset
6. A Code node builds a structured AI prompt embedding all posts with specific instructions for analysis
7. An HTTP Request node sends the dataset to Groq's API (llama-3.3-70b-versatile) for deep analysis
8. A Code node wraps the AI output in a clean HTML email template
9. A Gmail node delivers the weekly report directly to your inbox

## How to use

1. Import the workflow and connect your Groq API key as an HTTP Header Auth credential
2. Connect your Gmail account via OAuth2
3. Change the recipient email in the Gmail node to your own address
4. Run manually first to verify the full flow end to end, then activate the schedule

## Requirements

- Groq account for AI analysis (free at console.groq.com)
- Gmail account for delivery via OAuth2

## Customising this workflow

- Edit the subreddit list in the Define Subreddits node to focus on your specific niche or industry
- Add or remove keywords in the Filter Posts node to tune how sensitive the pain detection is
- Swap the Gmail node for Slack, Telegram, or Outlook if you prefer a different delivery channel
- Change the schedule from weekly to daily for higher-frequency monitoring
- Replace Groq with OpenAI GPT-4o by swapping the HTTP Request URL and auth header — the prompt format is identical
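The Filter Posts step can be sketched as a simple substring match over each post's title and body. A minimal illustration — the template uses 27 phrases, so this abbreviated list and the function name are assumptions:

```javascript
// Abbreviated sketch of the frustration-signal filter (the real template
// checks 27 phrases; these are a hypothetical subset).
const SIGNALS = [
  "why doesn't", "wish there was", "sick of manually",
  "is there a tool", "so frustrating", "has to be a better way"
];

// Reddit's JSON listing gives each post a title and a selftext body;
// match case-insensitively against both.
function matchesFrustrationSignal(post) {
  const text = `${post.title} ${post.selftext || ""}`.toLowerCase();
  return SIGNALS.some(s => text.includes(s));
}
```

Tuning sensitivity is then just editing the `SIGNALS` array: fewer, more specific phrases mean fewer false positives; broader phrases surface more candidate posts.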