by Ahmed Salama
# Build an AI-Driven Cross-Platform Support Context Engine with n8n

**Categories:** Customer Support Automation, AI Agents, CRM Integration, SaaS Operations

This workflow creates an AI-powered middleware layer that unifies customer context across Help Scout, HubSpot, and SMS platforms. When a new support ticket or reply is received, it fetches the customer's CRM deal stage, onboarding status, and recent text messages. It then generates an AI response, runs it through a secondary QA audit for brand safety, and routes it as a human-reviewed draft in Help Scout. The result is a highly contextual, zero-blind-spot support system that protects brand voice without sacrificing efficiency.

## Benefits
- **100% Contextual Replies** — Agents (and AI) see the full customer journey across all three platforms before responding.
- **Built-in Brand Protection** — A dual-LLM QA gate prevents off-brand, hallucinated, or inappropriate auto-responses.
- **Human-in-the-Loop Safety** — AI drafts are saved, never auto-sent, keeping humans in complete control of final delivery.
- **Smart Escalation Routing** — High-value accounts or angry customers are instantly routed to senior agents with sentiment tags.
- **Zero Platform Lock-in** — Uses standard webhooks and APIs; easily adaptable to other CRMs or ticketing tools.

## How It Works
1. **Help Scout Webhook Listener** — Triggered via webhook when a new conversation or customer reply is created in Help Scout. Filters out noise (e.g., internal notes, tag changes) to save API calls.
2. **Cross-Platform Data Fetching** — Simultaneously pulls CRM data from HubSpot (deal value, stage) and recent message history from SMS platforms (e.g., Sales Messenger).
3. **Shared Context Layer Construction** — Merges the ticket payload with CRM and SMS data and formats it into a structured "Customer 360" prompt string (see the sketch at the end of this listing).
4. **AI Draft Generation (LLM 1)** — Uses GPT-4o to draft a highly empathetic, context-aware reply, restricted to using ONLY the provided shared context to prevent hallucinations.
5. **AI QA & Sentiment Audit (LLM 2)** — Uses a lightweight model (GPT-4o-mini) to evaluate the draft for brand safety and extract a strict JSON sentiment score (positive/neutral/negative/angry).
6. **Smart Routing & Action** — If angry/negative → escalates to a human agent and tags the ticket. If high-value but approved → saves as a draft for an Account Manager. Otherwise → saves as a standard draft for fast agent review.

## Required Setup
- **Help Scout** — API credentials (OAuth2 or App ID/Secret); webhooks configured in Help Scout (subscribed to convo.created); permissions to create drafts and assign conversations.
- **HubSpot** — Private App Token with permissions to search contacts and read deal/custom properties.
- **SMS Platform** — API access (Sales Messenger, Twilio, or similar) with the ability to fetch message history by email or contact ID.
- **AI Model** — OpenAI API key configured for GPT-4o (draft) and GPT-4o-mini (QA).
- **n8n** — Self-hosted or cloud, with environment variables configured for highValueThreshold and humanAgentId.

## Business Use Cases
- **B2B SaaS Support Teams** — Eliminate the "tell me your account email" friction by arming agents with immediate context.
- **Customer Success Managers** — Proactively handle onboarding stalls or high-value renewals with full history visibility.
- **Founders & COOs** — Scale support quality across 1M+ users without risking brand reputation via careless AI auto-replies.
- **Agencies & Consultants** — Deliver high-end "AI-powered unified inbox" architectures to enterprise clients.

**Difficulty Level:** Advanced
**Estimated Build Time:** 60–90 minutes
**Monthly Operating Cost:**
- Help Scout: Existing plan
- HubSpot: Existing plan
- SMS API: Existing plan
- AI Model: Usage-based (typically very low for QA/generation)
- n8n: Self-hosted or cloud
- Typical range: $5–$50/month (highly dependent on ticket volume)

## Why This Workflow Works
- Merging API data into a single context string solves the "disconnected tools" problem natively.
- The two-step LLM approach (Draft + QA) makes AI safe for front-line customer communication.
- Help Scout drafts provide the perfect human-in-the-loop UI without custom frontend builds.
- Sentiment-based routing ensures high-churn-risk tickets get immediate human empathy.

## Possible Extensions
- Auto-pause HubSpot email sequences when a negative Help Scout ticket is detected.
- Trigger proactive SMS outreach if a HubSpot onboarding status stalls for X days.
- Log all AI drafts and QA scores to a PostgreSQL database for monthly brand-audit reporting.
- Auto-translate drafts based on the contact's locale before saving the Help Scout draft.
- Use Slack to ping the assigned agent with a summary of the generated draft.

## Nodes Used in Workflow
Webhook, Code (Parse Event & Extract Data), HTTP Request (Fetch HubSpot Context), HTTP Request (Fetch SMS History), Merge, Code (Build Shared Context Layer), OpenAI (AI Draft Generator), OpenAI (AI QA & Sentiment Check), Code (Parse QA Output), Switch (Sentiment Router), If (High Value + Approved?), HTTP Request (Save Draft), HTTP Request (Escalate to Human), Respond to Webhook, Error Trigger, Sticky Note
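As an illustration of step 3, here is a minimal sketch of what the Code (Build Shared Context Layer) node could look like. The upstream node names and the CRM/SMS field names (dealStage, onboardingStatus, and so on) are assumptions; map them to your actual HubSpot properties and SMS payload shape.

```javascript
// A minimal sketch of the "Build Shared Context Layer" Code node.
// Upstream node names and field names (dealStage, onboardingStatus, ...)
// are illustrative assumptions, not the template's exact schema.
const ticket = $('Webhook').first().json;
const crm = $('Fetch HubSpot Context').first().json;
const sms = $('Fetch SMS History').first().json;

// Keep only the last few SMS messages to stay within the prompt budget.
const recentTexts = (sms.messages || [])
  .slice(-5)
  .map((m) => `- [${m.direction}] ${m.body}`)
  .join('\n');

// The structured "Customer 360" prompt string the draft LLM is restricted to.
const context = [
  `TICKET\nSubject: ${ticket.subject || ''}\n${ticket.body || ''}`,
  `CRM\nDeal stage: ${crm.dealStage}\nDeal value: ${crm.dealValue}\nOnboarding: ${crm.onboardingStatus}`,
  `RECENT SMS\n${recentTexts || '(none)'}`,
].join('\n\n');

// highValueThreshold is read from the n8n environment, as described above.
const highValue =
  Number(crm.dealValue || 0) >= Number($env.highValueThreshold || 10000);

return [{ json: { context, highValue } }];
```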
by Oneclick AI Squad
This workflow builds a fully private, self-hosted AI chatbot using Meta Llama models. Unlike cloud-based AI APIs, every conversation stays on your infrastructure — no data leaves your environment. The chatbot remembers conversation history per session, routes different query types to specialized Llama prompts, logs all interactions, and can escalate unresolved queries to a human agent via Slack. Powered by Ollama (local) or Groq/Together AI (cloud Llama endpoints) — configurable in one node.

## What's the Goal?
To give businesses a production-grade private AI chatbot that:
- Runs on their own servers with zero data exposure
- Handles customer support, internal helpdesk, sales FAQs, and onboarding
- Remembers context across a full conversation session
- Routes intelligently: support vs sales vs general vs escalation
- Logs every turn for quality review, training, and compliance

## Why Does It Matter?
Most businesses cannot send sensitive conversations to OpenAI or Anthropic due to:
- GDPR, HIPAA, SOC2, or internal data governance policies
- Confidential customer data in support queries
- Proprietary internal knowledge that must stay private

Llama models run fully on-premise. This workflow gives those businesses the same quality AI chatbot experience with complete data sovereignty. Monetization: sell this as a private AI chatbot deployment package to enterprises — setup fee plus monthly hosting for recurring revenue.

## How It Works
- **Stage A — Message Intake:** Webhook receives the incoming chat message with session ID and user message text. A Set node stores the Llama endpoint config and normalizes the payload.
- **Stage B — Session Memory:** A Code node loads conversation history for the session from an in-memory store and appends the new user message to build the full context window for Llama (see the sketch at the end of this listing).
- **Stage C — Intent Router:** An IF node checks the message for keywords to classify intent — support issue, sales inquiry, general question, or escalation request — and routes to the matching Llama system prompt branch.
- **Stage D — Llama Inference:** HTTP Request calls the Llama API (Ollama local, Groq, or Together AI), sending the full conversation history plus the matched system prompt, and returns the assistant reply.
- **Stage E — Response Handling:** A Code node parses the Llama output, updates the session memory, checks if escalation is needed, and formats the final response.
- **Stage F — Logging and Delivery:** Google Sheets logs every turn. Slack fires only when escalation is flagged. The webhook responds with the chatbot reply and session metadata.

## Configuration Requirements
- **LLAMA_ENDPOINT** — Your Ollama URL (http://localhost:11434) or Groq/Together AI base URL
- **LLAMA_API_KEY** — API key if using Groq or Together AI (leave blank for local Ollama)
- **LLAMA_MODEL** — Model name, e.g. llama3, llama3.1:8b, llama3.1:70b, mixtral
- **SLACK_WEBHOOK_URL** — For human escalation alerts
- **GOOGLE_SHEET_ID** — Conversation audit log

## Setup Guide

**Option A (Local / Private):**
1. Install Ollama: `curl -fsSL https://ollama.ai/install.sh | sh`
2. Pull the model: `ollama pull llama3.1`
3. Set LLAMA_ENDPOINT to http://localhost:11434
4. Leave LLAMA_API_KEY blank

**Option B (Cloud Llama via Groq — fastest):**
1. Sign up at groq.com and copy your API key
2. Set LLAMA_ENDPOINT to https://api.groq.com/openai/v1
3. Set LLAMA_MODEL to llama-3.1-8b-instant or llama-3.1-70b-versatile
4. Paste your Groq API key in LLAMA_API_KEY

**Option C (Together AI):**
1. Sign up at together.ai
2. Set the endpoint to https://api.together.xyz/v1
3. Set the model to meta-llama/Llama-3.1-8B-Instruct-Turbo

**Steps for all options:**
1. Open the Set Llama Config node and fill in all values
2. Set SLACK_WEBHOOK_URL and GOOGLE_SHEET_ID
3. Activate and POST to /webhook/llama-chat

## Sample Payload
```json
{
  "sessionId": "user-abc-123",
  "message": "My order arrived damaged and I need a refund",
  "userId": "user_123",
  "botPersona": "support",
  "userName": "Sarah"
}
```

**Explore More Automation:** Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
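For reference, here is a minimal sketch of the Stage B session-memory logic as an n8n Code node, using workflow static data as the in-memory store. The 20-turn cap and field names are illustrative choices, not the template's exact implementation.

```javascript
// A minimal sketch of the Stage B session-memory Code node.
// Note: workflow static data only persists across active (production)
// executions, which suits a webhook-triggered chatbot like this one.
const store = $getWorkflowStaticData('global');
const { sessionId, message } = $json;

store.sessions = store.sessions || {};
const history = store.sessions[sessionId] || [];

// Append the new user turn and cap the context window to the last 20 turns.
history.push({ role: 'user', content: message });
store.sessions[sessionId] = history.slice(-20);

return [{ json: { sessionId, messages: store.sessions[sessionId] } }];
```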
by Davide
This workflow is an AI-powered text-to-speech production pipeline designed to generate highly expressive audio using ElevenLabs v3. It automates the entire process from raw text input to final audio distribution, uploading the MP3 file to Google Drive and an FTP space.

## Key Advantages
1. ✅ **Cinematic-quality audio output** — By combining AI-driven emotional tagging with ElevenLabs v3, the workflow produces audio that feels acted, not simply read.
2. ✅ **Fully automated pipeline** — From raw text to hosted audio file, everything is handled automatically: no manual tagging, no manual uploads, no post-processing.
3. ✅ **Multi-input flexibility** — The workflow supports manual testing, chat-based usage, and API/webhook integrations, making it ideal for apps, CMSs, games, and content platforms.
4. ✅ **Language-agnostic** — The agent preserves the original language of the input text and applies tags accordingly, making it suitable for international projects.
5. ✅ **Consistent and correct tagging** — The use of Context7 ensures that all audio tags follow the official ElevenLabs v3 specifications, reducing errors and incompatibilities.
6. ✅ **Scalable and production-ready** — Automatic uploads to Drive and FTP make this workflow ready for large content volumes, CDN delivery, and team collaboration.
7. ✅ **Perfect for storytelling and media** — The workflow is especially effective for horror and cinematic storytelling, audiobooks and podcasts, games and immersive narratives, and voiceovers with emotional depth.

## How it Works
1. **Text Input & Processing:** The workflow accepts text input through multiple triggers: manual execution via the "Set text" node, webhook POST requests, or chat message inputs. This text is passed to the Audio Tagger Agent.
2. **AI-Powered Audio Tagging:** The Audio Tagger Agent uses Claude Sonnet 4.5 to analyze the input text and intelligently insert ElevenLabs v3 audio tags. The agent follows strict rules: maintaining the original meaning, adding tags for pauses, rhythm, emphasis, emotional tones, breathing, laughter, and delivery variations, while keeping the output in the original language.
3. **Reference Validation:** During tagging, the agent consults the Context7 MCP tool, which provides access to the official ElevenLabs v3 audio tags guide to ensure correct and consistent tag usage.
4. **Text-to-Speech Conversion:** The tagged text is sent to ElevenLabs' v3 (alpha) model, which converts it into speech using a specific voice with customized voice settings, including stability, similarity boost, style, speaker boost, and speed controls (an example request body is shown after the setup steps below).
5. **Dual Output Distribution:** The generated audio file is simultaneously uploaded to two destinations: Google Drive (in a specified "Elevenlabs" folder) and an FTP server (BunnyCDN), ensuring the file is stored on both platforms.
## Set Up Steps

**Prerequisite Configuration:**
- Configure Anthropic API credentials for Claude Sonnet access
- Set up ElevenLabs API credentials with access to v3 (alpha) models
- Configure Google Drive OAuth2 credentials with access to the target folder
- Set up FTP credentials for BunnyCDN or alternative storage
- Configure the Context7 MCP tool with appropriate authentication headers

**Workflow-Specific Setup:**
- In the "Set text" node, replace "YOUR TEXT" with the default text you want to process (for manual execution)
- In the "Upload to FTP" node, update the path from "/YOUR_PATH/" to your actual FTP directory structure
- Verify the Google Drive folder ID points to your intended destination folder
- Ensure the webhook path is correctly configured for external integrations
- Adjust voice parameters in the ElevenLabs node if different voice characteristics are desired

**Execution Options:**
- For one-time processing: use the manual trigger and set text in the "Set text" node
- For API integration: use the webhook endpoint to receive text via POST requests
- For chat-based interaction: use the chat trigger for conversational text input

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n. Need help customizing? Contact me for consulting and support or add me on LinkedIn.
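As a reference for step 4 of "How it Works", here is a hedged sketch of the JSON body the ElevenLabs text-to-speech call could send. The voice ID lives in the endpoint URL (POST /v1/text-to-speech/{voice_id}); the "eleven_v3" model ID, the sample audio tags, and all setting values are assumptions to verify against the current ElevenLabs v3 (alpha) documentation.

```json
{
  "text": "[whispers] The house was quiet... [sighs] far too quiet. Then, somewhere below, a floorboard creaked.",
  "model_id": "eleven_v3",
  "voice_settings": {
    "stability": 0.4,
    "similarity_boost": 0.8,
    "style": 0.6,
    "use_speaker_boost": true
  }
}
```

In the workflow, the "text" field is filled with the tagged output of the Audio Tagger Agent rather than a hardcoded string.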
by vinci-king-01
# Public Transport Schedule & Delay Tracker with Microsoft Teams and Dropbox

⚠️ **COMMUNITY TEMPLATE DISCLAIMER:** This is a community-contributed template that uses ScrapeGraphAI (a community node). Please ensure you have the ScrapeGraphAI community node installed in your n8n instance before using this template.

This workflow automatically scrapes public transport websites or apps for real-time schedules and service alerts, then pushes concise delay notifications to Microsoft Teams while archiving full-detail JSON snapshots in Dropbox. Ideal for commuters and travel coordinators, it keeps riders informed and maintains a historical log of disruptions.

## Pre-conditions/Requirements

**Prerequisites**
- n8n instance (self-hosted or n8n.cloud)
- ScrapeGraphAI community node installed
- Microsoft Teams incoming webhook configured
- Dropbox account with an app token created
- Public transit data source (website or API) that is legally scrapable or offers open data

**Required Credentials**
- **ScrapeGraphAI API Key** – enables web scraping
- **Microsoft Teams Webhook URL** – posts messages into a channel
- **Dropbox Access Token** – saves JSON files to Dropbox

**Specific Setup Requirements**

| Item | Example | Notes |
|------|---------|-------|
| Transit URL(s) | https://mycitytransit.com/line/42 | Must return the schedule or service alert data you need |
| Polling Interval | 5 min | Adjust via Cron node or external trigger |
| Teams Channel | #commuter-updates | Create an incoming webhook in channel settings |

## How it works

**Key Steps:**
1. **Webhook Trigger** – Starts the workflow (can be replaced with Cron for polling).
2. **Set Node** – Stores target route IDs, URLs, or API endpoints.
3. **SplitInBatches** – Processes multiple routes one after another to avoid rate limits.
4. **ScrapeGraphAI** – Scrapes each route page/API and returns structured schedule & alert data.
5. **Code Node (Normalize)** – Cleans & normalizes scraped fields (e.g., converts times to ISO).
6. **If Node (Delay Detected?)** – Compares live data vs. expected timetable to detect delays.
7. **Merge Node** – Combines route metadata with delay information.
8. **Microsoft Teams Node** – Sends an alert message and rich card to the selected Teams channel.
9. **Dropbox Node** – Saves the full JSON snapshot to a dated folder for historical reference.
10. **StickyNote** – Documents the mapping between scraped fields and the final JSON structure.

## Set up steps

**Setup Time: 15-25 minutes**
1. Clone or import the JSON workflow into your n8n instance.
2. Install the ScrapeGraphAI community node if you haven't already (Settings → Community Nodes).
3. Open the Set node and enter your target routes or API endpoints (array of URLs/IDs).
4. Configure ScrapeGraphAI: add your API key in the node's credentials section and define CSS selectors or API fields inside the node parameters.
5. Add Microsoft Teams credentials: paste your channel's incoming webhook URL into the Microsoft Teams node and customize the message template (e.g., include route name, delay minutes, reason).
6. Add Dropbox credentials: provide the access token and designate a folder path (e.g., /TransitLogs/).
7. Customize the If node logic to match your delay threshold (e.g., ≥5 min).
8. Activate the workflow and trigger via the webhook URL, or add a Cron node (every 5 min).
## Node Descriptions

**Core Workflow Nodes:**
- **Webhook** – External trigger for on-demand checks or a recurring scheduler.
- **Set** – Defines static or dynamic variables such as the route list and thresholds.
- **SplitInBatches** – Iterates through each route to control request volume.
- **ScrapeGraphAI** – Extracts live schedule and alert data from transit websites/APIs.
- **Code (Normalize)** – Formats scraped data, merges dates, and calculates delay minutes (see the sketch at the end of this listing).
- **If (Delay Detected?)** – Branches the flow based on the presence of delays.
- **Merge** – Re-assembles metadata with computed delay results.
- **Microsoft Teams** – Sends formatted notifications to Teams channels.
- **Dropbox** – Archives complete JSON payloads for auditing and analytics.
- **StickyNote** – Provides inline documentation for maintainers.

**Data Flow:**
```
Webhook → Set → SplitInBatches → ScrapeGraphAI → Code (Normalize) → If (Delay Detected?)
 ├─ true  → Merge → Microsoft Teams → Dropbox
 └─ false → Dropbox
```

## Customization Examples

Change to Slack instead of Teams:
```javascript
// Replace the Microsoft Teams node with a Slack node
{
  "text": `🚊 ${$json.route} is delayed by ${$json.delay} minutes.`,
  "channel": "#commuter-updates"
}
```

Filter only major delays (>10 min):
```javascript
// In the If node, use:
return $json.delay >= 10;
```

## Data Output Format

The workflow outputs structured JSON data:
```json
{
  "route": "Line 42",
  "expected_departure": "2024-04-22T14:05:00Z",
  "actual_departure": "2024-04-22T14:17:00Z",
  "delay": 12,
  "status": "delayed",
  "reason": "Signal failure at Main Station",
  "scraped_at": "2024-04-22T13:58:22Z",
  "source_url": "https://mycitytransit.com/line/42"
}
```

## Troubleshooting

**Common Issues**
- **ScrapeGraphAI returns empty data** – Verify CSS selectors/API fields match the current website markup; update selectors after site redesigns.
- **Teams messages not arriving** – Ensure the Teams webhook URL is correct and the incoming webhook is still enabled.
- **Dropbox writes fail** – Check the folder path, token scopes (files.content.write), and available storage quota.

**Performance Tips**
- Limit SplitInBatches to 5-10 routes per run to avoid IP blocking.
- Cache unchanged schedules locally and fetch only alert pages for faster runs.

**Pro Tips:**
- Use environment variables for API keys & webhook URLs to keep credentials secure.
- Attach a Cron node set to off-peak hours (e.g., 4 AM) for daily full-schedule backups.
- Add a Grafana dashboard that reads the Dropbox archive for long-term delay analytics.
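As a starting point for the Code (Normalize) node, here is a hedged sketch that computes delay minutes and emits the output format shown above. The input field names coming from ScrapeGraphAI are assumptions; adjust them to whatever your scrape schema returns.

```javascript
// A minimal sketch of the Code (Normalize) node. Input field names from
// ScrapeGraphAI (expected_departure, actual_departure, ...) are assumed.
const out = [];
for (const item of $input.all()) {
  const { route, expected_departure, actual_departure, reason, source_url } = item.json;
  const expected = new Date(expected_departure);
  const actual = new Date(actual_departure);

  // Delay in whole minutes; clamp to zero for early departures.
  const delay = Math.max(0, Math.round((actual - expected) / 60000));

  out.push({
    json: {
      route,
      expected_departure: expected.toISOString(),
      actual_departure: actual.toISOString(),
      delay,
      status: delay > 0 ? 'delayed' : 'on_time',
      reason: reason || null,
      scraped_at: new Date().toISOString(),
      source_url,
    },
  });
}
return out;
```

The downstream If node can then branch on `$json.delay` against your chosen threshold.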
by Diego Alejandro Parrás
# Manage Supabase database with Telegram commands

## How it works
1. **Receive message** — Telegram trigger captures incoming bot messages
2. **Validate user** — Checks if the sender's chat ID is in the authorized list
3. **Parse command** — Extracts the command type (/add, /list, etc.) and parameters
4. **Route & execute** — Performs the appropriate Supabase operation (INSERT, SELECT, UPDATE, DELETE)
5. **Respond** — Sends formatted results back to Telegram

Turn your Telegram into a powerful database management interface. This workflow lets you create, read, update, delete, and search records in your Supabase database using simple chat commands — no SQL knowledge required.

## Who is this for?
Small business owners, freelancers, and teams who need to manage data on the go without opening dashboards or writing queries. Perfect for inventory tracking, simple CRM, expense logging, or any scenario where you need quick mobile access to your database.

## What problem does it solve?
Managing database records typically requires logging into admin panels or writing SQL queries. This workflow eliminates that friction by letting you interact with your data through familiar Telegram messages, from anywhere, on any device.

## Benefits
- **Mobile database access** — Manage your data from anywhere using just your phone
- **Zero SQL required** — Simple commands replace complex database queries
- **Secure by default** — Only authorized Telegram users can access your data
- **Instant feedback** — Get formatted responses confirming every operation
- **Fully customizable** — Adapt to any table structure with minimal changes

## Available commands

| Command | Format | Example |
|---------|--------|---------|
| /add | name, price, quantity, category | /add iPhone 15, 999.99, 50, electronics |
| /list | [category] | /list or /list electronics |
| /get | id | /get 15 |
| /update | id field=value | /update 15 price=899 quantity=45 |
| /delete | id | /delete 15 |
| /search | text | /search iPhone |
| /help | — | Shows all available commands |

## Set up steps

**1. Create your Telegram bot**
- Message @BotFather on Telegram
- Send /newbot and follow the prompts
- Save the API token you receive

**2. Get your Telegram chat ID**
- Message @userinfobot on Telegram
- It will reply with your chat ID number

**3. Create the Supabase table**
Run this SQL in your Supabase project's SQL Editor:
```sql
CREATE TABLE products (
  id SERIAL PRIMARY KEY,
  name TEXT NOT NULL,
  price DECIMAL(10,2),
  quantity INTEGER DEFAULT 0,
  category TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
```

**4. Configure the workflow**
- Import the workflow into n8n
- Add your Telegram Bot credentials (using the API token from step 1)
- Add your Supabase credentials (Project URL + API Key from the Supabase dashboard)
- Open the "Is Authorized?" node and replace 123456789 with your actual chat ID from step 2
- Activate the workflow

**5. Test it**
Send /help to your bot to verify everything works!

## Customization

**Adding more authorized users**
- Open the "Is Authorized?" node
- Click "Add condition"
- Add another OR condition: chatId equals [new user's chat ID]

**Using a different table**
- Change products to your table name in all Supabase nodes
- Update the field parsing in the "Parse Command and Parameters" code node (see the sketch at the end of this listing)
- Update field mappings in the "Supabase Insert Product" and "Prepare Update Data" nodes
- Adjust the help message in the "Send Help Message" node

**Adding more fields**
- Modify the command parsing logic in "Parse Command and Parameters"
- Add field mappings in the Supabase Insert node
- Update the "Prepare Update Data" Set node with new fields
- Update the help message

## Example use cases
- **Inventory management** — Track stock levels from your phone while in the warehouse
- **Simple CRM** — Add and look up contacts on the go
- **Expense tracking** — Log expenses as they happen
- **Task management** — Create and update tasks without opening any app
- **Field data collection** — Teams can submit data from anywhere

## Requirements
- n8n instance (cloud or self-hosted)
- Telegram account
- Supabase account (free tier works)

**Difficulty level:** Intermediate
**Estimated setup time:** 15-20 minutes
**Monthly operating cost:** $0 (Telegram and Supabase free tiers)
by Panth1823
# WhatsApp Resume Ranking Bot — AI-Powered Career Score via PDF Upload

Let job seekers check their resume strength directly on WhatsApp — no app, no sign-up, no friction. Users send a keyword, answer 2 quick questions, upload their PDF resume, and receive a personalized career score, ATS feedback, and rejection analysis in under 60 seconds.

## Who is this for?
Career coaches, job portals, HR-tech startups, or recruitment agencies who want to offer a self-serve resume evaluation tool directly inside WhatsApp — where their audience already is.

## What this workflow does
- Listens for incoming WhatsApp messages via the WhatsApp Business Cloud API
- Manages multi-turn conversation state in Supabase — tracks each user's progress through the flow (idle → name → role → resume → processing); a sketch of this state logic follows at the end of this listing
- Guides the user step-by-step to provide their name, target job role, and PDF resume
- Downloads the resume PDF from WhatsApp's media server using the Business API
- Extracts resume text for analysis
- Runs a Scoring Engine that calculates a career score (0–100) based on resume content and target role
- Calls OpenAI (GPT-4o-mini) in parallel to generate rejection reasons and actionable improvement tips
- Merges both results and formats a final WhatsApp message
- Sends the personalized report back to the user — score, weaknesses, and what to fix

## Prerequisites
- A WhatsApp Business Cloud API account (Meta Developer App in Live mode)
- A Supabase project with a whatsapp_sessions table
- An OpenAI API key (GPT-4o-mini recommended)
- A self-hosted n8n instance or n8n Cloud

## Supabase table setup
Create a table called whatsapp_sessions with these columns:

| Column | Type | Notes |
|---|---|---|
| phone | text | Primary key / unique |
| state | text | Conversation state (IDLE, WAITING_NAME, etc.) |
| name | text | User's name |
| target_role | text | Job role they're targeting |
| started_at | timestamptz | Session start time |
| updated_at | timestamptz | Last activity timestamp |

## Setup steps
1. Connect WhatsApp Business API credentials in n8n (Meta App token + Phone Number ID)
2. Connect OpenAI credentials in n8n
3. Update the Supabase URL and API key inside the Conversation State Manager Code node
4. Replace YOUR_PHONE_NUMBER_HERE in the Send nodes with your WhatsApp Phone Number ID
5. Set up the Meta webhook pointing to your n8n WhatsApp Trigger URL, subscribed to messages
6. Activate the workflow — users can now send CHECK MY RANK to trigger the bot

## Conversation flow
- User: CHECK MY RANK
- Bot: Intro + "Type YES to start"
- User: YES
- Bot: "What's your full name?"
- User: Rahul Sharma
- Bot: "Which job role are you targeting?"
- User: Data Analyst
- Bot: "Upload your resume as a PDF"
- User: [uploads PDF]
- Bot: "Analyzing... hang tight 🔍"
- Bot: [sends career score + rejection reasons + tips]

## Customization
- Modify the Scoring Engine Code node to adjust how the score is calculated (weights for skills, experience, formatting, etc.)
- Edit the OpenAI prompt to change the tone or depth of feedback
- Add a Supabase insert after analysis to log all submissions for your own analytics
- Extend the flow to offer a paid detailed report or booking link after the free score

## ⚠️ Important notes
- The WhatsApp App must be in Live mode (not sandbox) to receive messages from non-whitelisted numbers — this requires completing Meta Business Verification
- Only PDF resumes are supported; DOCX files are rejected with a helpful prompt
- Session state persists in Supabase, so users can resume mid-flow if they get disconnected
- The bot handles concurrent users independently via phone number as the session key
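Here is a hedged sketch of the state transitions the Conversation State Manager could implement. It assumes an upstream step already extracted the current session state, the message text, and whether the message carries a PDF; all field and state names are illustrative, loosely following the table above.

```javascript
// A minimal sketch of the conversation-state transitions described above.
// Input fields (state, text, hasPdf) are assumed to be pre-extracted from
// the WhatsApp webhook payload and the Supabase session row.
const { state = 'IDLE', text = '', hasPdf = false } = $json;

let nextState = state;
let reply = null;

if (state === 'IDLE' && /check my rank/i.test(text)) {
  nextState = 'WAITING_CONFIRM';
  reply = 'Welcome! I can score your resume. Type YES to start.';
} else if (state === 'WAITING_CONFIRM' && /^yes$/i.test(text.trim())) {
  nextState = 'WAITING_NAME';
  reply = "What's your full name?";
} else if (state === 'WAITING_NAME') {
  nextState = 'WAITING_ROLE';
  reply = 'Which job role are you targeting?';
} else if (state === 'WAITING_ROLE') {
  nextState = 'WAITING_RESUME';
  reply = 'Upload your resume as a PDF.';
} else if (state === 'WAITING_RESUME' && hasPdf) {
  nextState = 'PROCESSING';
  reply = 'Analyzing... hang tight 🔍';
}

// The Supabase upsert (keyed on phone) happens in a separate node,
// which is what lets users resume mid-flow after a disconnect.
return [{ json: { nextState, reply } }];
```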
by JJ Tham
Struggling with inaccurate Meta Ads tracking due to iOS 14+ and ad blockers? 📉 This workflow is your solution. It provides a robust, server-side endpoint to reliably send conversion events directly to the Meta Conversions API (CAPI). By bypassing the browser, you can achieve more accurate ad attribution and optimize your campaigns with better data. This template handles all the required data normalization, hashing, and formatting, so you can set up server-side tracking in minutes.

## ⚙️ How it works
This workflow provides a webhook URL that you can send your conversion data to (e.g., from a web form, CRM, or backend). Once it receives the data, it:
1. **Sanitizes User Data:** Cleans and normalizes PII like email and phone numbers.
2. **Hashes PII:** Securely hashes the user data using SHA-256 to meet Meta's privacy requirements.
3. **Formats the Payload:** Assembles all the data, including click IDs (fbc, fbp) and user info, into the exact format required by the Meta CAPI.
4. **Sends the Event:** Makes a direct, server-to-server call to Meta, reliably logging your conversion event.

A sketch of the normalization and hashing step appears at the end of this listing.

## 👥 Who's it for?
- **Performance Marketers:** Improve ad performance and ROAS with more accurate conversion data.
- **Lead Generation Businesses:** Reliably track form submissions as conversions.
- **E-commerce Stores:** Send purchase events from your backend to ensure nothing gets missed.
- **Developers:** A ready-to-use template for implementing server-side tracking without writing custom code from scratch.

## 🛠️ How to set up
Setup is straightforward. You'll need your Meta Pixel ID and a CAPI Access Token. For a complete walkthrough, check out the tutorial video for this workflow on YouTube: https://youtu.be/_fdMPIYEvFM

The basic steps are to copy the webhook URL, configure your form or backend to send the correct data payload, and add your Meta Pixel ID and Access Token to the final HTTP Request node.

👉 For a detailed, step-by-step guide, please refer to the yellow sticky note inside the workflow.
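Here is a minimal sketch of steps 1-3 (sanitize, hash, format) as an n8n Code node. The event defaults and input field names are illustrative; the user_data structure follows Meta's published CAPI format, with hashed em/ph values and unhashed fbc/fbp click IDs.

```javascript
// A minimal sketch of the sanitize + hash + format steps for Meta CAPI.
// Uses Node's built-in crypto module for SHA-256; on self-hosted n8n,
// Code nodes may need NODE_FUNCTION_ALLOW_BUILTIN=crypto to permit this.
const crypto = require('crypto');
const sha256 = (v) => crypto.createHash('sha256').update(v).digest('hex');

// Normalize PII before hashing, per Meta's requirements:
// emails lowercased and trimmed, phones reduced to digits with country code.
const email = ($json.email || '').trim().toLowerCase();
const phone = ($json.phone || '').replace(/\D/g, '');

return [{
  json: {
    data: [{
      event_name: $json.eventName || 'Lead', // illustrative default
      event_time: Math.floor(Date.now() / 1000),
      action_source: 'website',
      user_data: {
        em: email ? [sha256(email)] : undefined,
        ph: phone ? [sha256(phone)] : undefined,
        fbc: $json.fbc, // click ID is sent unhashed
        fbp: $json.fbp, // browser ID is sent unhashed
      },
    }],
  },
}];
```

The final HTTP Request node then POSTs this body to the Graph API events endpoint for your Pixel ID, authenticated with your CAPI Access Token.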
by Atta
## What it does
Instead of manually checking separate apps for your calendar, weather, and news each morning, this workflow consolidates the most important information into a single, convenient audio briefing. The "Good Morning Podcast" is designed to be a 3-minute summary of your day ahead, delivered directly to you. It's multilingual and customizable, allowing you to start your day informed and efficiently.

## How it works
The workflow executes in three parallel branches before merging the data to generate the final audio file.
1. **Weather Summary:** It starts by taking a user-provided city and fetching the current 15-hour forecast from OpenWeatherMap. It formats this information into a concise weather report.
2. **Calendar Summary:** It securely connects to your Google Calendar to retrieve all of today's scheduled meetings and events, then formats the schedule into a clear, readable summary.
3. **News Summary:** It connects to the NewsAPI to perform two tasks: it fetches the top general headlines and also searches for articles based on user-defined keywords (e.g., "AI", "automation", "space exploration"). The collected headlines are then summarized using a Google Gemini node to create a brief news digest.
4. **Audio Generation and Delivery:** All three text summaries (weather, calendar, and news) are merged into a single script (see the sketch at the end of this listing). The workflow uses Google's Text-to-Speech (TTS) to generate the raw multi-speaker audio. A dedicated FFmpeg node then processes and converts this audio into the final MP3 format. The completed podcast is then sent directly to you via a Telegram bot.

## Setup Instructions
To get this workflow running, you will need to configure credentials for each of the external services and set your initial parameters.

⚠️ **Important Prerequisite — Install FFmpeg:** The workflow requires the FFmpeg software package to be installed on the machine running your n8n instance (local or server). Please ensure it is installed and accessible in your system's PATH before running this workflow.

**Required Credentials**
- **OpenWeatherMap:** Sign up for a free account at OpenWeatherMap and get your API key. Add the API key to your n8n OpenWeatherMap credentials.
- **Google Calendar & Google AI (Gemini/TTS):** You will need Google OAuth2 credentials for the Google Calendar node, plus credentials for the Google AI services (Gemini and Text-to-Speech). Follow the n8n documentation to create and add these credentials.
- **NewsAPI:** Get a free API key from NewsAPI.org and add it to your n8n NewsAPI credentials.
- **Telegram:** Create a new bot by talking to the BotFather in your Telegram app. Copy the Bot Token it provides and add it to your n8n Telegram credentials. Send a message to your new bot and get your Chat ID from the Telegram Trigger node or another method; you will need this for the Telegram send node.

**Workflow Inputs**
In the first node (or when you run the workflow manually), you must provide the following initial data:
- **name:** Your first name for a personalized greeting.
- **city:** The city for your local weather forecast (e.g., "Amsterdam").
- **language:** The language for the entire podcast output (e.g., "en-US", "nl-NL", "fa-IR").
- **news_keywords:** A comma-separated list of topics you are interested in for the news summary (e.g., "n8n,AI,technology").

## How to Adapt the Template
This workflow is highly customizable. Here are several ways you can adapt it to fit your needs:

**Triggers**
- **Automate It:** The default trigger is manual. Change it to a **Schedule Trigger** to have your podcast automatically generated and sent to you at the same time every morning (e.g., 7:00 AM).

**Content Sources**
- **Weather:** In the "User Weather Map" node, you can change the forecast type or switch the units from metric to imperial.
- **Calendar:** In the "Get Today Meetings" node, you can select a different calendar from your Google account (e.g., a shared work calendar instead of your personal one).
- **News:** In the "Get Headlines From News Sources" node, change the country or category to get different top headlines. In the "Get Links From Keywords" node, update your keywords to track different topics. In the "Aggregate Headlines" (Gemini) node, you can modify the prompt to change the tone or length of the AI-generated news summary.

**Audio Generation**
- **Voice & Language:** The language is a starting parameter, but you can go deeper into the Google TTS nodes (Generate Virtual Parts, etc.) to select specific voices, genders, and speaking rates to create a unique podcast host style.
- **Scripting:** Modify the Set and Merge nodes that construct the final script. You can easily change the greeting, the transition phrases between sections, or the sign-off message.

**Delivery**
- **Platform:** Don't use Telegram? Swap the Telegram node for a Slack node, Discord node, or even an Email node to send the MP3 file to your preferred platform.
- **Message:** Customize the text message that is sent along with the audio file in the final node.
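To illustrate step 4 of "How it works", here is a hedged sketch of the Code-node logic that merges the three branch summaries into the final podcast script. The node names referenced and the transition phrases are assumptions you would adapt to your own workflow.

```javascript
// A minimal sketch of merging the three summaries into one TTS script.
// Node names ('Weather Summary', etc.) and field names are illustrative.
const name = $('Workflow Inputs').first().json.name || 'there';
const weather = $('Weather Summary').first().json.text;
const calendar = $('Calendar Summary').first().json.text;
const news = $('News Summary').first().json.text;

// Greeting, three sections with transitions, and a sign-off; edit freely.
const script = [
  `Good morning, ${name}! Here is your three-minute briefing.`,
  `First, the weather. ${weather}`,
  `Next, your schedule for today. ${calendar}`,
  `And finally, the news. ${news}`,
  `That's all for today. Have a great morning!`,
].join('\n\n');

return [{ json: { script } }];
```

The resulting script field then feeds the Google TTS nodes, and the FFmpeg step converts the raw audio to MP3.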
by Tomohiro Goto
## 🧠 How it works
This workflow automatically transcribes and translates voice messages from Telegram to Slack, enabling seamless communication between Japanese and English speakers.

In our real-world use case, our distributed team often sends short voice updates on Telegram — but most discussion happens on Slack. Before this workflow, we constantly asked:
- "Can someone write a summary of that voice message?"
- "I can't understand what was said — is there a transcript?"
- "Can we translate this audio for our English-speaking teammates?"

This workflow fixes that problem without changing anyone's communication habits. Built with n8n, OpenAI Whisper, and GPT-4o-mini, it automatically:
1. Detects when a voice message is posted on Telegram
2. Downloads and transcribes it via Whisper
3. Translates the text with GPT-4o-mini
4. Posts the result in Slack — with flags 🇯🇵→🇺🇸 and username attribution (a formatting sketch appears at the end of this listing)

## ⚙️ Features
- 🎧 Voice-to-text transcription using OpenAI Whisper
- 🌐 Automatic JA ↔ EN detection and translation via GPT-4o-mini
- 💬 Clean Slack message formatting with flags, username, and original text
- 🔧 Easy to customize: adjust target languages, tone, or message style
- ⚡ Typical end-to-end time: under 10 seconds for short audio clips

## 💼 Use Cases
- **Global teams** – Send quick voice memos in Telegram and share readable translations in Slack
- **Project coordination** – Record updates while commuting and post bilingual notes automatically
- **Remote check-ins** – Replace daily written reports with spoken updates
- **Cross-language collaboration** – Let English and Japanese teammates stay perfectly synced

## 💡 Perfect for
- **Bilingual creators and managers** working across Japan and Southeast Asia
- **AI automation enthusiasts** who love connecting voice and chat platforms
- **Teams using Telegram for fast communication** and Slack for structured workspaces

## 🧩 Notes
- Requires three credentials: TELEGRAM_BOT_TOKEN, OPENAI_API_KEY_HEADER, SLACK_BOT_TOKEN_HEADER
- Slack scopes: chat:write, files:write, channels:history
- You can change the translation direction or add languages in the "Detect Language" → "Translate (OpenAI)" nodes.
- Keep audio files under 25 MB for Whisper processing.
- Always export your workflow with credentials OFF before sharing or publishing.

✨ Powered by OpenAI Whisper × GPT-4o-mini × n8n × Telegram Bot API × Slack API. A complete multilingual voice-to-text bridge — connecting speech, translation, and collaboration across platforms. 🌍
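Here is a hedged sketch of the Slack message formatting step. It assumes an upstream node already returned the translation direction, the Whisper transcript, and the translated text; every field name and the channel value are illustrative.

```javascript
// A minimal sketch of the Slack message formatting Code node.
// Input fields (userName, direction, transcript, translation) are assumed
// to come from the Whisper and GPT-4o-mini steps earlier in the workflow.
const { userName, direction, transcript, translation } = $json;
const flags = direction === 'ja-en' ? '🇯🇵→🇺🇸' : '🇺🇸→🇯🇵';

const text = [
  `${flags} *Voice message from ${userName}*`,
  `> ${transcript}`,
  `*Translation:* ${translation}`,
].join('\n');

// Channel is illustrative; the Slack node posts with chat:write scope.
return [{ json: { channel: '#team-updates', text } }];
```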
by JinPark
## 🧩 Summary
Easily digitize and organize your business cards! This workflow allows you to upload a business card image, automatically extract contact information using Google Gemini's OCR & vision model, and save the structured data into a Notion database — no manual typing required. Perfect for teams or individuals who want to centralize client contact info in Notion after networking events or meetings.

## ⚙️ How it works
1. **Form Submission** — Upload a business card image (.jpg, .png, or .jpeg) through an n8n form. Optionally select a category (e.g., Partner, Client, Vendor).
2. **AI-Powered OCR (Google Gemini)** — The uploaded image is sent to Google Gemini Vision for intelligent text recognition and entity extraction. Gemini returns structured text data such as:

```json
{
  "Name": "Jung Hyun Park",
  "Position": "Head of Development",
  "Phone": "021231234",
  "Mobile": "0101231234",
  "Email": "abc@dc.com",
  "Company": "TOV",
  "Address": "6F, Donga Building, 212, Yeoksam-ro, Gangnam-gu, Seoul",
  "Website": "www.tov.com"
}
```

3. **JSON Parsing & Cleanup** — The text response from Gemini is cleaned and parsed into a valid JSON object using a Code node (see the sketch at the end of this listing).
4. **Save to Notion** — The parsed data is automatically inserted into your Notion database (Customer Business Cards). Fields such as Name, Email, Phone, Address, and Company are mapped to Notion properties.

## 🧠 Used Nodes
- **Form Trigger** – Captures the uploaded business card and category input
- **Google Gemini (Vision)** – Extracts contact details from the image
- **Code** – Parses Gemini's output into structured JSON
- **Notion** – Saves extracted contact info to your Notion database

## 📦 Integrations

| Service | Purpose | Node Type |
|----------|----------|-----------|
| Google Gemini (PaLM) | Image-to-text extraction (OCR + structured entity parsing) | @n8n/n8n-nodes-langchain.googleGemini |
| Notion | Contact data storage | n8n-nodes-base.notion |

## 🧰 Requirements
- A connected Google Gemini (PaLM) API credential
- A Notion integration with edit access to your database

## 🚀 Example Use Cases
- Digitize stacks of collected business cards after a conference
- Auto-save new partner contacts to your CRM database in Notion
- Build a searchable Notion-based contact directory
- Combine with Notion filters or rollups to manage client relationships

## 💡 Tips
- You can easily extend this workflow by adding an email notification node to confirm successful uploads.
- For multilingual cards, Gemini Vision handles mixed-language text recognition well.
- Adjust the Gemini model (gemini-1.5-flash or gemini-1.5-pro) based on your accuracy vs. speed needs.

## 🧾 Template Metadata

| Field | Value |
|-------|--------|
| Category | AI + Notion + OCR |
| Difficulty | Beginner–Intermediate |
| Trigger Type | Form Submission |
| Use Case | Automate business card digitization |
| Works with | Google Gemini, Notion |
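For step 3, here is a minimal sketch of the cleanup-and-parse Code node. Vision models often wrap JSON in markdown code fences, so the sketch strips those before parsing; the input field name is an assumption to match your Gemini node's output.

```javascript
// A minimal sketch of the JSON Parsing & Cleanup Code node.
// Gemini may wrap JSON in ```json fences or add surrounding prose.
const raw = $json.content ?? $json.text ?? '';

// Remove markdown code fences and leading/trailing whitespace.
const cleaned = raw
  .replace(/```json/gi, '')
  .replace(/```/g, '')
  .trim();

let card;
try {
  card = JSON.parse(cleaned);
} catch (e) {
  // Fall back to the first {...} block if extra prose surrounds the JSON.
  const match = cleaned.match(/\{[\s\S]*\}/);
  if (!match) throw new Error(`Could not parse Gemini output: ${e.message}`);
  card = JSON.parse(match[0]);
}

// Each key (Name, Email, Phone, ...) maps to a Notion property downstream.
return [{ json: card }];
```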
by AI/ML API | D1m7asis
## Who's it for
Teams and makers who want a plug-and-play vision bot: users send a photo in Telegram, the bot returns a concise description plus OCR text. No custom servers required — just n8n, a Telegram bot, and an AIMLAPI key.

## What it does / How it works
The workflow listens for new Telegram messages, fetches the highest-resolution photo, converts it to base64, normalizes the MIME type, and calls AIMLAPI (GPT-4o Vision) via the HTTP Request node using the OpenAI-compatible messages format with an image_url data URI (see the example payload at the end of this listing). The model returns a short caption and extracted text. The answer is sent back to the same Telegram chat.

## Requirements
- n8n instance (self-hosted or cloud)
- Telegram bot token (from @BotFather)
- AIMLAPI account and API key (OpenAI-compatible endpoint)

## How to set up
1. Create a Telegram bot with @BotFather and copy the token.
2. In n8n, add Telegram credentials (no hardcoded tokens in nodes).
3. Add AIMLAPI credentials with your API key (base URL: https://api.aimlapi.com/v1).
4. Import the workflow JSON and connect credentials in the nodes.
5. Execute the trigger and send a photo to your bot to test.

## How to customize the workflow
- Modify the vision prompt (e.g., add brand, language, or formatting rules).
- Switch models within AIMLAPI (any vision-capable model using the same messages schema).
- Add an IF branch for text-only messages (reply with guidance).
- Log usage to Google Sheets or a database (user id, file id, response).
- Add rate limits, user allowlists, or Markdown formatting in Telegram responses.
- Increase timeouts/retries in the HTTP Request node for long-running images.
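Here is a hedged example of the OpenAI-compatible request body the HTTP Request node could send to AIMLAPI. The model name and max_tokens value are illustrative, `{{ $json.base64Image }}` stands in for the base64 string produced earlier in the workflow, and the data URI's MIME type should match the normalized one.

```json
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Describe this image in one or two sentences, then extract any visible text (OCR)."
        },
        {
          "type": "image_url",
          "image_url": { "url": "data:image/jpeg;base64,{{ $json.base64Image }}" }
        }
      ]
    }
  ],
  "max_tokens": 500
}
```

The model's reply text is then passed to the Telegram send node for the originating chat.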
by Cheng Siong Chin
## How It Works
This workflow automates end-to-end legal contract review and compliance governance for legal teams, contract managers, and risk officers. It solves the problem of manually reviewing uploaded contracts for regulatory compliance, risk classification, and approval routing, a process that is time-consuming, inconsistent, and difficult to audit at scale.

Contracts are ingested via a webhook upload trigger and text is extracted immediately. The extracted content is passed to a Legal Governance Agent backed by shared memory, which coordinates three specialist components: a Contract Review Agent (using a dedicated review model and memory), a Compliance Validation Agent (referencing a Regulatory Database Tool and compliance model), and a Slack Alert Tool with structured output parsing. The agent classifies each contract by risk level and routes accordingly: critical alerts, high-risk alerts, and standard reviews each follow distinct paths. All contracts generate an audit record. High and critical cases trigger contract review tracking, approval requirement checks, approval log preparation, and a final summary. Risk clauses are extracted in parallel, split, and stored for downstream reference. A sketch of the structured output this routing could rely on appears at the end of this listing.

## Setup Steps
1. Import the workflow; configure the Contract Upload Webhook trigger URL.
2. Add AI model credentials to the Legal Governance Agent, Contract Review Agent, and Compliance Validation Agent nodes.
3. Connect Slack credentials to the Slack Alert Tool node.
4. Link Google Sheets credentials; set sheet IDs for the Contract Reviews, Approval Log, and Risk Clauses tabs.
5. Configure the Regulatory Database Tool with your compliance database API endpoint and credentials.

## Prerequisites
- OpenAI API key (or compatible LLM)
- Slack workspace with bot credentials
- Google Sheets with review and risk log tabs pre-created
- Regulatory database API endpoint access

## Use Cases
Legal teams auto-triaging uploaded vendor contracts by compliance risk level

## Customisation
Swap the Regulatory Database Tool endpoint to target jurisdiction-specific compliance frameworks (GDPR, CCPA, MAS)

## Benefits
Eliminates manual contract triage, reducing review cycle time significantly
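To make the routing concrete, here is a hedged example of the kind of JSON the structured output parser could enforce on the Legal Governance Agent's classification. Every field name and value below is illustrative rather than taken from the workflow itself.

```json
{
  "contract_id": "upload-2024-0183",
  "risk_level": "high",
  "requires_approval": true,
  "risk_clauses": [
    {
      "clause": "12.3",
      "issue": "Unlimited liability for data breaches",
      "framework": "GDPR"
    }
  ],
  "summary": "Vendor DPA with non-standard liability and cross-border data-transfer terms."
}
```

With a schema like this, the routing step only needs to branch on risk_level, while the risk_clauses array feeds the parallel clause extraction, splitting, and storage path.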