by Databox
Your paid ads and website analytics live in separate tools. This workflow bridges both via Databox MCP, runs three specialized AI agents in sequence, and emails a daily intelligence report with a correlation layer that surfaces insights neither dataset could show alone.

**Who's it for**

- **Performance marketers** who want to understand how ads influence website quality
- **Growth teams** looking for daily cross-channel signals without building custom dashboards
- **Marketing managers** who need one morning briefing covering paid spend and website behavior

**How it works**

1. Schedule Trigger fires every day at 8 AM
2. Agent 1 fetches website performance from Databox: sessions, bounce rate, goal completions, conversion rate
3. Agent 2 fetches paid channel data from Databox: spend, CPC, CTR, ROAS per platform
4. Agent 3 synthesizes both outputs - ranks channel efficiency, estimates cost per quality visit, and writes 3 actionable recommendations
5. A styled HTML email report is delivered to your inbox

**Requirements**

- **Databox account** with website analytics and at least one paid ads platform connected (free plan works)
- OpenAI API key (or Anthropic)
- Gmail account

**How to set up**

1. Click each Databox MCP Tool node - set Authentication to OAuth2 and authorize
2. Add your OpenAI API key to each of the three Chat Model nodes
3. Connect Gmail and set the recipient address in the Send Email node
4. Activate - your first report arrives tomorrow at 8 AM
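The template does not publish Agent 3's exact "cost per quality visit" formula. A minimal sketch of one plausible definition, assuming a quality visit is a non-bounced session (an assumption, not the template's actual logic):

```javascript
// Illustrative only: "quality visits" are assumed to be non-bounced sessions.
function costPerQualityVisit({ spend, sessions, bounceRate }) {
  const qualityVisits = sessions * (1 - bounceRate); // non-bounced sessions
  if (qualityVisits <= 0) return null; // avoid division by zero
  return spend / qualityVisits;
}

// Rank channels from most to least efficient (lowest cost per quality visit first)
function rankChannels(channels) {
  return [...channels].sort(
    (a, b) => costPerQualityVisit(a) - costPerQualityVisit(b)
  );
}
```

A channel spending $90 for 180 quality visits ($0.50 each) would rank above one spending $100 for 100 ($1.00 each), even if the second drove more raw sessions.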
by Cheng Siong Chin
**How It Works**

This workflow automates end-to-end carbon emissions monitoring, strategy optimisation, and ESG reporting using a multi-agent AI supervisor architecture in n8n. Designed for sustainability managers, ESG teams, and operations leads, it eliminates the manual effort of tracking emissions, evaluating reduction strategies, and producing compliance reports.

Data enters via scheduled pulls and real-time webhooks, then merges into a unified feed processed by a Carbon Supervisor Agent. Sub-agents handle monitoring, optimisation, policy enforcement, and ESG reporting. Approved strategies are auto-executed or routed for human sign-off. Outputs are consolidated and pushed to Slack, Google Sheets, and email, keeping all stakeholders informed. The workflow closes the loop from raw sensor data to actionable ESG dashboards with minimal human intervention.

**Setup Steps**

1. Connect scheduled trigger and webhook nodes to your emissions data sources.
2. Add credentials for Slack (bot token), Gmail (OAuth2), and Google Sheets (service account).
3. Configure the Carbon Supervisor Agent with your preferred LLM (OpenAI or compatible).
4. Set approval thresholds in the Check Approval Required node.
5. Map the Google Sheets document ID for the ESG report and KPI dashboard nodes.

**Prerequisites**

- OpenAI or compatible LLM API key
- Slack bot token
- Gmail OAuth2 credentials
- Google Sheets service account

**Use Cases**

Corporate sustainability teams automating monthly ESG reporting

**Customisation**

Swap LLM models per agent for cost or accuracy trade-offs

**Benefits**

Eliminates manual emissions data aggregation and report generation
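The Check Approval Required node's threshold logic is not published. One hypothetical shape for it, where a strategy is auto-executed only if it is both cheap enough and impactful enough (field names `estimatedCostUsd` and `emissionsDeltaTons` are assumptions, not the template's schema):

```javascript
// Hypothetical sketch of the "Check Approval Required" decision.
// All field and parameter names are illustrative assumptions.
function routeStrategy(strategy, { maxAutoCostUsd, minReductionTons }) {
  const autoOk =
    strategy.estimatedCostUsd <= maxAutoCostUsd &&
    strategy.emissionsDeltaTons >= minReductionTons;
  return autoOk ? "auto-execute" : "human-signoff";
}
```

Tightening either threshold routes more strategies to a human, which is the safer default when first deploying the workflow.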
by Avkash Kakdiya
**How it works**

This workflow starts whenever a new lead comes in through Typeform (form submission) or Calendly (meeting booking). It captures the lead’s information, standardizes it into a clean format, and checks the email domain. If it’s a business domain, the workflow uses AI to enrich the lead with company details such as industry, headquarters, size, and website. Finally, it merges all the data and automatically saves the enriched contact in HubSpot CRM.

**Step-by-step**

1. **Capture Leads**: The workflow listens for new form responses in Typeform or new invitees in Calendly. Both sources are merged into a single stream of leads.
2. **Standardize Data**: All incoming data is cleaned and formatted into a consistent structure: Name, Email, Phone, Message, and Domain.
3. **Filter Domains**: Checks the email domain. If it’s a free/public domain (like Gmail or Yahoo), the lead is ignored. If it’s a business domain, the workflow continues.
4. **AI Company Enrichment**: Sends the domain to an AI Agent (OpenAI GPT-4o-mini). AI returns structured company details:
   - Company Name
   - Industry
   - Headquarters (city & country)
   - Employee Count
   - Website
   - LinkedIn Profile
   - Short Company Description
5. **Merge Lead & AI Data**: Combines the original lead details with the AI-enriched company information. Adds metadata like timestamp and workflow ID.
6. **Save to HubSpot CRM**: Creates or updates a contact record in HubSpot. Maps enriched fields like company name, LinkedIn, website, and description.

**Why use this?**

- Automatically enriches every qualified lead with valuable company intelligence.
- Filters out unqualified leads with personal email addresses.
- Keeps your CRM updated without manual research.
- Saves time by centralizing lead capture, enrichment, and CRM sync in one flow.
- Helps sales teams focus on warm, high-value prospects instead of raw, unverified leads.
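The Filter Domains step amounts to a check against a list of free mail providers. A standalone sketch of that logic (the provider list here is illustrative; the template's actual list may differ):

```javascript
// Sketch of the free/public domain filter described in step 3.
// Extend the set to match the providers you actually see in your leads.
const FREE_DOMAINS = new Set([
  "gmail.com", "yahoo.com", "outlook.com", "hotmail.com", "icloud.com",
]);

function isBusinessLead(email) {
  const domain = email.split("@")[1]?.toLowerCase();
  return Boolean(domain) && !FREE_DOMAINS.has(domain);
}
```

Leads failing this check are dropped before the AI enrichment call, which also keeps token costs down.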
by Natnail Getachew
**Who’s it for**

This workflow is ideal for:

- Content creators producing daily historical or educational videos
- YouTube automation enthusiasts building AI-driven channels
- Educators sharing engaging historical facts in short-form video format
- Anyone creating an automated AI video pipeline with human approval

**How it works**

This workflow automates the full pipeline of generating and publishing historical videos:

1. Triggers daily at 1 AM and initializes retry tracking (maximum 3 attempts)
2. Fetches historical events for the current date and selects one randomly
3. Uses Google Gemini to generate a cinematic text-to-video script
4. Sends the prompt to fal.ai (Hunyuan LoRA) to generate a short video
5. Polls the generation status every 30 seconds until the video is ready
6. Downloads the generated video and sends it to Telegram with context
7. Waits for manual approval via Telegram
8. If approved → uploads the video to YouTube and sends a confirmation message
9. If declined → retries with a new event (up to 3 attempts total)

**How to set up**

1. Import the workflow into n8n
2. Configure your Telegram credentials
3. Set your Telegram Chat ID using a variable or Set node (avoid hardcoding)
4. Configure HTTP Header Auth credentials for fal.ai (API key required)
5. Set up Google Gemini API credentials
6. Connect your YouTube account using OAuth2
7. (Optional) Adjust the schedule time in the trigger node
8. Activate the workflow

**Requirements**

- n8n (cloud or self-hosted)
- fal.ai account and API key (for video generation)
- Google Gemini API access
- YouTube account with upload permissions
- Telegram account for approval notifications

**How to customize the workflow**

- Adjust retry limits in the retry logic node
- Modify video parameters (resolution, frames, aspect ratio) in the fal.ai request
- Change the script style by editing the Gemini prompt
- Replace the historical events API with another content source
- Customize Telegram messages or approval flow
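The approve/decline retry loop above can be modeled as a simple bounded loop. This is a simplified stand-in for the workflow's retry-tracking nodes, with `pickEvent` and `requestApproval` standing in for the events API and the Telegram approval step:

```javascript
// Simplified model of the approval/retry loop (max 3 attempts).
// pickEvent and requestApproval are stand-ins for the real nodes.
function runPipeline(pickEvent, requestApproval, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const event = pickEvent(); // random historical event for today's date
    if (requestApproval(event)) {
      return { status: "uploaded", event, attempt };
    }
  }
  return { status: "gave-up", attempts: maxAttempts };
}
```

After the third decline the run ends without an upload, which matches the "up to 3 attempts total" rule; raise `maxAttempts` to mirror a change in the retry logic node.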
by Cheng Siong Chin
**Introduction**

Upload invoices via Telegram, receive structured data instantly. Perfect for accountants and finance teams.

**How It Works**

Telegram bot receives invoices, downloads files, extracts data using OpenAI, then returns analysis.

**Workflow Template**

Telegram Trigger → Document Check → Get File → HTTP Download → AI Extract → Format Response → Send to Telegram

**Workflow Steps**

1. **Telegram Trigger**: Listens for uploads.
2. **Document Check**: Validates files; routes errors.
3. **Get File**: Retrieves metadata.
4. **HTTP Download**: Fetches content.
5. **AI Extract**: OpenAI parses invoice fields.
6. **Format Response**: Structures data.
7. **Send Analysis**: Delivers to chat.

**Setup Instructions**

1. **Telegram Bot**: Create via BotFather, add credentials.
2. **OpenAI Agent**: Add API key and extraction prompt.
3. **HTTP Node**: Set authentication.
4. **Parser**: Define invoice schema.
5. **Error Handling**: Configure fallbacks.

**Prerequisites**

- n8n instance
- Telegram Bot Token
- OpenAI API key

**Customization**

- Database storage
- Accounting software integration

**Benefits**

- Eliminates manual entry
- Reduces errors
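The "Define invoice schema" setup step is not shown in the template. A minimal field schema plus validator, under the assumption that the AI extracts vendor, invoice number, date, currency, and total (field names are hypothetical):

```javascript
// Hypothetical extraction schema for the AI parser output.
const INVOICE_FIELDS = ["vendor", "invoiceNumber", "date", "currency", "total"];

// Returns which required fields the parsed output is missing, if any.
function validateInvoice(parsed) {
  const missing = INVOICE_FIELDS.filter((f) => parsed[f] == null);
  return { valid: missing.length === 0, missing };
}
```

Running a check like this before the Format Response step is one way to implement the "routes errors" branch: incomplete extractions can be sent back to the user with the list of missing fields.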
by Guillaume Duvernay
Never worry about losing your n8n workflows again. This template provides a powerful, automated backup system that gives you the peace of mind of version control without the complexity of Git. On a schedule you define, it intelligently scans your n8n instance for new workflow versions and saves them as downloadable snapshots in a clean and organized Airtable base.

But it’s more than just a backup. This workflow uses AI to automatically generate a concise summary of what each workflow does and even documents the changes between versions. The result is a fully searchable, self-documenting library of all your automations, making it the perfect "single source of truth" for your team or personal projects.

**Who is this for?**

- **Self-hosted n8n users:** This is an essential insurance policy to protect your critical automations from server issues or data loss.
- **n8n developers & freelancers:** Maintain a complete version history for client projects, allowing you to easily review changes and restore previous versions.
- **Teams using n8n:** Create a central, browseable, and documented repository of all team workflows, making collaboration and handovers seamless.
- **Any n8n user who values their work:** Protect your time and effort with an easy-to-use, "set it and forget it" backup solution.

**What problem does this solve?**

- **Prevents catastrophic data loss:** Provides a simple, automated way to back up your most critical assets: your workflows.
- **Creates "no-code" version control:** Offers the benefits of version history (like Git) but in a user-friendly Airtable interface, allowing you to browse and download any previous snapshot.
- **Automates documentation:** Who has time to document every change? The AI summary and changelog features mean you always have up-to-date documentation, even if you forget to write it yourself.
- **Improves workflow discovery:** Your Airtable base becomes a searchable and browseable library of all your workflows and their purposes, complete with AI-generated summaries.
**How it works**

1. **Scheduled check:** On a recurring schedule (e.g., daily), the workflow fetches a list of all workflows from your n8n instance.
2. **Detect new versions:** It compares the current version ID of each workflow with the snapshot IDs already saved in your Airtable base. It only proceeds with new, unsaved versions.
3. **Generate AI documentation:** For each new snapshot, the workflow performs two smart actions:
   - **AI Changelog:** It compares the new workflow JSON with the previously saved version and uses AI to generate a one-sentence summary of what’s changed.
   - **AI Summary:** It periodically re-analyzes the entire workflow to generate a fresh, high-level summary of its purpose, ensuring the main description stays up-to-date.
4. **Store in Airtable:** It saves everything neatly in the provided two-table Airtable base:
   - A Workflows table holds the main record and the AI summary.
   - A linked Snapshots table stores the version-specific details, the AI changelog, and the actual .json backup file as an attachment.

**Setup**

1. **Duplicate the Airtable base:** Before you start, click here to duplicate the Airtable Base template into your own Airtable account.
2. **Configure the workflow:**
   - Connect your n8n API credentials to the n8n nodes.
   - Connect your Airtable credentials and map the nodes to the base you just duplicated.
   - Connect your AI provider credentials to the OpenAI Chat Model nodes.
   - Important: In the Store workflow file into Airtable (HTTP Request) node, you must replace <AIRTABLE-BASE-ID> in the URL with your own base ID (it starts with app...).
3. **Set your schedule:** Configure the Schedule Trigger to your desired frequency (daily is a good start).
4. **Activate the workflow.** Your automated, AI-powered backup system is now live!

**Taking it further**

- **Add notifications:** Add a **Slack** or **Email** node at the end of the workflow to send a summary of which workflows were backed up during each run.
- **Use different storage:** While designed for Airtable, you could adapt the logic to store the JSON files in **Google Drive** or **Dropbox** and the metadata in **Google Sheets** or **Notion**.
- **Optimize AI costs:** The **Check workflow status** (Code) node is set to regenerate the main AI summary for the first few snapshots and then every 5th snapshot. You can edit the code in this node to change this frequency and manage your token consumption.
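The regeneration rule in the Check workflow status node ("for the first few snapshots and then every 5th snapshot") could be written like this. The exact node code is not published, so the warmup count of 3 is an assumption:

```javascript
// Assumed regeneration rule: always refresh the AI summary for the
// first few snapshots (warmup), then only on every Nth snapshot.
function shouldRegenerateSummary(snapshotCount, { warmup = 3, every = 5 } = {}) {
  if (snapshotCount <= warmup) return true;
  return snapshotCount % every === 0;
}
```

Lowering `every` keeps summaries fresher at higher token cost; raising it does the opposite, which is exactly the trade-off the template invites you to tune.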
by Rahul Joshi
📊 **Description**

Monitor daily brand visibility and reputation with an automated AI-powered mention tracker. 🔍🤖 This workflow checks Hacker News every morning for new stories matching your brand keyword, classifies each mention’s sentiment and urgency using GPT-4o-mini, and delivers a clean daily summary to Slack. If no mentions are found, the workflow sends a simple “no mentions today” update instead, ensuring your team is always informed without manual monitoring. Perfect for reputation tracking, competitive intelligence, and early warning alerts. 📈💬

🔁 **What This Template Does**

1️⃣ Triggers every morning at 09:00 to begin the analysis. ⏰
2️⃣ Loads brand name + keyword filters from configuration. 🏷️
3️⃣ Fetches relevant mentions from Hacker News using the Algolia API. 🌐
4️⃣ Normalizes raw API data into clean fields (title, URL, snippet, author, points). 📄
5️⃣ Classifies each mention’s sentiment, stance, topic, and urgency using GPT-4o-mini. 🤖
6️⃣ Builds a ranked daily summary including top 10 mentions and sentiment totals. 📊
7️⃣ Sends the report to Slack, formatted cleanly and ready for team consumption. 💬
8️⃣ If no mentions exist, sends a simple “no new mentions today” message. 🚫
9️⃣ Includes an error handler that notifies Slack of any workflow failures. ⚠️

⭐ **Key Benefits**

✅ Automatically tracks brand presence without manual searching
✅ AI-powered sentiment & urgency analysis for deeper insights
✅ Clean Slack summaries keep teams aligned and aware
✅ Early detection of negative or high-urgency mentions
✅ Zero manual monitoring: runs fully on schedule
✅ Suitable for brand monitoring, PR, marketing, and leadership teams

🧩 **Features**

- Daily schedule trigger
- Hacker News API (Algolia) integration
- Structured data normalization
- GPT-4o-mini classification (sentiment, stance, topic, urgency)
- Slack notifications (detailed report or no-mention message)
- Error-handling pipeline with Slack alerts
- Fully configurable brand keywords

🔐 **Requirements**

- Slack API credentials
- OpenAI API key (GPT-4o-mini)
- No authentication required for the Hacker News API
- n8n with LangChain nodes enabled

🎯 **Target Audience**

- Brand monitoring & PR teams
- AI companies tracking public sentiment
- Founders monitoring mentions of their product
- Marketing teams watching trends & community feedback
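The fetch step uses Algolia's public Hacker News search API, which needs no authentication. A sketch of the query URL the HTTP node would call; scoping results to a 24-hour window via `numericFilters` is an assumption about how "new stories each morning" is implemented:

```javascript
// Build an Algolia HN search URL for stories matching a brand keyword,
// filtered to those created after a given Unix timestamp (assumed window).
function buildHnSearchUrl(keyword, sinceUnixSeconds) {
  const params = new URLSearchParams({
    query: keyword,
    tags: "story",
    numericFilters: `created_at_i>${sinceUnixSeconds}`,
  });
  return `https://hn.algolia.com/api/v1/search_by_date?${params}`;
}
```

The response's `hits` array carries the `title`, `url`, `author`, and `points` fields that the normalization step maps into clean records.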
by Rully Saputra
**Who’s it for**

This workflow is perfect for IT departments, helpdesk teams, or internal service units that manage incoming support requests through Jotform. It automates ticket handling, classification, and response, saving time and ensuring consistent communication.

**How it works**

When a new IT service request is submitted through Jotform, this workflow automatically triggers in n8n. The submitted details (name, department, category, comments, etc.) are structured and analyzed using Google Gemini AI to summarize and classify the issue’s priority level (P0–P2).

- **P0 (High)**: Urgent issues that send an immediate Telegram alert.
- **P1 (Medium) / P2 (Low)**: Logged in Google Sheets for tracking and reporting.

After classification, the workflow sends a confirmation email to the requester via Gmail, providing a summary of their submission and current status.

**How to set up**

1. Connect your Jotform account to the Jotform Trigger node.
2. Add your Google Sheets, Gmail, and (optionally) Telegram credentials.
3. Map your Jotform fields in the “Set” node (Full Name, Department, Category, etc.).
4. Test by submitting a form response.

**Requirements**

- Jotform account and published IT request form
- Google Sheets account
- Gmail account (for replies)
- Optional: Telegram bot for real-time alerts
- n8n account (cloud or self-hosted)

**How to customize the workflow**

- Adjust AI classification logic in the Priority Classifier node.
- Modify email templates for tone or format.
- Add filters or additional routing for different departments.
- Extend to integrate with your internal ticketing or Slack systems.
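The P0–P2 routing described above reduces to a small switch. The template implements this with n8n branch nodes rather than code; whether P0 tickets are also logged to Sheets is an assumption here:

```javascript
// Sketch of the priority routing: which destinations a classified
// ticket fans out to. P0 logging to Sheets is an assumed behavior.
function routeTicket(priority) {
  switch (priority) {
    case "P0": return ["telegram-alert", "google-sheets", "gmail-confirmation"];
    case "P1":
    case "P2": return ["google-sheets", "gmail-confirmation"];
    default:   return ["google-sheets"]; // unclassified: log for manual triage
  }
}
```

Adding a department-specific route, as the customization section suggests, would just mean extending this fan-out with another destination per case.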
by Ghufran Barcha
**Telegram AI Personal Assistant — Calendar & Email Manager**

This workflow turns a Telegram bot into a fully functional personal AI assistant capable of handling your schedule and inbox through natural conversation. Send it a text message, record a voice note, or snap a photo — it understands all three and responds intelligently.

The assistant is powered by Claude Haiku (via OpenRouter) and comes with a built-in 30-message memory buffer, so it remembers context across a conversation just like a real assistant would. It has full read/write access to Google Calendar and Gmail, meaning it can book meetings, check your availability, send emails, reply to threads, and clean up your inbox — all from a single Telegram chat.

**What this workflow does**

*Multi-modal input handling*

- Text messages are processed directly
- Voice notes are downloaded from Telegram and transcribed using OpenAI Whisper
- Photos are downloaded and analyzed using GPT-4o vision, with any caption included as additional context

All three input types are normalized into a unified context object before reaching the agent, so the AI always receives clean, structured input regardless of how the user communicated.

*Authorization layer*

Only the allowlisted Telegram User ID can interact with the assistant. Any unauthorized message receives an instant rejection and the workflow stops — no agent calls are made.

*AI agent with tools*

The LangChain agent receives the full context and decides autonomously whether to reply conversationally or invoke one of the connected tools. It uses the current date/time from n8n to handle scheduling requests accurately.

- Google Calendar tools: check availability, create events, list upcoming events, fetch a specific event, update event details, delete events.
- Gmail tools: send new emails, search the inbox, read a specific email, reply to a thread, delete messages.
*Persistent memory*

Each user's conversation is tracked using a sliding window of the last 30 messages, keyed by their Telegram User ID. The assistant remembers what was said earlier in the same session without needing reminders.

**Example use cases**

- "What do I have on Thursday?" → fetches and summarizes calendar events
- "Schedule a call with Ahmed tomorrow at 3pm" → creates a calendar event
- "Any emails from the client today?" → searches Gmail and summarizes results
- "Reply to John's last email and say I'll confirm by Friday" → reads the thread and sends a reply
- [sends a photo of a meeting invite] → extracts details from the image and creates a calendar event

**Setup instructions**

1. **Telegram API**: Create a bot via @BotFather and connect the token to the Telegram Trigger node and both send nodes.
2. **OpenAI API**: Required for Whisper voice transcription and GPT-4o image analysis.
3. **OpenRouter API**: Used to run Claude Haiku as the agent's language model. You can swap this for any OpenRouter-compatible model.
4. **Google Calendar OAuth2**: Authorize your Google account and update the calendar ID (currently set to an example address) in all six calendar tool nodes.
5. **Gmail OAuth2**: Authorize your Gmail account in all five Gmail tool nodes.
6. **User authorization**: Open the If node and replace the placeholder value with your own Telegram numeric User ID. You can find this by messaging @userinfobot on Telegram.

**Customization tips**

- To support multiple authorized users, replace the If node with a list-based check or a Code node that checks against an array of allowed IDs.
- To change the AI model, swap the OpenRouter Chat Model node — the agent prompt and all tools remain fully compatible.
- To adjust memory length, change the contextWindowLength value in the Simple Memory node (currently 30 messages).
- To modify the assistant's personality or add new instructions, edit the system prompt inside the MainAgent node.
- Additional tools (e.g. Notion, Slack, Airtable) can be connected to the MainAgent node as sub-nodes without changing any other part of the workflow.
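The multi-user customization tip above, replacing the If node with a Code node that checks an array of allowed IDs, could be sketched as:

```javascript
// Code-node-style allowlist check for multiple Telegram users.
// Replace these placeholder IDs with your own numeric Telegram User IDs.
const ALLOWED_USER_IDS = [111111111, 222222222];

function isAuthorized(userId) {
  // Telegram may deliver the ID as a string, so coerce before comparing.
  return ALLOWED_USER_IDS.includes(Number(userId));
}
```

Unauthorized IDs fall through to the same instant-rejection branch as in the single-user setup, so no agent calls are made for them.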
by Madame AI
**Auto-reply to Telegram messages using BrowserAct & Google Gemini**

This workflow acts as a smart, 24/7 personal assistant for your Telegram chats. It runs on a schedule to monitor your message history, uses AI to decide if a reply is necessary, drafts a personalized response, and sends it back to the user, all while handling delivery verification and potential CAPTCHA challenges via BrowserAct.

**Target Audience**

Community managers, busy professionals, and customer support teams who need to manage Telegram communications efficiently.

**How it works**

1. **Scheduled Check**: Every 15 minutes, the workflow triggers BrowserAct to fetch the latest chat history.
2. **Analysis**: An AI Agent (using Google Gemini) reviews the conversation. It determines if the last message requires a response (e.g., a question) or if the chat is idle.
3. **Drafting**: If a reply is needed, the AI drafts a personalized message that includes the user's name and a standard footer disclaimer.
4. **Formatting**: A Code node cleans up the text to ensure proper line breaks and formatting for Telegram.
5. **Delivery**: BrowserAct executes the task to send the drafted reply. The workflow loops to check the task status, ensuring the message is delivered successfully.

**How to set up**

1. **Configure Credentials**: Connect your BrowserAct and Google Gemini accounts in n8n.
2. **Prepare BrowserAct**: Ensure the Telegram Personal Assistant template is saved in your BrowserAct account.
3. **Set Schedule**: The default trigger interval is 15 minutes. Adjust the Schedule Trigger node if you need a different frequency.
4. **Activate**: Turn on the workflow to start monitoring your chats.

**Requirements**

- **BrowserAct** account with the **Telegram Personal Assistant** template.
- **Google Gemini** account.
- **Telegram** account (accessed via BrowserAct).

**How to customize the workflow**

- **Change AI Persona**: Modify the system prompt in the Chatting & Answering AI agent to change the tone from "Professional Support" to "Casual Assistant" or "Sales Representative."
- **Adjust Frequency**: Change the Schedule Trigger interval to run every 5 minutes for faster responses or hourly for less urgency.
- **Add Notification**: Add a Slack or Email node after the delivery step to get notified whenever the bot sends a reply.

**Need Help?**

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- Workflow Guidance and Showcase Video: Telegram Personal Assistant: Auto-Read Chats & Auto-Reply them with n8n
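The Formatting step's Code node "cleans up the text to ensure proper line breaks" but its contents are not shown. A plausible version (an assumption about what it does) converts escaped newlines from the LLM output into real ones and trims stray whitespace:

```javascript
// Assumed cleanup for Telegram delivery: turn literal "\n" sequences
// into real newlines, cap consecutive blank lines, and trim the edges.
function formatForTelegram(text) {
  return text
    .replace(/\\n/g, "\n")       // literal backslash-n -> newline
    .replace(/\n{3,}/g, "\n\n")  // at most one blank line in a row
    .trim();
}
```

This kind of normalization matters because LLMs often emit `\n` as two literal characters inside a JSON string, which would otherwise appear verbatim in the Telegram message.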
by Cheng Siong Chin
**How It Works**

This workflow automates personalized customer journeys by analyzing CRM data, purchase history, chat interactions, and performance metrics to intelligently route customer actions through multiple channels (email, SMS, retargeting) via AI-optimized schemas.

A webhook trigger initiates the process by fetching CRM customer data, which is then merged with historical records and interactions. OpenAI builds comprehensive customer state profiles, enabling intelligent routing to appropriate channels using optimized journey logic. The system aggregates performance metrics in real time and updates the database to maintain synchronized customer information across all systems.

**Setup Steps**

1. Connect CRM credentials (source system)
2. Add the OpenAI API key for the state builder
3. Configure Gmail/SMS provider credentials
4. Add a Google Sheets connection for performance tracking
5. Set the Touchpoint Event Webhook URL
6. Map the database connection for customer state persistence

**Prerequisites**

OpenAI API key, CRM access, Gmail/SMS provider accounts, Google Sheets, database (PostgreSQL/MySQL), n8n instance with webhook enabled.

**Use Cases**

E-commerce personalization, SaaS customer retention, multi-touch marketing automation

**Customization**

Modify journey schemas in the Journey Optimizer AI; adjust routing rules in the Action Type Router

**Benefits**

Reduces manual campaign management by 80%; improves conversion via AI personalization
by WeblineIndia
**Daily Market Brief Automation** > n8n, Gemini AI, Nifty 50 API & Gmail

This workflow automatically generates and emails a concise daily stock market brief at 9:30 AM using real-time Nifty 50 data, financial news and AI-powered insights.

**Quick Implementation Steps**

1. Import the workflow JSON into your n8n account
2. Configure Google Gemini API credentials
3. Set your recipient email in the Gmail node
4. Activate the workflow
5. Sit back and receive daily market summaries automatically

**What It Does**

This workflow automates the creation of a daily stock market brief by combining real-time stock data and financial news with AI-driven insights. Every day at 9:30 AM, the workflow fetches the latest Nifty 50 stock prices and financial news from public APIs. It processes stock data to identify top gainers and losers, while simultaneously analyzing news sentiment using Google Gemini AI. Finally, it generates a crisp, human-readable market summary (under 120 words) and sends it via email, ensuring you stay informed without manual effort.

**Who It's For**

- Stock market traders and investors
- Financial analysts
- Portfolio managers
- Business owners tracking market trends
- Anyone who wants a quick daily market snapshot

**Requirements**

To use this workflow, you need:

- n8n account (self-hosted or cloud)
- Google Gemini API credentials
- Gmail account connected in n8n
- Access to public APIs: Market News API and Nifty 50 Stock Quotes API

**How It Works & Setup Instructions**

**Step 1: Schedule Trigger**

- Node: Market Open Scheduler
- Triggers the workflow daily at 9:30 AM
- You can modify the timing based on your market preference.
**Step 2: Fetch Data**

- **Stock Prices**: Node: Fetch Nifty 50 Stock Prices. Fetches real-time prices for predefined Nifty 50 stocks.
- **Market News**: Node: Fetch Market News. Retrieves the latest financial news.

**Step 3: Process Stock Data**

- **Extract Price Data**: Extracts the prices object from the API response.
- **Normalize Price Data**: Converts the object into array format.
- **Calculate Gainers & Losers**: Sorts stocks by percentage change and identifies the Top 10 Gainers and Top 10 Losers.

**Step 4: Analyze News with AI**

- **Analyze News Sentiment (AI)**: Uses Google Gemini to extract stock names, determine sentiment (Bullish / Bearish / Mixed), and provide short reasoning.
- **Parse News Response**: Converts AI output into structured JSON.

**Step 5: Merge & Validate Data**

- **Merge Market Data**: Combines stock insights and news analysis.
- **Validate Data**: Ensures data is not empty before proceeding.
- **Aggregate Dataset**: Combines all outputs into a single structure.
- **Structure Final Payload**: Prepares top gainers, top losers, and news insights.

**Step 6: Generate AI Market Brief**

- Node: Generate Market Brief (AI)
- Uses Gemini AI to generate a Nifty trend summary, news highlights, and top gainers & losers.
- Output format: max 120 words, bullet style, clean and concise.

**Step 7: Send Email**

- Node: Send Email Report
- Required setup: add the recipient email in the sendTo field and configure Gmail credentials.

**How To Customize Nodes**

- **Modify Stock List**: Update symbols inside the Fetch Nifty 50 Stock Prices node JSON body.
- **Change Schedule**: Edit Market Open Scheduler and adjust the trigger time as needed.
- **Customize AI Prompts**: Modify prompts in Analyze News Sentiment (AI) and Generate Market Brief (AI). You can increase the word limit, change the tone (formal/casual), or add more insights.
- **Customize Email**: Update the subject line in the Gmail node, or switch to HTML format if needed.

**Add-ons (Enhancements)**

- Send alerts to Slack/Telegram instead of email
- Store reports in Google Sheets or a database
- Add historical trend comparison
- Include sector-wise analysis
- Add chart/graph attachments
- Multi-recipient distribution list

**Use Case Examples**

- **Daily Investor Briefing**: Receive quick updates before market decisions
- **Portfolio Monitoring**: Track gainers/losers affecting your holdings
- **Financial Newsletter Automation**: Use output as ready-to-send content
- **Trading Desk Alerts**: Provide morning insights to trading teams
- **Market Research Support**: Automate initial research summaries

There can be many more use cases depending on how you extend this workflow.

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No email received | Gmail not configured | Add credentials and recipient email |
| Empty market brief | API returned no data | Check API endpoints and responses |
| AI output not parsed | Incorrect JSON format from AI | Verify parsing logic in the "Parse News Response" node |
| Workflow not triggering | Scheduler misconfigured | Check trigger time and timezone |
| Incorrect gainers/losers | Data structure mismatch | Validate API response format |

**Need Help?**

If you need assistance with setting up this workflow in your environment, customizing AI prompts or logic, adding advanced features or integrations, or building similar automation workflows, 👉 reach out to WeblineIndia for expert support. We’re happy to help you automate smarter and scale faster.
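The Calculate Gainers & Losers step above amounts to one sort over percentage change on the normalized rows. A standalone sketch, where the `changePct` field name is an assumption about the normalized data shape:

```javascript
// Sort normalized price rows by percent change and split into
// top gainers and top losers. `changePct` is an assumed field name.
function gainersAndLosers(rows, topN = 10) {
  const sorted = [...rows].sort((a, b) => b.changePct - a.changePct);
  return {
    gainers: sorted.slice(0, topN),                // biggest positive moves
    losers: sorted.slice(-topN).reverse(),         // biggest negative moves
  };
}
```

With fewer than `topN` rows both lists simply contain everything, which matches why the Validate Data step checks for empty input before this point.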