by Supira Inc.
Overview

This template automates invoice processing for teams that currently copy data from PDFs into spreadsheets by hand. It is ideal for small businesses, back-office teams, accounting, and operations staff who want to reduce manual entry, avoid human error, and never miss a payment deadline. The workflow watches a structured Google Drive folder, performs OCR, converts the text into clean structured JSON with an LLM, and appends one row per invoice into Google Sheets. It preserves a link back to the original file for easy review and audit.

- **Designed for small businesses and back-office teams.**
- **Eliminates manual typing** and reduces errors.
- **Prevents missed due dates** by centralizing data.
- Works with monthly subfolders like "2025年10月分" (meaning "October 2025").
- Keeps a Google Drive link to each invoice file.

How It Works

The workflow runs on a schedule, scans your Drive folder hierarchy, OCRs the PDFs/images, cleans the text, extracts key fields with an LLM, and appends a row to Google Sheets per invoice. Each step is modular, so you can swap services or tweak prompts without breaking the flow.

1. **Scheduled trigger** runs on a recurring cadence.
2. **Scan the parent folder** in Google Drive.
3. **Auto-detect the current-month folder** (e.g., a folder named "2025年10月分", meaning "October 2025").
4. **Download PDFs/images** from the detected folder.
5. **Extract text** using the OCR.Space API.
6. **Clean noise** and normalize with a Code node.
7. **Use an OpenAI model** to extract invoice_date, due_date, client_name, line items, totals, and bank info as JSON.
8. **Append one row per invoice** to Google Sheets.

Requirements

Before you start, make sure you have access to the required services and that your Drive is organized into monthly subfolders so the workflow can find the right files.

- **n8n account.**
- **Google Drive access.**
- **Google Sheets access.**
- **OCR.Space API key** (set as `<your_ocr_api_key>`).
- **OpenAI / LLM API credential** (e.g., `<your_openai_credential_name>`).
- **Invoice PDFs organized by month** on Google Drive (e.g., folders like "2025年10月分").

Setup Instructions

Import the workflow, replace placeholder credentials and IDs with your own, and enable the schedule. You can also run it manually for testing. The parent-folder query and sheet ID must reflect your environment.

1. Replace `<your_google_drive_credential_id>` and `<your_google_drive_credential_name>` with your Google Drive credential.
2. Adjust the parent folder search query to your invoice repository name.
3. Replace the Sheets document ID `<your_google_sheet_id>` with your spreadsheet ID.
4. Ensure your OpenAI credential `<your_openai_credential_name>` is selected.
5. Set your OCR.Space key as `<your_ocr_api_key>`.
6. **Enable the Schedule Trigger** after testing.

Customization

This workflow is easily extensible. You can adapt folder naming rules, enrich the spreadsheet schema, and expand the AI prompt to extract custom fields specific to your company. It also works beyond invoices, covering receipts, quotes, or purchase orders with minor changes.

- Change the monthly folder naming rule, such as `{{$now.setZone("Asia/Tokyo").format("yyyy年MM月")}}分`, to match your convention (see the sketch below).
- Modify or extend Google Sheets column mappings as needed.
- Tune the AI prompt to extract project codes, owner names, or custom fields.
- Repurpose for receipts, quotes, or purchase orders.
- Localize date formats and tax calculation rules to your standards.
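As a reference for the folder rule above, here is a minimal Code-node sketch of the current-month folder detection. It assumes the preceding Drive node outputs one item per folder with a `name` field; note that Code nodes use Luxon's `toFormat` rather than the `format` helper shown in the expression syntax.

```javascript
// Minimal sketch of current-month folder detection (Code node).
// Assumes the previous Drive node returns one item per folder with a `name` field.
const targetName = $now.setZone('Asia/Tokyo').toFormat('yyyy年MM月') + '分'; // e.g. "2025年10月分"

// Keep only the folder whose name matches the current month.
const match = $input.all().filter(item => item.json.name === targetName);

if (match.length === 0) {
  throw new Error(`No folder named "${targetName}" found; check your naming convention.`);
}

return match;
```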
by Vlad Arbatov
Summary

Chat with your AI agent in Telegram. It remembers important facts about you in Airtable, can transcribe your voice messages, search the web, read and manage Google Calendar, fetch Gmail, and query Notion. Responses are grounded in your recent memories and tool outputs, then sent back to Telegram.

What this workflow does

- Listens to your Telegram messages (text or voice)
- Maintains short-term chat memory per user and long-term memory in Airtable
- Decides when to save new facts about you (auto "Save Memory" without telling you)
- Uses tools on demand:
  - Web search via SerpAPI
  - Google Calendar: list/create/update/delete events
  - Gmail: list and read messages
  - Notion: fetch database info
- Transcribes Telegram voice notes with OpenAI and feeds them to the agent
- Combines live tool results + recent memories and replies in Telegram

Apps and credentials

- Telegram Bot API: personal_bot
- xAI Grok: Grok-4 model for chat
- OpenAI: speech-to-text (transcribe audio)
- Airtable: store long-term memories
- Google Calendar: calendar actions
- Gmail: email actions
- Notion: knowledge and reading lists
- SerpAPI: web search

Typical use cases

- Personal assistant that remembers preferences, decisions, and tasks
- Create/update meetings by chatting, and get upcoming events
- Ask "what did I say I'm reading?" or "what's our plan from last week?"
- Voice-first capture: send a voice note → get a transcribed, actionable reply
- Fetch recent emails or look up info on the web without leaving Telegram
- Query a Notion database (e.g., "show me the Neurocracy entries")

How it works (node-by-node)

1. Telegram Trigger: Receives messages from your Telegram chat (text and optional voice).
2. Text vs Message Router: Routes based on message contents. Text path → goes directly to the Agent (AI). Voice path → downloads the file and transcribes it before the AI. It always also fetches recent Airtable memories for context.
3. Get a file (Telegram): Downloads the voice file (voice.file_id) when present.
4. Transcribe a recording (OpenAI): Converts audio to text so the agent can use it like a normal message.
5. Get memories (Airtable): Searches your "Agent Memories" base/table, filtered by user, sorted by Created.
6. Aggregate: Bundles recent memory records into a single array "Memories" with text + timestamp (see the sketch below).
7. Merge: Combines the current input (text or transcript) with the memory bundle before the agent.
8. Simple Memory (agent memory window): Short-term session memory keyed by Telegram chat ID; keeps the recent 30 turns.
9. Tools wired into the agent:
   - SerpAPI
   - Google Calendar tools: Get many events, Create an event, Update an event, Delete an event
   - Gmail tools: Get many messages, Get a message
   - Notion tool: Get a database
   - Airtable tool: Save Memory (stores distilled facts about the user)
10. Agent: The system prompt defines role, tone, and rules: be a friendly assistant; on each message, decide if it contains user info worth saving; if yes, call "Save Memory" to persist a short summary in Airtable; don't announce memory saves, just continue helping; use tools when needed (web, calendar, Gmail, Notion); think with the provided memory context block. Uses the xAI Grok Chat Model for reasoning and tool-calling, and can call Save Memory, Calendar, Gmail, Notion, and SerpAPI tools as needed.
11. Save Memory (Airtable): Persists Memory and User fields to the "Agent Memories" base; the timestamp is auto-set by Airtable.
12. Send a text message (Telegram): Sends the agent's final answer back to the same Telegram chat ID.
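For clarity, the Aggregate step's output can be pictured as the following Code-node equivalent. The `Memory` and `Created` field names match the Airtable schema described below, but treat this as a sketch rather than the node's exact configuration.

```javascript
// Illustrative Code-node equivalent of the Aggregate step, assuming the
// Airtable "Get memories" node returns records with `Memory` and `Created` fields.
const memories = $input.all().map(item => ({
  text: item.json.Memory,
  createdAt: item.json.Created,
}));

// One item carrying the whole bundle, ready to merge with the user input.
return [{ json: { Memories: memories } }];
```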
Node map

| Node | Type | Purpose |
|---|---|---|
| Telegram Trigger | Trigger | Receive text/voice from Telegram |
| Text vs voice router | Flow control | Route text vs voice; also trigger memories fetch |
| Get a file | Telegram | Download voice audio |
| Transcribe a recording | OpenAI | Speech-to-text for voice notes |
| Get memories | Airtable | Load recent user memories |
| Aggregate | Aggregate | Pack memory records into "Memories" array |
| Merge | Merge | Combine input and memories before agent call |
| Simple Memory | Agent memory | Short-term chat memory per chat ID |
| xAI Grok Chat Model | LLM | Core reasoning model for the Agent |
| Search Web with SerpAPI | Tool | Web search |
| Google Calendar tools | Tool | List/create/update/delete events |
| Gmail tools | Tool | Search and read email |
| Notion tool | Tool | Query a Notion database |
| Save Memory | Airtable Tool | Persist distilled user facts |
| AI Agent | Agent | Orchestrates tools + memory, produces the answer |
| Send a text message | Telegram | Reply to the user in Telegram |

Before you start

- Create a Telegram bot and get your token (via @BotFather).
- Put your Telegram user ID into the Telegram Trigger node (chatIds).
- Connect credentials:
  - xAI Grok (model: grok-4-0709)
  - OpenAI (for audio transcription)
  - Airtable (Agent Memories base and table)
  - Google Calendar OAuth
  - Gmail OAuth
  - Notion API
  - SerpAPI key
- Adjust the Airtable "User" value and the filterByFormula to match your name or account.

Setup instructions

1) Telegram

- Telegram Trigger:
  - `additionalFields.chatIds` = your_telegram_id
  - `download = true` to allow voice handling
- Send a text message:
  - `chatId = {{ $('Telegram Trigger').item.json.message.chat.id }}`

2) Memory

- The Airtable base/table must exist with fields: Memory, User, Created (Created is auto-managed).
- In the Save Memory and Get memories nodes, align Base, Table, and filterByFormula with your setup.
- Simple Memory:
  - `sessionKey = {{ $('If').item.json.message.chat.id }}`
  - `contextWindowLength = 30` (adjust as needed)

3) Tools

- Google Calendar: choose your calendar; test get/create/update/delete.
- Gmail: set "returnAll/simplify/messageId" via $fromAI or static defaults.
- Notion: set your databaseId.
- SerpAPI: ensure the key is valid.

4) Agent (AI node)

- SystemMessage: customize role, name, and any constraints.
- Text input: concatenates transcript or text into one prompt: `{{ $json.text }}{{ $json.message.text }}`

How to use

Send a text or voice message to your bot in Telegram. The agent replies in the same chat, optionally performing tool actions. New personal facts you mention are silently summarized and stored in Airtable for future context.

Customization ideas

- Replace Grok with another LLM if desired.
- Add more tools: Google Drive, Slack, Jira, GitHub, etc.
- Expand the memory schema (e.g., tags, categories, confidence).
- Add guardrails: profanity filters, domain limits, or cost control.
- Multi-user support: store a chat-to-user mapping and separate memories by user.
- Add summaries: a daily recap message created from new memories.

Limits and notes

- Tool latency: calls to Calendar, Gmail, Notion, and SerpAPI add response time.
- Audio size/format: OpenAI transcription works best with common formats and short clips.
- Memory growth: periodically archive old Airtable entries, or change the Aggregate window.
- Timezone awareness: Calendar operations depend on your Google Calendar settings.

Privacy and safety

- Sensitive info may be saved to Airtable; restrict access to the base.
- Tool actions operate under your connected accounts; review scopes and permissions.
- The agent may call external APIs (SerpAPI, OpenAI); ensure this aligns with your policies.

Example interactions

- "Schedule a 30‑min catch‑up with Alex next Tuesday afternoon."
- "What meetings do I have in the next 4 weeks?"
- "Summarize my latest emails from Product Updates."
- "What did I say I'm reading?" (agent recalls from memories)
- Voice note: "Remind me to call the dentist this Friday morning." → agent transcribes and creates an event.

Tags

telegram, agent, memory, grok, openai, airtable, google-calendar, gmail, notion, serpapi, voice, automation

Changelog

- v1: First release with Telegram agent, short/long-term memory, voice transcription, and tool integrations for web, calendar, email, and Notion.
by Nguyen Thieu Toan
🤖 Facebook Messenger Smart Chatbot – Batch, Format & Notify with n8n Data Table

by Nguyen Thieu Toan

🌟 What Is This Workflow?

This is a smart chatbot solution built with n8n, designed to integrate seamlessly with Facebook Messenger. It batches incoming messages, formats them for clarity, tracks conversation history, and sends natural replies using AI. Perfect for businesses, customer support, or personal AI agents.

⚙️ Key Features

- 🔄 Smart batching: Groups consecutive user messages to process them in one go, avoiding fragmented replies.
- 🧠 Context formatting: Automatically formats messages to fit Messenger's structure and length limits.
- 📋 Conversation history tracking: Stores and retrieves chat logs between user and bot using an n8n Data Table.
- 👀 Seen & Typing effects: Adds human-like responsiveness with Messenger's sender actions.
- 🧩 AI Agent integration: Easily connects to GPT, Gemini, or any LLM for natural replies, scheduling, or business logic.

🚀 How It Works

1. Connects to your Facebook Page via webhook to receive and send messages.
2. Stores incoming messages in a Data Table called Batch_messages, including fields like user_text, bot_rep, processed, etc.
3. Collects unprocessed messages, sorts them by id, and creates a merged_message and full history (see the sketch at the end of this template).
4. Sends the history to an AI Agent for contextual response generation.
5. Sends the AI reply back to Messenger with Seen/Typing effects.
6. Updates the message status to processed = true to prevent duplicate handling.

🛠️ Setup Guide

1. Create a Facebook App and Messenger webhook, and link it to your Page.
2. Set up the Batch_messages Data Table in n8n with the required columns.
3. Import the workflow or build the nodes manually using the tutorial.
4. Configure your API tokens, webhook URLs, and AI Agent endpoint.
5. Deploy the workflow on a public n8n server.

📘 Full tutorial available at: 👉 Smart Chatbot Workflow Guide by Nguyen Thieu Toan

💡 Pro Tips

- Customize the AI prompt and persona to match your business tone.
- Add scheduling, lead capture, or CRM integration using n8n's flexible nodes.
- Monitor your Data Table regularly to ensure clean message flow and batching.

👤 About the Creator

Nguyen Thieu Toan (Nguyễn Thiệu Toàn/Jay Nguyen) is an expert in AI automation, business optimization, and chatbot development. With a background in marketing and deep knowledge of n8n workflows, Jay helps businesses harness AI to save time, boost performance, and deliver smarter customer experiences.

Website: https://nguyenthieutoan.com
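A minimal sketch of the batching step (step 3 above), assuming the Data Table lookup returns unprocessed rows shaped like the Batch_messages columns described here:

```javascript
// Batching sketch (Code node): merge unprocessed rows into one prompt string.
// Assumes each row has numeric `id` and string `user_text` fields, matching
// the Batch_messages table described above.
const rows = $input.all()
  .map(item => item.json)
  .sort((a, b) => a.id - b.id); // oldest first

// Merge consecutive user messages into one prompt-friendly string.
const merged_message = rows.map(r => r.user_text).join('\n');

return [{ json: { merged_message, count: rows.length } }];
```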
by Laiba
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

- **User Uploads PDF**: The workflow accepts a PDF via webhook.
- **Extract Text**: n8n extracts the text content from the PDF.
- **Summarize with AI**: The extracted text is passed to an AI model via Groq (using an OpenAI-compatible model node) for summarization.
- **Generate Audio**: The summary text is sent to a TTS (Text-to-Speech) API (Qwen-TTS-Demo); you can use other free alternatives.
- **Serve Result**: The workflow outputs both the Summary and an Audio File URL (WAV link), which you can attach to your audio player (an illustrative response payload appears at the end of this template). This allows users to read or listen to the summary instantly.

How to use / Requirements

- **Import Workflow**: Copy/paste the workflow JSON into your n8n instance.
- **Set Up Input Trigger**: If you want users to upload directly, you can use a webhook or any other trigger.
- **Configure AI Node**: Add your own API key for Groq / OpenAI.
- **Configure TTS Node**: Add credentials for your chosen TTS service.
- **Run Workflow**: Upload a PDF and get back the summary and audio file URL.

n8n smart PDF summarizer & voice generator

Please reach out to me at Laiba Zubair if you need further assistance with your n8n workflows and automations!
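For reference, the returned payload could look something like this; both field names and values are purely illustrative, not a guaranteed schema:

```json
{
  "summary": "A short, readable summary of the uploaded PDF...",
  "audio_url": "https://example.com/tts/output-abc123.wav"
}
```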
by Vitorio Magalhães
🎯 What this workflow does

This workflow automatically monitors Reddit subreddits for new image posts and downloads them to Google Drive. It's perfect for content creators, meme collectors, or anyone who wants to automatically archive images from their favorite subreddits without manual work. The workflow intelligently prevents duplicate downloads by checking existing files in Google Drive and sends you Telegram notifications about the download status, so you always know when new content has been saved.

🚀 Key Features

- **Multi-subreddit monitoring**: Configure multiple subreddits to monitor simultaneously
- **Smart duplicate detection**: Never downloads the same image twice
- **Automated scheduling**: Runs on a customizable cron schedule
- **Real-time notifications**: Get instant Telegram updates about download activity
- **Rate limit friendly**: Built-in delays to respect Reddit's API limits
- **Cloud storage integration**: Direct upload to organized Google Drive folders

📋 Prerequisites

Before using this workflow, you'll need:

- **Reddit Developer Account**: Create an app at reddit.com/prefs/apps
- **Google Cloud Project**: With Drive API enabled and OAuth2 credentials
- **Telegram Bot**: Created via @BotFather with your chat ID
- **Basic n8n knowledge**: Understanding of credentials and node configuration

⚙️ Setup Instructions

1. Configure Reddit API Access
   - Visit reddit.com/prefs/apps and create a new "script" type application
   - Note your Client ID and Client Secret
   - Add Reddit OAuth2 credentials in n8n

2. Set up Google Drive Integration
   - Enable Google Drive API in Google Cloud Console
   - Create OAuth2 credentials with appropriate scopes
   - Configure Google Drive OAuth2 credentials in n8n
   - Update the folder ID in the workflow to your desired destination

3. Configure Telegram Notifications
   - Create a bot via @BotFather on Telegram
   - Get your chat ID (message @userinfobot)
   - Add Telegram API credentials in n8n
4. Customize Your Settings
   - Update the Settings node with:
     - Your Telegram chat ID
     - List of subreddits to monitor (e.g., ['memes', 'funny', 'pics']); see the sketch at the end of this template
     - Optional: Adjust wait time between requests
     - Optional: Modify the cron schedule

🔄 How it works

1. Scheduled Trigger: The workflow starts automatically based on your cron configuration
2. Random Selection: Picks a random subreddit from your configured list
3. Fetch Posts: Retrieves the latest 30 posts from the subreddit's "new" section
4. Image Filtering: Keeps only posts with i.redd.it image URLs
5. Duplicate Check: Searches Google Drive to avoid re-downloading existing images
6. Download & Upload: Downloads new images and uploads them to your Drive folder
7. Notification: Sends a Telegram message with the download summary

🛠️ Customization Options

Scheduling
- Modify the cron trigger to run hourly, daily, or at custom intervals
- Add timezone considerations for your location

Content Filtering
- Add upvote threshold filters to get only popular content
- Filter by image dimensions or file size
- Implement NSFW content filtering

Storage & Organization
- Create subfolders by subreddit
- Add date-based folder organization
- Implement file naming conventions

Notifications & Monitoring
- Add Discord webhook notifications
- Create download statistics tracking
- Log failed downloads for debugging

📊 Use Cases

- **Content Creators**: Automatically collect memes and trending images for social media
- **Digital Marketers**: Monitor visual trends across different communities
- **Researchers**: Archive visual content from specific subreddits for analysis
- **Personal Use**: Build a curated collection of images from your favorite subreddits

🎯 Best Practices

- **Respect Rate Limits**: Keep the wait time between requests to avoid being blocked
- **Monitor Storage**: Regularly check Google Drive storage usage
- **Subreddit Selection**: Choose active subreddits with regular image posts
- **Credential Security**: Use n8n's credential system and never hardcode API keys

🚨 Important Notes

- This workflow only downloads images from i.redd.it (Reddit's image host)
- Some subreddits may have bot restrictions
- Reddit's API has rate limits (~60 requests per minute)
- Ensure your Google Drive has sufficient storage space
- Always comply with Reddit's Terms of Service and content policies
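A minimal sketch of what the Settings node can hold, written as a Code-node variant; the field names are illustrative and must match whatever your downstream nodes read:

```javascript
// Illustrative Settings values. Adjust names to match your own Settings node.
return [{
  json: {
    telegramChatId: '123456789',            // your chat ID from @userinfobot
    subreddits: ['memes', 'funny', 'pics'], // subreddits to monitor
    waitSeconds: 5,                         // delay between Reddit requests
  },
}];

// Downstream, random selection could look like:
// const pick = $json.subreddits[Math.floor(Math.random() * $json.subreddits.length)];
```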
by Daniel Lianes
Auto-scrape Twitter accounts to WhatsApp groups

This workflow provides automated access to real-time Twitter/X content through intelligent scraping and AI processing. It keeps you at the cutting edge of breaking news, emerging trends, and industry developments by eliminating the need to manually check multiple social media accounts and delivering curated updates directly to your communication channels.

Overview

This workflow automatically handles the complete Twitter monitoring process using advanced scraping techniques and AI analysis. It manages API authentication, multi-source data collection, intelligent content filtering, and message delivery with built-in error handling and rate limiting for reliable automation.

Core Function: Real-time social media monitoring that transforms Twitter noise into actionable intelligence, ensuring you're always first to know about the latest trends, product launches, and industry shifts that shape your field.

Key Capabilities

- **Real-time trend detection**: Catch breaking news and emerging topics as they happen on X/Twitter
- **Multi-source Twitter monitoring**: Track specific accounts AND trending keyword searches simultaneously
- **AI-powered trend analysis**: Gemini 2.5 Pro filters noise and surfaces only the latest developments that matter
- **Stay ahead of the curve**: Identify emerging technologies, viral discussions, and industry shifts before they go mainstream
- **Flexible delivery options**: Pre-configured for WhatsApp, but easily adaptable for Telegram, Slack, Discord, or even blog content generation
- **Rate limit protection**: Built-in delays and error handling using TwitterAPI.io's reliable, cost-effective infrastructure

Tools Used

- **n8n**: The automation platform orchestrating the entire workflow
- **TwitterAPI.io**: Reliable access to Twitter/X data without API complexities
- **OpenRouter**: Gateway to advanced AI models for content processing
- **Gemini 2.5 Pro**: Google's latest AI for intelligent content analysis and formatting
- **Evolution API**: WhatsApp Business API integration for message delivery
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

How to Install

IMPORTANT: Before importing this workflow, you need to install the Evolution API community node:

1. Install the Community Node First: Go to Settings > Community Nodes in your n8n instance
2. Add Evolution API: Install the n8n-nodes-evolution-api package
3. Restart n8n: Allow the new nodes to load properly
4. Import the Workflow: Download the .json file and import it into your n8n instance
5. Configure Twitter Access: Set up TwitterAPI.io credentials and add target accounts/keywords
6. Set Up AI Processing: Add your OpenRouter API key for Gemini 2.5 Pro access
7. Configure WhatsApp: Set up Evolution API and add your target group ID
8. Test & Deploy: Run a test execution and schedule for daily operation

Use Cases

- **Stay Ahead of Breaking News**: Be the first to know about industry announcements, product launches, and major developments the moment they hit X/Twitter
- **Spot Trends Before They Explode**: Identify emerging technologies, viral topics, and shifting conversations while they're still building momentum
- **Competitive Intelligence**: Monitor what industry leaders, competitors, and influencers are discussing in real-time
- **Brand Surveillance**: Track mentions, discussions, and sentiment around your brand as conversations develop
- **Content Creation Pipeline**: Gather trending topics, viral discussions, and timely content ideas for blogs, newsletters, or social media strategy
- **Market Research**: Collect real-time social sentiment and emerging market signals from X/Twitter conversations
- **Multi-platform Distribution**: While configured for WhatsApp, the structured output can easily feed Telegram bots, Slack channels, Discord servers, or automated blog generation systems

FIND YOUR WHATSAPP GROUPS

The workflow includes a helper node to easily find your WhatsApp group IDs:

1. Use the Fetch Groups node: The workflow includes a dedicated node that fetches all your available WhatsApp groups (see the sketch at the end of this template)
2. Run the helper: Execute just that node to see a list of all groups with their IDs
3. Copy the group ID: Find your target group in the list and copy its ID
4. Update the delivery node: Paste the group ID into the final WhatsApp sending node

Group ID format: Always ends with @g.us (example: 120363419788967600@g.us)

Pro tip: Test with a small private group first before deploying to your main team channels.

Connect with Me

- **LinkedIn**: https://www.linkedin.com/in/daniel-lianes/
- **Discovery Call**: https://cal.com/averis/asesoria
- **Consulting Session**: https://cal.com/averis/consultoria-personalizada

Was this helpful? Let me know! I truly hope this was helpful. Your feedback is very valuable and helps me create better resources.

Want to take automation to the next level? If you're looking to optimize your business processes or need expert help with a project, here's how I can assist you:

- Advisory (Discovery Call): Do you have a process in your business that you'd like to automate but don't know where to start? In this initial call, we'll explore your needs and see if automation is the ideal solution for you. Schedule a Discovery Call
- Personalized Consulting (Paid Session): If you already have a specific problem, an integration challenge, or need hands-on help building a custom workflow, this session is for you. Together, we'll find a powerful solution for your case. Book Your Consulting Session

Stay Up to Date

For more tricks, ideas, and news about automation and AI, let's connect on LinkedIn! Follow me on LinkedIn

#n8n #automation #twitter #whatsapp #ai #socialmedia #monitoring #intelligence #gemini #scraping #workflow #nocode #businessautomation #socialmonitoring #contentcuration #teamcommunication #brandmonitoring #trendanalysis #marketresearch #productivity
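As a companion to the Fetch Groups helper above, a small Code node can flatten the response into name/ID pairs. This sketch assumes each group object exposes `id` and `subject` fields; verify the actual field names against your Evolution API version before relying on them.

```javascript
// Helper sketch: list group names and IDs from the Fetch Groups node output.
// Field names `id` and `subject` are assumptions; check your API response.
return $input.all().map(item => ({
  json: {
    name: item.json.subject,
    groupId: item.json.id, // should end with @g.us, e.g. 120363419788967600@g.us
  },
}));
```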
by Khair Ahammed
Meet Troy, your intelligent personal assistant that seamlessly manages your Google Calendar and Tasks through Telegram. This workflow combines AI-powered natural language processing with MCP (Model Context Protocol) integration to provide a conversational interface for scheduling meetings, managing tasks, and organizing your digital life.

Key Features

📅 Smart Calendar Management
- Create single and recurring events with conflict detection
- Support for multiple attendees (1-2 attendee variants)
- Automatic time zone handling (Bangladesh Standard Time)
- Weekly recurring event scheduling
- Event retrieval, updates, and deletion

✅ Task Management
- Create, update, and delete tasks in Google Tasks
- Mark tasks as completed
- Retrieve task lists with completion status
- Task repositioning and organization
- Parent-child task relationships

🤖 Intelligent Processing
- Natural language understanding for scheduling requests
- Automatic conflict detection before event creation
- Context-aware responses with conversation memory
- Error handling with fallback messages

📱 Telegram Interface
- Real-time chat interaction
- Simple commands and natural language
- Instant confirmations and updates
- Error notifications

Workflow Components

Core Architecture:
- Telegram Trigger for user messages
- AI Agent with GPT-4o-mini processing
- MCP Client Tools for Google services
- Conversation memory for context
- Error handling with backup responses

MCP Integrations:
- Google Calendar MCP Server (6 specialized tools)
- Google Tasks MCP Server (5 task operations)
- Custom HTTP tool for advanced task positioning

Use Cases

Calendar Scenarios:
- "Schedule a meeting tomorrow at 3 PM with john@example.com"
- "Set up weekly team standup every Monday at 10 AM"
- "Check my calendar for conflicts this afternoon"
- "Delete the meeting with ID xyz123"

Task Management:
- "Add a task to buy groceries"
- "Mark the project report task as completed"
- "Update my presentation task due date to Friday"
- "Show me all pending tasks"

Setup Requirements

Required Credentials:
- Google Calendar OAuth2
- Google Tasks OAuth2
- OpenAI API key
- Telegram Bot token

MCP Configuration:
- Two MCP server endpoints for Google services
- Proper webhook configurations
- SSL-enabled n8n instance for MCP triggers

Business Benefits

- **Productivity**: Voice-to-action task and calendar management
- **Efficiency**: Eliminate app switching with a chat interface
- **Intelligence**: AI prevents scheduling conflicts automatically
- **Accessibility**: Simple Telegram commands for complex operations

Technical Specifications

Components:
- 1 Telegram trigger
- 1 AI Agent with memory
- 2 MCP triggers (Calendar & Tasks)
- 13 Google service tools
- Error handling flows

- **Response Time**: Sub-second for most operations
- **Memory**: Session-based conversation context
- **Timezone**: Automatic Bangladesh Standard Time conversion

This personal assistant transforms how you interact with Google services, making scheduling and task management as simple as sending a text message to Troy on Telegram.

Tags: personal-assistant, mcp-integration, google-calendar, google-tasks, telegram-bot, ai-agent, productivity
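As an illustration of the Bangladesh Standard Time handling, a Code node could normalize a requested slot like this. The `requestedStart` field is a placeholder name, and the snippet relies on Luxon's `DateTime`, which n8n exposes in Code nodes.

```javascript
// Illustrative timezone normalization for Asia/Dhaka (Bangladesh Standard Time).
// `requestedStart` is a placeholder; adapt to the values your agent extracts.
const requested = DateTime.fromISO($json.requestedStart ?? '2025-01-06T15:00:00');

// Normalize to Asia/Dhaka before handing the value to the Calendar tool.
const startBst = requested.setZone('Asia/Dhaka');
const endBst = startBst.plus({ minutes: 30 });

return [{ json: { start: startBst.toISO(), end: endBst.toISO() } }];
```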
by Hugo
🤖 n8n AI Workflow Dashboard Template

Overview

This template is designed to collect execution data from your AI workflows and generate an interactive dashboard for easy monitoring. It's compatible with any AI Agent or RAG workflow in n8n.

Main Objectives

💾 Collect Execution Data
- Track messages, tokens used (prompt/completion), session IDs, model names, and compute costs
- Designed to plug into any AI agent or RAG workflow in n8n

📊 Generate an Interactive Dashboard
- Visualize KPIs like total messages, unique sessions, tokens used, and costs
- Display daily charts, including stacked bars for prompt vs completion tokens
- Monitor AI activity, analyze usage, and track costs at a glance

✨ Key Features

💬 Conversation Data Collection

Messages sent to the AI agent are recorded with:
- sessionId
- chatInput
- output
- promptTokens, completionTokens, totalTokens
- globalCost and modelName

This allows detailed tracking of AI interactions across sessions.

💰 Model Pricing Management
- A sub-workflow with a Set node provides token prices for LLMs
- Data is stored in the Model price table for cost calculations (see the sketch at the end of this template)

🗄️ Data Storage via n8n Data Tables

Two tables need to be created:

Model price

```json
{
  "id": 20,
  "createdAt": "2025-10-11T12:16:47.338Z",
  "updatedAt": "2025-10-11T12:16:47.338Z",
  "name": "claude-4.5-sonnet",
  "promptTokensPrice": 0.000003,
  "completionTokensPrice": 0.000015
}
```

Messages

```json
[
  {
    "id": 20,
    "createdAt": "2025-10-11T15:28:00.358Z",
    "updatedAt": "2025-10-11T15:31:28.112Z",
    "sessionId": "c297cdd4-7026-43f8-b409-11eb943a2518",
    "action": "sendMessage",
    "output": "Hey! \nHow's it going?",
    "chatInput": "yo",
    "completionTokens": 6,
    "promptTokens": 139,
    "totalTokens": 139,
    "globalCost": null,
    "modelName": "gpt-4.1-mini",
    "executionId": 245
  }
]
```

These tables store conversation data and pricing info to feed the dashboard and calculations.

📈 Interactive Dashboard

- **KPIs Generated**: total messages, unique sessions, total/average tokens, total/average cost
- 💸 **Charts Included**: daily messages, tokens used per day (prompt vs completion, stacked bar)
- Provides a visual summary of AI workflow performance

⚙️ Installation & Setup

Follow these steps to set up and run the workflow in n8n:

1. Import the Workflow
   - Download or copy the JSON workflow and import it into n8n.
2. Create the Data Tables
   - **Model price table**: stores token prices per model
   - **Messages table**: stores messages generated by the AI agent
3. Configure the Webhook
   - The workflow is triggered via a webhook
   - Use the webhook URL to send conversation data
4. Set Up the Pricing Sub-workflow
   - Automatically generates price data for the models used
   - Connect it to your main workflow to enrich cost calculations
5. Dashboard Visualization
   - The workflow returns HTML code rendering the dashboard
   - View it in a browser or embed it in your interface

🌐 Once configured, your workflow tracks AI usage and costs in real-time, providing a live dashboard for quick insights.

🔧 Adaptability

- The template is modular and can be adapted to any AI agent or RAG workflow
- KPIs, charts, colors, and metrics can be customized in the HTML rendering
- Ideal for monitoring, cost tracking, and reporting AI workflow performance
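As a reference for how `globalCost` can be computed from the two tables, here is a minimal Code-node sketch. It assumes each item already carries the message's token counts together with the matching Model price row, which may differ from the template's exact wiring.

```javascript
// Minimal cost-calculation sketch. Assumes each item carries the message's
// token counts plus the matching row from the Model price table.
return $input.all().map(item => {
  const { promptTokens, completionTokens, promptTokensPrice, completionTokensPrice } = item.json;

  // Cost per message: tokens × per-token price for each side of the exchange.
  const globalCost =
    promptTokens * promptTokensPrice +
    completionTokens * completionTokensPrice;

  return { json: { ...item.json, globalCost } };
});
```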
by Nikan Noorafkan
🧠 Google Ads Monthly Performance Optimization (Channable + Google Ads + Relevance AI)

🚀 Overview

This workflow automatically analyzes your Google Ads performance every month, identifies top-performing themes and categories, and regenerates optimized ad copy using Relevance AI, powered by insights from your Channable product feed. It then saves the improved ads to Google Sheets for review and sends a detailed performance report to your Slack workspace.

Ideal for marketing teams who want to automate ad optimization at scale with zero manual intervention.

🔗 Integrations Used

- **Google Ads** → Fetch campaign and ad performance metrics using GAQL.
- **Relevance AI** → Analyze performance data and regenerate ad copy using AI agents and tools.
- **Channable** → Pull updated product feeds for ad refresh cycles.
- **Google Sheets** → Save optimized ad copy for review and documentation.
- **Slack** → Send a 30-day performance report to your marketing team.

🧩 Workflow Summary

| Step | Node | Description |
| ---- | ---- | ----------- |
| 1 | Monthly Schedule Trigger | Runs automatically on the 1st of each month to review the last 30 days of data. |
| 2 | Get Google Ads Performance Data | Fetches ad metrics via a GAQL query (impressions, clicks, CTR, etc.). |
| 3 | Calculate Performance Metrics | Groups results by ad group and theme to find top/bottom performers. |
| 4 | AI Performance Analysis (Relevance AI) | Generates human-readable insights and improvement suggestions. |
| 5 | Update Knowledge Base (Relevance AI) | Saves new insights for future ad copy training. |
| 6 | Get Updated Product Feed (Channable) | Retrieves the latest catalog items for ad regeneration. |
| 7 | Split Into Batches | Splits the feed into groups of 50 to avoid API rate limits. |
| 8 | Regenerate Ad Copy with Insights (Relevance AI) | Rewrites ad copy with the latest product and performance data. |
| 9 | Save Optimized Ads to Sheets | Writes output to your "Optimized Ads" Google Sheet. |
| 10 | Generate Performance Report | Summarizes the AI analysis, CTR trends, and key insights. |
| 11 | Email Performance Report (Slack) | Sends the report directly to your Slack channel/team. |

🧰 Requirements

Before running the workflow, make sure you have:

- A Google Ads account with API access and OAuth2 credentials.
- A Relevance AI project (with one Agent and one Tool set up).
- A Channable account with API key and project feed.
- A Google Sheets document for saving results.
- A Slack webhook URL for sending performance summaries.

⚙️ Environment Variables

Add these environment variables to your n8n instance (via .env or UI):

| Variable | Description |
| -------- | ----------- |
| GOOGLE_ADS_API_VERSION | API version (e.g., v17). |
| GOOGLE_ADS_CUSTOMER_ID | Your Google Ads customer ID. |
| RELEVANCE_AI_API_URL | Base Relevance AI API URL (e.g., https://api.relevanceai.com/v1). |
| RELEVANCE_AGENT_PERFORMANCE_ID | ID of your Relevance AI Agent for performance analysis. |
| RELEVANCE_KNOWLEDGE_SOURCE_ID | Knowledge base or dataset ID used to store insights. |
| RELEVANCE_TOOL_AD_COPY_ID | Relevance AI tool ID for generating ad copy. |
| CHANNABLE_API_URL | Channable API endpoint (e.g., https://api.channable.com/v1). |
| CHANNABLE_COMPANY_ID | Your Channable company ID. |
| CHANNABLE_PROJECT_ID | Your Channable project ID. |
| FEED_ID | The feed ID for product data. |
| GOOGLE_SHEET_ID | ID of your Google Sheet to store optimized ads. |
| SLACK_WEBHOOK_URL | Slack Incoming Webhook URL for sending reports. |

🔐 Credentials Setup in n8n

| Credential | Type | Usage |
| ---------- | ---- | ----- |
| Google Ads OAuth2 API | OAuth2 | Authenticates your Ads API queries. |
| HTTP Header Auth (Relevance AI & Channable) | Header | Uses your API key as Authorization: Bearer <key>. |
| Google Sheets OAuth2 API | OAuth2 | Writes optimized ads to Sheets. |
| Slack Webhook | Webhook | Sends monthly reports to your team channel. |

🧠 Example AI Insight Output

```json
{
  "insights": [
    "Ad groups using 'vegan' and 'organic' messaging achieved +23% CTR.",
    "'Budget' keyword ads underperformed (-15% CTR).",
    "Campaigns featuring 'new' or 'bestseller' tags showed higher conversion rates."
  ],
  "recommendations": [
    "Increase ad spend for top-performing 'vegan' and 'premium' categories.",
    "Revise copy for 'budget' and 'sale' ads with low CTR."
  ]
}
```

📊 Output Example (Google Sheet)

| Product | Category | Old Headline | New Headline | CTR Change | Theme |
| ------- | -------- | ------------ | ------------ | ---------- | ----- |
| Organic Protein Bar | Snacks | "Healthy Energy Anytime" | "Organic Protein Bar — 100% Natural Fuel" | +12% | Organic |
| Eco Face Cream | Skincare | "Gentle Hydration" | "Vegan Face Cream — Clean, Natural Moisture" | +17% | Vegan |

📤 Automation Flow

1. Run automatically on the first of every month (cron: 0 0 1 * *).
2. Fetch Ads Data → Analyze & Learn → Generate New Ads → Save & Notify.
3. Every iteration updates the AI's knowledge base, improving your campaigns progressively.

⚡ Scalability

- The flow is batch-optimized (50 items per request).
- Works for large ad accounts with up to 10,000 ad records.
- AI analysis & regeneration steps are asynchronous-safe (timeouts extended).
- Perfect for agencies managing multiple ad accounts: simply duplicate and update the environment variables per client.

🧩 Best Use Cases

- Monthly ad creative optimization for eCommerce stores.
- Marketing automation for Google Ads campaign scaling.
- Continuous-learning ad systems powered by Relevance AI insights.
- Agencies automating ad copy refresh cycles across clients.

💬 Slack Report Example

30-Day Performance Optimization Report
Date: 2025-10-01
Analysis Period: Last 30 days
Ads Analyzed: 842

Top Performing Themes
Vegan: 5.2% CTR (34 ads)
Premium: 4.9% CTR (28 ads)

Underperforming Themes
Budget: 1.8% CTR (12 ads)

AI Insights
"Vegan" and "Premium" themes outperform baseline by +22% CTR.
"Budget" ads underperform due to lack of value framing.

Next Optimization Cycle: 2025-11-01

🛠️ Maintenance Tips

- Update your GAQL query occasionally to include new metrics or segments (a starter sketch follows at the end of this template).
- Refresh Relevance AI tokens every 90 days (if required).
- Review generated ads in Google Sheets before pushing them live.
- Test webhook and OAuth connections after major n8n updates.

🧩 Import Instructions

1. Open n8n → Workflows → Import from File / JSON.
2. Paste this workflow JSON or upload it.
3. Add all required environment variables and credentials.
4. Execute the first run manually to validate connections.
5. Once verified, enable scheduling for automatic monthly runs.

🧾 Credits

Developed for AI-driven marketing teams leveraging Google Ads, Channable, and Relevance AI to achieve continuous ad improvement, fully automated via n8n.
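For step 2 of the workflow summary, the GAQL query could look roughly like the following, shown as it might be prepared in a Code node. The selected fields, resource, and date range are examples to adapt, not the workflow's exact query.

```javascript
// Illustrative GAQL query for the performance fetch (step 2).
// Fields and date range are examples; adjust to the metrics your report needs.
const query = `
  SELECT
    ad_group.name,
    ad_group_ad.ad.id,
    metrics.impressions,
    metrics.clicks,
    metrics.ctr
  FROM ad_group_ad
  WHERE segments.date DURING LAST_30_DAYS
`;

// Body for the Google Ads searchStream endpoint, e.g.
// https://googleads.googleapis.com/{GOOGLE_ADS_API_VERSION}/customers/{GOOGLE_ADS_CUSTOMER_ID}/googleAds:searchStream
return [{ json: { query } }];
```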
by Sandeep Patharkar | ai-solutions.agency
Build an AI HR Assistant to Screen Resumes and Send Telegram Alerts

A step-by-step guide to creating a fully automated recruitment pipeline that screens candidates, generates interview questions, and notifies your team.

This template provides a complete, step-by-step guide to building an AI-powered HR assistant from scratch in n8n. You will learn how to connect a web form to an intelligent screening agent that reads resumes, evaluates candidates against your job criteria, and prepares unique interview questions for the most promising applicants.

| Services Used | Features |
| :--- | :--- |
| 🤖 OpenAI / LangChain | Uses AI Agents to screen, score, and analyze candidates. |
| 📄 Google Drive & Google Sheets | Stores resumes and manages a database of open positions and applicants. |
| 📥 n8n Form Trigger | Provides a public-facing web form to capture applications. |
| 💬 Telegram | Sends real-time alerts to the hiring team for qualified candidates. |

How It Works ⚙️

1. 📥 Application Submitted: The workflow starts when a candidate fills out the n8n Form Trigger with their details and uploads their CV.
2. 📂 File Processing: The CV is automatically uploaded to a specific Google Drive folder for record-keeping, and the Extract from File node reads its text content.
3. 🧠 AI Screening Agent: A LangChain Agent analyzes the resume text. It uses the Google Sheets Tool to look up the requirements for the applied role, then scores the candidate and decides if they should be shortlisted.
4. 📊 Log Results: The agent's decision (name, score, shortlisted status) is logged in your master "Applications" Google Sheet.
5. ✅ Qualification Check: An IF node checks if the candidate was shortlisted.
6. ❓ AI Question Generator: If shortlisted, a second LangChain Agent generates three unique, relevant interview questions based on the candidate's resume and the job description.
7. ✍️ Update Sheet: The generated questions are added to the candidate's row in the Google Sheet.
8. 🔔 Notify Team: A final alert is sent via Telegram to notify the HR team that a new candidate has been qualified and is ready for review.

🛠️ How to Build This Workflow

Follow these steps to build the recruitment assistant from a blank canvas.

Step 1: Set Up the Application Intake

1. Add a Form Trigger node. Configure it with fields for Name, Email, Phone Number, a File Upload for the CV, and a Dropdown for the "Job Role".
2. Connect a Google Drive node. Set the Operation to Upload and connect your credentials. Set it to upload the CV file from the Form Trigger into a specific folder.
3. Add an Extract from File node. Set it to extract text from the PDF CV file provided by the trigger.

Step 2: Build the AI Screening Agent

1. Add a LangChain Agent node. This will be your main screening agent.
2. In its prompt, instruct the AI to act as a resume screener. Tell it to use the input text from the Extract from File node and the tools you will provide to score and shortlist candidates.
3. Add an OpenAI Chat Model node and connect it to the Agent's Language Model input.
4. Add a Google Sheets Tool node. Point it to a sheet with your open positions and their requirements. Connect this to the Agent's Tool input.
5. Add a Structured Output Parser node and define the JSON structure you want the agent to return (e.g., candidate_name, score, shortlisted); see the sketch at the end of this template. Connect this to the Agent's Output Parser input.

Step 3: Log Results & Check for a Match
1. Connect a Google Sheets node after the Agent. Set its operation to Append or Update. Use it to add the structured output from the agent into your main "Applications" sheet.
2. Add an IF node. Set the condition to continue only if the shortlisted field equals "yes".

Step 4: Generate Interview Questions

1. On the 'true' path of the IF node, add a second LangChain Agent node.
2. Write a prompt telling this agent to generate 3 interview questions based on the candidate's resume and the job requirements.
3. Connect the same OpenAI Model and Google Sheets Tool to this agent.
4. Add another Google Sheets node. Set it to Update the existing row for the candidate, adding the newly generated questions.

💬 Need Help or Want to Learn More?

Join my Skool community for n8n + AI automation tutorials, live Q&A sessions, and exclusive workflows:
👉 https://www.skool.com/n8n-ai-automation-champions

Template Author: Sandeep Patharkar
Category: Website Chatbots / AI Automation
Difficulty: Beginner
Estimated Setup Time: ⏱️ 15 minutes
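For the Structured Output Parser in Step 2, a JSON example along these lines can define the expected shape; the field names come from this guide, while the values are illustrative:

```json
{
  "candidate_name": "Jane Doe",
  "score": 82,
  "shortlisted": "yes"
}
```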
by Jainik Sheth
What is this?

This RAG workflow allows you to build a smart chat assistant that can answer user questions based on any collection of documents you provide. It automatically imports and processes files from Google Drive, stores their content in a searchable vector database, and retrieves the most relevant information to generate accurate, context-driven responses. The workflow manages chat sessions and keeps the document database current, making it adaptable for use cases like customer support, internal knowledge bases, or an HR assistant.

How it works

1. Chat RAG Agent
   - Uses OpenAI for responses, referencing only specific data from the vector store (data that is uploaded to the Google Drive folder).
   - Maintains chat history in Postgres using a session key from the chat input.

2. Data Pipeline (File Ingestion)
   - Monitors Google Drive for new/updated files and automatically updates them in the vector store.
   - Downloads, extracts, and processes file content (PDFs, Google Docs).
   - Generates embeddings and stores them in the Supabase vector store for retrieval.

3. Vector Store Cleanup
   - Scheduled and manual routines to remove duplicate or outdated entries from the Supabase vector store.
   - Ensures only the latest and unique documents are available for retrieval.

4. File Management
   - Handles folder and file creation, upload, and metadata assignment in Google Drive.
   - Ensures files are organized and linked with their corresponding vector store entries.

Getting Started

1. Create and connect all relevant credentials:
   - Google Drive
   - Postgres
   - Supabase
   - OpenAI
2. Run the table creation nodes first to set up your database tables in Postgres (a sketch follows at the end of this template).
3. Upload your documents through Google Drive (or swap it out for a different file storage solution).
4. The agent will process them automatically (chunking text, storing tabular data in Postgres).
5. Start asking questions that leverage the agent's multiple reasoning approaches.

Customization (optional)

This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

Note: if you're using different nodes, e.g. for file storage or the vector store, the integration may vary a little.

Prerequisites

- Google account (Google Drive)
- Supabase account
- OpenAI API access
- Postgres database
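As a rough illustration of the Postgres setup mentioned in Getting Started, the chat-history table could look like the following. This mirrors the shape n8n's Postgres chat memory commonly uses, but verify it against the template's own table-creation nodes before running.

```sql
-- Illustrative chat-history table; the name and columns are assumptions
-- based on common n8n Postgres chat memory setups.
CREATE TABLE IF NOT EXISTS n8n_chat_histories (
  id SERIAL PRIMARY KEY,
  session_id VARCHAR(255) NOT NULL,
  message JSONB NOT NULL
);
```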
by Rapiwa
Who is this for?

This workflow is designed for online store owners, customer-success teams, and marketing operators who want to automatically verify customers' WhatsApp numbers and deliver order updates or invoice links via WhatsApp. It is built around the WooCommerce Trigger (order.updated) but is easily adaptable to Shopify or other platforms that provide billing and line_items in the trigger payload.

What this Workflow Does / Key Features

- Listens for WooCommerce order events (example: order.updated) via a Webhook or a WooCommerce trigger.
- Filters only orders with status "completed" and maps the payload into a normalized object: `{ data: { customer, products, invoice_link } }` using the Code node "Order Completed check".
- Iterates over line items using SplitInBatches to control throughput.
- Cleans phone numbers (Clean WhatsApp Number code node) by removing all non-digit characters (see the sketch at the end of this template).
- Verifies whether the cleaned phone number is registered on WhatsApp using Rapiwa's verify endpoint (POST https://app.rapiwa.com/api/verify-whatsapp).
- If verified, sends a templated WhatsApp message via Rapiwa (POST https://app.rapiwa.com/api/send-message).
- Appends an audit row to a "Verified & Sent" Google Sheet for successful sends, or to an "Unverified & Not Sent" sheet for unverified numbers.
- Uses Wait and batching to throttle requests and avoid API rate limits.

Requirements

- HTTP Bearer credential for Rapiwa (example name in flow: Rapiwa Bearer Auth).
- WooCommerce API credential for the trigger (example: WooCommerce (get customer)).
- Running n8n instance with nodes: WooCommerce Trigger, Code, SplitInBatches, HTTP Request, IF, Google Sheets, Wait.
- Rapiwa account and a valid Bearer token.
- Google account with Sheets access and OAuth2 credentials configured in n8n.
- WooCommerce store (or any trigger source) that provides billing and line_items in the payload.

How to Use: Step-by-step Setup

1) Credentials
   - Rapiwa: Create an HTTP Bearer credential in n8n and paste your token (flow example name: Rapiwa Bearer Auth).
   - Google Sheets: Add an OAuth2 credential (flow example name: Google Sheets).
   - WooCommerce: Add the WooCommerce API credential or configure a Webhook on your store.

2) Configure Google Sheets
   - The exported flow uses spreadsheet ID 1S3RtGt5xxxxxxxXmQi_s (Sheet gid=0) as an example. Replace it with your spreadsheet ID and sheet gid.
   - Ensure your sheet column headers exactly match the mapping keys listed below (case and trailing spaces must match or be corrected in the mapping).

3) Verify HTTP Request nodes
   - Verify endpoint: POST https://app.rapiwa.com/api/verify-whatsapp, which sends { number } (uses the HTTP Bearer credential).
   - Send endpoint: POST https://app.rapiwa.com/api/send-message, which sends number, message_type=text, and a templated message that uses fields from the Clean WhatsApp Number output.

Google Sheet Column Structure

The Google Sheets nodes in the flow append rows with these column keys.
Make sure the spreadsheet headers match. A Google Sheet formatted like this ➤ sample

| Name | Number | Email | Address | Product Title | Product ID | Total Price | Invoice Link | Delivery Status | Validity | Status |
|------|--------|-------|---------|---------------|------------|-------------|--------------|-----------------|----------|--------|
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | verified | sent |
| Abdul Mannan | 8801322827799 | contact@spagreen.net | mirpur dohs h#1168 rd#10 av#10 mirpur dohs dhaka | Air force 1 Fossil 1:1 - 44 | 238 | BDT 5500.00 | Invoice link | completed | unverified | not sent |

Important Notes

- Do not hard-code API keys or tokens; always use n8n credentials.
- Google Sheets column header names must match the mapping keys used in the nodes. Trailing spaces are a common accidental problem; trim them in the spreadsheet or adjust the mapping.
- The IF node in the exported flow compares to the string "true". If the verify endpoint returns boolean true/false, convert to string or boolean consistently before the IF.
- Message templates in the flow reference `$('Clean WhatsApp Number').item.json.data.products[0]`; update the templates if you need multiple-product support.

Useful Links

- **Dashboard**: https://app.rapiwa.com
- **Official Website**: https://rapiwa.com
- **Documentation**: https://docs.rapiwa.com

Support & Help

- **WhatsApp**: Chat on WhatsApp
- **Discord**: SpaGreen Community
- **Facebook Group**: SpaGreen Support
- **Website**: https://spagreen.net
- **Developer Portfolio**: Codecanyon SpaGreen
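Finally, a minimal sketch of the Clean WhatsApp Number step referenced above; it assumes the billing phone arrives at `data.customer.phone`, so adapt the path to your payload.

```javascript
// Clean WhatsApp Number sketch (Code node): keep digits only.
// Assumes the normalized payload places the phone at data.customer.phone.
return $input.all().map(item => {
  const raw = String(item.json.data?.customer?.phone ?? '');
  const number = raw.replace(/\D/g, ''); // e.g. "+880 132-282-7799" → "8801322827799"
  item.json.data.customer.number = number;
  return item;
});
```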