by Jainik Sheth
## What is this?

This RAG workflow lets you build a smart chat assistant that answers user questions based on any collection of documents you provide. It automatically imports and processes files from Google Drive, stores their content in a searchable vector database, and retrieves the most relevant information to generate accurate, context-driven responses. The workflow manages chat sessions and keeps the document database current, making it adaptable for use cases like customer support, internal knowledge bases, or HR assistants.

## How it works

1. **Chat RAG Agent**
   - Uses OpenAI for responses, referencing only data from the vector store (data uploaded to the Google Drive folder).
   - Maintains chat history in Postgres using a session key from the chat input.
2. **Data Pipeline (File Ingestion)**
   - Monitors Google Drive for new or updated files and automatically syncs them to the vector store.
   - Downloads, extracts, and processes file content (PDFs, Google Docs).
   - Generates embeddings and stores them in the Supabase vector store for retrieval.
3. **Vector Store Cleanup**
   - Scheduled and manual routines remove duplicate or outdated entries from the Supabase vector store.
   - Ensures only the latest and unique documents are available for retrieval.
4. **File Management**
   - Handles folder and file creation, upload, and metadata assignment in Google Drive.
   - Ensures files are organized and linked with their corresponding vector store entries.

## Getting Started

1. Create and connect all relevant credentials: Google Drive, Postgres, Supabase, OpenAI.
2. Run the table creation nodes first to set up your database tables in Postgres.
3. Upload your documents through Google Drive (or swap in a different file storage solution).
4. The agent will process them automatically (chunking text, storing tabular data in Postgres); a sketch of the chunking step appears below.
5. Start asking questions that leverage the agent's multiple reasoning approaches.

## Customization (optional)

This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

Note: if you're using different nodes (e.g. for file storage or the vector store), the integration may vary a little.

## Prerequisites

- Google account (Google Drive)
- Supabase account
- OpenAI API access
- Postgres database
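As a reference for this template's ingestion step, here is a minimal Code-node sketch of text chunking before embedding. The chunk size and overlap values are illustrative assumptions, not the template's exact settings:

```javascript
// Minimal sketch of the chunking step in an n8n Code node.
// Chunk size/overlap are assumptions; tune them for your documents.
const chunkSize = 1000; // characters per chunk
const overlap = 200;    // overlap keeps context across chunk borders

function chunkText(text) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// Assumes each incoming item carries extracted document text in json.text.
return $input.all().flatMap((item) =>
  chunkText(item.json.text).map((chunk, i) => ({
    json: { chunk, chunkIndex: i, source: item.json.fileName },
  }))
);
```

Each chunk keeps a slice of the previous one, so a sentence cut at a chunk border still appears whole in the next chunk when it is embedded and retrieved.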
by Vlad Arbatov
## Summary

Chat with your AI agent in Telegram. It remembers important facts about you in Airtable, can transcribe your voice messages, search the web, read and manage Google Calendar, fetch Gmail, and query Notion. Responses are grounded in your recent memories and tool outputs, then sent back to Telegram.

## What this workflow does

- Listens to your Telegram messages (text or voice)
- Maintains short-term chat memory per user and long-term memory in Airtable
- Decides when to save new facts about you (auto "Save Memory" without telling you)
- Uses tools on demand:
  - Web search via SerpAPI
  - Google Calendar: list/create/update/delete events
  - Gmail: list and read messages
  - Notion: fetch database info
- Transcribes Telegram voice notes with OpenAI and feeds them to the agent
- Combines live tool results + recent memories and replies in Telegram

## Apps and credentials

- Telegram Bot API: personal_bot
- xAI Grok: Grok-4 model for chat
- OpenAI: speech-to-text (transcribe audio)
- Airtable: store long-term memories
- Google Calendar: calendar actions
- Gmail: email actions
- Notion: knowledge and reading lists
- SerpAPI: web search

## Typical use cases

- Personal assistant that remembers preferences, decisions, and tasks
- Create/update meetings by chatting, and get upcoming events
- Ask "what did I say I'm reading?" or "what's our plan from last week?"
- Voice-first capture: send a voice note → get a transcribed, actionable reply
- Fetch recent emails or look up info on the web without leaving Telegram
- Query a Notion database (e.g., "show me the Neurocracy entries")

## How it works (node-by-node)

1. **Telegram Trigger**: Receives messages from your Telegram chat (text and optional voice).
2. **Text vs Message Router**: Routes based on message contents: the text path goes directly to the Agent (AI); the voice path downloads the file and transcribes it before the AI. Recent Airtable memories are always fetched for context.
3. **Get a file (Telegram)**: Downloads the voice file (voice.file_id) when present.
4. **Transcribe a recording (OpenAI)**: Converts audio to text so the agent can use it like a normal message.
5. **Get memories (Airtable)**: Searches your "Agent Memories" base/table, filtered by user, sorted by Created.
6. **Aggregate**: Bundles recent memory records into a single "Memories" array with text + timestamp (see the sketch below).
7. **Merge**: Combines the current input (text or transcript) with the memory bundle before the agent.
8. **Simple Memory (agent memory window)**: Short-term session memory keyed by Telegram chat ID; keeps the most recent 30 turns.
9. **Tools wired into the agent**:
   - SerpAPI
   - Google Calendar tools: get many events, create an event, update an event, delete an event
   - Gmail tools: get many messages, get a message
   - Notion tool: get a database
   - Airtable tool: Save Memory (stores distilled facts about the user)
10. **Agent**: The system prompt defines role, tone, and rules: be a friendly assistant; on each message, decide if it contains user info worth saving; if yes, call "Save Memory" to persist a short summary in Airtable; don't announce memory saves, just continue helping; use tools when needed (web, calendar, Gmail, Notion); think with the provided memory context block. Uses the xAI Grok Chat Model for reasoning and tool-calling, and can call Save Memory, Calendar, Gmail, Notion, and SerpAPI tools as needed.
11. **Save Memory (Airtable)**: Persists Memory and User fields to the "Agent Memories" base; Airtable sets the timestamp automatically.
12. **Send a text message (Telegram)**: Sends the agent's final answer back to the same Telegram chat ID.
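A minimal Code-node sketch of what the Aggregate step produces, using the field names described above (treat the exact Airtable output shape as an assumption about your schema):

```javascript
// Bundle recent Airtable memory records into one "Memories" array
// with text + timestamp. Field names ("Memory", "Created") follow
// the schema described in this template; adjust to match yours.
const memories = $input.all().map((item) => ({
  text: item.json.Memory,
  timestamp: item.json.Created,
}));

return [{ json: { Memories: memories } }];
```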
## Node map

| Node | Type | Purpose |
|---|---|---|
| Telegram Trigger | Trigger | Receive text/voice from Telegram |
| Text vs voice router | Flow control | Route text vs voice; also trigger memories fetch |
| Get a file | Telegram | Download voice audio |
| Transcribe a recording | OpenAI | Speech-to-text for voice notes |
| Get memories | Airtable | Load recent user memories |
| Aggregate | Aggregate | Pack memory records into "Memories" array |
| Merge | Merge | Combine input and memories before agent call |
| Simple Memory | Agent memory | Short-term chat memory per chat ID |
| xAI Grok Chat Model | LLM | Core reasoning model for the Agent |
| Search Web with SerpAPI | Tool | Web search |
| Google Calendar tools | Tool | List/create/update/delete events |
| Gmail tools | Tool | Search and read email |
| Notion tool | Tool | Query a Notion database |
| Save Memory | Airtable Tool | Persist distilled user facts |
| AI Agent | Agent | Orchestrates tools + memory, produces the answer |
| Send a text message | Telegram | Reply to the user in Telegram |

## Before you start

- Create a Telegram bot and get your token (via @BotFather).
- Put your Telegram user ID into the Telegram Trigger node (chatIds).
- Connect credentials:
  - xAI Grok (model: grok-4-0709)
  - OpenAI (for audio transcription)
  - Airtable (Agent Memories base and table)
  - Google Calendar OAuth
  - Gmail OAuth
  - Notion API
  - SerpAPI key
- Adjust the Airtable "User" value and the filterByFormula to match your name or account.

## Setup instructions

1) Telegram
   - Telegram Trigger: additionalFields.chatIds = your_telegram_id; download = true to allow voice handling
   - Send a text message: chatId = {{ $('Telegram Trigger').item.json.message.chat.id }}
2) Memory
   - The Airtable base/table must exist with fields Memory, User, Created (Created is auto-managed).
   - In the Save Memory and Get memories nodes, align Base, Table, and filterByFormula with your setup.
   - Simple Memory: sessionKey = {{ $('If').item.json.message.chat.id }}; contextWindowLength = 30 (adjust as needed)
3) Tools
   - Google Calendar: choose your calendar; test get/create/update/delete.
   - Gmail: set returnAll/simplify/messageId via $fromAI or static defaults.
   - Notion: set your databaseId.
   - SerpAPI: ensure the key is valid.
4) Agent (AI node)
   - SystemMessage: customize role, name, and any constraints.
   - Text input: concatenates transcript or text into one prompt: {{ $json.text }}{{ $json.message.text }}

## How to use

- Send a text or voice message to your bot in Telegram.
- The agent replies in the same chat, optionally performing tool actions.
- New personal facts you mention are silently summarized and stored in Airtable for future context.

## Customization ideas

- Replace Grok with another LLM if desired.
- Add more tools: Google Drive, Slack, Jira, GitHub, etc.
- Expand the memory schema (e.g., tags, categories, confidence).
- Add guardrails: profanity filters, domain limits, or cost control.
- Multi-user support: store a chat-to-user mapping and separate memories by user.
- Add summaries: a daily recap message created from new memories.

## Limits and notes

- Tool latency: calls to Calendar, Gmail, Notion, and SerpAPI add response time.
- Audio size/format: OpenAI transcription works best with common formats and short clips.
- Memory growth: periodically archive old Airtable entries, or change the Aggregate window.
- Timezone awareness: Calendar operations depend on your Google Calendar settings.

## Privacy and safety

- Sensitive info may be saved to Airtable; restrict access to the base.
- Tool actions operate under your connected accounts; review scopes and permissions.
- The agent may call external APIs (SerpAPI, OpenAI); ensure this aligns with your policies.

## Example interactions

- "Schedule a 30-min catch-up with Alex next Tuesday afternoon."
- "What meetings do I have in the next 4 weeks?"
- "Summarize my latest emails from Product Updates."
- "What did I say I'm reading?" (agent recalls from memories)
- Voice note: "Remind me to call the dentist this Friday morning." → agent transcribes and creates an event.

## Tags

telegram, agent, memory, grok, openai, airtable, google-calendar, gmail, notion, serpapi, voice, automation

## Changelog

- v1: First release with Telegram agent, short/long-term memory, voice transcription, and tool integrations for web, calendar, email, and Notion.
by Deniz
# 📌 How to Set Up the AI UGC Video Automation System

This system uses Telegram + n8n (no-code automation) + AI models to generate user-generated content (UGC) videos automatically.

## 🔹 Overview

- **Input:** Send a photo of the product + character via a Telegram bot.
- **Process:** The n8n workflow handles image analysis, prompt generation, image creation, video clip generation, and combining the clips into a final UGC ad.
- **Output:** The video is sent back to Telegram (or another destination like Google Drive/Dropbox).

## 🔹 System Workflow

### Input Section

1. **Telegram Setup**
   - Create a Telegram bot and get its Bot ID.
   - Connect the bot to the n8n Telegram Trigger node.
   - The bot listens for messages (photos + instructions).
2. **Send Input**
   - Upload one compressed image containing the product and, optionally, a character.
   - Example: "Create a UGC video with Gandalf promoting The Hobbit book. 20 seconds long."
3. **Image Handling**
   - n8n retrieves the image from Telegram (via file path).
   - An OpenAI agent analyzes the image, extracting product details (brand, color, description) and character details (name, outfit, style).
4. **Confirm Input**
   - The system replies on Telegram: "Got it. I'm now creating your video."

### Step 1: Create Image

1. **AI Agent (Image Prompt)**
   - Generates a natural, UGC-style prompt (realistic iPhone photo look).
   - Uses OpenAI GPT to structure the prompt and aspect ratio (2:3 or 3:2).
2. **Image Generation**
   - Sends the prompt + aspect ratio to Key.AI → 4.0 Image Model.
   - Waits until the image is generated.
   - Example: Gandalf holding The Hobbit book.

### Step 2: Create Video Clips

1. **AI Agent (Video Prompt)**
   - Creates the video script and scenes (dialogue + setting).
   - Calculates how many clips are needed (e.g., a 20s request → 3 x 8s clips), as shown in the sketch below.
   - Ensures UGC style (casual, amateur look).
2. **Clip Generation**
   - Sends prompts to the Key.AI V3 model (Fast or Quality).
   - Input: prompt + image + aspect ratio. Output: multiple short clips (8s each).
3. **Wait for Processing**
   - Clips take a few minutes to generate.
   - Retrieve the video URLs from Key.AI.

### Step 3: Combine Video

1. **Aggregate Clips**
   - Collect all video URLs (from the multiple clips).
2. **Merge with FFmpeg**
   - Send the videos to File.AI → FFmpeg Merge Service, which stitches the clips into one continuous video.
3. **Final Output**
   - The final merged video is returned as a download URL.
   - n8n sends the video back to your Telegram chat (or connected storage).

## 🔹 Customization Options

- **Models:** V3 Fast (~$0.40/clip, cheaper, good enough) or V3 Quality (~$2/clip, slightly higher quality).
- **Video Length:** The AI automatically adjusts the number of clips.
- **Outputs:** Telegram (default); can be extended to Google Drive, Dropbox, etc.

## 🔹 Cost

- Image generation: a few cents.
- Video clips: ~$0.40 each with V3 Fast.
- Clip merging: < $0.01.
- Much cheaper than manual UGC production.
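A minimal sketch of the clip-count calculation from Step 2. The 8-second clip length comes from the description above; rounding up (rather than truncating) is an assumption:

```javascript
// Work out how many 8-second clips cover the requested duration.
// Math.ceil rounds up, so a 20s request becomes 3 clips.
const clipLengthSeconds = 8;
const requestedSeconds = $json.requestedDuration ?? 20; // e.g. parsed from the Telegram message

const clipCount = Math.ceil(requestedSeconds / clipLengthSeconds);

return [{ json: { clipCount, clipLengthSeconds } }];
```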
by Daniel Lianes
# Auto-scrape Twitter accounts to WhatsApp groups

This workflow provides automated access to real-time Twitter/X content through intelligent scraping and AI processing. It keeps you at the cutting edge of breaking news, emerging trends, and industry developments by eliminating the need to manually check multiple social media accounts and delivering curated updates directly to your communication channels.

## Overview

This workflow automatically handles the complete Twitter monitoring process using advanced scraping techniques and AI analysis. It manages API authentication, multi-source data collection, intelligent content filtering, and message delivery with built-in error handling and rate limiting for reliable automation.

**Core Function:** Real-time social media monitoring that transforms Twitter noise into actionable intelligence, ensuring you're always first to know about the latest trends, product launches, and industry shifts that shape your field.

## Key Capabilities

- **Real-time trend detection** - Catch breaking news and emerging topics as they happen on X/Twitter
- **Multi-source Twitter monitoring** - Track specific accounts AND trending keyword searches simultaneously
- **AI-powered trend analysis** - Gemini 2.5 Pro filters noise and surfaces only the latest developments that matter
- **Stay ahead of the curve** - Identify emerging technologies, viral discussions, and industry shifts before they go mainstream
- **Flexible delivery options** - Pre-configured for WhatsApp, but easily adaptable for Telegram, Slack, Discord, or even blog content generation
- **Rate limit protection** - Built-in delays and error handling using TwitterAPI.io's reliable, cost-effective infrastructure

## Tools Used

- **n8n**: The automation platform orchestrating the entire workflow
- **TwitterAPI.io**: Reliable access to Twitter/X data without API complexities
- **OpenRouter**: Gateway to advanced AI models for content processing
- **Gemini 2.5 Pro**: Google's latest AI for intelligent content analysis and formatting
- **Evolution API**: WhatsApp Business API integration for message delivery
- **Built-in Error Handling**: Automatic retry logic and comprehensive error management

## How to Install

**IMPORTANT:** Before importing this workflow, you need to install the Evolution API community node:

1. **Install the community node first**: Go to Settings > Community Nodes in your n8n instance
2. **Add Evolution API**: Install the n8n-nodes-evolution-api package
3. **Restart n8n**: Allow the new nodes to load properly
4. **Import the workflow**: Download the .json file and import it into your n8n instance
5. **Configure Twitter access**: Set up TwitterAPI.io credentials and add target accounts/keywords
6. **Set up AI processing**: Add your OpenRouter API key for Gemini 2.5 Pro access
7. **Configure WhatsApp**: Set up Evolution API and add your target group ID
8. **Test & deploy**: Run a test execution and schedule for daily operation

## Use Cases

- **Stay Ahead of Breaking News**: Be the first to know about industry announcements, product launches, and major developments the moment they hit X/Twitter
- **Spot Trends Before They Explode**: Identify emerging technologies, viral topics, and shifting conversations while they're still building momentum
- **Competitive Intelligence**: Monitor what industry leaders, competitors, and influencers are discussing in real-time
- **Brand Surveillance**: Track mentions, discussions, and sentiment around your brand as conversations develop
- **Content Creation Pipeline**: Gather trending topics, viral discussions, and timely content ideas for blogs, newsletters, or social media strategy
- **Market Research**: Collect real-time social sentiment and emerging market signals from X/Twitter conversations
- **Multi-platform Distribution**: While configured for WhatsApp, the structured output can easily feed Telegram bots, Slack channels, Discord servers, or automated blog generation systems

## FIND YOUR WHATSAPP GROUPS

The workflow includes a helper node to easily find your WhatsApp group IDs:

1. **Use the Fetch Groups node**: The workflow includes a dedicated node that fetches all your available WhatsApp groups
2. **Run the helper**: Execute just that node to see a list of all groups with their IDs
3. **Copy the group ID**: Find your target group in the list and copy its ID
4. **Update the delivery node**: Paste the group ID into the final WhatsApp sending node

Group ID format: always ends with @g.us (example: 120363419788967600@g.us)

**Pro tip:** Test with a small private group first before deploying to your main team channels.
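If you prefer to inspect the Fetch Groups output in a Code node, a minimal sketch could look like this. The `subject` and `id` field names are assumptions about the Evolution API response shape; adjust them to what the node actually returns:

```javascript
// List fetched WhatsApp groups as name/ID pairs and sanity-check
// the ID format. Field names ("subject", "id") are assumptions
// about the Evolution API response.
const groups = $input.all().map((item) => ({
  name: item.json.subject,
  id: item.json.id,
}));

// Valid WhatsApp group IDs always end with @g.us.
const invalid = groups.filter((g) => !String(g.id).endsWith('@g.us'));
if (invalid.length > 0) {
  console.log('Entries without a @g.us suffix:', invalid);
}

return groups.map((g) => ({ json: g }));
```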
## Connect with Me

- **LinkedIn**: https://www.linkedin.com/in/daniel-lianes/
- **Discovery Call**: https://cal.com/averis/asesoria
- **Consulting Session**: https://cal.com/averis/consultoria-personalizada

### Was this helpful? Let me know!

I truly hope this was helpful. Your feedback is very valuable and helps me create better resources.

### Want to take automation to the next level?

If you're looking to optimize your business processes or need expert help with a project, here's how I can assist you:

- **Advisory (Discovery Call)**: Do you have a process in your business that you'd like to automate but don't know where to start? In this initial call, we'll explore your needs and see if automation is the ideal solution for you. Schedule a Discovery Call.
- **Personalized Consulting (Paid Session)**: If you already have a specific problem, an integration challenge, or need hands-on help building a custom workflow, this session is for you. Together, we'll find a powerful solution for your case. Book Your Consulting Session.

### Stay Up to Date

For more tricks, ideas, and news about automation and AI, let's connect on LinkedIn! Follow me on LinkedIn

#n8n #automation #twitter #whatsapp #ai #socialmedia #monitoring #intelligence #gemini #scraping #workflow #nocode #businessautomation #socialmonitoring #contentcuration #teamcommunication #brandmonitoring #trendanalysis #marketresearch #productivity
by JJ Tham
Stop manually digging through Meta Ads data and spending hours trying to connect the dots. This workflow turns n8n into an AI-powered media buyer that automatically analyzes your ad performance, categorizes your creatives, and delivers insights directly into a Google Sheet.

➡️ Watch the full 4-part setup and tutorial on YouTube: https://youtu.be/hxQshcD3e1Y

## About This 4-Part Automation Series

As a media buyer, I built this system to automate the heavy lifting of analyzing ad data and brainstorming new creative ideas. This template is the first foundational part of that larger system.

- ✅ **Part 1 (This Template): Pulling Ad Data & Getting Quick Insights.** Automatically pulls data into a Google Sheet and uses an LLM to categorize ad performance.
- ✅ **Part 2: Finding the Source Files for the Best Ads.** Fetches the image or video files for top-performing ads.
- ✅ **Part 3: Using AI to Understand Why an Ad Works.** Sends your best ads to Google Gemini for structured notes on hooks, transcripts, and visuals.
- ✅ **Part 4: Getting the AI to Suggest New Creative Ideas.** Uses all the insights to generate fresh ad concepts, scripts, and creative briefs.

## What This Template (Part 1) Does

1. **Secure Token Management**: Automatically retrieves and refreshes your Facebook long-term access token.
2. **Fetch Ad Data**: Pulls the last 28 days of ad-level performance data from your Facebook Ads account.
3. **Process & Clean**: Parses raw data, standardizes key e-commerce metrics (like ROAS), and filters for sales-focused campaigns.
4. **Benchmark Calculation**: Aggregates all data to create an overall performance benchmark (e.g., average Cost Per Purchase); see the sketch at the end of this description.
5. **AI Analysis**: A "Senior Media Buyer" AI persona evaluates each ad against the benchmark and categorizes it as "HELL YES," "YES," or "MAYBE," with justifications.
6. **Output to Google Sheets**: Updates your Google Sheet with both raw performance data and AI-generated insights.

## Who Is It For?

- E-commerce store owners
- Digital marketing agencies
- Facebook Ads media buyers

## How to Set It Up

1. **Credentials**
   - Connect your Google Gemini and Google Sheets accounts in the respective nodes.
   - The template uses NocoDB for token management. Configure the "Getting Long-Term Token" and "Updating Token" nodes, or replace them with your preferred credential storage method.
2. **Update Your IDs**
   - In the "Getting Data For the Past 28 Days…" HTTP Request node, replace act_XXXXXX in the URL with your Facebook Ad Account ID.
   - In both Google Sheets nodes ("Sending Raw Data…" and "Updating Ad Insights…"), update the Document ID with your target Google Sheet's ID.
3. **Run the Workflow**
   - Click "Test workflow" to run your first AI-powered analysis!

## Tools Used

- n8n
- Facebook for Developers
- Google AI Studio (Gemini)
- NocoDB (or any credential database of your choice)
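A minimal Code-node sketch of the benchmark calculation and categorization described above. The `spend`/`purchases` field names and the 0.8x / 1.2x thresholds are illustrative assumptions, not the template's exact logic:

```javascript
// Compute the overall average Cost Per Purchase across all ads,
// then bucket each ad relative to that benchmark. Field names and
// thresholds are assumptions for illustration.
const ads = $input.all().map((item) => item.json);

const totalSpend = ads.reduce((sum, ad) => sum + Number(ad.spend || 0), 0);
const totalPurchases = ads.reduce((sum, ad) => sum + Number(ad.purchases || 0), 0);
const benchmarkCpp = totalSpend / Math.max(totalPurchases, 1);

return ads.map((ad) => {
  const cpp = Number(ad.spend || 0) / Math.max(Number(ad.purchases || 0), 1);
  let category = 'MAYBE';
  if (cpp <= benchmarkCpp * 0.8) category = 'HELL YES'; // well below average cost
  else if (cpp <= benchmarkCpp * 1.2) category = 'YES'; // around average
  return { json: { ...ad, cpp, benchmarkCpp, category } };
});
```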
by Jadai kongolo
# 🚀 n8n Local AI Agentic RAG Template

Author: Jadai kongolo

## What is this?

This template provides an entirely local implementation of an Agentic RAG (Retrieval Augmented Generation) system in n8n that can easily be extended for your specific use case and knowledge base. Unlike standard RAG, which only performs simple lookups, this agent can reason about your knowledge base, self-improve retrieval, and dynamically switch between different tools based on the specific question.

## Why Agentic RAG?

Standard RAG has significant limitations:
- Poor analysis of numerical/tabular data
- Missing context due to document chunking
- Inability to connect information across documents
- No dynamic tool selection based on question type

What makes this template powerful:
- **Intelligent tool selection**: Switches between RAG lookups, SQL queries, or full document retrieval based on the question
- **Complete document context**: Accesses entire documents when needed instead of just chunks
- **Accurate numerical analysis**: Uses SQL for precise calculations on spreadsheet/tabular data
- **Cross-document insights**: Connects information across your entire knowledge base
- **Multi-file processing**: Handles multiple documents in a single workflow loop
- **Efficient storage**: Uses JSONB in Supabase to store tabular data without creating new tables for each CSV (see the sketch below)

## Getting Started

1. Run the table creation nodes first to set up your database tables in Supabase.
2. Upload your documents to the folder on your computer that is mounted to /data/shared in the n8n container. By default this is the "shared" folder in the local AI package.
3. The agent will process them automatically (chunking text, storing tabular data in Supabase).
4. Start asking questions that leverage the agent's multiple reasoning approaches.

## Customization

This template provides a solid foundation that you can extend by:
- Tuning the system prompt for your specific use case
- Adding document metadata like summaries
- Implementing more advanced RAG techniques
- Optimizing for larger knowledge bases

The non-local ("cloud") version of this Agentic RAG agent can be found here.
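A minimal sketch of the JSONB storage idea in a Code node. The table and column names (`document_rows`, `dataset_id`, `row_data`) are assumptions; the template's actual schema may differ:

```javascript
// Turn parsed CSV rows into records for one shared Supabase table,
// storing each row as JSONB instead of creating a new table per CSV.
// Table/column names are assumptions about the template's schema.
const fileId = $json.fileId; // identifier of the source CSV
const rows = $json.rows;     // array of objects parsed from the CSV

return rows.map((row) => ({
  json: {
    dataset_id: fileId, // ties every row back to its source document
    row_data: row,      // the whole CSV row, kept as JSONB
  },
}));
```

Because every CSV lands in the same table, the agent can run SQL over the JSONB column for numerical questions without schema changes per file.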
by Sandeep Patharkar | ai-solutions.agency
# Build an AI HR Assistant to Screen Resumes and Send Telegram Alerts

A step-by-step guide to creating a fully automated recruitment pipeline that screens candidates, generates interview questions, and notifies your team.

This template provides a complete, step-by-step guide to building an AI-powered HR assistant from scratch in n8n. You will learn how to connect a web form to an intelligent screening agent that reads resumes, evaluates candidates against your job criteria, and prepares unique interview questions for the most promising applicants.

| Services Used | Features |
| :--- | :--- |
| 🤖 OpenAI / LangChain | Uses AI Agents to screen, score, and analyze candidates. |
| 📄 Google Drive & Google Sheets | Stores resumes and manages a database of open positions and applicants. |
| 📥 n8n Form Trigger | Provides a public-facing web form to capture applications. |
| 💬 Telegram | Sends real-time alerts to the hiring team for qualified candidates. |

## How It Works ⚙️

1. 📥 **Application Submitted**: The workflow starts when a candidate fills out the n8n Form Trigger with their details and uploads their CV.
2. 📂 **File Processing**: The CV is automatically uploaded to a specific Google Drive folder for record-keeping, and the Extract from File node reads its text content.
3. 🧠 **AI Screening Agent**: A LangChain Agent analyzes the resume text. It uses the Google Sheets Tool to look up the requirements for the applied role, then scores the candidate and decides if they should be shortlisted.
4. 📊 **Log Results**: The agent's decision (name, score, shortlisted status) is logged in your master "Applications" Google Sheet.
5. ✅ **Qualification Check**: An IF node checks if the candidate was shortlisted.
6. ❓ **AI Question Generator**: If shortlisted, a second LangChain Agent generates three unique, relevant interview questions based on the candidate's resume and the job description.
7. ✍️ **Update Sheet**: The generated questions are added to the candidate's row in the Google Sheet.
8. 🔔 **Notify Team**: A final alert is sent via Telegram to notify the HR team that a new candidate has been qualified and is ready for review.

## 🛠️ How to Build This Workflow

Follow these steps to build the recruitment assistant from a blank canvas.

### Step 1: Set Up the Application Intake

1. Add a Form Trigger node. Configure it with fields for Name, Email, Phone Number, a File Upload for the CV, and a Dropdown for the "Job Role".
2. Connect a Google Drive node. Set the Operation to Upload and connect your credentials. Set it to upload the CV file from the Form Trigger into a specific folder.
3. Add an Extract from File node. Set it to extract text from the PDF CV file provided by the trigger.

### Step 2: Build the AI Screening Agent

1. Add a Langchain Agent node. This will be your main screening agent.
2. In its prompt, instruct the AI to act as a resume screener. Tell it to use the input text from the Extract from File node and the tools you will provide to score and shortlist candidates.
3. Add an OpenAI Chat Model node and connect it to the Agent's Language Model input.
4. Add a Google Sheets Tool node. Point it to a sheet with your open positions and their requirements. Connect this to the Agent's Tool input.
5. Add a Structured Output Parser node and define the JSON structure you want the agent to return (e.g., candidate_name, score, shortlisted); a sample follows below. Connect this to the Agent's Output Parser input.
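A minimal example of the structured output the parser could expect, using the fields named above (the types and example values are assumptions to adapt to your sheet):

```javascript
// Example of the agent output the Structured Output Parser enforces.
// Field names follow the guide; types are reasonable assumptions.
const exampleAgentOutput = {
  candidate_name: "Jane Doe", // taken from the resume text
  score: 82,                  // screening score, e.g. 0-100
  shortlisted: "yes",         // "yes" or "no"; the IF node checks this
};

return [{ json: exampleAgentOutput }];
```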
### Step 3: Log Results & Check for a Match

1. Connect a Google Sheets node after the Agent. Set its operation to Append or Update. Use it to add the structured output from the agent into your main "Applications" sheet.
2. Add an IF node. Set the condition to continue only if the shortlisted field equals "yes".

### Step 4: Generate Interview Questions

1. On the 'true' path of the IF node, add a second Langchain Agent node.
2. Write a prompt telling this agent to generate 3 interview questions based on the candidate's resume and the job requirements.
3. Connect the same OpenAI Model and Google Sheets Tool to this agent.
4. Add another Google Sheets node. Set it to Update the existing row for the candidate, adding the newly generated questions.

## 💬 Need Help or Want to Learn More?

Join my Skool community for n8n + AI automation tutorials, live Q&A sessions, and exclusive workflows:
👉 https://www.skool.com/n8n-ai-automation-champions

Template Author: Sandeep Patharkar
Category: Website Chatbots / AI Automation
Difficulty: Beginner
Estimated Setup Time: ⏱️ 15 minutes
by Oussama
This n8n template creates an intelligent expense tracking system 🤖 that processes text, voice, and receipt images through Telegram. The assistant automatically categorizes expenses, handles currency conversions 🌍, and maintains financial records in Google Sheets while providing smart spending insights 💡.

## Use Cases

- 🗣️ Personal expense tracking via Telegram chat
- 🧾 Receipt scanning and data extraction
- 💱 Multi-currency expense management
- 📂 Automated financial categorization
- 🎙️ Voice-to-expense logging
- 📊 Daily/weekly/monthly spending analysis

## How it works

1. **Multi-Input Processing**: A Telegram trigger captures text messages, voice notes, and receipt images.
2. **Content Analysis**: A Switch node routes different input types (text, audio, images) to the appropriate processors.
3. **Voice Processing**: ElevenLabs converts voice messages to text for expense extraction.
4. **Receipt OCR**: Google Gemini analyzes receipt images to extract amounts and descriptions.
5. **Expense Classification**: An LLM determines if the input is an expense or a general query.
6. **Expense Parsing**: For multiple expenses, the AI splits and normalizes each item.
7. **Currency Conversion**: An exchange rate API converts foreign currencies to USD (see the sketch below).
8. **Smart Categorization**: The AI agent assigns expenses to predefined categories with emojis.
9. **Data Storage**: Google Sheets stores all expense records with automatic totals.
10. **Intelligent Responses**: The agent provides spending summaries, alerts, and financial insights.

## Requirements

- 🌐 Telegram Bot API access
- 🤖 OpenAI, Gemini, or any other AI model
- 🗣️ ElevenLabs API for voice processing
- 📝 Google Sheets API access
- 💹 Exchange rate API access

## Good to know

- ⚠️ Daily spending alerts trigger when expenses exceed 100 USD.
- 🏷️ Supports 12 predefined expense categories with emoji indicators.
- 🔄 Automatic currency detection and conversion to USD.
- 🎤 Voice messages are processed through speech-to-text.
- 📸 Receipt images are analyzed using computer vision.

## Customizing this workflow

- ✏️ Modify expense categories in the system prompt.
- 📈 Adjust spending alert thresholds.
- 💵 Change the base currency from USD to your preferred currency.
- ✅ Add additional expense validation rules.
- 🔗 Integrate with other financial platforms.
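A minimal Code-node sketch of the conversion and alert logic. The shapes of `rates` and `expenses` are assumptions about what earlier nodes produce; the 100 USD threshold comes from the description above:

```javascript
// Convert parsed expenses to USD and flag when the daily total
// exceeds the 100 USD alert threshold. The input shapes below are
// assumptions about what the parsing and exchange-rate nodes return.
const rates = $json.rates;       // e.g. { EUR: 1.08, MAD: 0.10, USD: 1 }
const expenses = $json.expenses; // e.g. [{ amount: 50, currency: 'EUR', description: 'dinner' }]

const converted = expenses.map((e) => ({
  ...e,
  amountUsd: e.amount * (rates[e.currency] ?? 1),
}));

const dailyTotalUsd = converted.reduce((sum, e) => sum + e.amountUsd, 0);
const alert = dailyTotalUsd > 100; // daily spending alert threshold

return [{ json: { expenses: converted, dailyTotalUsd, alert } }];
```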
by Margo Rey
# AI-Powered Email Generation with MadKudu sent via Outreach.io

This workflow researches prospects using MadKudu MCP, generates personalized emails with OpenAI, and syncs them to Outreach with automatic sequence enrollment. It's for SDRs and sales teams who want to scale personalized outreach by automating research and email generation while maintaining quality.

## ✨ Who it's for

- Sales Development Representatives (SDRs) doing cold outreach
- Business Development teams needing personalized emails at scale
- RevOps teams wanting to automate prospect research workflows
- Sales teams using Outreach for email sequences

## 🔧 How it works

1. **Input Email & Research**: Enter the prospect's email via the chat trigger. The workflow extracts the email and generates a comprehensive account brief using MadKudu MCP account-brief-instructions.
2. **Deep Research & Email Generation**: The AI Agent performs 6 research steps using MadKudu MCP tools:
   - Account details (hiring, partnerships, tech stack, sales motion, risk)
   - Top users in the account (for name-dropping opportunities)
   - Contact details (role, persona, engagement)
   - Contact web search (personal interests, activities)
   - Contact picture web search (LinkedIn profile insights)
   - Company value prop research
   The AI generates 5 different email angles and selects the best one based on relevance.
3. **Outreach Integration**: Checks if the prospect exists in Outreach by email. If they exist, it updates a custom field (custom49) with the generated email; if new, it creates a prospect with the email in that custom field. It then enrolls the prospect in the specified email sequence (ID 781) using a mailbox (ID 51), waits 30 seconds, and verifies successful enrollment. A sketch of this upsert logic follows below.

## 📋 How to set up

1. **Set your OpenAI credentials**: Required for AI research and email generation.
2. **Create an n8n Variable named madkudu_api_key** to store your MadKudu API key: Used by the MadKudu MCP tool to access account research capabilities.
3. **Create an n8n Variable named my_company_domain** to store your company domain: Used for context in email generation and value prop research.
4. **Create an OAuth2 API credential to connect your Outreach account**: Used to create/update prospects and enroll them in sequences.
5. **Configure Outreach settings**:
   - Update the Outreach Mailbox ID (currently set to 51) in the "Configure Outreach Settings" node.
   - Update the Outreach Sequence ID (currently set to 781) in the same node.
   - Adjust the custom field name if using a field other than custom49.

## 🔑 How to connect Outreach

1. In n8n, add a new OAuth2 API credential and copy the callback URL.
2. Go to the Outreach developer portal.
3. Click "Add" to create a new app.
4. In Feature selection, add Outreach API (OAuth).
5. In API Access (OAuth), set the redirect URI to the n8n callback.
6. Select the following scopes: accounts.read, accounts.write, prospects.read, prospects.write, sequences.read.
7. Save in Outreach.
8. Enter the Outreach Application ID into the n8n Client ID and the Outreach Application Secret into the n8n Client Secret.
9. Save in n8n and connect your Outreach account via OAuth.

## ✅ Requirements

- MadKudu account with access to an API key
- Outreach admin permissions to create an app
- OpenAI API key
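For reference, a hedged sketch of the exists-check / update-or-create step against Outreach's v2 API (JSON:API format). The exact payload shapes are assumptions; the template uses configured n8n HTTP Request nodes rather than this code:

```javascript
// Sketch of the prospect upsert described in step 3 of "How it works".
// Placeholder inputs; in the workflow these come from earlier nodes.
const email = 'prospect@example.com';
const generatedEmail = 'Hi ...';                // the AI-written email body
const accessToken = process.env.OUTREACH_TOKEN; // from the OAuth2 credential

const base = 'https://api.outreach.io/api/v2';
const headers = {
  Authorization: `Bearer ${accessToken}`,
  'Content-Type': 'application/vnd.api+json',
};

// 1. Look the prospect up by email.
const found = await fetch(
  `${base}/prospects?filter[emails]=${encodeURIComponent(email)}`,
  { headers }
).then((r) => r.json());

const payload = {
  data: {
    type: 'prospect',
    attributes: { custom49: generatedEmail }, // custom field holding the email
  },
};

if (found.data.length > 0) {
  // 2a. Existing prospect: update the custom field.
  payload.data.id = found.data[0].id;
  await fetch(`${base}/prospects/${found.data[0].id}`, {
    method: 'PATCH', headers, body: JSON.stringify(payload),
  });
} else {
  // 2b. New prospect: create it with the email address and custom field.
  payload.data.attributes.emails = [email];
  await fetch(`${base}/prospects`, {
    method: 'POST', headers, body: JSON.stringify(payload),
  });
}
```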
## 🛠 How to customize the workflow

- **Change the research steps**: Modify the AI Agent prompt to adjust the 6 research steps or add additional MadKudu MCP tools.
- **Update the Outreach configuration**: Change the Mailbox ID (51) and Sequence ID (781) in the "Configure Outreach Settings" node, and update the custom field mapping if using a field other than custom49.
- **Modify email generation**: Adjust the prompt guidelines, tone, or angle priorities in the "AI Email Generator" node.
- **Change the trigger**: Swap the chat trigger for a Schedule or Webhook, or integrate with your CRM to automate prospect input.
by Khairul Muhtadin
This AI-powered workflow transforms n8n workflow JSON files into publication-ready, SEO-optimized markdown posts for the n8n community. Simply upload your workflow's JSON, and let Google Gemini 2.5 Pro, guided by a LlamaIndex-powered knowledge base of best practices, automatically generate compelling content.

## Why Use This Workflow?

- **Time Savings**: Reduces the time to create a detailed workflow post from over an hour of manual writing to under 2 minutes.
- **Cost Reduction**: Eliminates the need for separate AI content subscriptions or outsourcing content creation tasks.
- **Error Prevention**: Enforces content quality and structural consistency by using a knowledge base of n8n's official guidelines, minimizing formatting errors.

## Ideal For

- **n8n Workflow Creators**: To quickly document and share their creations on the community platform without the tedious, time-consuming writing process.
- **Developer Advocates**: To standardize and accelerate the production of technical tutorials and workflow showcases.
- **Content & Marketing Teams**: To streamline the content pipeline for n8n-related blog posts, tutorials, and community engagement initiatives.

## How It Works

1. **Trigger**: The process starts when you upload an n8n workflow JSON file via a simple web form.
2. **Data Extraction**: The workflow automatically extracts the JSON content from the uploaded file.
3. **Intelligence Layer**: An advanced AI agent, powered by Google Gemini 2.5 Pro, analyzes the structure, nodes, and metadata of your workflow.
4. **Knowledge Retrieval**: The agent consults a specialized, in-memory knowledge base built from n8n's content guidelines. This knowledge base is created by parsing documents with LlamaIndex and refined with a Cohere Reranker for maximum accuracy.
5. **Content Generation**: The AI agent synthesizes the technical details from your JSON with the best practices from the knowledge base to write a complete, benefit-driven markdown post.
6. **Output & Delivery**: The final, polished markdown content is generated as the workflow's output, ready to be copied and pasted into the n8n community platform.

## Setup Guide

### Prerequisites

| Requirement | Type | Purpose |
|-------------|------|---------|
| n8n instance | Essential | Workflow execution platform |
| Google Gemini API Key | Essential | Powers the core AI content generation |
| LlamaIndex Cloud API Key | Essential | Parses documents for the knowledge base |
| Cohere API Key | Optional | Improves knowledge base search results |
| Google Drive Account | Optional | For automatically updating the knowledge base from a Google Doc |

### Installation Steps

1. Import the JSON file to your n8n instance.
2. Configure credentials:
   - **Google Gemini**: In the "GEmini 2.5 pro" node, create and add your Google Gemini API credential.
   - **LlamaIndex**: In the three HTTP Request nodes named "Parse Document...", "Monitor Document...", and "Retrieve Parsed...", create an HTTP Header Auth credential. The header name is Authorization and the value is Bearer YOUR_LLAMA_INDEX_API_KEY (see the request sketch after these steps).
   - **Cohere** (optional): In the "Reranker Cohere" node, create and add your Cohere API credential.
   - **Google Drive** (optional): If you plan to auto-update the knowledge base, configure Google Drive OAuth2 credentials for the "Knowledge Base Updated Trigger" and "Download Knowledge Document" nodes.
3. Update environment-specific values: To use the knowledge base auto-update feature, go to the "Knowledge Base Updated Trigger" node and select the Google Drive file containing your content guidelines.
4. Customize settings: The primary system prompt in the "n8ncreator" agent node can be modified to adjust the tone, style, or structure of the generated content.
5. Test execution: Run the workflow manually and use the form to upload a sample n8n workflow JSON file to verify that all connections work correctly.
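For reference, an authenticated LlamaIndex request built with that HTTP Header Auth credential might look like this sketch. Only the Authorization header format above comes from the template; the endpoint URL and JOB_ID placeholder are illustrative assumptions, so check the template's HTTP Request nodes for the actual URLs:

```javascript
// Sketch of an HTTP Header Auth request to the LlamaIndex Cloud API.
// The endpoint path is an illustrative assumption, not the template's URL.
const response = await fetch(
  'https://api.cloud.llamaindex.ai/api/parsing/job/JOB_ID',
  {
    headers: {
      // Exactly what the HTTP Header Auth credential contributes:
      Authorization: `Bearer ${process.env.LLAMA_INDEX_API_KEY}`,
    },
  }
);
console.log(await response.json());
```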
## Technical Details

### Core Nodes

| Node | Purpose | Key Configuration |
|------|---------|-------------------|
| Form Trigger | Initiates the workflow via a file upload. | Set the "Input Json Workflow" field to required. |
| Langchain Agent | Orchestrates the entire content creation process. | The system prompt contains all instructions for the AI. |
| ChatGoogleGemini | Provides the core generative AI capabilities. | Select your Gemini model of choice (e.g., gemini-2.5-pro). |
| VectorStoreInMemory | Acts as the agent's knowledge base tool. | Configured to use embeddings from a Google Gemini model. |
| HTTPRequest | Interacts with the LlamaIndex API to parse documents. | Set up with the LlamaIndex API endpoint and authentication. |

### Customization Options

Basic adjustments:
- **Change AI Model**: Replace the ChatGoogleGemini node with another LLM node (e.g., OpenAI, Anthropic) to use a different provider.
- **Adjust System Prompt**: Modify the prompt in the "n8ncreator" node to tailor the output for different platforms (e.g., blog, internal wiki) or change the writing style.

Advanced enhancements:
- **Automated Publishing**: Connect the output of the "n8ncreator" node to a Ghost, WordPress, or GitHub node to automatically publish the generated post.
- **Add Web Search**: Equip the Langchain Agent with a web search tool to allow it to fetch live information about new n8n nodes or services.
- **Batch Processing**: Replace the Form Trigger with a Read Binary Files node to process an entire folder of workflow JSON files in a single run.

### Performance & Optimization

| Metric | Expected Performance | Optimization Tips |
|--------|---------------------|-------------------|
| Execution time | ~1 minute per run | Largely dependent on the Gemini API response time. |
| API calls | 1 LLM call per post | Knowledge base updates trigger LlamaIndex/Google calls separately. |
| Error handling | Built-in retry logic for document parsing | Add an error workflow path after the "n8ncreator" node to handle AI generation failures. |

### Troubleshooting

Common issues:

| Problem | Cause | Solution |
|---------|-------|----------|
| AI output is generic or incomplete | The input JSON file is invalid or lacks key information (e.g., no node names). | Ensure you are uploading a valid, exported n8n workflow JSON. Verify the workflow has been saved with descriptive node names. |
| LlamaIndex parsing fails | The LlamaIndex API key is incorrect or the source document is inaccessible. | Double-check your LlamaIndex API credential. Ensure the Google Doc sharing settings allow access. |
| Credential Error | API keys are missing or incorrect for Gemini, LlamaIndex, or Cohere. | Go to the specified nodes and verify that the correct credentials have been created and selected. |

Created by: khaisa Studio
Category: AI
Tags: AI, Content Generation, Google Gemini, LlamaIndex, Automation

Need custom workflows? Contact us
Connect with the creator: Portfolio • Workflows • LinkedIn • Medium • Threads
by Palak Rathor
This template transforms uploaded brand assets into AI-generated influencer-style posts, complete with captions, images, and videos, using n8n, OpenAI, and your preferred image/video generation APIs.

## 🧠 Who it's for

Marketers, creators, or brand teams who want to speed up content ideation and visual generation. Perfect for social-media teams looking to turn product photos and brand visuals into ready-to-review creative posts.

## ⚙️ How it works

1. **Upload your brand assets**: A form trigger collects up to three files: product, background, and prop.
2. **AI analysis & content creation**: An OpenAI LLM analyzes your brand tone and generates post titles, captions, and visual prompts.
3. **Media generation**: Connected image/video generation workflows create the corresponding visuals.
4. **Result storage**: All captions, image URLs, and video URLs are automatically written to a Google Sheet for review or publishing.

## 🧩 How to set it up

1. Replace all placeholders in nodes:
   - <<YOUR_SHEET_ID>>
   - <<FILE_UPLOAD_BASE>>
   - <<YOUR_API_KEY>>
   - <<YOUR_N8N_DOMAIN>>/form/<<FORM_ID>>
2. Add your own credentials in:
   - Google Sheets
   - HTTP Request
   - AI/LLM nodes
3. Execute the workflow or trigger it via the form.
4. Check your connected Google Sheet for generated posts and media links.

## 🛠️ Requirements

| Tool | Purpose |
|------|----------|
| OpenAI / compatible LLM key | Caption & idea generation |
| Image/Video generation API | Creating visuals |
| Google Sheets credentials | Storing results |
| (Optional) n8n Cloud / self-hosted | To run the workflow |

## 🧠 Notes

- The workflow uses modular sub-workflows for image and video creation; you can connect your own generation nodes.
- All credentials and private URLs have been removed.
- Works seamlessly with both n8n Cloud and self-hosted setups.
- Output is meant for creative inspiration; review before posting publicly.

## 🧩 Why it's useful

- Speeds up campaign ideation and content creation.
- Provides structured, reusable results in Google Sheets.
- Fully visual, modular, and customizable for any brand or industry.

## 🧠 Example Use Cases

- Influencer campaign planning
- Product launch creatives
- E-commerce catalog posts
- Fashion, lifestyle, or tech brand content

## ✅ Security & best practices

- No hardcoded keys or credentials included.
- All private URLs replaced with placeholders.
- Static data removed from the public JSON.
- Follows n8n's template structure, node naming, and sticky-note annotation guidelines.

## 📦 Template info

- Name: AI-Powered Influencer Post Generator with Google Sheets and Image/Video APIs
- Category: AI / Marketing Automation / Content Generation
- Author: Palak Rathor
- Version: 1.0 (Public Release, October 2025)
by Gegenfeld
# AI Background Removal Workflow

This workflow automatically removes backgrounds from images stored in Airtable using the APImage API 🡥, then downloads and saves the processed images to Google Drive. Perfect for batch processing product photos, portraits, or any images that need clean, transparent backgrounds. The source (Airtable) and the storage (Google Drive) can be changed to any service or database you prefer.

## 🧩 Nodes Overview

### 1. Remove Background (Manual Trigger)

This manual trigger starts the background removal process when clicked.

Customization options:
- Replace with a Schedule Trigger for automatic daily/weekly processing
- Replace with a Webhook Trigger to start via API calls
- Replace with a File Trigger to process when new files are added

### 2. Get a Record (Airtable)

Retrieves media files from your Airtable "Creatives Library" database.
- Connects to the "Media Files" table in your Airtable base
- Fetches records containing image thumbnails for processing
- Returns all matching records with their thumbnail URLs and metadata

Required Airtable structure:
- Table with an image/attachment field (currently expects a "Thumbnail" field)
- Optional fields: File Name, Media Type, Upload Date, File Size

Customization options:
- Replace with Google Sheets, Notion, or any database node
- Add filters to process only specific records
- Point to different tables with image URLs

### 3. Code (JavaScript Processing)

Processes Airtable records and prepares thumbnail data for background removal.
- Extracts thumbnail URLs from each record
- Chooses the best-quality thumbnail (large > full > original)
- Creates clean filenames by removing special characters
- Adds processing metadata and timestamps

Key features:

```javascript
// Selects best thumbnail quality
if (thumbnail.thumbnails?.large?.url) {
  thumbnailUrl = thumbnail.thumbnails.large.url;
}

// Creates clean filename by stripping special characters
// (example character class; adjust to your naming needs)
cleanFileName: (record.fields['File Name'] || 'unknown')
  .replace(/[^a-zA-Z0-9]/g, '_')
  .toLowerCase()
```

Easy customization for different databases:
- **Product database**: Change field mappings to 'Product Name', 'SKU', 'Category'
- **Portfolio database**: Use 'Project Name', 'Client', 'Tags'
- **Employee database**: Use 'Full Name', 'Department', 'Position'

### 4. Split Out

Converts the array of thumbnails into individual items for parallel processing.
- Enables processing multiple images simultaneously
- Each item contains all thumbnail metadata for downstream nodes

### 5. APImage API (HTTP Request)

Calls the APImage service to remove backgrounds from images.

API endpoint: POST https://apimage.org/api/ai-remove-background

Request configuration:
- **Header**: Authorization: Bearer YOUR_API_KEY
- **Body**: image_url: {{ $json.originalThumbnailUrl }}

✅ Setup required:
- Replace YOUR_API_KEY with your actual API key
- Get your key from the APImage Dashboard 🡥

### 6. Download (HTTP Request)

Downloads the processed image from APImage's servers using the returned URL.
- Fetches the background-removed image file
- Prepares image data for upload to storage
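Outside of n8n, nodes 5-6 can be sketched as two plain fetch calls using the endpoint and header format given above. The `result_url` response field name is an assumption; check the APImage docs for the actual field:

```javascript
// Sketch of the background-removal call and download (nodes 5-6).
// Endpoint and auth header come from the node description above;
// `result_url` is an assumed response field name.
const apiKey = process.env.APIMAGE_API_KEY;

const res = await fetch('https://apimage.org/api/ai-remove-background', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${apiKey}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ image_url: 'https://example.com/photo.jpg' }),
});
const { result_url } = await res.json();

// Download the processed image as binary data, ready for upload.
const imageBuffer = Buffer.from(await (await fetch(result_url)).arrayBuffer());
```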
### 7. Upload File (Google Drive)

Saves processed images to your Google Drive in a "bg_removal" folder.

Customization options:
- Replace with Dropbox, OneDrive, AWS S3, or FTP upload
- Create date-based folder structures
- Use dynamic filenames with metadata
- Upload to multiple destinations simultaneously

## ✨ How To Get Started

1. **Set up the APImage API**:
   - Double-click the APImage API node
   - Replace YOUR_API_KEY with your actual API key
   - Keep the Bearer prefix
2. **Configure Airtable**:
   - Ensure your Airtable has a table with image attachments
   - Update field names in the Code node if they differ from the defaults
3. **Test the workflow**:
   - Click the Remove Background trigger node
   - Verify images are processed and uploaded successfully

🔗 Get your API Key 🡥

## 🔧 How to Customize

### Input Customization (Left Section)

Replace the Airtable integration with any data source containing image URLs:
- **Google Sheets** with product catalogs
- **Notion** databases with image galleries
- **Webhooks** from external systems
- **File system** monitoring for new uploads
- **Database** queries for image records

### Output Customization (Right Section)

Modify where processed images are stored:
- **Multiple Storage**: Upload to Google Drive + Dropbox simultaneously
- **Database Updates**: Update original records with processed image URLs
- **Email/Slack**: Send processed images via communication tools
- **Website Integration**: Upload directly to WordPress, Shopify, etc.

### Processing Customization

- **Batch Processing**: Limit concurrent API calls
- **Quality Control**: Add image validation before/after processing
- **Format Conversion**: Use a Sharp node for resizing or format changes
- **Metadata Preservation**: Extract and maintain EXIF data

## 📋 Workflow Connections

Remove Background → Get a Record → Code → Split Out → APImage API → Download → Upload File

## 🎯 Perfect For

- **E-commerce**: Batch process product photos for clean, professional listings
- **Marketing Teams**: Remove backgrounds from brand assets and imagery
- **Photographers**: Automate background removal for portrait sessions
- **Content Creators**: Prepare images for presentations and social media
- **Design Agencies**: Streamline asset preparation workflows

## 📚 Resources

- APImage API Documentation 🡥
- Airtable API Reference 🡥
- n8n Documentation 🡥

⚡ Processing speed: handles multiple images in parallel for fast batch processing
🔒 Secure: API keys stored safely in n8n credentials
🔄 Reliable: built-in error handling and retry mechanisms