by Stephan Koning
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. **Alternatively, you can delete the community node and use the HTTP node instead.**

Most email agent templates are fundamentally broken. They're stateless—they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system. This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email. It then logs that intelligence into a persistent, long-term memory powered by mem0.

The Problem This Solves
Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

How It Works
This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.

Real-Time Ingest & Prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's a good practice for keeping the data clean and orderly for the AI.

AI Analysis (The Brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it's performing analysis to:
- Extract the core message.
- Determine the sentiment (Positive, Negative, Neutral).
- Identify potential red flags.
- Pull out key topics and keywords.

The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.

Quality Control (The Parser): We don't trust the AI's first draft blindly. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second Parsing LLM (e.g., Mistral) automatically corrects it. This is our "twist" that guarantees your data is always perfectly structured and reliable. (See the illustrative schema at the end of this section.)

Create a Permanent Client Record: This is the most critical step. The clean, structured data is sent to mem0. The analysis is now logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset.

Optional Use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

Setup Requirements
To deploy this system, you'll need the following:
- An active n8n instance.
- **Gmail** API credentials.
- An API key for your primary LLM (e.g., OpenAI).
- An API key for your parsing LLM (e.g., Mistral AI).
- An account with mem0.ai for the memory layer.
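To make the Quality Control step concrete, here is an illustrative sketch of the kind of structured object the Auto-fixing Output Parser could enforce before anything is written to mem0. All field names and values below are hypothetical, not the template's actual schema:

```javascript
// Hypothetical example of the parsed analysis for one email.
// The parser guarantees the LLM output conforms to a shape like this.
return [{
  json: {
    sender: "client@example.com",
    subject: "Project timeline question",
    coreMessage: "Client asks whether the Q3 deliverable can move up two weeks.",
    sentiment: "Neutral",          // Positive | Negative | Neutral
    redFlags: ["tight deadline pressure"],
    topics: ["project timeline", "Q3 deliverable"],
  },
}];
```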
by Vitorio Magalhães
Auto-publish NASA APOD to LinkedIn with AI translation and hashtags

Transform NASA's daily astronomical wonders into engaging LinkedIn content automatically. This workflow fetches NASA's Astronomy Picture of the Day, translates it to Brazilian Portuguese using AI, generates strategic hashtags, and publishes everything to your LinkedIn profile with the stunning space image attached.

Who's it for
Content creators, astronomy enthusiasts, science communicators, and anyone wanting to share high-quality educational content consistently on LinkedIn. Perfect for Portuguese-speaking professionals who want to engage their network with fascinating space discoveries while building their personal brand as a science advocate.

How it works
The workflow runs on a daily schedule and handles the complete content pipeline automatically. It fetches the latest NASA APOD through the official API, including both the image and detailed explanation. The English description gets professionally translated to the selected language using Google Gemini 2.5 Flash, while maintaining scientific accuracy and terminology. Smart hashtag generation combines fixed branding tags with content-specific ones, mixing Portuguese and English for maximum reach. The final post includes the NASA image, translated description, and strategic hashtags, then gets published to your LinkedIn profile automatically. (A sketch of the APOD fetch follows this section.)

How to set up
You'll need accounts for Google AI Studio (free), LinkedIn Developer (free), and a Telegram bot for notifications. The setup takes about 15 minutes and uses only free services and APIs. First, create your Google AI Studio account and get an API key for the AI translation services. Then set up a LinkedIn OAuth2 application to enable posting permissions. Create a Telegram bot through BotFather and get your chat ID for notifications. Configure the Settings node with your Telegram chat ID and preferred language. The workflow comes with all prompts and configurations ready to use. Test each component individually before activating the daily automation.

Requirements
- LinkedIn account with posting permissions
- Google AI Studio API key (free tier available)
- Telegram bot token and your chat ID
- Basic understanding of OAuth2 setup for LinkedIn
- NASA API key (optional - demo key included)

All services used have generous free tiers, making this workflow completely free to operate indefinitely.

How to customize the workflow
The centralized Settings node makes customization simple. Change the target language from Brazilian Portuguese to any other language by updating the translate_to_language variable. Modify the posting schedule in the CRON trigger to match your preferred timing. Customize the post template in the "Create Final Post Text" node to match your personal brand voice. Adjust the hashtag strategy by editing the AI prompt in the "Generate Hashtags" node. Add additional social platforms by duplicating the LinkedIn publisher with different credentials. The AI prompts can be fine-tuned for different writing styles or specific astronomical topics. You can also extend the workflow to include additional content processing, image enhancements, or cross-posting to multiple platforms while maintaining the core NASA APOD automation.
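For reference, a minimal sketch of the APOD fetch in an n8n Code node. The endpoint and DEMO_KEY are NASA's documented public API, but treat the exact response fields used here as assumptions to verify against the docs, and note that fetch availability depends on your n8n version's Code-node sandbox:

```javascript
// Minimal sketch: fetch today's Astronomy Picture of the Day.
// DEMO_KEY is NASA's rate-limited public key; swap in your own.
const res = await fetch('https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY');
const apod = await res.json();

// title/explanation/url are documented APOD fields;
// hdurl is only present when the day's entry is an image, not a video.
return [{
  json: {
    title: apod.title,
    explanation: apod.explanation, // English text sent to the translator
    imageUrl: apod.hdurl ?? apod.url,
    date: apod.date,
  },
}];
```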
by Mary Newhauser
RAG over a PDF with Weaviate

This workflow allows you to upload a PDF file and ask questions about it using the Question and Answer Chain and the Weaviate Vector Store nodes.

Who it's for
This workflow is the simplest possible implementation of RAG with Weaviate in n8n. It's intended to act as an extendable template for RAG over your own documents.

Prerequisites
- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use OpenAI, but feel free to switch out the models as you like.
- Self-hosted n8n instance. See this video for how to get set up in just three minutes.

How it works
Part 1: Manually upload data. In this example, we manually upload a 100+ page article from arXiv called "A Survey of Large Language Models". But you can replace this with your own more advanced data pipeline, if you wish.
Part 2: Embed and load data into a Weaviate collection. Here, we generate embeddings for the full text of the article and store them in Weaviate.
Part 3: Perform RAG over the PDF file with Weaviate. In this part of the workflow, you can enter your query by running the Chat Node and get a RAG response grounded in context via the Question and Answer Chain node. (A sketch of the underlying retrieval query follows this section.)

How to run the workflow
1. Go through the prerequisites, creating a Weaviate cluster (can be local or cloud), downloading self-hosted n8n, and adding your API keys and other credentials.
2. Select the embedding and chat models you'd like to use.
3. Upload a PDF file you want to ask questions about.
4. Execute the rest of the workflow.
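To make the retrieval step concrete, here is a minimal sketch of the kind of query the Weaviate Vector Store node issues under the hood, written against Weaviate's REST GraphQL endpoint. The collection name `Document` and the `text` property are assumptions for illustration, and nearText requires a vectorizer module (e.g. text2vec-openai) enabled on the collection:

```javascript
// Minimal sketch: semantic search against a local Weaviate cluster.
const query = `{
  Get {
    Document(
      nearText: { concepts: ["What are emergent abilities of LLMs?"] }
      limit: 3
    ) {
      text
    }
  }
}`;

const res = await fetch('http://localhost:8080/v1/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query }),
});

const { data } = await res.json();
// The top-matching chunks become the grounding context for the chat model.
return data.Get.Document.map((chunk) => ({ json: chunk }));
```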
by Calistus Christian
What this workflow does
Automatically triages risky AWS misconfigurations and alerts your team.

Pipeline: Security Hub or AWS Config -> EventBridge rules -> SNS (HTTP) -> n8n Webhook -> Normalize -> AI Prioritizer -> Airtable (log) -> Gmail (email)

- Normalizes incoming findings (S3 / Security Groups / IAM / RDS) into a consistent JSON.
- Uses an LLM to assign a priority (P0–P3) with rationale and remediation steps.
- Upserts the finding into Airtable (avoids duplicates).
- Emails a compact incident summary to your inbox. This can be swapped for Microsoft Teams or Slack, etc.

Category: Security / Cloud / Alerting
Time to set up: ~10–15 minutes
Difficulty: Beginner–Intermediate
Cost: Mostly free (n8n CE + AWS SNS/EventBridge; OpenAI + Airtable/Gmail as used)

What you’ll need
- An n8n instance reachable over HTTP.
- AWS account (one region) with permissions to create SNS topics and EventBridge rules.
- **Security Hub** enabled (or AWS Config rules that emit compliance events).
- n8n credentials: OpenAI, Airtable, Gmail.

Nodes used
- **Webhook** (POST /aws-misconfig)
- **Code:** SNS Handler (token check, confirm/unwrap)
- **IF:** route mode === "confirm" vs notification
- **HTTP Request:** SNS SubscriptionConfirmation (GET)
- **Code:** Normalize Finding
- **Message a model:** AI Prioritizer (JSON out)
- **Airtable:** Create/Upsert
- **Gmail:** Send message
- **Edit Fields:** final JSON response

Setup steps
1. Import and activate the workflow in n8n.
2. Webhook Respond: When Last Node Finishes -> First Entry JSON. Append a shared secret to the URL, e.g. ?token=MY_SUPER_TOKEN, and keep the check in the SNS Handler code node (a sketch of that handler follows these steps).
3. Create an SNS topic (e.g., misconfig-events) in the same region as your EventBridge rules.
4. Create EventBridge rules targeting the SNS topic:
   - Rule A (Security Hub): source = aws.securityhub, detail-type = Security Hub Findings - Imported
   - Rule B (AWS Config): source = aws.config, detail-type = Config Rules Compliance Change
5. Create an SNS subscription with Protocol = HTTP and Endpoint = your production webhook URL: http://YOUR_HOST:5678/webhook/aws-misconfig?token=MY_SUPER_TOKEN (The workflow auto-confirms the subscription on first POST.)
6. Configure Airtable (Upsert on Finding ID) and Gmail recipients.
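Here is a minimal sketch of what the SNS Handler Code node could look like. The SNS envelope fields (Type, Message, SubscribeURL) follow AWS's documented notification format; the token name and output shape are assumptions to adapt to your setup:

```javascript
// Minimal sketch of the SNS Handler: verify the shared secret,
// then route subscription confirmations vs. notifications.
const req = $input.first().json;

// The webhook URL carries ?token=MY_SUPER_TOKEN; reject anything else.
if (req.query?.token !== 'MY_SUPER_TOKEN') {
  throw new Error('Invalid token');
}

const sns = typeof req.body === 'string' ? JSON.parse(req.body) : req.body;

if (sns.Type === 'SubscriptionConfirmation') {
  // The IF node routes mode === "confirm" to an HTTP Request node
  // that GETs the SubscribeURL, confirming the subscription.
  return [{ json: { mode: 'confirm', subscribeUrl: sns.SubscribeURL } }];
}

// Normal notification: the EventBridge event is JSON inside Message.
return [{ json: { mode: 'notification', event: JSON.parse(sns.Message) } }];
```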
by clancy jack
This n8n workflow recommends Taiwan indie music based on a user's city, mood, birthday, today's weather, and star sign. Here's a concise overview:

Trigger: Starts manually with the "When clicking ‘Test workflow’" node.

Input Setup: The "infomation" node sets user inputs (e.g., city: Taipei, mood: Happy, birthday: 1996/11/21).

Song Recommendation: The "get song recommendation" node uses OpenAI's GPT-4o-mini to:
- Fetch today's weather for the specified city.
- Determine the user's zodiac sign from their birthday.
- Check the zodiac sign's daily fortune.
- Recommend a Taiwan indie song considering weather and fortune.
- Explain the song choice and highlight its features.
- Return results in JSON format.

Data Extraction: The "Information Extractor" node parses the JSON output, extracting fields like date, city, weather, zodiac sign, fortune, song, artist, and additional info (see the illustrative output shape below).

Spotify Search: The "Spotify" node searches for the recommended song using the artist and song name, retrieving a Spotify URL.

Final Output: The "Final Output" node compiles all data, including the Spotify link, into a structured format.

Additional Note: A "Sticky Note" provides context about the workflow's purpose and credits the creator, n8nguide.

This workflow integrates AI, weather data, astrology, and Spotify to deliver personalized Taiwan indie music recommendations.
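For illustration, one compiled result could look roughly like this. Every field name and value below is hypothetical, shown only to convey the shape the Information Extractor and Final Output nodes produce:

```javascript
// Hypothetical example of one final item (not real output).
return [{
  json: {
    date: '2025-01-15',
    city: 'Taipei',
    weather: 'Light rain, 18°C',
    zodiacSign: 'Scorpio',
    fortune: 'A good day for quiet reflection.',
    song: 'Example Song Title',
    artist: 'Example Artist',
    additionalInfo: 'Dream-pop track that suits a rainy evening.',
    spotifyUrl: 'https://open.spotify.com/track/...',
  },
}];
```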
by Jimleuk
This n8n template demonstrates how to use OpenAI's Responses API with existing LLM and AI Agent nodes.

Though I would recommend just waiting for official support, if you're impatient and would like a round-about way to integrate OpenAI's Responses API into your existing AI workflows, then this template is sure to satisfy! This approach implements a simple API wrapper for the Responses API using n8n's built-in webhooks. When the base URL is pointed to these webhooks using a custom OpenAI credential, it's possible to intercept the request and remap it for compatibility.

How it works
- An OpenAI subnode is attached to our agent but has a special custom credential where the base_url is changed to point at this template's webhooks.
- When executing a query, the agent's request is forwarded to our mini chat completion workflow.
- Here, we take the default request and remap the values to use with an HTTP node which is set to query the Responses API. (A sketch of this remapping follows this section.)
- Once a response is received, we remap the output for Langchain compatibility. This just means the LLM or Agent node can parse it and respond to the user.
- There are two response formats, one for streaming and one for non-streaming responses.

How to use
- You must activate this workflow to be able to use the webhooks.
- Create the custom OpenAI credential as instructed.
- Go to your existing AI workflows and replace the LLM node with the custom OpenAI credential. You do not need to copy anything else over to the existing template.

Requirements
- OpenAI account for Responses API

Customising this workflow
Feel free to experiment with other LLMs using this same technique! Keep up to date with the Responses API announcements and make modifications as required.
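As a sketch of the remapping idea, a Code node could translate between the two formats roughly like this. The Chat Completions and Responses API field names follow OpenAI's published docs, but verify them against the current API reference; the node name in the second half is hypothetical:

```javascript
// Incoming: a Chat Completions-style request intercepted by the webhook.
const body = $input.first().json.body;

// Remap to a Responses API request: "messages" becomes "input".
const responsesRequest = {
  model: body.model,
  input: body.messages.map((m) => ({ role: m.role, content: m.content })),
};
// An HTTP node then POSTs responsesRequest to /v1/responses.

// Afterwards, remap the Responses API result back into a Chat
// Completions shape so the LangChain LLM node can parse it.
const r = $('Call Responses API').first().json; // hypothetical node name
return [{
  json: {
    id: r.id,
    object: 'chat.completion',
    model: r.model,
    choices: [{
      index: 0,
      // Per the documented response shape, the assistant text lives
      // in output[].content[].text; adjust if your payload differs.
      message: { role: 'assistant', content: r.output?.[0]?.content?.[0]?.text ?? '' },
      finish_reason: 'stop',
    }],
    // Note: usage key names differ between the two APIs
    // (input_tokens/output_tokens vs prompt_tokens/completion_tokens).
    usage: r.usage,
  },
}];
```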
by Rajeet Nair
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

Description
This workflow automatically collects daily trending topics from Twitter and YouTube, filters them for relevance, and uses an AI model (such as Mistral Cloud or another OpenAI-compatible API) to generate engaging social media hashtags. The final results, including source platform and date, are saved into a connected Google Sheet for easy access, tracking, or team collaboration.

Ideal for content creators, marketers, and social media managers, this automation eliminates the manual effort of trend research and hashtag writing by combining real-time scraping with LLM-powered generation. The result is a scalable, daily strategy tool to stay aligned with what’s trending across major platforms.

How It Works
1. Daily Trigger: Starts the workflow automatically on a daily schedule.
2. Trend Scraping: Scrapes current trending content from Twitter and YouTube using the Crawl and Scrape community node.
3. Filtering & Slicing: Removes irrelevant or duplicate entries and limits each platform’s list to top-performing trends (see the sketch after this section).
4. Merge Trends: Combines Twitter and YouTube trends into a single dataset.
5. AI Hashtag Generation: Sends each trend topic to an AI model to generate relevant hashtags.
6. Output to Google Sheets: Loops through AI results and writes them to a Google Sheet, including trend, platform, hashtags, and timestamp.

Setup Instructions
Estimated time: 10–15 minutes

Prerequisites
- A self-hosted instance of n8n (required for community nodes)
- API key for Mistral Cloud or any OpenAI-compatible LLM
- Google Sheets account connected via OAuth2 credentials
- Twitter and YouTube trend URLs (or scraping logic for target regions)

Example: Crawl and Scrape Node for Twitter Trends
You can use the following configuration in the Crawl and Scrape node to extract Twitter trends from Trends24:

```json
{
  "parameters": {
    "url": "https://trends24.in/",
    "selectors": [
      {
        "label": "Twitter Trends",
        "selector": ".trend-card__list li a",
        "type": "text"
      }
    ]
  },
  "name": "Scrape Twitter Trends",
  "type": "n8n-nodes-crawl-and-scrape.crawlAndScrape",
  "typeVersion": 1,
  "position": [300, 200]
}
```

Google Sheet Column Format
Column A: Generated Hashtags
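The Filtering & Slicing step could be implemented in a Code node along these lines. A minimal sketch, assuming the scraper outputs one item per trend with a `text` field (the field name is an assumption):

```javascript
// Minimal sketch: dedupe scraped trends and keep the top N per platform.
const TOP_N = 10;
const seen = new Set();
const filtered = [];

for (const item of $input.all()) {
  const trend = (item.json.text ?? '').trim(); // field name is an assumption
  const key = trend.toLowerCase();
  if (!trend || seen.has(key)) continue; // drop empties and duplicates
  seen.add(key);
  filtered.push({
    json: {
      trend,
      platform: 'Twitter',
      date: new Date().toISOString().slice(0, 10),
    },
  });
  if (filtered.length >= TOP_N) break;
}

return filtered;
```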
by Agus Narestha
🔒 SSL Certificate Monitoring & Expiry Alert with Spreadsheet [FREE APIs]

✅ What This Workflow Does
This n8n template automatically monitors SSL certificates of websites listed in a Google Sheet and sends email alerts if any are expiring within 14 days. It helps ensure you avoid downtime, security issues, and trust warnings due to expired certificates.

🧩 Key Features
- 📅 Weekly Automation: Runs every Monday at 7:00 AM (configurable).
- 📄 Google Sheets Integration: Fetches and updates data in a spreadsheet.
- 🔍 SSL Check via API: Uses ssl-checker.io to get certificate details.
- ⚠️ SSL Expiry Filter: Identifies certificates expiring within 14 days.
- 📧 Email Alerts: Sends notifications for certificates close to expiration.

📂 Input Spreadsheet Format
Your Google Sheet should have the following columns:

| No | Name         | Link                | SSL Issued On | SSL Expired On | SSL Status |
|----|--------------|---------------------|---------------|----------------|------------|
| 1  | Example Site | https://example.com | 2024-07-01    | 2025-07-01     | Valid      |
| 2  | My Blog      | https://myblog.org  | 2024-07-05    | 2024-07-20     | Expiring   |

Each row should include a valid website URL in the Link column.

🛠️ How It Works
1. Scheduled Trigger: Executes weekly (Monday 7:00 AM).
2. Fetch Website List: Reads all website entries from the Google Sheet.
3. Check SSL Certificates: Uses the ssl-checker.io API to retrieve certificate details for each website.
4. Update Spreadsheet: Writes "Issued On" and "Expired On" fields back to the spreadsheet.
5. Evaluate SSL Expiry: Filters for certificates expiring within 14 days (a sketch of this filter follows this section).
6. Check Condition: Determines whether to send alerts based on filtered results.
7. Send Email Alert: Notifies via email if any certificates are expiring soon.

📬 Example Email Output
Subject: ⚠️ ALERT!! SSL EXPIRED

SSL certificates expiring soon:
- example.com (expires in 5 days)
- anotherdomain.net (expires in 3 days)

🧰 Setup Requirements
- A Google Sheet with the correct columns and website links.
- SMTP credentials to send alert emails.
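The expiry filter boils down to simple date arithmetic. A minimal Code-node sketch, assuming each item carries the sheet's "SSL Expired On" column as an ISO date string:

```javascript
// Minimal sketch: keep only certificates expiring within 14 days.
const THRESHOLD_DAYS = 14;
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const now = Date.now();

const expiring = $input.all().filter((item) => {
  const expiresOn = new Date(item.json['SSL Expired On']).getTime();
  const daysLeft = Math.ceil((expiresOn - now) / MS_PER_DAY);
  item.json.daysLeft = daysLeft; // used in the alert email body
  return daysLeft <= THRESHOLD_DAYS;
});

return expiring;
```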
by Robert Breen
This n8n workflow finds experts on any topic, scrapes their websites, and pulls out contact emails automatically.

Core services used: SerpAPI (Google search) · Apify (website crawler) · OpenAI (GPT-4o email extraction).

🛠️ Step-by-Step Setup & Execution

1️⃣ Run Workflow (Manual Trigger)

| Node | Type | Purpose |
|------|------|---------|
| Run Workflow | Manual Trigger | Start the workflow on demand while you test. |

2️⃣ Set Your Topic

| Node | Type | How to configure |
|------|------|------------------|
| Set Topic | Set | Add a string field Topic – e.g. "n8n". This keyword drives every subsequent step. |

3️⃣ Search Google (Results 1-10)

| Node | Type | API Credential |
|------|------|----------------|
| Search Google (top 10) | SerpAPI | Create SerpAPI credential: 1. Sign up → copy API key → n8n → Credentials → New → SerpAPI → paste. 2. Select the credential in this node. |

Key Params:
- q: ={{ $json.Topic }} Expert
- location: Region code (ex 585069efee19ad271e9c9b36)
- additionalFields.start: "10" (Google position 1-10)

4️⃣ Search Google (Results 11-20)

| Node | Type | Notes |
|------|------|-------|
| Search Google (11-20) | SerpAPI (same credential) | Remove start or set to 20+ to fetch the next page. |

5️⃣ Extract URL Lists

| Node | Type | Script Purpose |
|------|------|----------------|
| Extract Url & Extract Url 2 | Code | Loop data.organic_results → output { title, link, displayed_link } for each result (see the sketch at the end of this section). |

6️⃣ Combine Both Result Sets

| Node | Type | Details |
|------|------|---------|
| Append Results | Merge (combineAll) | Merges arrays from steps 3 & 4 into a single list for processing. |

7️⃣ Loop Over Every URL

| Node | Type | Configuration |
|------|------|---------------|
| Loop Over Items1 | Split In Batches | Default batch = 1 (process one page at a time). onError = continueRegularOutput keeps the loop alive on failures. |

8️⃣ Scrape Webpage Content (Apify)

| Node | Type | API Credential |
|------|------|----------------|
| Scrape URL with apify | HTTP Request | Create Apify credential: 1. Sign up at https://console.apify.com 2. Account → API tokens → copy. 3. n8n → Credentials → New → HTTP Query Auth → set query param token=YOUR_TOKEN. |

Request Details:
- Method: POST
- URL: https://api.apify.com/v2/acts/6sigmag~fast-website-content-crawler/run-sync-get-dataset-items
- JSON Body:

9️⃣ Extract Email with OpenAI

| Node | Type | API Credential |
|------|------|----------------|
| Extract Email from webpage | LangChain Agent | Create OpenAI credential: 1. Generate key at https://platform.openai.com/account/api-keys 2. n8n → Credentials → New → OpenAI API → paste key. |

- Prompt (system):
- Output Parser: Structured Output Parser2 expects → { "email": "address OR null" }

🔟 Loop Continues & Final Data
The extracted result returns to Loop Over Items1 until every URL is processed.

Typical final item **JSON**:

```json
{
  "title": "How to Build n8n Workflows",
  "link": "https://example.com",
  "email": "info@example.com"
}
```

💡 Optional Enhancements

| Idea | How |
|------|-----|
| Save Leads | Add a Google Sheets or Airtable node after the loop. |
| Validate Emails | Chain a ZeroBounce / Hunter.io verification API before saving. |
| Parallel Crawling | Increase SplitInBatches size (watch Apify rate limits). |

🙋‍♂️ Need More Help?
Robert Breen – Automation Consultant & n8n Expert
📧 robert.j.breen@gmail.com
🔗 https://www.linkedin.com/in/robert-breen-29429625/
🌐 https://ynteractive.com
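Here is a minimal sketch of what the Extract Url Code node could contain. The organic_results field names follow SerpAPI's documented response format, but treat the exact paths as assumptions to verify against your node's output:

```javascript
// Minimal sketch: flatten SerpAPI results into one item per URL.
const results = [];

for (const item of $input.all()) {
  // SerpAPI returns matched pages under organic_results.
  for (const r of item.json.organic_results ?? []) {
    results.push({
      json: {
        title: r.title,
        link: r.link,
        displayed_link: r.displayed_link,
      },
    });
  }
}

return results;
```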
by Rully Saputra
Who’s it for
This workflow is ideal for marketing teams, growth analysts, and business owners who need regular Google Analytics insights without manually digging through data. It’s also perfect for organizations that want to ensure positive performance updates reach stakeholders quickly while negative trends get immediate attention from the internal team.

How it works / What it does
The workflow runs weekly on a set schedule, pulls key performance metrics from Google Analytics, and aggregates the data into a clean summary. An AI Agent (powered by Google Gemini and connected to Simple Memory for historical context) analyzes the data, generates actionable insights, and classifies the sentiment as Positive, Negative, or Neutral.

- Positive sentiment → Automatically emailed to stakeholders via Gmail.
- Negative sentiment → Sent instantly to a designated Telegram group for faster response.

This ensures wins are celebrated, and issues are addressed promptly. (A sketch of the sentiment routing follows this section.)

How to set up
1. Configure the Schedule Trigger for your preferred reporting day/time.
2. Connect the Google Analytics node with your property ID and metrics/dimensions.
3. Set up the AI Agent with Google Gemini (or another model's) API credentials.
4. Connect Gmail and Telegram accounts to their respective nodes.
5. Adjust sentiment routing rules.

Requirements
- Google Analytics account with API access
- Google Gemini API key
- Gmail account with OAuth connection
- Telegram bot token and group chat ID

How to customize the workflow
- Modify the AI prompt to include custom KPIs or industry-specific recommendations.
- Change the schedule frequency (daily, monthly, or on-demand).
- Add Neutral sentiment handling (e.g., log to Google Sheets).
- Extend with Slack, Discord, or other notification channels.
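The routing decision can live in a small Code node feeding a Switch. A minimal sketch, assuming the agent's structured output includes a `sentiment` field (the field names here are assumptions):

```javascript
// Minimal sketch: tag the AI summary with a route for a Switch node.
const { sentiment, insights } = $input.first().json;

// A downstream Switch node keys on `route`:
// 0 → Gmail (stakeholders), 1 → Telegram (internal team),
// 2 → optional Neutral handling (e.g., a Google Sheets log).
const routes = { Positive: 0, Negative: 1, Neutral: 2 };
const route = routes[sentiment] ?? 2; // treat unknown labels as Neutral

return [{ json: { route, sentiment, insights } }];
```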
by Yaron Been
Create your own intelligent Telegram bot that summarizes articles and processes commands automatically. This powerful workflow turns Telegram into your personal AI assistant, handling /help, /summary <URL>, and /img <prompt> commands with intelligent responses - perfect for teams, content creators, and anyone wanting smart automation in their messaging.

🚀 What It Does
- Smart Command Processing: Automatically recognizes and routes /help, /summary, and /img commands to appropriate AI-powered responses (a sketch of the command parsing follows this section).
- Article Summarization: Fetches any URL, extracts content, and generates professional 10-12 bullet point summaries using OpenAI.
- Image Generation: Processes image prompts and integrates with AI image generation services.
- Help System: Provides users with clear command instructions and usage examples.

🎯 Key Benefits
✅ Personal AI Assistant: Get instant article summaries in Telegram
✅ Team Productivity: Share quick content summaries with colleagues
✅ Content Research: Rapidly digest articles and web content
✅ 24/7 Availability: Bot works around the clock without maintenance
✅ Easy Commands: Simple /summary <link> format anyone can use
✅ Scalable: Handles multiple users and requests simultaneously

🏢 Perfect For

Content Teams & Researchers
- Journalists quickly summarizing news articles
- Marketing teams researching competitor content
- Students processing academic papers and articles
- Analysts digesting industry reports

Business Applications
- Team Communication: Share article insights in group chats
- Research Assistance: Quick content analysis for decision making
- Content Curation: Summarize articles for newsletters or reports
- Knowledge Sharing: Help teams stay informed efficiently

⚙️ What's Included
- Complete Bot Workflow: Ready-to-deploy Telegram bot with all commands
- AI Integration: OpenAI-powered content summarization and processing
- Smart Routing: Intelligent command recognition and response system
- Error Handling: Robust system handles invalid commands gracefully
- Extensible Design: Easy to add new commands and features

🔧 Quick Setup Requirements
- n8n Platform: Cloud or self-hosted instance
- Telegram Bot Token: Create via @BotFather (free, 5 minutes)
- OpenAI API: For content summarization (pay-per-use)
- Basic Configuration: Follow 15-minute setup guide

📱 User Experience

Simple Commands:
- /help - Show available commands
- /summary https://example.com - Get article summary
- /img sunset over mountains - Generate image (with supported APIs)

Sample Summary Output:
📄 Article Summary:
• Company reports 40% revenue growth in Q3 2024
• New AI features driving customer acquisition
• Expansion into European markets planned for 2025
• Investment in R&D increased by 25% this quarter
• Customer satisfaction scores improved to 94%
• Three new product launches scheduled for next year
• Remote work policy made permanent post-pandemic
• Sustainability goals on track to meet 2025 targets
• Partnership with major tech firm announced
• Stock price up 15% following earnings report

🎨 Customization Options
- Command Extensions: Add custom commands for specific workflows
- Response Formatting: Customize summary style and length
- Multi-Language: Support different languages for international teams
- Integration APIs: Connect additional AI services and tools
- User Permissions: Control who can use specific commands
- Analytics: Track usage patterns and popular content

🏷️ Tags & Categories
#telegram-bot #ai-automation #content-summarization #article-processing #team-productivity #openai-integration #smart-assistant #workflow-automation #messaging-bot #content-research #ai-agent #n8n-workflow #business-automation #telegram-integration #ai-powered-bot

💡 Use Case Examples
- News Team: Quickly summarize breaking news articles for editorial meetings
- Marketing Agency: Research competitor content and industry trends efficiently
- Sales Team: Digest industry reports and share insights with prospects
- Remote Team: Keep everyone informed with summarized company updates

📈 Expected Results
- 80% faster content research and analysis
- 50% more articles processed per day vs manual reading
- 100% team accessibility through familiar Telegram interface
- 24/7 availability for global teams across time zones

🛠️ Setup & Support
- Quick Start: Deploy your bot in 15 minutes with included guide
- Video Tutorial: Complete walkthrough available
- Template Commands: Pre-built responses and formatting
- Expert Support: Direct help from workflow creator

📞 Get Help & Resources
- YouTube: https://www.youtube.com/@YaronBeen/videos
- 💼 Professional Support (LinkedIn): https://www.linkedin.com/in/yaronbeen/
- 📧 Direct Help (Email): Yaron@nofluff.online - Response within 24 hours

Ready to build your intelligent Telegram assistant? Get this AI Telegram Bot Agent and transform your messaging app into a powerful content processing tool. Perfect for teams, researchers, and anyone who wants AI-powered assistance directly in Telegram. Stop manually reading long articles. Start getting instant, intelligent summaries with simple commands.
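A command router like this is typically a small Code node followed by a Switch. Here is a minimal sketch of the parsing step; the Telegram `message.text` path is from Telegram's documented update format, while the output field names are assumptions:

```javascript
// Minimal sketch: parse "/command argument" from a Telegram update.
const text = ($input.first().json.message?.text ?? '').trim();

// Split "/summary https://example.com" into command + argument.
const match = text.match(/^\/(\w+)(?:\s+([\s\S]+))?$/);

if (!match) {
  return [{ json: { command: 'unknown', arg: null } }];
}

const [, command, arg] = match;
// A downstream Switch node routes on `command`:
// "help" → help text, "summary" → fetch + summarize, "img" → image API.
return [{ json: { command, arg: arg ?? null } }];
```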
by Solomon
This n8n template demonstrates how to obtain token usage from AI Agents and place the data into a spreadsheet that calculates the estimated cost of the execution.

Obtaining the token usage from AI Agents is tricky, because the agent doesn't provide all the data from tool calls. This workflow taps into the workflow execution metadata to extract token usage information. Works well with OpenAI, Google and Anthropic. Other LLM providers might need small tweaks.

How it works
- The AI Agent executes and then calls a subworkflow to calculate the token usage (a sketch of that extraction follows this section).
- The data is stored in Google Sheets.
- The spreadsheet has formulas to calculate the estimated cost of the execution.

How to use
- The AI Agent is used as an example. Feel free to replace this with other agents you have.
- Call the subworkflow AFTER all the other branches have finished executing.

Requirements
- LLM account (OpenAI, Gemini...) for API usage.
- Google Drive and Sheets credentials
- n8n API key of your instance
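A sketch of the extraction idea: fetch the parent execution through n8n's public API and sum the tokenUsage entries that LLM nodes record in their run data. The endpoint and API-key header are n8n's documented REST API; the run-data paths below are assumptions that vary by n8n version and provider, so inspect a real execution's JSON to confirm them:

```javascript
// Minimal sketch: sum token usage recorded in an execution's run data.
// Assumes N8N_URL and N8N_API_KEY environment variables are set.
const executionId = $json.executionId; // passed in from the parent workflow

const res = await fetch(
  `${$env.N8N_URL}/api/v1/executions/${executionId}?includeData=true`,
  { headers: { 'X-N8N-API-KEY': $env.N8N_API_KEY } },
);
const execution = await res.json();

let promptTokens = 0;
let completionTokens = 0;

// Walk every node's run data looking for tokenUsage entries
// (LLM subnodes record them under the ai_languageModel output).
const runData = execution.data?.resultData?.runData ?? {};
for (const runs of Object.values(runData)) {
  for (const run of runs) {
    const usage = run.data?.ai_languageModel?.[0]?.[0]?.json?.tokenUsage;
    if (usage) {
      promptTokens += usage.promptTokens ?? 0;
      completionTokens += usage.completionTokens ?? 0;
    }
  }
}

return [{
  json: { promptTokens, completionTokens, total: promptTokens + completionTokens },
}];
```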