by David Roberts
OpenAI Assistant is a powerful tool, but at the time of writing it doesn't automatically remember past messages from a conversation. This workflow demonstrates how to work around that limitation by managing the chat history in n8n and passing it to the assistant when required, which makes it possible to use OpenAI Assistant for chatbot use cases. Note that this template requires n8n version 1.28.0 or later.
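As a rough illustration of the history bookkeeping this template performs, a Code node could maintain a rolling message list in workflow static data (which only persists when the workflow is active, not during editor test runs). This is a minimal sketch, not the template's actual implementation; the `chatInput` field and the 20-message cap are assumptions.

```javascript
// Sketch: persist a rolling chat history in workflow static data.
// Assumes the incoming item carries a `chatInput` field; the
// 20-message cap and field names are illustrative only.
const staticData = $getWorkflowStaticData('global');
staticData.history = staticData.history || [];

// Append the latest user message.
staticData.history.push({ role: 'user', content: $json.chatInput });

// Keep only the most recent messages to bound token usage.
staticData.history = staticData.history.slice(-20);

// Pass the history along so a later node can hand it to the assistant.
return [{ json: { messages: staticData.history } }];
```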
by Robert Breen
This guide walks you through building an intelligent AI Agent in n8n that routes tasks to the appropriate sub-agent using the new @n8n/n8n-nodes-langchain agent framework. You'll create a Manager Agent that evaluates user input and delegates it to either an Email Agent or a Data Agent, each with its own role, memory, and OpenAI model. This is perfect for use cases where you want a single entry point but intelligent branching behind the scenes.

🔧 Step 1: Set Up the Manager Agent
Start by dragging in an Agent node and name it something like ManagerAgent. This agent acts as the "brain" of your system, analyzing the user's input and determining whether it should be handled by the email-writing sub-agent or the data-summary sub-agent. Open the node's settings and paste the following into the System Message:

You are an AI Manager that delegates tasks to specialized agents. Your job is to analyze the user's message and decide whether it requires: An EmailAgent for writing outreach, follow-up, or templated emails, or A DataAgent for tasks involving data summaries, metrics, or analysis. Send the instructions to the sub-agents.

This instruction gives the Manager Agent clarity on what roles exist and which types of tasks belong to each one.

🧠 Step 2: Add Memory to the Manager Agent
Drag in a Memory (Buffer Window) node and label it Manager Memory. Connect it to the ai_memory input of the Manager Agent. This ensures the agent remembers recent inputs and outputs from the user and agents during the conversation. No extra configuration is needed in this memory node—just connect it to the agent.

🔌 Step 3: Connect a Language Model to the Manager Agent
Next, add a Language Model node and choose OpenAI Chat Model. Select a model like gpt-4o-mini or gpt-4, depending on what you have access to. Under Credentials, connect your OpenAI API key. If you haven't created this credential yet: click "OpenAI API" under Credentials, choose "Create New", paste your OpenAI API key (found at https://platform.openai.com/account/api-keys), save it, and return to the workflow. Once the model is set, connect it to the ai_languageModel input of the Manager Agent.

✉️ Step 4: Create the Email Agent Tool
Now you'll create a specialized sub-agent that only writes emails. Add an Agent Tool node and call it EmailAgent. In the tool's settings, describe its job clearly. For example: Writes professional, friendly, or action-oriented emails based on instructions. Then scroll down to the System Message section and enter the following:

You are a professional Email Writing Assistant. You write polished, effective emails for tasks such as outreach, follow-ups, and client communication. Follow the instruction provided exactly and return only the email content. Use a warm, business-appropriate tone.

For the text input field, use the expression: {{ $fromAI('Prompt__User_Message_', ``, 'string') }}. This lets the Email Agent receive exactly what the Manager Agent wants it to handle. Add another Memory node and link it to this tool to help it maintain short-term context. Then add a second Language Model node, configured just like the first one (you can even clone it), and connect it to the EmailAgent. Finally, connect this entire EmailAgent setup back to the ManagerAgent by attaching it to its ai_tool input.

📊 Step 5: Create the Data Agent Tool
Repeat the same steps, but this time for data summaries and analysis. Add another Agent Tool node and name it DataAgent. In the Tool Description, write something like: Responds to instructions requiring metrics, summaries, or data analysis explanations. For its input text field, you can use: {{ $json.query }}. If desired, provide a system message that gives the agent more detailed instructions on how to behave:

You are a helpful Data Analyst. Summarize trends, explain metrics, and break down data clearly based on user instructions.

As with the EmailAgent, you'll also need: a dedicated Memory node, a dedicated Language Model node, and a connection to the ai_tool input of the Manager Agent. Now the Manager Agent has two tools it can delegate to: one for communication and one for insights.

🧪 Step 6: Test Your AI Agent System
Deploy the workflow and start testing by sending prompts like:

> "Write a cold outreach email to a software company."

The ManagerAgent should route that to the EmailAgent. Then try:

> "Summarize how our lead volume changed last month."

The DataAgent should receive that task. If routing isn't working as expected, double-check your system messages and input bindings in each agent tool; a simple sanity check is sketched after this section.

✅ You're Done!
You now have a modular, multi-agent AI system powered by n8n. The Manager Agent delegates intelligently, each sub-agent is optimized for its role, and all of them benefit from context memory. For more advanced setups, you can chain tools, add additional memory types, or use retrieval (RAG) tools for external document support.
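If you want a deterministic baseline to compare the Manager Agent's routing against, a small keyword check in a Code node can serve as that sanity check. This is a sketch and not part of the template; the keyword lists and the `chatInput` field are illustrative assumptions.

```javascript
// Sketch: a deterministic keyword router to sanity-check the
// ManagerAgent's delegation. Keyword lists are illustrative only.
const text = ($json.chatInput || '').toLowerCase();

const emailHints = ['email', 'outreach', 'follow-up', 'subject line'];
const dataHints = ['summarize', 'metrics', 'analysis', 'trend', 'volume'];

const wantsEmail = emailHints.some(k => text.includes(k));
const wantsData = dataHints.some(k => text.includes(k));

return [{
  json: {
    expectedAgent: wantsEmail ? 'EmailAgent' : wantsData ? 'DataAgent' : 'unclear',
    input: text,
  },
}];
```

Run a handful of test prompts through both paths; if the AI router and the keyword baseline disagree often, the system messages probably need tightening.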
by Robert Breen
A step-by-step demo that shows how to pull your Outlook calendar events for the week and ask GPT-4o to write a short summary. Along the way you'll practice basic data-transform nodes (Code, Filter, Aggregate) and see where to attach the required API credentials.

1️⃣ Manual Trigger — Run Workflow
Why: lets you click "Execute" in the n8n editor so you can test each change.

2️⃣ Get Outlook Events — Get many events
- Node type: Microsoft Outlook → Event → Get All
- Fields selected: subject, start
- API setup (inside this node): click Credentials ▸ Microsoft Outlook OAuth2 API. If you haven't connected before: choose "Microsoft Outlook OAuth2 API" → "Create New", sign in and grant the Calendars.Read permission, then save the credential (e.g., "Microsoft Outlook account").
- Output: a list of events with the raw ISO start time.

> Teaching moment: Outlook returns a full dateTime string. We'll normalize it next so it's easy to filter.

3️⃣ Normalize Dates — Convert to Date Format

```javascript
// Code node contents
return $input.all().map(item => {
  const startDateTime = new Date(item.json.start.dateTime);
  const formattedDate = startDateTime.toISOString().split('T')[0]; // YYYY-MM-DD
  return {
    json: {
      ...item.json,
      startDateFormatted: formattedDate
    }
  };
});
```

4️⃣ Filter the Events Down to This Week
After we've normalized the start date-time into a simple YYYY-MM-DD string, we drop in a Filter node. Add one rule for every day you want to keep—for example 2025-08-07 or 2025-08-08. Rows that match any of those dates continue through the workflow; everything else is quietly discarded. Why we're doing this: we only want to summarize the selected days' meetings, not the entire calendar. (A sketch for computing the dates dynamically appears after this section.)

5️⃣ Roll All Subjects Into a Single Item
Next comes an Aggregate node. Tell it to aggregate the subject field and choose the option "Only aggregated fields." The result is one clean item whose subject property is a tidy list of every meeting title. It's far easier (and cheaper) to pass one prompt to GPT than dozens of small ones.

6️⃣ Turn That List Into Plain Text
Insert a small Code node right after the aggregation:

```javascript
return [{
  json: {
    text: items
      .map(item => JSON.stringify(item.json))
      .join('\n')
  }
}];
```

Need a Hand? I'm always happy to chat automation, n8n, or Outlook API quirks.
Robert Breen – Automation Consultant & n8n Instructor
📧 robert@ynteractive.com | LinkedIn
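Hard-coding dates in Step 4's Filter node works for a demo but goes stale quickly. A Code node like the following could compute the current week's dates instead. This is a sketch; the Monday-to-Friday window is an assumption you can adjust, and it reuses the `startDateFormatted` field from Step 3.

```javascript
// Sketch: build this week's date strings (Mon-Fri) so the Filter
// step doesn't need hard-coded values. The weekday window is an
// assumption; adjust to taste. Dates are computed in UTC.
const now = new Date();
const monday = new Date(now);
monday.setDate(now.getDate() - ((now.getDay() + 6) % 7)); // back to Monday

const weekDates = [];
for (let i = 0; i < 5; i++) {
  const d = new Date(monday);
  d.setDate(monday.getDate() + i);
  weekDates.push(d.toISOString().split('T')[0]); // YYYY-MM-DD
}

// Keep only events whose normalized date falls in this week.
return $input.all().filter(item =>
  weekDates.includes(item.json.startDateFormatted)
);
```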
by Hemanth Arety
Generate AEO strategy from brand input using AI competitor analysis

This workflow automatically creates a comprehensive Answer Engine Optimization (AEO) strategy by identifying your top competitors, analyzing their positioning, and generating custom recommendations to help your brand rank in AI-powered search engines like ChatGPT, Perplexity, and Google SGE.

Who it's for
This template is perfect for:
- **Digital marketing agencies** offering AEO services to clients
- **In-house marketers** optimizing content for AI search engines
- **Brand strategists** analyzing competitive positioning
- **Content teams** creating AI-optimized content strategies
- **SEO professionals** expanding into Answer Engine Optimization

What it does
The workflow automates the entire AEO research and strategy process in 6 steps:
1. Collects brand information via a user-friendly web form (brand name, website, niche, product type, email)
2. Identifies top 3 competitors using Google Gemini AI based on product overlap, market position, digital presence, and geographic factors
3. Scrapes the target brand website with Firecrawl to extract value propositions, features, and content themes
4. Scrapes competitor websites in parallel to gather competitive intelligence
5. Generates a comprehensive AEO strategy using OpenAI GPT-4 with 15+ actionable recommendations
6. Delivers a formatted report via email with executive summary, competitive analysis, and implementation roadmap

The entire process runs automatically and takes approximately 5-7 minutes to complete.

How to set up

Requirements
You'll need API credentials for:
- **Google Gemini API** (for competitor analysis) - Get API key
- **OpenAI API** (for strategy generation) - Get API key
- **Firecrawl API** (for web scraping) - Get API key
- **Gmail account** (for email delivery) - Use OAuth2 authentication

Setup Steps
1. Import the workflow into your n8n instance
2. Configure credentials:
   - Add your Google Gemini API key to the "Google Gemini Chat Model" node
   - Add your OpenAI API key to the "OpenAI Chat Model" node
   - Add your Firecrawl API key as HTTP Header Auth credentials
   - Connect your Gmail account using OAuth2
3. Activate the workflow and copy the form webhook URL
4. Test the workflow by submitting a real brand through the form
5. Check your email for the generated AEO strategy report

Credentials Setup Tips
- For Firecrawl: create HTTP Header Auth credentials with header name Authorization and value Bearer YOUR_API_KEY
- For Gmail: use OAuth2 to avoid authentication issues with 2FA
- Test each API credential individually before running the full workflow

How it works

Competitor Identification
The Google Gemini AI agent analyzes your brand based on 4 weighted criteria: product/service overlap (40%), market position (30%), digital presence (20%), and geographic overlap (10%). It returns structured JSON data with competitor names, URLs, overlap percentages, and detailed reasoning.

Web Scraping
Firecrawl extracts structured data from websites using custom schemas. For each site, it captures: company name, products/services, value proposition, target audience, key features, pricing info, and content themes. This runs asynchronously with 60-second waits to allow for complete extraction.
Strategy Generation
OpenAI GPT-4 analyzes the combined brand and competitor data to generate a comprehensive report including: executive summary, competitive analysis, 15+ specific AEO tactics across 4 categories (content optimization, structural improvements, authority building, answer engine targeting), a content priority matrix with 10 ranked topics, and a detailed implementation roadmap.

Email Delivery
The strategy is formatted as a professional HTML email with clear sections, visual hierarchy, and actionable next steps. Recipients get an immediately implementable roadmap for improving their AEO performance.

How to customize the workflow

Change AI Models
- **Replace Google Gemini** with Claude, GPT-4, or another LLM in the competitor analysis node
- **Replace OpenAI** with Anthropic Claude or Google Gemini in the strategy generation node
- Both use LangChain agent nodes, making model swapping straightforward

Modify Competitor Analysis
- **Find more competitors**: edit the AI prompt to request 5 or 10 competitors instead of 3
- **Add filtering criteria**: include factors like company size, funding stage, or geographic focus
- **Change ranking weights**: adjust the 40/30/20/10 weighting in the prompt

Enhance Data Collection
- **Add social media scraping**: include LinkedIn, Twitter/X, or Facebook page analysis
- **Pull review data**: integrate G2, Capterra, or Trustpilot APIs for customer sentiment
- **Include traffic data**: add SimilarWeb or Semrush API calls for competitive metrics

Change Output Format
- **Export to Google Docs**: replace Gmail with a Google Docs node to create shareable documents
- **Send to Slack/Discord**: post strategy summaries to team channels for collaboration
- **Save to a database**: store results in Airtable, PostgreSQL, or MongoDB for tracking
- **Create presentations**: generate PowerPoint slides using automation tools

Add More Features
- **Schedule periodic analysis**: run monthly competitive audits for specific brands
- **A/B test strategies**: generate multiple strategies and compare results over time
- **Multi-language support**: add translation nodes for international brands
- **Custom branding**: modify email templates with your agency's logo and colors

Adjust Scraping Behavior
- **Change the Firecrawl schema**: customize extracted data fields based on industry needs
- **Add timeout handling**: implement retry logic for failed scraping attempts
- **Scrape more pages**: extend beyond the homepage to include blog, pricing, and about pages
- **Use different scrapers**: replace Firecrawl with Apify, Browserless, or custom solutions

Tips for best results
- **Provide clear brand information**: the more specific the product type and niche, the better the competitor identification
- **Ensure websites are accessible**: some sites block scrapers; consider adding user agents or rotating IPs
- **Monitor API costs**: Firecrawl and OpenAI charges can add up; set usage limits
- **Review generated strategies**: AI recommendations should be reviewed and customized for your specific context
- **Iterate on prompts**: fine-tune the AI prompts based on output quality over multiple runs

Common use cases
- **Client onboarding** for marketing agencies - generate initial AEO assessments
- **Content strategy planning** - identify topics and angles competitors are missing
- **Quarterly audits** - track competitive positioning changes over time
- **Product launches** - understand the competitive landscape before entering a market
- **Sales enablement** - equip sales teams with competitive intelligence

Note: This workflow uses community and AI nodes that require external API access. Make sure your n8n instance can make outbound HTTP requests and has the necessary LangChain nodes installed.
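Since the workflow hands Gemini's competitor list straight to the scraping branch, a defensive parse step can save failed runs when the model wraps its JSON in markdown fences or omits a field. A sketch of such a Code node; the `output` field and the `{ name, url }` entry shape are assumptions about how your prompt structures the response.

```javascript
// Sketch: defensively parse the competitor JSON returned by the
// Gemini agent before handing URLs to Firecrawl. The `output` field
// and the { name, url } entry shape are assumptions about the prompt.
const raw = $json.output || '';
const match = raw.match(/\[[\s\S]*\]|\{[\s\S]*\}/); // strip markdown fences etc.
if (!match) {
  throw new Error('No JSON found in AI output; check the competitor prompt');
}

const parsed = JSON.parse(match[0]);
const competitors = Array.isArray(parsed) ? parsed : parsed.competitors || [];

// Keep only entries with a usable URL so the scraper never gets junk.
return competitors
  .filter(c => c.name && typeof c.url === 'string' && c.url.startsWith('http'))
  .map(c => ({ json: c }));
```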
by Marth
Automated AI-Driven Competitor & Market Intelligence System

**Problem Solved:** Small and Medium-sized IT companies often struggle to stay ahead in a rapidly evolving market. Manually tracking competitor moves, pricing changes, product updates, and emerging market trends is time-consuming, inconsistent, and often too slow for agile sales strategies. This leads to missed sales opportunities, ineffective pitches, and a reactive rather than proactive market approach.

**Solution Overview:** This n8n workflow automates the continuous collection and AI-powered analysis of competitor data and market trends. By leveraging web scraping, RSS feeds, and advanced AI models, it transforms raw data into actionable insights for your sales and marketing teams. The system generates structured reports, notifies relevant stakeholders, and stores intelligence in your database, empowering your team with real-time, strategic information.

**For Whom:** This high-value workflow is perfect for:
- IT Solution Providers & SaaS Companies: to maintain a competitive edge and tailor sales pitches based on competitor weaknesses and market opportunities.
- Sales & Marketing Leaders: to gain comprehensive, automated market intelligence without extensive manual research.
- Product Development Teams: to identify market gaps and validate new feature development based on competitive landscapes and customer sentiment.
- Business Strategists: to inform strategic planning with data-driven insights into industry trends and competitive threats.

How It Works (Scope of the Workflow) ⚙️
This system establishes a powerful, automated pipeline for market and competitor intelligence:
1. Scheduled Data Collection: the workflow runs automatically at predefined intervals (e.g., weekly), initiating data retrieval from various online sources.
2. Diverse Information Gathering: it pulls data from competitor websites (pricing, features, blogs via web scraping services), industry news and blogs (via RSS feeds), and potentially other sources.
3. Intelligent Data Preparation: collected data is aggregated, cleaned, and pre-processed using custom code to ensure it's in an optimal format for AI analysis, removing noise and extracting relevant text.
4. AI-Powered Analysis: an advanced AI model (like OpenAI's GPT-4o) performs in-depth analysis on the cleaned data. It identifies competitor strengths, weaknesses, new offerings, pricing changes, customer sentiment from reviews, and emerging market trends, and suggests specific opportunities and threats for your company.
5. Automated Report Generation: the AI's structured insights are automatically populated into a professional Google Docs report using a predefined template, making the intelligence easily digestible for your team.
6. Team Notification: stakeholders (sales leads, marketing managers) receive automated notifications via Slack (or email), alerting them to the new report and key insights.
7. Strategic Data Storage & Utilization: all analyzed insights are stored in a central database (e.g., PostgreSQL). This builds a historical record for long-term trend analysis and can optionally trigger sub-workflows to generate personalized sales talking points directly relevant to ongoing deals or specific prospects.
Setup Steps 🛠️ (Building the Workflow)
To implement this sophisticated workflow in your n8n instance, follow these detailed steps:

1. Prepare Your Digital Assets & Accounts:
- Google Sheet (optional, if using for CRM data): for simpler CRM, create a sheet with CompetitorName, LastAnalyzedDate, Strengths, Weaknesses, Opportunities, Threats, SalesTalkingPoints.
- API Keys & Credentials:
  - OpenAI API Key: essential for the AI analysis.
  - Web Scraping Service API Key: for services like Apify, Crawlbase, or similar (e.g., Bright Data, ScraperAPI).
  - Database Access: credentials for your PostgreSQL/MySQL database. Ensure you've created the necessary tables (competitor_profiles, market_trends) with appropriate columns.
  - Google Docs Credential: to link n8n to your Google Drive for report generation. Create a template Google Doc with placeholders (e.g., {{competitorName}}, {{strengths}}).
  - Slack Credential: for sending team notifications to specific channels.
  - CRM API Key (optional): if directly integrating with HubSpot, Salesforce, or a custom CRM via API.

2. Identify Data Sources for Intelligence:
- Compile a list of competitor website URLs you want to monitor (e.g., pricing pages, blog sections, news).
- Identify relevant online review platforms (e.g., G2, Capterra) for competitor products.
- Gather RSS feed URLs from key industry news sources, tech blogs, and competitors' own blogs.
- Define keywords for general market trends or competitor mentions, if using tools that provide RSS feeds (like Google Alerts).

3. Build the n8n Workflow (10 Key Nodes):
Start a new workflow in n8n and add the following nodes, configuring their parameters and connections carefully:
1. Cron (Scheduled Analysis Trigger): set this to trigger daily or weekly at a specific time (e.g., Every Week, At Hour: 0, At Minute: 0).
2. HTTP Request (Fetch Competitor Web Data): configure this to call your chosen web scraping service's API. Set Method to POST, URL to the service's API endpoint, and build the JSON/Raw Body with the startUrls (competitor websites, review sites) for scraping, including your API key in Authentication (e.g., Header Auth).
3. RSS Feed (Fetch News & Blog RSS): add the URLs of competitor blogs and industry news RSS feeds.
4. Merge (Combine Data Sources): connect inputs from both Fetch Competitor Web Data and Fetch News & Blog RSS. Use Merge By Position.
5. Code (Pre-process Data for AI): write JavaScript code to iterate through merged items, extract relevant text content, perform basic cleaning (e.g., HTML stripping), and limit text length for AI input. Output should be an array of objects with content, title, url, and source. (A sketch appears at the end of this section.)
6. OpenAI (AI Analysis & Competitor Insights): select your OpenAI credential. Set Resource to Chat Completion and Model to gpt-4o. In Messages, create a System message defining the AI's role and a User message containing the dynamic prompt (referencing {{ $json.map(item => ... ).join('\\n\\n') }} for content, title, url, source) and requesting a structured JSON output for analysis. Set Output to Raw Data.
7. Google Docs (Generate Market Intelligence Report): select your Google Docs credential. Set Operation to Create document from template. Provide your Template Document ID and map the Values from the parsed AI output (using JSON.parse($json.choices[0].message.content).PropertyName) to your template placeholders.
8. Slack (Sales & Marketing Team Notification): select your Slack credential. Set Chat ID to your team's Slack channel ID. Compose the Text message, referencing the report link ({{ $json.documentUrl }}) and key AI insights (e.g., {{ JSON.parse($json.choices[0].message.content).Competitor_Name }}).
9. PostgreSQL (Store Insights to Database): select your PostgreSQL credential. Set Operation to Execute Query. Write an INSERT ... ON CONFLICT DO UPDATE SQL query to store the AI insights into your competitor_profiles or market_trends table, mapping values from the parsed AI output.
10. OpenAI (Generate Personalized Sales Talking Points - optional branch): this node can be part of the main workflow or a separate, manually triggered workflow. Configure it similarly to the main AI node, but with a prompt tailored to generate sales talking points based on a specific sales context and the stored insights.

4. Final Testing & Activation:
- Run a test: before going live, manually trigger the workflow from the first node. Carefully review the data at each stage to ensure correct processing and output. Verify that reports are generated, notifications are sent, and data is stored correctly.
- Activate the workflow: once testing is complete and successful, activate the workflow in n8n.

This system will empower your IT company's sales team with invaluable, data-driven intelligence, enabling them to close more deals and stay ahead in the market.
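For node 5 above (Pre-process Data for AI), a minimal cleaning pass could look like the following. The field fallbacks and the 4,000-character cap are assumptions to adjust for your scraper's and feeds' actual output shape.

```javascript
// Sketch for the "Pre-process Data for AI" Code node. Field names
// (title, url, content/description) and the 4000-char cap are
// assumptions about your scraper and RSS output; adjust as needed.
const results = [];

for (const item of $input.all()) {
  const raw = item.json.content || item.json.description || item.json.text || '';

  const cleaned = String(raw)
    .replace(/<[^>]*>/g, ' ')   // strip HTML tags
    .replace(/\s+/g, ' ')       // collapse whitespace
    .trim()
    .slice(0, 4000);            // bound prompt size

  if (cleaned.length > 0) {
    results.push({
      json: {
        content: cleaned,
        title: item.json.title || 'Untitled',
        url: item.json.url || item.json.link || '',
        source: item.json.source || (item.json.link ? 'rss' : 'scraper'),
      },
    });
  }
}

return results;
```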
by franck fambou
Overview
This advanced automation workflow enables deep web scraping combined with Retrieval-Augmented Generation (RAG) to transform websites into intelligent, queryable knowledge bases. The system recursively crawls target websites, extracts content, and indexes all data in a vector database for AI conversational access.

How the system works

Intelligent Web Scraping and RAG Pipeline
1. Recursive Web Scraper - automatically crawls every accessible page of a target website
2. Data Extraction - collects text, metadata, emails, links, and PDF documents
3. Supabase Integration - stores content in PostgreSQL tables for scalability
4. RAG Vectorization - generates embeddings and stores them for semantic search
5. AI Query Layer - connects embeddings to an AI chat engine with citations
6. Error Handling - automatically retriggers failed queries

Setup Instructions
Estimated setup time: 30-45 minutes

Prerequisites
- Self-hosted n8n instance (v0.200.0 or higher)
- Supabase account and project (PostgreSQL enabled)
- OpenAI/Gemini/Claude API key for embeddings and chat
- Optional: external vector database (Pinecone, Qdrant)

Detailed configuration steps

Step 1: Supabase configuration
- **Project creation**: new Supabase project with PostgreSQL enabled
- **Generating credentials**: API keys (anon key and service_role key) and connection string
- **Security configuration**: RLS policies according to your access requirements

Step 2: Connect Supabase to n8n
- **Configure the Supabase node**: add credentials to n8n Credentials
- **Test the connection**: verify with a simple query
- **Configure PostgreSQL**: direct connection for advanced operations

Step 3: Preparing the database
- **Main tables**:
  - pages: URLs, content, metadata, scraping statuses
  - documents: extracted and processed PDF files
  - embeddings: vectors for semantic search
  - links: link graph for navigation
- **Management functions**: scripts to reactivate failed URLs and manage retries

Step 4: Configuring automation
- **Recursive scraper**: starting URL, crawling depth, CSS selectors
- **HTTP extraction**: User-Agent, headers, timeouts, and retry policies
- **Supabase backup**: batch insertion, data validation, duplicate management

Step 5: Error handling and re-executions
- **Failure monitoring**: automatic detection of failed URLs
- **Manual triggers**: selective re-execution by domain or date
- **Recovery sub-workflows**: retry logic with exponential backoff

Step 6: RAG processing
- **Embedding generation**: text-embedding models with intelligent chunking (a chunking sketch appears at the end of this section)
- **Vector storage**: Supabase pgvector or an external database
- **Conversational engine**: connection to chat models with source citations

Data structure

Main Supabase tables

| Table | Content | Usage |
|-------|---------|-------|
| pages | URLs, HTML content, metadata | Main storage for scraped content |
| documents | PDF files, extracted text | Downloaded and processed documents |
| embeddings | Vectors, text chunks | Semantic search and RAG |
| links | Link graph, navigation | Relationships between pages |

Use cases

Business and enterprise
- Competitive intelligence with conversational querying
- Market research from complex web domains
- Compliance monitoring and regulatory watch

Research and academia
- Literature extraction with semantic search
- Building datasets from fragmented sources

Legal and technical
- Scraping legal repositories with intelligent queries
- Technical documentation transformed into a conversational assistant

Key features

Advanced scraping
- Recursive crawling with automatic link discovery
- Multi-format extraction (HTML, PDF, emails)
- Intelligent error handling and retry

Intelligent RAG
- Contextual embeddings for semantic search
- Multi-document queries with citations
- Intuitive conversational interface

Performance and scalability
- Processing of thousands of pages per execution
- Embedding cache for fast responses
- Scalable architecture with Supabase

Technical Architecture
Main flow: Target URL → Recursive scraping → Content extraction → Supabase storage → Vectorization → Conversational interface
Supported types: HTML pages, PDF documents, metadata, links, emails

Performance specifications
- **Capacity**: 10,000+ pages per run
- **Response time**: < 5 seconds for RAG queries
- **Accuracy**: > 90% relevance for specific domains
- **Scalability**: distributed architecture via Supabase

Advanced configuration

Customization
- Crawling depth and scope controls
- Domain and content type filters
- Chunking settings to optimize RAG

Monitoring
- Real-time monitoring in Supabase
- Cost and performance metrics
- Detailed conversation logs
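For the "intelligent chunking" mentioned in Step 6, a fixed-size splitter with overlap is the usual baseline before embedding. A sketch as an n8n Code node; the 1,000-character window and 200-character overlap are assumed defaults to tune against your embedding model's context size.

```javascript
// Sketch: fixed-size chunking with overlap ahead of embedding.
// Window (1000 chars) and overlap (200 chars) are assumed defaults.
const CHUNK_SIZE = 1000;
const OVERLAP = 200;

const chunks = [];
for (const item of $input.all()) {
  const text = item.json.content || '';
  for (let start = 0; start < text.length; start += CHUNK_SIZE - OVERLAP) {
    chunks.push({
      json: {
        url: item.json.url,            // keep provenance for citations
        chunkIndex: chunks.length,     // running index across all items
        text: text.slice(start, start + CHUNK_SIZE),
      },
    });
    if (start + CHUNK_SIZE >= text.length) break; // last window reached
  }
}

return chunks;
```

The overlap keeps sentences that straddle a boundary retrievable from both neighboring chunks, at the cost of slightly more embedding calls.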
by WeblineIndia
Automated Podcast Generation with n8n, OpenAI & Buzzsprout

This workflow automatically turns any audio file uploaded to Google Drive into a complete podcast episode. It handles transcription, content generation, blog drafting, social copy creation, thumbnail generation, Airtable record updates, and Buzzsprout publishing, and finally notifies your team via Slack. It is a full "audio-to-published-podcast" automation that requires no manual editing.

⚡ Quick Start: 10-Step Fast Implementation
1. Connect Google Drive, OpenAI, Airtable, Slack & Buzzsprout credentials.
2. Upload an audio file to the Drive folder.
3. Workflow triggers automatically.
4. Audio → text transcription via OpenAI Whisper.
5. AI converts the transcript into a title, show notes, tags, publish date, etc.
6. AI generates social content & a blog article.
7. AI generates a YouTube thumbnail (URL saved).
8. Data is saved to Airtable.
9. Episode is uploaded to Buzzsprout.
10. Slack receives the final "Episode Published" notification.

What It Does
This workflow automates the complete lifecycle of podcast publishing. Once an audio file is placed inside a Google Drive folder, the system takes over: it transcribes the audio, generates structured metadata, creates rich content including a blog post and multiple social media captions, and produces a YouTube-ready thumbnail. Generated content is saved into Airtable, ensuring you always have a centralized database of episode metadata. The workflow then uploads the processed audio file to Buzzsprout, creating a ready-to-publish podcast entry. Finally, it sends a Slack notification summarizing the episode details. This workflow eliminates hours of manual podcast editing, writing and uploading, providing a fast, reliable and repeatable content automation pipeline.

Who's It For
- Podcasters who want to automate production and publishing
- Marketing teams managing multiple content channels
- Creators who want to repurpose audio into blogs & social content
- Agencies or studios producing podcasts for clients
- Anyone wanting a hands-free "record → upload → publish" podcast system

Airtable Table Structure (Required Fields)
To run this workflow successfully, you must create one Airtable table that stores all generated podcast episode data. Below is the exact list of fields (with the correct Airtable field types) that the workflow expects.

| Field Name | Airtable Field Type | Purpose |
| --- | --- | --- |
| episode_title | Single line text | Title generated from the audio transcript |
| record_Id | Auto number | Automatically generated unique internal record identifier |
| episode_description | Long text | Description of the episode (AI-generated) |
| show_notes | Long text | Detailed show notes generated from the transcript |
| suggested_publish_date | Single line text | Publish date suggested by AI |
| blog_draft_markdown | Long text | SEO-optimized full blog draft |
| linkedin_post | Long text | LinkedIn long-form post |
| instagram_caption | Long text | Instagram caption + hashtags |
| twitter_thread | Long text | Thread text (converted from array → single string) |
| tiktok_script | Long text | Short TikTok/Shorts script |
| youtube_thumbnail_url | URL | Thumbnail image URL generated by OpenAI |
| buzzsprout_episode_id | Number | Episode ID returned from Buzzsprout |
| audio_url | Single line text | Buzzsprout final audio file URL |
| published_at | Single line text | Actual published datetime returned by Buzzsprout |
| guid | Single line text | Unique GUID assigned by Buzzsprout |

Notes for Users Setting Up the Table
- Do not use array fields — arrays are converted into strings before saving (see the sketch at the end of this section).
- record_Id is an Airtable auto-number field; Airtable generates it, and the workflow uses it to update the record.
- published_at, audio_url, guid and buzzsprout_episode_id are populated after the Buzzsprout upload.
- youtube_thumbnail_url is populated after the AI Image Generation node.

Requirements
To use this workflow, you need:
- **n8n** self-hosted or cloud
- **Google Drive account** (folder monitored for new audio uploads)
- **OpenAI API Key** (Whisper + GPT + Image Generation)
- **Airtable Base & API Key**
- **Buzzsprout Podcast ID & API Token**
- Slack App / Bot Token
- Audio file in MP3/WAV/M4A format

How It Works & Setup Steps

Step 1 — Google Drive Trigger
Monitors a specific folder. When an audio file is uploaded, the workflow starts.

Step 2 — Transcribe Audio with OpenAI
Uses the Whisper model to convert speech → text. The returned transcript is cleaned and extracted.

Step 3 — Extract Episode Metadata
Title, description, show notes, episode tags and a suggested publish date are all parsed and structured via OpenAI.

Step 4 — Generate Social Media Content
LinkedIn article post, Twitter/X thread, Instagram caption, and a TikTok short script.

Step 5 — Generate Blog Draft
Full SEO-optimized Markdown article created by OpenAI.

Step 6 — Create YouTube Thumbnail
OpenAI Image API; a 1280×720 thumbnail URL is stored in Airtable.

Step 7 — Save Everything to Airtable
One single table stores the episode title, description, tags, blog draft, social content, thumbnail URL, and transcript.

Step 8 — Upload Episode to Buzzsprout
The audio file from Google Drive is uploaded via multipart/form-data, with the title & description set. The response returns the audio URL and episode ID, which are stored in Airtable.

Step 9 — Slack Notification
Sends a message containing: episode title, Buzzsprout episode ID, published-at date, and thumbnail URL.

How to Customize Nodes
- Google Drive Trigger: change the folder ID to monitor a different source directory.
- OpenAI Metadata Prompt: adjust tone, style, summarization level or tag generation.
- Social Content Node: modify the prompt to produce longer posts, more thread tweets or add new channels.
- Image Generation: customize the thumbnail style: minimal, realistic, bold text, brand colors, etc.
- Buzzsprout Upload Node: set episode[published_at] to schedule future releases.
- Slack Notification: add more fields such as show notes or Airtable record links.

Add-Ons (Optional Enhancements)
You may extend the workflow with:
- **Auto-post to LinkedIn, X, Instagram, TikTok** via the Buffer API
- **Notion page creation** for drafting content
- **Automatic transcript upload** to Google Docs
- **Email notification** to hosts or guests
- **Auto-publish** a YouTube video version (using a static background + audio)

Use Case Examples
- Podcast Production Automation – record audio, upload to Drive and get a fully published episode automatically.
- Content Repurposing Workflow – turn one audio file into several content formats instantly.
- Marketing Team Efficiency – auto-generate social posts for multiple platforms.
- Client Podcast Management – agencies can manage episodes with zero manual work.
- Internal Communications – convert internal voice memos into polished summaries and blogs.
(There can be many more such use cases depending on customization.)

Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| Buzzsprout upload missing audio | Wrong field name or missing binary | Ensure the file field is binary & set to the correct Drive binary property |
| Episode title missing in Buzzsprout | OpenAI JSON not parsed correctly | Check the parse code & that episode_title exists |
| Social content not stored in Airtable | Airtable field type mismatch | Convert arrays into strings before writing |
| Thumbnail not displaying | Expired OpenAI image URL | Regenerate or store a static copy in Drive |
| Workflow not triggering | Wrong Drive folder ID | Update the folder ID in the trigger node |

Need Help?
If you need assistance customizing this workflow, adding new features or building similar automations, WeblineIndia's n8n automation developers can help you design, optimize and scale enterprise-grade n8n workflows. Feel free to reach out for professional support anytime.
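As both the table notes and the troubleshooting guide stress, Airtable long-text fields need strings, not arrays. A sketch of that conversion as a Code node; the twitter_thread field name matches the table above, while the join separators are illustrative choices.

```javascript
// Sketch: flatten array-valued AI outputs into strings so Airtable
// long-text fields accept them. twitter_thread matches the table
// above; the join separators are illustrative choices.
return $input.all().map(item => {
  const out = { ...item.json };

  // A tweet thread arrives as an array of tweets; join with blank lines.
  if (Array.isArray(out.twitter_thread)) {
    out.twitter_thread = out.twitter_thread.join('\n\n');
  }

  // Generic safety net: stringify any remaining array fields.
  for (const key of Object.keys(out)) {
    if (Array.isArray(out[key])) out[key] = out[key].join(', ');
  }

  return { json: out };
});
```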
by Dr. Firas
💥 Create AI Viral Videos using NanoBanana 2 PRO & VEO3.1 and Publish via Blotato

Who is this for?
This template is for content creators, marketers, agencies, and UGC studios who want to turn a simple Telegram message into AI-generated vertical videos, automatically published across multiple social platforms using Blotato.

What problem is this workflow solving? / Use case
Creating short-form video ads usually requires:
- Designing visuals
- Writing hooks and captions
- Generating or editing video
- Manually uploading to TikTok, Instagram, YouTube, Facebook, LinkedIn, X, etc.

This workflow solves that by automating the full pipeline from image + idea → edited image → AI video → multi-platform post.

What this workflow does

Create Image with NanoBanana 2 PRO
1. The user sends a photo + caption idea to a Telegram bot.
2. OpenAI Vision analyzes the reference image.
3. An LLM builds a UGC-style image prompt.
4. NanoBanana 2 PRO generates an enhanced, UGC-friendly image.

Generate Video with VEO3.1
1. An AI Agent structures a detailed Veo prompt (scene, camera, lighting, audio).
2. The prompt is optimized and sent to VEO3.1 reference-to-video.
3. The result is a 9:16, ~8s vertical video downloaded back into n8n. (A polling sketch for this kind of asynchronous generation appears after this section.)

Publish with Blotato
1. The video is uploaded to Blotato.
2. Posts are created for TikTok, Instagram, YouTube, Facebook, LinkedIn, and X using the AI-generated caption, title, and hashtags.
3. A final "Published" message is sent on Telegram.

Setup
1. Create and configure: a Telegram bot (token in the Set: Bot Token (Placeholder) node), OpenAI credentials, a Fal.ai API key (for NanoBanana 2 PRO + VEO3.1), and a Blotato account + API credentials with connected social accounts.
2. Import the template into n8n and update all credential references.
3. Test by sending a product image + short idea to your Telegram bot.

How to customize this workflow to your needs
- Edit the UGC image prompt system message to change the visual style (more cinematic, minimal, etc.).
- Adjust the VEO prompt optimizer to tweak duration, mood, or camera movement.
- Enable/disable specific Blotato platforms depending on where you want to publish.
- Modify the caption/hashtag generation logic to match your brand tone, language, or niche.

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n
📄 Documentation: Notion Guide
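Reference-to-video generation is asynchronous, so the workflow has to wait for the render to finish before downloading. A minimal polling sketch as a Code node follows; the status URL, response fields, and timing values are all placeholders, not Fal.ai's actual API, so consult the provider's documentation for the real endpoints.

```javascript
// Sketch: poll an asynchronous video-generation job until it finishes.
// The URL, response fields (status, video_url) and timings are
// placeholders, NOT Fal.ai's real API; consult the provider docs.
const statusUrl = $json.statusUrl; // assumed to come from the submit step

for (let attempt = 0; attempt < 30; attempt++) {
  const res = await this.helpers.httpRequest({ url: statusUrl, json: true });

  if (res.status === 'completed') {
    return [{ json: { videoUrl: res.video_url } }];
  }
  if (res.status === 'failed') {
    throw new Error('Video generation failed');
  }

  // Wait 10 seconds between polls (~5 minutes max).
  await new Promise(r => setTimeout(r, 10000));
}

throw new Error('Timed out waiting for the video to render');
```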
by Václav Čikl
Overview
Transform your Gmail sent folder into a comprehensive, enriched contact database automatically. This workflow processes hundreds or thousands of sent emails, extracting and enriching contact information using AI and web search, saving days of manual work.

What This Workflow Does
1. Loads sent Gmail messages and extracts basic contact information
2. Deduplicates contacts against your existing Google Sheets database
3. Searches for email conversation history with each contact
4. AI-powered extraction from email threads (phone, socials, websites)
5. Fallback web search via the Brave API when no email history exists
6. Saves enriched data to Google Sheets with all discovered contact details

Perfect For
- **Musicians & bands** organizing booker/venue contacts
- **Freelancers & agencies** building client databases
- **Sales teams** enriching prospect lists from outbound campaigns
- **Consultants** creating structured contact databases from years of emails

Key Features

Intelligent Two-Path Enrichment
- **Path A (Email History)**: analyzes existing email threads to extract contact details from signatures and message content
- **Path B (Web Search)**: falls back to Brave API search + HTML scraping when no email history exists

AI-Powered Data Extraction
Uses GPT-5 Nano to intelligently parse:
- Phone numbers
- Website URLs
- LinkedIn profiles
- Instagram, Twitter, Facebook, YouTube, TikTok, LinkTree, BandCamp...
- Alternative email addresses

Built-in Deduplication
Prevents duplicate entries by checking existing Google Sheets records before processing.

Free-Tier Friendly
Runs entirely on free tiers:
- Gmail API (free)
- OpenAI GPT-5 Nano (cost-effective)
- Brave Search API (2,000 free searches/month)
- Google Sheets (free)

Setup Requirements

Required Accounts & Credentials
1. Gmail account - OAuth2 credentials for Gmail API access
2. OpenAI API key - for the GPT-5 Nano model
3. Brave Search API key - free tier (2,000 searches/month)
4. Google Sheets - OAuth2 credentials

Google Sheets Structure
Create a Google Sheet with these columns (see the template link). Template Sheet: make a copy here.

How to Use
1. Clone this workflow to your n8n instance
2. Configure credentials for Gmail, OpenAI, Brave Search, and Google Sheets
3. Create/connect your Google Sheet using the template structure
4. Run manually to process all sent emails and build your initial database
5. Review results in Google Sheets - enriched with discovered contact info

First Run Tips
- Start with a smaller Gmail query (e.g., the last 6 months) to test
- Check your Brave API quota before processing large volumes
- The manual trigger means you control when processing happens
- Processing time varies with email volume (typically 2-5 seconds per contact)

Customization Ideas

Extend the Enrichment
- Include company information parsing
- Extract job titles from email signatures

Automate Regular Updates
- Convert the manual trigger to a scheduled trigger
- Process only recent sent emails for incremental updates
- Add an email notification when new contacts are added

Integration Options
- Push enriched contacts to a CRM (HubSpot, Salesforce)
- Send Slack notifications for high-value contacts
- Export to Airtable for relational database features

Improve Accuracy
- Add human-in-the-loop review for uncertain extractions
- Implement confidence scoring for AI-extracted data
- Add validation checks for phone numbers and URLs (a sketch follows this section)

Use Case Example
Music Promoter Building a Venue Database:
- Processed 1,835 sent emails to bookers and venues
- AI extracted contact details for 60% via email signatures
- Brave search found websites for the remaining 40%
- Final database: 1,835 enriched contacts ready for outreach
- Time saved: ~40 hours of manual data entry

Technical Notes
- **Rate limiting**: Brave API free tier = 2,000 searches/month
- **Duplicates**: handled at workflow start, not during processing
- **Empty results**: stores the email + name even when enrichment fails
- **Model**: uses GPT-5 Nano for cost-effective parsing
- **Gmail scope**: reads sent emails only (not the inbox)

Cost Estimate
For processing 1,000 contacts:
- **Gmail API**: free
- **GPT-5 Nano**: ~$0.50-2 (depending on email length)
- **Brave Search**: free (within the 2K/month limit)
- **Google Sheets**: free
- **Total**: under $2 for 1,000 enriched contacts

Template Author
Questions or need help with setup?
📧 Email: xciklv@gmail.com
💼 LinkedIn: https://www.linkedin.com/in/vaclavcikl/
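For the validation checks suggested under "Improve Accuracy", a lightweight Code node pass could look like this. The patterns are deliberately loose sketches, not production-grade validators, and the `phone`/`website` field names are assumptions about your sheet columns.

```javascript
// Sketch: sanity-check AI-extracted fields before writing to Sheets.
// The regexes are intentionally loose; tighten them for production.
const phoneRe = /^\+?[\d\s().-]{7,20}$/;
const urlRe = /^https?:\/\/[^\s]+$/i;

return $input.all().map(item => {
  const c = { ...item.json };

  if (c.phone && !phoneRe.test(c.phone)) {
    c.phone = '';            // drop values that don't look like phones
    c.needsReview = true;    // flag for human-in-the-loop review
  }
  if (c.website && !urlRe.test(c.website)) {
    c.website = '';
    c.needsReview = true;
  }

  return { json: c };
});
```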
by Dr. Firas
💥 Automate YouTube thumbnail creation from video links (with Templated.io)

Who is this for?
This workflow is designed for content creators, YouTubers, and automation enthusiasts who want to automatically generate stunning YouTube thumbnails and streamline their publishing workflow, all within n8n. If you regularly post videos and spend hours designing thumbnails manually, this automation is built for you.

What problem is this workflow solving?
Creating thumbnails is time-consuming, yet crucial for video performance. This workflow completely automates that process: no more manual design, no more downloading screenshots, no more repetitive uploads. In less than 2 minutes, you can refresh your entire YouTube thumbnail library and make your channel look brand new.

What this workflow does
Once activated, this workflow can:
✅ Receive YouTube video links via Telegram
✅ Extract metadata (title, description, channel info) via the YouTube API (a video-ID extraction sketch follows this section)
✅ Generate a custom thumbnail automatically using Templated.io
✅ Upload the new thumbnail to Google Drive
✅ Log data in Google Sheets
✅ Send email and Telegram notifications when ready
✅ Create and publish AI-generated social posts on LinkedIn, Facebook, and Twitter via Blotato

Bonus: you can re-create dozens of YouTube covers in minutes, saving up to 5 hours per week and around $500/month in manual design effort.

Setup
1️⃣ Get a YouTube Data API v3 key from the Google Cloud Console
2️⃣ Create a Templated.io account and get your API key + template ID
3️⃣ Set up a Telegram bot using @BotFather
4️⃣ Create a Google Drive folder and copy the folder ID
5️⃣ Create a Google Sheet with columns: Date, Video ID, Video URL, Title, Thumbnail Link, Status
6️⃣ Get your Blotato API key from the dashboard
7️⃣ Connect your social media accounts to Blotato
8️⃣ Fill in all credentials in the Workflow Configuration node
9️⃣ Test by sending a YouTube URL to your Telegram bot

How to customize this workflow
- Replace the Templated.io template ID with your own custom thumbnail layout
- Modify the OpenAI node prompts to change the text tone or style
- Add or remove social platforms in the Blotato section
- Adjust the wait time (default: 5 minutes) based on template complexity
- Localize or translate the generated captions as needed

Expected Outcome
With one Telegram message, you'll receive: a professional custom thumbnail, an instant email + Telegram notification, and a Google Drive link with your ready-to-use design. And your social networks will be automatically updated, with no manual uploads.

Credits
Thumbnail generation powered by Templated.io. Social publishing powered by Blotato. Automation orchestrated via n8n.

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n
🎥 Watch This Tutorial
📄 Documentation: Notion Guide
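Before calling the YouTube Data API, the video ID has to be pulled out of whatever URL form the Telegram message contains. A sketch of that step as a Code node; it handles the common URL shapes, and `message.text` follows the Telegram trigger's usual output field.

```javascript
// Sketch: extract a YouTube video ID from a URL sent via Telegram.
// Handles watch?v=, youtu.be/ and shorts/ forms; `message.text` is
// the Telegram trigger's usual payload field.
const text = $json.message?.text || '';

const match = text.match(
  /(?:youtube\.com\/(?:watch\?v=|shorts\/)|youtu\.be\/)([\w-]{11})/
);

if (!match) {
  throw new Error('No YouTube video ID found in the message');
}

return [{ json: { videoId: match[1], videoUrl: text.trim() } }];
```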
by TOMOMITSU ASANO
{ "name": "IoT Sensor Data Aggregation with AI-Powered Anomaly Detection", "nodes": [ { "parameters": { "content": "## How it works\nThis workflow monitors IoT sensors in real-time. It ingests data via MQTT or a schedule, normalizes the format, and removes duplicates using data fingerprinting. An AI Agent then analyzes readings against defined thresholds to detect anomalies. Finally, it routes alerts to Slack or Email based on severity and logs everything to Google Sheets.\n\n## Setup steps\n1. Configure the MQTT Trigger with your broker details.\n2. Set your specific limits in the Define Sensor Thresholds node.\n3. Connect your OpenAI credential to the Chat Model node.\n4. Authenticate the Gmail, Slack, and Google Sheets nodes.\n5. Create a Google Sheet with headers: timestamp, sensorId, location, readings, analysis.", "height": 484, "width": 360 }, "id": "298da7ff-0e47-4b6c-85f5-2ce77275cdf3", "name": "Main Overview", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -2352, -480 ] }, { "parameters": { "content": "## 1. Data Ingestion\nCaptures sensor data via MQTT for real-time streams or runs on a schedule for batch processing. Both streams are merged for unified handling.", "height": 488, "width": 412, "color": 7 }, "id": "4794b396-cd71-429c-bcef-61780a55d707", "name": "Section: Ingestion", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1822, -48 ] }, { "parameters": { "content": "## 2. Normalization & Deduplication\nSets monitoring thresholds, standardizes the JSON structure, creates a content hash, and filters out duplicate readings to prevent redundant API calls.", "height": 316, "width": 884, "color": 7 }, "id": "339e7cb7-491e-44c9-b561-983e147237d8", "name": "Section: Processing", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1376, 32 ] }, { "parameters": { "content": "## 3. AI Anomaly Detection\nAn AI Agent evaluates sensor data against thresholds to identify anomalies, assigning severity levels and providing actionable recommendations.", "height": 528, "width": 460, "color": 7 }, "id": "ebcb7ca3-f70c-4a90-8a2a-f489e7be4c73", "name": "Section: AI Analysis", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -422, 24 ] }, { "parameters": { "content": "## 4. 
Routing & Archiving\nRoutes alerts based on severity (Critical = Email+Slack, Warning = Slack) and archives all data points to Google Sheets for historical analysis.", "height": 756, "width": 900, "color": 7 }, "id": "7f2b32a5-d3b2-4fea-844f-4b39b8e8a239", "name": "Section: Alerting", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 94, -196 ] }, { "parameters": { "topics": "sensors/+/data", "options": {} }, "id": "bc86720b-9de9-4693-b090-343d3ebad3a3", "name": "MQTT Sensor Trigger", "type": "n8n-nodes-base.mqttTrigger", "typeVersion": 1, "position": [ -1760, 88 ] }, { "parameters": { "rule": { "interval": [ { "field": "minutes", "minutesInterval": 15 } ] } }, "id": "1c38f2d0-aa00-447e-bdae-bffd08c38461", "name": "Batch Process Schedule", "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.2, "position": [ -1760, 280 ] }, { "parameters": { "mode": "chooseBranch" }, "id": "f9b41822-ee61-448b-b324-38483036e0e1", "name": "Merge Triggers", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ -1536, 184 ] }, { "parameters": { "mode": "raw", "jsonOutput": "{\n \"thresholds\": {\n \"temperature\": {\"min\": -10, \"max\": 50, \"unit\": \"C\"},\n \"humidity\": {\"min\": 20, \"max\": 90, \"unit\": \"%\"},\n \"pressure\": {\"min\": 950, \"max\": 1050, \"unit\": \"hPa\"},\n \"co2\": {\"min\": 400, \"max\": 2000, \"unit\": \"ppm\"}\n },\n \"alertConfig\": {\n \"criticalChannel\": \"#iot-critical\",\n \"warningChannel\": \"#iot-alerts\",\n \"emailRecipients\": \"ops@example.com\"\n }\n}", "options": {} }, "id": "308705a8-edc7-4435-9250-487aa528e033", "name": "Define Sensor Thresholds", "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ -1312, 184 ] }, { "parameters": { "jsCode": "const items = $input.all();\nconst thresholds = $('Define Sensor Thresholds').first().json.thresholds;\nconst results = [];\n\nfor (const item of items) {\n let sensorData;\n try {\n sensorData = typeof item.json.message === 'string' \n ? JSON.parse(item.json.message) \n : item.json;\n } catch (e) {\n sensorData = item.json;\n }\n \n const now = new Date();\n const reading = {\n sensorId: sensorData.sensorId || sensorData.topic?.split('/')[1] || 'unknown',\n location: sensorData.location || 'Main Facility',\n timestamp: now.toISOString(),\n readings: {\n temperature: sensorData.temperature ?? null,\n humidity: sensorData.humidity ?? null,\n pressure: sensorData.pressure ?? null,\n co2: sensorData.co2 ?? 
null\n },\n metadata: {\n receivedAt: now.toISOString(),\n source: item.json.topic || 'batch',\n thresholds: thresholds\n }\n };\n \n results.push({ json: reading });\n}\n\nreturn results;" }, "id": "a2008189-5ace-418b-b0db-d51d63dcf2d8", "name": "Parse Sensor Payload", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -1088, 184 ] }, { "parameters": { "type": "SHA256", "value": "={{ $json.sensorId + '-' + $json.timestamp + '-' + JSON.stringify($json.readings) }}", "dataPropertyName": "dataHash" }, "id": "bf8db555-a10e-4468-a44a-cdc4c97e5b80", "name": "Generate Data Fingerprint", "type": "n8n-nodes-base.crypto", "typeVersion": 1, "position": [ -864, 184 ] }, { "parameters": { "compare": "selectedFields", "fieldsToCompare": "dataHash", "options": {} }, "id": "a45405e2-d211-449d-84d7-4538eaf56fcd", "name": "Remove Duplicate Readings", "type": "n8n-nodes-base.removeDuplicates", "typeVersion": 1, "position": [ -640, 184 ] }, { "parameters": { "text": "=Analyze this IoT sensor reading and determine if there are any anomalies:\n\nSensor ID: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTimestamp: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C (Normal: {{ $json.metadata.thresholds.temperature.min }} to {{ $json.metadata.thresholds.temperature.max }})\n- Humidity: {{ $json.readings.humidity }}% (Normal: {{ $json.metadata.thresholds.humidity.min }} to {{ $json.metadata.thresholds.humidity.max }})\n- CO2: {{ $json.readings.co2 }} ppm (Normal: {{ $json.metadata.thresholds.co2.min }} to {{ $json.metadata.thresholds.co2.max }})\n\nProvide your analysis in this exact JSON format:\n{\n \"hasAnomaly\": true/false,\n \"severity\": \"critical\"/\"warning\"/\"normal\",\n \"anomalies\": [\"list of detected issues\"],\n \"reasoning\": \"explanation of your analysis\",\n \"recommendation\": \"suggested action\"\n}", "options": { "systemMessage": "You are an IoT monitoring expert. Analyze sensor data and detect anomalies based on the provided thresholds. Be precise and provide actionable recommendations. Always respond in valid JSON format." } }, "id": "b60194ba-7b99-44e0-b0d7-9f1632dce4d4", "name": "AI Anomaly Detector", "type": "@n8n/n8n-nodes-langchain.agent", "typeVersion": 1.7, "position": [ -416, 184 ] }, { "parameters": { "jsCode": "const item = $input.first();\nconst originalData = $('Remove Duplicate Readings').first().json;\n\nlet aiAnalysis;\ntry {\n const responseText = item.json.output || item.json.text || '';\n const jsonMatch = responseText.match(/\\{[\\s\\S]*\\}/);\n aiAnalysis = jsonMatch ? 
JSON.parse(jsonMatch[0]) : {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Unable to parse AI response',\n recommendation: 'Manual review required'\n };\n} catch (e) {\n aiAnalysis = {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Parse error: ' + e.message,\n recommendation: 'Manual review required'\n };\n}\n\nreturn [{\n json: {\n ...originalData,\n analysis: aiAnalysis,\n alertLevel: aiAnalysis.severity,\n requiresAlert: aiAnalysis.hasAnomaly && aiAnalysis.severity !== 'normal'\n }\n}];" }, "id": "a145a8c7-538c-411a-95c6-9485acdcb969", "name": "Parse AI Analysis", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -64, 184 ] }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "critical", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "critical" } ] }, "renameOutput": true, "outputKey": "Critical" }, { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "warning", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "warning" } ] }, "renameOutput": true, "outputKey": "Warning" } ] }, "options": { "fallbackOutput": "extra" } }, "id": "1ab9785d-9f7f-4840-b1e9-0afc62b00b12", "name": "Route by Severity", "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ 160, 168 ] }, { "parameters": { "sendTo": "={{ $('Define Sensor Thresholds').first().json.alertConfig.emailRecipients }}", "subject": "=CRITICAL IoT Alert: {{ $json.sensorId }} - {{ $json.analysis.anomalies[0] || 'Anomaly Detected' }}", "message": "=CRITICAL IoT SENSOR ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTime: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C\n- Humidity: {{ $json.readings.humidity }}%\n- CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis:\n{{ $json.analysis.reasoning }}\n\nDetected Issues:\n{{ $json.analysis.anomalies.join('\\n- ') }}\n\nRecommendation:\n{{ $json.analysis.recommendation }}", "options": {} }, "id": "28201a6c-10b5-4387-be89-10a57c634622", "name": "Send Critical Email", "type": "n8n-nodes-base.gmail", "typeVersion": 2.1, "position": [ 384, -80 ], "webhookId": "35b9f8fa-4a50-456e-b552-9fd20a25ccc5" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-critical" }, "text": "=🚨 CRITICAL IoT ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\n\nReadings:\n• Temperature: {{ $json.readings.temperature }}°C\n• Humidity: {{ $json.readings.humidity }}%\n• CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis: {{ $json.analysis.reasoning }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": "c5a297be-ccef-40ba-9178-65805262efba", "name": "Slack Critical Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 112 ], "webhookId": "19113595-0208-4b37-b68c-c9788c19f618" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-alerts" }, "text": "=⚠️ IoT Warning\n\nSensor: {{ $json.sensorId }} | Location: {{ $json.location }}\nIssue: {{ $json.analysis.anomalies[0] || 'Threshold approaching' }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": 
"5c3d7acf-0211-44dd-9f4b-a43d3796abb1", "name": "Slack Warning Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 400 ], "webhookId": "37abfb19-f82f-4449-bd69-a65635b99606" }, { "parameters": {}, "id": "6bcbb42f-ec14-4f00-a091-babcc2d2d5c4", "name": "Merge Alert Outputs", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ 608, 184 ] }, { "parameters": { "operation": "append", "documentId": { "__rl": true, "mode": "list", "value": "" }, "sheetName": { "__rl": true, "mode": "list", "value": "" } }, "id": "6243aa23-408d-4928-a512-811eeb3b5f9e", "name": "Archive to Google Sheets", "type": "n8n-nodes-base.googleSheets", "typeVersion": 4.5, "position": [ 832, 184 ] }, { "parameters": { "model": "gpt-4o-mini", "options": { "temperature": 0.3 } }, "id": "61081e8a-ebc9-465f-8beb-88af225e59f2", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "typeVersion": 1.2, "position": [ -344, 408 ] } ], "pinData": {}, "connections": { "MQTT Sensor Trigger": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 0 } ] ] }, "Batch Process Schedule": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 1 } ] ] }, "Merge Triggers": { "main": [ [ { "node": "Define Sensor Thresholds", "type": "main", "index": 0 } ] ] }, "Define Sensor Thresholds": { "main": [ [ { "node": "Parse Sensor Payload", "type": "main", "index": 0 } ] ] }, "Parse Sensor Payload": { "main": [ [ { "node": "Generate Data Fingerprint", "type": "main", "index": 0 } ] ] }, "Generate Data Fingerprint": { "main": [ [ { "node": "Remove Duplicate Readings", "type": "main", "index": 0 } ] ] }, "Remove Duplicate Readings": { "main": [ [ { "node": "AI Anomaly Detector", "type": "main", "index": 0 } ] ] }, "AI Anomaly Detector": { "main": [ [ { "node": "Parse AI Analysis", "type": "main", "index": 0 } ] ] }, "Parse AI Analysis": { "main": [ [ { "node": "Route by Severity", "type": "main", "index": 0 } ] ] }, "Route by Severity": { "main": [ [ { "node": "Send Critical Email", "type": "main", "index": 0 }, { "node": "Slack Critical Alert", "type": "main", "index": 0 } ], [ { "node": "Slack Warning Alert", "type": "main", "index": 0 } ], [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Send Critical Email": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Critical Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Warning Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Merge Alert Outputs": { "main": [ [ { "node": "Archive to Google Sheets", "type": "main", "index": 0 } ] ] }, "OpenAI Chat Model": { "ai_languageModel": [ [ { "node": "AI Anomaly Detector", "type": "ai_languageModel", "index": 0 } ] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "", "meta": { "instanceId": "15d6057a37b8367f33882dd60593ee5f6cc0c59310ff1dc66b626d726083b48d" }, "tags": [] }
by Simeon Penev
Who's it for
Marketing, growth, and analytics teams who want a decision-ready GA4 summary: automatically calculated, clearly color-coded, and emailed as a polished HTML report.

How it works / What it does
- **Get Client (Form Trigger)** collects the GA4 Property ID ("Account ID"), Key Event, date ranges (current & previous), Client Name, and recipient email.
- **Overall Metrics This Period / Previous Period (GA4 Data API)** pull sessions, users, engagement, bounce rate, and more for each range.
- **Form Submits This Period / Previous Period (GA4 Data API)** fetch key-event counts for conversion comparisons.
- **Code** normalizes form dates for API requests.
- **AI Agent** builds a valid HTML email: it calculates % deltas, applies green for positive (#10B981) and red for negative (#EF4444) changes, writes the summary and recommendations, and produces the final HTML only. (A delta-calculation sketch follows this section.)
- **Send a message (Gmail)** sends the formatted HTML report to the specified email address with a contextual subject.

How to set up
1) Add credentials: Google Analytics OAuth2, OpenAI (Chat), Gmail OAuth2.
2) Ensure the form fields match your GA4 property and event names; "Account ID" = GA4 Property ID.
   Property ID - https://take.ms/vO2MG
   Key event - https://take.ms/hxwQi
3) Publish the form URL and run a test submission.

Requirements
GA4 property access (Viewer/Analyst) • OpenAI API key • Gmail account with send permission.

Resources
- Google OAuth2 (GA4) – https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/
- OpenAI credentials – https://docs.n8n.io/integrations/builtin/credentials/openai/
- Gmail OAuth2 – https://docs.n8n.io/integrations/builtin/credentials/google/
- GA4 Data API overview – https://developers.google.com/analytics/devguides/reporting/data/v1
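If you'd rather compute the deltas deterministically before the AI Agent writes prose (so the model only formats and never does arithmetic), a Code node like the following could do it. A sketch only: the node names match the list above, but the metric field names are assumptions about how the GA4 nodes are mapped in your workflow.

```javascript
// Sketch: compute % deltas and pick the template's colors before the
// AI writes the email. Metric field names are assumptions about how
// the GA4 nodes are mapped in your workflow.
const current = $('Overall Metrics This Period').first().json;
const previous = $('Overall Metrics Previous Period').first().json;

const metrics = ['sessions', 'totalUsers', 'engagementRate', 'bounceRate'];

const deltas = metrics.map(name => {
  const cur = Number(current[name] ?? 0);
  const prev = Number(previous[name] ?? 0);
  const pct = prev === 0 ? null : ((cur - prev) / prev) * 100;

  return {
    metric: name,
    current: cur,
    previous: prev,
    deltaPct: pct === null ? 'n/a' : pct.toFixed(1) + '%',
    // Green for positive, red for negative, per the template's rule.
    color: pct === null ? '#6B7280' : pct >= 0 ? '#10B981' : '#EF4444',
  };
});

return [{ json: { deltas } }];
```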