by Dr. Firas
## 💥 Create AI Viral Videos using NanoBanana 2 PRO & VEO3.1 and Publish via Blotato

### Who is this for?
This template is for content creators, marketers, agencies, and UGC studios who want to turn a simple Telegram message into AI-generated vertical videos, automatically published across multiple social platforms using Blotato.

### What problem is this workflow solving? / Use case
Creating short-form video ads usually requires:
- Designing visuals
- Writing hooks and captions
- Generating or editing video
- Manually uploading to TikTok, Instagram, YouTube, Facebook, LinkedIn, X, etc.

This workflow solves that by automating the full pipeline: image + idea → edited image → AI video → multi-platform post.

### What this workflow does
**Create Image with NanoBanana 2 PRO**
- User sends a photo + caption idea to a Telegram bot.
- OpenAI Vision analyzes the reference image.
- An LLM builds a UGC-style image prompt.
- NanoBanana 2 PRO generates an enhanced, UGC-friendly image.

**Generate Video with VEO3.1**
- An AI Agent structures a detailed Veo prompt (scene, camera, lighting, audio).
- The prompt is optimized and sent to VEO3.1 reference-to-video.
- The result is a 9:16, ~8s vertical video downloaded back into n8n.

**Publish with Blotato**
- The video is uploaded to Blotato.
- Posts are created for TikTok, Instagram, YouTube, Facebook, LinkedIn, and X using the AI-generated caption, title, and hashtags.
- A final “Published” message is sent on Telegram.

### Setup
Create and configure:
- Telegram bot (token in the Set: Bot Token (Placeholder) node)
- OpenAI credentials
- Fal.ai API key (for NanoBanana 2 PRO + VEO3.1)
- Blotato account + API credentials and connected social accounts

Import the template into n8n and update all credential references. Test by sending a product image + a short idea to your Telegram bot.

### How to customize this workflow to your needs
- Edit the UGC image prompt system message to change the visual style (more cinematic, minimal, etc.).
- Adjust the VEO prompt optimizer to tweak duration, mood, or camera movement.
- Enable/disable specific Blotato platforms depending on where you want to publish.
- Modify the caption/hashtag generation logic to match your brand tone, language, or niche.

👋 Need help or want to customize this?
📩 Contact: LinkedIn · 📺 YouTube: @DRFIRASS · 🚀 Workshops: Mes Ateliers n8n · 📄 Documentation: Notion Guide
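The "AI Agent structures a detailed Veo prompt" step in this template can be sketched as a small n8n Code-node helper. Everything here is an illustrative assumption (field names like `scene`, `camera`, `lighting`, `audio`, and the input shape), not the template's actual schema:

```javascript
// Sketch: assemble a structured VEO 3.1 prompt object from an image analysis.
// All field names are illustrative assumptions; adapt them to whatever your
// prompt-optimizer node actually expects.
function buildVeoPrompt(analysis) {
  return {
    scene: `UGC-style vertical shot of ${analysis.subject}, ${analysis.setting}`,
    camera: "handheld smartphone look, slow push-in, eye level",
    lighting: analysis.lighting || "soft natural daylight",
    audio: "casual voiceover, light ambient room tone",
    aspect_ratio: "9:16",       // matches the template's vertical output
    duration_seconds: 8,        // matches the ~8s VEO clip
  };
}

const veoPrompt = buildVeoPrompt({
  subject: "a matte-black water bottle",
  setting: "on a kitchen counter",
  lighting: "warm morning window light",
});
console.log(veoPrompt.aspect_ratio); // "9:16"
```

In the real workflow the LLM produces this structure; a deterministic builder like this is only useful as a fallback or for testing the downstream VEO call.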
by ing.Seif
## 🚀 Create Pro-Level Social Media Carousels & Auto-Publish with Blotato
By @nocodehack

### Who is this for?
This workflow is built for e-commerce brands, social media managers, marketing agencies, dropshippers, content creators, and automation builders who need to produce professional carousel posts at scale. Perfect for anyone running product marketing, brand campaigns, multi-platform social media, affiliate content, or any business that publishes carousel posts regularly and wants to eliminate design costs entirely.

### What problem is this workflow solving? / Use case
Creating professional carousel posts is:
- **Slow** — designing even one carousel takes 30-60 minutes manually
- **Expensive** — Fiverr/Upwork designers charge $50-100 per carousel
- **Inconsistent** — AI-generated slides never visually match each other
- **Unscalable** — managing multiple brands multiplies every problem
- **Tedious** — exporting, uploading, scheduling, and publishing is repetitive busywork

This workflow solves:
- ❌ Manual carousel design (Canva, Photoshop, Figma)
- ❌ Paying designers per post
- ❌ AI images that look obviously AI-generated
- ❌ Visually inconsistent slides that don't match
- ❌ Manual copywriting for captions and hashtags
- ❌ Manual uploading and publishing to each platform
- ❌ Managing multiple brands with different visual identities

It turns one Google Sheet row into a fully designed, published carousel — across Instagram, Facebook, and X — for approximately 5 cents.

### What this workflow does
This automation system acts as a complete AI-powered carousel design studio and publishing pipeline.
#### Step-by-step pipeline

**Step 1 — Data Pipeline (Google Sheet)**
- Runs on a schedule (configurable interval)
- Pulls the next unprocessed row from Google Sheets
- Each row = one carousel (one brand, one product, one post)
- Marks the row as "Processing" to prevent duplicate execution
- Checks if a product description and images are provided — if missing, auto-scrapes from the product URL using Jina.ai (free, no account needed)
- Merges all data into one clean payload for the AI

**Step 2 — AI Creative Direction (Claude)**
- Sends all product data (description, images as base64, brand logo, creative specifications) to Claude via the Anthropic API
- Claude acts as an executive creative director — not just generating content, but building a complete visual identity first:
  - Color palette (2-3 hex colors)
  - Typography style and hierarchy
  - Lighting direction and mood
  - Signature design element
  - Background texture concept
- Then generates for each slide: headline, body copy, layout approach, and a detailed 80+ word image prompt
- A 2000-word system prompt with a banned-elements list eliminates the generic AI look (no waves, no scattered leaves, no flat backgrounds, no Canva-style templates)
- Every image prompt ends with a negative prompt / AVOID block — the same concept as Stable Diffusion negative prompts, applied to Gemini
- Output is structured JSON via a parser — no freeform text that could break the pipeline
- Also generates the Instagram caption and hashtags

**Step 3 — Image Generation with Visual Consistency Loop**
This is the core innovation of the workflow.
- Slides are generated sequentially, NOT in parallel — this is critical
- For slide 1: Gemini generates the image from the prompt + product reference images
- For slide 2+: the workflow fetches all previously generated slides, converts them to base64, and attaches them as reference images alongside the current prompt
- The text prompt explicitly instructs: "Match the exact typography, color palette, and lighting from the attached previous slides"
- This creates a double enforcement system — visual reference + written instruction
- Result: every slide in the carousel shares the same visual identity without using templates or presets
- Images are generated via NanoBanana Pro (Gemini image generation API)
- Each generated slide is uploaded to Blotato media storage and saved to a global memory array for the next iteration
- Uses $getWorkflowStaticData('global') to persist slide URLs across loop iterations

**Step 4 — Publishing & Status Update**
- Collects all uploaded slide URLs in the correct order
- Reads the "Socials" field from the Google Sheet (comma-separated: instagram, facebook, x)
- Routes to the correct platform via a Switch node
- Publishes via the Blotato API — supports immediate posting or scheduled posting (ISO 8601 format)
- One row can publish to all three platforms simultaneously
- Updates the Google Sheet row: Status → "Published" + direct Post URL
- If anything breaks: Status → "Failed" with error details

➡️ Result: one Google Sheet row in, one fully designed and published multi-platform carousel out. 5 cents. 5 minutes.

### Setup
Required accounts & API keys:
- **Google Sheets** — read/write access to your content spreadsheet
- **Anthropic** — Claude API key (creative direction + copywriting)
- **Google AI / Gemini** — API key for image generation via NanoBanana Pro ($300 free credit per new Gmail)
- **Blotato** — API key for media upload + multi-platform publishing
- **Jina.ai** — free web scraping, no account required (10M tokens free)

Google Sheet structure:

| Column | Description |
|--------|-------------|
| Brand Logo URL | Direct link to your brand logo — placed on every slide automatically |
| Product URL | Link to product page — used for auto-scraping if description/images are empty |
| Product Description | Optional — write it yourself for best results, or leave blank to auto-scrape |
| Product Images URL | Direct links to product photos (comma-separated for multiple) |
| Specification | Creative direction hint (e.g. "dark cinematic luxury" or "bright playful minimal") — leave empty for AI to decide |
| Post Date | YYYY-MM-DD format — workflow only picks up rows matching today's date |
| Post Hour | `now` for immediate publish, or `14:00` / `2pm` for scheduled |
| Socials | Comma-separated platforms: instagram, facebook, x |
| Status | Leave empty — auto-filled: Processing → Published / Failed |
| Post URL | Leave empty — auto-filled with direct link to live post |

Configuration steps:
1. Import the workflow JSON into n8n
2. Add all required API credentials in n8n's credential manager
3. Create your Google Sheet using the template provided (link in resources)
4. Set your Blotato profile IDs in each publishing node (one per platform)
5. Map platform outputs in the Switch node
6. Verify the Gemini image generation endpoint in the NanoBanana Pro node
7. Test with one row before activating production mode

Recommended hosting: n8n is free and open source but needs a server. A VPS with at least 2GB RAM handles image generation and multiple API calls without issues. The workflow runs 24/7 on schedule.
### How to customize
- **Change AI model:** Swap Claude for GPT-4o or Gemini in the LLM Chain node — the structured output parser works with any model
- **Change slide count:** Edit the system prompt and user prompt (currently locked to 3 slides)
- **Change visual style:** Edit the creative direction in the system prompt — modify banned elements, change composition approaches, adjust the quality standard
- **Add platforms:** Add new outputs to the Switch node + new Blotato publish nodes (Blotato supports TikTok, LinkedIn, Pinterest, Threads, YouTube, Bluesky)
- **Add approval step:** Insert a Wait node before publishing to manually review before posting
- **Change image hosting:** Swap the Blotato Upload for Cloudinary or any S3-compatible storage
- **Change scraper:** Swap Jina.ai for any other web scraping tool
- **Adjust scheduling:** Modify the Schedule Trigger interval and use the Post Hour column for per-post timing
- **Multi-brand setup:** Each row can have a different brand logo and creative specification — the AI generates a fresh visual identity per row

### Cost breakdown per carousel (approx.)

| Component | Cost |
|-----------|------|
| Claude (creative direction + copy, ~8K tokens) | ~$0.02 |
| Gemini (3 slide images via NanoBanana Pro) | ~$0.03 |
| Jina.ai (web scraping) | Free |
| Blotato (publishing) | Per plan |
| **Total per carousel** | **~$0.05** |

Compare: Fiverr/Upwork designers charge $50-100 per carousel post. This workflow does it for 5 cents. Gemini offers $300 free credit per new Gmail account — enough for thousands of carousels before spending anything.
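The per-post timing described above (Post Hour of `now` for immediate publish, `14:00` or `2pm` for scheduled, converted to ISO 8601 for the Blotato API) can be sketched roughly like this. The accepted formats and the UTC assumption are mine, not the template's exact parsing rules:

```javascript
// Sketch: turn a sheet's "Post Date" + "Post Hour" into either an immediate
// publish or an ISO 8601 scheduled time. Accepts "now", 24h "14:30", or
// simple "2pm"-style values (assumed formats); time zone is assumed UTC.
function resolvePublishTime(postDate, postHour) {
  if (!postHour || postHour.trim().toLowerCase() === "now") {
    return { scheduled: false, isoTime: null };
  }
  let hours, minutes = 0;
  const ampm = postHour.trim().toLowerCase().match(/^(\d{1,2})(am|pm)$/);
  if (ampm) {
    hours = parseInt(ampm[1], 10) % 12 + (ampm[2] === "pm" ? 12 : 0);
  } else {
    const [h, m] = postHour.split(":").map(Number);
    hours = h;
    minutes = m || 0;
  }
  const pad = (n) => String(n).padStart(2, "0");
  return {
    scheduled: true,
    isoTime: `${postDate}T${pad(hours)}:${pad(minutes)}:00Z`,
  };
}

console.log(resolvePublishTime("2024-06-01", "2pm").isoTime);
// "2024-06-01T14:00:00Z"
```

A helper like this would sit in a Code node between the Google Sheets read and the Blotato publish nodes.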
### Expected outcome
You get a fully automated carousel production system that can:
- Generate agency-quality carousel designs from a spreadsheet
- Maintain visual consistency across all slides without templates
- Handle multiple brands with completely different visual identities
- Publish to Instagram, Facebook, and X simultaneously
- Schedule content weeks in advance
- Scale from 1 carousel/day to dozens without additional effort
- Eliminate design costs almost entirely

### Typical use cases
- E-commerce product marketing (daily product carousels)
- Brand awareness campaigns across multiple platforms
- Affiliate marketing content at scale
- Social media agency client deliverables
- Dropshipping product promotion
- Multi-brand social media management
- Content calendar automation
- A/B testing different creative directions for the same product

Watch the full step-by-step walkthrough: 🎥 Video Tutorial

👋 Need help or want to customize?
📩 Contact: LinkedIn · 📺 YouTube: @nocodehack · 🌐 Resources & Downloads: nocodehack.io
by Václav Čikl
## Overview
Transform your Gmail sent folder into a comprehensive, enriched contact database automatically. This workflow processes hundreds or thousands of sent emails, extracting and enriching contact information using AI and web search – saving days of manual work.

### What This Workflow Does
- Loads sent Gmail messages and extracts basic contact information
- Deduplicates contacts against your existing Google Sheets database
- Searches for email conversation history with each contact
- AI-powered extraction from email threads (phone, socials, websites)
- Fallback web search via the Brave API when no email history exists
- Saves enriched data to Google Sheets with all discovered contact details

### Perfect For
- **Musicians & bands** organizing booker/venue contacts
- **Freelancers & agencies** building client databases
- **Sales teams** enriching prospect lists from outbound campaigns
- **Consultants** creating structured contact databases from years of emails

### Key Features
**Intelligent Two-Path Enrichment**
- **Path A (Email History)**: Analyzes existing email threads to extract contact details from signatures and message content
- **Path B (Web Search)**: Falls back to Brave API search + HTML scraping when no email history exists

**AI-Powered Data Extraction**
Uses GPT-5 Nano to intelligently parse:
- Phone numbers
- Website URLs
- LinkedIn profiles
- Instagram, Twitter, Facebook, YouTube, TikTok, LinkTree, BandCamp...
- Alternative email addresses

**Built-in Deduplication**
Prevents duplicate entries by checking existing Google Sheets records before processing.
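The deduplication step boils down to a set lookup on normalized email addresses. A minimal sketch (the `email` field name on the sheet rows is an assumption):

```javascript
// Sketch of the dedup step: skip any sent-mail contact whose address already
// exists in the Google Sheets database. Comparison is case-insensitive.
function dedupeContacts(existingRows, extracted) {
  const known = new Set(
    existingRows.map((r) => (r.email || "").trim().toLowerCase())
  );
  return extracted.filter(
    (c) => c.email && !known.has(c.email.trim().toLowerCase())
  );
}

const fresh = dedupeContacts(
  [{ email: "booker@venue.com" }],
  [
    { email: "Booker@Venue.com", name: "Sam" }, // already known (case differs)
    { email: "new@club.io", name: "Ada" },      // genuinely new
  ]
);
console.log(fresh.length); // 1 — only new@club.io survives
```

Doing this once at workflow start (as the template does) keeps the expensive AI and Brave Search steps from ever running on contacts you already have.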
### Free-Tier Friendly
Runs entirely on free tiers:
- Gmail API (free)
- OpenAI GPT-5 Nano (cost-effective)
- Brave Search API (2,000 free searches/month)
- Google Sheets (free)

## Setup Requirements
**Required Accounts & Credentials**
1. Gmail Account - OAuth2 credentials for Gmail API access
2. OpenAI API Key - For the GPT-5 Nano model
3. Brave Search API Key - Free tier (2,000 searches/month)
4. Google Sheets - OAuth2 credentials

**Google Sheets Structure**
Create a Google Sheet with these columns (see template link). Template Sheet: Make a copy here

## How to Use
1. Clone this workflow to your n8n instance
2. Configure credentials for Gmail, OpenAI, Brave Search, and Google Sheets
3. Create/connect your Google Sheet using the template structure
4. Run manually to process all sent emails and build your initial database
5. Review results in Google Sheets - enriched with discovered contact info

**First Run Tips**
- Start with a smaller Gmail query (e.g. last 6 months) to test
- Check your Brave API quota before processing large volumes
- The manual trigger means you control when processing happens
- Processing time varies with email volume (typically 2-5 seconds per contact)

## Customization Ideas
**Extend the Enrichment**
- Include company information parsing
- Extract job titles from email signatures

**Automate Regular Updates**
- Convert the manual trigger to a scheduled trigger
- Process only recent sent emails for incremental updates
- Add an email notification when new contacts are added

**Integration Options**
- Push enriched contacts to a CRM (HubSpot, Salesforce)
- Send Slack notifications for high-value contacts
- Export to Airtable for relational database features

**Improve Accuracy**
- Add human-in-the-loop review for uncertain extractions
- Implement confidence scoring for AI-extracted data
- Add validation checks for phone numbers and URLs

## Use Case Example
**Music Promoter Building a Venue Database:**
- Processed 1,835 sent emails to bookers and venues
- AI extracted contact details from 60% via email signatures
- Brave search found websites for the remaining 40%
- Final database: 1,835 enriched contacts ready for outreach
- Time saved: ~40 hours of manual data entry

## Technical Notes
- **Rate Limiting**: Brave API free tier = 2,000 searches/month
- **Duplicates**: Handled at workflow start, not during processing
- **Empty Results**: Stores email + name even when enrichment fails
- **Model**: Uses GPT-5 Nano for cost-effective parsing
- **Gmail Scope**: Reads sent emails only (not inbox)

## Cost Estimate
For processing 1,000 contacts:
- **Gmail API**: Free
- **GPT-5 Nano**: ~$0.50-2 (depending on email length)
- **Brave Search**: Free (within the 2K/month limit)
- **Google Sheets**: Free
- **Total**: Under $2 for 1,000 enriched contacts

Template Author — questions or need help with setup?
📧 Email: xciklv@gmail.com
💼 LinkedIn: https://www.linkedin.com/in/vaclavcikl/
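The suggested "validation checks for phone numbers and URLs" customization could start from something like this. Both patterns are deliberately loose illustrations, not production-grade validators:

```javascript
// Loose sanity checks for AI-extracted contact fields before they are
// written to the sheet. Tighten the rules to match your region/data.
function looksLikePhone(value) {
  // 7-15 digits, optional leading +, common separators stripped first
  const digits = value.replace(/[\s().-]/g, "");
  return /^\+?\d{7,15}$/.test(digits);
}

function looksLikeUrl(value) {
  try {
    const u = new URL(value.startsWith("http") ? value : `https://${value}`);
    return u.hostname.includes(".");
  } catch {
    return false; // URL constructor throws on anything unparsable
  }
}

console.log(looksLikePhone("+420 777 123 456")); // true
console.log(looksLikeUrl("venue-club.de/contact")); // true
console.log(looksLikePhone("call me maybe")); // false
```

Running extracted values through checks like these before the Google Sheets write is a cheap way to keep hallucinated or malformed data out of the database.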
by franck fambou
## Overview
This advanced automation workflow combines deep web scraping with Retrieval-Augmented Generation (RAG) to transform websites into intelligent, queryable knowledge bases. The system recursively crawls target websites, extracts content, and indexes all data in a vector database for conversational AI access.

## How the system works
**Intelligent Web Scraping and RAG Pipeline**
1. **Recursive Web Scraper** - Automatically crawls every accessible page of a target website
2. **Data Extraction** - Collects text, metadata, emails, links, and PDF documents
3. **Supabase Integration** - Stores content in PostgreSQL tables for scalability
4. **RAG Vectorization** - Generates embeddings and stores them for semantic search
5. **AI Query Layer** - Connects embeddings to an AI chat engine with citations
6. **Error Handling** - Automatically retriggers failed queries

## Setup Instructions
Estimated setup time: 30-45 minutes

**Prerequisites**
- Self-hosted n8n instance (v0.200.0 or higher)
- Supabase account and project (PostgreSQL enabled)
- OpenAI/Gemini/Claude API key for embeddings and chat
- Optional: external vector database (Pinecone, Qdrant)

**Step 1: Supabase configuration**
- **Project creation**: New Supabase project with PostgreSQL enabled
- **Generating credentials**: API keys (anon key and service_role key) and connection string
- **Security configuration**: RLS policies according to your access requirements

**Step 2: Connect Supabase to n8n**
- **Configure the Supabase node**: Add credentials to n8n Credentials
- **Test the connection**: Verify with a simple query
- **Configure PostgreSQL**: Direct connection for advanced operations

**Step 3: Prepare the database**
Main tables:
- pages: URLs, content, metadata, scraping statuses
- documents: Extracted and processed PDF files
- embeddings: Vectors for semantic search
- links: Link graph for navigation

Management functions: scripts to reactivate failed URLs and manage retries

**Step 4: Configure the automation**
- **Recursive scraper**: Starting URL, crawl depth, CSS selectors
- **HTTP extraction**: User-Agent, headers, timeouts, and retry policies
- **Supabase backup**: Batch insertion, data validation, duplicate management

**Step 5: Error handling and re-execution**
- **Failure monitoring**: Automatic detection of failed URLs
- **Manual triggers**: Selective re-execution by domain or date
- **Recovery sub-flows**: Retry logic with exponential backoff

**Step 6: RAG processing**
- **Embedding generation**: Text-embedding models with intelligent chunking
- **Vector storage**: Supabase pgvector or an external database
- **Conversational engine**: Connection to chat models with source citations

## Data structure
Main Supabase tables:

| Table | Content | Usage |
|-------|---------|-------|
| pages | URLs, HTML content, metadata | Main storage for scraped content |
| documents | PDF files, extracted text | Downloaded and processed documents |
| embeddings | Vectors, text chunks | Semantic search and RAG |
| links | Link graph, navigation | Relationships between pages |

## Use cases
**Business and enterprise**
- Competitive intelligence with conversational querying
- Market research from complex web domains
- Compliance monitoring and regulatory watch

**Research and academia**
- Literature extraction with semantic search
- Building datasets from fragmented sources

**Legal and technical**
- Scraping legal repositories with intelligent queries
- Technical documentation transformed into a conversational assistant

## Key features
**Advanced scraping**
- Recursive crawling with automatic link discovery
- Multi-format extraction (HTML, PDF, emails)
- Intelligent error handling and retries

**Intelligent RAG**
- Contextual embeddings for semantic search
- Multi-document queries with citations
- Intuitive conversational interface

**Performance and scalability**
- Processes thousands of pages per execution
- Embedding cache for fast responses
- Scalable architecture with Supabase

## Technical Architecture
Main flow: Target URL → Recursive scraping → Content extraction → Supabase storage → Vectorization → Conversational interface
Supported types: HTML pages, PDF documents, metadata, links, emails

## Performance specifications
- **Capacity**: 10,000+ pages per run
- **Response time**: < 5 seconds for RAG queries
- **Accuracy**: >90% relevance for specific domains
- **Scalability**: Distributed architecture via Supabase

## Advanced configuration
**Customization**
- Crawl depth and scope controls
- Domain and content-type filters
- Chunking settings to optimize RAG

**Monitoring**
- Real-time monitoring in Supabase
- Cost and performance metrics
- Detailed conversation logs
by Dr. Firas
## 💥 Automate YouTube thumbnail creation from video links (with Templated.io)

### Who is this for?
This workflow is designed for content creators, YouTubers, and automation enthusiasts who want to automatically generate stunning YouTube thumbnails and streamline their publishing workflow — all within n8n. If you regularly post videos and spend hours designing thumbnails manually, this automation is built for you.

### What problem is this workflow solving?
Creating thumbnails is time-consuming — yet crucial for video performance. This workflow completely automates that process: no more manual design, no more downloading screenshots, no more repetitive uploads. In less than 2 minutes, you can refresh your entire YouTube thumbnail library and make your channel look brand new.

### What this workflow does
Once activated, this workflow can:
- ✅ Receive YouTube video links via Telegram
- ✅ Extract metadata (title, description, channel info) via the YouTube API
- ✅ Generate a custom thumbnail automatically using Templated.io
- ✅ Upload the new thumbnail to Google Drive
- ✅ Log data in Google Sheets
- ✅ Send email and Telegram notifications when ready
- ✅ Create and publish AI-generated social posts on LinkedIn, Facebook, and Twitter via Blotato

Bonus: you can re-create dozens of YouTube covers in minutes — saving up to 5 hours per week and around $500/month in manual design effort.
### Setup
1️⃣ Get a YouTube Data API v3 key from the Google Cloud Console
2️⃣ Create a Templated.io account and get your API key + template ID
3️⃣ Set up a Telegram bot using @BotFather
4️⃣ Create a Google Drive folder and copy the folder ID
5️⃣ Create a Google Sheet with columns: Date, Video ID, Video URL, Title, Thumbnail Link, Status
6️⃣ Get your Blotato API key from the dashboard
7️⃣ Connect your social media accounts to Blotato
8️⃣ Fill in all credentials in the Workflow Configuration node
9️⃣ Test by sending a YouTube URL to your Telegram bot

### How to customize this workflow
- Replace the Templated.io template ID with your own custom thumbnail layout
- Modify the OpenAI node prompts to change text tone or style
- Add or remove social platforms in the Blotato section
- Adjust the wait time (default: 5 minutes) based on template complexity
- Localize or translate the generated captions as needed

### Expected Outcome
With one Telegram message, you’ll receive:
- A professional custom thumbnail
- An instant email + Telegram notification
- A Google Drive link with your ready-to-use design

And your social networks will be automatically updated — no manual uploads.

### Credits
- Thumbnail generation powered by Templated.io
- Social publishing powered by Blotato
- Automation orchestrated via n8n

👋 Need help or want to customize this?
📩 Contact: LinkedIn · 📺 YouTube: @DRFIRASS · 🚀 Workshops: Mes Ateliers n8n · 🎥 Watch This Tutorial · 📄 Documentation: Notion Guide
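Before the YouTube API call, the workflow needs the video ID out of whatever link the Telegram message contains. A sketch covering the common URL shapes (the set of patterns is an assumption, not the template's exact logic):

```javascript
// Sketch: extract the 11-character video ID from common YouTube URL forms
// so it can be passed to the YouTube Data API v3 `videos.list` endpoint.
function extractVideoId(url) {
  const patterns = [
    /youtu\.be\/([A-Za-z0-9_-]{11})/,            // youtu.be/VIDEOID
    /[?&]v=([A-Za-z0-9_-]{11})/,                 // watch?v=VIDEOID
    /youtube\.com\/shorts\/([A-Za-z0-9_-]{11})/, // shorts/VIDEOID
  ];
  for (const p of patterns) {
    const m = url.match(p);
    if (m) return m[1];
  }
  return null; // not a recognizable YouTube link
}

console.log(extractVideoId("https://www.youtube.com/watch?v=dQw4w9WgXcQ"));
// "dQw4w9WgXcQ"
```

Returning `null` for unrecognized input lets the workflow reply with a friendly Telegram error instead of failing downstream.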
by moosa
## What this workflow does
A fully production-ready B2B lead outreach pipeline that:
- Takes industry keywords from a form trigger (or you can manually add rows to Google Sheets)
- Scrapes targeted LinkedIn leads using Apify (peakydev~leads-scraper-ppe actor)
- Filters for valid emails
- Automatically creates company + contact records in HubSpot CRM
- Generates highly personalized, non-salesy cold emails using GPT (tailored to the company’s industry)
- Logs every lead to Google Sheets with "Pending" status
- Waits for human approval or rejection — triggered directly from Google Sheets via two webhooks:
  - Approve (via button/script in the sheet) → sends the email via Gmail
  - Reject (via button/script in the sheet) → automatically rewrites a softer, more value-focused version with a different angle → updates the same row in the sheet

## Why this is useful
Most outreach automations send emails blindly. This one gives you full control with a human-in-the-loop layer inside Google Sheets, plus an automatic intelligent rewrite on rejection — which greatly improves reply rates, reduces spam complaints, and protects your sender reputation.
## Ideal if you:
- Run outbound campaigns at reasonable scale
- Already live in Google Sheets for lead review
- Want clean HubSpot CRM records before sending anything
- Need traceable approval (who approved what, when)
- Often hear “too salesy” and want the AI to adapt automatically

## How to use
1. Import the workflow into n8n
2. Connect the required credentials: Apify API token, HubSpot App Token (Private App), Gmail OAuth2, Google Sheets OAuth2, OpenAI API key
3. Replace placeholders: your Google Sheet ID in the “Leads Log” node, your name & signature in the AI prompts, and any test email addresses if needed
4. Activate the main Form Trigger (Lead Campaign Setup) to start campaigns
5. Review & act from Google Sheets:
   - Leads appear in your sheet with "Pending" status
   - Use simple buttons or a dropdown + Apps Script (code examples provided in the workflow's sticky notes) to trigger:
     - Approve → POST to /webhook/approved
     - Reject → POST to /webhook/rejected

## Required credentials
- Apify
- HubSpot (App Token)
- Gmail OAuth2
- Google Sheets OAuth2
- OpenAI

Once set up, you get a beautiful hybrid system: generate leads automatically → review & decide in familiar Google Sheets → one-click action → n8n handles sending or smart rewriting. Enjoy — and feel free to share your reply rates or any tweaks you make after running a few campaigns! 🪄
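The Apps Script side of the approve/reject buttons described above could look roughly like this. The template ships its own version in sticky notes; the host URL, field names, and payload shape here are all hypothetical:

```javascript
// Sketch of the Google Apps Script helper behind the sheet's buttons:
// build a POST to the n8n approval/rejection webhooks. The base URL and
// row fields are placeholders, not the template's actual values.
const N8N_BASE = "https://your-n8n-host"; // hypothetical host

function buildWebhookCall(decision, row) {
  const path = decision === "approve" ? "/webhook/approved" : "/webhook/rejected";
  return {
    url: N8N_BASE + path,
    options: {
      method: "post",
      contentType: "application/json",
      payload: JSON.stringify({ rowNumber: row.rowNumber, email: row.email }),
    },
  };
}

// Inside Apps Script, the button handler would then run:
//   const call = buildWebhookCall("approve", { rowNumber: 7, email: "lead@acme.com" });
//   UrlFetchApp.fetch(call.url, call.options);
const call = buildWebhookCall("reject", { rowNumber: 7, email: "lead@acme.com" });
console.log(call.url.endsWith("/webhook/rejected")); // true
```

Sending the row number in the payload is what lets n8n update the same sheet row with the rewritten email after a rejection.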
by Hyrum Hurst
## AI Agent Lead Funnel for AI Agencies
An End-to-End Automation That Turns Demos Into Booked Calls

This n8n workflow is a full inbound → outbound hybrid funnel designed for AI agencies. It captures warm leads through instant AI value, then automatically follows up with personalized, context-aware outreach and reminders until the lead either replies or books a call. No cold scraping. No manual follow-ups. Just leverage + timing.

### 🚀 How the Workflow Works

#### 📋 PART 1 — Lead Capture & Instant Value

**1 — Share High-Impact AI Image Edits**
You post before/after examples using the NanoBanana / Gemini image-editing model on social platforms. Each post includes a link to a lightweight form. The visual results do the selling for you.

**2 — Lead Submits Image & Details**
The form collects:
- Image upload
- Edit instructions
- Name
- Email
- Company name

This filters for high-intent prospects only.

**3 — AI Edits the Image Instantly**
Once submitted, the workflow:
- Sends the image + instructions to the AI image editor
- Preserves lighting and camera angle unless specified
- Generates a polished result in seconds

**4 — Result Delivered via Email**
The edited image is emailed directly to the user with:
- A friendly confirmation message
- Soft positioning for future work

This establishes trust before any sales motion happens.

**5 — Lead Is Logged Automatically**
All lead data is saved to Google Sheets: Name, Company, Email, Timestamp. This becomes your live CRM of warm inbound leads.

#### 🤖 PART 2 — AI-Driven Personalized Outreach

**6 — AI Analyzes the Lead**
An AI sales agent:
- Looks at the company name + context
- Reviews a library of proven automation ideas
- Either selects the best fit or creates a simple custom one

**7 — AI Writes a Personalized Outreach Email**
The agent generates a short email that:
- Mentions a specific automation already built
- States you can help implement it quickly
- Invites them to book a call via your calendar

No marketing fluff. No generic pitches. Every email feels hand-written.
**8 — Outreach Email Is Sent Automatically**
The email is sent from your inbox (Outlook, Gmail, SMTP, etc.) and includes:
- Their name
- Their company
- A clear calendar booking link

#### 📬 PART 3 — Smart Follow-Up System

**9 — Wait 48 Hours**
The workflow pauses to give the lead time to respond naturally.

**10 — Check for a Reply**
After 48 hours:
- If the lead replied → they are tagged as Interested
- If no reply → continue to follow-up

(Current reply detection is placeholder logic and can be swapped for a live inbox listener.)

**11 — AI Writes a Polite Follow-Up**
If there’s no response, an AI agent writes:
- A short, non-pushy follow-up
- Referencing the original automation idea
- Under 60 words

**12 — Follow-Up Email Is Sent**
The follow-up goes out automatically and keeps the conversation alive without manual effort.

### 📈 Why This Workflow Converts So Well
- **Instant Value First**: Leads experience AI results before being pitched anything.
- **Context-Aware Outreach**: Every email is personalized based on the lead, not a template.
- **Built-In Persistence**: The system follows up automatically; no leads fall through the cracks.
- **Fully Automated**: Once live, this workflow handles lead capture, AI delivery, outreach, follow-ups, and CRM updates. You just keep posting content.

### 🔧 Setup Requirements
To deploy this workflow, connect:
- **Google Gemini API** (image editing + agents)
- **Email provider**: Outlook, Gmail, or SMTP
- **Google Sheets** with columns: Name, Company, Email, Time, Status
- **Calendar booking link**, e.g. https://cal.com/your-link

All credentials are modular and easily swappable.

### 🎯 Summary
This n8n automation turns attention into action by:
- Delivering immediate AI value
- Following up with relevant, personalized ideas
- Nudging leads toward a booked call — automatically

It’s not just a lead funnel. It’s an AI sales assistant that runs 24/7.
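The PART 3 gate (wait 48 hours, then either tag Interested or send the follow-up) can be sketched as a small decision function. Like the template itself, reply detection here is placeholder logic, and the field names are assumptions:

```javascript
// Sketch of the follow-up gate after the 48-hour wait. In the workflow,
// `lead.replied` would come from an inbox check; here it is placeholder data.
const FOLLOW_UP_DELAY_MS = 48 * 60 * 60 * 1000;

function nextAction(lead, now = Date.now()) {
  if (lead.replied) return { status: "Interested", sendFollowUp: false };
  if (now - lead.sentAt < FOLLOW_UP_DELAY_MS) {
    return { status: "Waiting", sendFollowUp: false };
  }
  return { status: "No Reply", sendFollowUp: true };
}

const sentTwoDaysAgo = Date.now() - 49 * 60 * 60 * 1000;
console.log(nextAction({ replied: false, sentAt: sentTwoDaysAgo }));
// { status: "No Reply", sendFollowUp: true }
```

Writing the resulting status back to the Google Sheet's Status column keeps the sheet in sync with where each lead sits in the funnel.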
by Ehsan
## Who is this for?
This workflow is for Product Managers, Indie Hackers, and Customer Success teams who collect feature requests but struggle to notify specific users when those features actually ship. It helps you turn old feedback into customer loyalty and potential upsells.

## What it does
This workflow creates a "Semantic Memory" of user requests. Instead of relying on exact keyword tags, it uses vector embeddings to understand the meaning of a request. For example, if a user asks for "Night theme," and months later you release "Dark Mode," this workflow understands they are the same thing, finds that user, and drafts a personal email to them.

## How it works
1. **Listen**: Receives new requests via Tally Forms, vectorizes the text using Nomic Embed Text (via Ollama or OpenAI), and stores them in Supabase.
2. **Watch**: Monitors your changelog (RSS) or waits for a manual trigger when you ship a new feature.
3. **Match**: Performs a vector similarity search in Supabase to find users who requested semantically similar features in the past.
4. **Notify**: An AI Agent drafts a hyper-personalized email connecting the user's specific past request to the new feature, saving it as a Gmail Draft (for safety).

## Requirements
- **Supabase Project:** You need a project with the vector extension enabled.
- **AI Model:** This template is pre-configured for **Ollama (Local)** to keep it free, but works perfectly with OpenAI.
- **Tally Forms & Gmail:** For input and output.

## Setup steps
1. **Database Setup (Crucial):** Copy the SQL script provided in the workflow's red sticky note and run it in your Supabase SQL Editor. This creates the necessary tables and the vector search function.
2. **Credentials:** Add your credentials for Tally, Supabase, and Gmail.
3. **URL Config:** Update the HTTP Request node with your specific Supabase Project URL.

## SQL Script
Open your Supabase SQL Editor and paste this script to set up the tables and search function:

```sql
-- 1. Enable Vector Extension
create extension if not exists vector;

-- 2. Create Request Table (Smart Columns)
create table feature_requests (
  id bigint generated by default as identity primary key,
  content text,
  metadata jsonb,
  embedding vector(768), -- 768 for Nomic, 1536 for OpenAI
  created_at timestamp with time zone default timezone('utc'::text, now()),
  user_email text generated always as (metadata->>'user_email') stored,
  user_name text generated always as (metadata->>'user_name') stored
);

-- 3. Create Search Function
create or replace function match_feature_requests (
  query_embedding vector(768),
  match_threshold float,
  match_count int
)
returns table (
  id bigint,
  user_email text,
  user_name text,
  content text,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    feature_requests.id,
    feature_requests.user_email,
    feature_requests.user_name,
    feature_requests.content,
    1 - (feature_requests.embedding <=> query_embedding) as similarity
  from feature_requests
  where 1 - (feature_requests.embedding <=> query_embedding) > match_threshold
  order by feature_requests.embedding <=> query_embedding
  limit match_count;
end;
$$;
```

⚠️ **Dimension Warning:** This SQL is set up for 768 dimensions (compatible with the local nomic-embed-text model included in the template). If you switch the Embeddings node to OpenAI's text-embedding-3-small, you must change all instances of 768 to 1536 in the SQL script above before running it.

## How to customize
- **Change Input:** Swap the Tally node for Typeform, Intercom, or Google Sheets.
- **Change AI:** The template includes notes on how to swap the local Ollama nodes for OpenAI nodes if you prefer cloud hosting.
- **Change Output:** Swap Gmail for Slack, SendGrid, or HubSpot to notify your sales team instead of the user directly.
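The `<=>` operator used in the SQL script is pgvector's cosine distance, so `1 - distance` is cosine similarity. The same math in plain JavaScript, with toy 3-dimensional vectors standing in for real 768-dimensional Nomic embeddings:

```javascript
// Cosine similarity: dot(a, b) / (|a| * |b|). This mirrors what the SQL's
// `1 - (embedding <=> query_embedding)` computes inside Supabase.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// "Night theme" and "Dark Mode" would embed to nearby vectors, so their
// similarity clears the match_threshold while unrelated requests do not.
// (These short vectors are illustrative stand-ins, not real embeddings.)
const nightTheme = [0.9, 0.1, 0.2];
const darkMode = [0.85, 0.15, 0.25];
const csvExport = [0.1, 0.9, 0.05];

console.log(cosineSimilarity(nightTheme, darkMode) > 0.5); // true
console.log(cosineSimilarity(nightTheme, csvExport) > 0.5); // false
```

This is why the workflow matches "Night theme" to "Dark Mode" without any shared keywords: the comparison happens in embedding space, not on text.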
by TOMOMITSU ASANO
{ "name": "IoT Sensor Data Aggregation with AI-Powered Anomaly Detection", "nodes": [ { "parameters": { "content": "## How it works\nThis workflow monitors IoT sensors in real-time. It ingests data via MQTT or a schedule, normalizes the format, and removes duplicates using data fingerprinting. An AI Agent then analyzes readings against defined thresholds to detect anomalies. Finally, it routes alerts to Slack or Email based on severity and logs everything to Google Sheets.\n\n## Setup steps\n1. Configure the MQTT Trigger with your broker details.\n2. Set your specific limits in the Define Sensor Thresholds node.\n3. Connect your OpenAI credential to the Chat Model node.\n4. Authenticate the Gmail, Slack, and Google Sheets nodes.\n5. Create a Google Sheet with headers: timestamp, sensorId, location, readings, analysis.", "height": 484, "width": 360 }, "id": "298da7ff-0e47-4b6c-85f5-2ce77275cdf3", "name": "Main Overview", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -2352, -480 ] }, { "parameters": { "content": "## 1. Data Ingestion\nCaptures sensor data via MQTT for real-time streams or runs on a schedule for batch processing. Both streams are merged for unified handling.", "height": 488, "width": 412, "color": 7 }, "id": "4794b396-cd71-429c-bcef-61780a55d707", "name": "Section: Ingestion", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1822, -48 ] }, { "parameters": { "content": "## 2. Normalization & Deduplication\nSets monitoring thresholds, standardizes the JSON structure, creates a content hash, and filters out duplicate readings to prevent redundant API calls.", "height": 316, "width": 884, "color": 7 }, "id": "339e7cb7-491e-44c9-b561-983e147237d8", "name": "Section: Processing", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1376, 32 ] }, { "parameters": { "content": "## 3. 
AI Anomaly Detection\nAn AI Agent evaluates sensor data against thresholds to identify anomalies, assigning severity levels and providing actionable recommendations.", "height": 528, "width": 460, "color": 7 }, "id": "ebcb7ca3-f70c-4a90-8a2a-f489e7be4c73", "name": "Section: AI Analysis", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -422, 24 ] }, { "parameters": { "content": "## 4. Routing & Archiving\nRoutes alerts based on severity (Critical = Email+Slack, Warning = Slack) and archives all data points to Google Sheets for historical analysis.", "height": 756, "width": 900, "color": 7 }, "id": "7f2b32a5-d3b2-4fea-844f-4b39b8e8a239", "name": "Section: Alerting", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 94, -196 ] }, { "parameters": { "topics": "sensors/+/data", "options": {} }, "id": "bc86720b-9de9-4693-b090-343d3ebad3a3", "name": "MQTT Sensor Trigger", "type": "n8n-nodes-base.mqttTrigger", "typeVersion": 1, "position": [ -1760, 88 ] }, { "parameters": { "rule": { "interval": [ { "field": "minutes", "minutesInterval": 15 } ] } }, "id": "1c38f2d0-aa00-447e-bdae-bffd08c38461", "name": "Batch Process Schedule", "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.2, "position": [ -1760, 280 ] }, { "parameters": { "mode": "chooseBranch" }, "id": "f9b41822-ee61-448b-b324-38483036e0e1", "name": "Merge Triggers", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ -1536, 184 ] }, { "parameters": { "mode": "raw", "jsonOutput": "{\n \"thresholds\": {\n \"temperature\": {\"min\": -10, \"max\": 50, \"unit\": \"C\"},\n \"humidity\": {\"min\": 20, \"max\": 90, \"unit\": \"%\"},\n \"pressure\": {\"min\": 950, \"max\": 1050, \"unit\": \"hPa\"},\n \"co2\": {\"min\": 400, \"max\": 2000, \"unit\": \"ppm\"}\n },\n \"alertConfig\": {\n \"criticalChannel\": \"#iot-critical\",\n \"warningChannel\": \"#iot-alerts\",\n \"emailRecipients\": \"ops@example.com\"\n }\n}", "options": {} }, "id": 
"308705a8-edc7-4435-9250-487aa528e033", "name": "Define Sensor Thresholds", "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ -1312, 184 ] }, { "parameters": { "jsCode": "const items = $input.all();\nconst thresholds = $('Define Sensor Thresholds').first().json.thresholds;\nconst results = [];\n\nfor (const item of items) {\n let sensorData;\n try {\n sensorData = typeof item.json.message === 'string' \n ? JSON.parse(item.json.message) \n : item.json;\n } catch (e) {\n sensorData = item.json;\n }\n \n const now = new Date();\n const reading = {\n sensorId: sensorData.sensorId || sensorData.topic?.split('/')[1] || 'unknown',\n location: sensorData.location || 'Main Facility',\n timestamp: now.toISOString(),\n readings: {\n temperature: sensorData.temperature ?? null,\n humidity: sensorData.humidity ?? null,\n pressure: sensorData.pressure ?? null,\n co2: sensorData.co2 ?? null\n },\n metadata: {\n receivedAt: now.toISOString(),\n source: item.json.topic || 'batch',\n thresholds: thresholds\n }\n };\n \n results.push({ json: reading });\n}\n\nreturn results;" }, "id": "a2008189-5ace-418b-b0db-d51d63dcf2d8", "name": "Parse Sensor Payload", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -1088, 184 ] }, { "parameters": { "type": "SHA256", "value": "={{ $json.sensorId + '-' + $json.timestamp + '-' + JSON.stringify($json.readings) }}", "dataPropertyName": "dataHash" }, "id": "bf8db555-a10e-4468-a44a-cdc4c97e5b80", "name": "Generate Data Fingerprint", "type": "n8n-nodes-base.crypto", "typeVersion": 1, "position": [ -864, 184 ] }, { "parameters": { "compare": "selectedFields", "fieldsToCompare": "dataHash", "options": {} }, "id": "a45405e2-d211-449d-84d7-4538eaf56fcd", "name": "Remove Duplicate Readings", "type": "n8n-nodes-base.removeDuplicates", "typeVersion": 1, "position": [ -640, 184 ] }, { "parameters": { "text": "=Analyze this IoT sensor reading and determine if there are any anomalies:\n\nSensor ID: {{ $json.sensorId }}\nLocation: {{ 
$json.location }}\nTimestamp: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C (Normal: {{ $json.metadata.thresholds.temperature.min }} to {{ $json.metadata.thresholds.temperature.max }})\n- Humidity: {{ $json.readings.humidity }}% (Normal: {{ $json.metadata.thresholds.humidity.min }} to {{ $json.metadata.thresholds.humidity.max }})\n- CO2: {{ $json.readings.co2 }} ppm (Normal: {{ $json.metadata.thresholds.co2.min }} to {{ $json.metadata.thresholds.co2.max }})\n\nProvide your analysis in this exact JSON format:\n{\n \"hasAnomaly\": true/false,\n \"severity\": \"critical\"/\"warning\"/\"normal\",\n \"anomalies\": [\"list of detected issues\"],\n \"reasoning\": \"explanation of your analysis\",\n \"recommendation\": \"suggested action\"\n}", "options": { "systemMessage": "You are an IoT monitoring expert. Analyze sensor data and detect anomalies based on the provided thresholds. Be precise and provide actionable recommendations. Always respond in valid JSON format." } }, "id": "b60194ba-7b99-44e0-b0d7-9f1632dce4d4", "name": "AI Anomaly Detector", "type": "@n8n/n8n-nodes-langchain.agent", "typeVersion": 1.7, "position": [ -416, 184 ] }, { "parameters": { "jsCode": "const item = $input.first();\nconst originalData = $('Remove Duplicate Readings').first().json;\n\nlet aiAnalysis;\ntry {\n const responseText = item.json.output || item.json.text || '';\n const jsonMatch = responseText.match(/\\{[\\s\\S]*\\}/);\n aiAnalysis = jsonMatch ? 
JSON.parse(jsonMatch[0]) : {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Unable to parse AI response',\n recommendation: 'Manual review required'\n };\n} catch (e) {\n aiAnalysis = {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Parse error: ' + e.message,\n recommendation: 'Manual review required'\n };\n}\n\nreturn [{\n json: {\n ...originalData,\n analysis: aiAnalysis,\n alertLevel: aiAnalysis.severity,\n requiresAlert: aiAnalysis.hasAnomaly && aiAnalysis.severity !== 'normal'\n }\n}];" }, "id": "a145a8c7-538c-411a-95c6-9485acdcb969", "name": "Parse AI Analysis", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -64, 184 ] }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "critical", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "critical" } ] }, "renameOutput": true, "outputKey": "Critical" }, { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "warning", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "warning" } ] }, "renameOutput": true, "outputKey": "Warning" } ] }, "options": { "fallbackOutput": "extra" } }, "id": "1ab9785d-9f7f-4840-b1e9-0afc62b00b12", "name": "Route by Severity", "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ 160, 168 ] }, { "parameters": { "sendTo": "={{ $('Define Sensor Thresholds').first().json.alertConfig.emailRecipients }}", "subject": "=CRITICAL IoT Alert: {{ $json.sensorId }} - {{ $json.analysis.anomalies[0] || 'Anomaly Detected' }}", "message": "=CRITICAL IoT SENSOR ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTime: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ 
$json.readings.temperature }}°C\n- Humidity: {{ $json.readings.humidity }}%\n- CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis:\n{{ $json.analysis.reasoning }}\n\nDetected Issues:\n{{ $json.analysis.anomalies.join('\\n- ') }}\n\nRecommendation:\n{{ $json.analysis.recommendation }}", "options": {} }, "id": "28201a6c-10b5-4387-be89-10a57c634622", "name": "Send Critical Email", "type": "n8n-nodes-base.gmail", "typeVersion": 2.1, "position": [ 384, -80 ], "webhookId": "35b9f8fa-4a50-456e-b552-9fd20a25ccc5" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-critical" }, "text": "=🚨 CRITICAL IoT ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\n\nReadings:\n• Temperature: {{ $json.readings.temperature }}°C\n• Humidity: {{ $json.readings.humidity }}%\n• CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis: {{ $json.analysis.reasoning }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": "c5a297be-ccef-40ba-9178-65805262efba", "name": "Slack Critical Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 112 ], "webhookId": "19113595-0208-4b37-b68c-c9788c19f618" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-alerts" }, "text": "=⚠️ IoT Warning\n\nSensor: {{ $json.sensorId }} | Location: {{ $json.location }}\nIssue: {{ $json.analysis.anomalies[0] || 'Threshold approaching' }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": "5c3d7acf-0211-44dd-9f4b-a43d3796abb1", "name": "Slack Warning Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 400 ], "webhookId": "37abfb19-f82f-4449-bd69-a65635b99606" }, { "parameters": {}, "id": "6bcbb42f-ec14-4f00-a091-babcc2d2d5c4", "name": "Merge Alert Outputs", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ 608, 184 ] }, { "parameters": { "operation": "append", "documentId": { "__rl": 
true, "mode": "list", "value": "" }, "sheetName": { "__rl": true, "mode": "list", "value": "" } }, "id": "6243aa23-408d-4928-a512-811eeb3b5f9e", "name": "Archive to Google Sheets", "type": "n8n-nodes-base.googleSheets", "typeVersion": 4.5, "position": [ 832, 184 ] }, { "parameters": { "model": "gpt-4o-mini", "options": { "temperature": 0.3 } }, "id": "61081e8a-ebc9-465f-8beb-88af225e59f2", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "typeVersion": 1.2, "position": [ -344, 408 ] } ], "pinData": {}, "connections": { "MQTT Sensor Trigger": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 0 } ] ] }, "Batch Process Schedule": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 1 } ] ] }, "Merge Triggers": { "main": [ [ { "node": "Define Sensor Thresholds", "type": "main", "index": 0 } ] ] }, "Define Sensor Thresholds": { "main": [ [ { "node": "Parse Sensor Payload", "type": "main", "index": 0 } ] ] }, "Parse Sensor Payload": { "main": [ [ { "node": "Generate Data Fingerprint", "type": "main", "index": 0 } ] ] }, "Generate Data Fingerprint": { "main": [ [ { "node": "Remove Duplicate Readings", "type": "main", "index": 0 } ] ] }, "Remove Duplicate Readings": { "main": [ [ { "node": "AI Anomaly Detector", "type": "main", "index": 0 } ] ] }, "AI Anomaly Detector": { "main": [ [ { "node": "Parse AI Analysis", "type": "main", "index": 0 } ] ] }, "Parse AI Analysis": { "main": [ [ { "node": "Route by Severity", "type": "main", "index": 0 } ] ] }, "Route by Severity": { "main": [ [ { "node": "Send Critical Email", "type": "main", "index": 0 }, { "node": "Slack Critical Alert", "type": "main", "index": 0 } ], [ { "node": "Slack Warning Alert", "type": "main", "index": 0 } ], [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Send Critical Email": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Critical Alert": { "main": [ [ { "node": "Merge Alert 
Outputs", "type": "main", "index": 0 } ] ] }, "Slack Warning Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Merge Alert Outputs": { "main": [ [ { "node": "Archive to Google Sheets", "type": "main", "index": 0 } ] ] }, "OpenAI Chat Model": { "ai_languageModel": [ [ { "node": "AI Anomaly Detector", "type": "ai_languageModel", "index": 0 } ] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "", "meta": { "instanceId": "15d6057a37b8367f33882dd60593ee5f6cc0c59310ff1dc66b626d726083b48d" }, "tags": [] }
by Gtaras
Who’s it for
This workflow is for hotel managers, travel agencies, and hospitality teams who receive booking requests via email. It eliminates the need for manual data entry by automatically parsing emails and attachments, assigning booking cases to the right teams, and tracking performance metrics.

What it does
This workflow goes beyond simple automation by including enterprise-grade logic and security:
**🛡️ Gatekeeper:** Watches your Gmail and filters irrelevant emails before spending money on AI tokens.
**🧠 AI Brain:** Uses OpenAI (GPT-5-mini) to extract structured data from unstructured email bodies and PDF attachments.
**⚖️ Business Logic:** Automatically routes tasks to different teams based on urgency, room count, and VIP status.
**🔒 Security:** Catches PII (like credit card numbers) and scrubs it before it hits your database.
**🚨 Safety Net:** If anything breaks, a dedicated error-handling path logs the issue immediately so no booking is lost.
**📈 ROI Tracking:** Calculates the time saved per booking to prove the value of automation.

How to set up
Create your Google Sheet: Create a new sheet and rename the tabs to: Cases, Team Assignments, Error Logs, Success Metrics.
Add Credentials: Go to n8n Settings → Credentials and add your Gmail (OAuth2), Google Sheets, and OpenAI API keys.
Configure User Settings: Open the "Configuration: User Settings" node at the start of the workflow. Paste your specific Google Sheet ID and Admin Email there.
Adjust Business Rules: Open the "Apply Business Rules" node (Code node) to adjust the logic for team assignment (e.g., defining what counts as a "VIP" booking).
Customize Templates: Modify the email templates in the Gmail nodes to match your hotel's branding.
Test: Send a sample booking email to yourself to verify the filters and data extraction.
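The Security step is described only at a high level; here is a minimal illustrative sketch of credit-card scrubbing in a Code node (the regex and the `[REDACTED]` placeholder are assumptions, not the template's exact code):

```javascript
// Illustrative PII scrub: redact card-like number runs before any data
// is written to Google Sheets. Not the template's exact implementation.
function scrubPII(text) {
  // 13-16 digits, optionally separated by single spaces or dashes,
  // anchored on digits so no trailing separator is swallowed.
  const cardPattern = /\b\d(?:[ -]?\d){12,15}\b/g;
  return text.replace(cardPattern, '[REDACTED]');
}

const email = 'Please charge card 4111 1111 1111 1111 for the deposit.';
```

Short numeric runs such as room numbers or guest counts are left untouched, since the pattern requires at least 13 digits.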
Setup requirements
Gmail account (OAuth2 connected)
Google Sheets (with the 4 tabs listed below)
OpenAI API key (GPT-5-mini recommended)
n8n Cloud or self-hosted instance

How to customize
**Filter Booking Emails:** Update the trigger node keywords to match your specific email subjects (e.g., "Reservation", "Booking Request").
**Apply Business Rules:** Edit the JavaScript in the Code node to fit your company’s internal logic (e.g., changing priority thresholds).
**New Metrics:** Add new columns in the Google Sheet (e.g., “Revenue Metrics”) and map them in the "Update Sheet" node.
**AI Model:** Switch to GPT-5 if you need higher reasoning capabilities for complex PDF layouts.

Google Sheets Structure
This workflow uses a Google Sheets document with four main tabs to track and manage hotel booking requests.

1. Cases
The main data log for all incoming booking requests.
**case_id:** Unique identifier generated by the workflow.
**processed_date:** Timestamp when the workflow processed the booking.
**travel_agency / contact_details:** Extracted from the email.
**number_of_rooms / check_in_date:** Booking details parsed by the AI.
**special_requests:** Optional notes (e.g., airport transfer).
**assigned_team / priority:** Automatically set based on business rules.
**days_until_checkin:** Dynamic field showing urgency.

2. Team Assignments
Stores internal routing and assignment details.
**timestamp:** When the case was routed.
**case_id:** Link to the corresponding record in the Cases tab.
**assigned_team / team_email:** Which department handles this request.
**priority:** Auto-set based on room count or urgency.

3. Error Log
A critical audit trail that captures details about any failed processing steps.
**error_type:** Categorization of the failure (e.g., MISSING_REQUIRED_FIELDS).
**error_message:** Detailed technical explanation for debugging.
**original_sender / snippet:** Context to help you manually process the request if needed.

4. Success Metrics
Tracks the results of your automation to prove its value.
**processing_time_seconds:** The time saved per booking (workflow run time vs. estimated human time).
**record_updated:** Confirmation that the database was updated.

🙋 Support
If you encounter any issues during setup or have questions about customization, please reach out to our dedicated support email: foivosautomationhelp@gmail.com
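Returning to the Cases tab above: `days_until_checkin` is the one computed column, and it can be sketched in a few lines of Code-node JavaScript (illustrative, assuming check-in dates parse with `new Date()`):

```javascript
// Whole days between "now" and the parsed check-in date; used as the
// urgency signal for team routing. Hypothetical helper, not the
// template's exact code.
function daysUntilCheckin(checkInDate, now = new Date()) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return Math.ceil((new Date(checkInDate) - now) / msPerDay);
}
```

A check-in later today or tomorrow yields 1, which is the kind of value a priority rule can compare against a threshold.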
by Gtaras
Overview
Manual financial reconciliation is tedious and prone to error. This workflow functions as an AI Financial Controller, automatically monitoring your inbox for invoices, receipts, and bills, extracting the data using OCR, and syncing it to Google Sheets for approval. Unlike simple scrapers, this workflow uses a "Guardrail" AI agent to filter out non-financial emails (like newsletters) before they are processed, ensuring only actual transactions are recorded.

Who is it for?
**Finance Teams:** To automate the collection of vendor invoices.
**Freelancers:** To track expenses and receipts for tax season.
**Operations Managers:** To monitor budget spend and categorize costs automatically.

How it works
Ingest: The workflow watches a specific Gmail label (e.g., "INBOX") for new emails.
Guardrail: A Gemini-powered AI agent analyzes the email text to determine if it is a valid financial transaction. If not, the workflow stops.
Extraction (OCR): If an attachment exists, an AI Agent (GPT-4o) extracts data from the PDF; if there is no attachment, an AI Agent extracts data directly from the email body.
Validation: Code nodes check for missing fields or invalid amounts.
Business Logic: The system automatically assigns General Ledger (GL) categories (e.g., "Uber" -> "Travel") and sets approval statuses based on amount thresholds.
Sync: Validated data is logged to Google Sheets, and a confirmation email is sent. Errors are logged to a separate error sheet.

How to set up
Google Sheets: Copy this Google Sheet template to your drive. It contains the necessary tabs (Invoices, Error Logs, Success Metrics).
Configure Workflow: Open the node named "Configuration: User Settings". Paste your Google Sheet ID (found in the URL of your new sheet). Enter the Admin Email address where you want to receive error notifications.
Credentials: Connect your Gmail, Google Sheets, OpenAI (for OCR), and Google Gemini/PaLM (for Guardrails) accounts.

Requirements
n8n version 1.0 or higher
Gmail account
OpenAI API key
Google Gemini (PaLM) API key
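The Business Logic step maps vendors to GL categories and gates approval by amount. A hedged sketch of that step (the keyword table and the 500 threshold are invented examples, not the template's actual rules):

```javascript
// Illustrative GL categorization + approval gating. Vendor keywords and
// the auto-approval threshold below are assumptions for demonstration.
const GL_CATEGORIES = {
  uber: 'Travel',
  aws: 'Cloud Infrastructure',
  staples: 'Office Supplies',
};
const AUTO_APPROVE_LIMIT = 500; // hypothetical amount threshold

function classifyInvoice(vendor, amount) {
  const key = Object.keys(GL_CATEGORIES)
    .find(k => vendor.toLowerCase().includes(k));
  return {
    glCategory: key ? GL_CATEGORIES[key] : 'Uncategorized',
    // Small amounts auto-approve; larger ones wait for a human.
    status: amount <= AUTO_APPROVE_LIMIT ? 'auto-approved' : 'pending approval',
  };
}
```

Keeping the table and threshold in one place makes them easy to edit without touching the extraction or sync nodes.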
by Simeon Penev
Who’s it for
Marketing, growth, and analytics teams who want a decision-ready GA4 summary—automatically calculated, clearly color-coded, and emailed as a polished HTML report.

How it works / What it does
**Get Client (Form Trigger)** collects **GA4 Property ID (“Account ID”)**, **Key Event**, date ranges (current & previous), Client Name, and recipient email.
**Overall Metrics This Period / Previous Period (GA4 Data API)** pull sessions, users, engagement, bounce rate, and more for each range.
**Form Submits This Period / Previous Period (GA4 Data API)** fetch key-event counts for conversion comparisons.
**Code** normalizes form dates for API requests.
**AI Agent** builds a **valid HTML email**: calculates % deltas, applies green (#10B981) for positive and red (#EF4444) for negative changes, writes a summary and recommendations, and produces the final HTML only.
**Send a message (Gmail)** sends the formatted HTML report to the specified email address with a contextual subject.

How to set up
1) Add credentials: Google Analytics OAuth2, OpenAI (Chat), Gmail OAuth2.
2) Ensure the form fields match your GA4 property and event names; “Account ID” = GA4 Property ID.
Property ID - https://take.ms/vO2MG
Key event - https://take.ms/hxwQi
3) Publish the form URL and run a test submission.

Requirements
GA4 property access (Viewer/Analyst) • OpenAI API key • Gmail account with send permission.

Resources
Google OAuth2 (GA4) – https://docs.n8n.io/integrations/builtin/credentials/google/oauth-generic/
OpenAI credentials – https://docs.n8n.io/integrations/builtin/credentials/openai/
Gmail OAuth2 – https://docs.n8n.io/integrations/builtin/credentials/google/
GA4 Data API overview – https://developers.google.com/analytics/devguides/reporting/data/v1
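The delta math the AI Agent is prompted to apply can be sketched directly (the 0% fallback when the previous period is empty is an assumption to avoid division by zero):

```javascript
// Percent change vs. the previous period, with the report's color coding:
// green (#10B981) for improvements, red (#EF4444) for declines.
function metricDelta(current, previous) {
  const pct = previous === 0 ? 0 : ((current - previous) / previous) * 100;
  return {
    delta: `${pct >= 0 ? '+' : ''}${pct.toFixed(1)}%`,
    color: pct >= 0 ? '#10B981' : '#EF4444',
  };
}
```

For example, 1,200 sessions against 1,000 last period renders as a green "+20.0%" cell, while 800 against 1,000 renders as a red "-20.0%".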