by Yaron Been
Support Director Agent with Customer Support Team Description Complete AI-powered customer support department with a Support Director agent orchestrating specialized support team members for comprehensive customer service operations. Overview This n8n workflow creates a comprehensive customer support department using AI agents. The Support Director agent analyzes support requests and delegates tasks to specialized agents for tier 1 support, technical assistance, customer success, knowledge management, escalation handling, and quality assurance. Features Strategic Support Director agent using OpenAI O3 for complex support decision-making Six specialized support agents powered by GPT-4.1-mini for efficient execution Complete customer support lifecycle coverage from first contact to resolution Automated technical troubleshooting and documentation creation Customer success and retention strategies Escalation management for priority issues Quality assurance and performance monitoring Team Structure Support Director Agent**: Strategic support oversight and task delegation (O3 model) Tier 1 Support Agent**: First-line support, basic troubleshooting, account assistance Technical Support Specialist**: Complex technical issues, API debugging, integrations Customer Success Advocate**: Onboarding, feature adoption, retention strategies Knowledge Base Manager**: Help articles, FAQs, documentation creation Escalation Handler**: Priority issues, VIP customers, crisis management Quality Assurance Specialist**: Support quality monitoring, performance analysis How to Use Import the workflow into your n8n instance Configure OpenAI API credentials for all chat models Deploy the webhook for chat interactions Send support requests via chat (e.g., "Customer can't connect to our API endpoint") The Support Director will analyze and delegate to appropriate specialists Receive comprehensive support solutions and documentation Use Cases Complete Support Cycle**: Inquiry triage → Resolution → Follow-up → Quality review Technical Documentation**: API troubleshooting guides, integration manuals Customer Onboarding**: Welcome sequences, feature tutorials, training materials Escalation Management**: VIP support protocols, complaint resolution procedures Quality Monitoring**: Response evaluation, team performance analytics Knowledge Base**: Self-service content creation, FAQ optimization Requirements n8n instance with LangChain nodes OpenAI API access (O3 for Support Director, GPT-4.1-mini for specialists) Webhook capability for chat interactions Optional: Integration with CRM, helpdesk, or ticketing systems Cost Optimization O3 model used only for strategic Support Director decisions GPT-4.1-mini provides 90% cost reduction for specialist tasks Parallel processing enables simultaneous agent execution Solution template library reduces redundant response generation Integration Options Connect to helpdesk systems (Zendesk, Freshdesk, Intercom, etc.) Integrate with CRM platforms (Salesforce, HubSpot, etc.) Link to knowledge base systems (Confluence, Notion, etc.) Connect to monitoring tools for proactive support Building Blocks Disclaimer Important Note: This workflow is designed as a foundational building block for your customer support automation. While it provides a comprehensive multi-agent framework, you may need to customize prompts, add specific integrations, or modify agent behaviors to match your exact business requirements and support processes. Consider this a starting point that can be extended and tailored to your unique customer support needs. Contact & Resources Website**: nofluff.online YouTube**: @YaronBeen LinkedIn**: Yaron Been Tags #CustomerSupport #HelpDesk #TechnicalSupport #CustomerSuccess #SupportAutomation #QualityAssurance #KnowledgeManagement #EscalationManagement #ServiceExcellence #CustomerExperience #n8n #OpenAI #MultiAgentSystem #SupportTech #CX #Troubleshooting #CustomerCare #SupportOps
by n8n Automation Expert | Template Creator | 2+ Years Experience
🌦️ Intelligent Aquaculture Automation for Indonesia Transform your fish farming operation with this cutting-edge n8n workflow that combines Indonesia's official BMKG weather data with IoT-powered feeding automation. This system intelligently reduces feed by 20% when rain probability exceeds 60%, preventing overfeeding during adverse weather conditions that could compromise water quality and fish health. 🚀 Key Features 🌦️ Real-time BMKG Integration: Fetches official Indonesian weather forecasts every 12 hours using BMKG's public API with precise ADM4 regional targeting 🤖 Smart Decision Engine: Advanced JavaScript algorithms analyze 6-hour and 12-hour rain probabilities to make optimal feeding decisions automatically 📱 ESP8266 IoT Control: Seamlessly sends HTTP webhook commands to your ESP8266/ESP32-based fish feeder hardware with JSON payloads 💬 Rich Telegram Notifications: Comprehensive reports including weather analysis, feeding decisions, hardware status, and next feeding schedule ⏰ Precision Scheduling: Automated execution at 05:30 and 16:30 WIB (Indonesian Western Time) with cron-based triggers 📊 Activity Logging: Complete audit trail with timestamps, weather data, and feeding decisions for operational monitoring 🛠️ Technical Architecture Core Node Components: Schedule Trigger:** Automated twice-daily execution HTTP Request:** BMKG API integration with timeout handling Code (JavaScript):** Weather parsing and feeding ratio calculations IF Condition:** Intelligent branching based on configurable rain thresholds Telegram:** Formatted notifications with markdown support Set Variables:** Secure credential management with placeholder tokens 📋 Prerequisites ✅ n8n Instance: Self-hosted or cloud deployment ✅ Telegram Bot: Create via @BotFather for notifications ✅ ESP8266/ESP32: Hardware with servo motor for automated feeding ✅ Arduino Skills: Basic programming knowledge for hardware setup ✅ Indonesian Location: Uses BMKG API with ADM4 regional codes ⚙️ Configuration Requirements 📍 Location Settings: Update latitude, longitude, and BMKG ADM4 code in the Config node 🤖 Telegram Bot: Configure bot token and chat ID in credentials 🔗 ESP8266 Webhook: Set your device's IP address for hardware communication 📊 Feeding Parameters: Customize rain threshold (default: 60%) and feed reduction (default: -20%) 🎯 Perfect For 🏭 Commercial Aquaculture: Large-scale fish farming operations requiring weather-aware feeding 🏠 Hobbyist Enthusiasts: Home aquarium and pond automation projects 🌱 Smart Agriculture: Integration with comprehensive farm management ecosystems 🔧 IoT Learning: Educational platform for weather-based automation development 🌍 Environmental Research: Combining meteorological data with livestock care protocols 📊 Rich Output Examples The workflow generates detailed Telegram reports featuring: Current Weather Analysis:** 6-hour and 12-hour rain probability breakdowns Feeding Decision Logic:** Clear rationale for feed adjustments with percentages Hardware Confirmation:** ESP8266 response status and command execution verification Schedule Preview:** Next automated feeding time with countdown Historical Logs:** Comprehensive activity tracking for pattern analysis 🔧 Hardware Integration Guide Designed for ESP8266-based feeders accepting HTTP POST commands. The workflow transmits structured JSON containing: { "command": "FEED_REDUCE_20", "feed_ratio": -20, "rain_prob": 75, "timestamp": "2024-09-18T10:30:00Z", "location": "Main Pond" } 🌍 Regional Adaptation Indonesia-Optimized: Built specifically for BMKG's official weather API with ADM4 regional precision Global Compatibility: Easily adaptable for international weather services by modifying HTTP requests and parsing logic Scalable Architecture: Supports multiple pond locations with separate ADM4 configurations 🔒 Security & Credentials All API keys use {{PLACEHOLDER}} format for secure credential management No hardcoded sensitive information in workflow nodes Telegram bot tokens managed through n8n's credential system ESP8266 webhooks support local network security 📈 Performance Benefits 20% Feed Optimization:** Automatic reduction during high rain probability periods Water Quality Protection:** Prevents overfeeding that degrades aquatic environment Cost Efficiency:** Reduces feed waste while maintaining fish health 24/7 Monitoring:** Continuous weather analysis without manual intervention Scalable Operations:** Supports multiple feeding locations from single workflow
by WeblineIndia
⚙️ Advanced Equipment Health Monitor with MS Teams Integration (n8n | API | Google Sheets | MSTeams) This n8n workflow automatically monitors equipment health by fetching real-time metrics like temperature, voltage and operational status. If any of these parameters cross critical thresholds, an alert is instantly sent to a Microsoft Teams channel and the event is logged in Google Sheets. The workflow runs every 15 minutes by default. ⚡ Quick Implementation Steps Import the workflow JSON into your n8n instance. Open the "Set Config" node and update: API endpoint Teams webhook URL Threshold values Google Sheet ID Activate the workflow to start receiving alerts every 15 minutes. 🎯 Who’s It For Renewable energy site operators (solar, wind) Plant maintenance and operations teams Remote infrastructure monitoring services IoT-integrated energy platforms Enterprise environments using Microsoft Teams 🛠 Requirements | Tool | Purpose | |------|---------| | n8n Instance | To run and schedule automation | | HTTP API | Access to your equipment or IoT platform health API | | Microsoft Teams | Incoming Webhook URL configured | | Google Sheets | Logging and analytics | | SMTP (optional) | For email-based alternatives or expansions | 🧠 What It Does Runs every 15 minutes** to check the latest equipment metrics. Compares values** (temperature, voltage, status) against configured thresholds. Triggers a Microsoft Teams message** when a threshold is breached. Appends the alert data** to a Google Sheet for logging and review. 🧩 Workflow Components Set Node:** Configures thresholds, endpoints, webhook URL and Sheet ID. Cron Node:** Triggers the check every 15 minutes. HTTP Request Node:** Pulls data from your equipment health monitoring API. IF Node:** Evaluates if conditions are within or outside defined limits. MS Teams Alert Node:** Sends structured alerts using a Teams incoming webhook. Google Sheets Node:** Logs alert details for recordkeeping and analytics. 🔧 How To Set Up – Step-by-Step Import Workflow: In n8n, click Import and upload the provided .json file. Update Configurations: Open the Set Config node. Replace the placeholder values: apiEndpoint: URL to fetch equipment data. teamsWebhookUrl: Your MS Teams channel webhook. temperatureThreshold: Example = 80 voltageThreshold: Example = 400 googleSheetId: Google Sheet ID (must be shared with n8n service account). Check Webhook Integration: Ensure your MS Teams webhook is properly authorized and points to a live channel. Run & Monitor: Enable the workflow and view logs/alerts. Adjust thresholds as needed. 🧪 How To Customize | Customization | How | |---------------|-----| | Add more parameters (humidity, pressure) | Extend the HTTP + IF node conditions | | Change alert frequency | Edit the Cron node | | Use Slack or Email instead of Teams | Replace MS Teams node with Slack or Email node | | Add PDF Report Generation | Use HTML → PDF node and email the report | | Export to Database | Add a PostgreSQL or MySQL node instead of Google Sheets | ➕ Add‑ons (Advanced) | Add-on | Description | |--------|-------------| | 📦 Auto-Ticketing | Auto-create issues in Jira, Trello or ClickUp for serious faults | | 📊 Dashboard Sync | Send real-time logs to BigQuery or InfluxDB | | 🧠 Predictive Alerts | Use machine learning APIs to flag anomalies | | 🗂 Daily Digest | Compile all incidents into a daily summary email or Teams post | | 📱 Mobile Alert | Integrate Twilio for SMS alerts or WhatsApp notifications | 📈 Example Use Cases Monitor solar inverter health for overheating or voltage drops. Alert field engineers via Teams when a wind turbine sensor fails. Log and visualize hardware issues for weekly analytics. Automate SLA compliance tracking through timely notifications. Ensure distributed infrastructure (e.g., substations) are always in operational range. 🧯 Troubleshooting Guide | Issue | Possible Cause | Solution | |-------|----------------|----------| | No Teams alert | Incorrect webhook URL or formatting | Recheck the Teams webhook and payload | | Workflow not triggering | Cron node misconfigured | Ensure it’s set to run every 15 mins and workflow is active | | Google Sheet not updating | Sheet ID is wrong or not shared | Share Sheet with your n8n Google service account | | No data from API | Endpoint URL is down or wrong | Test the endpoint manually with Postman or browser | 📞 Need Assistance? Need help tailoring this to your exact equipment type or expanding the workflow? 👉 Contact WeblineIndia – Expert automation partners for renewable energy, infrastructure and enterprise workflows.
by Oneclick AI Squad
This workflow automates flight price comparison across multiple booking platforms (Kayak, Skyscanner, Expedia, Google Flights). It accepts natural language queries, extracts flight details using NLP, scrapes prices in parallel, identifies the best deals, and sends professional email reports with comprehensive price breakdowns and booking links. 📦 What You'll Get A fully functional, production-ready n8n workflow that: ✅ Compares flight prices across 4 major platforms (Kayak, Skyscanner, Expedia, Google Flights) ✅ Accepts natural language requests ("Flight from NYC to London on March 25") ✅ Sends beautiful email reports with best deals ✅ Returns real-time JSON responses for web apps ✅ Handles errors gracefully with helpful messages ✅ Includes detailed documentation with sticky notes 🚀 Quick Setup (3 Steps) Step 1: Import Workflow to n8n Copy the JSON from the first artifact (workflow file) Open n8n → Go to Workflows Click "Import from File" → Paste JSON → Click Import ✅ Workflow imported successfully! Step 2: Setup Python Scraper On your server (where n8n SSH nodes will connect): Navigate to your scripts directory cd /home/oneclick-server2/ Create the scraper file nano flight_scraper.py Copy the entire Python script from the second artifact Save with Ctrl+X, then Y, then Enter Make it executable chmod +x flight_scraper.py Install required packages pip3 install selenium Install Chrome and ChromeDriver sudo apt update sudo apt install -y chromium-browser chromium-chromedriver Test the scraper python3 flight_scraper.py JFK LHR 2025-03-25 2025-03-30 round-trip 1 economy kayak Expected Output: Delta|$450|7h 30m|0|10:00 AM|6:30 PM|https://kayak.com/... British Airways|$485|7h 45m|0|11:30 AM|8:15 PM|https://kayak.com/... ... Step 3: Configure n8n Credentials A. Setup SMTP (for sending emails): In n8n: Credentials → Add Credential → SMTP Fill in details: Host: smtp.gmail.com Port: 587 User: your-email@gmail.com Password: [Your App Password] For Gmail Users: Enable 2FA: https://myaccount.google.com/security Create App Password: https://myaccount.google.com/apppasswords Use the 16-character password in n8n B. Setup SSH (already configured if you used existing credentials): In workflow, SSH nodes use: ilPh8oO4GfSlc0Qy Verify credential exists and points to correct server Update path if needed: /home/oneclick-server2/ C. Activate Workflow: Click the workflow toggle → Active ✅ Webhook is now live! 🎯 How to Use Method 1: Direct Webhook Call curl -X POST https://your-n8n-domain.com/webhook/flight-price-compare \ -H "Content-Type: application/json" \ -d '{ "message": "Flight from Mumbai to Dubai on 15th March, round-trip returning 20th March", "email": "user@example.com", "name": "John Doe" }' Response: { "success": true, "message": "Flight comparison sent to user@example.com", "route": "BOM → DXB", "bestPrice": 450, "airline": "Emirates", "totalResults": 18 } Method 2: Natural Language Queries The workflow understands various formats: ✅ All these work: "Flight from New York to London on 25th March, one-way" "NYC to LHR March 25 round-trip return March 30" "I need a flight from Mumbai to Dubai departing 15th March" "JFK LHR 2025-03-25 2025-03-30 round-trip" Supported cities (auto-converts to airport codes): New York → JFK London → LHR Mumbai → BOM Dubai → DXB Singapore → SIN And 20+ more cities Method 3: Structured JSON { "from": "JFK", "to": "LHR", "departure_date": "2025-03-25", "return_date": "2025-03-30", "trip_type": "round-trip", "passengers": 1, "class": "economy", "email": "user@example.com", "name": "John" } 📧 Email Report Example Users receive an email like this: FLIGHT PRICE COMPARISON Route: JFK → LHR Departure: 25 Mar 2025 Return: 30 Mar 2025 Trip Type: round-trip Passengers: 1 🏆 BEST DEAL British Airways Price: $450 Duration: 7h 30m Stops: Non-stop Platform: Kayak 💰 Save $85 vs highest price! 📊 ALL RESULTS (Top 10) British Airways - $450 (Non-stop) - Kayak Delta - $475 (Non-stop) - Google Flights American Airlines - $485 (Non-stop) - Expedia Virgin Atlantic - $495 (Non-stop) - Skyscanner United - $520 (1 stop) - Kayak ... Average Price: $495 Total Results: 23 Prices subject to availability. Happy travels! ✈️ 🔧 Customization Options Change Scraping Platforms Add more platforms: Duplicate an SSH scraping node Change platform parameter: kayak → new-platform Add scraping logic in flight_scraper.py Connect to "Aggregate & Analyze Prices" node Remove platforms: Delete unwanted SSH node Workflow continues with remaining platforms Modify Email Format Edit the "Format Email Report" node: // Change to HTML format const html = ` <!DOCTYPE html> <html> <body> Flight Deals Best price: ${bestDeal.currency}${bestDeal.price} </body> </html> `; return [{ json: { subject: "...", html: html, // Instead of text ...data } }]; Then update "Send Email Report" node: Change emailFormat to html Use {{$json.html}} instead of {{$json.text}} Add More Cities/Airports Edit "Parse & Validate Flight Request" node: const airportCodes = { ...existing codes..., 'berlin': 'BER', 'rome': 'FCO', 'barcelona': 'BCN', // Add your cities here }; Change Timeout Settings In each SSH node, add: "timeout": 30000 // 30 seconds 🐛 Troubleshooting Issue: "No flights found" Possible causes: Scraper script not working Website structure changed Dates in past Invalid airport codes Solutions: Test scraper manually cd /home/oneclick-server2/ python3 flight_scraper.py JFK LHR 2025-03-25 "" one-way 1 economy kayak Check if output shows flights If no output, check Chrome/ChromeDriver installation Issue: "Connection refused" (SSH) Solutions: Verify SSH credentials in n8n Check server is accessible: ssh user@your-server Verify path exists: /home/oneclick-server2/ Check Python installed: which python3 Issue: "Email not sending" Solutions: Verify SMTP credentials Check email in spam folder For Gmail: Confirm App Password is used (not regular password) Test SMTP connection: telnet smtp.gmail.com 587 Issue: "Webhook not responding" Solutions: Ensure workflow is Active (toggle on) Check webhook path: /webhook/flight-price-compare Test with curl command (see "How to Use" section) Check n8n logs: Settings → Log Streaming Issue: "Scraper timing out" Solutions: In flight_scraper.py, increase wait times time.sleep(10) # Instead of time.sleep(5) Or increase WebDriverWait timeout WebDriverWait(driver, 30) # Instead of 20 📊 Understanding the Workflow Node-by-Node Explanation 1. Webhook - Receive Flight Request Entry point for all requests Accepts POST requests Path: /webhook/flight-price-compare 2. Parse & Validate Flight Request Extracts flight details from natural language Converts city names to airport codes Validates required fields Returns helpful errors if data missing 3. Check If Request Valid Routes to scraping if valid Routes to error response if invalid 4-7. Scrape [Platform] (4 nodes) Run in parallel for speed Each calls Python script with platform parameter Continue on failure (don't break workflow) Return pipe-delimited flight data 8. Aggregate & Analyze Prices Collects all scraper results Parses flight data Finds best overall deal Finds best non-stop flight Calculates statistics Sorts by price 9. Format Email Report Creates readable text report Includes route details Highlights best deal Lists top 10 results Shows statistics 10. Send Email Report Sends formatted email to user Uses SMTP credentials 11. Webhook Response (Success) Returns JSON response immediately Includes best price summary Confirms email sent 12. Webhook Response (Error) Returns helpful error message Guides user on what's missing 🎨 Workflow Features ✅ Included Features Natural Language Processing**: Understands flexible input formats Multi-Platform Comparison**: 4 major booking sites Parallel Scraping**: All platforms scraped simultaneously Error Handling**: Graceful failures, helpful messages Email Reports**: Professional format with all details Real-Time Responses**: Instant webhook feedback Sticky Notes**: Detailed documentation in workflow Airport Code Mapping**: Auto-converts 20+ cities 🚧 Not Included (Easy to Add) Price Alerts**: Monitor price drops (add Google Sheets) Analytics Dashboard**: Track searches (add Google Sheets) SMS Notifications**: Send via Twilio Slack Integration**: Post to channels Database Logging**: Store searches in PostgreSQL Multi-Currency**: Show prices in the user's currency 💡 Pro Tips Tip 1: Speed Up Scraping Use faster scraping service (like ScraperAPI): // Replace SSH nodes with HTTP Request nodes { "url": "http://api.scraperapi.com", "qs": { "api_key": "YOUR_KEY", "url": "https://kayak.com/flights/..." } } Tip 2: Cache Results Add caching to avoid duplicate scraping: // In Parse node, check cache first const cacheKey = ${origin}-${dest}-${departureDate}; const cached = await $cache.get(cacheKey); if (cached && Date.now() - cached.time < 3600000) { return cached.data; // Use 1-hour cache } Tip 3: Add More Platforms Easy to add Momondo, CheapOair, etc.: Add function in flight_scraper.py Add SSH node in workflow Connect to aggregator Tip 4: Improve Date Parsing Handle more formats: // Add to Parse node const formats = [ 'DD/MM/YYYY', 'MM-DD-YYYY', 'YYYY.MM.DD', // Add your formats ];
by Aitor | 1Node
This automated n8n workflow streamlines lead qualification by taking structured lead data from Tally forms, enriching it with Qwen-3’s AI analysis, and promptly notifying your sales or delivery teams. It provides concise summaries, actionable insights, and highlights missing information to focus outreach efforts efficiently. The workflow includes security best practices to prevent prompt injections and ensures data integrity and privacy throughout. Requirements Tally Forms A Tally account with an active lead qualification form Webhook integration enabled to send form responses to n8n Qwen-3 Large Language Model API key and access to your chosen AI model via OpenRouter Gmail Notification Gmail account credentials connected in n8n Workflow Breakdown Trigger: Receive Tally form submission via n8n Webhook The workflow starts from a Webhook node listening for POST requests from your Tally form. Extract and map Tally form data Parse JSON to obtain fields like Company Name, Full Name, Work Email, Employee Count, Industry, Main Challenges Encountered, Goals With the Project, Urgency or Date When Solution Is Needed, Estimated Budget, and Anything Else We Should Know. Construct the Lead Qualification prompt Combine a secure system prompt with user data from the form. This prompt instructs Qwen-3 to generate summaries, identify key challenges, recommend action points, suggest follow-up questions, and more. Send notification with AI analysis Deliver the formatted message through your chosen channel(s) such as email or Slack, enabling your team to quickly act on qualified leads. Potential Improvements Capture Lead Role and Authority:** Add fields to the form for role and decision-making authority to improve lead qualification accuracy. Expand Notification Channels:** Include SMS or Microsoft Teams notifications alongside email and Slack for better team reach. Automate Lead Scoring:** Incorporate a numeric or qualitative lead score based on key input factors to prioritize follow-ups. Integrate CRM Task Creation:** Automatically create follow-up tasks or reminders in CRM systemss. 🙋♂️ Need Help? Feel free to contact us at 1 Node Get instant access to a library of free resources we created.
by Yashraj singh sisodiya
Summarize YouTube Videos with Gemini AI, Google Sheets & WhatsApp/Telegram Aim The aim of the YouTube Video Summarizer Workflow is to automate the process of summarizing or extracting transcripts from YouTube videos with the help of Gemini AI, while optionally storing results and distributing them to users via WhatsApp, Telegram, or Google Sheets. This enables fast, consistent generation and sharing of English summaries or transcripts from public YouTube content. Goal The goal is to: Allow users to submit a YouTube link through various channels (Form Webhook, WhatsApp, Telegram). Use Gemini AI to either summarize the content or transcribe the complete video, always outputting in English. Return the output to the user via their original channel and optionally log it to Google Sheets for record-keeping. Requirements The workflow relies on specific integrations and configurations: n8n Platform**: Self-hosted or cloud n8n instance to host and automate the workflow. Node Requirements**: Form/Webhook Trigger: Web form for pasting the YouTube link. WhatsApp Trigger: Starts workflow from incoming WhatsApp messages (YouTube link as input). Telegram Trigger: Initiates workflow from Telegram chat messages containing YouTube links. Gemini AI Node: Consumes the YouTube link and processes it for summarization or transcription (always in English). Google Sheets Node: Writes the result (summary/transcript) into a Google Sheet for logging and future reference. WhatsApp/Telgram Send Message Nodes: Delivers summarized results or transcripts back to the user on the same platform where they triggered the workflow. Credentials**: Gemini/Google AI Platform account for AI summarization and transcription. Google Sheets account for storing output. WhatsApp Business API for WhatsApp automation. Telegram Bot API for Telegram automation. Input Requirements**: Publicly-accessible YouTube video link (max ~30 min, as per summarized logic). Output**: English video summary or full transcript, delivered via user’s requested channel and/or stored in Google Sheets. API Usage The workflow integrates several APIs for optimal automation: Gemini AI API**: Used in the main summarization node. Receives the YouTube link and a prompt with detailed instructions. Returns either a clear, concise English summary or a full transcript translated into English, handling Hindi, English, or mixed-language videos. [Ref: Workflow JSON] Google Sheets API**: Used to log the output for each processed video, making it easy to reference histories or track requests. [Ref: Workflow JSON] WhatsApp Business API**: Sends back the summary or transcript to the user who initiated via WhatsApp. [Ref: Workflow JSON] Telegram Bot API**: Sends results back to Telegram users directly in chat. [Ref: Workflow JSON] Output Formatting/Conversion The AI output is always in English, tailored to the option chosen (summary vs transcript). Structured output: Bulleted, neutral, and easy to read, suitable for sharing with users or for business documentation. Google Sheets node maps and writes each video’s results to a dedicated row for easy history review. How to Use By default, the workflow uses a manual trigger via a web form, but you may add triggers for WhatsApp or Telegram to suit your needs. Users paste a YouTube link, then select whether they want a summary or transcript (based on your implementation logic). Results are returned in their channel and optionally logged to your Google Sheet. All processing is handled securely using your Gemini API credentials. You can expand this logic by adding more integrations (email, Slack, etc.). Customising this Workflow Custom prompts can be written for different styles or output formats (e.g., SEO key points, step-by-step guides). Add logic for batch processing multiple videos or bulk export to different cloud drives. Integrate into central dashboards, CRMs, or content pipelines using n8n’s hundreds of available integrations. Good to Know Gemini pricing:** At the time of writing, each YouTube video summarization costs $0.039 USD. See official Gemini Pricing for current rates. Geo-restriction:** The Gemini video model may be geo-restricted (error: “model not found” outside some regions). Video Limits:** Intended for videos up to ~30 minutes for best processing reliability. Scaling:** Can be easily adapted for high-volume operations using n8n’s queue and scheduling features. Workflow Summary The YouTube Video Summarizer Workflow automates summarizing and transcribing YouTube videos using AI and n8n. Users send video links via web forms, WhatsApp, or Telegram. Results are generated via Gemini, sent back in-app, and logged to Google Sheets, enabling effortless knowledge sharing and organizational automation at scale. Timestamp: 12:37 PM IST, Wednesday, September 17, 2025
by Malte Sohns
Monitor and manage Docker containers from Telegram with AI log analysis This workflow gives you a smart Telegram command center for your homelab. It lets you monitor Docker containers, get alerts the moment something fails, view logs, and restart services remotely. When you request logs, they're automatically analyzed by an LLM so you get a clear, structured breakdown instead of raw terminal output. Who it's for Anyone running a self-hosted environment who wants quick visibility and control without SSHing into a server. Perfect for homelab enthusiasts, self-hosters, and DevOps folks who want a lightweight on-call assistant. What it does Receives container heartbeat alerts via webhook Sends Telegram notifications for status changes or failures Lets you request logs or restart services from chat Analyzes logs with GPT and summarizes them clearly Supports manual “status” and “update all containers” commands Requirements Telegram Bot API credentials SSH access to your Docker host How to set it up Create a Telegram bot and add its token as credentials Enter your server SSH credentials in the SSH node Deploy the workflow and set your webhook endpoint Tailor container names or heartbeat logic to your environment Customize it Swap SSH commands for Kubernetes if you're on k8s Change the AI model to another provider Extend with health checks or auto-healing logic
by SpaGreen Creative
Shopify Auto Send WhatsApp Thank-You Messages & Loyalty Coupon Using Rapiwa API Who is this for? This workflow is for Shopify store owners, marketers, and support teams who want to automatically message their high-value customers on WhatsApp when new discount codes are created. What this workflow does Fetches customer data from Shopify Filters customers where total_spent > 5000 Cleans phone numbers (removes non-digit characters) and normalizes them to an international format Verifies numbers via the Rapiwa API (verify-whatsapp endpoint) Sends coupon or thank-you messages to verified numbers via the Rapiwa send-message endpoint Logs each send attempt to Google Sheets with status and validity Uses batching (SplitInBatches) and Wait nodes to avoid rate limits Key features Automated trigger: Shopify webhook (discounts/create) or manual trigger Targeted sending to high-value customers Pre-send verification to reduce failed sends Google Sheets logging and status updates Rate-limit protection using Wait node #How to use? Step-by-step setup 1) Prepare a Google Sheet Columns: name, number, status, validity, check (optional) Example row: Abdul Mannan | 8801322827799 | not sent | unverified | check 2) Configure n8n credentials Shopify: store access token (X-Shopify-Access-Token) Rapiwa: Bearer token (HTTP Bearer credential) Google Sheets: OAuth2 credentials and sheet access 3) Configure the nodes Webhook/Trigger: Shopify discounts/create or Manual Trigger HTTP Request (Shopify): /admin/api/<version>/customers.json Code node: filter customers total_spent > 5000 and map fields SplitInBatches: batching/looping Code (clean number): waNoStr.replace(/\D/g, "") HTTP Request (Rapiwa verify): POST https://app.rapiwa.com/api/verify-whatsapp body { number } IF node: check data.exists to decide branch HTTP Request (Rapiwa send-message): POST https://app.rapiwa.com/api/send-message body { number, message_type, message } Google Sheets Append/Update: write status and validity Wait: add 2–5 seconds delay between sends 4) Test with a small batch Run manually with 2–5 records first and verify results Google Sheet column structure A Google Sheet formatted like this ➤ Sample | Name | Number | Status | Validity | | -------------- | ------------- | -------- | ---------- | | Abdul Mannan | 8801322827798 | not sent | unverified | | Abdul Mannan | 8801322827799 | sent | verified | Requirements Shopify Admin API access (store access token) Rapiwa account and Bearer token Google account and Google Sheet (OAuth2 setup) n8n instance (nodes used: HTTP Request, Code, SplitInBatches, IF, Google Sheets, Wait) Customization ideas Adjust the filter (e.g., order count, customer tags) Use message templates to insert name and coupon code per customer Add an SMS or email fallback for unverified numbers Send a run summary to admin (Slack / email) Store logs in a database for deeper analysis Important notes data.exists may be a boolean or a string — normalize it in a Code node before using in an IF node Ensure Google Sheets column names match exactly Store Rapiwa and Shopify tokens securely in n8n credentials Start with small batches for testing and scale gradually Useful Links Dashboard:** https://app.rapiwa.com Official Website:** https://rapiwa.com Documentation:** https://docs.rapiwa.com Support & Help WhatsApp**: Chat on WhatsApp Discord**: SpaGreen Community Facebook Group**: SpaGreen Support Website**: https://spagreen.net Developer Portfolio**: Codecanyon SpaGreen
by Ahmed Sherif
AI-Powered Lead Scraping Automation using APIFY Scraper and Gemini Filtering to Google Sheets This is a fully automated, end-to-end pipeline designed to solve the challenge of inconsistent and low-quality lead data from large-scale scraping operations. The system programmatically fetches raw lead information from sources like Apollo or via Apify, processes it through an intelligent validation layer, and delivers a clean, deduplicated, and ready-to-use dataset directly into Google Sheets. By integrating Google Gemini for data cleansing, it moves beyond simple presence checks to enforce data hygiene and standardization, ensuring that sales teams only engage with properly formatted and complete leads. This automation eliminates hours of manual data cleaning, accelerates the speed from lead acquisition to outreach, and significantly improves the integrity of the sales pipeline. Features Batch Processing**: Systematically processes up to 1000 leads per batch and automatically loops through the entire dataset. This ensures stable, memory-efficient operation even with tens of thousands of scraped contacts. AI Validation**: Google Gemini acts as a data quality gatekeeper. It validates the presence and plausible format of critical fields (e.g., First Name, Company Name) and cleanses data by correcting common formatting issues. Smart Deduplication**: Before appending a new lead, the system cross-references its email address against the entire Google Sheet to prevent duplicate entries, ensuring a single source of truth. Auto Lead IDs**: Generates a unique, sequential ID for every new lead in the format AP-DDMMYY-xxxx. This provides a consistent reference key for tracking and CRM integration. Data Quality Reports**: Delivers real-time operational visibility by sending a concise summary to a Telegram channel after each batch, detailing success, warning, and error counts. Rate Limiting**: Incorporates a 30-second delay between batches to respect Google Sheets API limits, preventing throttling and ensuring reliable, uninterrupted execution. How It Works The workflow is initiated by an external trigger, such as a webhook, carrying the raw scraped data payload. It authenticates and fetches the complete list of leads from the Apify or Apollo API endpoint. The full list is automatically partitioned into manageable batches of 1000 leads for efficient processing. Each lead is individually passed to the Gemini AI Agent, which validates that required fields like Name, Email, and Company are present and correctly formatted. Validated leads are assigned a unique Lead ID, and all data fields are standardized for consistency. The system performs a lookup in the target Google Sheet to confirm the lead's email does not already exist. Clean, unique leads are appended as a new row to the designated spreadsheet. A completion notice is sent via the Telegram Bot, summarizing the batch results with clear statistics. Requirements Apify/Apollo API access credentials. Google Cloud project with OAuth2 credentials for Google Sheets API access. A configured Telegram Bot with its API Token and a target Chat ID. A Google Gemini API Key for data validation and cleansing. This system is ideal for sales and marketing operations teams managing high-volume lead generation campaigns, providing automated data quality assurance and accelerating pipeline development.
by Zain Khan
AI Product Photography With Nano Banana and Jotform 📸✨ Automate your product visuals! This n8n workflow instantly processes new product photography requests from Jotform or Google Sheets, uses an AI agent (Gemini Nano Banana) to generate professional AI product photography based on your product details and reference images, saves the final image to Google Drive, and updates the photo link in your Google Sheet for seamless record keeping. How it Works This n8n workflow operates as a fully automated pipeline for generating and managing AI product photographs: Trigger: The workflow is triggered either manually, on a set schedule (e.g., hourly), or immediately upon a new submission from the connected Jotform (or when new "Pending" rows are detected in the Google Sheet on a scheduled or manual run). Data Retrieval: If triggered by a schedule or manually, the workflow fetches new rows with a "Status" of "Pending" from the designated Google Sheet. Data Preparation: The input data (Product Name, Description, Requirements, and URLs for the Product and Reference Images) is prepared. The Product and Reference Images are downloaded using HTTP Requests. AI Analysis & Prompt Generation: An AI agent (using the Gemini model) analyzes the product details and image requirements, then generates a refined, professional prompt for the image generation model. AI Photo Generation: The generated prompt, along with the downloaded product and reference images, is sent to the image generation model, referred to as "Gemini Nano Banana" (a powerful Google AI model for image generation), to create the final, high-quality AI product photograph. File Handling: The raw image data is converted into a binary file format. Storage: The generated photograph is saved with the Product Name as the filename to your specified Google Drive folder. Record Update: The workflow updates the original row in the Google Sheet, changing the "Status" to "Completed" and adding the public URL of the newly saved image in the "Generated Image" column. If the trigger was from Jotform, a new record is appended to the Google Sheet. Requirements To use this workflow, you'll need the following accounts and credentials configured in n8n: n8n Account:** Your self-hosted or cloud n8n instance. Google Sheets/Drive Credentials:* An *OAuth2* or *API Key** credential for the Google Sheets and Google Drive nodes to read input and save the generated image. Google Gemini API Key:* An API key for the Google Gemini nodes to access the AI agent for prompt generation and the image generation service (Gemini Nano Banana*). Jotform Credential (Optional):* A Jotform credential is only required if you want to use the Jotform Webhook trigger. *Sign up for Jotform here:** https://www.jotform.com/?partner=zainurrehman A Google Sheet and Jotform:** with columns/fields for: Product Name, Product Description, Product Image (URL), Requirement, Reference Image 1 (URL), Reference Image 2 (URL), Status, and a blank Generated Image column. How to Use 1. Set Up Your Integrations Add the necessary Credentials (Google Sheets, Google Drive, Gemini API, and optionally Jotform) in your n8n settings. Specify the Google Sheet Document ID and Sheet Name in the Google Sheet nodes. In the Upload to Drive node, select your desired Drive ID and Folder ID where the final images should be saved. 2. Prepare Input Data You can start the workflow either by: Submitting a Form:* Fill out and submit the connected *Jotform** with the product details and image links. Adding to a Sheet:* Manually add a new row to your Google Sheet with all the product and image details, ensuring the *Status* is set to *"Pending"**. 3. Run the Workflow For Jotform Trigger:* Once the workflow is *Active**, a Jotform submission will automatically start the process. For Scheduled/Manual Trigger:* Activate the *Schedule Trigger* for automatic runs (e.g., hourly), or click the *Manual Trigger* node and select *"Execute Workflow"** to process all current "Pending" requests in the Google Sheet. The generated photograph will be uploaded to Google Drive, and its link will be automatically recorded in the "Generated Image" column in your Google Sheet.
by DIGITAL BIZ TECH
AI-Powered LinkedIn Post Generator Workflow Overview This workflow is a two-part intelligent content creation system built in n8n, designed to generate professional and on-brand LinkedIn posts. It combines a conversational frontend agent that interacts naturally with users and a backend post generation engine powered by structured templates and Mistral Cloud AI models. Workflow Structure Frontend:** Conversational “LinkedIn Agent” that guides the user. Backend:** “Post Generator” engine that produces final, high-quality content using dynamic templates. LinkedIn Agent (Frontend Flow) Trigger:** When chat message received Starts the workflow whenever a user sends a message to the chatbot or embedded interface. Agent:** LinkedIn Agent Welcomes the user and lists 7 available post templates: Educational Promotional Discussion Case Study & Testimonial News Personal General Prompts the user to select a template number. Asks for a topic after the user’s choice. Sends both template number and topic to the backend using a Tool call. Memory:** Simple Memory1 Stores the last 10 messages to maintain conversational context. LLM Model:** Mistral Cloud Chat Model1 Used for reasoning, conversational responses, and user guidance. Tool Used:** template Invokes another trigger in the same workflow: When Executed by Another Workflow. Passes the user’s chosen template and topic to the backend. Post Generation Engine (Backend Flow) Trigger:** When Executed by Another Workflow Receives payload from the template tool (template ID + topic). Router Node:** Switch between templates Directs flow to the correct post template logic based on user’s choice (1–7). Example: 1 → Knowledge & Educational 2 → Promotion 3 → Discussion 4 → Case Study & Testimonial etc. Prompt Template Nodes:** Each Set node defines a large, structured prompt containing: Specific tone, audience, and purpose rules Example hooks and CTAs Layout and line formatting instructions “FORBIDDEN PHRASES” list (e.g., no “game-changer”, “revolutionary”) Expert Writer Agent:** post generator A specialized agent node that receives the selected prompt template. Generates the final LinkedIn post text using strict formatting and tone rules. Model: Mistral Cloud Chat Model Output:** The generated post text is sent back to the template tool and displayed to the user in chat. Integrations Used | Service | Purpose | Credential | |----------|----------|-------------| | Mistral Cloud | LLM & post generation | Mistral Cloud account dbt | | n8n Agent Framework | Multi-agent orchestration | Native | | Chat UI / Webhook | Frontend interaction | Custom embedded UI or webhook trigger | Agent System Prompt Summary > “You are an intelligent LinkedIn assistant that helps users craft posts. List available templates, guide them to select one, and collect a topic. Then use the provided template tool to request the backend writer to generate a final post.” Backend writer’s system prompt: > “You are an expert LinkedIn marketing leader. Generate structured, professional posts for AI/automation topics. Avoid hype, buzzwords, and clichés. Keep sentences short, tone confident, and use strong openers.” Key Features ✅ Dual-agent architecture (Frontend Assistant + Backend Writer) ✅ 7 dynamic content templates for flexibility ✅ Conversational chat interface for ease of use ✅ Strict brand tone enforcement with style rules ✅ Fully automated generation and return of final post in chat Summary > A modular, agent-based n8n workflow for automated LinkedIn post creation, featuring conversational input, structured templates, and AI-generated output powered by Mistral Cloud. Perfect for content teams, social media managers, and AI automation startups. Need Help or More Workflows? Want to customize this workflow for your business. Our team at Digital Biz Tech can tailor it precisely to your use case — from automation logic to AI-powered content engines. 💡 We can help you set it up for free — from connecting credentials to deploying it live. Contact: shilpa.raju@digitalbiz.tech Website: https://www.digitalbiz.tech LinkedIn: https://www.linkedin.com/company/digital-biz-tech/ You can also DM us on LinkedIn for any help.
by Camille Roux
Create a reusable “photos to post” queue from your Lightroom Cloud album—ideal for Lightroom-to-Instagram automation with n8n. It discovers new photos, stores clean metadata in a Data Table, and generates AI alt text to power on-brand captions and accessibility. Use it together with “Lightroom Image Webhook (Direct JPEG for Instagram)” and “Instagram Auto-Publisher for Lightroom Photos (AI Captions).” What it’s for Automate Lightroom to Instagram; centralize photo data for scheduled IG posting; prep AI-ready alt text and metadata for consistent, hands-free publishing. Parameters to set Lightroom Cloud credentials (client/app + API key) Album/collection ID to monitor in Lightroom Cloud Data Table name for the posting queue (e.g., Photos) AI settings: language/tone for alt text (concise, brand-aware) Image analysis URL: public endpoint of Workflow 2 (Lightroom Image Webhook) Works best with Workflow 2: Lightroom Image Webhook (Direct JPEG for Instagram) Workflow 3: Instagram Auto-Publisher for Lightroom Photos (AI Captions) Learn more & stay in the loop Want the full story (decisions, trade-offs, and tips) behind this Lightroom Cloud → Instagram automation? 👉 Read the write-up on my blog: camilleroux.com If you enjoy street & urban photography or you’re curious how I use these n8n workflows day-to-day: 👉 Follow my photo account on Instagram: @camillerouxphoto 👉 Follow me on other networks: links available on my site (X, Bluesky, Mastodon, Threads)