by Yaron Been
Comprehensive SEO Strategy with O3 Director & GPT-4 Specialist Team

Trigger
When a chat message is received → the user submits an SEO request (e.g., "Help me rank for project management software"). The message goes straight to the SEO Director Agent.

SEO Director Agent (O3)
Acts as the head of SEO strategy. Uses the Think node to plan and decide which specialists to call, then delegates tasks to the relevant agents.

Specialist Agents (GPT-4.1-mini)
Each agent has its own OpenAI model connection for lightweight, cost-efficient execution. Tasks include:
- Keyword Research Specialist → keyword discovery, clustering, competitor analysis
- SEO Content Writer → generates optimized blog posts, landing pages, etc.
- Technical SEO Specialist → site audits, schema markup, crawling fixes
- Link Building Strategist → backlink strategies, outreach campaign ideas
- Local SEO Specialist → local citations, GMB optimization, geo-content
- Analytics Specialist → reports, performance insights, ranking metrics

Feedback Loop
Each agent sends its results back to the SEO Director, who compiles the insights into a comprehensive SEO campaign plan.

✅ Why This Setup Works Well
- O3 Model for Director → handles reasoning-heavy orchestration (strategy, delegation)
- GPT-4.1-mini for Specialists → cheap, fast, task-specific execution
- Parallel Execution → all specialists can run at the same time
- Scalable & Modular → add or remove agents depending on campaign needs
- Sticky Notes → already document the workflow (great for onboarding & sharing)
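The Director's delegation step can be pictured as a simple intent-to-specialist mapping. This is only an illustrative sketch: the real template uses an O3 model with a Think node, and the keyword lists below are assumptions, not the template's actual prompt logic.

```javascript
// Illustrative stand-in for the Director's delegation decision.
// Keyword lists are assumptions; the template uses an LLM for this.
const SPECIALISTS = {
  "Keyword Research Specialist": ["keyword", "rank", "competitor"],
  "SEO Content Writer": ["blog", "content", "landing page"],
  "Technical SEO Specialist": ["audit", "schema", "crawl"],
  "Link Building Strategist": ["backlink", "outreach"],
  "Local SEO Specialist": ["local", "gmb", "citation"],
  "Analytics Specialist": ["report", "metric", "performance"],
};

function delegate(request) {
  const text = request.toLowerCase();
  const chosen = Object.entries(SPECIALISTS)
    .filter(([, keywords]) => keywords.some((k) => text.includes(k)))
    .map(([name]) => name);
  // Fall back to keyword research for open-ended ranking requests.
  return chosen.length ? chosen : ["Keyword Research Specialist"];
}
```

In the real workflow this routing is done by the O3 model's reasoning rather than fixed keywords, which is what lets it handle requests that span several specialists at once.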
by WeblineIndia
Webhook from IoT Devices → Jira Maintenance Ticket → Slack Factory Alert

This workflow automates predictive maintenance by receiving IoT machine-failure webhooks, creating Jira maintenance tickets, checking technician availability in Slack, and sending the alert to the correct Slack channel. If an active technician is available, the system notifies the designated technician channel; if not, it escalates automatically to your chosen emergency/escalation channel.

⚡ Quick Implementation: Start Using in 10 Seconds
1. Import the workflow JSON into n8n.
2. Add Slack API credentials (with all required scopes).
3. Add Jira Cloud credentials.
4. Select Slack channels for technician alerts and emergency/escalation alerts.
5. Deploy the webhook URL to your IoT device.
6. Run a test event.

What It Does
This workflow implements a real-time predictive maintenance loop. An IoT device sends machine data — such as temperature, vibration, and timestamps — to an n8n webhook whenever a potential failure is detected. The workflow immediately evaluates whether the values exceed a defined safety threshold. If a failure condition is detected, a Jira maintenance ticket is created automatically with all relevant machine information.

The workflow then gathers all technicians from your selected Slack channel and checks each technician's presence status in real time. A built-in decision engine chooses the first available technician. If someone is active, the workflow sends a maintenance alert to your technician channel. If no technicians are available, the workflow escalates the alert to your chosen emergency channel to avoid operational downtime. This eliminates manual monitoring, accelerates response times, and ensures no incident goes unnoticed — even if the team is unavailable.
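The threshold check the IF node performs can be sketched as below. The payload field names match the required payload (machineId, temperature, vibration, timestamp), but the threshold values and their units are placeholders you would tune for your machines.

```javascript
// Sketch of the failure-detection check; threshold values are assumptions.
const THRESHOLDS = { temperature: 80, vibration: 7.5 }; // assumed units: °C, mm/s

function isFailure(payload) {
  // OR logic by default; switch || to && for AND, as the customization
  // section of this template describes.
  return (
    payload.temperature > THRESHOLDS.temperature ||
    payload.vibration > THRESHOLDS.vibration
  );
}

// Example of the JSON an IoT device might POST to the webhook:
const sample = {
  machineId: "press-07",
  temperature: 92.4,
  vibration: 3.1,
  timestamp: "2024-05-01T08:30:00Z",
};
```

Sending a payload like `sample` to the webhook should create a Jira ticket, since the temperature exceeds the assumed threshold.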
Who's It For
This workflow is ideal for:
- Manufacturing factories
- Industrial automation setups
- IoT monitoring systems
- Warehouse operations
- Maintenance & facility management teams
- Companies using Jira + Slack
- Organizations implementing predictive maintenance or automated escalation workflows

Requirements to Use This Workflow
You will need:
- An n8n instance (Cloud or self-hosted)
- A Slack app with the scopes: users:read, users:read.presence, channels:read, chat:write
- Jira Cloud credentials (email + API token)
- Slack channels of your choice for technician alerts and emergency/escalation alerts
- An IoT device capable of POST webhook calls; the machine payload must include machineId, temperature, vibration, and timestamp

How It Works & How To Set Up

🔧 High-Level Workflow Logic
1. The IoT webhook receives machine data.
2. An IF condition checks whether values exceed safety thresholds.
3. A Jira ticket is created with machine details if a failure is detected.
4. Slack channel members are fetched from your selected technician channel.
5. The workflow loops through technicians to check real-time presence.
6. A Code node determines the first available (active) technician, or falls back if none is available.
7. An IF condition checks technician availability.
8. A Slack notification is sent to your chosen technician channel if someone is available, or to your emergency/escalation channel if no one is online.

🛠 Step-by-Step Setup Instructions
1. Import Workflow: n8n → Workflows → Import from File → select the JSON.
2. Configure Slack: add the required scopes (users:read, users:read.presence, channels:read, chat:write) and reconnect credentials.
3. Select Slack Channels: choose any channels you want for technician notifications and emergency alerts — no fixed naming is required.
4. Configure Jira: add credentials, select the project and issue type, and set priority mapping if needed.
5. Deploy Webhook: copy the n8n webhook URL and configure your IoT device to POST machine data.
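The decision engine in step 6 can be sketched as below. The presence values follow Slack's users.getPresence semantics ("active" / "away"); the exact input and output shapes are assumptions to adapt to your node's data.

```javascript
// Sketch: pick the first technician whose Slack presence is "active".
// Input/output shapes are assumptions; adapt to your actual node output.
function pickTechnician(technicians) {
  const available = technicians.find((t) => t.presence === "active");
  return available
    ? { mode: "notify", technician: available }   // alert technician channel
    : { mode: "escalate", technician: null };     // fall back to emergency channel
}
```

The downstream IF node then branches on `mode`, routing "notify" results to the technician channel and "escalate" results to the emergency channel.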
6. Test System: send a test payload to confirm Jira tickets are created and Slack notifications route correctly based on technician availability.

This setup allows real-time monitoring, automated ticket creation, and flexible escalation — reducing manual intervention and ensuring fast maintenance response.

How To Customize Nodes
- Webhook Node: add security tokens, change the webhook path, add a response message
- IF Node (threshold logic): lower/raise the temperature threshold, change OR to AND, add more conditions (humidity, RPM, pressure)
- Jira Node: customize fields like summary and labels, or assign issues based on technician availability
- Slack Presence Node: add DND checks, treat "away" as "available" during night shifts, combine multiple channels
- Code Node: rotate technicians randomly, pick the technician with the lowest alert count, keep a history log

Add-Ons
- SMS fallback notifications (Twilio)
- WhatsApp alerts
- Telegram alerts
- Notify supervisors via email
- Store machine failures in Google Sheets
- Push metrics into Power BI
- Auto-close Jira tickets once machine values normalize
- Create a daily maintenance report

Use Case Examples
- Overheating Machine Alert – detect spikes and notify a technician instantly.
- Vibration Pattern Anomaly Detection – trigger early maintenance before a full breakdown.
- Multi-Shift Technician Coverage – automatically switch to emergency mode when no technician is online.
- Factory Night-Shift Automation – night alerts escalate automatically without manual verification.
- Warehouse Robotics Malfunction – sends instant Slack + Jira alerts when robots overheat or jam.
Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| Webhook returns no data | Wrong endpoint or method | Use POST + the correct URL |
| Slack presence returns error | Missing Slack scopes | Add users:read.presence |
| Jira ticket not created | Invalid project key or credentials | Reconfigure Jira API credentials |
| All technicians show offline | Wrong channel or IDs | Ensure correct channel members |
| Emergency alert not triggered | Code node returning incorrect logic | Test the code with all technicians set to "away" |
| Slack message fails | Wrong channel ID | Replace with the correct Slack channel |

Need Help?
If you need help customizing this workflow, adding new automation features, connecting additional systems, or building enterprise IoT maintenance solutions, the n8n automation development team at WeblineIndia can help. We can assist with:
- Workflow setup
- Advanced alert logic
- Integrating SMS / WhatsApp / voice alerts
- Custom escalation rules
- Industrial IoT integration

Reach out anytime for support or enhancements.
by WeblineIndia
(Retail) Auto-Tag High-Risk SKUs

This workflow automatically monitors product sales in your WooCommerce store, detects fast-selling items, applies risk tags, and sends a clear alert to Slack — so you never miss products that need attention.

It checks your WooCommerce store every day, reviews product sales from the last 14 days, and calculates how fast each product is selling. Based on sales volume, it assigns a risk level (OK, Watchlist, High-Risk, or Critical), updates product tags in WooCommerce, and sends a single, easy-to-read Slack alert for products that need attention.

You receive:
- Daily automated sales analysis
- Automatic risk tagging inside WooCommerce
- One clean Slack alert with product name, units sold, and risk level

Ideal for store owners and operations teams who want proactive inventory control without manual reports.

Quick Start – Implementation Steps
1. Connect your WooCommerce API credentials.
2. Connect your Slack workspace and choose an alert channel.
3. Adjust sales thresholds if needed (optional).
4. Activate the workflow — daily monitoring starts automatically.

What It Does
This workflow automates inventory risk detection:
1. Runs automatically on a daily schedule.
2. Fetches completed WooCommerce orders from the last 14 days.
3. Fetches product details from WooCommerce.
4. Counts how many units of each product were sold.
5. Assigns a risk level: OK, Watchlist, High-Risk, or Critical.
6. Updates product tags in WooCommerce based on risk.
7. Combines all risky products into one list.
8. Sends a single Slack alert summarizing product name, units sold, and risk level.

This prevents stock issues and highlights fast-selling products early.
Who's It For
This workflow is ideal for:
- WooCommerce store owners
- E-commerce operations teams
- Inventory & supply chain managers
- Marketing teams tracking fast-selling products
- Businesses managing limited or high-demand stock
- Anyone who wants automated inventory visibility

Requirements to Use This Workflow
- n8n instance (Cloud or self-hosted)
- WooCommerce store with REST API access
- WooCommerce API keys (read + write)
- Slack workspace with API access
- Basic understanding of WooCommerce products & orders

How It Works
1. Daily Trigger – the workflow runs at a scheduled time.
2. Fetch Orders – gets completed orders from the last 14 days.
3. Fetch Products – retrieves product details.
4. Calculate Sales & Risk – counts sold units and assigns a risk level.
5. Split by Risk – routes products based on risk category.
6. Update Product Tags – applies the correct WooCommerce tags.
7. Merge Results – combines all risky products.
8. Build Alert Message – creates a readable Slack message.
9. Send Slack Alert – sends one summary alert to your team.

Setup Steps
1. Import the workflow JSON into n8n.
2. Configure WooCommerce credentials in all WooCommerce nodes.
3. Ensure the risk tags exist in WooCommerce: Watchlist, High-Risk, Critical.
4. Connect your Slack API credentials.
5. Select the Slack channel for alerts.
6. Review or adjust the sales thresholds in the risk calculation node.
7. Activate the workflow.

How To Customize Nodes
- Customize risk thresholds: update the Calculate Risk code node to change when products move into Watchlist, High-Risk, or Critical.
- Customize WooCommerce tags: replace the tag IDs in the Update Product nodes with your own tag IDs.
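The Calculate Risk logic might look like the sketch below. The four tiers match the template, but the unit thresholds are placeholders; set them to whatever "fast-selling" means for your catalog.

```javascript
// Sketch of the Calculate Risk code node; threshold numbers are assumptions.
function riskLevel(unitsSold14d) {
  if (unitsSold14d >= 100) return "Critical";
  if (unitsSold14d >= 50) return "High-Risk";
  if (unitsSold14d >= 20) return "Watchlist";
  return "OK";
}
```

Products returning anything other than "OK" are the ones routed to the tag-update nodes and included in the daily Slack summary.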
Customize Slack Alerts
You can add:
- Emojis
- Mentions (@channel, @team)
- Product links
- Stock status or category info

Add-Ons (Optional Enhancements)
You can extend this workflow to:
- Include stock quantity checks
- Send separate alerts per risk level
- Create weekly or monthly summaries
- Store alerts in Google Sheets or Airtable
- Add email or SMS notifications
- Predict out-of-stock dates
- Add AI-based sales trend insights

Use Case Examples
1. Inventory Risk Monitoring – detect products that may go out of stock soon.
2. Sales Trend Tracking – identify fast-selling products automatically.
3. Operations Alerts – notify teams before stock issues occur.
4. Marketing Signals – spot trending products for promotions.
5. Daily Store Health Check – get a quick snapshot of product risk every day.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No Slack alert | No risky products | Check thresholds |
| Tags not updated | Wrong tag ID | Verify WooCommerce tag IDs |
| Units sold = 0 | Orders not completed | Check the order status filter |
| Workflow not running | Schedule disabled | Enable the Schedule Trigger |
| Slack error | Invalid credentials | Reconnect the Slack account |

Need Help?
If you need help customizing, scaling, or extending this workflow — such as adding forecasting, dashboards, or multi-store support — the WeblineIndia team can help you build production-ready e-commerce automation.
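The Build Alert Message step could be sketched like this; the emoji mapping and the input shape (name, unitsSold, risk) are illustrative assumptions, not the template's exact field names.

```javascript
// Sketch of the Build Alert Message step; emoji mapping is an assumption.
const RISK_EMOJI = { Watchlist: "🟡", "High-Risk": "🟠", Critical: "🔴" };

function buildSlackAlert(products) {
  const lines = products.map(
    (p) => `${RISK_EMOJI[p.risk] || "⚪"} ${p.name} - ${p.unitsSold} sold (${p.risk})`
  );
  return `📦 Daily SKU Risk Report\n${lines.join("\n")}`;
}
```

Building one message for all risky products, instead of one message per product, is what keeps the daily alert to a single readable Slack post.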
by IranServer.com
Automate IP geolocation and HTTP port scanning with Google Sheets trigger

This n8n template automatically enriches IP addresses with geolocation data and performs HTTP port scanning when new IPs are added to a Google Sheets document. Perfect for network monitoring, security research, or maintaining an IP intelligence database.

Who's it for
Network administrators, security researchers, and IT professionals who need to:
- Track IP geolocation information automatically
- Monitor HTTP service availability across multiple ports
- Maintain centralized IP intelligence in spreadsheets
- Automate repetitive network reconnaissance tasks

How it works
The workflow triggers whenever a new row containing an IP address is added to your Google Sheet. It then:
1. Fetches geolocation data from the ip-api.com service to get country, city, coordinates, ISP, and organization information
2. Updates the spreadsheet with the geolocation details
3. Scans common HTTP ports (80, 443, 8080, 8000, 3000) to check service availability
4. Records the port status back to the same spreadsheet row, showing which services are accessible

The workflow handles both successful connections and various error conditions, providing a comprehensive view of each IP's network profile.
Requirements Google Sheets API access** - for reading triggers and updating data Google Sheets document** with at least an "IP" column header How to set up Create a Google Sheet with columns: IP, Country, City, Lat, Lon, ISP, Org, Port_80, Port_443, Port_8000, Port_8080, Port_3000 Configure Google Sheets credentials in both the trigger and update nodes Update the document ID in the Google Sheets Trigger and both Update nodes to point to your spreadsheet Test the workflow by adding an IP address to your sheet and verifying the automation runs How to customize the workflow Modify port list**: Edit the "Edit Fields" node to scan different ports by changing the ports array Add more geolocation fields**: The ip-api.com response includes additional fields like timezone, zip code, and AS number Change trigger frequency**: Adjust the polling interval in the Google Sheets Trigger for faster or slower monitoring Add notifications**: Insert Slack, email, or webhook nodes to alert when specific conditions are detected Filter results**: Add IF nodes to process only certain IP ranges or geolocation criteria
by vinci-king-01
Daily Stock Regulatory News Aggregator with Compliance Alerts and Google Sheets Tracking

🎯 Target Audience
- Compliance officers and regulatory teams
- Financial services firms monitoring regulatory updates
- Investment advisors tracking regulatory changes
- Risk management professionals
- Corporate legal departments
- Stock traders and analysts monitoring regulatory news

🚀 Problem Statement
Manually monitoring regulatory updates from multiple agencies (SEC, FINRA, ESMA) is time-consuming and error-prone. This template automates daily regulatory news monitoring, aggregates updates from major regulatory bodies, filters for recent announcements, and instantly alerts compliance teams to critical regulatory changes — enabling timely responses and helping maintain regulatory compliance.

🔧 How it Works
This workflow monitors regulatory news daily: it scrapes the latest updates from major regulatory agencies using AI-powered web scraping, filters for updates from the last 24 hours, sends Slack alerts, and logs all updates to Google Sheets for historical tracking.
Key Components
- Daily Schedule Trigger – automatically runs the workflow every 24 hours to check for regulatory updates
- Regulatory Sources Configuration – defines the list of regulatory agencies and their URLs to monitor (SEC, FINRA, ESMA)
- Batch Processing – iterates through regulatory sources one at a time for reliable processing
- AI-Powered Scraping – uses ScrapeGraphAI to intelligently extract regulatory updates including title, summary, date, agency, and source URL
- Data Flattening – transforms the scraped data structure into individual update records
- Time Filtering – keeps only updates from the last 24 hours
- Historical Tracking – logs all filtered updates to Google Sheets for compliance records
- Compliance Alerts – sends Slack notifications to compliance teams when new regulatory updates are detected

💰 Key Features

Automated Regulatory Monitoring
- Daily Execution: runs automatically every 24 hours without manual intervention
- Multi-Agency Support: monitors SEC, FINRA, and ESMA simultaneously
- Error Handling: gracefully handles scraping errors and continues processing other sources

Smart Filtering
- Time-Based Filtering: automatically filters updates to show only those from the last 24 hours
- Date Validation: discards updates with unreadable or invalid dates
- Recent Updates Focus: ensures compliance teams only receive actionable, timely information

Alert System
- Compliance Alerts: instant Slack notifications for new regulatory updates
- Structured Data: alerts include title, summary, date, agency, and source URL
- Dedicated Channel: posts to a designated compliance alerts channel for team visibility

📊 Output Specifications
The workflow generates and stores structured data including:

| Output Type | Format | Description | Example |
| --- | --- | --- | --- |
| Regulatory Updates | JSON object | Extracted regulatory update information | {"title": "SEC Announces New Rule", "date": "2024-01-15", "agency": "SEC"} |
| Update History | Google Sheets | Historical regulatory update records with timestamps | Columns: Title, Summary, Date, Agency, Source URL, Scraped At |
| Slack Alerts | Messages | Compliance notifications for new updates | "📢 New SEC update: [Title] - [Summary]" |
| Error Logs | System logs | Scraping error notifications | "❌ Error scraping FINRA updates" |

🛠️ Setup Instructions
Estimated setup time: 15–20 minutes

Prerequisites
- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets API access (OAuth2)
- Slack workspace with API access
- Google Sheets spreadsheet for regulatory update tracking

Step-by-Step Configuration

1. Install Community Nodes
Install the ScrapeGraphAI community node:
npm install n8n-nodes-scrapegraphai

2. Configure ScrapeGraphAI Credentials
- Navigate to Credentials in your n8n instance
- Add new ScrapeGraphAI API credentials
- Enter your API key from the ScrapeGraphAI dashboard
- Test the connection to ensure it's working

3. Set up Google Sheets Connection
- Add Google Sheets OAuth2 credentials
- Authorize access to your Google account
- Create or identify the spreadsheet for regulatory update tracking
- Note the spreadsheet ID and sheet name (default: "RegUpdates")

4. Configure Slack Integration
- Add Slack API credentials to your n8n instance
- Create or identify the Slack channel: #compliance-alerts
- Test the Slack connection with a sample message
- Ensure the bot has permission to post messages

5. Customize Regulatory Sources
Open the "Regulatory Sources" Code node and update the urls array with additional regulatory sources if needed:

const urls = [
  'https://www.sec.gov/news/pressreleases',
  'https://www.finra.org/rules-guidance/notices',
  'https://www.esma.europa.eu/press-news',
  // Add more URLs as needed
];

6.
Configure Google Sheets
- Update documentId in the "Log to Google Sheets" node with your spreadsheet ID
- Update sheetName to match your sheet name (default: "RegUpdates")
- Ensure the sheet has the columns: Title, Summary, Date, Agency, Source URL, Scraped At
- Create the sheet with proper column headers if starting fresh

7. Customize Slack Channel
- Open the "Send Compliance Alert" Slack node
- Update the channel name (default: "#compliance-alerts")
- Customize the message format if needed
- Test with a sample message

8. Adjust Schedule
- Open the "Daily Regulatory Poll" Schedule Trigger
- Modify hoursInterval to change the frequency (default: 24 hours)
- Set specific times if needed for daily execution

9. Customize Scraping Prompt
- Open the "Scrape Regulatory Updates" ScrapeGraphAI node
- Adjust the userPrompt to extract different or additional fields
- Modify the JSON schema in the prompt if needed
- Change the number of updates extracted (default: the 5 most recent)

10. Test and Validate
- Run the workflow manually to verify all connections
- Check Google Sheets for the data structure and format
- Verify Slack alerts are working correctly
- Test error handling with invalid URLs
- Validate that date filtering is working properly

🔄 Workflow Customization Options

Modify Monitoring Frequency
- Change hoursInterval in the Schedule Trigger for different frequencies
- Switch to multiple runs per day for critical monitoring
- Add multiple schedule triggers for different agency checks

Extend Data Collection
- Modify the ScrapeGraphAI prompt to extract additional fields (documents, categories, impact level)
- Add data enrichment nodes for risk assessment
- Integrate with regulatory databases for more comprehensive tracking
- Add sentiment analysis for regulatory updates

Enhance Alert System
- Add email notifications alongside Slack alerts
- Create different alert channels for different agencies
- Add priority-based alerting based on update keywords
- Integrate with SMS or push notification services
- Add webhook integrations for other compliance tools

Advanced Analytics
- Add data visualization nodes for regulatory trend analysis
- Create automated compliance reports with summaries
- Integrate with business intelligence tools
- Add machine learning for update categorization
- Track regulatory themes and topics over time

Multi-Source Support
- Add support for additional regulatory agencies
- Implement agency-specific scraping strategies
- Add regional regulatory sources (FCA, BaFin, etc.)
- Include state-level regulatory updates

📈 Use Cases
- Compliance Monitoring: automatically track regulatory updates to ensure timely compliance responses
- Risk Management: monitor regulatory changes that may impact business operations or investments
- Regulatory Intelligence: build historical databases of regulatory announcements for trend analysis
- Client Communication: stay informed to provide timely updates to clients about regulatory changes
- Legal Research: track regulatory developments for legal research and case preparation
- Investment Strategy: monitor regulatory changes that may affect investment decisions

🚨 Important Notes
- Respect website terms of service and rate limits when scraping regulatory sites
- Monitor ScrapeGraphAI API usage to manage costs
- Ensure Google Sheets has the proper column structure before the first run
- Set up the Slack channel before running the workflow
- Consider implementing rate limiting for multiple regulatory sources
- Keep credentials secure and rotate them regularly
- Test with one regulatory source first before adding multiple sources
- Verify date formats are consistent across different regulatory agencies
- Be aware that some regulatory sites may have anti-scraping measures

🔧 Troubleshooting

Common Issues:
- ScrapeGraphAI connection errors: verify API key and account status
- Google Sheets logging failures: check the spreadsheet ID, sheet name, and column structure
- Slack notification failures: verify the channel name exists and the bot has permissions
- Date filtering issues: ensure dates from scraped content are in a parseable format
- Validation errors: check that scraped data matches the expected schema
- Empty results: verify regulatory sites are accessible and haven't changed structure

Optimization Tips:
- Start with one regulatory source to test the workflow
- Monitor API usage and costs regularly
- Use batch processing to avoid overwhelming scraping services
- Implement retry logic for failed scraping attempts
- Consider caching mechanisms for frequently checked sources
- Adjust the number of updates extracted based on typical volume

Support Resources:
- ScrapeGraphAI documentation and API reference
- Google Sheets API documentation
- Slack API documentation for webhooks
- n8n community forums for workflow assistance
- n8n documentation for node configuration
- SEC, FINRA, and ESMA official websites for source verification
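The time-filtering step described above (keep only updates from the last 24 hours, discard unreadable dates) can be sketched as follows; the update object's field names follow the alert schema (title, date, agency).

```javascript
// Sketch of the last-24-hours filter; drops updates with unparseable dates.
function filterRecent(updates, now = Date.now()) {
  const DAY_MS = 24 * 60 * 60 * 1000;
  return updates.filter((u) => {
    const t = Date.parse(u.date);
    if (Number.isNaN(t)) return false; // unreadable date → discard
    return now - t <= DAY_MS && t <= now;
  });
}
```

Because the agencies publish dates in different formats, normalizing to a parseable form before this step (as the troubleshooting notes suggest) is what keeps valid updates from being silently dropped.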
by n8n Automation Expert | Template Creator | 2+ Years Experience
🌤️ Automated Indonesian Weather Monitoring with Smart Notifications

Stay ahead of weather changes with this comprehensive monitoring system that fetches real-time data from Indonesia's official meteorological agency (BMKG) and delivers beautiful, actionable weather reports directly to your Telegram.

⚡ What This Workflow Does
- Fetches Official Data: connects to BMKG's public weather API for accurate Indonesian forecasts
- Smart Processing: analyzes temperature, humidity, precipitation, and wind conditions
- Risk Assessment: generates contextual warnings for extreme weather conditions
- Automated Alerts: sends formatted weather reports to Telegram every 6 hours
- Error Handling: includes a robust error detection and notification system

🎯 Perfect For
- Local communities: keep neighborhoods informed about weather changes
- Business operations: plan outdoor activities and logistics based on weather
- Emergency preparedness: receive early warnings for extreme weather conditions
- Personal planning: never get caught unprepared by sudden weather changes
- Agricultural monitoring: track conditions affecting farming and outdoor work

🛠️ Key Features
- 🔄 Automated Scheduling: runs every 6 hours with a manual trigger option
- 📊 Comprehensive Reports: current conditions + 6-hour detailed forecasts
- ⚠️ Smart Warnings: contextual alerts for temperature extremes and rain probability
- 🎨 Beautiful Formatting: rich Telegram messages with emojis and structured data
- 🔧 Error Recovery: automatic error handling with a notification system
- 📍 Location-Aware: supports any Indonesian location via BMKG regional codes

📋 What You'll Get
Each weather report includes:
- Current temperature, humidity, and weather conditions
- A 6-hour detailed forecast with timestamps
- Wind speed and direction information
- Rain probability and visibility data
- Personalized warnings and recommendations
- Average daily statistics and trends

🚀 Setup Requirements
- Telegram Bot Token: create a bot via @BotFather
- Chat ID: your personal or group chat identifier
- BMKG Location Code: the regional administrative code for your area

💡 Pro Tips
- Customize the location by changing the adm4 parameter in the HTTP request
- Adjust the scheduling interval based on your monitoring needs
- Modify the warning thresholds in the processing code
- Add multiple chat IDs for broader distribution
- Integrate with other n8n workflows for advanced automation

🌟 Why Choose This Template
- Production Ready: includes comprehensive error handling and logging
- Highly Customizable: easy to modify for different locations and preferences
- Official Data Source: uses Indonesia's trusted meteorological service
- User-Friendly Output: clean, readable reports perfect for daily use
- Scalable Design: easily extend for multiple locations or notification channels

Transform your weather awareness with this professional-grade monitoring system that brings Indonesia's official weather data right to your fingertips!

Keywords: weather monitoring, BMKG API, Telegram notifications, Indonesian weather, automated alerts, meteorological data, weather forecasting, n8n automation, weather API integration
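The warning thresholds mentioned in the Pro Tips might be structured like the sketch below. The threshold values are illustrative placeholders, not BMKG's official criteria; adjust them for your location.

```javascript
// Sketch of the contextual-warning logic; threshold values are assumptions.
function weatherWarnings({ temperature, humidity, rainProbability }) {
  const warnings = [];
  if (temperature >= 35) warnings.push("🔥 Extreme heat - stay hydrated");
  if (temperature <= 18) warnings.push("🥶 Unusually cold conditions");
  if (rainProbability >= 70) warnings.push("🌧️ High chance of rain - bring an umbrella");
  if (humidity >= 90) warnings.push("💧 Very high humidity");
  return warnings;
}
```

The resulting array can be joined into the Telegram message body, so reports stay short when conditions are normal and grow only when something needs attention.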
by Oneclick AI Squad
This automated n8n workflow checks daily travel itineraries, syncs upcoming trips to Google Calendar, and sends reminder notifications to travelers via email or SMS. Perfect for travel agencies, tour operators, and organizations managing group trips that need to keep travelers informed about their schedules and bookings.

What This Workflow Does
- Automatically checks travel itineraries every day
- Identifies today's trips and upcoming departures
- Syncs trip information to Google Calendar
- Sends personalized reminders to assigned travelers
- Tracks reminder delivery status and logs activities
- Handles both email and SMS notification preferences
- Provides pre-travel checklists and booking confirmations
- Manages multi-day trip schedules and activities

Main Components
- Daily Travel Check – triggers daily to check travel itineraries
- Read Travel Itinerary – retrieves today's trips and bookings from the database/Excel file
- Filter Today's Trips – identifies trips departing today and upcoming activities
- Has Trips Today? – checks whether any trips are scheduled
- Read Traveler Contacts – gets traveler contact information for assigned trips
- Sync to Google Calendar – creates/updates trip events in Google Calendar
- Create Traveler Reminders – generates personalized reminder messages with travel details
- Split Into Batches – processes reminders in manageable batches
- Email or SMS? – routes based on traveler communication preferences
- Prepare Email Reminders – creates detailed email reminder content with checklists
- Prepare SMS Reminders – creates SMS reminder content optimized for text
- Read Reminder Log – checks previous reminder history
- Update Reminder Log – records sent reminders with timestamps
- Save Reminder Log – saves updated log data for the audit trail

Essential Prerequisites
- Travel itinerary database/Excel file with trip assignments
- Traveler contact database with email addresses and phone numbers
- Google Calendar API access and credentials
- SMTP server for email notifications
- SMS service provider (Twilio, Nexmo, etc.) for text reminders
- Reminder log file for tracking sent notifications
- Booking confirmation system (flight, hotel, transport)

Required Data Files

trip_itinerary.xlsx:
Trip ID | Trip Name | Date | Departure Time | Duration | Departure Location | Destination | Hotel | Flight Number | Assigned Travelers | Status | Booking Reference | Cost

traveler_contacts.xlsx:
Traveler ID | First Name | Last Name | Email | Phone | Preferred Contact | Assigned Trips | Passport Number | Emergency Contact

reminder_log.xlsx:
Log ID | Date | Traveler ID | Trip ID | Contact Method | Status | Sent Time | Message Preview | Confirmation

Key Features
- ⏰ Daily Automation: runs automatically every day at scheduled times
- 📅 Calendar Sync: syncs trips to Google Calendar for easy viewing
- 📧 Smart Reminders: sends email or SMS based on traveler preference
- 👥 Batch Processing: handles multiple travelers efficiently
- 📊 Activity Logging: tracks all reminder activities and delivery status
- 🔄 Duplicate Prevention: avoids sending multiple reminders
- 📱 Multi-Channel: supports both email and SMS notifications
- ✈️ Travel-Specific: includes flight numbers, locations, and accommodation details
- 📋 Pre-Travel Checklist: provides comprehensive packing and document reminders
- 🌍 Multi-Destination: manages complex multi-stop itineraries

Quick Setup
1. Import the workflow JSON into n8n.
2. Configure the daily trigger schedule (recommended: 6 AM and 6 PM).
3. Set up the trip itinerary and traveler contact files.
4. Connect Google Calendar API credentials.
5. Configure the SMTP server for emails.
6. Set up an SMS service provider (Twilio, Nexmo, or similar).
7. Map the Excel sheet columns to workflow variables.
8. Test with sample trip data.
9. Activate the workflow.

Parameters to Configure
- schedule_file_path: path to the trip itinerary file
- contacts_file_path: path to the traveler contacts file
- reminder_hours: hours before departure to send a reminder (default: 24)
- google_calendar_id: Google Calendar ID for syncing trips
- google_api_credentials: Google Calendar API credentials
- smtp_host: Email server settings
- smtp_user: Email username
- smtp_password: Email password
- sms_api_key: SMS service API key
- sms_phone_number: SMS sender phone number
- reminder_log_path: Path to reminder log file

Sample Reminder Messages

Email Subject: "✈️ Travel Reminder: [Trip Name] Today at [Time]"

Email Body:
Hello [Traveler Name],
Your trip is happening today! Here are your travel details:
Trip: [Trip Name]
Departure: [Departure Time]
From: [Departure Location]
To: [Destination]
Flight/Transport: [Flight Number]
Hotel: [Hotel Name]
Duration: [X] days

Pre-Travel Checklist:
☑ Passport and travel documents
☑ Travel insurance documents
☑ Hotel confirmations
☑ Medications and toiletries
☑ Weather-appropriate clothing
☑ Phone charger and adapters

⚠️ Please arrive at the departure point 2 hours early!
Have a wonderful trip!

SMS: "✈️ Travel Reminder: '[Trip Name]' departs at [Time] today from [Location]. Arrive 2 hours early! Flight: [Number]"

Tomorrow Evening Preview (SMS): "📅 Tomorrow: '[Trip Name]' departs at [Time] from [Location]. Pack tonight! ([X] days)"

Use Cases

- Daily trip departure reminders for travelers
- Last-minute itinerary change notifications
- Flight cancellation and delay alerts
- Hotel check-in and checkout reminders
- Travel document expiration warnings
- Group tour activity scheduling
- Adventure/hiking trip departure alerts
- Business travel itinerary updates
- Family vacation coordination
- Study abroad program notifications
- Multi-city tour route confirmations
- Transport connection reminders

Advanced Features

Reminder Escalation
- 24-hour reminder: Full details with checklist
- 6-hour reminder: Quick confirmation with transport details
- 2-hour reminder: Urgent departure notification

Conditional Logic
- Different messages for single-day vs. multi-day trips
- Domestic vs. international travel variations
- Group size-based messaging
- Weather-based travel advisories

Integration Capabilities
- Connect to airline APIs for real-time flight status
- Link to hotel management systems for check-in info
- Integrate weather services for destination forecasts
- Sync with payment systems for booking confirmations

Troubleshooting

| Issue | Solution |
|-------|----------|
| Reminders not sending | Check email/SMS credentials and service quotas |
| Calendar sync failing | Verify Google Calendar API permissions |
| Duplicate reminders | Check for overlapping reminder time windows |
| Missing traveler data | Verify contact file formatting and column mapping |
| Batch processing slow | Reduce batch size in the Split Into Batches node |

Security Considerations

- Store API credentials in n8n environment variables
- Use OAuth2 for Google Calendar authentication
- Encrypt sensitive data in reminder logs
- Implement role-based access to trip data
- Audit-log all reminder activities
- Comply with GDPR/privacy regulations for traveler data

Performance Metrics

- **Processing Time**: ~2-5 seconds per 50 travelers
- **Success Rate**: >99% for delivery logging
- **Calendar Sync**: Real-time updates
- **Batch Limit**: 10 travelers per batch (configurable)

Support & Maintenance

- Review reminder logs weekly for delivery issues
- Update traveler contacts as needed
- Monitor email/SMS service quotas
- Test workflow after system updates
- Archive old reminder logs monthly
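The reminder-window and duplicate-prevention logic above can be sketched in a Code node roughly as follows. Field names (`departureISO`, `tripId`, `travelerId`) are illustrative assumptions, not the template's exact column mapping:

```javascript
// Decide whether to send a reminder for one traveler/trip pair.
// Assumes the reminder log has already been loaded into an array.
function shouldSendReminder(trip, reminderLog, now, reminderHours = 24) {
  const departure = new Date(trip.departureISO);
  const hoursUntil = (departure - now) / (1000 * 60 * 60);

  // Only fire inside the configured window, so the twice-daily
  // trigger does not send for trips that are too far out or past.
  const inWindow = hoursUntil > 0 && hoursUntil <= reminderHours;

  // Duplicate prevention: skip if a "Sent" log entry already exists
  // for this traveler/trip combination.
  const alreadySent = reminderLog.some(
    (row) =>
      row.tripId === trip.tripId &&
      row.travelerId === trip.travelerId &&
      row.status === 'Sent'
  );

  return inWindow && !alreadySent;
}
```

A node returning `false` here would simply drop the item before the email/SMS branch, which is what prevents the "Duplicate reminders" issue listed in the troubleshooting table.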
by Rahul Joshi
📊 Description

Eliminate manual troubleshooting with an AI-powered autonomous recovery engine for n8n 🤖. This system monitors your entire n8n instance for failures, analyzes the root cause using Azure OpenAI, and automatically repairs broken workflows in real time. By distinguishing between temporary network glitches and logic errors, it either retries the execution or dynamically patches the workflow JSON via the n8n API. It provides a "closed-loop" automation experience, ensuring your mission-critical processes stay online without human intervention 🔍⚡.

What This Template Does

- Monitors All Workflows: Captures errors globally across your instance via the Error Trigger.
- Prevents Infinite Loops: Automatically filters out errors originating from the healing system itself.
- Fetches Live Context: Pulls the complete, latest JSON structure of the failing workflow for analysis.
- AI Root Cause Analysis: Uses Azure OpenAI (GPT-4o) to diagnose whether the issue is a "RETRY" (timeout/rate limit) or a "FIX" (invalid parameter/logic).
- Autonomous Patching: For logic errors, a JavaScript engine injects the AI's corrected values directly into the workflow code.
- Human-in-the-Loop Alerts: Sends detailed Slack notifications for successful auto-fixes, or requests manual help if the error is too complex.

Key Benefits

✅ 24/7 Autonomous Reliability: Workflows fix themselves while you sleep, significantly reducing downtime.
✅ Intelligent Recovery: Moves beyond simple retries by actually correcting "hard" errors like broken IDs or missing parameters.
✅ Production-Grade Governance: Adds a safety layer to your automation stack that learns and adapts to errors.

Features

- Global Error Listener: Catch-all trigger for instance-wide monitoring.
- Dual-Path Recovery: Distinct logic branches for transient vs. permanent failures.
- Wait-State Logic: Built-in "cool down" periods to respect external API rate limits during retries.
- AI Patching Engine: Structured output parsing ensures the AI provides valid, deployable code changes.
- Slack Integration: Real-time visibility into the "healing" process, with deep links to specific executions.

Requirements

- n8n Instance: Cloud or self-hosted, with API access enabled.
- Azure OpenAI Account: With a GPT-4o deployment and valid API credentials.
- n8n API Key: Allows the system to read and update workflows.
- Slack App: For receiving diagnostic alerts and success notifications.

Target Audience

- Enterprise Automation Teams: Managing high volumes of mission-critical workflows.
- n8n Power Users: Looking to build "bulletproof" automation infrastructure.
- SaaS Founders: Ensuring customer-facing integrations remain stable.
- Managed Service Providers (MSPs): Offering proactive automation monitoring and maintenance.
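The dual-path recovery can be sketched as a small dispatch function, assuming the AI's structured output looks like `{ action, nodeName, parameter, correctedValue }` — these field names are illustrative, not the template's exact schema:

```javascript
// Route the AI verdict: leave the workflow alone for transient errors,
// or patch the offending node parameter for logic errors.
function applyVerdict(workflowJson, verdict) {
  if (verdict.action === 'RETRY') {
    // Transient failure (timeout/rate limit): signal a retry, no patch.
    return { patched: false, workflow: workflowJson };
  }
  // "FIX": work on a deep copy, locate the failing node by name,
  // and overwrite the invalid parameter with the AI's corrected value.
  const workflow = JSON.parse(JSON.stringify(workflowJson));
  const node = workflow.nodes.find((n) => n.name === verdict.nodeName);
  if (!node) throw new Error(`Node "${verdict.nodeName}" not found`);
  node.parameters[verdict.parameter] = verdict.correctedValue;
  return { patched: true, workflow };
}
```

On the `patched: true` branch, the corrected JSON would then be written back through an update call to the n8n workflows API before re-triggering the execution.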
by Rahul Joshi
📘 Description

This workflow performs automated inventory reconciliation between Notion (physical counts) and Airtable (system counts), ensuring both databases stay synchronized. It fetches records from both systems, merges them into a unified comparison payload, validates the structure, and calculates discrepancies. If a mismatch is detected, the workflow automatically updates Airtable with the corrected count and notifies the operations team on Slack. If everything matches, a simple "No action needed" Slack message is sent. Any malformed or incomplete payloads are logged into Google Sheets for audit tracking.

⚙️ What This Workflow Does (Step-by-Step)

🟢 Manual Trigger – Execute Workflow: Starts the reconciliation process on demand.
📥 Fetch Records from Notion: Retrieves physical stock data (cycle count) stored in Notion.
📦 Fetch Records from Airtable: Loads inventory data from Airtable's system-of-record table.
🔀 Merge Notion + Airtable Inputs: Combines both datasets into a single payload for unified processing.
🔍 Validate Payload Structure (IF Node): Ensures that key fields (like id) exist. Valid → continue; invalid → logged to Google Sheets.
🧾 Log Invalid Versioning Requests to Google Sheets: Stores broken or incomplete payload entries for later review.
🧮 Build Combined Notion + Airtable Payload (Code Node): Constructs the structured comparison object: { notion: {...}, airtable: [...] }
📊 Compare Notion Record With Airtable Record (Code Node): Performs the core reconciliation logic: matches items by name, compares physical vs. system count, calculates the difference, and determines whether a correction is needed. If there is a mismatch, the record is flagged for update.
🔎 Check If Record Requires Update (IF Node): Branches the logic: mismatch → update Airtable + alert; match → "no action" summary.
🛠️ Update Airtable Record With Corrected Count: Writes the accurate physical count from Notion into Airtable.
🧠 Configure GPT-4o – Slack Summary Models: Two models, one for "no action needed" summaries and one for "Airtable updated" discrepancy alerts.
🤖 Generate Slack Summary / Generate Slack Summary1: AI produces short, precise, operations-friendly Slack messages based on whether a discrepancy existed.
💬 Slack – Send Summary Notification / Send Update Notification: Sends the final Slack message to the operations user, confirming stock match status, updates made, item details, and difference values.

🧩 Prerequisites

- Notion API integration
- Airtable API credentials
- Azure OpenAI GPT-4o
- Slack API connection
- Google Sheets OAuth

💡 Key Benefits

✔ Eliminates manual reconciliation errors
✔ Keeps Airtable continuously aligned with real physical counts
✔ Provides instant Slack visibility to operations teams
✔ Logs all invalid or malformed cases
✔ Centralizes Notion + Airtable consistency checks

👥 Perfect For

- Operations teams managing multi-system inventory
- Warehouse cycle count workflows
- Audit-driven companies needing accurate stock data
- Businesses using Notion + Airtable as parallel systems
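The Code-node comparison step described above can be sketched roughly as follows; field names (`itemName`, `physicalCount`, `systemCount`) are illustrative assumptions about the payload shape:

```javascript
// Match one Notion cycle-count record against the Airtable system-of-record
// rows by item name, compute the discrepancy, and flag it for the IF node.
function reconcile(notionRecord, airtableRecords) {
  const match = airtableRecords.find(
    (row) => row.itemName === notionRecord.itemName
  );
  if (!match) {
    // No counterpart in Airtable: surface it rather than silently skipping.
    return { status: 'missing_in_airtable', item: notionRecord.itemName };
  }
  const difference = notionRecord.physicalCount - match.systemCount;
  return {
    item: notionRecord.itemName,
    physicalCount: notionRecord.physicalCount,
    systemCount: match.systemCount,
    difference,
    needsUpdate: difference !== 0, // drives the mismatch/match branch
  };
}
```

The `needsUpdate` flag is what the "Check If Record Requires Update" IF node would branch on, with `difference` feeding the Slack discrepancy alert.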
by WeblineIndia
Zoho CRM - Conversation Intelligence Analyzer

This workflow automatically processes customer call recordings, transcribes them using OpenAI Whisper, extracts key topics, identifies commitments, analyzes sentiment, generates follow-up suggestions, and updates the corresponding Zoho CRM Lead — all without manual effort. It eliminates the need to listen to calls or write summaries, and equips your sales team with instant AI-generated insights.

⚡ Quick Start (Fast Setup)

1. Import the workflow JSON into n8n.
2. Add Zoho CRM OAuth2 & OpenAI API credentials.
3. Copy the webhook URL and configure your telephony system to POST call recordings.
4. Map Zoho custom fields.
5. Upload a test recording → confirm CRM updates → activate the workflow.

📘 What It Does

This workflow turns every incoming call recording into structured insights that your sales and customer support teams can use immediately. When a recording is received, the call is automatically transcribed using OpenAI's Whisper model. That transcript is then processed by multiple AI nodes that detect topics, customer sentiment, commitments, and possible follow-up actions. All extracted data — such as mood, sentiment score, subjects, action items, and commitments — is merged into a clean result object and pushed to the matching Lead in Zoho CRM. The sales team gets ready-to-use call intelligence instantly, improving decision-making, accuracy, and speed. This automation works 24/7 and replaces hours of manual review work with reliable AI-generated summaries.

👤 Who's It For

- Sales & customer support teams using Zoho CRM.
- Support teams handling inbound/outbound calls.
- Businesses wanting call analytics without manual transcription.
- Zoho CRM admins who want automation with minimal maintenance.
- Organizations using telephony/VoIP systems that support call exports.
🧾 Requirements

To use this workflow, you need:

- An n8n instance (self-hosted or cloud)
- Zoho CRM OAuth2 credentials
- OpenAI API key (Whisper + GPT models)
- A telephony system capable of POSTing audio files to a webhook
- Zoho fields to store: topics, main subject, action items, sentiment, mood, follow-up text, and commitments (optional)

⚙️ How It Works & How to Set Up

1. Webhook Trigger: Your call system sends an audio file (.mp3, .wav, etc.) to the webhook. The workflow starts instantly—no polling required.
2. Workflow Configuration: Static values like sentimentThreshold = 0.7 and minCommitmentConfidence = 0.8 ensure consistent logic across nodes.
3. Audio Transcription (OpenAI Whisper): The audio file is converted to text. This transcript becomes the base for all analysis nodes.
4. Key Topic Extraction: AI identifies key topics, the main subject, and important action items.
5. Sentiment & Mood Analysis: AI analyzes customer mood, sales rep tone, overall sentiment, and a sentiment score.
6. Commitment Extraction: AI detects commitments using a structured JSON schema.
7. Follow-up Generation: GPT generates 3–5 follow-up suggestions based on the transcript & commitments.
8. Combine All Insights: A Set node merges transcription, topics, sentiment, commitments, and follow-up text.
9. Update Zoho CRM Lead: Updates Zoho custom fields so the sales team gets immediate insights.
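The "Combine All Insights" step, including how the configured thresholds come into play, can be sketched like this. All field names are illustrative assumptions about the AI nodes' outputs:

```javascript
// Static configuration mirrored from the workflow's configuration node.
const config = { sentimentThreshold: 0.7, minCommitmentConfidence: 0.8 };

// Merge the outputs of the analysis nodes into one result object
// suitable for mapping onto Zoho custom fields.
function combineInsights(transcript, topics, sentiment, commitments, followUps) {
  return {
    transcript,
    topics: topics.keyTopics,
    mainSubject: topics.mainSubject,
    mood: sentiment.mood,
    sentimentScore: sentiment.score,
    // flag calls that fall below the threshold so alerting add-ons can react
    needsAttention: sentiment.score < config.sentimentThreshold,
    // keep only commitments the model is sufficiently confident about
    commitments: commitments.filter(
      (c) => c.confidence >= config.minCommitmentConfidence
    ),
    followUpText: followUps.join('\n'),
  };
}
```

In the actual template a Set node performs this merge declaratively; the sketch just makes the threshold logic explicit.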
🛠 How to Customize Nodes

- Transcription Node: Switch to another Whisper/GPT model; add language options.
- Topic Extraction: Add more attributes (risks, objections, intent).
- Sentiment Analysis: Tune thresholds; add more emotion labels.
- Commitment Extraction: Modify the schema; add filtering logic.
- CRM Update: Map to different fields; append notes instead of overwriting.

➕ Add-Ons (Optional Enhancements)

- Slack/Teams alerts for negative sentiment
- Email transcripts to teams
- Save files to Google Drive / S3
- Create Zoho tasks from commitments
- Multi-language transcription
- Sales rep performance scoring

💼 Use Case Examples

- **Sales Call Analysis** – Auto-summarize calls for follow-up.
- **Support Hotline Monitoring** – Detect customer frustration.
- **QA Audits** – Auto-generate evaluation notes.
- **Voice-to-CRM Logging** – Store conversation data automatically.
- **Compliance Tracking** – Capture legally relevant commitments.

🛠 Troubleshooting Guide

| Issue | Possible Cause | Solution |
|------|----------------|----------|
| Workflow not triggered | Telephony not hitting webhook | Recheck webhook URL & logs |
| Transcript empty | Unsupported/corrupted audio | Validate file before sending |
| CRM not updating | Wrong Zoho field IDs | Verify field IDs in Zoho |
| Commitments missing | Transcript unclear | Improve audio quality or edit schema |
| Sentiment inaccurate | Model interpretation | Adjust sentimentThreshold |

🤝 Need Help?

If you want to customize this workflow, integrate telephony systems, or build advanced CRM automation, our n8n workflow development team at WeblineIndia is happy to help. We're here to support setup, scaling, and custom enhancements.
by Joseph LePage
n8n Creators Leaderboard Workflow

Why Use This Workflow?

The n8n Creators Leaderboard Workflow is a powerful tool for analyzing and presenting detailed statistics about workflow creators and their contributions within the n8n community. It provides users with actionable insights into popular workflows, community trends, and top contributors, all while automating the process of data retrieval and report generation.

Benefits

- **Discover Popular Workflows**: Identify workflows with the most unique visitors and inserters (weekly and monthly).
- **Understand Community Trends**: Gain insights into what workflows are resonating with the community.
- **Recognize Top Contributors**: Highlight impactful creators to foster collaboration and inspiration.
- **Save Time with Automation**: Automates data fetching, processing, and reporting for efficiency.

Use Cases

- **For Workflow Creators**: Track performance metrics of your workflows to optimize them for better engagement.
- **For Community Managers**: Identify trends and recognize top contributors to improve community resources.
- **For New Users**: Explore popular workflows as inspiration for building your own automations.

How It Works

This workflow aggregates data from GitHub repositories containing statistics about workflow creators and their templates. It processes this data, filters it based on user input, and generates a detailed Markdown report using an AI agent.

Key Features

1. Data Aggregation: Fetches creator and workflow statistics from GitHub JSON files.
2. Custom Filtering: Focuses on specific creators based on a username provided via chat.
3. AI-Powered Reports: Generates comprehensive Markdown reports with summaries, tables, and insights.
4. Output Flexibility: Saves reports locally with timestamps for easy access.

Data Retrieval & Processing

- **Creators Data**: Retrieved via an HTTP Request node from a JSON file containing aggregated statistics about creators.
- **Workflows Data**: Pulled from another JSON file with workflow metrics like visitor counts and inserter statistics.
- **Data Merging**: Combines creator and workflow data by matching usernames to provide enriched statistics.

Report Generation

The AI agent generates a Markdown report that includes:

- A summary of the creator's contributions.
- A table of workflows with key metrics (e.g., unique visitors, inserters).
- Insights into trends or community feedback.

The report is saved locally as a file with a timestamp for tracking purposes.

Quick Start Guide

Prerequisites

- Ensure your n8n instance is running.
- Verify that the GitHub base URL and file variables are correctly set in the Global Variables node.
- Confirm that your OpenAI credentials are configured for the AI Agent node.

How to Start

1. Activate the Workflow: Make sure the workflow is active in your n8n environment.
2. Trigger via Chat: Use the Chat Trigger node to initiate the workflow by sending a message like: show me stats for username [desired_username]. Replace [desired_username] with the username you want to analyze.
3. Processing & Report Generation: The workflow fetches data, processes it, and generates a Markdown report.
4. View Output: The final report is saved locally as a file (with a timestamp), which you can review to explore leaderboard insights.
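The data-merging step can be sketched as a join on username; the property names (`username`, `user`, `uniqueVisitors`, `uniqueInserters`) are illustrative, since the actual GitHub JSON schemas are not shown here:

```javascript
// Enrich each creator record with their workflows and aggregate metrics,
// matching workflow rows to creators by username.
function mergeByUsername(creators, workflows) {
  return creators.map((creator) => {
    const owned = workflows.filter((w) => w.user === creator.username);
    return {
      ...creator,
      workflows: owned,
      totalUniqueVisitors: owned.reduce((s, w) => s + (w.uniqueVisitors || 0), 0),
      totalInserters: owned.reduce((s, w) => s + (w.uniqueInserters || 0), 0),
    };
  });
}
```

The enriched objects are what the AI agent would summarize into the per-creator Markdown report.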
by Rahi
WABA Message Journey Flow Documentation

This document outlines the automated workflow for sending WhatsApp messages to contacts, triggered hourly and managed through disposition and message-count logic. The workflow is designed to ensure contacts receive messages based on their status and the frequency of previous interactions.

Trigger and Data Retrieval

The journey begins with a time-based trigger and data retrieval from the Supabase contacts table.

- Trigger: A "Schedule Trigger3" node initiates the workflow every hour. This ensures that the system regularly checks for contacts requiring messages.
- Get Contacts: The "Get many rows1" node (Supabase) then retrieves all relevant contact data from the contacts_ampere table in Supabase. This brings in contact details such as name, phone, Disposition, Count, and last_message_sent.

Disposition-Based Segregation

After retrieving the contacts, the workflow segregates them based on their Disposition status.

- Disposition Switch: The "Disposition Switch" node acts as the primary routing mechanism. It evaluates the Disposition field of each contact and directs them to different branches of the workflow based on predefined categories.
- Case 0: new_lead: Contacts with the disposition new_lead are routed to the "Count Switch" for further processing.
- Cases 1-4: The workflow also includes branches for test_ride, Booking, walk_in, and Sale dispositions, though the detailed logic for these branches is not fully laid out in the provided JSON beyond the switch nodes ("Switch2", "Switch3", "Switch4", "Switch5"). This documentation focuses on the new_lead disposition's detailed flow, which can be replicated for the others.

Message Count Logic (for the new_lead Disposition)

For contacts identified as new_lead, the workflow uses a "Count Switch" to determine which message in the sequence should be sent.

- Count Switch: This node evaluates the Count field for each new_lead contact.
This Count likely represents the number of messages already sent to the contact within this specific journey.

- Count = 0: Directs to "Loop Over Items1" (first message in sequence).
- Count = 1: Directs to "Loop Over Items2" (second message in sequence).
- Count = 2: Directs to "Loop Over Items3" (third message in sequence).
- Count = 3: Directs to "Loop Over Items4" (fourth message in sequence).

Looping and Interval Check

Each "Loop Over Items" node processes contacts in batches and incorporates an "If Interval" check (except for "Loop Over Items1").

- Loop Over Items ("Loop Over Items1" through "Loop Over Items4"): These nodes iterate through the contacts received from the "Count Switch" output.
- Interval Logic:
  - "If Interval" (for Count = 1, from "Loop Over Items2"): Checks if the interval is greater than or equal to 4. This interval value is maintained by a separate Supabase cron job, which updates it every minute based on the current time minus the last API hit time, in hours.
  - "If Interval1" (for Count = 2, from "Loop Over Items3"): Checks if the interval is exactly 24 hours.
  - "If2" (for Count = 3, from "Loop Over Items4"): Checks if the interval is exactly 24 hours.

Sending WhatsApp Messages

If a contact passes the interval check (or immediately, for Count = 0), a WhatsApp message is sent using the Gallabox API.

HTTP Request Nodes ("new_lead_0", "new_lead_", "new_lead_3", "new_lead_2"): These nodes send the actual WhatsApp messages via the Gallabox API. They are configured with:

- Method: POST
- URL: https://server.gallabox.com/devapi/messages/whatsapp
- Authentication: apiKey and apiSecret in the headers.
- Body: Contains channelId, channelType (whatsapp), and recipient (including name and phone).
- WhatsApp Message Content: Includes type: "template" and templateName (e.g., testing_rahi, wu_2, testing_rahi_1). The bodyValues dynamically insert the contact's name and other details.
Some messages also include buttonValues for quick replies (e.g., "Show me Brochure").

Logging and Updating Contact Status

After a message is sent (or attempted), the workflow logs the interaction and updates the contact's record.

- Create Logs ("Create Logs", "Create Logs1", "Create Logs2", "Create Logs3"): These Supabase nodes record details of the message send attempt into the logs_nurture_ampere table, including:
  - message_id (from the Gallabox API response body)
  - phone and name of the contact
  - disposition and mes_count (which is Count + 1 from the contacts table)
  - last_sent (timestamp from the Gallabox API response headers)
  - status_code and status_message (from the Gallabox API response or error)
  These nodes are configured to "continueRegularOutput" on error, meaning the workflow will attempt to proceed even if logging fails.
- Status Code Check ("If StatusCode", "If StatusCode 202", "If StatusCode 203", "If StatusCode 204"): Immediately after attempting to create a log, an "If" node checks whether the status_code from the message send attempt is "202" (indicating acceptance by the messaging service).
- Update Contact Row ("Update a row1", "Update a row2", "Update a row3", "Update a row4"): If the status code is 202, these Supabase nodes update the contacts_ampere table for the specific contact: the Count is incremented by 1 (Count + 1), and the last_message_sent field is updated with the date from the Gallabox API response headers. These nodes are also configured to "continueRegularOutput" on error.

This structured flow ensures that contacts are nurtured through a sequence of WhatsApp messages, with each interaction logged and the contact's status updated for future reference and continuation of the journey.
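The request body the HTTP Request nodes build can be sketched as below. The top-level fields (channelId, channelType, recipient, template type/name, bodyValues) come from the description above, but the exact nesting is an assumption and should be checked against Gallabox's API documentation:

```javascript
// Build the Gallabox template-message payload for one contact.
// Nesting of the `whatsapp`/`template` object is an assumption.
function buildGallaboxPayload(contact, templateName, channelId) {
  return {
    channelId,
    channelType: 'whatsapp',
    recipient: { name: contact.name, phone: contact.phone },
    whatsapp: {
      type: 'template',
      template: {
        templateName, // e.g. 'testing_rahi', 'wu_2', 'testing_rahi_1'
        bodyValues: { name: contact.name }, // personalizes the message body
      },
    },
  };
}

// The POST itself, with apiKey/apiSecret carried in the headers:
// fetch('https://server.gallabox.com/devapi/messages/whatsapp', {
//   method: 'POST',
//   headers: { apiKey: '...', apiSecret: '...', 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildGallaboxPayload(contact, 'testing_rahi', CHANNEL_ID)),
// });
```

On a 202 response, the downstream nodes would then log the attempt and increment the contact's Count as described above.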