by Oneclick AI Squad
This automated n8n workflow continuously tracks real-time flight fare changes by querying airline APIs (e.g., Amadeus, Skyscanner). It compares new prices with historical fares and sends instant notifications to users when a fare drop is detected. All tracked data is structured and logged for audit and analysis.

## Key Insights

- Works post-booking to track price fluctuations for booked routes.
- Supports multiple fare sources for improved accuracy and comparison.
- Users are notified instantly via email, SMS, or Slack for high-value drops.
- Historical pricing data is stored for trend analysis and refund eligibility checks.
- Can be extended to monitor specific routes or apply airline-specific refund rules.

## Workflow Process

1. **Schedule Trigger**: Initiates the fare check every 6 hours.
2. **Fetch Flight Fare Data**: Queries APIs (Amadeus, Skyscanner) for current flight fares.
3. **Get Tracked Bookings**: Retrieves tracked routes from the internal database.
4. **Compare Fares**: Detects price drops compared to original booking fares.
5. **Update Fare History Table**: Logs the new fare and timestamp into the `fare_tracking` table.
6. **Classify Drops**: Determines priority based on absolute and percentage savings.
7. **Notify Users**:
   - Email alerts for all medium/high-priority drops.
   - SMS alerts for savings over $100 or 15%.
   - Slack notifications for internal alerts and rebooking suggestions.
8. **Log Activity**: Stores all sync actions and notifications in `fare_alert_logs`.

## Usage Guide

1. **Import** the workflow into your n8n instance.
2. **Set up** API credentials for Amadeus and Skyscanner.
3. Configure email, SMS (Twilio), and Slack credentials.
4. Update the booking database with valid records (route, fare, timestamp).
5. Set the schedule frequency (e.g., every 6 hours).
6. Review logs regularly to monitor fare alert activity and system health.
## Prerequisites

Valid accounts and credentials for:

- Amadeus API
- Skyscanner API
- SendGrid (or SMTP) for email
- Twilio for SMS
- Slack workspace & bot token
- PostgreSQL or MySQL database for fare tracking
- Tracked booking dataset (with routes, fares, and user contacts)

## Customization Options

- Adjust alert thresholds in the comparison logic (e.g., trigger only if the fare drops by more than $50 or 10%).
- Add new notification channels (e.g., WhatsApp, Telegram).
- Extend the logic to track multi-leg or round-trip fares.
- Integrate airline refund APIs (where supported) to auto-initiate refund or credit requests.

## Excel Output Columns

When exporting or logging fare tracking data to Excel or CSV, use the following structure:

| flight_number | airline | departure | arrival | departure_time | arrival_time | current_fare | route | timestamp |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| AT5049 | Royal Air Maroc | John F Kennedy International | Los Angeles International | 2025-07-21T06:00:00+00:00 | 2025-07-21T08:59:00+00:00 | 235 | JFK-LAX | 2025-07-21T13:04:14.000Z |
| BA1905 | British Airways | John F Kennedy International | Los Angeles International | 2025-07-21T06:00:00+00:00 | 2025-07-21T08:59:00+00:00 | 479 | JFK-LAX | 2025-07-21T13:04:14.000Z |
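The Compare Fares and Classify Drops steps can be sketched as an n8n Code node. The $100 / 15% (SMS) and $50 / 10% thresholds come from the description above; the exact priority labels and field names are illustrative assumptions.

```javascript
// Hedged sketch of the fare-drop classification logic, assuming the
// thresholds described above ($100 or 15% => high, $50 or 10% => medium).
function classifyFareDrop(originalFare, currentFare) {
  const savings = originalFare - currentFare;
  if (savings <= 0) {
    return { drop: false, priority: 'none', savings: 0, percent: 0 };
  }
  const percent = (savings / originalFare) * 100;

  // High priority triggers SMS; medium and high both trigger email.
  let priority = 'low';
  if (savings > 100 || percent > 15) priority = 'high';
  else if (savings > 50 || percent > 10) priority = 'medium';

  return { drop: true, priority, savings, percent: Math.round(percent * 10) / 10 };
}
```

A fare falling from 479 to 235 (as in the sample table) saves $244, so it would classify as a high-priority drop and fan out to email, SMS, and Slack.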
by Harsh Maniya
## 📱🤖 Create Stunning AI Images Directly from WhatsApp with Gemini

This workflow turns your WhatsApp into a personal AI image generation studio. Simply send a text message with your idea, and this bot will use the advanced prompt engineering capabilities of Gemini 2.5 Pro to craft a detailed, artistic prompt. It then uses Gemini 2.0 Flash to generate a high-quality image from that prompt and sends it right back to your chat. It's a powerful yet simple way to bring your creative ideas to life, all from the convenience of your favorite messaging app!

## What this workflow does

1. **Listens for WhatsApp messages**: The workflow starts automatically when you send a message to your connected WhatsApp number.
2. **Enhances your idea with AI**: It takes your basic text (e.g., "a knight on a horse") and uses Gemini 2.5 Pro to expand it into a rich, detailed prompt perfect for image generation (e.g., "A cinematic, full-body shot of a stoic knight in gleaming, ornate silver armor, riding a powerful black warhorse through a misty, ancient forest. The scene is lit by ethereal morning sunbeams piercing the canopy, creating dramatic volumetric lighting and long shadows. Photorealistic, 8K, ultra-detailed, award-winning fantasy concept art.").
3. **Generates a unique image**: It sends this enhanced prompt to the Google Gemini 2.0 Flash image generation API.
4. **Prepares the image**: The API returns the image in Base64 format, and the workflow instantly converts it into a binary file.
5. **Sends it back to you**: The final, high-quality image is sent directly back to you in the same WhatsApp chat.

## Nodes used

- 🟢 **WhatsApp Trigger**: The entry point that listens for incoming messages.
- 🧠 **LangChain Chain (LLM)**: Uses Gemini 2.5 Pro for advanced prompt engineering.
- ➡️ **HTTP Request**: Calls the Google Gemini 2.0 Flash API to generate the image.
- 🔄 **Convert to File**: Converts the Base64 image data into a sendable file format.
- 💬 **WhatsApp**: Sends the final image back to the user.
## Prerequisites

To use this workflow, you will need:

- An n8n instance.
- A WhatsApp Business Account connected to n8n. You can find setup instructions in the n8n docs.
- A Google Gemini API key. You can get one for free from Google AI Studio.

## How to use this workflow

1. **Get your Google Gemini API key**: Visit Google AI Studio and create a new API key.
2. **Configure the Gemini 2.5 Pro node**: In the n8n workflow, select the Gemini 2.5 Pro node. Under 'Connect your account', click 'Create New' to add a new credential, paste your Gemini API key from the previous step, and save.
3. **Configure the Generate Image (HTTP Request) node**: Select the Generate Image node. In the properties panel on the right, find the Query Parameters section and, in the 'Value' field for the `key` parameter, replace "Your API Key" with your actual Google Gemini API key.
4. **Connect WhatsApp**: Select the WhatsApp Trigger node and follow the instructions to connect your WhatsApp Business Account credential. If you haven't created one, the node will guide you through the process.
5. **Activate and test**: Save the workflow using the button at the top right, activate it using the toggle switch, and send a message to your connected WhatsApp number (e.g., "A futuristic city in the style of Van Gogh"). The bot will process your request and send a stunning AI-generated image right back to you!
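The Convert to File step essentially decodes the Base64 string returned by the Gemini API into binary data WhatsApp can send. A minimal sketch of that conversion, assuming the image arrives as a plain or data-URL-prefixed Base64 string:

```javascript
// Hedged sketch of the Base64-to-binary conversion the "Convert to File"
// node performs. The optional data-URL prefix handling is an assumption;
// the Gemini REST API typically returns raw Base64.
function base64ToBinary(base64Image) {
  // Strip an optional data-URL prefix such as "data:image/png;base64,".
  const cleaned = base64Image.replace(/^data:image\/\w+;base64,/, '');
  return Buffer.from(cleaned, 'base64');
}
```

In n8n itself you would configure the Convert to File node's "Move Base64 String to File" operation rather than writing this by hand; the sketch just shows what happens under the hood.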
by keisha kalra
## Try It Out!

This n8n template helps you create SEO-optimized blog posts for your business's website or for personal use. Whether you're managing a business or helping local restaurants improve their digital presence, this workflow builds SEO-optimized blog posts in seconds using Google Autocomplete and People Also Ask (PAA) data via SerpAPI.

## Who Is It For?

Anyone looking to SEO-optimize their own website or someone else's.

## How It Works

1. You start with a list of blog inspirations in Google Sheets (e.g., "Best Photo Session Spots").
2. The workflow only processes rows where the "Status" column is not marked as "done", though you can remove this condition if you'd like to process all rows each time.
3. The workflow pulls Google Autocomplete suggestions and PAA questions using:
   - a custom-built SEO API I deployed via Render (for Google Autocomplete + PAA), and
   - SerpAPI (for additional PAA data).
4. These search insights are merged. For example, if your blog idea is "Photo Session Spots", the workflow gathers related Google search phrases and questions users are asking.
5. GPT-4 then drafts a full blog post based on this data.
6. The finished post is saved back into your Google Sheet.

## How To Use

- Fill out the "Blog Inspiration" column in your Google Sheet with the topics you want to write about.
- Update the OpenAI prompt in the ChatGPT node to match your tone or writing style. (Tip: add a system prompt with context about your business or audience.)
- You can trigger this manually, or replace the trigger with a cron schedule, webhook, or other event.

## Requirements

- A SerpAPI account (for PAA data)
- An OpenAI account for ChatGPT
- Access to Google Sheets and n8n

## How To Set Up

Your Google Sheet should have three columns:

- "Blog Inspiration"
- "Status" → set this to "done" when a post has been generated
- "Blog Draft" → automatically filled by the workflow

To use the SerpAPI HTTP Request node:

1. Drag in an HTTP Request node.
2. Set the Method and URL depending on how you're using SerpAPI:
   - Use POST to run the actor live on each request.
   - Use GET to fetch from a static dataset (cheaper if reusing the same data).
3. Add query parameters for your SerpAPI key and input values.
4. Test the node.

Refer to the n8n documentation for more help: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/

The "Autocomplete" node connects to a custom web service I built and deployed using Render. I wrote a Python script (hosted on GitHub) that pulls live Google Autocomplete suggestions and PAA questions based on any topic you send. This script was turned into an API and deployed as a public web service via Render. Anyone can use it by sending a POST request to https://seo-api2.onrender.com/get-seo-data (the URL is already in the node).

Since this is hosted on Render's free tier, the service may "go to sleep" if it hasn't been used in the past ~15 minutes. When that happens, the first request can take 10–30 seconds to respond while it wakes up.

Happy Automating!
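The merge step, combining Autocomplete suggestions with PAA questions before handing them to GPT-4, could look like this in an n8n Code node. The input shapes (two plain arrays of phrases) are assumptions; check the actual node outputs for the exact structure.

```javascript
// Hedged sketch of merging Autocomplete phrases (from the Render API) with
// PAA questions (from SerpAPI), deduplicating case-insensitively so GPT-4
// receives each search insight only once.
function mergeInsights(autocomplete, paaQuestions) {
  const seen = new Set();
  const merged = [];
  for (const phrase of [...autocomplete, ...paaQuestions]) {
    const key = phrase.trim().toLowerCase();
    if (key && !seen.has(key)) {
      seen.add(key);
      merged.push(phrase.trim());
    }
  }
  return merged;
}
```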
by Sirisak Chantanate
## Workflow Overview

Extracting text from images with AI is valuable because it requires no code. This workflow uses the Google Gemini 2.0 Flash model to pull the important text out of pay slip images. Without AI you would have to write many conditionals and risk plenty of bugs, but with Google Gemini no coding is needed, and if a pay slip's layout differs, Gemini still extracts the fields automatically.

## Workflow Description

1. A user sends a pay slip image or a text message to the chatbot via the Line Messaging API (create a Line Business ID here: Line Business).
2. The workflow classifies the message as image or text.
3. If the message is a pay slip image, it is processed with Gemini 2.0 Flash EXP, which extracts the important information and responds in JSON format, without any coding, using the following prompt:

   > Analyze image and then return in JSON Response that has the only following value: Status, From, To, Date, Amount

4. To get a Google AI Studio API key, use the following link: Google AI Studio API Key.
5. Create a Google Sheet that includes the fields (Status, From, To, Date, Amount) matching the AI prompt, as in the example.
6. If the message is text, it is handled by the Gemini 2.0 Flash EXP model acting as an AI assistant; if it is an image, the important fields are extracted, a reply is sent to the user, and the data is inserted into Google Sheets.

## Key Features

- **Extract text from images with no code**: Without n8n, we would have to write code to extract text from images, but with n8n and Google Gemini 2.0 Flash EXP together, no coding is needed and the workflow handles slips (or other documents) from any vendor.
- **Multipurpose chatbot**: The chatbot accepts both text and images, so there is no need to create multiple chatbot accounts.
- **Reduced human error**: The workflow lets any officer verify document status when the job ends.

Note: You can change the extracted information by editing the prompt, updating the Google Sheets column names to match.
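If you want a safety net before inserting rows into Google Sheets, a small Code node could validate the JSON Gemini returns. The five field names come from the prompt above; the markdown-fence stripping and everything else is an assumption about how you might harden the flow, not part of the shipped template.

```javascript
// Hedged sketch of a validation guard for Gemini's JSON reply before the
// Google Sheets insert. Field names (Status, From, To, Date, Amount) come
// from the extraction prompt.
function parsePaySlip(modelOutput) {
  // Models sometimes wrap JSON in markdown fences; strip them first.
  const raw = modelOutput.replace(/^```(?:json)?\s*|\s*```$/g, '');
  const data = JSON.parse(raw);
  const required = ['Status', 'From', 'To', 'Date', 'Amount'];
  for (const field of required) {
    if (!(field in data)) throw new Error(`Missing field: ${field}`);
  }
  return data;
}
```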
by Davide
This workflow dynamically chooses between two powerful new Anthropic models — Claude Opus 4 and Claude Sonnet 4 — to handle user queries based on their complexity and nature, maintaining scalability and context awareness with Anthropic's web search function and a Think tool.

## Key Advantages

- 🔁 **Dynamic model selection**: Automatically routes each user query to either Claude Sonnet 4 (for routine tasks) or Claude Opus 4 (for complex reasoning), ensuring optimal performance and cost-efficiency.
- 🧠 **AI agent with tool use**: The AI agent can use a web search tool to retrieve up-to-date information and a Think tool for complex reasoning, improving response quality.
- 📎 **Memory integration**: Uses session-based memory to maintain conversational context, making interactions more coherent and human-like.
- 🧮 **Built-in calculation tool**: Handles numeric queries with an integrated calculator tool, reducing the need for external processing.
- 📤 **Structured output parser**: Ensures outputs are always well-structured JSON, improving consistency and downstream integrations.
- 🌐 **Web search capability**: Supports real-time information retrieval for current events, statistics, or details not in the AI's base knowledge.

## Components Overview

- **Trigger**: Listens for new chat messages.
- **Routing Agent**: Analyzes the message and returns the best model to use.
- **AI Agent**: Handles the conversation and decides when to use tools.
- **Tools**: `web_search` for internet queries, Think for reasoning, Calculator for math tasks.
- **Models used**:
  - `claude-sonnet-4-20250514`: Optimized for general and business-logic tasks.
  - `claude-opus-4-20250514`: Best for deep, strategic, and analytical queries.

## How It Works

### Dynamic model selection

The workflow begins when a chat message is received. The Anthropic Routing Agent analyzes the user's query to determine the most suitable model (either Claude Sonnet 4 or Claude Opus 4) based on the query's complexity and requirements.
The routing agent uses predefined criteria to decide:

- **Claude Sonnet 4**: Best for standard tasks like real-time workflow routing, data validation, and routine business logic.
- **Claude Opus 4**: Reserved for complex scenarios requiring deep reasoning, advanced analysis, or high-impact decisions.

### Query processing and response generation

The selected model processes the query, leveraging tools such as `web_search` for real-time information retrieval, Think for internal reasoning, and Calculator for numerical tasks. The AI Agent coordinates these tools, ensuring the response is accurate and context-aware. A Simple Memory node retains session context for coherent multi-turn conversations. The final response is formatted and returned to the user without intermediate steps or metadata.

## Set Up Steps

### Node configuration

1. **Trigger**: Configure the "When chat message received" node to handle incoming user queries.
2. **Routing Agent**: Set up the "Anthropic Routing Agent" with the system message defining the model selection logic. Ensure it outputs a JSON object with `prompt` and `model` fields.
3. **AI model nodes**: Link the "Sonnet 4 or Opus 4" node to dynamically use the selected model. The "Sonnet 3.7" node powers the routing agent itself.

### Tool integration

- Attach the `web_search` HTTP tool to enable internet searches, ensuring the API endpoint and headers (e.g., `anthropic-version`) are correctly configured.
- Connect the auxiliary tools (Think, Calculator) to the AI Agent for extended functionality.
- Add the "Simple Memory" node to maintain conversation history.

### Credentials

Provide an Anthropic API key to all nodes requiring authentication (e.g., model nodes, web search).

### Testing

Activate the workflow and test with sample queries to verify:

- correct model selection (e.g., Sonnet for simple queries, Opus for complex ones),
- proper tool usage (e.g., web searches trigger when needed), and
- memory retention across chat turns.

### Deployment

Once validated, set the workflow to active for live interactions.

Need help customizing?
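Because the routing agent must output a JSON object with `prompt` and `model` fields, a small validation step keeps a malformed router reply from breaking the downstream model node. This is a hedged sketch; the fallback-to-Sonnet choice is an assumption, not part of the template.

```javascript
// Hedged sketch of validating the routing agent's output. Model IDs come
// from the description; falling back to the cheaper Sonnet model on an
// unknown ID is an illustrative assumption.
const ALLOWED_MODELS = [
  'claude-sonnet-4-20250514',
  'claude-opus-4-20250514',
];

function validateRoutingOutput(output) {
  const { prompt, model } = output;
  if (typeof prompt !== 'string' || !prompt.trim()) {
    throw new Error('Routing agent must return a non-empty prompt');
  }
  return {
    prompt,
    model: ALLOWED_MODELS.includes(model) ? model : ALLOWED_MODELS[0],
  };
}
```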
Contact me for consulting and support, or add me on LinkedIn.
by Oneclick AI Squad
This guide details the setup and functionality of an automated workflow that monitors the health, uptime, and SLA compliance of travel supplier APIs, specifically the Amadeus Flight API and the Booking.com Hotel API. The workflow runs every 10 minutes, processes health and SLA data, and sends alerts or logs results based on status.

## What It Monitors

- **API Health**: UP/DOWN status with health indicators.
- **Uptime Tracking**: Real-time availability percentage.
- **SLA Compliance**: Automatic breach detection and alerts.
- **Performance Rating**: Classified as EXCELLENT, GOOD, AVERAGE, or POOR.

## Features

- Runs every 10 minutes automatically.
- Monitors the Amadeus Flight API against a 99.5% SLA target.
- Monitors the Booking.com Hotel API against a 99.0% SLA target.
- Smart alerts notify via WhatsApp only on SLA breaches.
- Logs results for both breaches and normal status.

## Workflow Steps

1. **Monitor Schedule**: Triggers the workflow every 10 minutes automatically.
2. **Amadeus Flight API**: Tests the Amadeus Flight API (GET: https://api.amadeus.com) in parallel.
3. **Booking Hotel API**: Tests the Booking.com Hotel API (GET: https://distribution-xml.booking.com) in parallel.
4. **Calculate Health & SLA**: Processes health, uptime, and SLA data.
5. **Alert Check**: Routes to the appropriate branch based on breach status.
6. **SLA Breach Alert**: Raises an alert (via throwError) when an SLA breach occurs.
7. **Normal Status Log**: Records results for healthy status.
8. **Send Message**: Sends a WhatsApp message for breach alerts.

## How to Use

1. Copy the JSON configuration of the workflow.
2. Import it into your n8n workspace.
3. Activate the workflow.
4. Monitor results in the execution logs and WhatsApp notifications.

The workflow will automatically start tracking your travel suppliers and alert you via WhatsApp only when SLA thresholds are breached. Please double-check responses to ensure accuracy.

## Requirements

- n8n account and instance setup.
- API credentials for the Amadeus Flight API (e.g., https://api.amadeus.com).
- API credentials for the Booking.com Hotel API (e.g., https://distribution-xml.booking.com).
- WhatsApp integration for sending alerts.

## Customizing this Workflow

- Adjust the Monitor Schedule interval to change the frequency (e.g., every 5 or 15 minutes).
- Modify SLA targets in the Calculate Health & SLA node to align with your service agreements (e.g., 99.9% for Amadeus).
- Update API endpoints or credentials in the Amadeus Flight API and Booking Hotel API nodes for different suppliers.
- Customize the Send Message node to integrate with other messaging platforms (e.g., Slack, email).
- Enhance the Normal Status Log to include additional metrics or export logs to a database.
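The Calculate Health & SLA step can be sketched as follows. The SLA targets (99.5% / 99.0%) and the EXCELLENT/GOOD/AVERAGE/POOR ratings come from the description; the rating cut-offs themselves are assumptions you would tune to your own agreements.

```javascript
// Hedged sketch of the "Calculate Health & SLA" logic: derive uptime from
// check counts, compare against the SLA target, and assign a performance
// rating. The rating thresholds (99.9 / 99.0 / 95.0) are illustrative.
function evaluateSla(apiName, successfulChecks, totalChecks, slaTarget) {
  const uptime = totalChecks === 0 ? 0 : (successfulChecks / totalChecks) * 100;
  const breached = uptime < slaTarget;

  let rating = 'POOR';
  if (uptime >= 99.9) rating = 'EXCELLENT';
  else if (uptime >= 99.0) rating = 'GOOD';
  else if (uptime >= 95.0) rating = 'AVERAGE';

  return { apiName, uptime: Math.round(uptime * 100) / 100, slaTarget, breached, rating };
}
```

Only items where `breached` is true would continue down the WhatsApp alert branch; everything else goes to the Normal Status Log.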
by Yaron Been
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow automatically tracks brand mentions across various online platforms by scraping blog posts and articles for specific brand references. It saves you time by eliminating manual searches for brand mentions and provides sentiment analysis on how your brand is being discussed online.

## Overview

This workflow automatically scrapes Medium blog posts and other online content to find mentions of specific brands (like OpenAI) and performs sentiment analysis on the content. It uses Bright Data to access content without restrictions and AI to intelligently extract brand-related information, analyze sentiment, and summarize key points about brand coverage.

## Tools Used

- **n8n**: The automation platform that orchestrates the workflow
- **Bright Data**: For scraping blog posts and articles without being blocked
- **OpenAI**: AI agent for intelligent content analysis and sentiment extraction
- **Google Sheets**: For storing brand mention data and sentiment analysis results

## How to Install

1. **Import the workflow**: Download the .json file and import it into your n8n instance
2. **Configure Bright Data**: Add your Bright Data credentials to the MCP Client node
3. **Set up OpenAI**: Configure your OpenAI API credentials
4. **Configure Google Sheets**: Connect your Google Sheets account and set up your brand monitoring spreadsheet
5. **Customize**: Define target URLs and brand keywords to monitor

## Use Cases

- **Brand Monitoring**: Track how your brand is mentioned and discussed online
- **Public Relations**: Monitor media coverage and public sentiment about your brand
- **Competitive Intelligence**: Track mentions of competitor brands and market perception
- **Crisis Management**: Quickly identify negative brand mentions for rapid response

## Connect with Me

- **Website**: https://www.nofluff.online
- **YouTube**: https://www.youtube.com/@YaronBeen/videos
- **LinkedIn**: https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data**:
  https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #brandmonitoring #sentimentanalysis #brightdata #webscraping #brandmentions #n8nworkflow #workflow #nocode #mediamonitoring #brandtracking #publicrelations #brandanalytics #onlinemonitoring #contentanalysis #brandsentiment #digitalmonitoring #brandresearch #mediaanalysis #brandinsights #reputationmanagement #brandwatch #socialmediamonitoring #contentmonitoring #brandpresence #digitalpr #brandlistening #mediatracking #onlinereputation
by Omar Akoudad
This workflow is designed for CRM analysis with a robust quality-control mechanism. The dual-AI approach ensures reliable results, while the webhook integration makes it production-ready for real-time CRM data processing.

- **Dual-AI Architecture**: Uses DeepSeek Reasoner for analysis and DeepSeek Chat for verification.
- **Flexible Input**: Supports both manual testing and production webhook integration.
- **Quality Assurance**: Built-in verification system to ensure report accuracy.
- **Comprehensive Analysis**: Covers lead conversion, upsell metrics, agent ranking, and more.
- **Professional Output**: Generates structured markdown reports with actionable insights.
by Oneclick AI Squad
This n8n workflow automatically creates and sends regular performance summaries to parents using data from a Learning Management System (LMS). It pulls student grades and attendance, formats them into easy-to-read reports, and emails them without any manual work.

## Good to Know

- **Fully automated**: Generates reports and sends emails using LMS data.
- **Regular updates**: Sends summaries on a set schedule (e.g., every Monday at 9 AM).
- **Clear reports**: Includes student grades, attendance, and progress notes.
- **Error alerts**: Notifies admins via email if something goes wrong.
- **Scalable**: Works for multiple students across different classes.

## How It Works (Report Generation Flow)

1. **Weekly Trigger**: Starts the process every Monday at 9 AM.
2. **Fetch LMS Data**: Pulls grades, attendance, and progress from the LMS.
3. **Process Data**: Organizes the data into a clear report format.
4. **Generate HTML Report**: Creates a readable report with student details.
5. **Send Email to Parents**: Emails the report to parents' addresses.
6. **Log Report Delivery**: Records the sent reports in a log.

## Example Sheet Columns

- **Student ID**: Unique identifier for each student.
- **Name**: Full name of the student.
- **Grade**: Current academic grade or score.
- **Attendance**: Percentage of classes attended.
- **Progress Notes**: Brief comments on performance.
- **Report Date**: Date the report was generated.

## How to Use

1. **Import workflow**: Add the workflow to n8n using the "Import Workflow" option.
2. **Set up LMS access**: Configure n8n with LMS credentials to fetch data.
3. **Configure email**: Add parent email addresses and set up an email service (e.g., Gmail).
4. **Activate workflow**: Save and turn on the workflow in n8n.
5. **Check logs**: Verify reports are sent and logs are updated.

## Requirements

- **n8n instance**: Self-hosted or cloud-based n8n setup.
- **LMS access**: API or credentials to connect to the LMS.
- **Email service**: SMTP setup (e.g., Gmail) for sending reports.
- **Admin oversight**: Someone to monitor and fix any errors.
## Customizing This Workflow

- **Change schedule**: Adjust the trigger to send reports weekly or monthly.
- **Add more data**: Include extra LMS fields such as behavior notes.
- **Custom email**: Change the email template for a personalized touch.
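The Generate HTML Report step can be sketched as a Code node that builds simple HTML from the sheet columns listed above. The layout is an illustrative assumption, not the template shipped with the workflow.

```javascript
// Hedged sketch of the "Generate HTML Report" step: turns one student's
// row (using the example sheet columns) into an HTML fragment for the
// parent email. The markup here is illustrative.
function buildReportHtml(student) {
  return [
    `<h2>Weekly Report for ${student.name}</h2>`,
    `<p><strong>Report date:</strong> ${student.reportDate}</p>`,
    '<ul>',
    `  <li>Grade: ${student.grade}</li>`,
    `  <li>Attendance: ${student.attendance}%</li>`,
    `  <li>Progress notes: ${student.progressNotes}</li>`,
    '</ul>',
  ].join('\n');
}
```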
by Baptiste Fort
## Telegram Voice Message → Automatic Email

Imagine: what if you could turn a simple Telegram voice message into a professional email, without typing, copying, pasting, or even opening Gmail? This workflow does it all for you: just record a voice note, and it will transcribe, format, and write a clean HTML email, then send it to the right person, all by itself.

### Prerequisites

- **Create a Telegram bot** (via BotFather) and get the token.
- **Have an OpenAI account** (API key for Whisper and GPT-4).
- **Set up a Gmail account with OAuth2.**
- **Import the JSON template** into your automation platform.

## 🧩 Detailed Flow Architecture

### 1. Telegram Trigger

**Node: Telegram Trigger**

This node listens for all Message events received by the specified bot (e.g., "BOT OFFICIEL BAPTISTE"). Whenever a user sends a voice message, the trigger fires automatically.

> ⚠️ Only one Telegram trigger per bot is possible (API limitation).

Key parameter: **Trigger On**: Message

### 2. Wait

**Node: Wait**

Used to buffer or smooth out calls and avoid collisions if you receive several voice messages in a row.

### 3. Retrieve the Audio File

**Node: Get a file**

- **Type**: Telegram (resource: file)
- **Parameter**: fileId = `{{ $json["message"]["file_id"] }}`

This node fetches the voice file received in step 1 from Telegram.

### 4. Automatic Transcription (Whisper)

**Node: Transcribe a recording**

- **Resource**: audio
- **Operation**: transcribe
- **API key**: your OpenAI account

The audio file is sent to OpenAI Whisper; the output is clean, accurate text ready to be processed.

### 5. Optional Wait (Wait1)

**Node: Wait1**

Same purpose as step 2: useful if you want to buffer or add a delay to absorb processing time.

### 6. Structured Email Generation (GPT-4 + Output Parser)

**Node: AI Agent**

This is the core of the flow: the transcribed text is sent to GPT-4 (here GPT-4.1-mini, via the OpenAI Chat Model).

Prompt used:

> You are an assistant specialized in writing professional emails.
> Based on the text below, you must: {{ $json.text }}
>
> - Detect if there is a recipient's email address in the text (or something similar, like "send to fort.baptiste.pro").
> - If it's not a complete address, complete it by assuming it ends with @gmail.com.
> - Understand the user's intent (resignation, refusal, application, excuse, request, etc.).
> - Generate a relevant and concise email subject, faithful to the content.
> - Write a professional message, structured in HTML: with a polite tone adapted to the situation, formatted with HTML tags, and with no spelling mistakes in French.
> - My first name is Jeremy, and if the text says he is not happy, specify the wish to resign.
>
> ⚠️ You must always return your answer in the following strict JSON format, with no extra text:
>
>     { "email": "adresse@gmail.com", "subject": "Objet de l'email", "body": "Contenu HTML de l'email" }

Everything is strictly validated and formatted by the Structured Output Parser node.

### 7. Automatic Email Sending (Gmail)

**Node: Send a message**

- **To**: `{{ $json.output.email }}`
- **Subject**: `{{ $json.output.subject }}`
- **HTML Body**: `{{ $json.output.body }}`

This node takes the JSON structure returned by the AI and sends the email via Gmail: right recipient, correct subject, full HTML formatting.
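The checks the Structured Output Parser enforces can be sketched as follows. The @gmail.com completion rule mirrors the prompt above; treating it as a post-parse normalization step (rather than leaving it entirely to the model) is an assumption.

```javascript
// Hedged sketch of validating and normalizing the AI's strict-JSON reply
// before the Gmail node uses it. Completing a bare username with
// "@gmail.com" follows the rule stated in the prompt.
function normalizeEmailOutput(output) {
  const { email, subject, body } = output;
  if (!email || !subject || !body) {
    throw new Error('AI reply must contain email, subject and body');
  }
  const fullEmail = email.includes('@') ? email : `${email}@gmail.com`;
  return { email: fullEmail, subject, body };
}
```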
by Mohamed Abubakkar
## How It Works

The workflow runs automatically every day and collects analytics data for both today and yesterday. It cleans and standardizes both datasets in the same way so they are easy to compare. It then measures how performance has changed from one day to the next and interprets those changes to understand trends and context. Once all calculations are finished, the AI creates a clear, easy-to-read summary of what happened. This summary is formatted and sent through the required communication channels, while the final data is saved for tracking over time and for creating follow-up tasks if needed.

## Key Features

- Trigger runs once per day (11:57 PM recommended).
- Fetches separate data for today and yesterday.
- Compares the two days' data and highlights when traffic is lower.
- Adds a `lowTraffic` trend flag to identify low-traffic days.
- Uses GPT-4 Mini to produce a human-readable summary suited to different communication channels.
- Sends the report via WhatsApp and email.
- Stores the final structured data in Google Sheets for future analytics and historical records.

## Setup Steps

### 1. Connect Required Credentials

You must connect the following credentials:

- Google Analytics API
- Google Sheets
- OpenAI API
- Email (SMTP)
- WhatsApp API
- ClickUp API

### 2. Replace Default Values

Update the workflow with:

- Your Google Analytics IDs
- Your Google Sheet tabs
- Your SMTP credentials, sender, and recipients
- Your OpenAI API key
- Your WhatsApp API credentials
- Your ClickUp API keys

### 3. Customize the Email Template

Modify the subject, message body, or formatting style based on your reporting standards.

### 4. Adjust the Trigger

You may choose:

- Manual trigger
- Cron trigger for daily/weekly reports
- Webhook trigger integrated with your system

## Detailed Process Flow

### Schedule Trigger

- **Node type**: Trigger node
- **Purpose**: Automates the start of the workflow.
- **Details**: Runs every day (or every hour if real-time monitoring is needed). Eliminates manual data collection and ensures consistent reporting.
### Analytics Reports

- **Node type**: Google Analytics node
- **Purpose**: Fetches website performance data.
- **Metrics include**: users, sessions, page views.

### Combine the Data

- **Node type**: Merge node (Append)
- **Purpose**: Combines today's and yesterday's datasets into a single item for comparison.
- **Details**: Prepares data for calculating percentage changes and maintains the proper structure for later nodes.

### Calculate Percent Changes

- **Node type**: Function node
- **Purpose**: Computes day-over-day percentage changes for users, sessions, and page views.
- **Details**: Formula: `((today - yesterday) / yesterday) * 100`. Handles increases and decreases correctly. Outputs values used for trend indicators and alerts.

### Generate AI Summary

- **Node type**: OpenAI node
- **Purpose**: Produces human-readable, professional insights about the daily analytics.
- **Details**: Summarizes key changes, trends, and recommendations, and provides context such as low-traffic warnings. The text output is used in emails, WhatsApp messages, and ClickUp tasks.

### Send Email or WhatsApp to the Marketing Team

- **Node type**: Email / WhatsApp node
- **Purpose**: Sends daily alert or report messages via email or WhatsApp.
- **Details**: Includes formatted metrics and the AI summary. The subject and body clearly indicate trends and recommendations.

## Workflow Benefits

- Fully automated daily GA reporting
- AI-generated summaries for clear insights
- Alerts only triggered when necessary
- Historical logging for trends and dashboards
- Actionable tasks automatically created in ClickUp
- Multi-channel delivery via email and WhatsApp
- Handles low-traffic scenarios gracefully
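The Calculate Percent Changes Function node can be sketched with the formula stated above. The -20% cut-off used for the `lowTraffic` flag is an illustrative assumption; the formula itself comes from the description.

```javascript
// Sketch of the "Calculate Percent Changes" Function node, using the
// formula ((today - yesterday) / yesterday) * 100 from the description.
// The lowTraffic threshold (-20% in users) is an assumption.
function percentChange(today, yesterday) {
  if (yesterday === 0) return today === 0 ? 0 : 100; // avoid divide-by-zero
  return ((today - yesterday) / yesterday) * 100;
}

function buildComparison(todayData, yesterdayData) {
  const metrics = ['users', 'sessions', 'pageViews'];
  const changes = {};
  for (const m of metrics) {
    changes[m] = Math.round(percentChange(todayData[m], yesterdayData[m]) * 10) / 10;
  }
  // Flag the day as low traffic when users dropped sharply.
  return { ...changes, trend: changes.users < -20 ? 'lowTraffic' : 'normal' };
}
```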
by Oneclick AI Squad
A secure, scalable enterprise AI orchestration layer built on the Model Context Protocol (MCP). This workflow standardizes tool access across all business systems, enforces permission-based data handling, applies contextual reasoning via Claude AI, and provides a single governance plane for multi-agent AI deployments.

## How it works

1. **Receive AI Agent Request**: A unified MCP webhook accepts tool or context requests from any agent.
2. **Enterprise Auth & RBAC**: Validates the JWT, resolves role-based access controls, and enforces tenant isolation.
3. **Context Assembly**: Builds the full enterprise context: user profile, org policies, active sessions, prior tool calls.
4. **Claude AI Orchestration**: Reasons over the context to select the optimal tool chain, validate intent, and plan execution.
5. **Policy Enforcement Engine**: Applies data classification, DLP rules, and geo/time-based restrictions.
6. **Multi-System Tool Dispatch**: Routes to CRM, ERP, HRMS, data warehouse, or custom APIs in parallel.
7. **Response Aggregation**: Merges multi-tool results and applies post-processing and redaction rules.
8. **Compliance Logging**: SOC 2 / ISO 27001-ready audit trail with data lineage tracking.
9. **Return Enriched Context**: Delivers an MCP-compliant response with a reasoning trace back to the agent.

## Setup Steps

1. Import the workflow into n8n.
2. Configure credentials:
   - **Anthropic API**: Claude AI for orchestration and contextual reasoning
   - **Google Sheets**: RBAC policy store, session registry, audit log
   - **SMTP / Slack**: Security and compliance notifications
   - **JWT Secret**: For enterprise token validation
3. Populate the RBAC policy sheet with roles, permissions, and data classifications.
4. Configure your enterprise system endpoints in the tool dispatch nodes.
5. Set your tenant IDs and org-level data policies.
6. Activate the workflow and register the webhook URL with your AI agent platform.

## Sample Enterprise MCP Request

    {
      "mcpVersion": "1.1",
      "agentId": "sales-agent-prod-007",
      "jwtToken": "eyJhbGciOiJIUzI1NiJ9...",
      "tenantId": "ORG-ACME-001",
      "userId": "john.doe@acme.com",
      "userRole": "sales_manager",
      "toolRequests": [
        { "toolName": "crm.get_pipeline", "parameters": { "region": "APAC" } },
        { "toolName": "erp.get_inventory", "parameters": { "sku": "PROD-001" } }
      ],
      "agentGoal": "Prepare a quarterly sales brief for the APAC team meeting",
      "dataClassification": "INTERNAL",
      "sessionId": "sess-xyz-001"
    }

## Enterprise Features

- **Multi-tenant isolation**: strict org boundary enforcement
- **RBAC + ABAC**: role- and attribute-based access control per tool per data class
- **Data Loss Prevention (DLP)**: redacts PII/secrets before returning to the agent
- **Contextual AI reasoning**: Claude plans the optimal tool chain for agent goals
- **SOC 2 / ISO 27001** audit trail with data lineage
- **Geo & time-based policy**: restrict tool access by region or business hours

## Explore More

LinkedIn & social automation: contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.