by Oneclick AI Squad
This workflow monitors active construction projects in real time, ingests weather forecasts, supplier delivery statuses, and crew/resource availability, then uses Claude AI to predict delay risk, estimate schedule impact, and generate mitigation playbooks for project managers.

## How it works

1. **Trigger** — Webhook (on-demand) or daily schedule kick-off
2. **Load Active Projects** — Pulls the project list from your PM system (Procore / Airtable / Sheets)
3. **Fetch Weather Forecast** — 7-day forecast for each project site location
4. **Fetch Supplier Status** — Checks open purchase orders and delivery ETAs
5. **Fetch Resource Availability** — Crew headcount, equipment, subcontractor status
6. **Combine Risk Data** — Merges all data streams per project
7. **AI Delay Prediction** — Claude AI scores delay probability and generates a mitigation plan
8. **Severity Routing** — Routes CRITICAL/HIGH risk projects to the immediate alert path
9. **Notify Project Managers** — Slack alert with risk summary and action items
10. **Update PM Dashboard** — Writes the prediction back to Airtable / Google Sheets
11. **Create Risk Ticket** — Opens a Jira / Linear issue for HIGH+ risk projects
12. **Send Daily Briefing** — Email digest of all at-risk projects

## Setup Steps

1. Import the workflow into n8n
2. Configure credentials:
   - **Anthropic API** — Claude AI for delay prediction
   - **OpenWeatherMap API** — Site weather forecasts
   - **Airtable / Google Sheets** — Project & resource data
   - **Procore API** — Schedule and RFI data (optional)
   - **Slack OAuth** — Project manager alerts
   - **Jira API** — Risk issue tracking
   - **SendGrid / SMTP** — Daily email briefing
3. Set your Airtable base ID and table names
4. Configure Slack channel IDs per severity level
5. Set your risk threshold (default: 60%) in the routing node
6. Activate the workflow

## Sample Webhook Payload

```json
{
  "projectId": "PROJ-2025-042",
  "projectName": "Riverside Commercial Tower",
  "siteLocation": { "lat": -33.8688, "lon": 151.2093 },
  "plannedEndDate": "2025-11-15",
  "currentPhase": "Structure",
  "forceRefresh": true
}
```

## AI Prediction Criteria (Claude)

- **Weather Risk** — Rain days, wind, temperature extremes blocking site work
- **Supplier Risk** — Lead-time slippage, back-orders, sole-source dependencies
- **Resource Risk** — Labour shortages, equipment breakdown, subcontractor delays
- **Schedule Slack** — Float remaining vs. risk exposure
- **Phase Complexity** — Current phase sensitivity to external delays
- **Historical Patterns** — Similar project delay patterns

## Features

- Multi-source real-time risk ingestion
- AI-powered delay probability scoring (0–100%)
- Automated severity routing and escalation
- Mitigation playbook generation per risk type
- Google Sheets / Airtable dashboard sync
- Daily briefing email and Slack digest

## Explore More Automation

Contact us to design AI-powered lead nurturing, content engagement, and multi-platform reply workflows tailored to your growth strategy.
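As an illustration of how the severity routing could map the 0–100% delay probability onto alert paths: only the 60% default threshold is documented by this template, so the other bands and labels below are assumptions you would adapt in your routing node.

```javascript
// Hypothetical severity mapping for the routing node. Only the 60% default
// threshold comes from the template; the CRITICAL/MEDIUM bands are illustrative.
function routeSeverity(delayProbability, threshold = 60) {
  if (delayProbability >= 85) return { severity: 'CRITICAL', escalate: true };
  if (delayProbability >= threshold) return { severity: 'HIGH', escalate: true };
  if (delayProbability >= 30) return { severity: 'MEDIUM', escalate: false };
  return { severity: 'LOW', escalate: false };
}
```

Projects flagged `escalate: true` would follow the immediate alert path (Slack + risk ticket), while the rest only appear in the daily briefing.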
by Cheng Siong Chin
## How It Works

This solution centralizes communication data from Slack, Microsoft Teams, Gmail, and GitHub into a unified AI-powered analysis and documentation workflow for teams managing distributed knowledge. Manual aggregation across multiple tools is time-consuming and leads to information silos that obscure key decisions and context. By automating secure data collection and normalization, the workflow enables AI models to analyze conversations; extract decisions, action items, and key themes; and convert these insights into continuously updated documentation such as design notes and knowledge base articles. This improves visibility, preserves organizational knowledge, and supports more effective collaboration and decision-making.

## Setup Steps

- **Connect credentials:** Slack App API, Microsoft Teams credentials, Gmail OAuth, GitHub Personal Access Token, Anthropic API key
- **Configure monitoring parameters:** Specify channels, repositories, and email labels to track
- **Set schedule triggers**

## Prerequisites

Slack workspace admin, Teams account, Gmail account, GitHub repository access, Anthropic API subscription, Notion workspace, n8n self-hosted or cloud instance.

## Use Cases

Marketing teams aggregating customer feedback across channels; documentation teams collecting technical updates.

## Customization

Modify source integrations by adding or removing trigger nodes. Adjust the AI prompts in the Anthropic node for different analysis types.

## Benefits

Saves 5+ hours weekly on manual data collection. Ensures no communication is missed across platforms.
by Shun Nakayama
This workflow allows you to complete the entire process of creating and publishing detailed Instagram Carousels—from research to posting—without ever leaving Slack. It leverages Nano Banana Pro, a state-of-the-art image generation model capable of rendering perfect text, to create professional "consultant-style" slides that AI previously struggled with.

## How it works

1. **Start in Slack:** You trigger the workflow by entering a topic, ensuring the entire process starts in Slack.
2. **Research (AI Agent):** An AI agent searches the web for deep insights on the topic.
3. **Drafting (AI Agent):** Structures the research into a carousel format designed for engagement.
4. **Review in Slack:** The draft is sent to Slack as a formatted message. You approve it with a single click.
5. **Image Generation:** Upon approval, Nano Banana Pro generates professional infographic-style images with legible, high-density text.
6. **Final Review in Slack:** The created images and caption are sent back to Slack.
7. **Publish from Slack:** One final approval in Slack automatically publishes the Carousel to Instagram.

## Setup steps

1. **Configure Credentials:**
   - **OpenAI API:** Required for the Research and Drafting agents (GPT-4o/GPT-5 recommended).
   - **Slack API:** Required for notifications and approval buttons.
   - **Kie.ai (Nano Banana Pro):** Required for high-quality text-in-image generation.
   - **Facebook Graph API:** Required for publishing to Instagram.
2. **Set IDs:**
   - Open the "Slack Approval" nodes and set your Channel ID.
   - Open the "Instagram" nodes and set your Instagram Business Account ID.
3. **Customize Prompts (Optional):** Adjust the system prompts in the AI nodes to match your brand's tone of voice.

## Requirements

- **n8n version**: 1.0+ (AI nodes required)
- **Kie.ai Account**: For using the Nano Banana Pro model (excellent at rendering text).
- **Slack Workspace**: For the Human-in-the-loop approval process.
by masahiro hanawa
# Translate and localize content using DeepL and GPT-4o-mini

Managing high-quality translations across multiple languages often requires more than just machine translation; it requires cultural context and quality assurance. This workflow automates the entire pipeline, from initial translation to AI-driven quality scoring and cultural localization.

## Who is this for?

- **Content Teams:** To automate the first draft and review process for blog posts or documentation.
- **Marketing Agencies:** To localize campaign copy for international markets quickly.
- **Product Managers:** To manage UI/UX copy across different regions with consistent glossary support.

## How it works

1. **Content Intake:** A Webhook receives the source text and a list of target languages.
2. **Language Detection & Validation:** The workflow identifies the source language and validates the requested target codes.
3. **Parallel Processing:** Using the Split Out node, the workflow processes each target language simultaneously.
4. **DeepL Translation:** High-quality neural machine translation is performed for each language.
5. **AI Quality Review:** GPT-4o-mini acts as a professional linguist, scoring the translation on accuracy, fluency, and style, and flagging any issues.
6. **Cultural Localization:** A specialized node applies region-specific formatting for dates and currencies.
7. **Aggregation & Reporting:** All results are unified, logged to Google Sheets, emailed via Gmail, and returned as a JSON response to the initial requester.

## How to set up

1. **Credentials:** Connect your DeepL API, OpenAI, Google Sheets, and Gmail accounts.
2. **Google Sheets:** Create a sheet with headers: Job ID, Source Text, Languages, Avg Quality, and Completed. Paste the Sheet ID into the Google Sheets node.
3. **Webhook:** Use the production URL in your CMS or app to trigger the workflow with a POST request.

## Requirements

- **DeepL API Key** (Free or Pro)
- **OpenAI API Key** (for GPT-4o-mini)
- **Google Account** (for Sheets and Gmail)

## How to customize

- **Adjust AI Rubric:** Modify the "AI Quality Review" prompt to focus on specific brand voice guidelines or technical terminology.
- **Glossary Support:** Update the DeepL node to include specific Glossary IDs for industry-specific jargon.
- **Localization Rules:** Add more regions or specific formatting rules (like measurement conversions) in the "Apply Localization" Code node.
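The region-specific date and currency formatting described above can be done with the standard `Intl` API in a Code node. The exact rules in the template's "Apply Localization" node are not shown, so this is only a minimal sketch of the idea:

```javascript
// Minimal sketch of locale-aware formatting, as an "Apply Localization"
// Code node might do it. Locale and currency codes are inputs you control.
function localize(amount, date, locale, currency) {
  const money = new Intl.NumberFormat(locale, { style: 'currency', currency }).format(amount);
  const when = new Intl.DateTimeFormat(locale, { dateStyle: 'long' }).format(date);
  return { money, when };
}
```

For example, `localize(1234.5, new Date(2025, 10, 15), 'en-US', 'USD')` formats the amount as `$1,234.50` and the date in long US style, while `de-DE`/`EUR` would yield German grouping and date order.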
by Chris Mielke
This n8n template automates email labeling using AI-enhanced classification and intelligent routing. Gmail users report spending significant time manually sorting email, so this tool helps alleviate that burden.

## How it works

1. A Gmail Trigger monitors unread emails every 2 minutes
2. Once an email arrives, the content is extracted with HTML cleaning
3. An AI Agent (the node is set for ChatGPT-4) performs classification & entity extraction
4. A Structured Output Parser parses the AI output into JSON
5. A 9-way category routing system categorizes the email (Inquiry, Support, Newsletter, Marketing, Personal, Urgent, Spam, Invoice, Meeting)
6. Gmail auto-labeling is applied for each category
7. Google Sheets is used for logging (a main log that includes all emails, plus an error log of emails that cannot be classified)
8. Slack alerts are generated for high-priority/urgent emails
9. Error handling writes to the separate error log in Google Sheets

## How to use

- Set up credentials for Gmail, your LLM (ChatGPT, Gemini, etc.), Google Sheets, and Slack
- Modify the categories as needed per user preference

## Requirements

- Gmail
- Any LLM like ChatGPT or Google Gemini
- Google Drive with Google Sheets (optional, for logging and error handling)
- Slack (optional, for high-priority messages)
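To make the 9-way routing robust, the parsed JSON should be validated against the allowed category list before branching, so unparseable results land in the error log instead of crashing a branch. The field name `category` below is an assumption; match it to your Structured Output Parser schema:

```javascript
// Guard between the Structured Output Parser and the 9-way router.
// The `category` field name is an assumption — align it with your schema.
const CATEGORIES = ['Inquiry', 'Support', 'Newsletter', 'Marketing',
                    'Personal', 'Urgent', 'Spam', 'Invoice', 'Meeting'];

function validateClassification(output) {
  if (!output || !CATEGORIES.includes(output.category)) {
    // Route to the error log branch
    return { ok: false, reason: `Unknown category: ${output && output.category}` };
  }
  return { ok: true, category: output.category, urgent: output.category === 'Urgent' };
}
```

Items with `ok: false` go to the Google Sheets error log; `urgent: true` items additionally trigger the Slack alert branch.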
by Cheng Siong Chin
## How It Works

This workflow automates hospital emergency department triage by intelligently processing patient intake information through multiple AI-powered assessment stages. Designed for emergency departments, urgent care centers, and hospital admission teams, it solves the critical challenge of rapid, accurate patient prioritization during high-volume periods. The system captures initial patient data through a chat interface, uses specialized AI agents to analyze medical history and current symptoms, validates business rules for priority assignment, performs stability checks, calculates priority scores, and determines required actions. It then routes patients to appropriate care pathways while sending notifications to relevant medical teams and logging all interactions for audit compliance. The workflow leverages OpenAI models and structured JSON parsing to ensure consistent, protocol-driven triage decisions.

## Setup Steps

1. Configure OpenAI credentials with an API key for AI agent access
2. Set up the Hospital Triage Agent node with your clinical triage protocols
3. Configure the Patient Consent and Structured JSON checkers with validation rules
4. Connect notification endpoints for the Execute Appointment and Send Notification nodes
5. Set up audit logging system integration in the Log Interactions node
6. Customize business rule validation parameters for your facility's triage categories

## Prerequisites

Active OpenAI API account; hospital system API access for appointments and notifications

## Use Cases

Emergency department patient intake, urgent care prioritization, virtual triage for telehealth

## Customization

Modify triage agent prompts to reflect your clinical protocols; adjust priority scoring algorithms

## Benefits

Accelerates triage processing by 60%; ensures standardized clinical assessment
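To illustrate the "calculates priority scores" stage: the template's actual algorithm is not published, so the factor names and weights below are purely hypothetical placeholders that your clinical team would replace with protocol-approved rules.

```javascript
// Purely illustrative priority score — NOT a clinical algorithm.
// Factor names and weights are assumptions to be replaced by your protocols.
function priorityScore({ vitalsUnstable, painLevel, symptomSeverity }) {
  let score = 0;
  if (vitalsUnstable) score += 50;            // stability check dominates
  score += Math.min(painLevel, 10) * 2;       // 0–10 scale → up to 20 points
  score += Math.min(symptomSeverity, 10) * 3; // 0–10 scale → up to 30 points
  return Math.min(score, 100);
}
```

A structured scoring function like this keeps the business-rule validation step deterministic, with the AI agents supplying the inputs rather than the final priority.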
by Dataki
> ⚠️ **Disclaimer:** I am not a cybersecurity expert. This workflow was built through research and with the assistance of an LLM (Claude Opus 4.6). While it implements well-established security patterns (HMAC-SHA256, timing-safe comparison, replay protection, strict payload validation), please review the logic carefully and ensure it meets your own security requirements before deploying it in production.

## Who is this for?

This template is for anyone exposing an n8n workflow via webhook and wanting to ensure that only authenticated, untampered requests are processed.

## What problem does this solve?

Public webhooks are vulnerable by default. Without proper verification, anyone who discovers your URL can send forged requests, replay old ones, or inject unexpected parameters. While n8n's built-in Webhook authentication modes (Basic Auth, Header Auth, JWT) verify who is calling, they don't verify that the payload hasn't been altered, that the request is fresh, or that the data structure matches what you expect.

This template adds those missing layers:

- **Authentication** — Verifies the sender's identity through HMAC-SHA256 signature validation
- **Integrity** — Ensures the payload hasn't been modified by signing the raw body byte-for-byte
- **Replay protection** — Rejects requests with expired timestamps (configurable, default: 5 minutes)
- **Payload sanitization** — Strict whitelist filtering blocks unauthorized fields before they reach your logic

## What this workflow does

The workflow chains six security layers before any business logic runs:

1. **Webhook** receives the request with Header Auth + Raw Body enabled to preserve the original payload
2. **Extract rawBody** (Code node) decodes the binary into a UTF-8 string and extracts the security headers
3. **Crypto** computes the HMAC-SHA256 signature of `{timestamp}.{rawBody}` using your HMAC secret
4. **Timing-Safe HMAC Check** (Code node) validates the timestamp freshness and compares signatures using `crypto.timingSafeEqual()`
5. **Strict Payload Validation** (Code node) parses the JSON, checks required fields, and rejects any unexpected keys
6. **AI Agent** processes the prompt only after all checks pass

Invalid requests are immediately rejected with 403 Forbidden (signature/timestamp failure) or 400 Bad Request (payload validation failure), with no response body to avoid leaking internal logic.

## Example use case

The included example protects an AI Agent endpoint that expects a simple `{"prompt": "..."}` payload. But this is just a starting point — replace the AI Agent with any node and adapt the payload validation to your own schema. Common adaptations:

- CRM or SaaS event callbacks
- CRUD operations on a database
- Third-party API integrations

## Setup

Prerequisites:

- An n8n instance (Cloud or Self-hosted)
- A shared HMAC secret between the sender and this workflow — keep it safe and never expose it in workflow logs or execution data

## Going further

This workflow is a solid starting point — it's more secure than a raw exposed webhook. However, it focuses on application-level security (authentication, integrity, replay protection, payload sanitization). For a production-grade setup, consider adding layers at the infrastructure level:

- **Rate limiting**
- **IP whitelisting**
- **Reverse proxy hardening**
by Dr. Firas
# 💥 Automate YouTube Video Creation and Publishing with Blotato

## Who is this for?

This workflow is designed for YouTube creators, content marketers, automation builders, and agencies who want to repurpose existing YouTube videos into new original content and automate the publishing process. It is especially useful for users already working with Telegram, Google Sheets, OpenAI, and Blotato.

## What problem is this workflow solving? / Use case

Creating YouTube content at scale is time-consuming: extracting ideas from existing videos, rewriting scripts, generating SEO metadata, tracking content, and publishing videos all require manual work across multiple tools. This workflow solves that by:

- Automating content analysis and rewriting
- Centralizing tracking and approvals in Google Sheets
- Automating YouTube publishing via Blotato

## What this workflow does

This workflow automates the full YouTube video repurposing and publishing pipeline:

1. Receives a YouTube video URL and instructions via Telegram
2. Logs the request in Google Sheets
3. Extracts the YouTube video ID
4. Retrieves the video transcript via RapidAPI
5. Cleans and normalizes the transcript
6. Generates a new original video script using OpenAI
7. Generates SEO metadata (title, description, tags) in strict JSON format
8. Updates Google Sheets with the generated content
9. Waits for approval (status = ready)
10. Uploads the final video to Blotato
11. Publishes the video on YouTube
12. Updates the status to publish in Google Sheets

## Setup

To use this workflow, you need to configure the following services:

**Google Services**

- Enable the Google Sheets API in Google Cloud Console
- Create OAuth2 credentials
- Add credentials in n8n: Google Sheets OAuth2 API (credential name: Google Sheets account)
- My Google Sheets: copy

**RapidAPI (YouTube Transcript)**

- Sign up at RapidAPI
- Subscribe to "YouTube Video Summarizer GPT AI"
- Get your API key
- Update it in the Workflow Configuration node

**Blotato (Video Publishing)**

- Sign up at Blotato
- Get API credentials
- Add credentials in n8n: Blotato API (credential name: Blotato account)
- Connect your YouTube account via Blotato

## How to customize this workflow to your needs

You can easily adapt this workflow by:

- Changing the output language (output_lang) in the configuration node
- Modifying the OpenAI prompts to match your tone or niche
- Adjusting Google Sheets columns or approval logic
- Replacing YouTube with another platform supported by Blotato
- Extending the workflow to generate shorts, reels, or multi-platform posts

The workflow is modular and designed to be extended without breaking the core logic.

🎥 Watch This Tutorial

👋 Need help or want to customize this? Contact me for consulting and support: 📩 LinkedIn · 📺 YouTube: @DRFIRASS · 🚀 Workshops: Mes Ateliers n8n · 📄 Documentation: Notion Guide
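Step 3 of the pipeline (extracting the YouTube video ID from the Telegram message) is typically a small Code-node regex. The template doesn't show its exact expression, so this is a sketch covering the common URL shapes:

```javascript
// Sketch of the "extract YouTube video ID" step. Covers watch, youtu.be,
// shorts, and embed URLs; video IDs are 11 URL-safe characters.
function extractVideoId(url) {
  const m = url.match(
    /(?:youtube\.com\/(?:watch\?v=|shorts\/|embed\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
  );
  return m ? m[1] : null;
}
```

Returning `null` for unrecognized input lets the workflow reply in Telegram with a "please send a valid YouTube URL" message instead of failing downstream.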
by Stéphane Bordas
## Who is this for?

This workflow is for healthcare professionals, consultants, coaches, and service businesses who want to completely automate their appointment booking system via WhatsApp — without manual intervention for reservations, availability checks, or cancellation management.

## What problem is this workflow solving? / Use case

Managing appointments manually via WhatsApp is extremely time-consuming: checking availability, confirmations, rescheduling, cancellations. This workflow automates the entire process — from initial request to final confirmation — allowing your clients to book, modify, or cancel appointments 24/7, in natural language, directly via WhatsApp.

## What this workflow does

- Processes multi-modal messages (text, audio, images) from the WhatsApp Business API
- Detects the message type and routes it to the appropriate processing (Whisper for audio, GPT-4 Vision for images)
- Uses an AI Agent with 5 Cal.com tools to manage the complete appointment lifecycle
- Checks real-time availability in your Cal.com calendar
- Books appointments autonomously without human intervention
- Handles cancellation and rescheduling requests
- Maintains conversation context with Simple Memory for natural exchanges
- Formats responses with Unicode bold for better WhatsApp readability
- Sends automated replies directly to the client

The result: a fully automated 24/7 appointment management system via WhatsApp.

## Setup

### 1. WhatsApp Business API

- Connect your WhatsApp Business API account in n8n.
- Set up the webhook in the Facebook Developer Console (Webhook → Messages → Subscribe).
- Add your phone_number_id and access token credentials.

### 2. Cal.com

- Create a Cal.com account and configure your calendar.
- Generate an API Key from Cal.com settings.
- Set up your event types (duration, availability, pricing).
- Add your Cal.com API credentials in n8n.

### 3. OpenAI

- Get an OpenAI API key (for GPT-4, Whisper, and Vision).
- Add your OpenAI credentials in n8n.

The workflow uses GPT-4 for conversation, Whisper for audio transcription, and GPT-4 Vision for image analysis.

### 4. Customize the AI Agent

- Edit the System Message to define your agent's personality, tone, and business context.
- Adjust the timezone in tool parameters (default: Europe/Paris).
- Configure event type IDs for different appointment types.

### 5. Test & activate

- Test with different message types (text, audio, image) from WhatsApp.
- Verify appointments are created correctly in Cal.com.
- Switch to production mode and activate the workflow.

This workflow helps you build a fully autonomous AI booking assistant, transforming WhatsApp into a 24/7 appointment management system.

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube
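The "Unicode bold" formatting mentioned above typically works by remapping ASCII letters and digits into the Mathematical Sans-Serif Bold block, since WhatsApp renders those glyphs as bold-looking text. The workflow's own implementation may differ (WhatsApp also supports `*asterisk*` bold natively); this is just one way to do it:

```javascript
// Map ASCII letters/digits to the Mathematical Sans-Serif Bold Unicode block.
// One possible implementation of the Unicode-bold formatting — an assumption,
// not necessarily what the workflow's node does.
function toUnicodeBold(text) {
  return [...text].map((ch) => {
    const c = ch.codePointAt(0);
    if (c >= 65 && c <= 90) return String.fromCodePoint(0x1d5d4 + c - 65);  // A–Z
    if (c >= 97 && c <= 122) return String.fromCodePoint(0x1d5ee + c - 97); // a–z
    if (c >= 48 && c <= 57) return String.fromCodePoint(0x1d7ec + c - 48);  // 0–9
    return ch; // punctuation, spaces, accents pass through unchanged
  }).join('');
}
```

A caveat of this approach: screen readers and search treat these glyphs as math symbols, so it is best reserved for short headings inside a reply.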
by 長谷 真宏
## Who is this for

This workflow is perfect for busy professionals, consultants, and anyone who frequently travels between meetings. If you want to make the most of your free time between appointments and discover great nearby spots without manual searching, this template is for you.

## What it does

This workflow automatically monitors your Google Calendar and identifies gaps between appointments. When it detects sufficient free time (configurable, default 30+ minutes), it calculates travel time to your next destination, checks the weather, and uses AI to recommend the top 3 spots to visit during your break. Recommendations are weather-aware: indoor spots like cafés in malls or stations for rainy days, and outdoor terraces or open-air venues for nice weather.

## How it works

1. **Schedule Trigger** — Runs every 30 minutes to check your calendar
2. **Fetch Data** — Gets your next calendar event and user preferences from Notion
3. **Calculate Gap Time** — Determines available free time by subtracting travel time (via Google Maps) from the time until your next appointment
4. **Weather Check** — Gets the current weather at your destination using OpenWeatherMap
5. **Smart Routing** — Routes to indoor or outdoor spot search based on weather conditions
6. **AI Recommendations** — GPT-4.1-mini analyzes spots and generates personalized top 3 recommendations
7. **Slack Notification** — Sends a friendly message with recommendations to your Slack channel

## Set up steps

1. **Configure API Keys** — Add your Google Maps, Google Places, and OpenWeatherMap API keys in the "Set Configuration" node
2. **Connect Google Calendar** — Set up an OAuth connection and select your calendar
3. **Set up Notion** — Create a database for user preferences and add the database ID
4. **Connect Slack** — Set up OAuth and specify your notification channel
5. **Connect OpenAI** — Add your OpenAI API credentials
6. **Customize** — Adjust currentLocation and minGapTimeMinutes to your needs

## Requirements

- Google Cloud account with the Maps and Places APIs enabled
- OpenWeatherMap API key (free tier available)
- Notion account with a preferences database
- Slack workspace with bot permissions
- OpenAI API key

## How to customize

- **Change trigger frequency:** Modify the Schedule Trigger interval
- **Adjust minimum gap time:** Change minGapTimeMinutes in the configuration node
- **Modify search radius:** Edit the radius parameter in the Places API calls (default: 1000m)
- **Customize spot types:** Modify the type and keyword parameters in the HTTP Request nodes
- **Change AI model:** Switch to a different OpenAI model in the AI node
- **Localize language:** Update the AI prompt to generate responses in your preferred language
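The "Calculate Gap Time" step described above reduces to simple date arithmetic: free time is the minutes until the next event minus the Google Maps travel estimate, compared against `minGapTimeMinutes`. A minimal sketch:

```javascript
// Sketch of the gap-time calculation: usable free time = (time until next
// event) - (travel time), gated by the configured minGapTimeMinutes.
function calculateGap(now, nextEventStart, travelMinutes, minGapTimeMinutes = 30) {
  const untilNextMin = (nextEventStart.getTime() - now.getTime()) / 60000;
  const gapMinutes = untilNextMin - travelMinutes;
  return { gapMinutes, hasUsableGap: gapMinutes >= minGapTimeMinutes };
}
```

When `hasUsableGap` is false the run ends quietly; otherwise the workflow proceeds to the weather check and spot search.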
by Lee Lin
## How It Works

### Workflow A (Top Branch)

**1. The Market Intelligence:**

- **Patrols the Market:** Runs hourly to scrape competitor rates for future days.
- **Gathers Intel:** If prices spike, it instantly checks event announcements to see if a major event is driving demand.
- **Crunches Numbers:** Calculates the exact price gap and filters out noise.

**2. The Revenue Manager:**

- **Sets Strategy:** The AI Agent reviews the price gaps, competitor moves, and event signals.
- **Reports:** Writes a strategic Executive Summary and sends it to your WhatsApp.

### Workflow B (Bottom Branch)

**3. The Consultant:**

- **Recall:** When you ask a question via WhatsApp, the bot retrieves the saved analysis, historical rates, and event schedule.
- **Answer:** It acts as an on-demand analyst, conducting further analysis to give an informed answer to your questions.

## Setup Steps

1. **Config:** Add your hotel + competitor hotels (IDs/names) in the Config node.
2. **Monitor Window:** Set how far ahead you want to monitor (e.g., daysAhead = 30) in the Config node.
3. **Sensitivity:** Set how sensitive alerts should be (e.g., alert only if a competitor moves > 10%) in the Significant Competitor Change node.
4. **Connect Credentials:**
   - Amadeus (to fetch hotel prices)
   - WhatsApp (to send alerts)
   - Postgres/SQL (to store price snapshots, history, summaries)
   - OpenAI (for the AI Agents)
5. **Event Source:** Update the Fetch VCC nodes to scrape your local convention center or event site.
6. **Run a test:** Trigger Workflow A manually and confirm you receive a WhatsApp alert. Reply to that WhatsApp message to test Workflow B (Q&A).

## Use Cases & Benefits

- **For Revenue Managers:** Automate the "rate shop" routine and catch competitor moves without opening a spreadsheet.
- **For Sales & Marketing Teams:** Go beyond raw data by pairing "what changed" with "why it changed" instantly.
- **For Hotel Leadership:** Perfect for GMs and division leaders who need instant, decision-ready alerts via WhatsApp.

- ⚡ **Zero-Touch Efficiency:** Eliminates hours of manual searching by automating rate checks 3x daily.
- 🧠 **Contextual Intelligence:** Tracks the price AND explains why it moved by cross-referencing local events.
- 🤖 **Actionable Strategy:** The AI doesn't just report numbers; it recommends specific pricing tactics.
- 📉 **Long-Term Vision:** Builds a permanent database of rate history, enabling the AI to answer complex trend questions over time.

📬 **Want to Customize This?** leelin.business@gmail.com
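The sensitivity filter configured in step 3 boils down to a percentage-change comparison against the last stored snapshot. A minimal sketch of what the Significant Competitor Change node checks, with illustrative field names:

```javascript
// Sketch of the "Significant Competitor Change" filter: keep only rates that
// moved more than thresholdPct (default 10%) vs. the previous snapshot.
// Field names (hotel, previousRate, currentRate) are illustrative.
function significantChanges(snapshots, thresholdPct = 10) {
  return snapshots
    .map(({ hotel, previousRate, currentRate }) => ({
      hotel,
      changePct: ((currentRate - previousRate) / previousRate) * 100,
    }))
    .filter((s) => Math.abs(s.changePct) > thresholdPct);
}
```

Only the surviving items flow on to the event lookup and the AI Revenue Manager, which is what keeps noise out of your WhatsApp alerts.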
by Ranjan Dailata
This n8n workflow automates backlink monitoring, analysis, and AI-driven interpretation for any domain or URL. It combines backlink intelligence from SE Ranking with structured reasoning and summarization powered by OpenAI GPT-4.1-mini. Instead of manually reviewing backlink reports, this workflow transforms raw backlink metrics into clear, human-readable SEO insights and persists them to multiple storage layers for reporting and tracking.

## Who is this for?

This workflow is ideal for:

- SEO professionals and technical SEO teams
- Digital marketing agencies managing multiple domains
- Growth and content teams tracking backlink quality
- Developers building SEO intelligence pipelines
- Data teams using n8n for enrichment and reporting

## What this workflow does

1. Accepts a backlink query (domain, host, or URL)
2. Uses multiple SE Ranking Backlinks API endpoints to retrieve:
   - Backlink summary metrics
   - Referring domains, IPs, and subnets
   - Authority and backlink quality indicators
   - Raw backlink lists
3. Routes the data through an AI Agent powered by GPT-4.1-mini that:
   - Selects the appropriate backlink dataset automatically
   - Normalizes noisy SEO data
   - Generates structured summaries without subjective opinions
   - Produces a clean backlink intelligence summary
4. Persists results to:
   - n8n DataTables
   - Google Sheets
   - CSV / JSON exports

## Setup

If you are new to SE Ranking, please sign up at https://seranking.com

**Prerequisites**

- Active SE Ranking API access
- OpenAI API key with GPT-4.1-mini enabled
- n8n instance (self-hosted or cloud)
- Basic understanding of backlink and authority metrics

**Steps**

1. Import the workflow JSON into n8n
2. Configure credentials:
   - **SE Ranking** — uses HTTP Header Authentication. Set the header value to the word Token, followed by a space, followed by your SE Ranking API key.
   - OpenAI API (GPT-4.1-mini)
   - Google Sheets OAuth (optional, for reporting)
3. Open the Set Input Fields node and define: query (e.g. Backlinks Summary for https://example.com)
4. Verify storage destinations:
   - Google Sheet ID and sheet name
   - n8n DataTable
   - File export nodes (CSV / JSON)
5. Click Execute Workflow

## How to customize this workflow to your needs

You can easily extend or adapt this workflow by:

- Switching the analysis mode (domain, host, or URL)
- Adding historical backlink trend analysis
- Enhancing the AI prompt to generate:
  - Toxic backlink alerts
  - Link-building opportunities
  - Competitor backlink gap analysis
- Replacing storage with:
  - Databases or data warehouses
  - Slack / Email notifications
  - BI dashboards
- Scheduling the workflow for continuous backlink monitoring

## Summary

This n8n template delivers an end-to-end backlink intelligence system, from raw backlink retrieval to AI-powered interpretation and structured storage. By combining SE Ranking's backlink data with OpenAI-driven reasoning, it eliminates manual SEO analysis and enables scalable, repeatable backlink monitoring.
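The header-auth scheme described in the setup (the word Token, a space, then the API key) can be expressed as a small helper if you call the API from a Code node instead of an HTTP Request node. The header shape follows the template's description; any endpoint path you pair it with should come from SE Ranking's API documentation:

```javascript
// Build the HTTP headers for SE Ranking's header authentication as described
// above: "Token <API key>". Endpoint paths are not shown here — take them
// from SE Ranking's API docs.
function buildAuthHeaders(apiKey) {
  return {
    Authorization: `Token ${apiKey}`,
    'Content-Type': 'application/json',
  };
}
```

In the n8n credential UI this corresponds to a Header Auth credential with name `Authorization` and value `Token <your API key>`.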