by ้ท่ฐทใ็ๅฎ
## Who is this for

This workflow is perfect for busy professionals, consultants, and anyone who frequently travels between meetings. If you want to make the most of your free time between appointments and discover great nearby spots without manual searching, this template is for you.

## What it does

This workflow automatically monitors your Google Calendar and identifies gaps between appointments. When it detects sufficient free time (configurable, default 30+ minutes), it calculates travel time to your next destination, checks the weather, and uses AI to recommend the top 3 spots to visit during your break. Recommendations are weather-aware: indoor spots like cafés in malls or stations for rainy days, and outdoor terraces or open-air venues for nice weather.

## How it works

1. **Schedule Trigger** - Runs every 30 minutes to check your calendar
2. **Fetch Data** - Gets your next calendar event and user preferences from Notion
3. **Calculate Gap Time** - Determines available free time by subtracting travel time (via Google Maps) from the time until your next appointment
4. **Weather Check** - Gets current weather at your destination using OpenWeatherMap
5. **Smart Routing** - Routes to the indoor or outdoor spot search based on weather conditions
6. **AI Recommendations** - GPT-4.1-mini analyzes spots and generates a personalized top 3
7. **Slack Notification** - Sends a friendly message with the recommendations to your Slack channel

## Set up steps

1. **Configure API Keys** - Add your Google Maps, Google Places, and OpenWeatherMap API keys in the "Set Configuration" node
2. **Connect Google Calendar** - Set up the OAuth connection and select your calendar
3. **Set up Notion** - Create a database for user preferences and add the database ID
4. **Connect Slack** - Set up OAuth and specify your notification channel
5. **Connect OpenAI** - Add your OpenAI API credentials
6. **Customize** - Adjust currentLocation and minGapTimeMinutes to your needs

## Requirements

- Google Cloud account with the Maps and Places APIs enabled
- OpenWeatherMap API key (free tier available)
- Notion account with a preferences database
- Slack workspace with bot permissions
- OpenAI API key

## How to customize

- **Change trigger frequency:** Modify the Schedule Trigger interval
- **Adjust minimum gap time:** Change minGapTimeMinutes in the configuration node
- **Modify search radius:** Edit the radius parameter in the Places API calls (default: 1000 m)
- **Customize spot types:** Modify the type and keyword parameters in the HTTP Request nodes
- **Change AI model:** Switch to a different OpenAI model in the AI node
- **Localize language:** Update the AI prompt to generate responses in your preferred language
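The gap-time calculation described above can be sketched as an n8n Code-node snippet. This is a minimal illustration; the function and field names here are assumptions, not the template's actual identifiers.

```javascript
// Sketch of the "Calculate Gap Time" step (hypothetical names):
// usable free time = (next event start - now) - travel time to the venue.
function usableGapMinutes(nowIso, nextEventStartIso, travelMinutes) {
  const msUntilNext = new Date(nextEventStartIso) - new Date(nowIso);
  const minutesUntilNext = Math.floor(msUntilNext / 60000);
  return minutesUntilNext - travelMinutes;
}

// Mirrors the configurable minGapTimeMinutes threshold (default 30).
function hasSufficientGap(gapMinutes, minGapTimeMinutes = 30) {
  return gapMinutes >= minGapTimeMinutes;
}
```

For example, a next appointment 60 minutes away with a 15-minute drive leaves a 45-minute usable gap, which passes the default threshold.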
by Lee Lin
## How It Works

**Top Branch (Workflow A)**

1. **The Market Intelligence**
   - **Patrols the Market:** Runs hourly to scrape competitor rates for future days.
   - **Gathers Intel:** If prices spike, it instantly checks event announcements to see whether a major event is driving demand.
   - **Crunches Numbers:** Calculates the exact price gap and filters out noise.
2. **The Revenue Manager**
   - **Sets Strategy:** The AI Agent reviews the price gaps, competitor moves, and event signals.
   - **Reports:** Writes a strategic Executive Summary and sends it to your WhatsApp.

**Bottom Branch (Workflow B)**

3. **The Consultant**
   - **Recall:** When you ask a question via WhatsApp, the bot retrieves the saved analysis, historical rates, and event schedule.
   - **Answer:** It acts as an on-demand analyst, conducting further analysis to give an informed answer to your questions.

## Setup Steps

1. **Config:** Add your hotel and competitor hotels (IDs/names) in the Config node.
2. **Monitor Window:** Set how far ahead you want to monitor (e.g., daysAhead = 30) in the Config node.
3. **Sensitivity:** Set how sensitive alerts should be (e.g., alert only if a competitor moves > 10%) in the Significant Competitor Change node.
4. **Connect Credentials:**
   - Amadeus (to fetch hotel prices)
   - WhatsApp (to send alerts)
   - Postgres/SQL (to store price snapshots, history, and summaries)
   - OpenAI (for the AI Agents)
5. **Event Source:** Update the Fetch VCC nodes to scrape your local convention center or event site.
6. **Run a test:** Trigger Workflow A manually and confirm you receive a WhatsApp alert. Reply to that WhatsApp message to test Workflow B (Q&A).

## Use Cases & Benefits

- **For Revenue Managers:** Automate the "rate shop" routine and catch competitor moves without opening a spreadsheet.
- **For Sales & Marketing Teams:** Go beyond raw data by pairing "what changed" with "why it changed" instantly.
- **For Hotel Leadership:** Perfect for GMs and division leaders who need instant, decision-ready alerts via WhatsApp.
- **Zero-Touch Efficiency:** Eliminates hours of manual searching by automating rate checks 3x daily.
- **Contextual Intelligence:** Tracks price AND explains why it moved by cross-referencing local events.
- **Actionable Strategy:** The AI doesn't just report numbers; it recommends specific pricing tactics.
- **Long-Term Vision:** Builds a permanent database of rate history, enabling the AI to answer complex trend questions over time.

## Want to Customize This?

leelin.business@gmail.com
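The sensitivity check in the Significant Competitor Change node can be sketched as follows. This is an illustrative assumption about its logic (relative percentage change against the last stored snapshot), not the template's exact code.

```javascript
// Sketch of the "Significant Competitor Change" filter (assumed logic):
// flag a competitor rate only when it moved more than `thresholdPct`
// relative to the previously stored snapshot.
function isSignificantChange(previousRate, currentRate, thresholdPct = 10) {
  if (!previousRate) return false; // no baseline snapshot yet
  const changePct = Math.abs((currentRate - previousRate) / previousRate) * 100;
  return changePct > thresholdPct;
}
```

With the default 10% threshold, a move from 100 to 115 (15%) triggers an alert, while 100 to 105 (5%) is filtered out as noise.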
by Cheng Siong Chin
## How It Works

This workflow automates cybersecurity incident detection and response for security operations centers (SOCs) managing constant threat landscapes. Designed for security analysts, IT operations teams, and CISOs, it solves the challenge of manually triaging security alerts, validating threats, and coordinating response actions across multiple systems and stakeholders.

The system schedules continuous security monitoring, generates simulated anomaly data for testing, validates behaviors through AI agents (the Behavior Validator confirms threat patterns, the Governance Agent assesses severity), routes incidents by criticality (low/critical), and orchestrates responses: critical threats trigger automated human reviews, escalation workflows, and Slack alerts; low-priority items receive automated remediation with Google Sheets logging.

By combining AI-powered threat analysis with intelligent routing and multi-channel response coordination, organizations reduce incident response time by 80%, minimize false positives, ensure consistent threat handling, and enable security teams to focus on strategic defense rather than alert fatigue.

## Setup Steps

1. Connect the Schedule Trigger for continuous monitoring
2. Configure SIEM/security data sources
3. Add OpenAI API keys to the Behavior Validator and Governance Agent nodes
4. Define severity thresholds and threat patterns in the agent prompts
5. Link Slack webhooks for critical incident alerts and escalation channels
6. Connect the Google Sheets API for incident logging and compliance tracking

## Prerequisites

SIEM or security monitoring platform access; OpenAI API account

## Use Cases

Intrusion detection response, malware outbreak containment

## Customization

Modify AI prompts for organization-specific threat models; adjust severity scoring algorithms

## Benefits

Reduces incident response time by 80%; minimizes false-positive alert fatigue
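The criticality routing described above can be sketched as a small Code-node function. The severity labels and action names are assumptions for illustration; the template's actual routing lives in its switch/IF nodes.

```javascript
// Sketch of the low/critical router (assumed labels): critical incidents
// go to human review + Slack escalation; everything else is
// auto-remediated and logged to Google Sheets.
function routeIncident(incident) {
  if (incident.severity === "critical") {
    return { path: "escalate", actions: ["human_review", "slack_alert"] };
  }
  return { path: "auto_remediate", actions: ["remediate", "sheets_log"] };
}
```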
by Ranjan Dailata
This n8n workflow automates backlink monitoring, analysis, and AI-driven interpretation for any domain or URL. It combines backlink intelligence from SE Ranking with structured reasoning and summarization powered by OpenAI GPT-4.1-mini. Instead of manually reviewing backlink reports, this workflow transforms raw backlink metrics into clear, human-readable SEO insights and persists them to multiple storage layers for reporting and tracking.

## Who this is for

This workflow is ideal for:

- SEO professionals and technical SEO teams
- Digital marketing agencies managing multiple domains
- Growth and content teams tracking backlink quality
- Developers building SEO intelligence pipelines
- Data teams using n8n for enrichment and reporting

## What this workflow does

- Accepts a backlink query (domain, host, or URL)
- Uses multiple SE Ranking Backlinks API endpoints to retrieve:
  - Backlink summary metrics
  - Referring domains, IPs, and subnets
  - Authority and backlink quality indicators
  - Raw backlink lists
- Routes the data through an AI Agent powered by GPT-4.1-mini that:
  - Selects the appropriate backlink dataset automatically
  - Normalizes noisy SEO data
  - Generates structured summaries without subjective opinions
  - Produces a clean backlink intelligence summary
- Persists results to:
  - n8n DataTables
  - Google Sheets
  - CSV / JSON exports

## Setup

If you are new to SE Ranking, please sign up at https://seranking.com

### Prerequisites

- Active SE Ranking API access
- OpenAI API key with GPT-4.1-mini enabled
- n8n instance (self-hosted or cloud)
- Basic understanding of backlink and authority metrics

### Steps

1. Import the workflow JSON into n8n.
2. Configure credentials:
   - **SE Ranking**, using HTTP Header Authentication. The header value should contain the word Token, followed by a space and your SE Ranking API key.
   - OpenAI API (GPT-4.1-mini)
   - Google Sheets OAuth (optional, for reporting)
3. Open the Set Input Fields node and define: query (e.g. Backlinks Summary for https://example.com)
4. Verify the storage destinations:
   - Google Sheet ID and sheet name
   - n8n DataTable
   - File export nodes (CSV / JSON)
5. Click Execute Workflow.

## How to customize this workflow to your needs

You can easily extend or adapt this workflow by:

- Switching the analysis mode (domain, host, or URL)
- Adding historical backlink trend analysis
- Enhancing the AI prompt to generate:
  - Toxic backlink alerts
  - Link-building opportunities
  - Competitor backlink gap analysis
- Replacing storage with:
  - Databases or data warehouses
  - Slack / email notifications
  - BI dashboards
- Scheduling the workflow for continuous backlink monitoring

## Summary

This n8n template delivers an end-to-end backlink intelligence system, from raw backlink retrieval to AI-powered interpretation and structured storage. By combining SE Ranking's backlink data with OpenAI-driven reasoning, it eliminates manual SEO analysis and enables scalable, repeatable backlink monitoring.
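The header-authentication format described in the setup (the literal word Token, a space, then the API key) can be sketched as a small helper. The header object shape is a generic HTTP convention; only the "Token &lt;key&gt;" value format comes from the setup instructions above.

```javascript
// Sketch of the SE Ranking header-auth value: "Token" + space + API key.
// The Content-Type line is a generic assumption, not from the template.
function seRankingHeaders(apiKey) {
  return {
    Authorization: `Token ${apiKey}`,
    "Content-Type": "application/json",
  };
}
```

In n8n's HTTP Header Auth credential, this corresponds to header name `Authorization` and value `Token <your API key>`.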
by WeblineIndia
# Zoho CRM Deal Forecasting with External Market Factors

This workflow automatically fetches active deals from Zoho CRM, retrieves real-time market signals, calculates AI-enhanced forecast metrics, evaluates deal-market alignment, stores data in a database, updates the CRM, and sends a summary alert to Slack.

This workflow runs weekly to help sales teams make data-driven decisions. It fetches all open deals from Zoho and calculates expected revenue using the deal amount, probability, seasonal trends, and market signals. An AI node evaluates each deal's match ratio against current market conditions. Forecasts and AI insights are stored in a database and written back into Zoho. A Slack message summarizes the key metrics for easy review.

You receive:

- **Weekly automated deal forecasts**
- **AI-powered deal-market alignment insights**
- **Database storage for historical trends**
- **Slack summary notifications**

Ideal for sales teams wanting real-time insight into pipeline health and market alignment without manual calculations.

## Quick Start - Implementation Steps

1. Import the provided n8n workflow JSON file.
2. Add your Zoho CRM credentials in all relevant nodes.
3. Add your AlphaVantage API key in the Market Signal node.
4. Connect your Slack credentials and select the channel for alerts.
5. Connect your Supabase (or preferred database) account for storing forecasts.
6. Activate the workflow; it will run automatically on the configured weekly schedule.

## What It Does

This workflow automates deal forecasting with AI-enhanced insights:

- Fetches all active deals from Zoho CRM.
- Retrieves real-time market data (SPY index) from AlphaVantage.
- Combines deal and market data for forecast calculations.
- Calculates expected revenue using:
  - Deal amount
  - Probability
  - Seasonal factors
  - Market signals
- Sends deal data to an AI node for a match ratio, confidence level, and reasoning.
- Parses the AI output and merges it with the forecast data.
- Stores forecast and AI metrics in a database (Supabase).
- Updates Zoho CRM with the adjusted forecast and AI insights.
- Sends a summary alert to Slack including:
  - Deal name and stage
  - Amount, probability, and expected revenue
  - Market signal and seasonal factor
  - AI match ratio and confidence

This ensures teams see clear, actionable sales insights every week.

## Who's It For

This workflow is ideal for:

- Sales managers and CRM admins
- Revenue operations teams
- Forecasting analysts
- Teams using Zoho CRM and Slack for pipeline management
- Anyone wanting AI insights on market alignment for deals

## Requirements to Use This Workflow

To run this workflow, you need:

- **n8n instance** (cloud or self-hosted)
- **Zoho CRM account** with API access
- **AlphaVantage API key** for market data
- **Slack workspace** with API permissions
- **Supabase or another database** for storing forecasts
- Basic understanding of deals, probabilities, and seasonal forecasting

## How It Works

1. **Weekly Trigger** - The workflow runs automatically once a week.
2. **Fetch Deals** - Retrieves all active deals from Zoho CRM.
3. **Get Market Signal** - Fetches real-time market data.
4. **Combine Deal & Market Info** - Merges the deal and market datasets.
5. **Generate Forecast Metrics** - Calculates expected revenue using deal info, seasonality, and market influence.
6. **AI Deal Match Evaluator** - AI evaluates the alignment of each deal with market conditions.
7. **Parse AI Output & Merge Forecast** - Parses the AI response and combines it with the forecast data.
8. **Store Forecast in Database** - Saves the forecast and AI insights to Supabase.
9. **Update Deal Forecast in Zoho** - Updates deals with the adjusted forecast and AI insights.
10. **Send Forecast Summary to Slack** - Sends a clear summary with the key metrics.

## Setup Steps

1. Import the workflow JSON file into n8n.
2. Add Zoho credentials for the deal fetch and update nodes.
3. Add the AlphaVantage API key for the market signal node.
4. Configure the Supabase node to store forecast data.
5. Add Slack credentials and choose a channel for notifications.
6. Test the workflow manually to ensure metrics are calculated correctly.
7. Activate the weekly trigger.

## How To Customize Nodes

- **Forecast Calculation:** Modify the Generate Forecast Metrics node to adjust seasonal factors or the calculation logic.
- **AI Match Evaluation:** Tweak the prompts in Message a Model to adjust the AI scoring logic or reasoning output.
- **Database Storage:** The Supabase node can include additional fields:
  - Timestamp
  - Deal owner
  - Notes or comments
  - Additional KPIs
- **Slack Alerts:** Customize the message format, emojis, or mentions for team readability.

## Add-Ons (Optional Enhancements)

- Integrate multiple market indices for more accurate forecasting.
- Add multi-stage probability adjustments.
- Create dashboards using the stored forecast data.
- Extend the AI evaluation for risk scoring or priority recommendations.

## Use Case Examples

1. **Pipeline Health:** Quickly see which deals are aligned with market conditions.
2. **Forecast Accuracy:** Track historical vs AI-enhanced forecasts for trend analysis.
3. **Team Notifications:** Slack summary alerts keep sales and leadership informed weekly.

## Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No Slack alerts | Invalid credentials | Re-check the Slack API key and channel |
| Forecast not updating | Zoho API error | Verify Zoho OAuth credentials |
| AI node fails | Model misconfiguration | Check OpenAI API credentials & prompt format |
| Data not stored | Supabase connection issue | Verify credentials and table mapping |

## Need Help?

If you need assistance setting up the workflow, modifying the AI forecast logic, or integrating Slack summaries, our n8n workflow development team at WeblineIndia can help. We provide workflow customization, advanced forecasting, and reporting solutions for Zoho CRM pipelines.
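The expected-revenue calculation in the Generate Forecast Metrics node can be sketched as below. The multiplicative form and field names are assumptions for illustration; the template's actual formula may weight the factors differently.

```javascript
// Sketch of "Generate Forecast Metrics" (assumed multiplicative model):
// expected revenue scales the deal amount by win probability, a seasonal
// factor, and a market-signal factor.
function expectedRevenue(deal, seasonalFactor, marketSignal) {
  const probability = deal.probability / 100; // Zoho stores probability as 0-100
  return Math.round(deal.amount * probability * seasonalFactor * marketSignal);
}
```

For example, a $10,000 deal at 50% probability with a 1.1 seasonal factor and neutral market signal (1.0) yields an expected revenue of $5,500.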
by NODA shuichi
**Description:** Your personal AI Book Curator that reads reviews, recommends books, and supports affiliate links.

This advanced workflow acts as a complete "Reading Assistant Application" with monetization features. It takes a book title via a form, researches it using Google APIs, and employs an OpenAI Agent to generate a summary and recommendations.

## Why use this template?

- **Monetization Support:** Just enter your Amazon Affiliate Tag in the config node, and all email links will automatically include your tag.
- **Organized & Scalable:** The workflow is clearly grouped into 4 sections (Input, Enrichment, AI, Delivery) with sticky notes for easy navigation.

## How it works

1. **Input:** The user submits a book title (e.g., "Atomic Habits").
2. **Research:** The workflow fetches book metadata and searches for real-world reviews.
3. **Analyze:** GPT-4o explains why the book is interesting and suggests 3 related reads.
4. **Deliver:** Generates a beautiful HTML email with purchase links and logs the request to Google Sheets.

## Setup Requirements

- **Google Sheets:** Create the headers: date, book_title, author, ai_comment, user_email.
- **Credentials:** OpenAI, Google Custom Search, Gmail, Google Sheets.
- **Config:** Open the "1. Input & Config" section to enter API keys and IDs.
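The affiliate-link step described above can be sketched as a small helper that appends your Associates tag to each purchase URL. Amazon affiliate tags are conventionally passed via the `tag` query parameter; the function name is illustrative, not the template's.

```javascript
// Sketch of the affiliate-tag step (hypothetical helper name): append
// the Amazon Associates tag to a purchase link before the email is
// rendered, overwriting any tag already present.
function withAffiliateTag(amazonUrl, tag) {
  const url = new URL(amazonUrl);
  url.searchParams.set("tag", tag);
  return url.toString();
}
```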
by Yehor EGMS
The original LLM Council concept was introduced by Andrej Karpathy and published as an open-source repository demonstrating multi-model consensus and ranking. This workflow is my adaptation of that original idea, reimplemented and structured as a production-ready n8n template.

Original repository: https://github.com/karpathy/llm-council

This n8n template implements the LLM Council pattern: a single user question is processed in parallel by multiple large language models, independently evaluated by peer models, and then synthesized into one high-quality, consensus-driven final answer. It is designed for use cases where answer quality, balance, and reduced single-model bias are critical.

## Section 1: Trigger & Input

**When Chat Message Received (Chat Trigger)**

Purpose: Receives a user's message and initiates the entire workflow.

How it works:

- A user sends a chat message
- The message is stored as the Original Question
- The same input is forwarded simultaneously to multiple LLM pipelines

Why it matters: Provides a clean, unified entry point for all downstream multi-model logic.

## Section 2: Stage 1 - Parallel LLM Responses

**Basic LLM Chains (x4)**

Models used:

- Anthropic Claude
- OpenAI GPT
- xAI Grok
- Google Gemini

Purpose: Each model independently generates its own response to the same question.

Key characteristics:

- Identical prompt structure for all models
- Independent reasoning paths
- No shared context between models

Why it matters: Produces diverse perspectives, reasoning styles, and solution approaches.

## Section 3: Stage 2 - Response Anonymization

**Set Nodes (Response A / B / C / D)**

Purpose: Stores model outputs in an anonymized format:

- Response A
- Response B
- Response C
- Response D

Why it matters: Prevents evaluator models from knowing which LLM authored which response, reducing bias during evaluation.

## Section 4: Stage 3 - Peer Evaluation & Ranking

**Evaluation Chains (Claude / GPT / Grok / Gemini)**

Purpose: Each model acts as a reviewer and:

- Analyzes all four anonymized responses
- Describes the strengths and weaknesses of each
- Produces a strict FINAL RANKING from best to worst

Ranking format (strict):

FINAL RANKING:
- Response B
- Response A
- Response D
- Response C

Why it matters: Creates multiple independent quality assessments from different model perspectives.

## Section 5: Stage 4 - Ranking Aggregation

**Code Node (JavaScript)**

Purpose: Aggregates all peer rankings by:

- Parsing ranking positions
- Calculating the average position per response
- Counting evaluation occurrences
- Sorting responses by best average score

Output includes:

- Aggregated rankings
- Best response label
- Best average score

Why it matters: Transforms subjective rankings into a structured, quantitative consensus.

## Section 6: Stage 5 - Final Consensus Answer

**Chairman LLM Chain**

Purpose: One model acts as the Council Chairman and:

- Reviews all original responses
- Considers the peer rankings and aggregated scores
- Identifies consensus patterns and disagreements
- Produces a single, clear, high-quality final answer

Why it matters: Delivers a refined response that reflects collective model intelligence rather than a simple average.

## Workflow Overview

| Stage | Node / Logic | Purpose |
|-------|--------------|---------|
| 1 | Chat Trigger | Receive user question |
| 2 | LLM Chains | Generate independent responses |
| 3 | Set Nodes | Anonymize outputs |
| 4 | Evaluation Chains | Peer review & ranking |
| 5 | Code Node | Aggregate rankings |
| 6 | Chairman LLM | Final synthesized answer |

## Key Benefits

- **Multi-model intelligence** - avoids reliance on a single LLM
- **Reduced bias** - anonymized peer evaluation
- **Quality-driven selection** - ranking-based consensus
- **Modular architecture** - easy to add or replace models
- **Language-flexible** - input and output languages are configurable
- **Production-ready logic** - clear stages, deterministic ranking

## Ideal Use Cases

- High-stakes decision support
- Complex technical or architectural questions
- Strategy and research synthesis
- AI assistants requiring higher trust and reliability
- Comparing and selecting the best LLM-generated answers
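The Stage 4 aggregation Code node described above (parse positions, average per response, sort) can be sketched like this. The exact parsing of the "FINAL RANKING:" text is omitted; this assumes the rankings have already been extracted into ordered arrays.

```javascript
// Sketch of the ranking-aggregation Code node: each reviewer emits an
// ordered list of response labels (best first). We average each label's
// 1-based position across reviewers (lower is better) and sort.
function aggregateRankings(rankings) {
  const stats = {};
  for (const ranking of rankings) {
    ranking.forEach((label, index) => {
      stats[label] = stats[label] || { total: 0, count: 0 };
      stats[label].total += index + 1; // 1-based position
      stats[label].count += 1;        // evaluation occurrences
    });
  }
  return Object.entries(stats)
    .map(([label, s]) => ({ label, avg: s.total / s.count, count: s.count }))
    .sort((a, b) => a.avg - b.avg); // best average position first
}
```

If two reviewers rank "Response B" first and one ranks it second, its average position (4/3) beats "Response A" (5/3), so B is labeled the best response.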
by Dr. Firas
# AI Image Generation Workflow - Scalable E-commerce Product Images

This workflow automates the creation of high-quality, AI-generated product images using NanoBanana Pro. It analyzes multiple reference images, generates a professional photoshoot-style prompt, creates a new image, and stores the final result with a public URL for reuse.

Documentation: Notion Guide

## Who is this for?

This workflow is designed for:

- E-commerce store owners
- Digital marketers and growth teams
- Creative agencies
- Automation builders using n8n
- Anyone who wants to generate scalable, consistent product images from existing photos

No advanced coding skills are required.

## What problem does this workflow solve? / Use case

Creating professional product images at scale is expensive, slow, and inconsistent. This workflow solves:

- Manual photoshoot costs
- Inconsistent visual branding
- Time wasted on prompt writing
- Difficulty generating AI-ready public image URLs
- Repetitive image upload and storage steps

Typical use case: transform 3 reference photos (model + product) into a studio-quality fashion image automatically.

## What this workflow does

1. Collects exactly 3 images via a form upload
2. Validates inputs to ensure all required images are present
3. Splits the images into individual processing paths
4. Uploads the original images to Google Drive (permanent storage)
5. Generates public, crawlable image URLs
6. Analyzes each image using AI vision (GPT-4o)
7. Aggregates the image descriptions into a structured context
8. Generates a professional photoshoot prompt using an AI agent
9. Creates a new image via NanoBanana Pro
10. Polls the API until the image generation is completed
11. Downloads the final image as a binary file
12. Uploads the final image to Google Drive
13. Logs the results (images + descriptions) into Google Sheets

## Setup

Required credentials:

- Google Drive (OAuth)
- Google Sheets (OAuth)
- OpenAI API key
- AtlasCloud API key

Required configuration:

- Replace all <PLACEHOLDER_VALUE> fields:
  - Google Drive folder IDs
  - Google Sheets document ID and sheet name
  - AtlasCloud API key
- Ensure the Google Drive folders have write permissions
- Confirm tmpfiles.org is reachable from your environment

Important notes:

- The workflow expects exactly 3 images
- The final image is downloaded as binary before upload
- Public URLs are normalized to https://tmpfiles.org/dl/... for maximum AI compatibility

## Watch This Tutorial

## Need help or want to customize this?

Contact me for consulting and support: LinkedIn / YouTube: @DRFIRASS / Workshops: Mes Ateliers n8n
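The polling step (step 10 above) can be sketched as a generic retry loop. The status endpoint, field names, and status values here are hypothetical; the template's actual API responses may differ.

```javascript
// Sketch of the poll-until-complete step (hypothetical status fields):
// re-check the generation job until it reports "completed", with a
// bounded number of attempts.
async function pollUntilComplete(checkStatus, { intervalMs = 5000, maxTries = 30 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    const job = await checkStatus(); // e.g. an HTTP GET on the job status URL
    if (job.status === "completed") return job;
    if (job.status === "failed") throw new Error("image generation failed");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for image generation");
}
```

In n8n this is typically modeled as a Wait node looping back to an HTTP Request node; the function above just makes the termination logic explicit.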
by Artem Boiko
A professional BIM-to-cost pipeline that extracts data from Revit models (2015โ2026), classifies elements with AI, decomposes them into construction works, and generates detailed cost estimates using the open-source DDC CWICR database. Produces HTML reports and Excel exports with full resource breakdown. Who's it for BIM Managers** automating quantity takeoff and cost estimation Cost Engineers** integrating 5D workflows into design pipelines Construction Companies** standardizing estimates from Revit models General Contractors** doing rapid budget checks during design MEP Engineers** pricing mechanical/electrical/plumbing systems Developers** building custom BIM-to-cost integrations What it does Extracts BIM data from Revit model via converter (RvtExporter) Classifies building vs non-building elements using AI Detects project type (Residential/Commercial/Industrial) Generates construction phases and assigns element types Decomposes each BIM type into detailed work items Searches DDC CWICR vector database for matching rates Calculates costs with unit mapping and resource breakdown Validates work completeness and checks for gaps Generates professional HTML report + Excel file How it works โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ REVIT MODEL (.rvt) โ โ Revit 2015โ2026 supported โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 1: CONVERSION โ โ RvtExporter.exe โ Excel with BIM element schedules โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 2: DATA LOADING & CLASSIFICATION โ โ โข Filter 3D View elements only โ โ โข AI analyzes headers โ aggregation rules (sum/mean/last) โ โ โข AI classifies building vs non-building elements โ โ โข Hard exclude: Grids, Levels, Annotations, Views, etc. 
โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 3: PROJECT ANALYSIS (Stages 0โ3) โ โ STAGE 0: Collect filtered BIM data โ โ STAGE 1: AI detects project type โ โ STAGE 2: AI generates construction phases โ โ STAGE 3: AI assigns element types to phases โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 4: WORK DECOMPOSITION (Stage 4) โ โ Loop through each BIM type: โ โ โข AI decomposes type into work items โ โ โข Example: Window โ Demolition, Installation, Sealing, Hardware โ โ โข Prepares search queries for pricing โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 5: PRICING & CALCULATION (Stages 5โ7) โ โ STAGE 5: Vector search in Qdrant (text-embedding-3-large, 3072 dim) โ โ STAGE 6: Map BIM units โ Rate units (mยฒ โ 100 mยฒ) โ โ STAGE 7: Calculate costs (Qty ร Unit Price) โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 6: VALIDATION & AGGREGATION โ โ STAGE 7.5: AI validates work completeness โ โ STAGE 8: Aggregate costs by phases โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ โ BLOCK 7: REPORT GENERATION (Stage 9) โ โ โข Professional HTML report with expandable rows โ โ โข Excel-compatible XLS file โ โ โข Auto-opens in browser โ โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Pipeline Stages | Stage | Name | Description | |-------|------|-------------| | 0 | Collect | Gather filtered BIM data | | 1 | Project Type | AI detects 
Residential/Commercial/Industrial | | 2 | Phases | AI generates construction phases | | 3 | Assignment | AI assigns element types to phases | | 4 | Decomposition | AI breaks types into work items | | 5 | Vector Search | Query Qdrant for pricing rates | | 6 | Unit Mapping | Convert BIM units to rate units | | 7 | Calculation | Compute costs (Qty ร Price) | | 7.5 | Validation | AI checks completeness, finds gaps | | 8 | Aggregation | Sum costs by phases | | 9 | Reports | Generate HTML + XLS outputs | Prerequisites | Component | Requirement | |-----------|-------------| | n8n | v1.30+ with Execute Command node | | Revit Exporter | RvtExporter.exe (provided separately) | | OpenAI API | For embeddings + LLM tasks | | Qdrant | Vector DB with DDC CWICR collections | | DDC CWICR Data | GitHub | | Windows | For Revit converter execution | Setup 1. Configure File Paths In Setup - Define file paths node: { "path_to_converter": "C:\\path\\to\\RvtExporter.exe", "project_file": "C:\\path\\to\\your_project.rvt", "group_by": "Type Name", "language_code": "DE" } 2. Select Language & Region | Code | Language | City | Currency | |------|----------|------|----------| | AR | Arabic | Dubai | AED | | ZH | Chinese | Shanghai | CNY | | DE | German | Berlin | EUR | | EN | English | Toronto | CAD | | ES | Spanish | Barcelona | EUR | | FR | French | Paris | EUR | | HI | Hindi | Mumbai | INR | | PT | Portuguese | Sรฃo Paulo | BRL | | RU | Russian | St. Petersburg | RUB | 3. Configure AI Model Connect your preferred LLM in the model nodes: | Provider | Model | Notes | |----------|-------|-------| | OpenAI | GPT-4o | Default, recommended | | Anthropic | Claude Opus 4 | High quality | | Google | Gemini 2.5 Pro | Good for large contexts | | xAI | Grok 4 | Fast inference | | DeepSeek | DeepSeek Chat | Cost-effective | | OpenRouter | Various | Multi-model access | 4. 
Set Up Qdrant Ensure DDC CWICR collections are loaded: DE_BERLIN_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR ENG_TORONTO_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR RU_STPETERSBURG_workitems_costs_resources_EMBEDDINGS_3072_DDC_CWICR ... 5. Configure OpenAI Credentials Set up OpenAI API credential for: Embeddings (text-embedding-3-large, 3072 dimensions) LLM calls (if using OpenAI as primary model) Features | Feature | Description | |---------|-------------| | ๐๏ธ Revit Integration | Direct extraction from .rvt files (2015โ2026) | | ๐ค Multi-LLM Support | OpenAI, Claude, Gemini, Grok, DeepSeek | | ๐ Smart Classification | AI separates building from non-building elements | | ๐ Work Decomposition | Breaks BIM types into detailed work items | | ๐ฏ Vector Search | Semantic matching via Qdrant + OpenAI embeddings | | ๐งฎ Unit Mapping | Automatic conversion (mยฒ โ 100 mยฒ, pcs โ sets) | | โ AI Validation | Checks for missing works and duplications | | ๐ Phase Aggregation | Costs grouped by construction phases | | ๐ HTML Report | Professional report with quality indicators | | ๐ Excel Export | XLS file with formulas and links | | ๐ 9 Languages | Full localization + regional pricing | Hard Exclude Categories The pipeline automatically excludes non-physical elements: Levels, Grids, Reference Planes Annotations, Dimensions, Text Notes Tags, Views, Sheets, Schedules Legends, Viewports, Section Boxes Scope Boxes, Match Lines Model Groups, Detail Groups Entourage (RPC people, cars, plants) Example Output Input: Residential building Revit model (45 element types) Processing: Project type detected: Residential Multi-Family Phases generated: Foundations โ Structure โ Envelope โ MEP โ Finishes Types assigned: 45 types โ 5 phases Works decomposed: 45 types โ 280 work items Rates found: 245/280 (87.5%) Output Files: project_2024-12-08.html โ Professional HTML report project_2024-12-08.xls โ Excel with full breakdown HTML Report Features: KPI summary (total cost, items, 
phases) Expandable phase sections Quality indicators (โ green/yellow/red) Resource breakdown per work item Clickable rate codes Responsive design Output Structure ๐ Cost Estimate: Residential Building โโโ ๐ Phase 1: Foundations โ โโโ Foundation walls โ 125.5 mยณ โ โฌ12,450 โ โโโ Concrete footings โ 45.2 mยณ โ โฌ8,340 โ โโโ Waterproofing โ 280 mยฒ โ โฌ4,200 โโโ ๐ Phase 2: Structure โ โโโ Concrete columns โ 18 pcs โ โฌ9,720 โ โโโ Floor slabs โ 450 mยฒ โ โฌ67,500 โ โโโ Stairs โ 3 flights โ โฌ8,100 โโโ ๐ Phase 3: Envelope โ โโโ Exterior walls โ 680 mยฒ โ โฌ95,200 โ โโโ Windows โ 42 pcs โ โฌ25,200 โ โโโ Roof system โ 225 mยฒ โ โฌ33,750 โโโ ๐ฐ TOTAL: โฌ485,240 Notes & Tips First run:** Conversion takes 1โ3 minutes depending on model size Cached conversion:** Subsequent runs skip conversion if Excel exists Testing mode:** Limit to 10 types for faster debugging Rate accuracy:** Depends on DDC CWICR coverage for your region Custom phases:** AI adapts phases based on project type Missing rates:** Flagged with red indicator in report Extending the Pipeline Add custom rates:** Extend Qdrant collection with your pricing Chain to PM tools:** Connect to OpenProject, Monday, Asana Email reports:** Add email node after report generation Cloud storage:** Upload to Google Drive, OneDrive, S3 Webhook trigger:** Replace manual trigger for API access Categories AI ยท Data Transformation ยท Document Ops ยท Files & Storage Tags bim, revit, cost-estimation, 5d-bim, 4d-bim, qdrant, vector-search, openai, construction, quantity-takeoff, html-report, multilingual Author DataDrivenConstruction.io https://DataDrivenConstruction.io info@datadrivenconstruction.io Consulting & Training We help AEC firms implement: BIM-to-cost automation pipelines 4D/5D integration workflows Custom Revit data extractors AI-powered estimation systems Vector database deployment for construction data Contact us to adapt this pipeline to your Revit templates and regional pricing. 
Resources

- **DDC CWICR Database:** GitHub
- **Qdrant Documentation:** qdrant.tech/documentation
- **OpenAI Embeddings:** platform.openai.com
- **n8n Execute Command:** docs.n8n.io

⭐ Star us on GitHub! github.com/datadrivenconstruction/DDC-CWICR
by Cheng Siong Chin
How It Works

This workflow automates monthly tax filing by retrieving financial data, performing AI-driven tax calculations, coordinating pre-filing reviews with key stakeholders, incorporating feedback, and managing overall submission readiness. It pulls accounting records, executes GPT-5-based tax calculations with transparent reasoning, formats comprehensive pre-filing reports, and routes them to a submission coordinator via email for review. The system captures reviewer feedback through structured prompts, applies the necessary corrections, archives finalized records in Google Drive, and continuously tracks filing status. It is designed for accounting firms, tax practices, and finance departments that require coordinated, multi-stakeholder tax filing with minimal manual intervention.

Setup Steps

- Connect the accounting system and configure financial data fetch parameters.
- Set up OpenAI API credentials for tax calculations and reasoning extraction.
- Configure Gmail, Chat Model, and Google Drive credentials.
- Define submission coordinator contacts and configure feedback collection.

Prerequisites

Accounting system access; OpenAI API key; Gmail account; Google Drive

Use Cases

Tax firms managing multi-client monthly filings with partner review

Customization

Modify tax calculation prompts for different jurisdictions; adjust feedback collection fields

Benefits

Eliminates manual filing coordination and reduces submission errors
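The workflow captures reviewer feedback through structured prompts before applying corrections. As a hedged illustration only, the reply could be validated against an expected schema before the correction step runs; the field names (`approved`, `corrections`, `reviewer`) are assumptions, not the template's actual schema.

```python
# Hypothetical validation of a structured reviewer reply (schema assumed).
import json

REQUIRED_FIELDS = {"approved", "corrections", "reviewer"}

def parse_feedback(raw):
    """Parse the reviewer's JSON reply and check the expected fields exist."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"feedback missing fields: {sorted(missing)}")
    return data

reply = ('{"approved": false, "reviewer": "partner@firm.example", '
         '"corrections": [{"line": "VAT", "new_value": 1250.00}]}')
fb = parse_feedback(reply)
print(fb["approved"])  # False
```

Validating before applying corrections keeps a malformed reply from silently producing a wrong filing.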
by noda
AI Recommender: From Food Photo to Restaurant and Book (Google Books Integrated)

What it does

- Analyzes a food photo with an AI vision model to extract dish name + category
- Searches nearby restaurants with Google Places and selects the single best (rating → reviews tie-break)
- Finds a matching book via Google Books and posts a tidy summary to Slack

Who it's for

Foodies, bloggers, and teams who want a plug-and-play flow that turns a single food photo into a dining pick + themed reading.

How it works

1. Google Drive Trigger detects a new photo
2. Dish Classifier (Vision LLM) → JSON (dish_name, category, basic macros)
3. Search Google Places near your origin; Select Best Place (AI)
4. Recommend Book (AI) → Search Google Books → format details
5. Post to Slack (JP/EN both possible)

Requirements

Google Drive / Google Places / Google Books credentials, LLM access (OpenRouter/OpenAI), Slack OAuth.

Customize

Edit origin/radius in Set Origin & Radius, tweak the category→keyword mapping in Normalize Classification, adjust the Slack channel & message in Post to Slack.
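The rating → reviews tie-break described above can be sketched in a few lines. This is a minimal stand-in for the AI-driven Select Best Place step, assuming the candidate dicts mirror Google Places fields (`rating`, `user_ratings_total`); the restaurant names are made up.

```python
# Minimal sketch of the tie-break: highest rating wins, review count
# breaks ties. Field names follow Google Places conventions (assumed).
def select_best_place(places):
    return max(places, key=lambda p: (p["rating"], p["user_ratings_total"]))

candidates = [
    {"name": "Ramen A", "rating": 4.5, "user_ratings_total": 120},
    {"name": "Ramen B", "rating": 4.5, "user_ratings_total": 340},
    {"name": "Cafe C",  "rating": 4.2, "user_ratings_total": 900},
]
print(select_best_place(candidates)["name"])  # Ramen B
```

Sorting on a tuple keeps the tie-break rule explicit: the second element only matters when ratings are equal.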
by Calvin Cunningham
Use Cases

- Personal or family budget tracking
- Small business expense logging via Telegram
- Hands-free logging (using voice messages)

How it works

- Trigger receives text or voice.
- An optional branch transcribes audio to text.
- AI parses the message into a structured array (the SOP enforces the schema).
- Split Out produces one item per expense.
- Loop Over Items appends rows sequentially with a Wait, preventing missed writes.
- In parallel, Item Lists (Aggregate) builds a single summary string; Merge (Wait for Both) releases one final Telegram confirmation.

Setup Instructions

1. Connect credentials: Telegram, Google, OpenAI.
2. Sheets: Create a sheet with headers Date, Category, Merchant, Amount, Note. Copy the Spreadsheet ID + sheet name. Map columns in Append to Google Sheet.
3. Pick models: set the Chat model (e.g., gpt-4o-mini) and Whisper for transcription if using audio.
4. Wait time: keep 500–1000 ms to avoid API race conditions.
5. Run: Send a Telegram message like: Gas 34.67, Groceries 82.45, Coffee 6.25, Lunch 14.90.

Customization ideas

- Add a categories map (Memory/Set) for consistent labeling.
- Add currency detection/formatting.
- Add an error-to-Telegram path for invalid schema.
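In the workflow the AI performs the parsing under a schema-enforcing SOP; as a hedged illustration of the structured array it is expected to produce from a message like the example above, here is a regex stand-in. The function name and the defaulting rules (empty Merchant/Note, today's date) are assumptions for the sketch.

```python
# Illustrative stand-in for the AI parsing step: turns
# "Gas 34.67, Groceries 82.45" into rows matching the sheet headers
# Date, Category, Merchant, Amount, Note. Defaults are assumptions.
import re
from datetime import date

def parse_expenses(text, when=None):
    when = when or date.today().isoformat()
    rows = []
    for part in text.split(","):
        m = re.match(r"\s*(.+?)\s+(\d+(?:\.\d+)?)\s*$", part)
        if m:
            rows.append({"Date": when, "Category": m.group(1),
                         "Merchant": "", "Amount": float(m.group(2)),
                         "Note": ""})
    return rows

rows = parse_expenses("Gas 34.67, Groceries 82.45, Coffee 6.25, Lunch 14.90")
print(len(rows), rows[0]["Amount"])  # 4 34.67
```

Each dict is one item, so a list like this splits naturally into the one-row-per-expense items that Split Out and the append loop consume.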