by Intuz
This n8n template from Intuz provides a complete, automated solution for preparing and delivering context-rich briefings directly to attendees before every meeting. It acts as an AI-powered executive assistant, gathering relevant information from all your key work tools so everyone arrives prepared and aligned.

**Who's this workflow for?**
- Engineering Managers & Team Leads
- Product Managers & Project Managers
- Scrum Masters & Agile Coaches
- Any team that holds regular status, planning, or technical meetings

**How it works**
1. **Trigger on New Calendar Event:** The workflow starts automatically whenever a new meeting is created in a designated Google Calendar.
2. **Fetch Previous Context:** It immediately connects to Notion to retrieve the notes from the most recent past meeting, ensuring continuity.
3. **Wait for the Right Moment:** The workflow calculates a time 15 minutes before the meeting's scheduled start and pauses execution until then.
4. **Gather Real-Time Project Data:** Just before the meeting, the workflow wakes up and:
   - Extracts keywords from the meeting title.
   - Searches GitHub for recent pull requests (PRs) relevant to those keywords.
   - Searches Jira for any tickets or issues that match the meeting's topic.
5. **Build the Intelligent Briefing:** It assembles all the gathered information (previous notes from Notion, current PRs from GitHub, and relevant tickets from Jira) into a single, well-formatted Slack message.
6. **Deliver to Each Attendee:** The workflow identifies all attendees from the Google Calendar invite, finds their corresponding Slack profiles via email, and sends the personalized briefing as a direct message (DM) to each one, ensuring everyone is prepared just in time.

**Key Requirements to Use This Template**
1. **n8n Instance:** An active n8n account (Cloud or self-hosted).
2. **Google Calendar Account:** To trigger the workflow on new events.
3. **Notion Account:** With a dedicated database for storing meeting notes.
4. **GitHub Account:** To search for relevant pull requests.
5. **Jira Cloud Account:** To search for relevant project tickets.
6. **Slack Workspace & App:** A Slack workspace where you have permission to install an app. You will need a Bot Token with the necessary permissions.

**Setup Instructions**
1. **Google Calendar Trigger:** In the "Capture New Google Calendar Event" node, connect your Google Calendar account and select the calendar you want to monitor.
2. **Notion Connection:** In the "Get Last Meeting Notes" node, connect your Notion account and select the Notion database that contains your meeting notes.
3. **GitHub & Jira Connections:** In the "Get PRs from Repo" node, connect your GitHub account and select the repository to search. In the "Get Jira Issues Related to Meeting" node, connect your Jira Cloud account. You can customize the JQL query if needed.
4. **Slack Configuration (crucial step):**
   - Create a Slack app: go to api.slack.com/apps, create a new app, and install it to your workspace.
   - Set permissions: in your app's "OAuth & Permissions" settings, add the Bot Token Scopes `chat:write` (to send messages) and `users:read.email` (critical for looking up attendees), then reinstall the app to your workspace.
   - Get the Bot Token: copy the "Bot User OAuth Token" (it starts with `xoxb-`).
   - Connect in n8n: in the "Get User Slack Info from Email" node, click "Header Parameters" and replace `{{ slack oauth token }}` with your actual Bot Token. In the "Send Meeting Context in Slack DM" node, connect your Slack credentials using the same Bot Token.
5. **Activate the Workflow:** Save the workflow and toggle the "Active" switch to ON. Your automated pre-meeting bot is now live!

**Connect with us**
- Website: https://www.intuz.com/n8n-workflow-automation-templates
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz
- For custom workflow automation, click here: Get Started
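The briefing assembly in step 5 can be sketched as an n8n Code-node snippet. This is a minimal illustration, not the template's exact implementation; the field names `notes`, `prs`, and `tickets` are assumptions about the shape of the gathered data.

```javascript
// Sketch of the "Build the Intelligent Briefing" step.
// Input shape is illustrative: previous Notion notes, GitHub PRs, Jira tickets.
function buildBriefing({ title, notes, prs, tickets }) {
  const lines = [`*Pre-meeting briefing: ${title}*`, ''];
  lines.push('*Last meeting notes:*', notes || '_No previous notes found._', '');
  lines.push('*Open pull requests:*');
  lines.push(...(prs.length ? prs.map(p => `• <${p.url}|${p.title}>`) : ['_None matched._']));
  lines.push('', '*Related Jira tickets:*');
  lines.push(...(tickets.length ? tickets.map(t => `• ${t.key}: ${t.summary}`) : ['_None matched._']));
  return lines.join('\n'); // mrkdwn text for chat.postMessage
}
```

The resulting string can be passed as the `text` field of a Slack `chat.postMessage` call, with each attendee's user ID (from `users.lookupByEmail`) as the channel.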
by Cheng Siong Chin
**How It Works**
This workflow delivers intelligent multilingual audio content creation for global marketing teams, e-learning providers, and content production studios. It solves the complex challenge of generating culturally adapted, professionally voiced translations optimized for each target language.

The system begins with AI-powered localization that adapts source content for cultural context, idioms, and regional preferences rather than translating literally. Specialized AI agents then optimize speech parameters (pace, tone, emphasis) and voice characteristics (pitch, timbre, style) specific to each language's phonetic requirements. The workflow prepares language arrays and loops through each target language, generating optimized audio via ElevenLabs with customized voice parameters. All audio files are processed, formatted with metadata, and aggregated into a complete deliverable package, transforming single-source content into publication-ready multilingual audio assets.

**Setup Steps**
1. Configure OpenAI API credentials in all AI agent nodes.
2. Set up an ElevenLabs account and obtain an API key.
3. Define the target languages list in the "Workflow Configuration" node using ISO language codes.
4. Customize the localization prompts in the AI agents to match brand voice and content type.
5. Adjust voice parameter ranges and optimization criteria based on audio requirements.
6. Configure output formatting in the "Aggregate Results" node.

**Prerequisites**
OpenAI API access with GPT-4 capabilities; an active ElevenLabs subscription with multi-voice access.

**Use Cases**
Global product launch campaigns; international e-learning course production.

**Customization**
Modify AI prompts for industry-specific terminology; add quality validation checkpoints.

**Benefits**
Achieves native-quality audio across languages; reduces production time by 80%.
by Cheng Siong Chin
**How It Works**
This workflow provides enterprise-grade translation and text-to-speech automation for international communication teams, content publishers, and localization services. It addresses the challenge of producing high-quality multilingual audio content with consistent accuracy and natural delivery at scale.

An AI orchestrator analyzes source content to determine the optimal translation strategy, selecting specialized agents based on content type, complexity, and target languages. The translation agent processes text with contextual awareness, generating structured output that feeds into ElevenLabs' neural text-to-speech engine. Each audio file undergoes automated quality validation checking pronunciation accuracy, natural flow, and technical specifications. High-quality outputs proceed to standardized formatting for delivery, while failures trigger dedicated error handling with diagnostic reporting, ensuring reliable production of professional multilingual audio assets.

**Setup Steps**
1. Configure the OpenAI API key in "Translation Orchestrator".
2. Set up ElevenLabs credentials in "Text-to-Speech".
3. Define source and target languages in "Workflow Configuration".
4. Customize the orchestration logic based on content types and complexity.
5. Set quality thresholds in "Audio Quality Validation" to match your output requirements.

**Prerequisites**
OpenAI API access with GPT-4 capabilities; an active ElevenLabs subscription.

**Use Cases**
Enterprise content localization; multilingual customer communications.

**Customization**
Add language-specific translation agents; modify the orchestration routing logic.

**Benefits**
Delivers consistent translation quality through intelligent routing.
by Cheng Siong Chin
**How It Works**
This workflow automates IoT device compliance monitoring and anomaly detection for industrial operations. Designed for facility managers, quality assurance teams, and regulatory compliance officers, it solves the challenge of continuously monitoring sensor networks while ensuring regulatory adherence and detecting operational issues in real time.

The system runs every 15 minutes, fetching IoT sensor data and structuring it for analysis. Dual AI agents evaluate compliance against regulatory standards and detect operational anomalies. Issues trigger immediate email and Slack alerts for rapid response. Daily data aggregates into comprehensive ESG reports with AI-generated insights, automatically emailed to stakeholders for transparency and audit trails.

**Setup Steps**
1. Configure Airtable credentials and set the 15-minute schedule interval.
2. Add OpenAI API keys for the compliance and anomaly detection agents, and configure regulatory thresholds.
3. Set Gmail/Slack credentials for alerts and ESG report distribution.

**Prerequisites**
IoT sensor platform API access, OpenAI API key, Gmail/Slack accounts.

**Use Cases**
Manufacturing quality control; environmental compliance monitoring.

**Customization**
Modify sensor polling frequency, adjust compliance rules, customize anomaly thresholds.

**Benefits**
Continuous compliance assurance; instant anomaly detection.
by Akshay Chug
**Overview**
Stop wasting time on leads that will never convert. This workflow scores every inbound form submission 1-10 using Claude AI, then automatically replies and routes based on fit: hot leads get an instant email and Slack alert, warm leads get a follow-up prompt, and cold leads get a polite decline.

**How it works**
1. A lead fills out your built-in n8n contact form.
2. Claude AI scores them 1-10 against your ideal customer profile.
3. Hot (7-10) → Slack alert + personalised email + logged to Sheets.
4. Warm (4-6) → holding reply email + logged to Sheets.
5. Cold (1-3) → polite decline email + logged to Sheets.

**Setup steps**
1. Copy the form URL from the Inbound Lead Form node and share it as your contact form.
2. Add your Anthropic API key to the Claude Sonnet sub-node.
3. Connect Gmail to the three reply nodes and update the email signatures.
4. Connect Slack to Notify Team - Hot Lead, or right-click and disable it.
5. Create a Google Sheet with headers: Timestamp, Name, Email, Company, Size, Message, Score, Tier, Reasoning, and connect it in all three Log nodes.
6. Edit the scoring prompt in Score Lead Intent to describe your ideal customer.
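The score-to-tier routing can be expressed as a tiny branch mirroring the thresholds above. This is a sketch for clarity; the template itself routes with Switch/IF nodes rather than a code function.

```javascript
// Map Claude's 1-10 lead score to the three routing tiers:
// 7-10 → hot, 4-6 → warm, 1-3 → cold.
function tierForScore(score) {
  if (score >= 7) return 'hot';
  if (score >= 4) return 'warm';
  return 'cold';
}
```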
by Cheng Siong Chin
**How It Works**
This workflow automates cybersecurity incident detection and response for security operations centers (SOCs) facing a constant threat landscape. Designed for security analysts, IT operations teams, and CISOs, it solves the challenge of manually triaging security alerts, validating threats, and coordinating response actions across multiple systems and stakeholders.

The system schedules continuous security monitoring, generates simulated anomaly data for testing, validates behaviors through AI agents (the Behavior Validator confirms threat patterns; the Governance Agent assesses severity), routes incidents by criticality (low/critical), and orchestrates responses: critical threats trigger human review, escalation workflows, and Slack alerts, while low-priority items receive automated remediation with Google Sheets logging. By combining AI-powered threat analysis with intelligent routing and multi-channel response coordination, organizations reduce incident response time by 80%, minimize false positives, ensure consistent threat handling, and let security teams focus on strategic defense rather than alert fatigue.

**Setup Steps**
1. Connect the Schedule Trigger for continuous monitoring.
2. Configure SIEM/security data sources.
3. Add OpenAI API keys to the Behavior Validator and Governance Agent nodes.
4. Define severity thresholds and threat patterns in the agent prompts.
5. Link Slack webhooks for critical incident alerts and escalation channels.
6. Connect the Google Sheets API for incident logging and compliance tracking.

**Prerequisites**
SIEM or security monitoring platform access; OpenAI API account.

**Use Cases**
Intrusion detection response; malware outbreak containment.

**Customization**
Modify AI prompts for organization-specific threat models; adjust severity scoring algorithms.

**Benefits**
Reduces incident response time by 80%; minimizes false-positive alert fatigue.
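The low/critical branch can be sketched as follows. The action names are illustrative, not the workflow's actual node names.

```javascript
// Route a validated incident by criticality:
// critical → human review + escalation + Slack; low → auto-remediation + Sheets log.
function routeIncident(incident) {
  if (incident.severity === 'critical') {
    return { actions: ['human_review', 'escalation', 'slack_alert'] };
  }
  return { actions: ['auto_remediation', 'sheets_log'] };
}
```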
by Chandan Singh
This workflow synchronizes MySQL database table schemas with a vector database in a controlled, idempotent manner. Each database table is indexed as a single vector to preserve complete schema context for AI-based retrieval and reasoning. The workflow prevents duplicate vectors and automatically handles schema changes by detecting differences and re-indexing only when required.

**How it works**
1. The workflow starts with a manual trigger and loads global configuration values.
2. All database tables are discovered and processed one by one inside a loop.
3. For each table, a normalized schema representation is generated and a deterministic hash is calculated.
4. A metadata table is checked to determine whether a vector already exists for the table.
5. If a vector exists, the stored schema hash is compared with the current hash to detect schema changes.
6. When a schema change is detected, the existing vector and metadata are deleted.
7. The updated table schema is embedded as a single vector (without chunking) and upserted into the vector database.
8. Vector identifiers and schema hashes are persisted for future executions.

**Setup steps**
1. Set the MySQL database name using `mysql_database_name`.
2. Configure the Pinecone index name using `pinecone_index`.
3. Set the vector namespace using `vector_namespace`.
4. Configure the Pinecone index host using `vector_index_host`.
5. Add your Pinecone API key using `pinecone_apikey`.
6. Select the embedding model using `embedding_model`.
7. Configure text processing options: `chunk_size`, `chunk_overlap`.
8. Set the metadata table identifier using `dataTable_Id`.
9. Save and run the workflow manually to perform the initial schema synchronization.

**Limitations**
- This workflow indexes database table schemas only; table data (rows) are not embedded or indexed.
- Each table is stored as a single vector. Very large or highly complex schemas may approach model token limits, depending on the selected embedding model.
- Schema changes are detected using a hash-based comparison. Non-structural changes that do not affect the schema representation will not trigger re-indexing.
by Cheng Siong Chin
**How It Works**
This workflow automates SEO content creation by aggregating multi-source research and generating optimized articles. Designed for content marketers, SEO specialists, and digital agencies, it solves the time-consuming challenge of researching trending topics and crafting search-optimized content at scale.

The system pulls discussions from Reddit, videos from YouTube, and industry news via APIs, then combines this data into comprehensive insights. An AI agent analyzes the aggregated research, performs Google Search SEO analysis, consults Wikipedia for accuracy, and generates structured, SEO-optimized HTML content. The final output saves automatically to Google Sheets for easy management and publishing workflows.

**Setup Steps**
1. Configure Reddit, YouTube, and industry news API credentials in the fetch nodes.
2. Add an OpenAI API key for the GPT-4 agent and a Google API key for search analysis.
3. Connect Google Sheets with write permissions for content storage.

**Prerequisites**
Reddit API access, YouTube Data API key, industry news API.

**Use Cases**
Blog content automation; competitive content analysis; trending topic research.

**Customization**
Modify research sources, adjust AI prompts for brand voice, customize SEO parameters.

**Benefits**
10x faster content production; multi-platform research coverage.
by sato rio
This workflow streamlines the entire inventory replenishment process by leveraging AI for demand forecasting and intelligent logic for supplier selection. It aggregates data from multiple sources (POS systems, weather forecasts, SNS trends, and historical sales) to predict future demand. Based on these predictions, it calculates shortages, requests quotes from multiple suppliers, selects the optimal vendor based on cost and lead time, and executes the order automatically.

🚀 Who is this for?
- **Retail & E-commerce Managers** aiming to minimize stockouts and reduce overstock.
- **Supply Chain Operations** looking to automate procurement and vendor selection.
- **Data Analysts** wanting to integrate external factors (weather, trends) into inventory planning.

💡 How it works
1. **Data Aggregation:** Fetches data from POS systems, MySQL (historical sales), OpenWeatherMap (weather), and SNS trend APIs.
2. **AI Forecasting:** Formats the data and sends it to an AI prediction API to forecast demand for the next 7 days.
3. **Shortage Calculation:** Compares the forecast against current stock and safety stock to determine necessary order quantities.
4. **Supplier Optimization:** For items needing replenishment, the workflow requests quotes from multiple suppliers (A, B, C) in parallel and selects the best supplier based on the lowest total cost within a 7-day lead time.
5. **Execution & Logging:** Places the order via API, updates the inventory system, and logs the transaction to MySQL.
6. **Anomaly Detection:** If the AI's confidence score is low, it skips the auto-order and sends an alert to Slack for manual review.

⚙️ Setup steps
1. **Configure Credentials:** Set up credentials for MySQL and Slack in n8n.
2. **API Keys:** You will need an API key for OpenWeatherMap (or a similar service).
3. **Update Endpoints:** The HTTP Request nodes use placeholder URLs (e.g., pos-api.example.com, ai-prediction-api.example.com). Replace these with your actual internal APIs, ERP endpoints, or AI service (such as OpenAI).
4. **Database Prep:** Ensure your MySQL database has a table named `forecast_order_log` to store the order history.
5. **Schedule:** The workflow is set to run daily at 03:00. Adjust the Schedule Trigger node as needed.

📋 Requirements
- **n8n** (Self-hosted or Cloud)
- **MySQL** database
- **Slack** workspace
- External APIs for POS, Inventory, and Supplier communication (or mock endpoints for testing)
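The shortage calculation in step 3 reduces to a simple reorder formula. This is a sketch; the field names are illustrative rather than the workflow's exact item schema.

```javascript
// Order enough to cover forecast demand while keeping the safety buffer intact;
// never order a negative quantity when stock is ample.
function orderQuantity({ forecast7d, currentStock, safetyStock }) {
  return Math.max(0, forecast7d + safetyStock - currentStock);
}
```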
by Dr. Firas
🚀 AI Image Generation Workflow – Scalable E-commerce Product Images

This workflow automates the creation of high-quality, AI-generated product images using NanoBanana Pro. It analyzes multiple reference images, generates a professional photoshoot-style prompt, creates a new image, and stores the final result with a public URL for reuse.

📄 Documentation: Notion Guide

👤 Who is this for?
- E-commerce store owners
- Digital marketers and growth teams
- Creative agencies
- Automation builders using n8n
- Anyone who wants to generate scalable, consistent product images from existing photos

No advanced coding skills are required.

❓ What problem does this workflow solve? / Use case
Creating professional product images at scale is expensive, slow, and inconsistent. This workflow solves:
- Manual photoshoot costs
- Inconsistent visual branding
- Time wasted on prompt writing
- Difficulty generating AI-ready public image URLs
- Repetitive image upload and storage steps

Typical use case: transform 3 reference photos (model + product) into a studio-quality fashion image automatically.

⚙️ What this workflow does
1. Collects exactly 3 images via a form upload
2. Validates inputs to ensure all required images are present
3. Splits images into individual processing paths
4. Uploads original images to Google Drive (permanent storage)
5. Generates public, crawlable image URLs
6. Analyzes each image using AI vision (GPT-4o)
7. Aggregates image descriptions into a structured context
8. Generates a professional photoshoot prompt using an AI agent
9. Creates a new image via NanoBanana Pro
10. Polls the API until image generation is completed
11. Downloads the final image as a binary file
12. Uploads the final image to Google Drive
13. Logs results (images + descriptions) into Google Sheets

🛠️ Setup

Required credentials:
- Google Drive (OAuth)
- Google Sheets (OAuth)
- OpenAI API key
- AtlasCloud API key

Required configuration:
- Replace all `<PLACEHOLDER_VALUE>` fields: Google Drive folder IDs, Google Sheets document ID and sheet name, AtlasCloud API key
- Ensure Google Drive folders have write permissions
- Confirm tmpfiles.org is reachable from your environment

Important notes:
- The workflow expects exactly 3 images
- The final image is downloaded as binary before upload
- Public URLs are normalized to https://tmpfiles.org/dl/... for maximum AI compatibility

🎥 Watch This Tutorial

👋 Need help or want to customize this?
📩 Contact: LinkedIn
📺 YouTube: @DRFIRASS
🚀 Workshops: Mes Ateliers n8n

Need help customizing? Contact me for consulting and support: LinkedIn / YouTube / 🚀 Mes Ateliers n8n
by Cyrille
🚀 X-Ray: Your AI Crypto Intelligence & Social Agent

Stop drowning in crypto noise. X-Ray is a high-performance "human-in-the-loop" workflow that monitors the market 24/7, filters for high-impact narratives (like RWA), and prepares viral tweet drafts for your review.

🌟 Why use this?
- **Zero Noise:** Focus only on your specific niche (RWA, DeFi, AI).
- **Deep Context:** X-Ray researches full articles before writing, avoiding generic AI fluff.
- **Safe Automation:** 100% control via a Gmail-based approval system.
- **Archive Ready:** Builds your content database in Google Sheets automatically.

⚙️ How it Works
1. **Market Intelligence:** Fetches real-time news via CryptoPanic.
2. **Narrative Filtering:** AI identifies headlines matching your niche.
3. **Autonomous Research:** Uses Google Search to extract full source content.
4. **Creative Drafting:** Our Tweet Architect (GPT-4o) writes punchy, viral drafts.
5. **Review Pipeline:** Drafts are sent to Gmail and saved to Google Sheets.

🔑 Quick Setup
1. **Credentials:**
   - **OpenAI API Key** (GPT-4o recommended).
   - **CryptoPanic API:** Get your token here.
   - **Google Custom Search:** Enable the API in Cloud Console and create a Search Engine (CX ID).
2. **Google Sheets:** Create a sheet with headers: Date, Topic, Draft Tweet, Source URL.
3. **Customization:**
   - **Niche:** Edit the **Narrative Analyst** prompt to change keywords.
   - **Tone:** Adjust the **Tweet Architect** to match your personal brand.

🛠️ Pro Tips
- **Auto-Post:** Swap Gmail for the **X (Twitter)** node for full automation.
- **Multi-Channel:** Add **Telegram** or **Slack** nodes for team alerts.
- **Sentiment:** Use AI to label news as **Bullish/Bearish** for better hooks.

📞 Contact & Support
Need help or custom automation?
🌐 Website: www.cadaero.ovh
📧 Email: contact@cadaero.com

Happy Automating! — Cyrille d'Urbal (Cadaero)
by Yehor EGMS
The original LLM Council concept was introduced by Andrej Karpathy and published as an open-source repository demonstrating multi-model consensus and ranking. This workflow is my adaptation of that original idea, reimplemented and structured as a production-ready n8n template.

Original repository: https://github.com/karpathy/llm-council

This n8n template implements the LLM Council pattern: a single user question is processed in parallel by multiple large language models, independently evaluated by peer models, and then synthesized into one high-quality, consensus-driven final answer. It is designed for use cases where answer quality, balance, and reduced single-model bias are critical.

📌 Section 1: Trigger & Input

⚡ When Chat Message Received (Chat Trigger)
Purpose: Receives a user's message and initiates the entire workflow.
How it works:
- A user sends a chat message
- The message is stored as the Original Question
- The same input is forwarded simultaneously to multiple LLM pipelines
Why it matters: Provides a clean, unified entry point for all downstream multi-model logic.

📌 Section 2: Stage 1 — Parallel LLM Responses

🤖 Basic LLM Chains (x4)
Models used:
- Anthropic Claude
- OpenAI GPT
- xAI Grok
- Google Gemini
Purpose: Each model independently generates its own response to the same question.
Key characteristics:
- Identical prompt structure for all models
- Independent reasoning paths
- No shared context between models
Why it matters: Produces diverse perspectives, reasoning styles, and solution approaches.

📌 Section 3: Stage 2 — Response Anonymization

🧾 Set Nodes (Response A / B / C / D)
Purpose: Stores model outputs in an anonymized format: Response A, Response B, Response C, Response D.
Why it matters: Prevents evaluator models from knowing which LLM authored which response, reducing bias during evaluation.
📌 Section 4: Stage 3 — Peer Evaluation & Ranking

📊 Evaluation Chains (Claude / GPT / Grok / Gemini)
Purpose: Each model acts as a reviewer and:
- Analyzes all four anonymized responses
- Describes strengths and weaknesses of each
- Produces a strict FINAL RANKING from best to worst

Ranking format (strict):
FINAL RANKING:
1. Response B
2. Response A
3. Response D
4. Response C

Why it matters: Creates multiple independent quality assessments from different model perspectives.

📌 Section 5: Stage 4 — Ranking Aggregation

🧮 Code Node (JavaScript)
Purpose: Aggregates all peer rankings by:
- Parsing ranking positions
- Calculating the average position per response
- Counting evaluation occurrences
- Sorting responses by best average score

Output includes:
- Aggregated rankings
- Best response label
- Best average score

Why it matters: Transforms subjective rankings into a structured, quantitative consensus.

📌 Section 6: Stage 5 — Final Consensus Answer

🧠 Chairman LLM Chain
Purpose: One model acts as the Council Chairman and:
- Reviews all original responses
- Considers peer rankings and aggregated scores
- Identifies consensus patterns and disagreements
- Produces a single, clear, high-quality final answer

Why it matters: Delivers a refined response that reflects collective model intelligence rather than a simple average.
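The Stage-4 aggregation Code node can be sketched roughly as follows. The input shape is an assumption: one ordered label list per evaluator, already parsed out of the FINAL RANKING blocks.

```javascript
// Aggregate peer rankings: average each response's position across evaluators
// (1 = best) and sort by best average score.
function aggregateRankings(rankings) {
  const totals = {}; // label -> { sum of positions, evaluation count }
  for (const order of rankings) {
    order.forEach((label, i) => {
      totals[label] = totals[label] || { sum: 0, count: 0 };
      totals[label].sum += i + 1;
      totals[label].count += 1;
    });
  }
  const aggregated = Object.entries(totals)
    .map(([label, { sum, count }]) => ({ label, avg: sum / count, count }))
    .sort((a, b) => a.avg - b.avg);
  return { aggregated, best: aggregated[0].label, bestAvg: aggregated[0].avg };
}
```

The Chairman chain then receives `aggregated` alongside the original responses to synthesize the final answer.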
📊 Workflow Overview

| Stage | Node / Logic | Purpose |
|-------|--------------|---------|
| 1 | Chat Trigger | Receive user question |
| 2 | LLM Chains | Generate independent responses |
| 3 | Set Nodes | Anonymize outputs |
| 4 | Evaluation Chains | Peer review & ranking |
| 5 | Code Node | Aggregate rankings |
| 6 | Chairman LLM | Final synthesized answer |

🎯 Key Benefits
- 🧠 Multi-model intelligence — avoids reliance on a single LLM
- ⚖️ Reduced bias — anonymized peer evaluation
- 📊 Quality-driven selection — ranking-based consensus
- 🔁 Modular architecture — easy to add or replace models
- 🌍 Language-flexible — input and output languages configurable
- 🧩 Production-ready logic — clear stages, deterministic ranking

🚀 Ideal Use Cases
- High-stakes decision support
- Complex technical or architectural questions
- Strategy and research synthesis
- AI assistants requiring higher trust and reliability
- Comparing and selecting the best LLM-generated answers