by nXsi
This n8n template monitors your self-hosted apps for new releases across GitHub and container registries, uses Claude AI to analyze changelogs, and delivers color-coded update digests to Discord, Telegram, Slack, and ntfy. Stop finding out about updates after something breaks: Claude reads every changelog and tells you exactly what changed, what might break, and how urgent the update is — with a ready-to-run Docker update command for each release.

## Good to know

- Estimated cost is $0.01–0.03 per daily run using Claude Haiku. See Anthropic pricing for current rates.
- PostgreSQL logging is included but optional — the core workflow runs without a database using n8n's built-in static data for version tracking.
- Test mode is ON by default. Your first run pushes a sample release through the full pipeline (AI analysis, formatting, delivery) so you can verify everything works before going live.

## How it works

- A schedule trigger runs daily at 8 AM and builds a watchlist from your manually configured repos plus optional docker-compose auto-detection.
- It checks the GitHub Releases API for repos and the Docker Hub/GHCR tag APIs for container images. Pre-releases and RC/beta tags are automatically filtered out.
- Each release is compared against the last known version stored in workflow static data to detect what's genuinely new.
- Claude Haiku reads each changelog and returns a structured summary with breaking-change detection, CVE/security scanning, migration steps, and urgency classification (critical/recommended/optional).
- Alert rules route critical and security updates to instant alerts, while everything else batches into a single daily digest sorted by urgency.
- Formatted messages are delivered to your enabled channels with color-coded embeds, Docker update commands, and links to full release notes.
- Release history is optionally logged to PostgreSQL for tracking update timelines.

## How to use

- Add your Anthropic API key as an n8n credential, open the Configure Watcher node to set your channel URLs, and edit the Build Repo Watchlist node to add your repos — that's the minimum to get running.
- Click "Test workflow" to push a sample release through the full pipeline and verify delivery.
- Set test_mode to false and toggle Active — the workflow checks daily and only alerts when new releases are found.

## Requirements

- Anthropic API key (setup guide)
- At least one delivery channel: Discord webhook (setup guide), Telegram bot, Slack app, or ntfy topic
- Optional: GitHub Personal Access Token for higher API rate limits (setup guide)
- Optional: PostgreSQL for release history logging

## Customizing this workflow

- Add or remove repos and container registries in the Build Repo Watchlist code node — pre-loaded with 10 popular self-hosted apps.
- Enable docker-compose auto-detection to automatically build your watchlist from a compose file URL or pasted content.
- Set per-repo alert rules including urgency overrides, instant alert flags, and channel routing.
- Adjust the schedule, swap delivery channels, or add additional registries.
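Two of the steps above — filtering out pre-release/RC/beta tags and comparing a fetched tag against the last known version in workflow static data — can be sketched in a small code node. This is a minimal illustration, not the template's exact code: the regex and field names are assumptions.

```javascript
// Heuristic pre-release filter: reject tags ending in alpha/beta/rc/etc.
const PRERELEASE = /(alpha|beta|rc|dev|nightly|pre)[-.\d]*$/i;

function isStableTag(tag) {
  return !PRERELEASE.test(tag);
}

// Compare against the per-repo version remembered from the previous run
// (in n8n this object would come from workflow static data).
function isNewRelease(tag, staticData, repo) {
  const last = staticData[repo]; // e.g. "v1.4.2"
  if (tag === last) return false;
  staticData[repo] = tag;        // remember for the next run
  return true;
}
```

A tag like `v2.1.0-rc.1` is skipped, while `v2.1.0` passes through and is flagged as new only when it differs from the stored version.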
by Chandan Singh
This workflow synchronizes MySQL database table schemas with a vector database in a controlled, idempotent manner. Each database table is indexed as a single vector to preserve complete schema context for AI-based retrieval and reasoning. The workflow prevents duplicate vectors and automatically handles schema changes by detecting differences and re-indexing only when required.

## How it works

- The workflow starts with a manual trigger and loads global configuration values.
- All database tables are discovered and processed one by one inside a loop.
- For each table, a normalized schema representation is generated, and a deterministic hash is calculated.
- A metadata table is checked to determine whether a vector already exists for the table.
- If a vector exists, the stored schema hash is compared with the current hash to detect schema changes.
- When a schema change is detected, the existing vector and metadata are deleted.
- The updated table schema is embedded as a single vector (without chunking) and upserted into the vector database.
- Vector identifiers and schema hashes are persisted for future executions.

## Setup steps

1. Set the MySQL database name using mysql_database_name.
2. Configure the Pinecone index name using pinecone_index.
3. Set the vector namespace using vector_namespace.
4. Configure the Pinecone index host using vector_index_host.
5. Add your Pinecone API key using pinecone_apikey.
6. Select the embedding model using embedding_model.
7. Configure text processing options: chunk_size and chunk_overlap.
8. Set the metadata table identifier using dataTable_Id.
9. Save and run the workflow manually to perform the initial schema synchronization.

## Limitations

- This workflow indexes database table schemas only. Table data (rows) are not embedded or indexed.
- Each table is stored as a single vector. Very large or highly complex schemas may approach model token limits depending on the selected embedding model.
- Schema changes are detected using a hash-based comparison. Non-structural changes that do not affect the schema representation will not trigger re-indexing.
by sato rio
This workflow streamlines the entire inventory replenishment process by leveraging AI for demand forecasting and intelligent logic for supplier selection. It aggregates data from multiple sources — POS systems, weather forecasts, SNS trends, and historical sales — to predict future demand. Based on these predictions, it calculates shortages, requests quotes from multiple suppliers, selects the optimal vendor based on cost and lead time, and executes the order automatically.

## 🚀 Who is this for?

- **Retail & E-commerce Managers** aiming to minimize stockouts and reduce overstock.
- **Supply Chain Operations** looking to automate procurement and vendor selection.
- **Data Analysts** wanting to integrate external factors (weather, trends) into inventory planning.

## 💡 How it works

1. **Data Aggregation:** Fetches data from POS systems, MySQL (historical sales), OpenWeatherMap (weather), and SNS trend APIs.
2. **AI Forecasting:** Formats the data and sends it to an AI prediction API to forecast demand for the next 7 days.
3. **Shortage Calculation:** Compares the forecast against current stock and safety stock to determine necessary order quantities.
4. **Supplier Optimization:** For items needing replenishment, the workflow requests quotes from multiple suppliers (A, B, C) in parallel. It selects the best supplier based on the lowest total cost within a 7-day lead time.
5. **Execution & Logging:** Places the order via API, updates the inventory system, and logs the transaction to MySQL.
6. **Anomaly Detection:** If the AI's confidence score is low, it skips the auto-order and sends an alert to Slack for manual review.

## ⚙️ Setup steps

1. **Configure Credentials:** Set up credentials for MySQL and Slack in n8n.
2. **API Keys:** You will need an API key for OpenWeatherMap (or a similar service).
3. **Update Endpoints:** The HTTP Request nodes use placeholder URLs (e.g., pos-api.example.com, ai-prediction-api.example.com). Replace these with your actual internal APIs, ERP endpoints, or AI service (like OpenAI).
4. **Database Prep:** Ensure your MySQL database has a table named forecast_order_log to store the order history.
5. **Schedule:** The workflow is set to run daily at 03:00. Adjust the Schedule Trigger node as needed.

## 📋 Requirements

- **n8n** (Self-hosted or Cloud)
- **MySQL** database
- **Slack** workspace
- External APIs for POS, Inventory, and Supplier communication (or mock endpoints for testing)
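The shortage-calculation and supplier-selection logic described above can be sketched in a few lines. The field names (forecast7d, currentStock, safetyStock, leadTimeDays, unitPrice, shippingFee) are assumptions for illustration, not the workflow's exact schema.

```javascript
// Order enough to cover forecast demand plus safety stock, minus what's on hand.
function shortage(item) {
  const needed = item.forecast7d + item.safetyStock - item.currentStock;
  return Math.max(0, needed); // never order a negative quantity
}

// Among suppliers that can deliver within the lead-time window (7 days in the
// template), pick the lowest total cost for the required quantity.
function bestSupplier(quotes, qty, maxLeadDays = 7) {
  const eligible = quotes.filter(q => q.leadTimeDays <= maxLeadDays);
  if (eligible.length === 0) return null; // nobody can deliver in time
  const cost = q => q.unitPrice * qty + (q.shippingFee || 0);
  return eligible.reduce((best, q) => (cost(q) < cost(best) ? q : best));
}
```

A supplier with a cheaper unit price but a 10-day lead time would be excluded, matching the "lowest total cost within a 7-day lead time" rule.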
by Mohamed Abdelwahab
An end-to-end Retrieval-Augmented Generation (RAG) customer support workflow for n8n, using a cache-first strategy (LangCache) combined with a Redis vector store powered by OpenAI embeddings. This template is designed for fast, accurate, and cost-efficient customer support chatbots, internal help desks, and knowledge-base assistants.

## Overview

This workflow implements a production-ready RAG architecture optimized for customer support use cases. Incoming chat messages are processed through a structured pipeline that prioritizes cached answers, falls back to semantic vector search when needed, and validates response quality before returning a final answer.

The workflow supports:

- Multi-question user inputs
- Intelligent query decomposition
- Cache reuse to reduce latency and cost
- High-precision retrieval from a Redis vector database
- Quality evaluation and controlled retries
- Final answer synthesis into a single, coherent response

## Key Features

- **Chat-based RAG pipeline** using n8n's Chat Trigger
- **Query decomposition** for multi-topic questions
- **LangCache integration** (search + save)
- **Redis Vector Store** for semantic retrieval
- **OpenAI embeddings and chat models**
- **Quality scoring** with retry logic
- **Session memory buffers** for contextual continuity
- **Fallback-safe behavior** (no hallucinations)

## How the Workflow Works

### 1. Chat Trigger
The workflow starts when a new chat message is received.

### 2. Configuration Setup
A centralized configuration node defines:
- LangCache base URL
- Cache ID
- Similarity threshold (default: 0.75)
- Maximum retrieval iterations (default: 2)

### 3. Query Decomposition
The user message is analyzed and decomposed into:
- A single focused question, or
- Multiple independent sub-questions

This improves retrieval accuracy and cache reuse.

### 4. Cache-First Retrieval
Each sub-question is processed independently:
- The workflow first searches LangCache
- If a high-similarity cached answer is found, it is reused immediately

### 5. Vector Retrieval (Cache Miss)
If no cache hit exists:
- The query is embedded using OpenAI embeddings
- A semantic search is executed against the Redis vector index
- Retrieved knowledge-base documents are passed to a research-only agent

### 6. Knowledge-Only Answering
The research agent:
- Answers strictly from the retrieved knowledge
- Returns "no info found" if no relevant data exists

### 7. Quality Evaluation
Each generated answer is evaluated by a dedicated quality-check node:
- Outputs a numerical SCORE (0.0 – 1.0)
- Provides textual feedback
- Low scores can trigger limited retries

### 8. Cache Update
High-quality answers are saved back to LangCache for future reuse.

### 9. Aggregation & Synthesis
All sub-answers are aggregated and synthesized into:
- One final, user-facing response, or
- A polite fallback message if information is insufficient

## Main Nodes & Responsibilities

- **When Chat Message Received** — Entry point for user messages
- **LangCache Config** — Centralized configuration values
- **Decompose Query (LangChain Agent)** — Splits complex queries
- **Structured Output Parser** — Ensures valid JSON output
- **Search LangCache** — Cache lookup via HTTP
- **Redis Vector Store** — Semantic retrieval from Redis
- **Embeddings OpenAI** — Vector generation
- **Research Agent** — KB-only answering (no hallucinations)
- **Quality Evaluator** — Scores answer relevance
- **Save to LangCache** — Stores validated answers
- **Memory Buffers** — Session context handling
- **Response Synthesizer** — Final message generation

## Setup Instructions

### 1. Configure Credentials
Create the following credentials in n8n:
- **OpenAI API**
- **Redis**
- **HTTP Bearer Auth** (for LangCache)

### 2. Prepare the Knowledge Base
- Embed your documents using OpenAI embeddings
- Insert them into the configured Redis vector index
- Ensure documents are concise and well-structured

### 3. Configure LangCache
Update the configuration node with:
- langcacheBaseUrl
- langcacheCacheId
- Optional tuning for similarity threshold and iterations

### 4. Test the Workflow
- Use the example data loader or schedule trigger
- Send test chat messages
- Validate cache hits, vector retrieval, and final responses

## Recommended Tuning

- **Similarity Threshold:** 0.7 – 0.85
- **Max Iterations:** 1 – 3
- **Quality Score Cutoff:** 0.7
- **Model Choice:** Use faster models for low latency, stronger models for accuracy
- **Cache Policy:** Cache only high-confidence answers

## Security & Compliance Notes

- Store API keys securely using n8n credentials
- Avoid caching sensitive or personally identifiable information
- Apply least-privilege access to Redis and LangCache
- Consider logging cache writes for audit purposes

## Common Use Cases

- Customer support chatbots
- Internal help desks
- Knowledge-base assistants
- Self-service support portals
- AI-powered FAQ systems

## Template Metadata (Recommended)

- **Template Name:** AI Customer Support — Redis RAG (LangCache + OpenAI)
- **Category:** Customer Support / AI / RAG
- **Tags:** customer-support, RAG, knowledge-base, redis, openai, langcache, chatbot, n8n-template
- **Difficulty Level:** Intermediate
- **Required Integrations:** OpenAI, Redis, LangCache
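The cache-first decision at the heart of the pipeline — reuse a cached answer only when its similarity clears the configured threshold, otherwise fall through to Redis vector retrieval — can be sketched as a pure helper. The field names (similarity, response) are assumptions about the LangCache search result shape, not a documented API.

```javascript
// Given the best cache hit for a sub-question and the configured threshold
// (default 0.75 per the configuration node), decide whether to reuse the
// cached answer or trigger vector retrieval.
function cacheDecision(bestHit, similarityThreshold = 0.75) {
  if (bestHit && bestHit.similarity >= similarityThreshold) {
    return { useCache: true, answer: bestHit.response };
  }
  return { useCache: false, answer: null }; // fall through to vector search
}
```

Tuning the threshold trades latency/cost against precision: a lower threshold reuses more cached answers but risks serving a near-miss; the 0.7–0.85 range recommended above is a reasonable band.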
by nXsi
This n8n template builds an automated daily news digest powered by Claude AI. It monitors RSS feeds, Reddit, and Hacker News, extracts full article text, analyzes each piece with AI, and delivers a polished briefing to Discord and Slack. Stop drowning in newsletters: Claude reads everything and surfaces only what matters, scored and ranked by importance.

## Good to know

- Estimated cost is $0.03–0.10 per daily run using Claude Haiku + Sonnet. See Anthropic pricing for current rates.
- Works without a database out of the box. Optionally enable PostgreSQL for article history and cross-day deduplication.

## How it works

- A schedule trigger fires daily and fetches articles from 10 configurable sources (RSS, Atom, Reddit JSON, Hacker News API).
- Articles are deduplicated by URL hash and fuzzy title matching.
- Jina Reader extracts full article text for deeper analysis.
- Claude Haiku scores each article 1–10 for importance, assigns categories, and writes a "why it matters" summary.
- Claude Sonnet compiles the top articles into a structured digest with lead story, top stories, quick hits, and trend detection.
- Formatted output is delivered to Discord (rich embeds) and Slack (Block Kit).

## How to use

- Add your Anthropic API key as an n8n credential and set your Discord webhook URL in the config node — that's the minimum to get running.
- Edit the feed list in "Build feed source list" to add your own sources.

## Requirements

- Anthropic API key (setup guide)
- Discord webhook URL (setup guide) and/or Slack credential

## Customizing this workflow

- Swap feed sources for any topic: finance, gaming, research papers, industry news.
- Adjust topic importance weights to prioritize what you care about.
- Modify the Claude system prompt to change the digest's tone and style.
by Jorge Martínez
Automating WhatsApp replies in GoHighLevel with Redis and Anthropic

## Description

Integrates GHL + Wazzap with Redis and an AI Agent using ClientInfo to process messages, generate accurate replies, and send them via a custom field trigger.

## Who's it for

This workflow is for businesses using GoHighLevel (GHL), including the Wazzap plugin for WhatsApp, who want to automate inbound SMS/WhatsApp replies with AI. It's ideal for teams that need accurate, data-driven responses from a predefined ClientInfo source and want to send them back to customers without paying for extra inbound automations.

## How it works / What it does

1. Receive the message in n8n via a Webhook from GHL (Customer Replied (SMS) automation). WhatsApp messages arrive the same way using the Wazzap plugin.
2. Filter the message type:
   - If audio → skip processing and send a fallback asking for text.
   - If text → sanitize by fixing escaped quotes, escaping line breaks/carriage returns/tabs, and removing invalid fields.
3. Buffer messages in Redis to group multiple messages sent in a short window.
4. Run the AI Agent using the ClientInfo tool to answer only with accurate service/branch data.
5. Sanitize the AI output before sending it back.
6. Update the GHL contact custom field (IA_answer) with the AI's response.
7. Send the SMS reply automatically via GHL's outbound automation, triggered by the updated custom field.

## How to set up

1. In GHL, create:
   - Inbound automation: trigger on Customer Replied (SMS) → send to your n8n Webhook.
   - Outbound automation: trigger when IA_answer is updated → send an SMS to the contact.
2. Create a custom field named IA_answer.
3. Connect Wazzap in GHL to handle WhatsApp messages.
4. Configure Redis in n8n (host, port, DB index, password).
5. Add your AI model credentials (Anthropic, OpenAI, etc.) in n8n.
6. (Optional) Set up the Google Drive Excel Merge sub-workflow to enrich ClientInfo with external data.

## Requirements

- **GoHighLevel sub-account API key**
- **Anthropic (Claude)** API key or another supported LLM provider
- **Redis database** for temporary message storage
- GHL automations: one for inbound messages to n8n, one for outbound replies when **IA_answer** is updated
- GHL custom field **IA_answer** to store and trigger replies
- **Wazzap plugin** in GHL for WhatsApp message handling

## How to customize the workflow

- Add more context or business-specific data to the AI Agent prompt so replies match your brand tone and policies.
- Expand the ClientInfo dataset with additional services, branches, or product details.
- Adjust the Redis wait time to control how long the workflow buffers messages before replying.
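The text-sanitization step — fixing escaped quotes and escaping line breaks, carriage returns, and tabs — can be sketched as a small code-node helper. This assumes the inbound GHL payload arrives as a JSON-ish string; the template's exact handling may differ.

```javascript
// Normalize an inbound message so it is safe to pass through JSON payloads
// downstream (AI Agent prompt, GHL custom-field update).
function sanitizeMessage(text) {
  return text
    .replace(/\\"/g, '"')   // fix quotes that arrived pre-escaped
    .replace(/\r/g, '\\r')  // escape carriage returns
    .replace(/\n/g, '\\n')  // escape line breaks
    .replace(/\t/g, '\\t'); // escape tabs
}
```

The same helper would be applied twice in the flow: once on the inbound customer message and once on the AI output before writing it into the IA_answer field.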
by Jamot
## How it works

Your WhatsApp AI Assistant automatically handles customer inquiries by linking your Google Docs knowledge base to incoming WhatsApp messages. The system instantly processes customer questions, references your business documentation, and delivers AI-powered responses through OpenAI or Gemini — all without you lifting a finger. It works seamlessly in individual chats and WhatsApp groups, where the assistant can respond on your behalf.

## Set up steps

Time to complete: 15–30 minutes

1. Create your WhapAround account and connect your WhatsApp number (5 minutes)
2. Prepare your Google Doc with business information and add the document ID to the system (5 minutes)
3. Configure the WhatsApp webhook and map message fields (10 minutes)
4. Connect your OpenAI or Gemini API key (3 minutes)
5. Send a test message to verify everything works (2 minutes)

Optional: Set up a PostgreSQL database for conversation memory and configure custom branding/escalation rules (additional 15–20 minutes).

Detailed technical configurations, webhook URLs, and API parameter settings are provided within each workflow step to guide you through the exact setup process.
by Jimleuk
On my never-ending quest to find the best embeddings model, I was intrigued to come across Voyage-Context-3 by MongoDB and was excited to give it a try. This template runs the embedding model over an Arxiv research paper and stores the results in a vector store. It was only fitting to use MongoDB Atlas from the same parent company. The template also includes a RAG-based Q&A agent which taps into the vector store, as a test to help qualify whether the embeddings are any good and whether the difference is even noticeable.

## How it works

This template is split into 2 parts. The first part imports a research document, which is then chunked and embedded into our vector store. The second part builds a RAG-based Q&A agent to test vector store retrieval on the research paper. Read the steps for more details.

## How to use

- First ensure you create a Voyage account at voyageai.com and have a MongoDB database ready.
- Start with Step 1: fill in the "Set Variables" node and click the Manual Execute Trigger. This populates the vector store with the research paper.
- To use the Q&A agent, you must publish the workflow to access the public chat interface. This is because "Respond to Chat" works best in this mode and not in editor mode.
- To use your own document, edit the "Set Variables" node to define the URL to your document. This embeddings approach should work best on larger documents.

## Requirements

- Voyageai.com account for embeddings. You may need to add credit to get a reasonable RPM for this workflow.
- MongoDB database, either self-hosted or online at https://www.mongodb.com.
- OpenAI account for the RAG Q&A agent.

## Customising this workflow

- The Voyage embeddings work with any vector store, so feel free to swap in others such as Qdrant or Pinecone if you're not a fan of MongoDB Atlas.
- If you're feeling brave, instead of the 3-sequential-pages setup I have, why not try the whole document? Fair warning that you may hit memory problems if your instance isn't sufficiently sized - but if it is, go ahead and share the results!
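The chunking step before embedding can be sketched as a plain character splitter with overlap. This is a minimal illustration only — the template's actual splitter, chunk size, and overlap may well differ, and Voyage's contextualized embeddings have their own recommendations.

```javascript
// Split a document into fixed-size chunks with a small overlap so that
// sentences straddling a boundary appear in both neighbouring chunks.
function chunkText(text, chunkSize = 2000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

For a 5,000-character page with these defaults you get three chunks, each sharing 200 characters with its predecessor; larger documents simply yield more chunks.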
by Mantaka Mahir
## How it works

This workflow automates converting Google Drive documents into searchable vector embeddings for AI-powered applications:

- Takes a Google Drive folder URL as input
- Initializes a Supabase vector database with the pgvector extension
- Fetches all files from the specified Drive folder
- Downloads and converts each file to plain text
- Generates 768-dimensional embeddings using Google Gemini
- Stores documents with embeddings in Supabase for semantic search

Built for the Study Agent workflow to power document-based Q&A, but it also works for any RAG system, AI chatbot, knowledge base, or semantic search application that needs to query document collections.

## Set up steps

Prerequisites:

- Google Drive OAuth2 credentials
- Supabase account with Postgres connection details
- Google Gemini API key (free tier available)

Setup time: ~10 minutes

1. Add your Google Drive OAuth2 credentials to the Google Drive nodes
2. Configure Supabase Postgres credentials in the SQL node
3. Add Supabase API credentials to the Vector Store node
4. Add the Google Gemini API key to the Embeddings node
5. Update the input with your Drive folder URL
6. Execute the workflow

Note: The SQL query will drop any existing "documents" table, so back up data if needed. Detailed node-by-node instructions are in the sticky notes within the workflow.

Works with: Study Agent (main use case), custom AI agents, chatbots, documentation search, customer support bots, or any RAG application.
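The "folder URL as input" step implies extracting the folder ID before calling the Drive API. A small helper like the following would do it — the URL shape is an assumption (Drive shared-link formats vary), and the template may handle this differently.

```javascript
// Pull the folder ID out of a shared-folder URL such as
// https://drive.google.com/drive/folders/<ID>?usp=sharing
function driveFolderId(url) {
  const m = url.match(/\/folders\/([A-Za-z0-9_-]+)/);
  return m ? m[1] : null; // null signals an unrecognized link format
}
```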
by Oneclick AI Squad
This n8n workflow runs daily to analyze active customer behavior, engineer relevant features from usage and transaction data, apply a machine learning or AI-based model to predict churn probability, classify risk levels, trigger retention actions for at-risk customers, store predictions for tracking, and notify the relevant teams.

## Key Insights

- Prediction accuracy heavily depends on feature quality — ensure login frequency, spend trends, support interactions, and engagement metrics are consistently captured and up to date.
- Start with simple rule-based scoring or AI prompting (e.g., OpenAI/Claude) before integrating full ML models, for easier testing and faster value.
- High-risk thresholds (e.g., >70%) should be tuned against your actual churn data to avoid alert fatigue or missed opportunities.

## Workflow Process

1. Initiate the workflow with the Daily Schedule Trigger node (runs every day at 2 AM).
2. Query the customer database to fetch active user profiles, recent activity logs, login history, transaction records, and support ticket data.
3. Perform feature engineering: calculate metrics such as login frequency (daily/weekly), average spend, spend velocity, days since last activity, number of support tickets, NPS/sentiment if available, and other engagement signals.
4. Feed the engineered features into the prediction step: call an ML model endpoint, run a Python code node with a lightweight model, or use an AI agent/LLM to estimate churn probability (0–100%).
5. Classify each customer into risk tiers — HIGH RISK, MEDIUM RISK, or LOW RISK — based on configurable probability thresholds.
6. For at-risk customers (especially HIGH), trigger retention actions: create personalized campaigns, add to nurture sequences, generate discount codes, or create tasks in the CRM.
7. Store predictions, risk scores, features, and actions taken in an analytics database for historical tracking and model improvement.
8. Send summarized alerts (e.g., a list of high-risk customers with scores and recommended actions) via Email and/or Slack to customer success or retention teams.

## Usage Guide

- Import the workflow into n8n and configure credentials for your customer database (PostgreSQL/MySQL), ML API (if external), analytics DB, Slack webhook, SMTP/email, and CRM/retention platform.
- Define feature extraction queries and thresholds carefully in the relevant nodes — test with a small customer subset first.
- If using an AI/LLM for prediction, refine the prompt to include clear examples of churn signals.
- Run manually via the Execute workflow button with sample data to validate data flow, scoring logic, and notifications.
- Once confident, activate the daily schedule.

## Prerequisites

- Customer database with readable tables for users, activity logs, transactions, and support interactions
- ML integration option: an external ML API endpoint, a Python code node with scikit-learn or a simple model, or an LLM node (OpenAI, Claude, etc.) for probabilistic scoring
- Separate analytics database (or the same DB) with a table ready for churn predictions (customer_id, date, churn_prob, risk_level, etc.)
- SMTP credentials or an email service for alerts
- Slack webhook URL (optional but recommended for team notifications)
- CRM or marketing automation API access (e.g., HubSpot, ActiveCampaign, Klaviyo) for creating retention campaigns/tasks

## Customization Options

- Adjust the daily trigger time, or make it hourly for near real-time monitoring of high-value accounts.
- Change risk classification thresholds or add more tiers in the scoring logic node.
- Enhance the prediction step: switch from LLM-based scoring to a trained ML model (via Hugging Face, a custom endpoint, or a Code node).
- Personalize retention actions: use AI to generate custom email content/offers based on the customer's behavior profile.
- Add filtering (e.g., only high-value customers above a certain MRR) to focus retention efforts.
- Extend notifications: integrate with Microsoft Teams or Discord, or create tickets in Zendesk/Jira for follow-up.
- Build a feedback loop: after actual churn occurs, update a training dataset or adjust weights/rules in future runs.
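The feature-engineering and risk-tier steps can be sketched as follows. The tier thresholds (70/40) echo the ">70% = high risk" example from the description; the input field names are illustrative, not the workflow's exact schema.

```javascript
// Derive a few of the engagement features named in the process above.
// `now` is injectable so the computation is deterministic and testable.
function engineerFeatures(c, now = Date.now()) {
  return {
    loginFreqPerWeek: c.logins30d / (30 / 7),
    avgMonthlySpend: c.spend90d / 3,
    daysSinceActivity: Math.floor((now - c.lastActivityMs) / 86400000),
    supportTickets: c.tickets90d,
  };
}

// Map a churn probability (0–100) onto configurable risk tiers.
function riskTier(churnProb) {
  if (churnProb > 70) return 'HIGH RISK';
  if (churnProb > 40) return 'MEDIUM RISK';
  return 'LOW RISK';
}
```

Keeping tier cutoffs in one place makes the "change risk classification thresholds" customization a one-line edit.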
by Lukasz
Stop fighting alerts and start orchestrating intelligence. This workflow is a complete ecosystem designed to combat network threats in real time. It transforms raw DNS logs into structured knowledge, leveraging artificial intelligence to make decisions that previously required hours of manual work by a SOC analyst.

## Real-World Problems it Solves

- **Manual Threat Analysis:** Automates the process of verifying suspicious domains and IP addresses across multiple CTI sources simultaneously.
- **Security Credential Management:** Eliminates the risk of API key leaks through native integration with HashiCorp Vault.
- **Alert Fatigue:** Thanks to built-in filtering logic, the system only notifies you when the AI Threat Score exceeds 5 (Malicious/Critical).
- **Data Fragmentation:** Consolidates data from multiple CTI providers into a single, cohesive technical report.

## Core System Components

The workflow manages and communicates with the following elements of your infrastructure:

- **Traffic Capture:** Monitors passive DNS traffic to identify new Indicators of Compromise (IoCs).
- **Secret Engine:** HashiCorp Vault provides database credentials and API tokens dynamically during workflow execution.
- **Intelligence Layer:** Features three independent scanning branches: VirusTotal, Abuse_URLhaus, and Abuse_ThreatFox.
- **AI Brain:** Google Gemini AI acts as a "Senior Security Analyst," correlating data and generating verdicts in both English and Polish.
- **Automated Response:** An email notification system triggered exclusively for confirmed high-risk threats.

## Release v1.0.0 Highlights

This release (available at https://github.com/lukaszFD/cyber-sentinel/releases) marks the first fully stable, production-ready version of the system. Key features of this release:

- **Full Ansible Orchestration:** The entire stack — including Nginx, Vault, databases, and n8n — is deployed automatically using Ansible playbooks.
- **Infrastructure as Code (IaC):** Secure deployment based on Ansible Vault, requiring only the population of credentials and the presence of a .vault_pass file.
- **Production-Ready:** The system has been rigorously tested for stability in both Debian (Proxmox) and Raspberry Pi 5 environments.

Documentation: https://lukaszfd.github.io/cyber-sentinel/
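The consolidation and alert-fatigue filtering described above can be sketched as two small functions: merge the three CTI branch results per indicator, then keep only verdicts whose AI Threat Score exceeds 5. The field names (ioc, threatScore, sources) are assumptions for illustration, not the project's actual data model.

```javascript
// Merge per-branch scan results so each indicator of compromise (IoC)
// carries the evidence from every CTI source that reported it.
function consolidate(virusTotal, urlhaus, threatFox) {
  const byIoc = new Map();
  const branches = [['virustotal', virusTotal], ['urlhaus', urlhaus], ['threatfox', threatFox]];
  for (const [source, hits] of branches) {
    for (const hit of hits) {
      const entry = byIoc.get(hit.ioc) || { ioc: hit.ioc, sources: {} };
      entry.sources[source] = hit;
      byIoc.set(hit.ioc, entry);
    }
  }
  return [...byIoc.values()];
}

// Only Malicious/Critical verdicts (AI Threat Score > 5) trigger the email alert.
const highRiskOnly = verdicts => verdicts.filter(v => v.threatScore > 5);
```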
by Kevin Meneses
## How it works

This workflow takes a list of links from Google Sheets, visits each page, extracts the main text using Decodo, and creates a summary with the help of artificial intelligence. It helps you turn research articles or web pages into clear, structured insights you can reuse for your projects, content ideas, or newsletters.

Input: a Google Sheet named input with one column called url.

Output: another Google Sheet named output, where all the processed data is stored:

- **URL:** original article link
- **Title:** article title
- **Source:** website or domain
- **Published Date:** publication date (if found)
- **Main Topic:** main theme of the article
- **Key Ideas:** three main takeaways or insights
- **Summary:** short text summary
- **Text Type:** type of content (e.g., article, blog, research paper)

## Setup steps

1. Connect your Google Sheets account.
2. Add your links to the input sheet.
3. In the Decodo node, insert your API key.
4. Configure the AI model (for example, Gemini).
5. Run the workflow and check the results in the output sheet.
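The Source column (website or domain) can be derived directly from the URL column rather than asked of the AI. A one-line helper like the following would do it — this is an illustrative approach; the template may instead take the domain from Decodo's extraction metadata.

```javascript
// Derive the site/domain for the "Source" output column from the article URL,
// dropping a leading "www." for cleaner display.
const sourceFromUrl = url => new URL(url).hostname.replace(/^www\./, '');
```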