by Jonathan
This is the first of four workflows for a Mattermost Standup Bot. This workflow creates a default configuration file. You can set the default configuration in the Set node (Use Default Config). The values are:

- config.slashCmdToken: the token Mattermost provides when you create a new Slash Command
- config.mattermostBaseUrl: the base URL of your Mattermost instance
- config.botUserToken: the User token for your Mattermost bot
- config.n8nWebhookUrl: the URL of your "Action from MM" webhook in the "Standup Bot - Worker" workflow
- config.botUserId: the UserID of your Mattermost bot user

The config file is saved to /home/node/.n8n/standup-bot-config.json (see the sketch below). This workflow only needs to be run once, manually, as part of the setup.
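As a quick illustration, here is a sketch of what writing that config file might look like. The field names come from the description above; the placeholder values and the top-level config nesting are assumptions, not the template's actual code.

```typescript
// Sketch of the config object the "Use Default Config" Set node produces.
// Placeholder values and the top-level "config" key are assumptions.
import { writeFileSync } from "node:fs";

const config = {
  slashCmdToken: "mm-slash-command-token", // token Mattermost provides for the Slash Command
  mattermostBaseUrl: "https://mattermost.example.com",
  botUserToken: "mm-bot-user-token", // User token for the Mattermost bot
  n8nWebhookUrl: "https://n8n.example.com/webhook/action-from-mm", // "Action from MM" webhook
  botUserId: "bot-user-id", // UserID of the Mattermost bot user
};

// The workflow persists this to the n8n data directory.
writeFileSync(
  "/home/node/.n8n/standup-bot-config.json",
  JSON.stringify({ config }, null, 2),
);
```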
by Lorena
This workflow ensures gender-inclusive language in Mattermost channels. If someone addresses the group with “guys” or “gals”, a bot promptly replies with: "May I suggest “folks” or “y'all”? We use gender inclusive language here. 😄"

- **Webhook node**: triggers the workflow when a new message is posted in Mattermost.
- **IF node**: verifies whether the message includes the words "guys" or "gals". If false, it takes no action. If true, it triggers the Mattermost node.
- **Mattermost node**: posts the language warning message in the Mattermost channel.
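A minimal sketch of the check the IF node performs; the word-boundary regex is an assumption, since the template may use a simpler "contains" rule.

```typescript
// Stand-in for the IF node's condition: does the message use "guys" or "gals"?
const message = "Hey guys, the demo starts in five minutes!";

if (/\b(guys|gals)\b/i.test(message)) {
  // The Mattermost node would post the reminder to the channel.
  console.log(
    "May I suggest “folks” or “y'all”? We use gender inclusive language here. 😄",
  );
}
```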
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:

- Uploading (image) datasets to Qdrant
- Setting up meta-variables for anomaly detection in Qdrant
- Anomaly detection tool
- KNN classifier tool

For anomaly detection:
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up cluster (class) centres & cluster (class) threshold scores needed for anomaly detection.
3. The third is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly with respect to the uploaded dataset.

For KNN (k nearest neighbours) classification:
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both, you'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] KNN classification tool

This tool takes any image URL and returns the class of the object in the image, based on the dataset uploaded to Qdrant (lands). An image URL is received via the Execute Workflow Trigger and sent to the Voyage AI Multimodal Embeddings API to fetch its embedding. The image's embedding vector is then used to query Qdrant, returning a set of X similar images with pre-labeled classes. Majority voting is done over the classes of the neighbouring images. A loop resolves ties in the majority vote by increasing the number of neighbours to retrieve; when the loop finally resolves, the identified class is returned to the calling workflow (see the sketch below).
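Here is an illustrative sketch of that majority-voting loop. The fetchNeighbours callback stands in for the Qdrant similarity query, and all names and numbers are assumptions rather than the template's actual code.

```typescript
// Illustrative majority-voting loop; fetchNeighbours stands in for the
// Qdrant similarity query and is NOT the template's actual implementation.
type Neighbour = { label: string };

function classify(
  fetchNeighbours: (k: number) => Neighbour[],
  initialK = 5,
  maxK = 25,
): string {
  for (let k = initialK; k <= maxK; k += 2) {
    const votes = new Map<string, number>();
    for (const n of fetchNeighbours(k)) {
      votes.set(n.label, (votes.get(n.label) ?? 0) + 1);
    }
    const ranked = [...votes.entries()].sort((a, b) => b[1] - a[1]);
    // A unique winner resolves the loop; a tie retries with more neighbours.
    if (ranked.length === 1 || ranked[0][1] > ranked[1][1]) {
      return ranked[0][0];
    }
  }
  throw new Error("tie not resolved within maxK neighbours");
}

// Mocked usage: pretend Qdrant returned these pre-labelled neighbours.
const neighbours = [{ label: "forest" }, { label: "forest" }, { label: "water" }];
console.log(classify((k) => neighbours.slice(0, k))); // "forest"
```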
by Jenny
Vector Database as a Big Data Analysis Tool for AI Agents

Workflows from the webinar "Build production-ready AI Agents with Qdrant and n8n". This series of workflows shows how to build big data analysis tools for production-ready AI agents with the help of vector databases. These pipelines are adaptable to any dataset of images, and hence to many production use cases:

- Uploading (image) datasets to Qdrant
- Setting up meta-variables for anomaly detection in Qdrant
- Anomaly detection tool
- KNN classifier tool

For anomaly detection:
1. The first pipeline uploads an image dataset to Qdrant.
2. The second pipeline sets up cluster (class) centres & cluster (class) threshold scores needed for anomaly detection.
3. The third (this workflow) is the anomaly detection tool, which takes any image as input and uses all the preparatory work done with Qdrant to detect whether it is an anomaly with respect to the uploaded dataset.

For KNN (k nearest neighbours) classification:
1. The first pipeline uploads an image dataset to Qdrant.
2. The second is the KNN classifier tool, which takes any image as input and classifies it against the dataset uploaded to Qdrant.

To recreate both, you'll have to upload the crops and lands datasets from Kaggle to your own Google Storage bucket, and re-create the APIs/connections to Qdrant Cloud (you can use a Free Tier cluster), the Voyage AI API, and Google Cloud Storage.

[This workflow] Anomaly Detection Tool

This is the tool that can be used directly to detect anomalous images (crops). It takes any image URL as input and returns a text message saying whether whatever the image depicts is anomalous with respect to the crop dataset stored in Qdrant. An image URL is received via the Execute Workflow Trigger and used to generate an embedding vector with the Voyage AI Embeddings API. The returned vector is used to query the Qdrant collection to determine whether the given crop is known, by comparing it to the threshold scores of each image class (crop type). If the image scores lower than all thresholds, it is considered an anomaly for the dataset (see the sketch below).
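A minimal sketch of that final threshold comparison, assuming the workflow yields one similarity score and one threshold per crop class; names and values are illustrative.

```typescript
// Sketch of the final check, assuming one similarity score and one
// threshold per crop class. Names and numbers are illustrative.
type ClassCheck = { crop: string; similarity: number; threshold: number };

// The image is anomalous only if it scores below EVERY class threshold.
function isAnomaly(checks: ClassCheck[]): boolean {
  return checks.every((c) => c.similarity < c.threshold);
}

const checks: ClassCheck[] = [
  { crop: "wheat", similarity: 0.41, threshold: 0.62 },
  { crop: "maize", similarity: 0.38, threshold: 0.59 },
];
console.log(isAnomaly(checks) ? "anomalous to the crop dataset" : "known crop");
```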
by Mohsin Ali
1. Document Ingestion & Processing
Google Drive Trigger monitors for new files → Loop Over Items processes each file → File Info extracts metadata → Google Drive downloads the actual content → Switch routes to the appropriate extractor (PDF or TEXT) based on file type

2. Content Transformation & Chunking
Document Data node processes extracted text → Recursive Splitter breaks content into contextual chunks → Chunk Splitting applies intelligent segmentation while preserving document context and the relationships between chunks (a chunking sketch follows below)

3. Embedding & Storage
Basic LLM Chain processes chunks → OpenAI Chat Model generates contextual understanding → Summarize creates document summaries → Supabase Vector Store saves embeddings with metadata → Embeddings OpenAI creates vector representations → Default Data Loader handles storage operations

4. Query Processing & Retrieval
When Clicking Execute triggers user queries → OpenAI processes and understands the question → AI Agent orchestrates hybrid search (combining vector similarity + keyword matching) → Google Gemini Chat Model generates final responses using retrieved context → HTTP Request handles additional external data sources
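As a rough illustration of the chunking step, here is a character-window splitter with overlap; the sizes are assumptions (the workflow's Recursive Splitter node exposes equivalent parameters).

```typescript
// Minimal sketch of overlapping chunking; size and overlap are assumptions.
function chunk(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Overlapping windows are one simple way to preserve context across
// chunk boundaries, i.e. the "relationships between chunks" above.
console.log(chunk("a".repeat(2500)).length); // 4 chunks, starting at 0, 800, 1600, 2400
```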
by Cheney Zhang
Create a RAG System with Paul Graham Essays, Milvus, and OpenAI for Cited Answers

This workflow automates the process of creating a document-based AI retrieval system using Milvus, an open-source vector database. It consists of two main steps: data collection/processing and retrieval/response generation. The system scrapes Paul Graham essays, processes them, and loads them into a Milvus vector store. When users ask questions, it retrieves relevant information and generates responses with citations.

Step 1: Data Collection and Processing
- Set up a Milvus server using the official guide
- Create a collection named "my_collection"
- Execute the workflow to scrape Paul Graham essays: fetch essay lists, extract names, split content into manageable items, limit results (if needed), fetch texts, extract content, and load everything into the Milvus Vector Store

This step uses OpenAI embeddings for vectorization.

Step 2: Retrieval and Response Generation
When a chat message is received, the system:
- Sets the chunks to send to the model
- Retrieves relevant information from the Milvus Vector Store
- Prepares chunks
- Answers the query based on those chunks
- Composes citations (see the sketch below)
- Generates a comprehensive response

This process uses OpenAI embeddings and models to ensure accurate and relevant answers with proper citations. For more information on vector databases and similarity search, visit the Milvus documentation.
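To make the citation step concrete, here is a sketch of how citations could be composed from retrieved chunks; the chunk shape (text plus source essay title) is an assumption, not the template's actual data model.

```typescript
// Hypothetical shape of a retrieved chunk: its text and the essay it came from.
type Chunk = { text: string; source: string };

// Deduplicate sources and number them so the answer can reference [1], [2], ...
function composeCitations(chunks: Chunk[]): string {
  const sources = [...new Set(chunks.map((c) => c.source))];
  return sources.map((s, i) => `[${i + 1}] ${s}`).join("\n");
}

const retrieved: Chunk[] = [
  { text: "Do things that don't scale...", source: "Do Things that Don't Scale" },
  { text: "The way to get startup ideas...", source: "How to Get Startup Ideas" },
];
console.log(composeCitations(retrieved));
// [1] Do Things that Don't Scale
// [2] How to Get Startup Ideas
```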
by Mutasem
Use case
Slackbots are super powerful. At n8n, we have been using them to get a lot done. But it can become hard to manage and maintain the many different operations a workflow can perform. This is the base workflow we use for our most powerful internal Slackbots. They handle a lot, from running e2e tests for a GitHub branch to deleting a user. By splitting the workflow into many subworkflows, we are able to handle each command separately, making it easier to debug as well as to support new use cases.

In this template, you can find everything to set up your own Slackbot (and I made it simple: there's only one node to configure 😉). After that, you need to build your commands directly. This bot can create a new thread on an alerts channel and respond there, or reply directly to the user. It responds to help requests with a help page, and it automatically handles unknown commands. It also supports flags and environment variables: for example, /cloudbot-test info mutasem --full-info -e env=prod would pass the following info to the called subworkflow (see the parsing sketch below).

How to setup
1. Add the Slack command and point it at the webhook.
2. Add the following to the Set Config node:
   - alerts_channel: the alerts channel to start threads on
   - instance_url: this instance's URL, to make debugging easier
   - slack_token: the Slack bot token, used to validate requests
   - slack_secret_signature: the Slack signing secret, used to validate requests
   - help_docs_url: the help docs URL, to help users understand the commands
3. Build the other workflows to call, and add them to commands in Set Config. Each command must be mapped to a workflow ID with an Execute Workflow Trigger node.
4. Activate the workflow 🚀

How to adjust
Add your own commands. Depending on your needs, you might want to lock down who can call this.
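As an illustration of the command format, here is a sketch of how such a string could be parsed into a command, arguments, flags, and environment variables. It mirrors the quoted example rather than the template's actual parser.

```typescript
// Parse "info mutasem --full-info -e env=prod" style command text.
function parseCommand(text: string) {
  const tokens = text.trim().split(/\s+/);
  const args: string[] = [];
  const flags: string[] = [];
  const env: Record<string, string> = {};
  for (let i = 0; i < tokens.length; i++) {
    if (tokens[i] === "-e") {
      // "-e key=value" sets an environment variable.
      const [k, v] = tokens[++i].split("=");
      env[k] = v;
    } else if (tokens[i].startsWith("--")) {
      flags.push(tokens[i].slice(2));
    } else {
      args.push(tokens[i]);
    }
  }
  const [command, ...rest] = args;
  return { command, args: rest, flags, env };
}

console.log(parseCommand("info mutasem --full-info -e env=prod"));
// { command: "info", args: ["mutasem"], flags: ["full-info"], env: { env: "prod" } }
```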
by Leonardo Grigorio
Video explanation

This n8n workflow helps you identify trending videos within your niche by detecting outlier videos that significantly outperform a channel's average views. It automates the process of monitoring competitor channels, saving time and streamlining content research.

Included in the Workflow
- Automated Competitor Video Tracking: Monitors videos from specified competitor channels, fetching data directly from the YouTube API.
- Outlier Detection Based on Channel Averages: Compares each video’s performance against the channel’s historical average to identify significant spikes in viewership.
- Historical Video Data Management: Stores video statistics in a PostgreSQL database, allowing the workflow to fetch only new videos and optimize API usage.
- Short Video Filtering: Automatically removes short videos based on duration thresholds.
- Flexible Video Retrieval: Fetches up to 3 months of historical data on the first run and only new videos on subsequent runs.
- PostgreSQL Database Integration: Includes SQL queries for database setup, video insertion, and performance analysis.
- Configurable Outlier Threshold: Focuses on videos published within the last two weeks with view counts at least twice the channel's average (see the sketch below).
- Data Output for Analysis: Outputs the best-performing videos along with their engagement metrics, making it easier to identify trending topics.

Requirements
- n8n installed on your machine or server
- A valid YouTube Data API key
- Access to a PostgreSQL database

This workflow is intended for educational and research purposes, helping content creators gain insights into which topics resonate with audiences without manual daily monitoring.
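A sketch of the outlier rule in plain code (the template implements the equivalent logic as SQL against the PostgreSQL history table); the two-week window and 2x multiplier come from the description, while the types are illustrative.

```typescript
// Outlier rule from the description: published in the last two weeks AND
// at least twice the channel's average views.
type Video = { title: string; views: number; publishedAt: Date };

function findOutliers(
  videos: Video[],
  channelAvgViews: number,
  multiplier = 2,
): Video[] {
  const cutoff = new Date(Date.now() - 14 * 24 * 60 * 60 * 1000);
  return videos.filter(
    (v) => v.publishedAt >= cutoff && v.views >= multiplier * channelAvgViews,
  );
}

// Example: with a 10k average, only the recent 25k-view video is an outlier.
const sample: Video[] = [
  { title: "A", views: 25_000, publishedAt: new Date() },
  { title: "B", views: 12_000, publishedAt: new Date("2020-01-01") },
];
console.log(findOutliers(sample, 10_000).map((v) => v.title)); // ["A"]
```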
by AOE Agent Lab
🌐 AI Customer Support Assistant - Cloud Version

What this workflow does:
This AI-powered customer support automation processes incoming support requests via email or chat, analyzes them using AI, retrieves relevant context, and generates draft responses for support agents.

Key Features:
✅ Multi-channel Input: Email & chat triggers
✅ AI-powered Analysis: Extracts sentiment, urgency, and key information
✅ Context Integration: Combines product manuals, ERP data, and support history
✅ Draft Response Generation: Creates professional responses in German
✅ Human-in-the-loop: Approval workflow before sending to customers

Demo Instructions:
1. Use the Chat interface to test with sample customer queries
2. Or send test emails to trigger the email workflow
3. Watch how the AI analyzes and generates contextual responses
by Cheney Zhang
Create a Paul Graham Essay Q&A System with OpenAI and Milvus Vector Database

How It Works
This workflow creates a question-answering system based on Paul Graham essays. It has two main steps:
1. Data Collection & Processing: scrapes Paul Graham essays, extracts text content, and loads them into a Milvus vector store.
2. Chat Interaction: provides a question-answering interface using the stored vector embeddings, and utilizes OpenAI embeddings for semantic search.

Set Up Steps
1. Set up a Milvus server following the official guide
2. Create a collection named "my_collection"
3. Run the workflow to scrape and load Paul Graham essays
4. Start chatting with the QA system

The workflow handles the entire process: fetching essays, extracting content, generating embeddings via OpenAI (a sketch of that call follows below), storing vectors in Milvus, and providing retrieval for question answering.
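For reference, here is a sketch of the embedding request the workflow's OpenAI node performs under the hood for each essay chunk; the model name is an assumption, and the n8n node handles this call internally.

```typescript
// Minimal sketch of fetching an OpenAI embedding for one essay chunk.
// The model name is an assumption; the n8n OpenAI node does this for you.
async function embed(text: string, apiKey: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "text-embedding-3-small", input: text }),
  });
  if (!res.ok) throw new Error(`Embedding request failed: ${res.status}`);
  const json = await res.json();
  return json.data[0].embedding; // the vector stored in the Milvus collection
}
```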
by Yaron Been
Description
This workflow automatically finds trending headlines and content from various sources and posts them to your social media accounts. It helps maintain an active social media presence without the daily manual effort of content curation.

Overview
This workflow automatically scrapes trending headlines and content from various sources and posts them to your social media accounts. It uses Bright Data to access content and n8n to schedule and post to platforms like Twitter, LinkedIn, or Facebook.

Tools Used
- **n8n:** The automation platform that orchestrates the workflow.
- **Bright Data:** For scraping trending content from news sites, blogs, or other sources without getting blocked.
- **Social Media APIs:** To post content to your accounts.

How to Install
1. Import the Workflow: Download the .json file and import it into your n8n instance.
2. Configure Bright Data: Add your Bright Data credentials to the Bright Data node.
3. Connect Social Media: Authenticate your social media accounts.
4. Customize: Set your content preferences, posting schedule, and hashtag strategy.

Use Cases
- **Social Media Managers:** Automate content curation and posting.
- **Content Creators:** Share trending topics in your niche.
- **Businesses:** Maintain an active social media presence with minimal effort.

Connect with Me
- **Website:** https://www.nofluff.online
- **YouTube:** https://www.youtube.com/@YaronBeen/videos
- **LinkedIn:** https://www.linkedin.com/in/yaronbeen/
- **Get Bright Data:** https://get.brightdata.com/1tndi4600b25 (Using this link supports my free workflows with a small commission)

#n8n #automation #socialmedia #brightdata #contentcuration #scheduling #socialmediaautomation #contentmarketing #socialmediamanagement #autoposting #trendingcontent #n8nworkflow #workflow #nocode #socialmediatools #digitalmarketing #contentcalendar #socialmediapresence #headlinecuration #trendalerts #socialmediaschedule #contentautomation #socialmediamarketing #contentdistribution #automatedposting #socialmediastrategy
by Stéphane Heckel
Emailing Using Google Sheets, Google Docs, and SMTP

Automate personalized email campaigns using a Google Sheets contact list, a Google Docs template, and SMTP delivery.

How It Works
- **Google Docs** is used as the email template with variables: {{firstname}}, {{lastname}}, {{company}}, {{email}}.
- **Google Sheets** contains your list of recipients (one per row).
- For each contact, the workflow merges personal data into the Google Docs template (see the merge sketch below).
- Email is sent to each recipient via SMTP (batch size: 1). Use the Wait node to respect provider quotas.
- After sending, the workflow updates the "process" column of the Google Sheet with the date/time.

How to Use
1. Copy templates: Google Docs template, Google Sheets template. Find each document’s ID (the text after /d/ and before /edit in the URL).
2. Configure workflow: Enter your Google Docs and Google Sheets IDs in the settings node, and set your email subject in the appropriate parameter.
3. Set up credentials: Connect your Google account and configure the SMTP node with your mail server details.
4. Update data: Edit the Google Docs template with your message and variables, and prepare your Google Sheet with these columns: email, firstname, lastname, company.
5. Deploy and test: Connect all nodes, test with a small contact batch, and troubleshoot any node errors (shown in red in n8n).

Requirements
- **Google credentials & permissions**: For Sheets and Docs access.
- **SMTP server**: For email delivery (adjust the Wait node for rate limits).
- **n8n version**: Tested on 1.105.2 (Ubuntu).

Need Help?
Contact me on LinkedIn or ask in the n8n Community Forum!
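A minimal sketch of the merge step, assuming straightforward {{placeholder}} substitution of one sheet row into the template text; the sample template and contact are illustrative.

```typescript
// Replace {{placeholders}} in the template with fields from one sheet row.
type Contact = { email: string; firstname: string; lastname: string; company: string };

function merge(template: string, contact: Contact): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, key: string) =>
    (contact as unknown as Record<string, string>)[key] ?? match,
  );
}

const template = "Hi {{firstname}} {{lastname}}, greetings to everyone at {{company}}!";
console.log(merge(template, {
  email: "ada@example.com",
  firstname: "Ada",
  lastname: "Lovelace",
  company: "Analytical Engines Ltd",
}));
// Hi Ada Lovelace, greetings to everyone at Analytical Engines Ltd!
```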