by Edisson Garcia
🚀 Message-Batching Buffer Workflow (n8n)

This workflow implements a lightweight message-batching buffer using Redis for temporary storage and a JavaScript consolidation function to merge messages. It collects incoming user messages per session, waits for a configurable inactivity window or batch-size threshold, consolidates the buffered messages via custom code, then clears the buffer and returns the combined response, all without external LLM calls.

🔑 Key Features
- **Redis-backed buffer** queues incoming messages per `context_id`.
- **Centralized Config Parameters node** to adjust thresholds and timeouts in one place.
- **Dynamic wait time** based on message length (configurable `minWords`, `waitLong`, `waitShort`).
- **Batch trigger** fires on inactivity timeout or when `buffer_count ≥ batchThreshold`.
- **Zero-cost consolidation** via the built-in JavaScript Code node (consolidate buffer), no GPT-4 or external API required.

⚙️ Setup Instructions

1. Extract Session & Message
Trigger: When chat message received (webhook) or When clicking 'Test workflow' (manual). Map inputs: set the variables `context_id` and `message` in a Set node named Mock input data (for testing) or a proper mapping node in production.

2. Config Parameters
Add a Set node Config Parameters with:

```
minWords: 3        # Word threshold
waitLong: 10       # Timeout (s) for long messages
waitShort: 20      # Timeout (s) for short messages
batchThreshold: 3  # Messages to trigger batch early
```

All downstream nodes reference these JSON values dynamically.

3. Determine Wait Time
Node: get wait seconds (Code):

```javascript
const msg = $json.message || '';
const wordCount = msg.split(/\s+/).filter(w => w).length;
const { minWords, waitLong, waitShort } = items[0].json;
const waitSeconds = wordCount < minWords ? waitShort : waitLong;
return [{ json: { context_id: $json.context_id, message: msg, waitSeconds } }];
```

4. Buffer Message in Redis
(a standalone sketch of these Redis commands follows the customization notes below)
- Buffer messages: LPUSH `buffer_in:{{$json.context_id}}` with payload `{text, timestamp}`.
- Increment buffer_count: INCR `buffer_count:{{$json.context_id}}` with TTL `{{$json.waitSeconds + 60}}`.
- Set last_seen: record `last_seen:{{$json.context_id}}` timestamp with the same TTL.

5. Check & Set Waiting Flag
Get waiting_reply: if null, set waiting_reply to true with TTL `{{$json.waitSeconds}}`; else exit.

6. Wait for Inactivity
WaitSeconds (webhook): pauses for `{{$json.waitSeconds}}` seconds before batch evaluation.

7. Check Batch Trigger
Get last_seen and Get buffer_count. IF `(now - last_seen) ≥ waitSeconds * 1000` OR `buffer_count ≥ batchThreshold`, proceed; else use a Wait node to retry.

8. Consolidate Buffer
consolidate buffer (Code):

```javascript
const j = items[0].json;
const raw = Array.isArray(j.buffer) ? j.buffer : [];
const buffer = raw.map(x => {
  try { return typeof x === 'string' ? JSON.parse(x) : x; }
  catch { return null; }
}).filter(Boolean);
buffer.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
const texts = buffer.map(e => e.text?.trim()).filter(Boolean);
const unique = [...new Set(texts)];
const message = unique.join(' ');
return [{ json: { context_id: j.context_id, message } }];
```

9. Cleanup & Respond
- Delete the Redis keys buffer_in, buffer_count, waiting_reply, and last_seen for the context_id.
- Return the consolidated message to the user via your chat integration.

🛠 Customization Guidance
- **Adjust thresholds** by editing the Config Parameters node.
- **Change concatenation** (e.g., line breaks) by modifying the join separator in the consolidation code.
- **Add filters** (e.g., ignore empty or system messages) inside the consolidation Code node.
- **Monitor performance**: for very high volume, consider sharding Redis keys by date or user segments.
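For reference, the buffering step above maps to plain Redis commands. A minimal standalone sketch using the node-redis client, assuming the key names from the description and a local Redis instance:

```javascript
// Minimal sketch of the buffering step, assuming the key names above (node-redis v4).
import { createClient } from 'redis';

const redis = createClient({ url: 'redis://localhost:6379' }); // assumed local instance
await redis.connect();

async function bufferMessage(contextId, text, waitSeconds) {
  const payload = JSON.stringify({ text, timestamp: new Date().toISOString() });
  const ttl = waitSeconds + 60; // same TTL rule as the workflow

  await redis.lPush(`buffer_in:${contextId}`, payload); // queue the message
  await redis.incr(`buffer_count:${contextId}`);        // bump the batch counter
  await redis.expire(`buffer_count:${contextId}`, ttl);
  await redis.set(`last_seen:${contextId}`, Date.now(), { EX: ttl });

  // Only the first message in a burst starts the wait window.
  const started = await redis.set(`waiting_reply:${contextId}`, 'true', {
    EX: waitSeconds,
    NX: true, // no-op if the flag already exists
  });
  return started !== null; // true => this call should trigger the Wait branch
}
```

The `NX` option makes the waiting flag an atomic check-and-set, which is exactly what the Check & Set Waiting Flag step needs to avoid starting two wait windows for the same session.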
© 2025 Innovatex • Automation & AI Solutions • innovatexiot.carrd.co • LinkedIn
by AlexWantMoreB
🚀 What this flow does
- 🔎 Selects the least-used WordPress category (tracked in PostgreSQL)
- 🤖 Uses GPT (4-mini or better) to generate a fully formatted SEO article with headings, TOC, lists, CTA, and Yoast blocks
- 🖼️ Creates a placeholder cover image and uploads it to WordPress Media
- 📬 Publishes the final post via /wp-json/wp/v2/posts with the correct category + featured image
- 🧠 Logs the used category for future rotation (zero duplicates!)

⚙️ Setup in 3 mins
1. 🏷️ Add your WordPress domain with a simple Set node: domain=https://yourdomain.com
2. 🔐 Create these 3 credentials in n8n:
   - YOUR_WORDPRESS_CREDENTIAL (for /media, /posts)
   - YOUR_POSTGRES_CREDENTIAL (for category tracking)
   - YOUR_OPENAI_CREDENTIAL (GPT-4-mini or better)
3. 🧱 Run the SQL from the docs to create the used_categories table
4. ✅ Manually test the first 3–5 nodes to check WP auth, the OpenAI response, and the DB connection
5. 🕒 Then just schedule it and let the bot write for you.

🎯 Why it's awesome
This is your personal AI content writer + publisher, perfect for:
- 📰 SEO content farms
- 📈 Affiliate blogs
- 🧰 Micro-niche sites
- 🤫 PBNs with rotation-safe automation

No more manual uploads, broken categories, or GPT spam. Every post is structured, beautiful, and intelligently categorized.
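For orientation, the publish step comes down to a single WordPress REST call. A hedged sketch of the request behind the final node, assuming Basic auth via a WordPress application password; the category and media IDs are illustrative values:

```javascript
// Sketch of the final publish call; auth via a WordPress application password.
// categoryId and mediaId are illustrative, supplied by earlier nodes.
const domain = 'https://yourdomain.com';
const auth = Buffer.from('wp_user:application_password').toString('base64');

const categoryId = 12; // example: least-used category picked from PostgreSQL
const mediaId = 345;   // example: ID returned by the /wp-json/wp/v2/media upload

const res = await fetch(`${domain}/wp-json/wp/v2/posts`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Basic ${auth}`,
  },
  body: JSON.stringify({
    title: 'Generated SEO article title',
    content: '<h2>…generated HTML body…</h2>',
    status: 'publish',
    categories: [categoryId],  // assigns the rotated category
    featured_media: mediaId,   // attaches the uploaded cover image
  }),
});
const post = await res.json(); // contains the new post's id and link
```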
by Reza Gholizade
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

Conversational Kubernetes Management with GPT-4o and MCP Integration

This workflow enables you to manage Kubernetes clusters conversationally using OpenAI's GPT-4o and a secure MCP (Model Context Protocol) server. It transforms natural-language queries into actionable Kubernetes commands via a lightweight MCP API gateway, perfect for developers and platform engineers seeking to simplify cluster interaction.

🚀 Setup Instructions
1. Import the Workflow: upload this template to your n8n instance.
2. Configure Required Credentials:
   - OpenAI API Key: add your GPT-4o API key in the credentials.
   - MCP Client Node: set the URL and auth for your MCP server.
3. Test Kubernetes Access: ensure your MCP server is correctly configured and has access to the target Kubernetes cluster.

🧩 Prerequisites
- n8n version 0.240.0 or later
- Access to GPT-4o via OpenAI
- A running MCP server
- Kubernetes cluster credentials configured in your MCP backend

⚠️ Community Nodes Disclaimer
This workflow uses custom community nodes (e.g., MCP Client). Make sure to review and trust these nodes before running them in production.

🛠️ How It Works
1. A webhook or chat input triggers the conversation.
2. GPT-4o interprets the message and generates structured Kubernetes queries (a hypothetical example follows below).
3. The MCP Client securely sends requests to your cluster.
4. The result is returned and formatted for easy reading.

🔧 Customization Tips
- Tweak the GPT-4o prompt to match your tone or technical level.
- Extend MCP endpoints to support new Kubernetes actions.
- Add alerting or monitoring integrations (e.g., Slack, Prometheus).

🖼️ Template Screenshot

🧠 Example Prompts
- Show me all pods in the default namespace.
- Get logs for the nginx pod in kube-system.
- List all deployments in staging.

📎 Additional Resources
- MCP Server on GitHub
- OpenAI Documentation
- n8n Docs

Build smarter Kubernetes workflows with the power of AI!
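To make "structured Kubernetes queries" concrete, here is a sketch of what the agent's tool call for the first example prompt might look like. The tool name and field layout are assumptions for illustration, not the MCP server's actual schema:

```javascript
// Hypothetical shape of a structured query the agent could emit for
// "Show me all pods in the default namespace." The tool and field names
// are illustrative, not the MCP server's documented schema.
const toolCall = {
  tool: 'kubernetes_list', // assumed MCP tool name
  arguments: {
    resource: 'pods',
    namespace: 'default',
  },
};
// The MCP Client node forwards this to the gateway, which runs the
// equivalent of: kubectl get pods -n default
```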
by Jimleuk
This n8n template automates the triaging of newly opened support tickets and issue resolution via JIRA. If your organisation deals with a large number of support requests daily, automated triaging is a great use case for introducing AI to your support teams. Extending the idea, we can also have AI make a first attempt at resolving the issue intelligently.

How it works
- A scheduled trigger picks up newly opened JIRA support tickets from the queue and discards any seen before.
- An AI agent analyses the open ticket, adds labels, sets a priority based on the seriousness of the issue, and simplifies the description for better readability and understanding by human support.
- Next, the agent attempts to address and resolve the issue by finding similar issues (by tags) which have already been resolved (see the lookup sketch below).
- Each similar issue has its comments analysed and summarised to identify the actual resolution and facts.
- These summaries are then used as context for the AI agent to suggest a fix for the open ticket.

How to use
- Simply connect your JIRA instance to the workflow and activate it to start watching for open tickets. Depending on frequency, you may need to increase or decrease the intervals.
- Define the labels to use in the agent's system prompt.
- Restrict to certain projects or issue types to suit your organisation.

Requirements
- JIRA for issue management and support portal
- OpenAI for LLM

Customising this workflow
- Not using JIRA? Try swapping out the nodes for Linear or your issue management system of choice.
- Try a different approach to issue resolution. You might want to try a RAG approach where a knowledge base is used.
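As an illustration of the similar-issues lookup, the agent's search tool could run a JQL query like the one below. The label values are examples, and the exact query the template builds may differ:

```javascript
// Example JQL for the similar-issues lookup; label values are illustrative.
const labels = ['billing', 'login'];
const jql = `labels in (${labels.join(', ')}) AND statusCategory = Done ORDER BY resolved DESC`;
// => labels in (billing, login) AND statusCategory = Done ORDER BY resolved DESC
// A JIRA search node (or GET /rest/api/3/search?jql=...) returns the resolved
// tickets whose comments are then summarised as context for the suggested fix.
```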
by Andrey
⚠️ DISCLAIMER: This workflow uses the HDW LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

Overview
This n8n workflow automates the enrichment of CRM contact data with professional insights from LinkedIn profiles. The workflow integrates with both Pipedrive and HubSpot CRMs, finding LinkedIn profiles that match your contacts and updating your CRM with valuable information about their professional background and recent activities.

Key Features
- **Multi-CRM Support**: works with both Pipedrive and HubSpot
- **AI-Powered Data Enrichment**: uses an advanced AI agent to analyze and summarize professional information
- **Automated Triggers**: activates when new contacts are added or when enrichment is requested
- **Comprehensive Profile Analysis**: captures LinkedIn profile summaries and post activity

How It Works

Triggers
The workflow activates in the following scenarios:
- When a new contact is created in the CRM
- When a contact is updated in the CRM with an enrichment flag

LinkedIn Data Collection Process
1. Email Lookup: first tries to find the LinkedIn profile using the contact's email
2. Advanced Search: if the email lookup fails, uses name and company details to find potential matches
3. Profile Analysis: collects comprehensive profile information
4. Post Analysis: gathers and analyzes the contact's recent LinkedIn activity

CRM Updates
The workflow updates your CRM with:
- LinkedIn profile URL
- Professional summary (skills, experience, background)
- Analysis of recent LinkedIn posts and activity

Setup Instructions

Requirements
- Self-hosted n8n instance with the HDW LinkedIn community node installed
- API access to OpenAI (for GPT-4o)
- Pipedrive and/or HubSpot account
- HDW API key: https://app.horizondatawave.ai

Installation Steps
1. Install the HDW LinkedIn node: `npm install n8n-nodes-hdw` (detailed instructions: https://www.npmjs.com/package/n8n-nodes-hdw)
2. Configure credentials:
   - OpenAI: add your OpenAI API key
   - Pipedrive: connect your Pipedrive account (if using)
   - HubSpot: connect your HubSpot account (if using)
   - HDW LinkedIn: add your API key from https://app.horizondatawave.ai
3. Set up CRM custom fields.

   For Pipedrive, go to Settings → Data Fields → Contact Fields → + Add Field and create:
   - LinkedIn Profile: field type Large text
   - Profile Summary: field type Large text
   - LinkedIn Posts Summary: field type Large text
   - Need Enrichment: field type Single option (Yes/No)
   Detailed instructions for creating custom fields in Pipedrive: https://support.pipedrive.com/en/article/custom-fields

   For HubSpot, go to Settings → Properties → Create property and create the following properties for the Contact object:
   - linkedin_url: field type Single-line text
   - profile_summary: field type Multi-line text
   - linkedin_posts_summary: field type Multi-line text
   - need_enrichment: field type Checkbox (Boolean)
   Detailed instructions for creating properties in HubSpot: https://knowledge.hubspot.com/properties/create-and-edit-properties
4. Import the workflow: import the "HDW_CRM_Enrichment.json" file into your n8n instance.
5. Activate webhooks: enable the webhook triggers for your CRM to ensure the workflow activates correctly.

Customization Options

AI Agent Prompts
You can modify the system prompts in the "Data Enrichment AI Agent" nodes to:
- Change the focus of the profile analysis
- Adjust the tone and detail level of the summaries
- Customize what information is extracted from posts

CRM Field Mapping
The workflow is pre-configured to update specific custom fields in Pipedrive and HubSpot. Update the field/property mappings in:
- the "Update data in Pipedrive" nodes
- the "Update data in HubSpot" node

Troubleshooting

Common Issues
- **LinkedIn Profile Not Found**: check if the contact's email is their work email; consider adjusting the search parameters
- **Webhook Not Triggering**: verify the webhook configuration in your CRM
- **Missing Custom Fields**: ensure all required custom fields are created in your CRM with the correct names

Rate Limits
- Be aware of LinkedIn API rate limits (managed by the HDW LinkedIn node)
- Consider implementing delays if processing large batches of contacts

Best Practices
- Use enrichment flags to selectively update contacts rather than enriching all contacts
- Review and clean contact data in your CRM before enrichment
- Periodically review the AI-generated summaries to ensure quality and relevance
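As a reference for the HubSpot side of the field mapping, the update the workflow performs corresponds to a PATCH against the Contacts API using the custom properties created in step 3. A minimal sketch, assuming a private-app token and an example contact ID:

```javascript
// Sketch of the HubSpot update, assuming a private-app token in HUBSPOT_TOKEN;
// the property names match the custom properties created in step 3.
const contactId = '12345'; // example contact ID from the trigger payload
const res = await fetch(
  `https://api.hubapi.com/crm/v3/objects/contacts/${contactId}`,
  {
    method: 'PATCH',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.HUBSPOT_TOKEN}`,
    },
    body: JSON.stringify({
      properties: {
        linkedin_url: 'https://www.linkedin.com/in/example',
        profile_summary: 'AI-generated summary of skills and experience…',
        linkedin_posts_summary: 'AI-generated summary of recent posts…',
        need_enrichment: 'false', // clear the flag once enrichment is done
      },
    }),
  }
);
```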
by Simeon
🔄 Reddit Content Operations via MCP Server

🧑‍💼 Who is this for?
This workflow is built for content creators, marketers, Reddit automation enthusiasts, and AI agent developers who want structured, programmable access to Reddit content. If you're researching niche communities, tracking trends, or automating Reddit engagement, this is for you.

💡 What problem is this workflow solving?
Reddit has valuable content scattered across subreddits, but manual analysis or engagement is inefficient. This workflow acts as a centralized API interface to:
- Query and manage Reddit posts
- Create, fetch, delete, and reply to comments
- Analyze subreddit metadata and behavior
- Enable AI agents to autonomously operate on Reddit data

It does this using an MCP (Model Context Protocol) Server over Server-Sent Events (SSE).

⚙️ What this workflow does
This template sets up a custom MCP Server that listens for JSON-based operation commands sent via SSE. Based on the operation, it routes the request to one of the following branches:

🟥 Post CRUD
- Create a new Reddit post
- Search posts across subreddits
- Fetch posts by ID
- Delete existing posts

🟩 Comment CRUD
- Create or reply to comments
- Fetch multiple comments from posts
- Delete specific comments

🟦 Subreddit Read Operations
- Get information about subreddits
- List subreddit posts
- Retrieve subreddit rules

🛠 Setup
1. Import this workflow into your self-hosted n8n instance.
2. Configure Reddit credentials (OAuth2).
3. Connect your input system to the MCP Server Trigger node via SSE.
4. Send operation payloads to the server like this:

```json
{
  "operation": "post_search",
  "params": {
    "query": "AI agents",
    "subreddit": "machinelearning"
  }
}
```

The workflow will route to the appropriate node based on the operation type.

🧩 Supported Operations
- post_create, post_get_many, post_search, post_delete, post_get_by_id
- comment_create, comment_reply, comment_get_many, comment_delete
- subreddit_get_about, subreddit_get_many, subreddit_get_rules

🧠 How to customize this workflow to your needs
- Add new operations to the operation_switch node for additional API functionality.
- Chain results into Notion, Slack, Airtable, or external APIs.
- Integrate with OpenAI/GPT to summarize posts or filter content.
- Add logic to score and sort content by engagement, sentiment, or keywords.

🟨 Sticky Notes
- Each operation group is color-coded (Posts, Comments, Subreddits).
- Sticky Notes explain the purpose and dependencies of each section.
- Easy to maintain and extend with clear logical separation.

⚠️ This template uses a custom MCP Server node and only works in self-hosted n8n.

🖼 Workflow Preview
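A second payload, mirroring the post_search example above, shows how a comment operation from the supported list might look. The parameter names are assumptions based on that shape; check the operation_switch node for the exact expected fields:

```javascript
// Hypothetical comment_reply payload, mirroring the post_search example;
// the param names (comment_id, text) are assumptions, not the node's
// documented schema.
const payload = {
  operation: 'comment_reply',
  params: {
    comment_id: 't1_abc123', // Reddit "thing" ID of the comment to reply to
    text: 'Thanks for sharing, this is really useful!',
  },
};
```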
by Jimleuk
This n8n workflow takes Slack conversations and turns them into Calendar events, complete with accurate dates, times, and location information. Adding and removing attendees is also managed automatically.

How it works
- The workflow monitors a Slack channel for invite messages with a "📅" reaction and sends them to the AI agent.
- The AI agent parses the message, determining the time, date, and location.
- Using its Location tool, the AI agent searches for the precise location address from Google Maps.
- Using its Calendar tool, the AI agent creates a Google Calendar invite with the title, description, and location address for the user.
- Back in the Slack channel, others can RSVP to the invite by reacting with the "✅" emoji.
- The workflow polls the message after a while and adds the users who have reacted to the Calendar invite as attendees, and conversely removes any attendees who have since removed their reaction (a sketch of this sync follows below).

Examples
- Jill: "Hey team, I'm organising a round of Laser Tag (Bunker 51) next Thursday around 6pm. Please RSVP with a ✅"
- AI: "I've helped you create an event in your calendar https://cal.google.com/..."
- Jack: "✅"
- AI: "I've added Jack to the event as an attendee."

Requirements
- Slack channel to attach the workflow to
- OpenAI account to use a GPT model
- Google Calendar to create and update events

Customising the Workflow
- This workflow can work with other messaging platforms that support reactions or tagging-like features, such as Discord.
- Don't use Google Calendar? Swap it out for Outlook or your own.
- Use any combination of emoji reactions and add new rules like "RSVP maybe", which could send reminder updates nearer the event date.
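The RSVP sync boils down to diffing the current ✅ reactors against the event's attendees. A minimal Code-node sketch of that logic, assuming earlier nodes have already resolved the Slack reactions and the calendar attendees to email lists:

```javascript
// Sketch of the attendee sync, assuming earlier nodes supply the current
// ✅ reactors (as emails) and the event's existing attendees.
const reacted = $json.reactedEmails;    // e.g. ["jack@acme.com", "jill@acme.com"]
const attendees = $json.eventAttendees; // e.g. ["jill@acme.com", "old@acme.com"]

const toAdd = reacted.filter(e => !attendees.includes(e));    // new ✅ reactions
const toRemove = attendees.filter(e => !reacted.includes(e)); // reactions removed

// The downstream Google Calendar node applies the merged attendee list.
const updated = [...attendees.filter(e => !toRemove.includes(e)), ...toAdd];
return [{ json: { toAdd, toRemove, attendees: updated } }];
```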
by Sina
🧠 Who is this for?
- Startup founders designing creative growth strategies
- Marketing teams seeking low-cost, high-impact campaigns
- Consultants and agencies needing fast guerrilla plans
- Creators exploring AI-powered content and campaigns

❓ What problem does this workflow solve?
Building a full guerrilla marketing strategy usually takes hours of brainstorming, validation, and formatting. This template does all of that in minutes using a swarm of AI agents, from idea generation to KPIs, and even kills bad ideas before you waste time on them.

⚙️ What this workflow does
1. Starts with a chat input where you describe your business or idea
2. Runs a "Swarm Intelligence" loop (sketched after this description):
   - One AI agent generates guerrilla ideas
   - Another agent critically validates each idea and gives honest feedback
   - If the idea is weak, it asks for a new one
3. If accepted, the swarm continues with 16 AI specialists generating: 🎯 Objectives, 🧍‍♂️ Personas, 🎤 Messaging, 🧨 Tactics, 📢 Channels, 🧮 Budget, 📊 KPIs, 📋 Risk plan, and more
4. Merges all chapters into a final Markdown file
5. Lets you download the campaign in seconds

🛠️ Setup
1. Import the workflow to your n8n instance
2. (Optional) Configure your LLM (OpenAI or Ollama) in the "OpenAI Chat Model" node
3. Type your business idea (e.g., "Luxury dog collar brand for Instagram dads")
4. Wait for flow completion
5. Download the final marketing plan file

🤖 LLM Flexibility (Choose Your Model)
Supports any LLM via LangChain:
- Ollama (LLaMA 3.1, Mistral, DeepSeek)
- OpenAI (GPT-4, GPT-3.5)
To switch models, just replace the "Language Model" node; no other logic needs updating.

📌 Notes
- Output is professional and ready to pitch
- The built-in pessimistic validator filters out bad ideas before wasting time

📩 Need help? Email: sinamirshafiee@gmail.com. Happy to support setup or customization!
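For intuition, the generate-validate loop reduces to a simple retry pattern. A hedged sketch in JavaScript, where generateIdea and validateIdea are stand-ins for the two agent nodes, not the workflow's actual implementation:

```javascript
// Hypothetical stand-ins for the two agent nodes; in the workflow these are
// LLM calls, here they are stubs so the loop structure is runnable.
const generateIdea = async (brief) => `Flash-mob unboxing stunt for: ${brief}`;
const validateIdea = async (idea) =>
  idea.length > 20
    ? { accepted: true, feedback: '' }
    : { accepted: false, feedback: 'Too vague, be more specific.' };

// The swarm's generate/validate loop: retry until the validator accepts,
// feeding its criticism back into the next attempt. maxAttempts is illustrative.
async function findViableIdea(businessDescription, maxAttempts = 5) {
  let brief = businessDescription;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const idea = await generateIdea(brief);       // generator agent
    const verdict = await validateIdea(idea);     // pessimistic validator agent
    if (verdict.accepted) return idea;            // hand off to the 16 specialists
    brief += `\nRejected idea: ${idea}\nFeedback: ${verdict.feedback}`;
  }
  throw new Error(`No viable idea after ${maxAttempts} attempts`);
}
```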
by Zacharia Kimotho
This workflow automates sentiment analysis of Reddit posts related to Apple's WWDC25 event. It extracts data, categorizes posts, analyzes the sentiment of comments, and updates a Google Sheet with the results.

Prerequisites
- Bright Data account: you need a Bright Data account to scrape Reddit data. Ensure you have the correct permissions to use their API. https://brightdata.com/
- Google Sheets API credentials: enable the Google Sheets API in your Google Cloud project and create credentials (OAuth 2.0 Client IDs).
- Google Gemini API credentials: you need a Gemini API key to run the sentiment analysis. Ensure you have the correct permissions to use their API. https://ai.google.dev/ (You can use any other model of choice.)

Setup
1. Import the workflow: import the provided JSON workflow into your n8n instance.
2. Configure Bright Data credentials: in the 'scrap reddit' and 'get status' nodes, find the Authorization field under Header Parameters and replace `Bearer 1234` with your Bright Data API key. Apply this to every node that uses your Bright Data API key.
3. Set up the Google Sheets API credentials: in the 'Append Sentiments' node, connect your Google Sheets account through OAuth 2.0 credentials.
4. Configure the Google Gemini credentials: in the 'Sentiment Analysis per comment' node, connect your Google AI account through the API credentials.
5. Configure additional parameters:
   - In the 'scrap reddit' node, modify the JSON body to adjust the search term, date, or sort method.
   - In the 'Wait' node, alter the 'Amount' to adjust the polling interval for scraping status (15 seconds by default; see the polling sketch below).
   - In the 'Text Classifier' node, customize the categories and descriptions to suit your sentiment-analysis needs. Review categories such as 'WWDC events' to ensure relevancy.
   - In the 'Sentiment Analysis per comment' node, modify the system prompt template to improve context.

Customization options
- Bright Data API parameters to adjust scraping behavior
- Wait node duration to optimize polling
- Text Classifier categories and descriptions
- Sentiment Analysis system prompt

Use Case Examples
- **Brand Monitoring**: track public sentiment towards Apple during and after the WWDC25 event.
- **Product Feedback Analysis**: gather insights into user reactions to new product announcements.
- **Competitive Analysis**: compare sentiment towards Apple's announcements versus competitors'.
- **Event Impact Assessment**: measure the overall impact of the WWDC25 event on various aspects of Apple's business.

Target audiences: marketing professionals in the tech industry, brand managers, product managers, market research analysts, social media managers.

Troubleshooting
- Workflow fails to start: check that all necessary credentials (Bright Data and Google Sheets API) are correctly configured and that the Bright Data API key is valid.
- Data scraping fails: verify the Bright Data API key, ensure the dataset ID is correct, and inspect the Bright Data dashboard for any issues with the scraping job.
- Sentiment analysis is inaccurate: refine the categories and descriptions in the 'Text Classifier' node. Check that you have the correct Google Gemini API key, as the original is a placeholder.
- Google Sheets are not updating: ensure the Google Sheets API credentials have the necessary permissions to write to the specified spreadsheet and sheet. Check API usage limits.
- Workflow does not produce the correct output: check the data connections by clicking on them and inspecting which data is produced. Check all formulas for errors.

Happy productivity!
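The 'get status' and Wait nodes implement a standard poll-until-ready loop. A hedged standalone sketch of that pattern; the status URL and response field are assumptions, so use the endpoint and schema from your own Bright Data dataset:

```javascript
// Sketch of the poll-until-ready pattern behind the 'get status' + Wait nodes.
// STATUS_URL is hypothetical; use the endpoint from your Bright Data dataset.
const STATUS_URL = 'https://api.brightdata.com/datasets/v3/progress/SNAPSHOT_ID';
const POLL_INTERVAL_MS = 15_000; // matches the Wait node's 15-second default

async function waitForSnapshot(apiKey) {
  while (true) {
    const res = await fetch(STATUS_URL, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const { status } = await res.json(); // response field name assumed
    if (status === 'ready') return;      // proceed to download and classify
    if (status === 'failed') throw new Error('Scraping job failed');
    await new Promise(r => setTimeout(r, POLL_INTERVAL_MS));
  }
}
```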
by Davide
The provided workflow in n8n is designed to create a Business WhatsApp AI RAG (Retrieval-Augmented Generation) Chatbot.

How it works
1. Webhook Setup: the workflow begins by setting up webhooks for verification and response. The Verify webhook receives GET requests and sends back a verification code, while the Respond webhook handles incoming POST requests from Meta regarding WhatsApp messages.
2. Message Handling: once a message is received, the workflow checks if the incoming JSON contains a user message. If it does, the message is processed further; otherwise, a generic response is sent.
3. AI Agent Interaction: the user's message is passed to the AI Agent node, which uses a conversational agent with a predefined system message tailored for an electronics store. This ensures that the AI provides accurate and professional responses based on the knowledge base.
4. Knowledge Base Utilization: the AI Agent references a knowledge base stored in Qdrant, a vector database. Documents from Google Drive are downloaded, vectorized using OpenAI embeddings, and stored in Qdrant for retrieval during conversations.
5. Response Generation: the AI Agent generates a response using the OpenAI chat model (gpt-4o-mini) and sends it back to the user via WhatsApp.

Set up steps
1. Create the Qdrant collection:
   - Update the QDRANTURL and COLLECTION variables in the workflow.
   - Use the Create collection HTTP request node to initialize the collection in Qdrant.
2. Vectorize documents:
   - Configure the Get folder and Download Files nodes to fetch documents from a specified Google Drive folder.
   - Use the Embeddings OpenAI node to generate embeddings for the downloaded files.
   - Store the vectorized documents in Qdrant using the Qdrant Vector Store node.
3. Configure webhooks (the verification handshake is sketched below):
   - Ensure both Verify and Respond webhooks have the same URL.
   - Set the Verify webhook to use the GET HTTP method and the Respond webhook to use the POST HTTP method.
4. Set up the AI Agent:
   - Define the system prompt for the AI Agent, specifying guidelines for product information, technical support, customer service, and knowledge base usage.
   - Link the AI Agent to the OpenAI chat model and configure any additional tools as needed.
5. Test the workflow:
   - Trigger the workflow manually using the When clicking 'Test workflow' node to ensure all components are functioning correctly.
   - Monitor the flow of data through the nodes and verify that responses are generated and sent accurately.

By following these steps, the workflow will be fully operational, enabling a robust AI-powered chatbot capable of handling customer inquiries via WhatsApp.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
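The Verify webhook implements Meta's standard challenge handshake: Meta sends hub.mode, hub.verify_token, and hub.challenge as query parameters, and the endpoint must echo the challenge when the token matches. A minimal sketch of that check as a standalone Express handler; the verify token value is whatever you configured in the Meta app:

```javascript
// Minimal sketch of Meta's webhook verification handshake (the same check
// the Verify webhook performs); VERIFY_TOKEN is your own configured value.
import express from 'express';

const app = express();
const VERIFY_TOKEN = 'my-secret-verify-token'; // must match the Meta app config

app.get('/webhook', (req, res) => {
  const mode = req.query['hub.mode'];
  const token = req.query['hub.verify_token'];
  const challenge = req.query['hub.challenge'];

  if (mode === 'subscribe' && token === VERIFY_TOKEN) {
    res.status(200).send(challenge); // echo the challenge => verified
  } else {
    res.sendStatus(403); // token mismatch
  }
});

app.listen(3000);
```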
by Lucas Peyrin
How it works
This template is an interactive playground designed to help you master the most useful keyboard shortcuts in n8n and supercharge your building speed. Forget boring lists: this workflow gives you hands-on tasks to complete, turning learning into a practical exercise.

The workflow is structured into four chapters, each focusing on a different aspect of workflow development:
1. Node Basics: learn the fundamentals of interacting with a single node, such as renaming, editing, duplicating, and deactivating.
2. Canvas Navigation & Selection: master the art of moving around the canvas and selecting multiple nodes efficiently.
3. Advanced Actions: discover powerful moves like tidying up messy connections and creating sub-workflows.
4. Execution & Debugging: uncover essential shortcuts for testing your workflows, like pinning data and navigating the executions panel.

Each step provides a clear task in a sticky note, guiding you to perform the action yourself.

Set up steps
Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup, credentials, or configuration.
1. Open the workflow.
2. Follow the instructions in the sticky notes, starting from the top.
3. Perform the actions as described to build muscle memory for each shortcut.
That's it! Get ready to become an n8n power user.
by Lucas Peyrin
How it works
This template is an interactive, step-by-step tutorial designed to teach you the most important skill in n8n: using expressions to access and manipulate data. If you know what JSON is but aren't sure how to pull a specific piece of information from one node and use it in another, this workflow is for you.

It starts with a single "Source Data" node that acts as our filing cabinet, and then walks you through a series of lessons, each demonstrating a new technique for retrieving and transforming that data.

You will learn how to:
- Access a simple value from a previous node.
- Use n8n's built-in selectors like .last() and .first().
- Get a specific item from a list (Array).
- Drill down into nested data (Objects).
- Combine these techniques to access data in an array of objects.
- Go beyond simple retrieval by using JavaScript functions to do math or change text.
- Inspect data with utility functions like Object.keys() and JSON.stringify().
- Summarize data from multiple items using .all() and arrow functions.

Set up steps
Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup or external credentials.
1. Click "Execute Workflow" to run the entire tutorial.
2. Follow the flow from the "Source Data" node to the "Final Exam" node.
3. For each lesson, click on the node to see how its expressions are configured in the parameters panel.
4. Read the detailed sticky note next to each lesson; it breaks down exactly how the expression works and why.
By the end, you'll have the foundational knowledge to connect data and build powerful, dynamic workflows in n8n.
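For a taste of what the lessons cover, here are a few expressions of the kinds listed above. The node name 'Source Data' matches the tutorial; the field names (user, orders, price) are illustrative, not the tutorial's actual data:

```javascript
// A few n8n expressions of the kinds the lessons cover; 'Source Data' is the
// tutorial's node, the field names are illustrative.

// Simple value from a previous node (using .first()):
{{ $('Source Data').first().json.user.name }}

// A specific item from an array, then a nested field:
{{ $json.orders[0].price }}

// A JavaScript function applied to retrieved text:
{{ $json.user.name.toUpperCase() }}

// Inspect the keys of an object:
{{ Object.keys($json.user) }}

// Summarize across all items with .all() and an arrow function:
{{ $('Source Data').all().map(item => item.json.orders.length) }}
```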