by Edisson Garcia
## 🚀 Message-Batching Buffer Workflow (n8n)

This workflow implements a lightweight message-batching buffer using Redis for temporary storage and a JavaScript consolidation function to merge messages. It collects incoming user messages per session, waits for a configurable inactivity window or batch-size threshold, consolidates the buffered messages via custom code, then clears the buffer and returns the combined response, all without external LLM calls.

### 🔑 Key Features

- **Redis-backed buffer** queues incoming messages per `context_id`.
- **Centralized Config Parameters** node to adjust thresholds and timeouts in one place.
- **Dynamic wait time** based on message length (configurable `minWords`, `waitLong`, `waitShort`).
- **Batch trigger** fires on inactivity timeout or when `buffer_count` ≥ `batchThreshold`.
- **Zero-cost consolidation** via the built-in JavaScript Function node (*consolidate buffer*); no GPT-4 or external API required.

### ⚙️ Setup Instructions

1. **Extract Session & Message**
   - Trigger: *When chat message received* (webhook) or *When clicking 'Test workflow'* (manual).
   - Map inputs: set the variables `context_id` and `message` in a Set node named *Mock input data* (for testing) or a proper mapping node in production.
2. **Config Parameters**
   - Add a Set node *Config Parameters* with:

     ```
     minWords: 3        # Word threshold
     waitLong: 10       # Timeout (s) for long messages
     waitShort: 20      # Timeout (s) for short messages
     batchThreshold: 3  # Messages to trigger batch early
     ```
   - All downstream nodes reference these JSON values dynamically.
3. **Determine Wait Time**
   - Node: *get wait seconds* (Code):

     ```javascript
     const msg = $json.message || '';
     const wordCount = msg.split(/\s+/).filter(w => w).length;
     const { minWords, waitLong, waitShort } = items[0].json;
     const waitSeconds = wordCount < minWords ? waitShort : waitLong;
     return [{ json: { context_id: $json.context_id, message: msg, waitSeconds } }];
     ```
4. **Buffer Message in Redis**
   - *Buffer messages*: `LPUSH buffer_in:{{$json.context_id}}` with payload `{text, timestamp}`.
   - *Set buffer_count increment*: `INCR buffer_count:{{$json.context_id}}` with TTL `{{$json.waitSeconds + 60}}`.
   - *Set last_seen*: record `last_seen:{{$json.context_id}}` timestamp with the same TTL.
5. **Check & Set Waiting Flag**
   - *Get waiting_reply*: if null, set `waiting_reply` to `true` with TTL `{{$json.waitSeconds}}`; otherwise exit.
6. **Wait for Inactivity**
   - *WaitSeconds* (webhook): pauses for `{{$json.waitSeconds}}` seconds before batch evaluation.
7. **Check Batch Trigger**
   - *Get last_seen* and *Get buffer_count*.
   - IF `(now - last_seen) ≥ waitSeconds * 1000` OR `buffer_count ≥ batchThreshold`, proceed; otherwise use a Wait node to retry (see the sketch after the customization list).
8. **Consolidate Buffer**
   - *consolidate buffer* (Code):

     ```javascript
     const j = items[0].json;
     const raw = Array.isArray(j.buffer) ? j.buffer : [];
     const buffer = raw
       .map(x => { try { return typeof x === 'string' ? JSON.parse(x) : x; } catch { return null; } })
       .filter(Boolean);
     buffer.sort((a, b) => new Date(a.timestamp) - new Date(b.timestamp));
     const texts = buffer.map(e => e.text?.trim()).filter(Boolean);
     const unique = [...new Set(texts)];
     const message = unique.join(' ');
     return [{ json: { context_id: j.context_id, message } }];
     ```
9. **Cleanup & Respond**
   - Delete the Redis keys `buffer_in`, `buffer_count`, `waiting_reply`, and `last_seen` for the `context_id`.
   - Return the consolidated message to the user via your chat integration.

### 🛠 Customization Guidance

- **Adjust thresholds** by editing the *Config Parameters* node.
- **Change concatenation** (e.g., line breaks) by modifying the join separator in the consolidation code.
- **Add filters** (e.g., ignore empty or system messages) inside the consolidation Function.
- **Monitor performance**: for very high volume, consider sharding Redis keys by date or user segments.
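For reference, the batch-trigger condition from step 7 can be expressed as a single Code node. A minimal sketch, assuming the preceding Redis reads and Config Parameters have been merged onto one item as `last_seen` (ms epoch), `buffer_count`, `waitSeconds`, and `batchThreshold`:

```javascript
// Minimal sketch of the step-7 check; field names assume the upstream
// Redis "get" nodes and Config Parameters were merged onto one item.
const { last_seen, buffer_count, waitSeconds, batchThreshold } = items[0].json;

const inactiveMs = Date.now() - Number(last_seen);
const shouldFlush =
  inactiveMs >= Number(waitSeconds) * 1000 ||      // inactivity window elapsed
  Number(buffer_count) >= Number(batchThreshold);  // early trigger on batch size

return [{ json: { ...items[0].json, shouldFlush } }];
```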
© 2025 Innovatex • Automation & AI Solutions • innovatexiot.carrd.co • LinkedIn
by Reza Gholizade
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

## Conversational Kubernetes Management with GPT-4o and MCP Integration

This workflow enables you to manage Kubernetes clusters conversationally using OpenAI's GPT-4o and a secure MCP (Model Context Protocol) server. It transforms natural-language queries into actionable Kubernetes commands via a lightweight MCP API gateway, making it ideal for developers and platform engineers seeking to simplify cluster interaction.

### 🚀 Setup Instructions

1. **Import the Workflow**: Upload this template to your n8n instance.
2. **Configure Required Credentials**:
   - **OpenAI API Key**: Add your GPT-4o API key in the credentials.
   - **MCP Client Node**: Set the URL and auth for your MCP server.
3. **Test Kubernetes Access**: Ensure your MCP server is correctly configured and has access to the target Kubernetes cluster.

### 🧩 Prerequisites

- n8n version 0.240.0 or later
- Access to GPT-4o via OpenAI
- A running MCP server
- Kubernetes cluster credentials configured in your MCP backend

### ⚠️ Community Nodes Disclaimer

This workflow uses custom community nodes (e.g., MCP Client). Make sure to review and trust these nodes before running them in production.

### 🛠️ How It Works

1. A webhook or chat input triggers the conversation (see the example call at the end of this description).
2. GPT-4o interprets the message and generates structured Kubernetes queries.
3. The MCP Client securely sends requests to your cluster.
4. The result is returned and formatted for easy reading.

### 🔧 Customization Tips

- Tweak the GPT-4o prompt to match your tone or technical level.
- Extend MCP endpoints to support new Kubernetes actions.
- Add alerting or monitoring integrations (e.g., Slack, Prometheus).

### 🧠 Example Prompts

- Show me all pods in the default namespace.
- Get logs for the nginx pod in kube-system.
- List all deployments in staging.

### 📎 Additional Resources

- MCP Server on GitHub
- OpenAI Documentation
- n8n Docs

Build smarter Kubernetes workflows with the power of AI!
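Once active, the chat webhook can be exercised from any HTTP client. A hypothetical sketch; the host, webhook path, and payload shape depend entirely on your trigger configuration:

```javascript
// Hypothetical trigger call; replace host, path, and payload with your own.
const res = await fetch("https://your-n8n-host/webhook/k8s-chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Show me all pods in the default namespace." }),
});
console.log(await res.json());
```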
by Andrey
> ⚠️ **DISCLAIMER**: This workflow uses the HDW LinkedIn community node, which is only available on self-hosted n8n instances. It will not work on n8n.cloud.

### Overview

This n8n workflow automates the enrichment of CRM contact data with professional insights from LinkedIn profiles. It integrates with both Pipedrive and HubSpot, finds LinkedIn profiles that match your contacts, and updates your CRM with valuable information about their professional background and recent activities.

### Key Features

- **Multi-CRM Support**: Works with both Pipedrive and HubSpot
- **AI-Powered Data Enrichment**: Uses an advanced AI agent to analyze and summarize professional information
- **Automated Triggers**: Activates when new contacts are added or when enrichment is requested
- **Comprehensive Profile Analysis**: Captures LinkedIn profile summaries and post activity

### How It Works

**Triggers.** The workflow activates in two scenarios:

- When a new contact is created in the CRM
- When a contact is updated in the CRM with an enrichment flag

**LinkedIn data collection process:**

1. **Email Lookup**: First tries to find the LinkedIn profile using the contact's email.
2. **Advanced Search**: If the email lookup fails, uses name and company details to find potential matches.
3. **Profile Analysis**: Collects comprehensive profile information.
4. **Post Analysis**: Gathers and analyzes the contact's recent LinkedIn activity.

**CRM updates.** The workflow updates your CRM with:

- LinkedIn profile URL
- Professional summary (skills, experience, background)
- Analysis of recent LinkedIn posts and activity
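As a rough illustration, the lookup fallback above amounts to the following decision. A sketch only; the field names are hypothetical, and the actual matching is performed by the HDW LinkedIn node:

```javascript
// Illustrative only: how the collection process chooses a lookup strategy.
// Field names are hypothetical; the HDW LinkedIn node performs the real search.
const { email, first_name, last_name, company } = items[0].json;

let strategy;
if (email) {
  strategy = "email_lookup";          // step 1: exact match by email
} else if (first_name && last_name && company) {
  strategy = "name_company_search";   // step 2: fuzzy match by name + company
} else {
  strategy = "skip";                  // not enough data to search safely
}

return [{ json: { ...items[0].json, strategy } }];
```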
### Setup Instructions

**Requirements:**

- Self-hosted n8n instance with the HDW LinkedIn community node installed
- API access to OpenAI (for GPT-4o)
- Pipedrive and/or HubSpot account
- HDW API key: https://app.horizondatawave.ai

**Installation steps:**

1. **Install the HDW LinkedIn node**: `npm install n8n-nodes-hdw`. Follow the detailed instructions at https://www.npmjs.com/package/n8n-nodes-hdw.
2. **Configure credentials**:
   - OpenAI: Add your OpenAI API key
   - Pipedrive: Connect your Pipedrive account (if using)
   - HubSpot: Connect your HubSpot account (if using)
   - HDW LinkedIn: Add your API key from https://app.horizondatawave.ai
3. **Set up CRM custom fields**:
   - **For Pipedrive**: Go to Settings → Data Fields → Contact Fields → + Add Field and create the following custom fields:
     - LinkedIn Profile: Large text
     - Profile Summary: Large text
     - LinkedIn Posts Summary: Large text
     - Need Enrichment: Single option (Yes/No)

     Detailed instructions for creating custom fields in Pipedrive: https://support.pipedrive.com/en/article/custom-fields
   - **For HubSpot**: Go to Settings → Properties → Create property and create the following properties for the Contact object:
     - linkedin_url: Single-line text
     - profile_summary: Multi-line text
     - linkedin_posts_summary: Multi-line text
     - need_enrichment: Checkbox (Boolean)

     Detailed instructions for creating properties in HubSpot: https://knowledge.hubspot.com/properties/create-and-edit-properties
4. **Import the workflow**: Import the "HDW_CRM_Enrichment.json" file into your n8n instance.
5. **Activate webhooks**: Enable the webhook triggers for your CRM to ensure the workflow activates correctly.

### Customization Options

**AI agent prompts.** You can modify the system prompts in the "Data Enrichment AI Agent" nodes to:

- Change the focus of profile analysis
- Adjust the tone and detail level of summaries
- Customize what information is extracted from posts

**CRM field mapping.** The workflow is pre-configured to update specific custom fields in Pipedrive and HubSpot. Update the field/property mappings in:

- the "Update data in Pipedrive" nodes
- the "Update data in HubSpot" node

### Troubleshooting

**Common issues:**

- **LinkedIn Profile Not Found**: Check whether the contact's email is their work email; consider adjusting the search parameters.
- **Webhook Not Triggering**: Verify the webhook configuration in your CRM.
- **Missing Custom Fields**: Ensure all required custom fields are created in your CRM with the correct names.

**Rate limits:**

- Be aware of LinkedIn API rate limits (managed by the HDW LinkedIn node).
- Consider adding delays when processing large batches of contacts.

### Best Practices

- Use enrichment flags to selectively update contacts rather than enriching all contacts.
- Review and clean contact data in your CRM before enrichment.
- Periodically review the AI-generated summaries to ensure quality and relevance.
by Simeon
## 🔄 Reddit Content Operations via MCP Server

### 🧑‍💼 Who is this for?

This workflow is built for content creators, marketers, Reddit automation enthusiasts, and AI agent developers who want structured, programmable access to Reddit content. If you're researching niche communities, tracking trends, or automating Reddit engagement, this is for you.

### 💡 What problem is this workflow solving?

Reddit has valuable content scattered across subreddits, but manual analysis and engagement are inefficient. This workflow acts as a centralized API interface to:

- Query and manage Reddit posts
- Create, fetch, delete, and reply to comments
- Analyze subreddit metadata and behavior
- Enable AI agents to operate autonomously on Reddit data

It does this using an MCP (Model Context Protocol) Server over Server-Sent Events (SSE).

### ⚙️ What this workflow does

This template sets up a custom MCP Server that listens for JSON-based operation commands sent via SSE. Based on the operation, it routes the request to one of the following branches:

- 🟥 **Post CRUD**: create a new Reddit post; search posts across subreddits; fetch posts by ID; delete existing posts
- 🟩 **Comment CRUD**: create or reply to comments; fetch multiple comments from posts; delete specific comments
- 🟦 **Subreddit Read Operations**: get information about subreddits; list subreddit posts; retrieve subreddit rules

### 🛠 Setup

1. Import this workflow into your self-hosted n8n instance.
2. Configure Reddit credentials (OAuth2).
3. Connect your input system to the MCP Server Trigger node via SSE.
4. Send operation payloads to the server like this:

   ```json
   {
     "operation": "post_search",
     "params": {
       "query": "AI agents",
       "subreddit": "machinelearning"
     }
   }
   ```

The workflow will route to the appropriate node based on the operation type.

### 🧩 Supported Operations

`post_create`, `post_get_many`, `post_search`, `post_delete`, `post_get_by_id`, `comment_create`, `comment_reply`, `comment_get_many`, `comment_delete`, `subreddit_get_about`, `subreddit_get_many`, `subreddit_get_rules`

### 🧠 How to customize this workflow to your needs

- Add new operations to the `operation_switch` node for additional API functionality (see the routing sketch at the end of this description).
- Chain results into Notion, Slack, Airtable, or external APIs.
- Integrate with OpenAI/GPT to summarize posts or filter content.
- Add logic to score and sort content by engagement, sentiment, or keywords.

### 🟨 Sticky Notes

- Each operation group is color-coded (Posts, Comments, Subreddits).
- Sticky notes explain the purpose and dependencies of each section.
- Easy to maintain and extend thanks to clear logical separation.

⚠️ This template uses a custom MCP Server node and only works in self-hosted n8n.
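For reference, the dispatch performed by `operation_switch` is equivalent to the following Code-node logic. A sketch only; in the template this routing is done by the `operation_switch` node itself, using the supported-operations list above:

```javascript
// Equivalent sketch of operation_switch: map an incoming operation to its branch.
const { operation, params } = items[0].json;

const branches = {
  post: ["post_create", "post_get_many", "post_search", "post_delete", "post_get_by_id"],
  comment: ["comment_create", "comment_reply", "comment_get_many", "comment_delete"],
  subreddit: ["subreddit_get_about", "subreddit_get_many", "subreddit_get_rules"],
};

const branch = Object.keys(branches).find(b => branches[b].includes(operation));
if (!branch) throw new Error(`Unsupported operation: ${operation}`);

return [{ json: { branch, operation, params } }];
```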
by Sina
### 🧠 Who is this for?

- Startup founders designing creative growth strategies
- Marketing teams seeking low-cost, high-impact campaigns
- Consultants and agencies needing fast guerrilla plans
- Creators exploring AI-powered content and campaigns

### ❓ What problem does this workflow solve?

Building a full guerrilla marketing strategy usually takes hours of brainstorming, validation, and formatting. This template does all of that in minutes using a swarm of AI agents, from idea generation to KPIs, and even kills bad ideas before you waste time on them.

### ⚙️ What this workflow does

1. Starts with a chat input where you describe your business or idea.
2. Runs a "Swarm Intelligence" loop (sketched at the end of this description):
   - One AI agent generates guerrilla ideas.
   - Another agent critically validates each idea and gives honest feedback.
   - If the idea is weak, it asks for a new one.
3. Once an idea is accepted, the swarm continues with 16 AI specialists generating: 🎯 objectives, 🧍‍♂️ personas, 🎤 messaging, 🧨 tactics, 📢 channels, 🧮 budget, 📊 KPIs, 📋 risk plan, and more.
4. Merges all chapters into a final Markdown file.
5. Lets you download the campaign in seconds.

### 🛠️ Setup

1. Import the workflow into your n8n instance.
2. (Optional) Configure your LLM (OpenAI or Ollama) in the "OpenAI Chat Model" node.
3. Type your business idea (e.g., "Luxury dog collar brand for Instagram dads").
4. Wait for the flow to complete.
5. Download the final marketing plan file.

### 🤖 LLM Flexibility (Choose Your Model)

Supports any LLM via LangChain:

- Ollama (LLaMA 3.1, Mistral, DeepSeek)
- OpenAI (GPT-4, GPT-3.5)

To switch models, just replace the "Language Model" node; no other logic needs updating.

### 📌 Notes

- Output is professional and ready to pitch.
- The built-in pessimistic validator filters out bad ideas before they waste your time.

### 📩 Need help?

Email: sinamirshafiee@gmail.com. Happy to support setup or customization!
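Conceptually, the generate-validate loop from step 2 behaves like this. A sketch only; `generateIdea` and `validateIdea` are hypothetical stand-ins for the two LLM agent calls:

```javascript
// Conceptual sketch of the swarm's ideation loop.
// generateIdea() and validateIdea() are hypothetical wrappers around the two agents;
// businessDescription comes from the chat input.
let idea;
let verdict = { accepted: false, feedback: "" };

do {
  idea = await generateIdea(businessDescription, verdict.feedback); // agent 1: ideation
  verdict = await validateIdea(idea);                               // agent 2: honest critique
} while (!verdict.accepted);                                        // weak ideas trigger a retry
```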
by Zacharia Kimotho
This workflow automates sentiment analysis of Reddit posts related to Apple's WWDC25 event. It extracts data, categorizes posts, analyzes the sentiment of comments, and updates a Google Sheet with the results.

### Prerequisites

- **Bright Data account**: You need a Bright Data account to scrape Reddit data. Ensure you have the correct permissions to use their API: https://brightdata.com/
- **Google Sheets API credentials**: Enable the Google Sheets API in your Google Cloud project and create credentials (OAuth 2.0 Client IDs).
- **Google Gemini API credentials**: You need a Gemini API key to run the sentiment analysis. Ensure you have the correct permissions to use the API: https://ai.google.dev/. You can use any other model of your choice.

### Setup

1. **Import the workflow**: Import the provided JSON workflow into your n8n instance.
2. **Configure Bright Data credentials**: In the 'scrap reddit' and 'get status' nodes, find the Authorization field under Header Parameters and replace `Bearer 1234` with your Bright Data API key. Apply this to every node that uses your Bright Data API key.
3. **Set up the Google Sheets API credentials**: In the 'Append Sentiments' node, connect your Google Sheets account through OAuth 2.0 credentials.
4. **Configure the Google Gemini credentials**: In the 'Sentiment Analysis per comment' node, connect your Google AI account through the API credentials.
5. **Configure additional parameters**:
   - In the 'scrap reddit' node, modify the JSON body to adjust the search term, date, or sort method (a hypothetical example follows this list).
   - In the 'Wait' node, alter the 'Amount' to adjust the polling interval for the scraping status; it is set to 15 seconds by default.
   - In the 'Text Classifier' node, customize the categories and descriptions to suit your sentiment-analysis needs. Review categories such as 'WWDC events' to ensure relevancy.
   - In the 'Sentiment Analysis per comment' node, modify the system prompt template to improve context.
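For step 5, the 'scrap reddit' request body looks something like the following. This is a hypothetical illustration only; the exact field names depend on the Bright Data dataset you use:

```json
{
  "keyword": "WWDC25",
  "date": "last_week",
  "sort_by": "top"
}
```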
### Customization Options

- Bright Data API parameters to adjust scraping behavior
- Wait node duration to optimize polling
- Text Classifier categories and descriptions
- Sentiment Analysis system prompt

### Use Case Examples

- **Brand Monitoring**: Track public sentiment towards Apple during and after the WWDC25 event.
- **Product Feedback Analysis**: Gather insights into user reactions to new product announcements.
- **Competitive Analysis**: Compare sentiment towards Apple's announcements versus competitors.
- **Event Impact Assessment**: Measure the overall impact of the WWDC25 event on various aspects of Apple's business.

### Target Audiences

Marketing professionals in the tech industry, brand managers, product managers, market research analysts, and social media managers.

### Troubleshooting

- **Workflow fails to start**: Check that all necessary credentials (Bright Data and Google Sheets API) are correctly configured and that the Bright Data API key is valid.
- **Data scraping fails**: Verify the Bright Data API key, ensure the dataset ID is correct, and inspect the Bright Data dashboard for any issues with the scraping job.
- **Sentiment analysis is inaccurate**: Refine the categories and descriptions in the 'Text Classifier' node. Check that you have the correct Google Gemini API key, as the original is a placeholder.
- **Google Sheets are not updating**: Ensure the Google Sheets API credentials have the necessary permissions to write to the specified spreadsheet and sheet. Check API usage limits.
- **Workflow does not produce the correct output**: Check the data connections by clicking through them and inspecting which data is being produced. Check all formulas for errors.

Happy productivity!
by Lucas Peyrin
### How it works

This template is an interactive playground designed to help you master the most useful keyboard shortcuts in n8n and supercharge your building speed. Forget boring lists: this workflow gives you hands-on tasks to complete, turning learning into a practical exercise.

The workflow is structured into four chapters, each focusing on a different aspect of workflow development:

1. **Node Basics**: Learn the fundamentals of interacting with a single node, such as renaming, editing, duplicating, and deactivating.
2. **Canvas Navigation & Selection**: Master the art of moving around the canvas and selecting multiple nodes efficiently.
3. **Advanced Actions**: Discover powerful moves like tidying up messy connections and creating sub-workflows.
4. **Execution & Debugging**: Uncover essential shortcuts for testing your workflows, like pinning data and navigating the executions panel.

Each step provides a clear task in a sticky note, guiding you to perform the action yourself.

### Set up steps

Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup, credentials, or configuration.

1. Open the workflow.
2. Follow the instructions in the sticky notes, starting from the top.
3. Perform the actions as described to build muscle memory for each shortcut.

That's it! Get ready to become an n8n power user.
by Lucas Peyrin
### How it works

This template is an interactive, step-by-step tutorial designed to teach you the most important skill in n8n: using expressions to access and manipulate data. If you know what JSON is but aren't sure how to pull a specific piece of information from one node and use it in another, this workflow is for you.

It starts with a single "Source Data" node that acts as our filing cabinet, then walks you through a series of lessons, each demonstrating a new technique for retrieving and transforming that data. You will learn how to:

- Access a simple value from a previous node.
- Use n8n's built-in selectors like .last() and .first().
- Get a specific item from a list (array).
- Drill down into nested data (objects).
- Combine these techniques to access data in an array of objects.
- Go beyond simple retrieval by using JavaScript functions to do math or change text.
- Inspect data with utility functions like Object.keys() and JSON.stringify().
- Summarize data from multiple items using .all() and arrow functions.

### Set up steps

Setup time: 0 minutes! This workflow is a self-contained tutorial and requires no setup or external credentials.

1. Click "Execute Workflow" to run the entire tutorial.
2. Follow the flow from the "Source Data" node to the "Final Exam" node.
3. For each lesson, click on the node to see how its expressions are configured in the parameters panel.
4. Read the detailed sticky note next to each lesson; it breaks down exactly how the expression works and why.

By the end, you'll have the foundational knowledge to connect data and build powerful, dynamic workflows in n8n.
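To give a flavor of the techniques the lessons cover, here are a few n8n expression patterns of this kind. The node name "Source Data" matches the tutorial, but the field names here are illustrative:

```javascript
// Illustrative n8n expressions; the fields (customer, orders, price, total) are made up.
{{ $('Source Data').first().json.customer.name }}       // drill into a nested object
{{ $('Source Data').first().json.orders[0].total }}     // pick one element from an array
{{ $json.price * 1.2 }}                                 // JavaScript math on the current item
{{ Object.keys($json) }}                                // inspect which fields exist
{{ $('Source Data').all().map(i => i.json.total) }}     // summarize across all items
```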
by Yang
## 📝 Description

### 🤖 What this workflow does

This workflow turns Reddit pain points into emotionally driven comic-style ads using AI. It takes in a product description, scrapes Reddit for real user pain points, filters relevant posts using AI, generates ad angles, rewrites them into 4-panel comic prompts, and finally uses Dumpling AI to generate comic-style images. All final creatives are uploaded to Google Drive.

### 🧠 What problem is this solving?

Crafting ad content that truly speaks to customer struggles is time-consuming. This workflow automates that entire process, from pain-point discovery to visual creative output, using AI and Reddit as a source of truth for customer language.

### 👤 Who is this for?

- Copywriters and performance marketers
- Startup founders and indie hackers
- Creatives building empathy-driven ad concepts
- Automation experts looking to generate scroll-stopping content

### ⚙️ Setup Instructions

Here's how to set everything up, step by step:

1. **Trigger: Form Input** (📝 Form - Submit Product Info): This form asks the user to enter Brand Name, Website, and Product Description. ✅ Make sure this form is active and testable.
2. **Generate Reddit Keyword** (🧠 GPT-4o - Generate Reddit Keyword): Uses the product description to generate a search keyword based on what your audience might be discussing on Reddit.
3. **Search Reddit** (🔍 Reddit - Search Posts): Uses the keyword to search Reddit for relevant threads. Make sure your Reddit integration is properly configured.
4. **Filter Valid Posts** (🔎 IF - Check Upvotes & Text Length): Filters out low-effort or unpopular posts. Only keeps posts with a minimum of 2 upvotes and content at least 100 characters long. ✅ You can adjust these thresholds in the node settings.
5. **Clean Reddit Output** (🧼 Code - Structure Reddit Posts): Formats the list of posts into clean JSON for the AI agents to process (see the sketch after this list).
6. **Check Relevance with AI Agent** (🤔 Langchain Agent - Post Relevance Classifier): Uses a LangChain agent (tool: think2) to determine whether each post is relevant to your product. Only relevant posts are passed forward.
7. **Aggregate Relevant Posts** (📦 Code - Merge Relevant Posts): Collects all relevant posts into a clean format for the next GPT-4o call.
8. **Generate Ad Angles** (✍️ GPT-4o - Generate Emotional Ad Angles): Writes 10 pain-point-based marketing angles using real customer language.
9. **Rank the Best Angles** (📊 GPT-4o - Rank Top 10 Angles): Scores the generated angles and ranks them from most to least powerful. Only the top 3 are passed forward.
10. **Turn Angles into Comic Prompts** (🎭 GPT-4o - Write Comic Scene Prompts): Rewrites each of the top ad angles into a 4-panel comic-strip structure (pain → tension → product → resolution).
11. **Generate Comic Images** (🎨 Dumpling AI - Generate Comic Panels): Sends each prompt to Dumpling AI to create visual comic scenes.
12. **Wait for Image Generation** (⏳ Wait - Dumpling AI Response Time): Adds a delay to give Dumpling AI time to finish generating all images.
13. **Get Final Image URLs** (🔗 Code - Extract Image URLs from Dumpling Response): Extracts all image links for preview/download.
14. **Upload to Google Drive** (☁️ Google Drive - Upload Comics): Uploads the comic images to your chosen Google Drive folder. ✅ Update this node with your destination folder ID.
15. **Log Final Output** (optional): Extend the flow to log the image links, ad angles, and Reddit sources to Google Sheets, Airtable, or Notion, depending on your use case.
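As a reference for steps 4–5, the filter-and-clean stage could look like this as a single Code node. A sketch under assumed field names; `ups` and `selftext` are the usual Reddit API fields, but verify them against your Reddit node's actual output:

```javascript
// Hypothetical combined filter + clean step for the Reddit search results.
// Thresholds mirror step 4: >= 2 upvotes and >= 100 characters of body text.
return items
  .filter(i => (i.json.ups ?? 0) >= 2 && (i.json.selftext ?? '').length >= 100)
  .map(i => ({
    json: {
      title: i.json.title,
      body: i.json.selftext,
      upvotes: i.json.ups,
      url: i.json.url,
    },
  }));
```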
### 🛠️ How to Customize

- ✏️ **Adjust tone**: Update the GPT-4o system prompts to sound more humorous, emotional, or brand-specific.
- 🧵 **Use different styles**: Swap the Dumpling AI image settings for ink-sketch, manga, or cartoon renderings.
- 🔄 **Change input source**: Replace Reddit with X (Twitter), Quora, or YouTube comments.
- 📦 **Store results differently**: Swap Google Drive for Notion, Dropbox, or Airtable.

This workflow turns real audience struggles into thumb-stopping comic content, automatically.
by Joseph LePage
## 🎥 YouTube Video AI Agent Workflow

This n8n workflow template allows you to interact with an AI agent that extracts the details and transcript of a YouTube video using a provided video ID. Once the details and transcript are retrieved, you can chat with the AI agent to explore or analyze the video's content in a conversational and insightful manner.

### 🌟 How the Workflow Works

1. 🔗 **Input Video ID**: The user provides a YouTube video ID as input to the workflow.
2. 📄 **Data Retrieval**: The workflow fetches essential details about the video (e.g., title, description, upload date) and retrieves its transcript using YouTube's Data API and additional tools for transcript extraction (see the sketch after the sample prompts).
3. 🤖 **AI Agent Interaction**: The extracted details and transcript are processed by an AI-powered agent. Users can then ask questions or engage in a conversation with the agent about the video's content, such as summarizing the transcript, analyzing key points, or clarifying specific sections.
4. 💬 **Dynamic Responses**: The AI agent uses natural language processing (NLP) to generate contextual and accurate responses based on the video data, ensuring a smooth and intuitive interaction.

### 🚀 Use Cases

- 📊 **Content Analysis**: Quickly analyze long YouTube videos by querying specific sections or extracting summaries.
- 📚 **Research and Learning**: Gain insights from educational videos or tutorials without watching them entirely.
- ✍️ **Content Creation**: Repurpose transcripts into blogs, social media posts, or other formats efficiently.
- ♿ **Accessibility**: Provide an alternative, text-based way to interact with video content for users who prefer reading over watching.

### 🛠️ Resources for Getting Started

- **Google Cloud Console** (for API setup): Visit Google Cloud's Get Started Guide to configure your API access.
- **YouTube Data API Key Setup**: Follow this guide to create and manage your YouTube Data API key.
- **Install n8n Locally**: Refer to this installation guide for setting up n8n on your local machine.

### ✨ Sample Prompts

- "Tell me about this YouTube video with id: JWfNLF_g_V0"
- "Can you provide a list of key takeaways from this video with id: [youtube-video-id]?"
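For reference, the details half of step 2 corresponds to one call to the YouTube Data API's `videos` endpoint. A minimal sketch; `YT_API_KEY` is a placeholder for your own key:

```javascript
// Fetch title/description/upload date for one video via the YouTube Data API v3.
const videoId = "JWfNLF_g_V0";          // sample ID from the prompts above
const apiKey = process.env.YT_API_KEY;  // placeholder: your Data API key

const res = await fetch(
  `https://www.googleapis.com/youtube/v3/videos?part=snippet&id=${videoId}&key=${apiKey}`
);
const { items } = await res.json();
const { title, description, publishedAt } = items[0].snippet;
console.log(title, publishedAt);
```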
by Jimleuk
This n8n template automates the triaging of newly opened support tickets and issue resolution via JIRA. If your organisation deals with a large number of support requests daily, automating triage is a great use case for introducing AI to your support teams. Extending the idea, we can also have the AI make a first attempt at resolving the issue intelligently.

### How it works

- A scheduled trigger picks up newly opened JIRA support tickets from the queue and discards any seen before.
- An AI agent analyses each open ticket to add labels, set a priority based on the seriousness of the issue, and simplify the description for better readability and understanding by human support.
- Next, the agent attempts to address and resolve the issue by finding similar issues (by tags) which have already been resolved (see the JQL sketch below).
- Each similar issue has its comments analysed and summarised to identify the actual resolution and relevant facts.
- These summaries are then used as context for the AI agent to suggest a fix for the open ticket.

### How to use

- Simply connect your JIRA instance to the workflow and activate it to start watching for open tickets. Depending on ticket frequency, you may need to increase or decrease the polling interval.
- Define the labels to use in the agent's system prompt.
- Restrict the workflow to certain projects or issue types to suit your organisation.

### Requirements

- JIRA for issue management and the support portal
- OpenAI for the LLM

### Customising this workflow

- Not using JIRA? Try swapping out the nodes for Linear or your issue-management system of choice.
- Try a different approach to issue resolution. You might want to try a RAG approach where a knowledge base is used.
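For illustration, the similar-issue search maps naturally onto a JQL query of roughly this shape. The project key, status filter, and fallback label are assumptions, not the template's exact query:

```javascript
// Hypothetical JQL built from the open ticket's labels; adjust project/status to taste.
const label = items[0].json.labels?.[0] ?? "billing";   // "billing" is a placeholder
const jql =
  `project = SUPPORT AND labels = "${label}" ` +
  `AND statusCategory = Done ORDER BY resolved DESC`;

return [{ json: { ...items[0].json, jql } }];
```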
by AlexWantMoreB
### 🚀 What this flow does

- 🔎 Selects the least-used WordPress category (tracked in PostgreSQL)
- 🤖 Uses GPT (4-mini or better) to generate a fully formatted SEO article with headings, TOC, lists, CTA, and Yoast blocks
- 🖼️ Creates a placeholder cover image and uploads it to WordPress Media
- 📬 Publishes the final post via /wp-json/wp/v2/posts with the correct category + featured image (see the sketch below)
- 🧠 Logs the used category for future rotation (zero duplicates!)

### ⚙️ Setup in 3 mins

1. 🏷️ Add your WordPress domain with a simple Set node: `domain=https://yourdomain.com`
2. 🔐 Create these 3 credentials in n8n:
   - YOUR_WORDPRESS_CREDENTIAL — for /media, /posts
   - YOUR_POSTGRES_CREDENTIAL — for category tracking
   - YOUR_OPENAI_CREDENTIAL — GPT-4-mini or better
3. 🧱 Run the SQL from the docs to create the used_categories table
4. ✅ Manually test the first 3–5 nodes to check WP auth, the OpenAI response, and the DB connection
5. 🕒 Then just schedule it and let the bot write for you.

### 🎯 Why it's awesome

This is your personal AI content writer and publisher, perfect for:

- 📰 SEO content farms
- 📈 Affiliate blogs
- 🧰 Micro-niche sites
- 🤫 PBNs with rotation-safe automation

No more manual uploads, broken categories, or GPT spam. Every post is structured, beautiful, and intelligently categorized.
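For reference, the publish step corresponds to a single WordPress REST call. A minimal sketch; the auth scheme (an application password here) and all field values are illustrative:

```javascript
// Publish a post via the WordPress REST API (illustrative values throughout).
const domain = "https://yourdomain.com";
const auth = Buffer.from("username:application-password").toString("base64");
const categoryId = 7;   // least-used category ID from PostgreSQL (placeholder)
const mediaId = 42;     // ID returned by the /media cover upload (placeholder)

await fetch(`${domain}/wp-json/wp/v2/posts`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Basic ${auth}`,
  },
  body: JSON.stringify({
    title: "My SEO Article",
    content: "<h2>…</h2>",        // the GPT-generated HTML body
    status: "publish",
    categories: [categoryId],      // correct category
    featured_media: mediaId,       // featured image
  }),
});
```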