by Cheng Siong Chin
How It Works

This workflow automates brand reputation monitoring by analyzing sentiment across news, social media, reviews, and forums using AI-powered trend detection. It is designed for PR teams, brand managers, marketing directors, and crisis communication specialists who need real-time awareness of reputation threats before they escalate.

The template solves the challenge of manually tracking brand mentions across fragmented channels—news outlets, Twitter, Instagram, review sites, Reddit, industry forums—and then identifying emerging crises hidden in sentiment shifts and volume spikes.

Scheduled execution triggers four parallel HTTP nodes that fetch data from news APIs, social media monitoring services, review aggregators, and forum discussion platforms. A Merge node combines all sources, and a normalization step ensures a consistent data structure. OpenAI GPT-4 with structured output parsing then performs sentiment analysis and trend detection, identifying sudden negative sentiment surges, coordinated criticism patterns, and viral complaint escalation.

Setup Steps

- Configure the HTTP nodes with API credentials for the news monitoring service
- Add your OpenAI API key to the Chat Model node for sentiment and trend analysis
- Connect your Slack workspace and specify the crisis response team channel
- Integrate a Gmail account with the PR leadership distribution list
- Set up the Google Sheets connection and create the monitoring dashboard

Prerequisites

OpenAI API key, news monitoring API access

Use Cases

Consumer brands monitoring product launch reception and identifying quality issues early

Customization

Modify the AI prompts for industry-specific crisis indicators

Benefits

Reduces crisis detection time from hours to minutes, enabling damage control before viral spread
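The normalization step described above can be sketched as a small mapping function. This is a minimal illustration only: the per-source field names (`headline`, `permalink`, `review_text`, etc.) are assumptions, not any specific API's schema.

```javascript
// Sketch: normalize mentions from four different sources into one
// consistent shape before AI analysis. The per-source field names
// are illustrative assumptions, not any specific API's schema.
function normalizeMention(source, raw) {
  switch (source) {
    case "news":
      return { source, text: raw.headline + " " + raw.body, url: raw.link, publishedAt: raw.date };
    case "social":
      return { source, text: raw.content, url: raw.permalink, publishedAt: raw.created_at };
    case "reviews":
      return { source, text: raw.review_text, url: raw.review_url, publishedAt: raw.submitted };
    case "forums":
      return { source, text: raw.post, url: raw.thread_url, publishedAt: raw.posted_at };
    default:
      throw new Error(`Unknown source: ${source}`);
  }
}

const item = normalizeMention("social", {
  content: "The new release keeps crashing 😡",
  permalink: "https://example.com/p/1",
  created_at: "2024-05-01T10:00:00Z",
});
console.log(item.text); // "The new release keeps crashing 😡"
```

With every mention reduced to the same `{ source, text, url, publishedAt }` shape, the downstream GPT-4 node can analyze all four channels with a single prompt.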
by Yusuke Yamamoto
This n8n template demonstrates a multi-modal AI recipe assistant that suggests delicious recipes based on user input, delivered via Telegram. The workflow can uniquely handle two types of input: a photo of your ingredients or a simple text list.

Use cases are many: Get instant dinner ideas by taking a photo of your fridge contents, reduce food waste by finding recipes for leftover ingredients, or create a fun and interactive service for a cooking community or food delivery app!

Good to know

- This workflow uses two different AI models (one for vision, one for text generation), so costs will be incurred for each execution. See OpenRouter Pricing or your chosen model provider's pricing page for updated info.
- The AI prompts are in English, but the final recipe output is configured to be in Japanese. You can easily change the language by editing the prompt in the Recipe Generator node.

How it works

- The workflow starts when a user sends a message or an image to your bot on Telegram via the Telegram Trigger.
- An IF node intelligently checks if the input is text or an image.
- If an image is sent, the AI Vision Agent analyzes it to identify ingredients. A Structured Output Parser then forces this data into a clean JSON list.
- If text is sent, a Set node directly prepares the user's text as the ingredient list.
- Both paths converge, providing a standardized ingredient list to the Recipe Generator agent. This AI acts as a professional chef to create three detailed recipes.
- Crucially, a second Structured Output Parser takes the AI's creative text and formats it into a reliable JSON structure (with name, difficulty, instructions, etc.). This ensures the output is always predictable and easy to work with.
- A final Set node uses a JavaScript expression to transform the structured recipe data into a beautiful, emoji-rich, easy-to-read message.
- The formatted recipe suggestions are sent back to the user on Telegram.
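The final formatting step can be pictured as a function like the one below. This is a hedged sketch of the kind of expression the Set node might use; the exact output style of the template will differ, though the recipe fields (`name`, `difficulty`, `instructions`) follow the JSON structure described above.

```javascript
// Sketch: turn structured recipe JSON into an emoji-rich Telegram
// message, as the final Set node does. Formatting choices here are
// illustrative, not the template's exact output.
function formatRecipes(recipes) {
  return recipes
    .map(
      (r, i) =>
        `🍽️ ${i + 1}. ${r.name}\n` +
        `⭐ Difficulty: ${r.difficulty}\n` +
        r.instructions.map((step, n) => `  ${n + 1}) ${step}`).join("\n")
    )
    .join("\n\n");
}

const message = formatRecipes([
  { name: "Veggie Stir-Fry", difficulty: "Easy", instructions: ["Chop vegetables", "Stir-fry 5 min"] },
]);
console.log(message.includes("Veggie Stir-Fry")); // true
```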
How to use

- Configure the Telegram Trigger with your own bot's API credentials.
- Add your AI provider credentials in the OpenAI Vision Model and OpenAI Recipe Model nodes (this template uses OpenRouter, but it can be swapped for a direct OpenAI connection).

Requirements

- A Telegram account and a bot token.
- An AI provider account that supports vision and text models, such as OpenRouter or OpenAI.

Customising this workflow

- Modify the prompt in the Recipe Generator to include dietary restrictions (e.g., "vegan," "gluten-free") or to change the number of recipes suggested.
- Swap the Telegram nodes for Discord, Slack, or a Webhook to integrate this recipe bot into a different platform or your own application.
- Connect to a recipe database API to supplement the AI's suggestions with existing recipes.
by Rohit Dabra
WooCommerce AI Agent — n8n Workflow (Overview)

Description: Turn your WooCommerce store into a conversational AI assistant — create products, place orders, run reports, and manage coupons using natural language via n8n + an MCP Server.

Key features

- Natural-language commands mapped to WooCommerce actions (products, orders, reports, coupons).
- Structured JSON outputs + lightweight mapping to avoid schema errors.
- Calls routed through your MCP Server for secure, auditable tool execution.
- Minimal user prompts — the agent auto-fetches context and asks only when necessary.
- Extensible: add new tools or customize prompts/mappings easily.

Demo of the workflow: YouTube video

🚀 Setup Guide: WooCommerce + AI Agent Workflow in n8n

1. Prerequisites
- Running n8n instance
- WooCommerce store with REST API keys
- OpenAI API key
- MCP server (production URL)

2. Import Workflow
- Open the n8n dashboard
- Go to Workflows → Import
- Upload/paste the workflow JSON
- Save as WooCommerce AI Agent

3. Configure Credentials
- OpenAI: Create a new credential → OpenAI API, add your API key → Save & test
- WooCommerce: Create a new credential → WooCommerce API, enter Base URL, Consumer Key & Secret → Save & test
- MCP Client: In the MCP Client node, set Server URL to your MCP server's production URL; add authentication if required

4. Test Workflow
- Open the workflow in the editor
- Run a sample request (e.g., create a test product)
- Verify the product appears in WooCommerce

5. Activate Workflow
- Once tested, click Activate in n8n
- The workflow is now live 🎉

6. Troubleshooting
- Schema errors → Ensure fields match WooCommerce node requirements
- Connection issues → Re-check credentials and the MCP URL
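The "lightweight mapping to avoid schema errors" idea can be sketched as a small translation layer between the AI's structured output and the WooCommerce REST payload. The intent shape below (`productName`, `price`, `categories`) is a hypothetical example, not the template's actual schema; the payload shape (`regular_price` as a string, `categories` as `[{ id }]`) follows the WooCommerce REST API.

```javascript
// Sketch: map a structured AI intent to a WooCommerce REST product payload.
// The intent field names here are illustrative assumptions.
function mapIntentToProductPayload(intent) {
  const payload = {
    name: intent.productName,
    type: "simple",
    regular_price: String(intent.price), // WooCommerce expects prices as strings
    description: intent.description || "",
  };
  if (Array.isArray(intent.categories)) {
    // WooCommerce expects categories as an array of { id } objects
    payload.categories = intent.categories.map((id) => ({ id }));
  }
  return payload;
}

const payload = mapIntentToProductPayload({
  productName: "Premium Hoodie",
  price: 49.9,
  categories: [12],
});
console.log(payload.regular_price); // "49.9"
```

Coercing types up front (e.g., number → string price) is what prevents the schema errors mentioned in the troubleshooting section.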
by Rahul Joshi
Description

Automatically compare candidate resumes to job descriptions (PDFs) from Google Drive, generate a 0–100 fit score with gap analysis, and update Google Sheets—powered by Azure OpenAI (GPT-4o-mini). Fast, consistent screening with saved reports in Drive. 📈📄

What This Template Does

- Fetches job descriptions and resumes (PDF) from Google Drive. 📥
- Extracts clean text from both PDFs for analysis. 🧼
- Generates an AI evaluation (score, must-have gaps, nice-to-have bonuses, summary). 🤝
- Parses the AI output to structured JSON. 🧩
- Delivers a saved text report in Drive and updates a Google Sheet. 🗂️

Key Benefits

- Saves time with automated, consistent scoring. ⏱️
- Clear gap analysis for quick decisions. 🔍
- Audit-ready reports stored in Drive. 🧾
- Centralized tracking in Google Sheets. 📊
- No-code operation after initial setup. 🧑‍💻

Features

- Google Drive search and download for JDs and resumes. 📂
- PDF-to-text extraction for reliable parsing. 📝
- Azure OpenAI (GPT-4o-mini) comparison and scoring. 🤖
- Robust JSON parsing and error handling. 🛡️
- Automatic report creation in Drive. 💾
- Append or update candidate data in Google Sheets. 📑

Requirements

- n8n instance (cloud or self-hosted).
- Google Drive credentials in n8n with access to the JD and resume folders (e.g., "JD store", "Resume_store").
- Azure OpenAI access with a deployed GPT-4o-mini model and credentials in n8n.
- Google Sheets credentials in n8n to append or update candidate rows.
- PDFs for job descriptions and resumes stored in the designated Drive folders.

Target Audience

- Talent acquisition and HR operations teams. 🧠
- Recruiters (in-house and agencies). 🧑‍💼
- Hiring managers seeking consistent shortlisting. 🧭
- Ops teams standardizing candidate evaluation records. 🗃️

Step-by-Step Setup Instructions

1. Connect Google Drive and Google Sheets credentials in n8n and verify folder access. 🔑
2. Add Azure OpenAI credentials and select GPT-4o-mini in the AI node. 🧠
3. Import the workflow and assign credentials to all nodes (Drive, AI, Sheets). 📦
4. Set folder references for JDs ("JD store") and resumes ("Resume_store"). 📁
5. Run once to validate extraction, scoring, report creation, and sheet updates. ✅
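The "robust JSON parsing and error handling" step can be sketched as below. This is a generic pattern, not the template's actual code; the `score` field and 0–100 clamp mirror the fit score described above, and stripping Markdown fences handles a common LLM output quirk.

```javascript
// Sketch: defensively parse an LLM's JSON output, stripping the Markdown
// code fences models sometimes wrap around it. Field names are illustrative.
function parseAiJson(raw) {
  // Remove ```json ... ``` fences if present
  const cleaned = raw.trim().replace(/^```(?:json)?\s*/i, "").replace(/```\s*$/, "");
  try {
    const data = JSON.parse(cleaned);
    // Clamp the score into the expected 0-100 range
    if (typeof data.score === "number") {
      data.score = Math.max(0, Math.min(100, data.score));
    }
    return data;
  } catch (err) {
    // Surface a structured error instead of crashing the workflow
    return { error: "Could not parse AI output", raw };
  }
}

const result = parseAiJson('```json\n{"score": 87, "gaps": ["Kubernetes"]}\n```');
console.log(result.score); // 87
```

Returning an error object (rather than throwing) lets a downstream IF node route malformed AI responses to a retry or alert branch.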
by Masaki Go
Slack Bot for n8n Template Search with AI Tips, Cache and Analytics

Search n8n workflow templates directly from Slack with AI-powered suggestions. Mention the bot with what you need in English, Spanish, or Japanese and get matching templates plus actionable tips to improve your automation.

Who is this for

Teams using n8n who want to find workflow templates without leaving Slack. Great for multilingual teams and onboarding new members.

What this workflow does

- Detects user intent (search, help, or browse categories) and routes accordingly
- Extracts keywords from 200+ known services and translates 150+ Japanese business terms to English
- Checks a Google Sheets cache before calling the n8n Templates API
- Uses OpenAI (gpt-4o-mini) to generate contextual tips based on the search results and use case
- When no templates are found, the AI suggests alternative keywords and how to build the workflow from scratch
- Logs every search to Google Sheets and posts a weekly usage report to Slack

Setup

1. Create a Slack App with app_mentions:read and chat:write scopes
2. Set Slack credentials in n8n
3. Create an HTTP Header Auth credential for OpenAI (name: Authorization, value: Bearer sk-your-key)
4. Create a Google Sheet with two tabs: Cache (SearchQuery, CachedResponse, ResultCount, Timestamp) and Analytics (Timestamp, User, Query, Keywords, ResultCount, Intent, FromCache)
5. Connect the Google Sheet in all four Sheets nodes
6. Select your Slack channel in the Trigger, Error Reply, and Weekly Summary nodes
7. Activate and test with a mention

How to customize

- Add services to knownServices or extend jaToEn for more languages
- Edit the AI system prompts to change tone or tip style
- Adjust the weekly report schedule in the Schedule Trigger node
- Replace the Google Sheets cache with Redis for better performance at scale
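The cache check can be pictured as a lookup over the Cache tab's rows. The column names below mirror the tab layout in the setup steps; the 24-hour TTL is an assumption, not a value from the template.

```javascript
// Sketch: check for a fresh cached search result before hitting the
// templates API. Column names mirror the Cache tab (SearchQuery,
// CachedResponse, ResultCount, Timestamp); the TTL is an assumption.
const CACHE_TTL_MS = 24 * 60 * 60 * 1000; // assume cache entries stay valid for 24h

function findFreshCacheHit(rows, query, now = Date.now()) {
  const normalized = query.trim().toLowerCase();
  return rows.find(
    (row) =>
      row.SearchQuery.toLowerCase() === normalized &&
      now - new Date(row.Timestamp).getTime() < CACHE_TTL_MS
  ) || null;
}

const rows = [
  { SearchQuery: "slack notification", CachedResponse: "[...]", ResultCount: 5, Timestamp: new Date().toISOString() },
];
console.log(findFreshCacheHit(rows, "Slack Notification") !== null); // true
```

A null result would route the workflow to the Templates API branch, after which the response is written back to the Cache tab.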
by Guillaume Duvernay
This template provides a high-performance, cost-optimized alternative to standard AI Agents for building RAG (Retrieval-Augmented Generation) chatbots. Instead of relying on a single expensive model to decide every action, this workflow uses a modular "Routing & Specialized Steps" architecture. It delivers results up to 50% faster and 3x more cost-efficiently by only involving heavy-duty models when deep internal knowledge is actually required.

By leveraging Lookio as the core RAG platform, you can connect your own documentation (PDFs, Docs, Webpages) to a chat interface without the complexity of manually managing vector databases or custom chunking strategies. Learn more about breaking down agents for efficiency in this YouTube deep dive.

👥 Who is this for?

- **Customer Support Teams:** Build an automated response system that answers queries based on official product guides or internal FAQs.
- **Efficiency-Focused Developers:** Scale AI operations without ballooning API costs by offloading simple queries to smaller models.
- **Marketing & Content Teams:** Provide instant access to brand guidelines or past content repositories for internal research.

💡 What problem does this solve?

- **Eliminates Token Waste:** Traditional agents send long system prompts to expensive models even for basic greetings like "Hello." This workflow routes those to a "nano" model, saving significant costs.
- **Increases Reliability:** By breaking down the "Agent" logic into discrete steps (Categorize → Query Prep → Retrieval → Response), you gain more control over the output guidelines at every stage.
- **Scalable Knowledge Retrieval:** Uses Lookio to handle the heavy lifting of RAG, ensuring sourced and factual answers based on your private data rather than general AI training.

⚙️ How it works

- **Memory & Intent Routing:** The workflow fetches past messages and uses a specialized Text Classifier (powered by a small model) to determine if the user is asking a knowledge-based question or just chatting.
- **Path A (Simple Response):** If it's a greeting, a small model handles the reply instantly.
- **Path B (Knowledge Retrieval):** If information is needed, a specialized LLM step crafts a clean search query specifically for Lookio.
- **RAG Execution:** The Lookio API retrieves the exact insights needed from your connected knowledge documents.
- **Final Generation:** A large model synthesizes the specific Lookio results and the conversation history into a final, fact-based response.

What is Lookio, the RAG platform?

Lookio is a business-focused AI platform designed for automated knowledge retrieval. Unlike casual AI tools, Lookio is "API-first," meaning it's built specifically to integrate with tools like n8n. It handles the entire RAG pipeline—from document ingestion to vector storage and logical retrieval—allowing you to focus on building the logic of your automation rather than the infrastructure of your AI. Lookio offers various query modes (Eco, Flash, Deep) so you can prioritize speed or depth depending on your budget.

🛠️ Setup

1. **Set up Lookio:** Create an account at Lookio.app, upload your documents, and create an assistant.
2. **API Key:** In the RAG via Lookio node, replace <YOUR-API-KEY> in the header and paste your assistant_id in the body.
3. **AI Credentials:** Add your OpenAI (or preferred provider) credentials to the Very small model, Mini model, and Large model nodes.
4. **Activate:** Turn the workflow on. You can now chat with your knowledge base via the n8n chat interface.

🚀 Taking it further

- **Add More Branches:** Expand the Intent router to include paths for specific actions, like extracting emails for lead generation or checking order statuses via a database lookup.
- **Formatting Tweaks:** Adjust the system prompts in the Write the final response node to match your brand's specific tone (e.g., "Explain it like I'm five" or "Legal professional tone").
- **Deployment:** Connect this backend to your website or a Slack channel for real-time team usage.
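The cost-routing idea behind Path A and Path B can be sketched as a dispatch function. The classifier, model labels, and path names below are illustrative stand-ins for the template's Text Classifier and model nodes, not its actual implementation.

```javascript
// Sketch: route cheap vs. heavy, as in the Categorize -> Query Prep ->
// Retrieval -> Response pipeline. Names are illustrative assumptions.
function routeMessage(message, classify) {
  const intent = classify(message); // e.g. "chitchat" or "knowledge"
  if (intent === "chitchat") {
    // Path A: a small model answers directly, no retrieval cost
    return { path: "A", model: "small", retrieval: false };
  }
  // Path B: prep a search query, call the RAG API, then synthesize
  return { path: "B", model: "large", retrieval: true };
}

// Toy classifier standing in for the Text Classifier node
const toyClassify = (msg) => (/^(hi|hello|thanks)\b/i.test(msg.trim()) ? "chitchat" : "knowledge");

console.log(routeMessage("Hello!", toyClassify).path);                     // "A"
console.log(routeMessage("What is our refund policy?", toyClassify).path); // "B"
```

The saving comes from the asymmetry: greetings never touch the large model or the retrieval API, so their cost is a fraction of a full agent turn.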
by Moka Ouchi
How it works

This workflow automates the creation and management of a daily space-themed quiz in your Slack workspace. It's a fun way to engage your team and learn something new about the universe every day!

- **Triggers Daily:** The workflow automatically runs at a scheduled time every day.
- **Fetches NASA's Picture of the Day:** It starts by fetching the latest Astronomy Picture of the Day (APOD) from the official NASA API, including its title, explanation, and image URL.
- **Generates a Quiz with AI:** Using the information from NASA, it prompts a Large Language Model (LLM) like OpenAI's GPT to create a unique, multiple-choice quiz question.
- **Posts to Slack:** The generated quiz is then posted to a designated Slack channel. The bot automatically adds numbered reactions (1️⃣, 2️⃣, 3️⃣, 4️⃣) to the message, allowing users to vote.
- **Waits and Tallies Results:** After a configurable waiting period, the workflow retrieves all reactions on the quiz message. A custom code node then tallies the votes, identifies the users who answered correctly, and calculates the total number of participants.
- **Announces the Winner:** Finally, it posts a follow-up message in the same channel, revealing the correct answer and a detailed explanation, and mentioning all the users who got it right.

Set up steps

This template should take about 10-15 minutes to set up.

Credentials:

- NASA: Add your NASA API credentials in the Get APOD node. You can get a free API key from the NASA API website.
- OpenAI: Add your OpenAI API credentials in the OpenAI: Create Quiz node.
- Slack: Add your Slack API credentials to all the Slack nodes. You'll need to create a Slack App with the following permissions: chat:write, reactions:read, and reactions:write.

Configuration:

- In the Workflow Configuration node, set your channelId to the Slack channel where you want the quiz to be posted.
- You can also customize the quizDifficulty, llmTone, and answerTimeoutMin to fit your audience.
Activate Workflow:

Once configured, simply activate the workflow. It will run automatically at the time specified in the Schedule Trigger node (default is 21:00 daily).

Requirements

- An n8n instance
- A NASA API Key
- An OpenAI API Key
- A Slack App with the appropriate permissions and API credentials
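The vote-tallying code node can be sketched as below. Slack's `reactions.get` API returns reaction objects with a `name` (the emoji short name, e.g. `one` for 1️⃣) and a `users` array; the exact data shape the template consumes is an assumption.

```javascript
// Sketch: tally quiz votes from Slack reactions, as the custom code
// node described above might. Reaction names follow Slack's "one",
// "two", ... emoji naming; the data shape is an assumption.
function tallyVotes(reactions, correctIndex) {
  const optionNames = ["one", "two", "three", "four"]; // 1️⃣..4️⃣
  const votesPerOption = {};
  const voters = new Set();
  let winners = [];

  for (const reaction of reactions) {
    const idx = optionNames.indexOf(reaction.name);
    if (idx === -1) continue; // ignore unrelated reactions
    votesPerOption[idx + 1] = reaction.users.length;
    reaction.users.forEach((u) => voters.add(u));
    if (idx + 1 === correctIndex) winners = reaction.users;
  }
  return { totalParticipants: voters.size, votesPerOption, winners };
}

const result = tallyVotes(
  [
    { name: "one", users: ["U1", "U2"] },
    { name: "three", users: ["U3"] },
    { name: "rocket", users: ["U4"] }, // not a vote option
  ],
  3
);
console.log(result.totalParticipants); // 3
console.log(result.winners);           // ["U3"]
```

Using a Set for voters means a user who reacts to several options is still counted once as a participant.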
by Cheng Siong Chin
How It Works

This workflow automates supply chain monitoring and risk management by deploying multiple specialized AI agents to analyze different supply chain dimensions simultaneously. Designed for supply chain managers, procurement teams, and logistics coordinators, it solves the critical challenge of real-time supply chain visibility and proactive risk mitigation across complex global networks.

The system triggers on a schedule, fetches current supply chain data, then deploys four specialized AI agents—an Enterprise Executor for strategic coordination, a Provider Generator for supplier assessment, a Circular Economy analyzer for sustainability metrics, and a Logistics Optimizer for distribution efficiency. Each agent leverages OpenAI models with dedicated tools for calculations and data parsing. Results are merged, analyzed for risk level (critical, high, normal), and routed to the appropriate stakeholders via email with risk-specific formatting and urgency levels.

Setup Steps

- Configure the Schedule Trigger with the desired monitoring frequency
- Set up OpenAI API credentials for all four AI agent nodes
- Configure the Fetch Supply Chain Data node with your ERP/SCM system API endpoint
- Customize the Enterprise Executor Agent tools with your strategic KPIs
- Update the Provider Generator Agent with supplier evaluation criteria
- Configure the Circular Economy Agent with sustainability metrics and targets

Prerequisites

Active OpenAI API account with sufficient credits, supply chain management system with API access

Use Cases

Daily supply chain health monitoring, supplier risk assessment, inventory shortage prediction

Customization

Modify agent prompts for industry-specific analysis, adjust risk scoring algorithms

Benefits

Provides 360-degree supply chain visibility, enables proactive risk mitigation
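The merge-and-route step can be sketched as a threshold function over the agents' risk scores. The thresholds, score scale, and notification labels below are illustrative assumptions, not values from the template.

```javascript
// Sketch: score-to-risk-level routing like the merge-and-route step
// above. Thresholds and field names are illustrative assumptions.
function classifyRisk(agentScores) {
  // Take the worst (highest) risk score across all agents
  const worst = Math.max(...Object.values(agentScores));
  if (worst >= 80) return { level: "critical", notify: "immediate-email", subjectPrefix: "🚨 CRITICAL" };
  if (worst >= 50) return { level: "high", notify: "email", subjectPrefix: "⚠️ HIGH RISK" };
  return { level: "normal", notify: "digest", subjectPrefix: "✅ Normal" };
}

console.log(classifyRisk({ supplier: 35, logistics: 85, sustainability: 20 }).level); // "critical"
console.log(classifyRisk({ supplier: 10, logistics: 25, sustainability: 20 }).level); // "normal"
```

Taking the worst score across agents is a conservative choice: a single critical dimension (e.g., logistics) escalates the whole report even if the others look healthy.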
by David Olusola
AI Resume Screening with GPT-4o & Google Drive - Automated Hiring Pipeline

How it works

Transform your hiring process with this intelligent automation that screens resumes in minutes, not hours. The workflow monitors your Gmail inbox, processes resume attachments using AI analysis, and delivers structured candidate evaluations to a centralized Google Sheets dashboard.

Key workflow steps:

1. Email Detection - Monitors Gmail for resume attachments (PDF, DOCX, TXT)
2. File Processing - Uploads to Google Drive and extracts text content
3. AI Analysis - GPT-4o evaluates candidates against job requirements
4. Data Extraction - Pulls contact info and key qualifications automatically
5. Results Logging - Saves structured analysis to Google Sheets for team review

Set up steps

Total setup time: 15-20 minutes

Required Credentials (5 minutes)

- Gmail account with OAuth2 access
- Google Drive API credentials
- Google Sheets API access
- OpenAI API key for GPT-4o

Configuration Steps (10 minutes)

1. Connect Gmail trigger - Authorize email monitoring
2. Set up Google Drive folder - Choose destination for resume files
3. Create tracking spreadsheet - Copy the provided Google Sheets template
4. Add OpenAI credentials - Insert your API key for AI analysis
5. Customize job description - Update the role requirements in the "Job Description" node

Optional Customization (5 minutes)

- Modify AI scoring criteria in the recruiter prompt
- Adjust candidate information extraction fields
- Customize Google Sheets column mapping

No coding required - All configuration happens through the n8n interface using pre-built nodes and simple dropdown selections.
Template Features

Smart File Handling

- Supports PDF, Word documents, and plain text resumes
- Automatic format conversion and text extraction
- Intelligent routing based on file type

AI-Powered Analysis

- GPT-4o evaluation against job requirements
- Structured scoring with strengths/weaknesses breakdown
- Risk and opportunity assessment for each candidate
- Actionable next-steps recommendations

Seamless Integration

- Direct Gmail inbox monitoring
- Automatic Google Drive file organization
- Real-time Google Sheets dashboard updates
- Clean data extraction for CRM integration

Professional Output

- Standardized candidate scoring (1-10 scale)
- Detailed justification for each evaluation
- Contact information extraction
- Resume quality validation

Perfect for HR teams, recruiting agencies, and growing companies looking to streamline their hiring pipeline with intelligent automation.
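The "intelligent routing based on file type" can be pictured as a switch over the attachment's extension. The branch names below are hypothetical labels for illustration; the MIME types are the standard ones for each format.

```javascript
// Sketch: route an email attachment by file type, as in the routing
// step above. Branch names are illustrative assumptions.
function routeAttachment(filename) {
  const ext = filename.split(".").pop().toLowerCase();
  switch (ext) {
    case "pdf":
      return { branch: "extract-pdf", mimeType: "application/pdf" };
    case "docx":
      return { branch: "convert-then-extract", mimeType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document" };
    case "txt":
      return { branch: "plain-text", mimeType: "text/plain" };
    default:
      return { branch: "skip", mimeType: null }; // not a supported resume format
  }
}

console.log(routeAttachment("resume.PDF").branch); // "extract-pdf"
console.log(routeAttachment("photo.png").branch);  // "skip"
```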
by Miha
This is an official n8n workflow that helps you follow our sticky note and naming guidelines - required for getting your template published on the n8n template library.

How it works:

- Parses the workflow's nodes, connections, and spatial layout.
- Uses GPT-4o to group nodes into logical clusters and generate descriptive sticky notes.
- Resolves any overlapping sticky notes through iterative collision detection.
- Optionally renames all nodes to follow descriptive naming conventions via a second AI pass.

Setup steps

1. Add your OpenAI API credentials to the two OpenAI Chat Model nodes.
2. Paste your target workflow JSON into the "Set Workflow Variables" node.
3. Set renameNodes to true or false depending on whether you want node renaming.
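Iterative collision detection for sticky notes boils down to an axis-aligned rectangle overlap test plus repeated nudging until the layout is stable. The sketch below illustrates the idea; the note shape (`{ x, y, width, height }`) and the rightward-nudge strategy are assumptions, not the template's actual algorithm.

```javascript
// Sketch: AABB overlap check plus an iterative "push apart" loop,
// illustrating the collision resolution described above.
function overlaps(a, b) {
  return a.x < b.x + b.width && b.x < a.x + a.width &&
         a.y < b.y + b.height && b.y < a.y + a.height;
}

function resolveCollisions(notes, step = 20, maxIterations = 100) {
  for (let iter = 0; iter < maxIterations; iter++) {
    let moved = false;
    for (let i = 0; i < notes.length; i++) {
      for (let j = i + 1; j < notes.length; j++) {
        if (overlaps(notes[i], notes[j])) {
          notes[j].x += step; // nudge the later note right and retry
          moved = true;
        }
      }
    }
    if (!moved) break; // stable layout reached
  }
  return notes;
}

const laidOut = resolveCollisions([
  { x: 0, y: 0, width: 100, height: 80 },
  { x: 50, y: 10, width: 100, height: 80 },
]);
console.log(overlaps(laidOut[0], laidOut[1])); // false
```

The `maxIterations` cap guards against pathological layouts where nudging one note into another could otherwise loop indefinitely.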
by Cheng Siong Chin
How It Works

This workflow automates end-to-end curriculum planning using a multi-agent AI architecture in n8n. Designed for educators, instructional designers, and academic institutions, it eliminates the manual effort of researching, structuring, and assessing curriculum content.

A Curriculum Supervisor Agent orchestrates three specialised sub-agents: a Research Agent (using Google Search, Perplexity, and Wikipedia), a Content Creation Agent (using GPT for drafting), and an Assessment Agent (using a calculator and code tools). Planning memory persists context across agent interactions. Once all agents complete their tasks, the Prepare Curriculum Data node formats the output, which is then stored in Google Sheets. This pipeline ensures coherent, research-backed, assessment-ready curriculum plans are generated and stored automatically with minimal human intervention.

Setup Steps

1. Connect OpenAI credentials to the Supervisor, Research, Content, and Assessment model nodes.
2. Add a Google Custom Search API key to the Google Search Tool node.
3. Configure the Perplexity API key in the Perplexity Research Tool node.
4. Authenticate Google Sheets with OAuth2; set the target sheet in the Store Curriculum Plan node.
5. Verify the Planning Memory node is linked to the Supervisor Agent for context persistence.

Prerequisites

- Google Custom Search API key
- Perplexity API key
- Google Sheets OAuth2 credentials

Use Cases

University course redesign aligned to industry trends

Customisation

Add email output via a Gmail node post-storage

Benefits

Cuts curriculum planning time by 70%+
by PollupAI
Who is this for?

This n8n workflow template is designed for customer support, CX, and ops teams that manage customer messages through HubSpot and use Jira for internal task management. It is especially useful for SaaS companies aiming to automate ticket triage, sentiment detection, and team assignment using AI agents.

🧩 What problem is this workflow solving?

Customer service teams often struggle with manual message classification, delayed reactions to churn signals, and inefficiencies in routing support issues to the right internal teams. This workflow uses LLMs and automated profiling to:

- Detect churn risk or intent in customer messages
- Summarize issues
- Classify tickets into categories (e.g. fulfillment, technical, invoicing)
- Automatically create Jira tickets based on enriched insights

🤖 What this workflow does

This AI-powered workflow processes HubSpot support tickets and routes them to Jira based on sentiment and topic. Here's the full breakdown:

1. Triggers: Either manually or on a schedule (via cron).
2. Fetch HubSpot tickets: Retrieves new messages and their metadata.
3. Run Orchestration Agent:
   - Uses the Sentinel Agent to detect emotional tone, churn risk, and purchase intent.
   - Calls the Profiler Agent to enrich customer profiles from HubSpot.
   - Summarizes the message using OpenAI.
   - Classifies the ticket using a custom classifier (technical, fulfillment, etc.).
4. Generate a Jira ticket: The title and description are generated using GPT. The assignee and project are predefined.
5. AI agents can be expanded (e.g. add Guide or Facilitator agents).

⚙️ Setup

To use this template, you'll need:

- **HubSpot account** with OAuth2 credentials in n8n
- **Jira Software Cloud account** and project ID
- **OpenAI credentials** for GPT-based nodes
- Optional: Create sub-workflows for additional AI agents

Steps:

1. Clone the workflow in your n8n instance.
2. Replace placeholder credentials for HubSpot, OpenAI, and Jira.
3. Adjust Jira project/issue type IDs to match your setup.
4. Test the workflow using the manual trigger or scheduled trigger node.

🛠️ How to customize this workflow to your needs

1. Edit category logic: In the "Category Classifier" node, modify categories and prompt structure to match your internal team structure (e.g., billing, account management, tech support).
2. Refine AI prompts: Customize the agent prompt definitions in Sentinel_agent, Profiler_agent, and Orchestrator to better align with your brand tone or service goals.
3. Update Jira integration: You can route tickets to different projects or team leads by adjusting the "Create an issue in Jira" node based on classification output.
4. Add escalation paths: Insert Slack, email, or webhook notifications for specific risk levels or customer segments.

This workflow empowers your team with real-time message triage, automated decision-making, and AI-enhanced customer insight, turning every inbound ticket into a data-driven action item.
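The hand-off from the AI analysis to Jira can be sketched as a payload builder. The analysis field names (`churnRisk`, `sentiment`, `category`) are illustrative assumptions; the nested `fields.project.key` / `issuetype.name` shape follows the standard Jira Cloud REST format for creating issues.

```javascript
// Sketch: turn the enriched AI analysis into a Jira issue payload.
// Analysis field names are illustrative assumptions; the Jira shape
// (fields.project.key, issuetype.name) follows the Jira Cloud REST API.
function buildJiraIssue(analysis, projectKey = "SUP") {
  const priority = analysis.churnRisk === "high" ? "Highest"
                 : analysis.sentiment === "negative" ? "High"
                 : "Medium";
  return {
    fields: {
      project: { key: projectKey },
      issuetype: { name: "Task" },
      summary: `[${analysis.category}] ${analysis.title}`,
      description: analysis.summary,
      priority: { name: priority },
    },
  };
}

const issue = buildJiraIssue({
  category: "invoicing",
  title: "Double charge on renewal",
  summary: "Customer reports being billed twice; frustrated tone, churn signals present.",
  sentiment: "negative",
  churnRisk: "high",
});
console.log(issue.fields.priority.name); // "Highest"
```

Deriving priority from churn risk first, then sentiment, is one way to make the escalation paths in step 4 above data-driven rather than manual.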