by Rajeet Nair
This workflow implements a cost-optimized AI routing system using n8n. It intelligently decides whether a request should be handled by a low-cost model or escalated to a higher-quality model based on response confidence. The goal is to minimize LLM usage costs while maintaining high answer quality.

A query is first processed by a cheaper model. The response is then evaluated by a confidence-scoring AI agent. If the response quality is insufficient, the workflow automatically escalates the request to a more capable model. This approach is useful for building scalable AI systems where most queries can be answered cheaply, while complex queries still receive high-quality responses.

How It Works

1. **Webhook Trigger:** Receives a user query from an external application.
2. **Workflow Configuration:** Defines parameters such as the confidence threshold, cheap model cost, and expensive model cost.
3. **Cheap Model Response:** The query is first processed using GPT-4o-mini to minimize cost.
4. **Confidence Evaluation:** An AI agent analyzes the response quality, evaluating accuracy, completeness, clarity, and relevance.
5. **Structured Output Parsing:** The evaluator returns structured data including a confidence score, an explanation, and an escalation recommendation.
6. **Decision Logic:** If the confidence score is below the configured threshold, the workflow escalates the request.
7. **Expensive Model Escalation:** The query is reprocessed using GPT-4o for a higher-quality answer.
8. **Cost Calculation:** Token usage is analyzed to estimate the total cost and the cost difference between models.
9. **Final Response Formatting:** The workflow returns the AI response, the model used, the confidence score, the escalation status, and the estimated cost.

Setup Instructions

1. Create an OpenAI credential in n8n.
2. Configure the following nodes: Cheap Model (GPT-4o-mini), Expensive Model (GPT-4o), and the OpenAI Chat Model used by the confidence evaluator agent.
3. Adjust configuration values in the Workflow Configuration node: confidenceThreshold, cheapModelCostPer1kTokens, expensiveModelCostPer1kTokens.
4. Deploy the workflow and send requests to the Webhook URL.

Example webhook payload: { "query": "Explain how photosynthesis works." }
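The threshold-based routing described above can be sketched as it might appear in an n8n Code node. This is a minimal illustration, not the template's actual code; the field names (`confidence`, `totalTokens`, etc.) are assumptions based on the parameters listed in the description.

```javascript
// Minimal sketch of the escalation decision and cost estimate.
// Field names are illustrative assumptions, not the template's exact schema.
function routeQuery(evaluation, config) {
  const escalate = evaluation.confidence < config.confidenceThreshold;
  const costPer1k = escalate
    ? config.cheapModelCostPer1kTokens + config.expensiveModelCostPer1kTokens // both models ran
    : config.cheapModelCostPer1kTokens;
  return {
    escalate,
    modelUsed: escalate ? 'gpt-4o' : 'gpt-4o-mini',
    estimatedCost: (evaluation.totalTokens / 1000) * costPer1k,
  };
}

// Example: a low-confidence answer gets escalated to the expensive model.
const decision = routeQuery(
  { confidence: 0.45, totalTokens: 800 },
  { confidenceThreshold: 0.7, cheapModelCostPer1kTokens: 0.00015, expensiveModelCostPer1kTokens: 0.0025 }
);
```

Raising `confidenceThreshold` trades cost for quality: more requests escalate, but fewer weak answers slip through.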
by jellyfish
Template Description

This description details the template's purpose, how it works, and its key features. You can copy and use it directly.

Overview

This is a powerful n8n "meta-workflow" that acts as a Supervisor. Through a simple Telegram bot, you can dynamically create, manage, and delete countless independent, AI-driven market monitoring agents (Watchdogs). This template is a perfect implementation of the "Workflowception" (workflows managing workflows) concept in n8n, showcasing how to achieve ultimate automation by leveraging the n8n API.

How It Works

1. **Telegram Bot Interface:** Execute all operations by sending commands to your own Telegram bot. Use the bot to control the Watchdog workflows created below:
   - /add SYMBOL INTERVAL PROMPT: Add a new monitoring task.
   - /delete SYMBOL: Delete an existing monitoring task.
   - /list: List all currently running monitoring tasks.
   - /help: Get help information.
2. **Dynamic Workflow Management:** Upon receiving an /add command, the Supervisor reads a "Watchdog" template, fills in your provided parameters (such as trading pair and time interval), then automatically creates a brand-new, independent workflow via the n8n API and activates it.
3. **Persistent Storage:** All monitoring tasks are stored in a PostgreSQL database, so your configurations survive n8n restarts. The ID of each newly created workflow is also written back to the database to facilitate future deletion.
4. **AI-Powered Analysis:** Each created Watchdog workflow runs on a schedule. It fetches the latest candlestick chart by calling a self-hosted tradingview-snapshot service, available at https://github.com/0xcathiefish/tradingview-snapshot. This service works by simulating a login to your account and then using TradingView's official snapshot feature to generate an unrestricted, high-quality chart image. An example of a generated snapshot can be seen here: https://s3.tradingview.com/snapshots/u/uvxylM1Z.png.

To use this, download the Docker image from the packages in the GitHub repository mentioned above and run it as a container. The n8n workflow communicates directly with this container via an HTTP API to request and receive the chart snapshot. After obtaining the image, the workflow calls a multimodal AI model (Gemini), sending both the chart image and your custom text-based conditions (e.g., "breakout above previous high on high volume" or "break below 4-hour MA20") for analysis, enabling truly intelligent chart interpretation and alert triggering.

Key Features

- **Workflowception:** A prime example of one workflow using an API to create, activate, and delete other workflows.
- **Full Control via Telegram:** Manage your monitoring bots from anywhere, anytime, without needing to log into the n8n interface.
- **AI Visual Analysis:** Move beyond simple price alerts. Let an AI "read" the charts for you to enable complex, pattern-based, and indicator-based intelligent alerts.
- **Persistent & Extensible:** Built on PostgreSQL for stability and reliability. You can easily add more custom commands.
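The Supervisor's first step on each incoming Telegram message is parsing the command grammar above. A minimal sketch of that parsing, assuming the command formats listed in the description (the returned field names are illustrative):

```javascript
// Parse supervisor commands (/add, /delete, /list, /help) as a Code node might.
// The grammar comes from the template; exact output fields are assumptions.
function parseCommand(text) {
  const parts = text.trim().split(/\s+/);
  const cmd = parts[0];
  if (cmd === '/add') {
    if (parts.length < 4) return { error: 'Usage: /add SYMBOL INTERVAL PROMPT' };
    // Everything after the interval is the free-text AI condition.
    return { action: 'add', symbol: parts[1], interval: parts[2], prompt: parts.slice(3).join(' ') };
  }
  if (cmd === '/delete') {
    if (parts.length < 2) return { error: 'Usage: /delete SYMBOL' };
    return { action: 'delete', symbol: parts[1] };
  }
  if (cmd === '/list') return { action: 'list' };
  return { action: 'help' };
}

const task = parseCommand('/add BTCUSDT 4h breakout above previous high on high volume');
```

The parsed `task` would then drive the n8n API call that instantiates a new Watchdog workflow from the template.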
by Yusuke Yamamoto
This n8n template demonstrates a multi-modal AI recipe assistant that suggests delicious recipes based on user input, delivered via Telegram. The workflow can uniquely handle two types of input: a photo of your ingredients or a simple text list.

Use cases are many: get instant dinner ideas by taking a photo of your fridge contents, reduce food waste by finding recipes for leftover ingredients, or create a fun and interactive service for a cooking community or food delivery app!

Good to know

- This workflow uses two different AI models (one for vision, one for text generation), so costs will be incurred for each execution. See OpenRouter Pricing or your chosen model provider's pricing page for updated info.
- The AI prompts are in English, but the final recipe output is configured to be in Japanese. You can easily change the language by editing the prompt in the Recipe Generator node.

How it works

1. The workflow starts when a user sends a message or an image to your bot on Telegram via the Telegram Trigger.
2. An IF node checks whether the input is text or an image.
3. If an image is sent, the AI Vision Agent analyzes it to identify ingredients, and a Structured Output Parser then forces this data into a clean JSON list.
4. If text is sent, a Set node directly prepares the user's text as the ingredient list.
5. Both paths converge, providing a standardized ingredient list to the Recipe Generator agent. This AI acts as a professional chef to create three detailed recipes.
6. Crucially, a second Structured Output Parser takes the AI's creative text and formats it into a reliable JSON structure (with name, difficulty, instructions, etc.). This ensures the output is always predictable and easy to work with.
7. A final Set node uses a JavaScript expression to transform the structured recipe data into a beautiful, emoji-rich, easy-to-read message.
8. The formatted recipe suggestions are sent back to the user on Telegram.

How to use

1. Configure the Telegram Trigger with your own bot's API credentials.
2. Add your AI provider credentials in the OpenAI Vision Model and OpenAI Recipe Model nodes (this template uses OpenRouter, but it can be swapped for a direct OpenAI connection).

Requirements

- A Telegram account and a bot token.
- An AI provider account that supports vision and text models, such as OpenRouter or OpenAI.

Customising this workflow

- Modify the prompt in the Recipe Generator to include dietary restrictions (e.g., "vegan," "gluten-free") or to change the number of recipes suggested.
- Swap the Telegram nodes for Discord, Slack, or a Webhook to integrate this recipe bot into a different platform or your own application.
- Connect to a recipe database API to supplement the AI's suggestions with existing recipes.
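The final formatting step described above (structured recipe JSON into a readable Telegram message) might look like this. The recipe fields (`name`, `difficulty`, `instructions`) come from the description; the exact layout and emoji are assumptions.

```javascript
// Sketch: turn parsed recipe JSON into an emoji-rich Telegram message.
// Field names come from the template description; layout is illustrative.
function formatRecipes(recipes) {
  return recipes
    .map((r, i) =>
      [
        `🍳 Recipe ${i + 1}: ${r.name}`,
        `⭐ Difficulty: ${r.difficulty}`,
        ...r.instructions.map((step, n) => `${n + 1}. ${step}`),
      ].join('\n')
    )
    .join('\n\n');
}

const message = formatRecipes([
  { name: 'Tomato Omelette', difficulty: 'Easy', instructions: ['Beat the eggs.', 'Add diced tomato.', 'Cook and fold.'] },
]);
```

In the actual workflow this logic would live in the final Set node's JavaScript expression.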
by Rohit Dabra
WooCommerce AI Agent — n8n Workflow (Overview)

Description: Turn your WooCommerce store into a conversational AI assistant — create products, place orders, run reports, and manage coupons using natural language via n8n + an MCP Server.

Key features

- Natural-language commands mapped to WooCommerce actions (products, orders, reports, coupons).
- Structured JSON outputs + lightweight mapping to avoid schema errors.
- Calls routed through your MCP Server for secure, auditable tool execution.
- Minimal user prompts — the agent auto-fetches context and asks only when necessary.
- Extensible: add new tools or customize prompts/mappings easily.

Demo of the workflow: YouTube Video

🚀 Setup Guide: WooCommerce + AI Agent Workflow in n8n

1. Prerequisites
   - Running n8n instance
   - WooCommerce store with REST API keys
   - OpenAI API key
   - MCP server (production URL)
2. Import Workflow
   - Open the n8n dashboard, go to Workflows → Import, upload/paste the workflow JSON, and save it as WooCommerce AI Agent.
3. Configure Credentials
   - OpenAI: create a new credential → OpenAI API, add your API key → Save & test.
   - WooCommerce: create a new credential → WooCommerce API, enter the Base URL, Consumer Key & Secret → Save & test.
   - MCP Client: in the MCP Client node, set Server URL to your MCP server's production URL and add authentication if required.
4. Test Workflow
   - Open the workflow in the editor, run a sample request (e.g., create a test product), and verify the product appears in WooCommerce.
5. Activate Workflow
   - Once tested, click Activate in n8n. The workflow is now live 🎉
6. Troubleshooting
   - **Schema errors:** Ensure fields match WooCommerce node requirements.
   - **Connection issues:** Re-check credentials and the MCP URL.
by Julien DEL RIO
Who's it for

This template is designed for content creators, podcasters, businesses, and researchers who need to transcribe long audio recordings that exceed OpenAI Whisper's 25 MB file size limit (roughly 20 minutes of audio).

How it works

This workflow combines n8n, FileFlows, and the OpenAI Whisper API to transcribe audio files of any length:

1. A user uploads an MP3 file through a web form and provides an email address.
2. n8n splits the file into 4 MiB chunks and uploads them to FileFlows.
3. FileFlows uses FFmpeg to segment the audio into 15-minute chunks (safely under the 25 MB API limit).
4. Each segment is transcribed using OpenAI's Whisper API (configured for French by default).
5. All transcriptions are merged into a single text file.
6. The complete transcription is automatically emailed to the user.

Processing time: typically 10-15 minutes for a 1-hour audio file.

Requirements

- n8n instance (self-hosted or cloud)
- FileFlows with Docker and FFmpeg installed
- OpenAI API key (Whisper API access)
- Gmail account for email delivery
- Network access between n8n and FileFlows

Setup

Complete setup instructions, including FileFlows workflow import, credentials configuration, and storage setup, are provided in the workflow's sticky notes.

Cost

OpenAI Whisper API: $0.006 per minute. A 1-hour recording costs approximately $0.36.
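The segmentation and cost math above can be checked with a few lines, using the figures stated in the template ($0.006 per minute, 15-minute segments):

```javascript
// Sketch of the segmentation/cost arithmetic described in the template:
// audio is cut into 15-minute segments and Whisper bills $0.006 per minute.
function transcriptionPlan(durationMinutes, segmentMinutes = 15, costPerMinute = 0.006) {
  return {
    segments: Math.ceil(durationMinutes / segmentMinutes),
    estimatedCost: durationMinutes * costPerMinute,
  };
}

// A 1-hour recording: 4 segments, roughly $0.36 as the template states.
const plan = transcriptionPlan(60);
```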
by Rahul Joshi
Description

Automatically compare candidate resumes to job descriptions (PDFs) from Google Drive, generate a 0–100 fit score with gap analysis, and update Google Sheets—powered by Azure OpenAI (GPT-4o-mini). Fast, consistent screening with saved reports in Drive. 📈📄

What This Template Does

- Fetches job descriptions and resumes (PDF) from Google Drive. 📥
- Extracts clean text from both PDFs for analysis. 🧼
- Generates an AI evaluation (score, must-have gaps, nice-to-have bonuses, summary). 🤝
- Parses the AI output to structured JSON. 🧩
- Delivers a saved text report in Drive and updates a Google Sheet. 🗂️

Key Benefits

- Saves time with automated, consistent scoring. ⏱️
- Clear gap analysis for quick decisions. 🔍
- Audit-ready reports stored in Drive. 🧾
- Centralized tracking in Google Sheets. 📊
- No-code operation after initial setup. 🧑‍💻

Features

- Google Drive search and download for JDs and resumes. 📂
- PDF-to-text extraction for reliable parsing. 📝
- Azure OpenAI (GPT-4o-mini) comparison and scoring. 🤖
- Robust JSON parsing and error handling. 🛡️
- Automatic report creation in Drive. 💾
- Append or update candidate data in Google Sheets. 📑

Requirements

- n8n instance (cloud or self-hosted).
- Google Drive credentials in n8n with access to the JD and resume folders (e.g., “JD store”, “Resume_store”).
- Azure OpenAI access with a deployed GPT-4o-mini model and credentials in n8n.
- Google Sheets credentials in n8n to append or update candidate rows.
- PDFs for job descriptions and resumes stored in the designated Drive folders.

Target Audience

- Talent acquisition and HR operations teams. 🧠
- Recruiters (in-house and agencies). 🧑‍💼
- Hiring managers seeking consistent shortlisting. 🧭
- Ops teams standardizing candidate evaluation records. 🗃️

Step-by-Step Setup Instructions

1. Connect Google Drive and Google Sheets in n8n Credentials and verify folder access. 🔑
2. Add Azure OpenAI credentials and select GPT-4o-mini in the AI node. 🧠
3. Import the workflow and assign credentials to all nodes (Drive, AI, Sheets). 📦
4. Set folder references for JDs (“JD store”) and resumes (“Resume_store”). 📁
5. Run once to validate extraction, scoring, report creation, and sheet updates. ✅
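The "robust JSON parsing and error handling" step can be sketched as follows: pull a JSON object out of the model's reply even when it is wrapped in prose, and validate the score range. The field names (`score`, `gaps`, `summary`) are assumptions based on the evaluation fields listed above.

```javascript
// Sketch: extract and validate the AI evaluation JSON from a model reply.
// Field names are assumptions inferred from the template description.
function parseEvaluation(aiText) {
  const match = aiText.match(/\{[\s\S]*\}/); // grab the outermost {...} block
  if (!match) return { error: 'No JSON object found in AI output' };
  try {
    const data = JSON.parse(match[0]);
    if (typeof data.score !== 'number' || data.score < 0 || data.score > 100) {
      return { error: 'Missing or out-of-range score' };
    }
    return { score: data.score, gaps: data.gaps || [], summary: data.summary || '' };
  } catch (e) {
    return { error: 'Invalid JSON: ' + e.message };
  }
}

const result = parseEvaluation('Evaluation: {"score": 78, "gaps": ["No Kubernetes experience"], "summary": "Strong backend fit."}');
```

Guarding the parse like this keeps one malformed model reply from aborting a whole batch of candidates.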
by Moka Ouchi
How it works

This workflow automates the creation and management of a daily space-themed quiz in your Slack workspace. It's a fun way to engage your team and learn something new about the universe every day!

1. **Triggers Daily:** The workflow automatically runs at a scheduled time every day.
2. **Fetches NASA's Picture of the Day:** It starts by fetching the latest Astronomy Picture of the Day (APOD) from the official NASA API, including its title, explanation, and image URL.
3. **Generates a Quiz with AI:** Using the information from NASA, it prompts a Large Language Model (LLM) such as OpenAI's GPT to create a unique, multiple-choice quiz question.
4. **Posts to Slack:** The generated quiz is posted to a designated Slack channel. The bot automatically adds numbered reactions (1️⃣, 2️⃣, 3️⃣, 4️⃣) to the message, allowing users to vote.
5. **Waits and Tallies Results:** After a configurable waiting period, the workflow retrieves all reactions on the quiz message. A custom Code node then tallies the votes, identifies the users who answered correctly, and calculates the total number of participants.
6. **Announces the Winner:** Finally, it posts a follow-up message in the same channel revealing the correct answer and a detailed explanation, and mentions all the users who got it right.

Set up steps

This template should take about 10-15 minutes to set up.

Credentials:

- NASA: Add your NASA API credentials in the Get APOD node. You can get a free API key from the NASA API website.
- OpenAI: Add your OpenAI API credentials in the OpenAI: Create Quiz node.
- Slack: Add your Slack API credentials to all the Slack nodes. You'll need to create a Slack App with the following permissions: chat:write, reactions:read, and reactions:write.

Configuration:

- In the Workflow Configuration node, set channelId to the Slack channel where you want the quiz posted. You can also customize quizDifficulty, llmTone, and answerTimeoutMin to fit your audience.

Activate Workflow:

- Once configured, simply activate the workflow. It will run automatically at the time specified in the Schedule Trigger node (default is 21:00 daily).

Requirements

- An n8n instance
- A NASA API Key
- An OpenAI API Key
- A Slack App with the appropriate permissions and API credentials
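The vote-tallying Code node described in step 5 might look something like this. The reaction payload shape (`name`, `users`) loosely mirrors Slack's reactions API; the exact logic in the template is not published, so treat this as a sketch.

```javascript
// Sketch: tally Slack reaction votes (1️⃣–4️⃣) on the quiz message and collect
// the users who picked the correct option. Payload shape is an assumption.
const EMOJI_TO_OPTION = { one: 1, two: 2, three: 3, four: 4 };

function tallyVotes(reactions, correctOption) {
  const voters = new Set();
  let winners = [];
  for (const r of reactions) {
    const option = EMOJI_TO_OPTION[r.name];
    if (!option) continue; // ignore unrelated reactions
    r.users.forEach((u) => voters.add(u));
    if (option === correctOption) winners = r.users;
  }
  return { totalParticipants: voters.size, winners };
}

const tally = tallyVotes(
  [
    { name: 'one', users: ['U01'] },
    { name: 'three', users: ['U02', 'U03'] },
  ],
  3
);
```

A real implementation would also exclude the bot's own seed reactions from the participant count.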
by 荒城直也
Weather Monitoring Across Multiple Cities with OpenWeatherMap, GPT-4o-mini, and Discord

This workflow provides an automated, intelligent solution for global weather monitoring. It goes beyond simple data fetching by calculating a custom "Comfort Index" and using AI to provide human-like briefings and activity recommendations. Whether you are managing remote teams or planning travel, this template centralizes complex environmental data into actionable insights.

Who’s it for

- **Remote Team Leads:** Keep an eye on environmental conditions for team members across different time zones.
- **Frequent Travelers & Event Planners:** Monitor weather risks and comfort levels for multiple destinations simultaneously.
- **Smart Home/Life Enthusiasts:** Receive daily morning briefings on air quality and weather alerts directly in Discord.

How it works

1. **Schedule Trigger:** The workflow runs every 6 hours (customizable) to keep data up to date.
2. **Data Collection:** It loops through a list of cities, fetching current weather, 5-day forecasts, and Air Quality Index (AQI) data via the OpenWeatherMap node and an HTTP Request node.
3. **Smart Processing:** A Code node calculates a "Comfort Index" (based on temperature and humidity) and flags specific alerts (e.g., extreme heat, high winds, or poor AQI).
4. **AI Analysis:** The OpenAI node (using GPT-4o-mini) analyzes the aggregated data to compare cities and recommend the best location for outdoor activities.
5. **Conditional Routing:** An If node checks for active weather alerts. Urgent alerts are routed to a specific Discord notification, while routine briefings are sent normally.
6. **Archiving:** All processed data is appended to Google Sheets for historical tracking and future analysis.

How to set up

1. Credentials: Connect your OpenWeatherMap, OpenAI, Discord (Webhook), and Google Sheets accounts.
2. Locations: Open the 'Set Monitoring Locations' node and edit the JSON array with the cities, latitudes, and longitudes you wish to track.
3. Google Sheets: Configure the 'Log to Google Sheets' node with your specific Spreadsheet ID and Sheet Name.
4. Discord: Ensure your Webhook URL is correctly pasted into the Discord nodes.

Requirements

- **OpenWeatherMap API Key** (free tier is sufficient).
- **OpenAI API Key** (configured for GPT-4o-mini).
- **Discord Webhook URL.**
- **Google Sheet** with headers ready for logging.

How to customize

- **Adjust Alert Thresholds:** Modify the logic in the 'Process and Analyze Data' Code node to change what triggers a "High Wind" or "Extreme Heat" alert.
- **Refine AI Persona:** Edit the System Prompt in the 'AI Weather Analysis' node to change the tone or focus of the weather briefing.
- **Change Frequency:** Adjust the Schedule Trigger to run once a day or every hour, depending on your needs.
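The "Smart Processing" Code node can be sketched as below. The template does not publish its exact Comfort Index formula, so this version (penalizing distance from 21°C and from 45% humidity) and the alert thresholds are illustrative assumptions; the actual node's logic may differ.

```javascript
// Sketch of the Comfort Index + alert flags. Formula and thresholds are
// illustrative assumptions, not the template's actual values.
function processCity(reading) {
  const tempPenalty = Math.abs(reading.tempC - 21) * 2;
  const humidityPenalty = Math.abs(reading.humidity - 45) * 0.5;
  const comfortIndex = Math.max(0, Math.round(100 - tempPenalty - humidityPenalty));

  const alerts = [];
  if (reading.tempC >= 35) alerts.push('Extreme Heat');
  if (reading.windKmh >= 60) alerts.push('High Wind');
  if (reading.aqi >= 4) alerts.push('Poor AQI'); // OpenWeatherMap AQI scale runs 1-5
  return { city: reading.city, comfortIndex, alerts, hasAlert: alerts.length > 0 };
}

const report = processCity({ city: 'Tokyo', tempC: 36, humidity: 70, windKmh: 20, aqi: 2 });
```

The `hasAlert` flag is what the If node in step 5 would branch on to decide between urgent and routine Discord messages.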
by yusan25c
How It Works

This template is a workflow that registers Jira tickets to Pinecone. By combining it with the "Automated Jira Ticket Responses with GPT-4 and Pinecone Knowledge Base" template, you can continuously improve the quality of automated responses in Jira.

Prerequisites

- A Jira account and credentials (API key and email address)
- A Pinecone account and credentials (API key and environment settings)
- OpenAI credentials (API key)

Setup Instructions

1. Jira Credentials: Register your Jira credentials (API key and email address) in n8n.
2. Vector Database Setup (Pinecone): Register your Pinecone credentials (API key and environment variables) in n8n.
3. AI Node: Configure the OpenAI node with your credentials (API key).

Step by Step

1. Scheduled Trigger: The workflow runs at regular intervals according to the schedule set in the Scheduled Trigger node.
2. Jira Trigger (Completed Tickets): Retrieves the summary, description, and comments of completed Jira tickets.
3. Register to Pinecone: Converts the retrieved ticket information into vectors and registers them in Pinecone.

Notes

Configure the Scheduled Trigger interval carefully to avoid exceeding API rate limits.

Further Reference

For a detailed walkthrough (in Japanese), see this article: 👉 Automating knowledge registration to Pinecone with n8n (Qiita). You can find the template file on GitHub here: 👉 Template File on GitHub
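Before embedding, the ticket's summary, description, and comments need to be flattened into a single text document. A minimal sketch of that step (the field layout loosely mirrors Jira's data; the exact structure used by the template is an assumption):

```javascript
// Sketch: flatten a completed ticket's fields into one document for embedding.
// Field names are assumptions based on the template description.
function ticketToDocument(ticket) {
  const comments = (ticket.comments || []).map((c) => `- ${c}`).join('\n');
  return [
    `Summary: ${ticket.summary}`,
    `Description: ${ticket.description}`,
    comments ? `Comments:\n${comments}` : 'Comments: (none)',
  ].join('\n');
}

const doc = ticketToDocument({
  summary: 'Login page times out',
  description: 'Users report 504 errors at peak hours.',
  comments: ['Increased gateway timeout to 60s.', 'Resolved after scaling the auth service.'],
});
```

Including the resolution comments is what makes the stored vectors useful as a knowledge base for the companion auto-response template.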
by The O Suite
How the n8n OWASP Scanner Works & How to Set It Up

How It Works (Simple Flow)

1. **Input:** Enter the target URL + endpoint (e.g., https://example.com, /login).
2. **Scan:** The workflow executes 5 parallel HTTP tests (Headers, Cookies, CORS, HTTPS, Methods).
3. **Analyze:** Pure JS logic checks OWASP ASVS (Application Security Verification Standard) rules (no external tools).
4. **Merge:** Combines all findings into one Markdown report.
5. **Output:** Auto-generates + downloads scan-2025-11-16_210900.md (example filename).
6. **Email (optional):** Forward the report to an email address using Gmail.

Setup in 3 Steps (2 Minutes)

1. Import the workflow: copy the full JSON (from "Export Final Workflow"), then in n8n → Workflows → Import from JSON → Paste → Import.
2. (Optional) Connect your Gmail credentials in the last node to auto-email the report.
3. Click Execute the workflow, enter a URL in the new window, then click 'Submit'. You can alternatively download or receive the Markdown report directly from the Markdown to File node.

(Supports any HTTP/HTTPS endpoint. Works in n8n Cloud or self-hosted.)
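The "pure JS" analysis step can be illustrated with a header check. The workflow's exact ASVS rule set is not published, so the header list below is an assumed subset, not the template's actual rules.

```javascript
// Sketch: flag common security headers missing from an HTTP response.
// The header list is an illustrative subset of typical ASVS-style checks.
const REQUIRED_HEADERS = [
  'strict-transport-security',
  'content-security-policy',
  'x-content-type-options',
  'x-frame-options',
];

function checkHeaders(responseHeaders) {
  const present = Object.keys(responseHeaders).map((h) => h.toLowerCase());
  const missing = REQUIRED_HEADERS.filter((h) => !present.includes(h));
  return missing.map((h) => ({ severity: 'medium', finding: `Missing security header: ${h}` }));
}

const findings = checkHeaders({
  'Content-Type': 'text/html',
  'X-Frame-Options': 'DENY',
});
```

Each of the five parallel tests would emit findings in this shape, which the Merge step then concatenates into the Markdown report.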
by Mano
📰 What This Workflow Does

This intelligent news monitoring system automatically:

• RSS Feed Aggregation: Pulls the latest headlines from Google News RSS feeds and Hacker News
• AI Content Filtering: Identifies and prioritizes AI-related news from the past 24 hours
• Smart Summarization: Uses OpenAI to create concise, informative summaries of top stories
• Telegram Delivery: Sends formatted news digests directly to your Telegram channel
• Scheduled Execution: Runs automatically every morning at 8:00 AM (configurable)

🎯 Key Features

✅ Multi-Source News: Combines Google News and Hacker News for comprehensive coverage
✅ AI-Powered Filtering: Automatically identifies relevant AI and technology news
✅ Intelligent Summarization: OpenAI generates clear, concise summaries with key insights
✅ Telegram Integration: Instant delivery to your preferred chat or channel
✅ Daily Automation: Scheduled to run every morning for fresh news updates
✅ Customizable Timing: Easy to adjust the schedule for different time zones

🔧 How It Works

1. Scheduled Trigger: The workflow activates daily at 8:00 AM (or your preferred time)
2. RSS Feed Reading: Fetches the latest articles from Google News and Hacker News feeds
3. Content Filtering: Identifies AI-related stories from the past 24 hours
4. AI Summarization: OpenAI processes and summarizes the most important stories
5. Telegram Delivery: Sends the formatted news digest to your Telegram channel

📋 Setup Requirements

• OpenAI API Key: For AI-powered news summarization
• Telegram Bot: Create via @BotFather and get the bot token + chat ID
• RSS Feed Access: Google News and Hacker News RSS feeds (public)

⚙️ Configuration Steps

1. Set Up Telegram Bot: Message @BotFather on Telegram, create a new bot with the /newbot command, and save the bot token and chat ID.
2. Configure OpenAI: Add OpenAI API credentials in n8n and ensure access to GPT models for summarization.
3. Update RSS Feeds: Verify the Google News RSS feed URLs and confirm Hacker News feed accessibility.
4. Schedule Timing: Adjust the Schedule Trigger for your time zone (default: 8:00 AM daily; modify as needed).
5. Test & Deploy: Run a test execution to verify all connections, then activate the workflow for daily automation.

🎨 Customization Options

• Time Zone Adjustment: Modify the Schedule Trigger for different regions
• News Sources: Add additional RSS feeds for broader coverage
• Filtering Criteria: Adjust AI prompts to focus on specific topics
• Summary Length: Customize OpenAI prompts for different detail levels
• Delivery Format: Modify the Telegram message formatting and structure

💡 Use Cases

• AI Professionals: Stay updated on the latest AI developments and industry news
• Tech Teams: Monitor technology trends and competitor announcements
• Researchers: Track academic and industry research developments
• Content Creators: Source material for AI-focused content and newsletters
• Business Leaders: Stay informed about AI market trends and opportunities
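The content-filtering step ("AI-related news from the past 24 hours") could be sketched like this. The keyword list is an illustrative assumption; the naive substring match is fine for a sketch but would need refinement (e.g., word boundaries) in production.

```javascript
// Sketch: keep feed items that look AI-related and are under 24 hours old.
// Keyword list is an assumption; substring matching is deliberately naive.
const AI_KEYWORDS = ['artificial intelligence', 'machine learning', 'llm', 'gpt', 'openai'];

function filterAiNews(items, now = Date.now()) {
  const dayMs = 24 * 60 * 60 * 1000;
  return items.filter((item) => {
    const fresh = now - new Date(item.pubDate).getTime() <= dayMs;
    const text = item.title.toLowerCase();
    return fresh && AI_KEYWORDS.some((k) => text.includes(k));
  });
}

const picked = filterAiNews([
  { title: 'New LLM benchmark released', pubDate: new Date(Date.now() - 3600 * 1000).toISOString() },
  { title: 'Stock market update', pubDate: new Date(Date.now() - 3600 * 1000).toISOString() },
  { title: 'GPT model update', pubDate: new Date(Date.now() - 48 * 3600 * 1000).toISOString() },
]);
```

Only the fresh, AI-related item survives: the off-topic story and the two-day-old story are both dropped before summarization.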
by Thiago Vazzoler Loureiro
Description

This workflow vectorizes the TUSS (Terminologia Unificada da Saúde Suplementar) table by transforming medical procedures into vector embeddings ready for semantic search. It automates the import of TUSS data, performs text preprocessing, and uses Google Gemini to generate vector embeddings. The resulting vectors can be stored in a vector database, such as PostgreSQL with pgvector, enabling efficient semantic queries across healthcare data.

What Problem Does This Solve?

Searching for medical procedures using traditional keyword matching is often imprecise. This workflow enhances the search experience by enabling semantic similarity search, which retrieves more relevant results based on the meaning of the query instead of exact word matches.

How It Works

1. Import TUSS data: Load medical procedure entries from the TUSS table.
2. Preprocess text: Clean and prepare the text for embedding.
3. Generate embeddings: Use Google Gemini to convert each procedure into a semantic vector.
4. Store vectors: Save the output in a PostgreSQL database with the pgvector extension.

Prerequisites

- An n8n instance (self-hosted).
- A PostgreSQL database with the pgvector extension enabled.
- Access to the Google Gemini API.
- TUSS data in a structured format (CSV, database, or API source).

Customization Tips

- You can adapt the preprocessing logic to your own language or domain-specific terms.
- Swap Google Gemini for another embedding model, such as OpenAI or Cohere.
- Adjust the chunking logic to control the granularity of semantic representation.

Setup Instructions

1. Prepare a source (database or CSV) with TUSS data. You need at least two fields:
   - CD_ITEM (medical procedure code)
   - DS_ITEM (medical procedure description)
2. Configure your Oracle or PostgreSQL database credentials in the Credentials section of n8n.
3. Make sure your PostgreSQL database has pgvector installed.
4. Replace the placeholder table and column names with your actual TUSS table.
5. Connect your Google Gemini credentials (via OpenAI proxy or official connector).
6. Run the workflow to vectorize all medical procedure descriptions.
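The preprocessing step (clean a TUSS row before embedding) might look like this. The cleaning rules shown (lowercasing, Unicode normalization, whitespace collapsing) are illustrative assumptions; adapt them to your own data as the customization tips suggest.

```javascript
// Sketch: normalize a TUSS procedure description and pair it with its code
// before sending it to the embedding model. Cleaning rules are assumptions.
function preprocessProcedure(row) {
  const text = row.DS_ITEM
    .toLowerCase()
    .normalize('NFC')       // keep Portuguese accents in a consistent form
    .replace(/\s+/g, ' ')   // collapse runs of whitespace
    .trim();
  return { id: String(row.CD_ITEM), text };
}

const prepared = preprocessProcedure({
  CD_ITEM: 10101012,
  DS_ITEM: '  Consulta  em consultório (no horário normal) ',
});
```

Keeping `CD_ITEM` alongside each vector lets a semantic search result map straight back to the official procedure code.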