by Meelioo
**How it works**

This beginner-friendly workflow demonstrates the core building blocks of n8n. It guides you through:

- **Triggers** – Start workflows manually, on a schedule, via webhooks, or through chat.
- **Data processing** – Use Set and Code nodes to create, transform, and enrich data.
- **Logic and branching** – Apply conditions with IF nodes and merge different branches back together.
- **API integrations** – Fetch external data (e.g., users from an API), split arrays into individual items, and extract useful fields.
- **AI-powered steps** – Connect to OpenAI for generating fun facts or build interactive assistants with chat triggers, memory, and tools.
- **Responses** – Return structured results via webhooks or summary nodes.

By the end, it demonstrates a full flow: creating data → transforming it → making decisions → calling APIs → using AI → responding with outputs.

**Set up steps**

Time required: 5–10 minutes.

What you need:
- An n8n instance (cloud or self-hosted).
- Optional: API credentials (e.g., OpenAI) if you want to test AI features.

Setup flow:
1. Import this workflow.
2. Add your API keys where needed (OpenAI, etc.).
3. Trigger the workflow manually or test with webhooks.

> 👉 Detailed node explanations and examples are already included as sticky notes inside the workflow itself, so you can learn step by step as you explore.
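For a taste of the Code node steps, here is a minimal sketch of an n8n Code node that enriches incoming items. The field names are illustrative, not the exact ones used in the template:

```typescript
// n8n Code node (mode: Run Once for All Items). $input is the Code node's
// built-in global; field names below are illustrative only.
const enriched = $input.all().map((item) => ({
  json: {
    ...item.json,
    // combine two fields into one, a typical "transform and enrich" step
    fullName: `${item.json.firstName ?? ""} ${item.json.lastName ?? ""}`.trim(),
    processedAt: new Date().toISOString(),
  },
}));

// Code nodes return an array of { json } objects to pass items downstream.
return enriched;
```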
by Ruthwik
📧 AI-Powered Email Categorization & Labeling in Zoho Mail

This n8n template demonstrates how to use AI text classification to automatically categorize incoming emails in Zoho Mail and apply the correct label (e.g., Support, Billing, HR). It saves time by keeping your inbox structured and ensures emails are routed to the right category.

Use cases include:
- Routing customer support requests to the correct team.
- Organizing billing and finance communications separately.
- Streamlining HR and recruitment email handling.
- Reducing inbox clutter and ensuring no important message is missed.

ℹ️ Good to know
- You’ll need to configure Zoho OAuth credentials — see Self Client Overview, Authorization Code Flow, and the Zoho Mail OAuth Guide.
- The labels must already exist in Zoho Mail (e.g., Support, Billing, HR). The workflow fetches these labels and applies them automatically.
- The Zoho Mail API domain changes depending on your account region:
  - .com → Global accounts (https://mail.zoho.com/api/...)
  - .eu → EU accounts (https://mail.zoho.eu/api/...)
  - .in → India accounts (https://mail.zoho.in/api/...)
  - Example: for an EU account, the endpoint would be https://mail.zoho.eu/api/accounts/<accountID>/updatemessage
- The AI model used for text classification may incur costs depending on your provider (e.g., OpenRouter). Start by testing with a small set of emails before enabling it for your full inbox.

🔄 How it works
1. A new email in Zoho Mail triggers the workflow.
2. OAuth authentication retrieves access to Zoho Mail’s API.
3. All available labels are fetched, and a label map (display name → ID) is created.
4. The AI model analyzes the subject and body to predict the correct category.
5. The workflow routes the email to the right category branch.
6. The matching Zoho Mail label is applied (the final node is deactivated by default).

🛠️ How to use
1. Create the required labels (e.g., Support, Billing, HR) in your Zoho Mail account before running the workflow.
2. Replace the Zoho Mail Account ID in the Set Account ID node.
3. Configure your Zoho OAuth credentials in the Get Access Token node.
4. Update the API base URL to match your Zoho account’s region (.com, .eu, .in, etc.).
5. Activate the Apply Label to Email node once ready for production.
6. Optionally, adjust the categories in the AI classifier prompt to fit your organization’s needs.

📋 Requirements
- Zoho Mail account with API access enabled.
- Labels created in Zoho Mail for each category you want to classify.
- OAuth credentials set up in n8n.
- Correct Zoho Mail API domain (.com, .eu, .in) based on your account region.
- An AI model (via OpenRouter or another provider) for text classification.

🎨 Customising this workflow
This workflow can be adapted to many inbox management scenarios. Examples include:
- Auto-routing customer inquiries to specific departments.
- Prioritizing VIP client emails with special labels.
- Filtering job applications directly into an HR-managed folder.
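For orientation, the label-application call at the end of the workflow looks roughly like the sketch below. The endpoint comes from the template itself, but the request body field names are assumptions; confirm them against Zoho Mail's updatemessage documentation.

```typescript
// Sketch of applying a label via Zoho Mail's updatemessage endpoint.
// Body field names (mode, labelId, messageId) are assumptions; check Zoho's docs.
async function applyLabel(
  accessToken: string,
  accountId: string,
  messageId: string,
  labelId: string,
): Promise<void> {
  const res = await fetch(
    // use your region's domain: mail.zoho.com, mail.zoho.eu, or mail.zoho.in
    `https://mail.zoho.eu/api/accounts/${accountId}/updatemessage`,
    {
      method: "PUT",
      headers: {
        Authorization: `Zoho-oauthtoken ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ mode: "applyLabel", labelId: [labelId], messageId: [messageId] }),
    },
  );
  if (!res.ok) throw new Error(`Zoho Mail API error: ${res.status}`);
}
```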
by John Alejandro Silva
🤖💬 Smart Telegram AI Assistant with Memory Summarization & Dynamic Model Selection

> Optimize your AI workflows, cut costs, and get faster, more accurate answers.

📋 Description
Tired of expensive AI calls, slow responses, or bots that forget your context? This Telegram AI Assistant template is designed to optimize cost, speed, and precision in your AI-powered conversations.

By combining PostgreSQL chat memory, AI summarization, and dynamic model selection, this workflow ensures you only pay for what you really need. Simple queries get routed to lightweight models, while complex requests automatically trigger more advanced ones. The result? Smarter context, lower costs, and better answers.

This template is perfect for anyone who wants to:
- ⚡ Save money by using cheaper models for easy tasks.
- 🧠 Keep context relevant with AI-powered summarization.
- ⏱️ Respond faster thanks to optimized chat memory storage.
- 💬 Deliver better answers directly inside Telegram.

✨ Key Benefits
- 💸 **Cost Optimization**: Automatically routes simple requests to Gemini Flash Lite and reserves Gemini Pro only for complex reasoning.
- 🧠 **Smarter Context**: Summarization ensures only the most relevant chat history is used.
- ⏱️ **Faster Workflows**: Storing user + agent messages in a single row halves the DB queries and saves ~0.3s per response.
- 🎤 **Voice Message Support**: Converts Telegram voice notes to text and replies intelligently.
- 🛡️ **Error-Proof Formatting**: Safe MarkdownV2 ensures Telegram-ready answers.

💼 Use Case
This template is for anyone who needs an AI chatbot on Telegram that balances cost, performance, and intelligence. Customer support teams can reduce expenses by using lightweight models for FAQs. Freelancers and consultants can offer faster AI-powered chats without losing context. Power users can handle voice and text seamlessly while keeping conversations memory-aware. Whether you’re scaling a business or just want a smarter assistant, this workflow adapts to your needs and budget.

💬 Example Interactions
- **Quick Q&A** → Routed to Gemini Flash Lite for fast, low-cost answers.
- **Complex problem-solving** → Sent to Gemini Pro for in-depth reasoning.
- **Voice messages** → Automatically transcribed, summarized, and answered.
- **Long conversations** → Context is summarized, ensuring precise and efficient replies.

🔑 Required Credentials
- **Telegram Bot API** (bot token)
- **PostgreSQL** (database connection)
- **Google Gemini API** (Flash Lite, Flash, Pro)

⚙️ Setup Instructions
1. 🗄️ Create the PostgreSQL table (chat_memory) from the SQL in the gray section.
2. 🔌 Configure the Telegram Trigger with your bot token.
3. 🤖 Connect your Gemini API credentials.
4. 🗂️ Set up the PostgreSQL nodes with your DB details.
5. ▶️ Activate the workflow and start chatting with your AI-powered Telegram bot.

🏷 Tags
telegram, ai-assistant, chatbot, postgresql, summarization, memory, gemini, dynamic-routing, workflow-optimization, cost-saving, voice-to-text

🙏 Acknowledgement
A special thank you to Davide for the inspiration behind this template. His work on the AI Orchestrator that dynamically selects models based on input type served as a foundational guide for this architecture.

💡 Need Assistance?
Want to customize this workflow for your business or project? Let’s connect:
- 📧 Email: johnsilva11031@gmail.com
- 🔗 LinkedIn: John Alejandro Silva Rodríguez
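The single-row storage optimization can be pictured like this. The real table definition ships in the template's SQL section, so treat the columns here as an illustrative assumption:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// One INSERT persists both sides of a conversational turn, instead of one
// row for the user message and another for the agent reply.
// Illustrative columns; the template's own chat_memory schema may differ.
async function saveTurn(chatId: string, userMessage: string, agentReply: string) {
  await pool.query(
    `INSERT INTO chat_memory (chat_id, user_message, agent_reply, created_at)
     VALUES ($1, $2, $3, NOW())`,
    [chatId, userMessage, agentReply],
  );
}
```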
by Growth AI
Intelligent chatbot with custom knowledge base

**Who's it for**
Businesses, developers, and organizations who need a customizable AI chatbot for internal documentation access, customer support, e-commerce assistance, or any use case requiring intelligent conversation with access to specific knowledge bases.

**What it does**
This workflow creates a fully customizable AI chatbot that can be deployed on any platform supporting webhook triggers (websites, Slack, Teams, etc.). The chatbot accesses a personalized knowledge base stored in Supabase and can perform advanced actions like sending emails, scheduling appointments, or updating databases beyond simple conversation.

**How it works**
The workflow combines several powerful components:
- Webhook Trigger: accepts messages from any platform that supports webhooks
- AI Agent: processes user queries with a customizable personality and instructions
- Vector Database: searches relevant information from your Supabase knowledge base
- Memory System: maintains conversation history for context and traceability
- Action Tools: performs additional tasks like email sending or calendar booking

**Technical architecture**
- The chat trigger connects directly to the AI Agent.
- The language model, memory, and vector store all connect as tools/components to the AI Agent.
- Embeddings connect specifically to the Supabase Vector Store for similarity search.

**Requirements**
- Supabase account and project
- AI model API key (any LLM provider of your choice)
- OpenAI API key (for embeddings - this is covered in Cole Medin's tutorial)
- n8n built-in PostgreSQL access (for conversation memory)
- Platform-specific webhook configuration (optional)

**How to set up**

Step 1: Configure your trigger
- The template uses n8n's default chat trigger.
- For external platforms: replace it with a webhook trigger and configure your platform's webhook URL.
- Supported platforms: any service with webhook capabilities (websites, Slack, Teams, Discord, etc.).

Step 2: Set up your knowledge base
- For creating and managing your vector database, watch Cole Medin's tutorial on document vectorization.
- The video shows how to build a complete knowledge base on Supabase, covering document processing, embedding creation, and database optimization.
- Important: the video explains the OpenAI embeddings configuration required for vector search.

Step 3: Configure the AI agent
- Define your prompt: customize the agent's personality and role. Example: "You are the virtual assistant for example.com. Help users by answering their questions about our products and services."
- Select your language model: choose any AI provider you prefer (OpenAI, Anthropic, Google, etc.).
- Set behavior parameters: define response style, tone, and limitations.

Step 4: Connect the Supabase Vector Store
- Add the "Supabase Vector Store" tool to your agent and configure your Supabase project credentials.
- Mode: set to "retrieve-as-tool" for automatic agent integration.
- Tool description: customize the description (default: "Database") to describe your knowledge base.
- Table configuration: specify the table containing your knowledge base (the example uses "growth_ai_documents") and ensure the table name matches your actual knowledge base structure.
- Multiple tables: you can connect several tables for an organized data structure.
- The agent automatically decides when to search the knowledge base based on user queries.

Step 5: Set up conversation memory (recommended)
- Use "Postgres Chat Memory" with n8n's built-in PostgreSQL credentials.
- Table name: choose a name for your chat history table (it will be auto-created).
- Context window length: set to 20 messages by default (adjustable based on your needs).
- Benefits: conversation traceability and analytics, context retention across messages, and unique conversation IDs for user sessions. History is stored in n8n's database, not Supabase.

**How to customize the workflow**

Basic conversation features:
- Response style: modify prompts to change personality and tone.
- Knowledge scope: update Supabase tables to expand or focus the knowledge base.
- Language support: configure for multiple languages.
- Response length: set limits for concise or detailed answers.
- Memory retention: adjust the context window length for longer or shorter conversation memory.

Advanced action capabilities - the chatbot can be extended with additional tools for:
- Email automation: send support emails when users request assistance.
- Calendar integration: book appointments directly in Google Calendar.
- Database updates: modify Airtable or other databases based on user interactions.
- API integrations: connect to external services and systems.
- File handling: process and analyze uploaded documents.

**Platform-specific deployments**

Website integration:
- Replace the chat trigger with a webhook trigger.
- Configure your website's chat widget to send messages to the n8n webhook URL.
- Handle response formatting for your specific chat interface.

Slack/Teams deployment:
- Set up a webhook trigger with the Slack/Teams webhook URL.
- Configure response formatting for platform-specific message structures.
- Add platform-specific features (mentions, channels, etc.).

E-commerce integration:
- Connect to product databases.
- Add order tracking capabilities.
- Integrate with payment systems.
- Configure support ticket creation.

**Results interpretation**

Conversation management:
- Chat history: all conversations are stored in n8n's PostgreSQL database with unique IDs.
- Context tracking: the agent maintains conversation flow and references previous messages.
- Analytics potential: historical data is available for analysis and improvement.

Knowledge retrieval (see the sketch below):
- Semantic search: the vector database returns the most relevant information based on meaning, not just keywords.
- Automatic decision: the agent automatically determines when to search the knowledge base.
- Source tracking: answers can be traced back to source documents.
- Accuracy improvement: continuously refine the knowledge base based on user queries.

**Use cases**

Internal applications:
- Developer documentation: quick access to technical guides and APIs.
- HR support: employee handbook and policy questions.
- IT helpdesk: troubleshooting guides and system information.
- Training assistant: learning materials and procedure guidance.

External customer service:
- E-commerce support: product information and order assistance.
- Technical support: user manuals and troubleshooting.
- Sales assistance: product recommendations and pricing.
- FAQ automation: common questions and instant responses.

Specialized implementations:
- Lead qualification: gather customer information and schedule sales calls.
- Appointment booking: healthcare, consulting, or service appointments.
- Order processing: take orders and update inventory systems.
- Multi-language support: global customer service with language detection.

**Workflow limitations**
- Knowledge base dependency: quality depends on the source documentation and embedding setup.
- Memory storage: requires an active n8n PostgreSQL connection for conversation history.
- Platform restrictions: some platforms may have webhook limitations.
- Response time: vector search may add a slight delay to responses.
- Token limits: large context windows may increase API costs.
- Embedding costs: OpenAI embeddings are required for vector search functionality.
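Under the hood, the semantic search the agent performs resembles the sketch below. It assumes a match_documents RPC like the one created in Cole Medin's Supabase setup; your function and table names may differ.

```typescript
import { createClient } from "@supabase/supabase-js";
import OpenAI from "openai";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Embed the user's question, then ask Postgres (pgvector) for the closest chunks.
// Assumes a `match_documents` SQL function, as in common Supabase vector setups.
async function searchKnowledgeBase(question: string) {
  const embeddingResponse = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: question,
  });
  const queryEmbedding = embeddingResponse.data[0].embedding;

  const { data, error } = await supabase.rpc("match_documents", {
    query_embedding: queryEmbedding,
    match_count: 5,
  });
  if (error) throw error;
  return data; // top-5 chunks (content + metadata) for the agent's answer
}
```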
by Daniel Agrici
This workflow automates business intelligence. Submit one URL, and it scrapes the website, uses AI to perform a comprehensive analysis, and generates a professional report in Google Docs and PDF format. It's perfect for agencies, freelancers, and consultants who need to streamline client research or competitive analysis.

**How It Works**
The workflow is triggered by a form input, where you provide a single website URL.
1. Scrape: it uses Firecrawl to scrape the sitemap and get the full content from the target website.
2. Analyze: the main workflow calls a Tools Workflow (included below) which uses Google Gemini and Perplexity AI agents to analyze the scraped content and extract key business information.
3. Generate & Deliver: all the extracted data is formatted and used to populate a template in Google Docs. The final report is saved to Google Drive and delivered via Gmail.

**What It Generates**
The final report is a comprehensive business analysis, including:
- Business Overview: a full company description.
- Target Audience Personas: defines the demographic and psychographic profiles of ideal customers.
- Brand & UVP: extracts the brand's personality matrix and its Unique Value Proposition (UVP).
- Customer Journey: maps the typical customer journey from Awareness to Loyalty.

**Required Tools**
This workflow requires n8n and API keys/credentials for the following services:
- Firecrawl (for scraping)
- Perplexity (for AI analysis)
- Google Gemini (for AI analysis)
- Google Services (for Docs, Drive, and Gmail)

**⚠️ Required: Tools Workflow**
This workflow will not work without its "Tools" sub-workflow. Please create a new, separate workflow in n8n, name it (e.g., "Business Analysis Tools"), and paste the following code into it.

```json
{ "name": "Business Analysis Workflow Tools", "nodes": [ { "parameters": { "workflowInputs": { "values": [ { "name": "function" }, { "name": "keyword" }, { "name": "url" }, { "name": "location_code" }, { "name": "language_code" } ] } }, "type": "n8n-nodes-base.executeWorkflowTrigger", "typeVersion": 1.1, "position": [ -448, 800 ], "id": "e79e0605-f9ac-4166-894c-e5aa9bd75bac", "name": "When Executed by Another Workflow" }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "8d7d3035-3a57-47ee-b1d1-dd7bfcab9114", "leftValue": "serp_search", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "serp_search" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "bb2c23eb-862d-4582-961e-5a8d8338842c", "leftValue": "ai_mode", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "ai_mode" }, { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict", "version": 2 }, "conditions": [ { "id": "4603eee1-3888-4e32-b3b9-4f299dfd6df3", "leftValue": "internal_links", "rightValue": "={{ $json.function }}", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "renameOutput": true, "outputKey": "internal_links" } ] }, "options": {} }, "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ -208, 784 ], "id": "72c37890-7054-48d8-a508-47ed981551d6", "name": "Switch" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/organic/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword.replace(/[:'\"\\\\/]/g, '') }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"depth\": 10,\n \"group_organic_results\": true,\n \"load_async_ai_overview\": true,\n \"people_also_ask_click_depth\": 1\n }\n]", "options": { "redirect": { "redirect": { "followRedirects": false } } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 512 ], "id": "6203f722-b590-4a25-8953-8753a44eb3cb", "name": "SERP Google", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## SERP Google", "height": 272, "width": 688, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 432 ], "id": "81593217-034f-466d-9055-03ab6b2d7d08", "name": "Sticky Note3" }, { "parameters": { "assignments": { "assignments": [ { "id": "97ef7ee0-bc97-4089-bc37-c0545e28ed9f", "name": "platform", "value": "={{ $json.tasks[0].data.se }}", "type": "string" }, { "id": "9299e6bb-bd36-4691-bc6c-655795a6226e", "name": "type", "value": "={{ $json.tasks[0].data.se_type }}", "type": "string" }, { "id": "2dc26c8e-713c-4a59-a353-9d9259109e74", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "84c9be31-8f1d-4a67-9d13-897910d7ec18", "name": "results", "value": "={{ $json.tasks[0].result }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 512 ], "id": "a916551a-009b-403f-b02e-3951d54d2407", "name": "Prepare SERP output" }, { "parameters": { "content": "# Google Organic Search API\n\nThis API lets you retrieve real-time Google search results with a wide range of parameters and custom settings. \nThe response includes structured data for all available SERP features, along with a direct URL to the search results page. \n\n👉 Documentation\n", "height": 272, "width": 496, "color": 4 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 976, 432 ], "id": "87672b01-7477-4b43-9ccc-523ef8d91c64", "name": "Sticky Note17" }, { "parameters": { "method": "POST", "url": "https://api.dataforseo.com/v3/serp/google/ai_mode/live/advanced", "authentication": "genericCredentialType", "genericAuthType": "httpBasicAuth", "sendBody": true, "specifyBody": "json", "jsonBody": "=[\n {\n \"keyword\": \"{{ $json.keyword }}\",\n \"location_code\": {{ $json.location_code }},\n \"language_code\": \"{{ $json.language_code }}\",\n \"device\": \"mobile\",\n \"os\": \"android\"\n }\n]", "options": { "redirect": { "redirect": {} } } }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 384, 800 ], "id": "fb0001c4-d590-45b3-a3d0-cac7174741d3", "name": "AI Mode", "credentials": { "httpBasicAuth": { "id": "n5o00CCWcmHFeI1p", "name": "DataForSEO" } } }, { "parameters": { "content": "## AI Mode", "height": 272, "width": 512, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 720 ], "id": "2cea3312-31f8-4ff0-b385-5b76b836274c", "name": "Sticky Note11" }, { "parameters": { "assignments": { "assignments": [ { "id": "b822f458-ebf2-4a37-9906-b6a2606e6106", "name": "keyword", "value": "={{ $json.tasks[0].data.keyword }}", "type": "string" }, { "id": "10484675-b107-4157-bc7e-b942d8cdb5d2", "name": "result", "value": "={{ $json.tasks[0].result[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 592, 800 ], "id": "6b1e7239-ee2b-4457-8acb-17ce87415729", "name": "Prepare AI Mode Output" }, { "parameters": { "content": "# Google AI Mode API\n\nThis API provides AI-generated search result summaries and insights from Google. \nIt returns detailed explanations, overviews, and related information based on search queries, with parameters to customize the AI overview. \n\n👉 Documentation\n", "height": 272, "width": 496, "color": 6 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 800, 720 ], "id": "d761dc57-e35d-4052-a360-71170a155f7b", "name": "Sticky Note18" }, { "parameters": { "content": "## Input", "height": 384, "width": 544, "color": 7 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -528, 672 ], "id": "db90385e-f921-4a9c-89f3-53fc5825b207", "name": "Sticky Note" }, { "parameters": { "assignments": { "assignments": [ { "id": "b865f4a0-b4c3-4dde-bf18-3da933ab21af", "name": "platform", "value": "={{ $json.platform }}", "type": "string" }, { "id": "476e07ca-ccf6-43d4-acb4-4cc905464314", "name": "type", "value": "={{ $json.type }}", "type": "string" }, { "id": "f1a14eb8-9f10-4198-bbc7-17091532b38e", "name": "keyword", "value": "={{ $json.keyword }}", "type": "string" }, { "id": "181791a0-1d88-481c-8d98-a86242bb2135", "name": "results", "value": "={{ $json.results[0].items }}", "type": "array" } ] }, "options": {} }, "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ 800, 512 ], "id": "83fef061-5e0b-417c-b1f6-d34eb712fac6", "name": "Sort Results" }, { "parameters": { "content": "## Internal Links", "height": 272, "width": 272, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 288, 1024 ], "id": "9246601a-f133-4ca3-aac8-989cb45e6cd2", "name": "Sticky Note7" }, { "parameters": { "method": "POST", "url": "https://api.firecrawl.dev/v2/map", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer your-firecrawl-apikey" } ] }, "sendBody": true, "specifyBody": "json", "jsonBody": "={\n \"url\": \"https://{{ $json.url }}\",\n \"limit\": 400,\n \"includeSubdomains\": false,\n \"sitemap\": \"include\"\n }", "options": {} }, "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.2, "position": [ 368, 1104 ], "id": "fd6a33ae-6fb3-4331-ab6a-994048659116", "name": "Get Internal Links" }, { "parameters": { "content": "# Firecrawl Map API\n\nThis endpoint maps a website from a single URL and returns the list of discovered URLs (titles and descriptions when available) — extremely fast and useful for selecting which pages to scrape or for quickly enumerating site links. (Firecrawl)\n\nIt supports a search parameter to find relevant pages inside a site, location/languages options to emulate country/language (uses proxies when available), and SDK + cURL examples in the docs,\n\n👉 Documentation\n\n[1]: https://docs.firecrawl.dev/features/map \"Map | Firecrawl\"\n", "height": 272, "width": 624, "color": 5 }, "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 560, 1024 ], "id": "08457204-93ff-4586-a76e-03907118be3c", "name": "Sticky Note24" } ], "pinData": { "When Executed by Another Workflow": [ { "json": { "function": "serp_search", "keyword": "villanyszerelő Largo Florida", "url": null, "location_code": 2840, "language_code": "hu" } } ] }, "connections": { "When Executed by Another Workflow": { "main": [ [ { "node": "Switch", "type": "main", "index": 0 } ] ] }, "Switch": { "main": [ [ { "node": "SERP Google", "type": "main", "index": 0 } ], [ { "node": "AI Mode", "type": "main", "index": 0 } ], [ { "node": "Get Internal Links", "type": "main", "index": 0 } ] ] }, "SERP Google": { "main": [ [ { "node": "Prepare SERP output", "type": "main", "index": 0 } ] ] }, "AI Mode": { "main": [ [ { "node": "Prepare AI Mode Output", "type": "main", "index": 0 } ] ] }, "Prepare SERP output": { "main": [ [ { "node": "Sort Results", "type": "main", "index": 0 } ] ] }, "Sort Results": { "main": [ [] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "6fce16d1-aa28-4939-9c2d-930d11c1e17f", "meta": { "instanceId": "1ee7b11b3a4bb285563e32fdddf3fbac26379ada529b942ee7cda230735046a1" }, "id": "VjpOW2V2aNV9HpQJ", "tags": [] }
```
by Toshiya Minami
Overview
This workflow analyzes customer survey responses, groups them by sentiment (positive / neutral / negative), generates themes and insights with an AI agent, and delivers a consolidated report to your destinations (Google Sheets, Slack). It runs on a daily schedule and uses batch-based AI analysis for accuracy.

Flow: Schedule → Fetch from Sheets → Group & batch (Code) → AI analysis → Aggregate → Save/Notify (Sheets, Slack)

What You’ll Need
- A survey data source (Google Sheets recommended)
- AI model credentials (e.g., OpenAI or OpenRouter)
- Optional destinations: Google Sheets (summary sheet), Slack channel

Setup
1. Data source (Google Sheets): in Get Survey Responses, replace YOUR_SHEET_ID and YOUR_SHEET_NAME with your sheet details. Ensure the sheet includes columns like 満足度 (Rating), 自由記述コメント (Comment), and 回答日時 (Timestamp).
2. AI model: add credentials to your preferred LLM node (OpenAI/OpenRouter). Keep the prompt’s JSON-only requirement so the structured parser can consume the output reliably.
3. Destinations: in Save to Sheet, set your output documentId / sheetName; on the Slack node, set the target channelId.

How It Works
1. Daily Schedule Trigger — starts the workflow at your chosen time.
2. Get Survey Responses (Sheets) — reads the survey data.
3. Group & Prepare Data (Code) — classifies by rating (>=4: positive, =3: neutral, <3: negative) and creates batches (max 50 per batch), as sketched below.
4. Loop Over Batches — feeds each sentiment batch to the AI separately for cleaner signals.
5. Analyze Survey Batch (AI Agent) — returns structured JSON: themes, insights, recommendations.
6. Add Metadata (Code) — attaches the original sentiment and item counts to each AI result.
7. Aggregate Results (Code) — merges all batches; outputs Top Themes, Key Insights, Priority Recommendations, and an Executive Summary.
8. Save to Sheet / Slack — appends the summary to a sheet and posts highlights to Slack.

Data Assumptions (Columns)
Your source should include at least:
- 満足度 (Rating) — integer 1–5
- 自由記述コメント (Comment) — string
- 回答日時 (Timestamp) — ISO string or date

Outputs
- **Consolidated summary** containing: top themes (with example quotes), key insights, priority recommendations, and an executive summary.
- **Destinations**: Google Sheets (one row per run) and Slack (high-level highlights).

Customize
- Adjust the sentiment thresholds (e.g., require >=5 for positive) or the batch size (default 50) in the Code node.
- Tailor the AI prompt or the output JSON schema to your domain.
- Add more outputs (CSV export, database insert, additional channels) in parallel after the Aggregate step.

Before You Run (Checklist)
- [ ] Add credentials for Sheets / AI / Slack in Credentials
- [ ] Update documentId, sheetName, and the Slack channelId
- [ ] Confirm your column names match the Code node references
- [ ] Verify the schedule time and timezone (e.g., Asia/Tokyo)

Troubleshooting
- **Parser errors on AI output**: ensure the model response is JSON-only; reduce temperature or simplify the schema if needed.
- **Only some batches run**: check the batch size in the Loop and ensure each sentiment bucket actually contains responses.
- **No output to Sheets/Slack**: verify credentials, IDs, and required fields; confirm permissions.

Security & Template Notes
- Do not include credentials in the template file. Users add their own after import.
- Use Sticky Notes to document Overview, Setup, Processing Logic, and Output choices. This template already includes guideline-friendly notes.
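A minimal sketch of the Group & Prepare Data logic is below. The thresholds and batch size match the description above, but the exact field handling in the template's Code node may differ:

```typescript
// n8n Code node sketch: classify responses by rating, then batch per sentiment.
// $input is the Code node's built-in global; drop the type annotation if
// pasting into a plain-JavaScript Code node.
const BATCH_SIZE = 50;

const buckets: Record<string, object[]> = { positive: [], neutral: [], negative: [] };
for (const item of $input.all()) {
  const rating = Number(item.json["満足度"]);
  const sentiment = rating >= 4 ? "positive" : rating === 3 ? "neutral" : "negative";
  buckets[sentiment].push(item.json);
}

// Emit one item per batch so Loop Over Batches can process them individually.
const out = [];
for (const [sentiment, rows] of Object.entries(buckets)) {
  for (let i = 0; i < rows.length; i += BATCH_SIZE) {
    out.push({ json: { sentiment, responses: rows.slice(i, i + BATCH_SIZE) } });
  }
}
return out;
```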
by Br1
Load Jira open issues with comments into Pinecone + RAG Agent (Direct Tool or MCP)

**Who’s it for**
This workflow is designed for support teams, data engineers, and AI developers who want to centralize Jira issue data into a vector database. It collects open issues and their associated comments, converts them into embeddings, and loads them into Pinecone for semantic search, retrieval-augmented generation (RAG), or AI-powered support bots. It’s also published as an MCP tool, so external applications can query the indexed issues directly.

**How it works**
The workflow automates Jira issue extraction, comment processing, and vector storage in Pinecone. Importantly, the Pinecone index is recreated at every run so that it always reflects the current set of unresolved tickets.
1. Trigger – A schedule trigger runs the workflow at defined times (e.g., 8, 11, 14, and 17 on weekdays).
2. Issue extraction with pagination – Calls the Jira REST API to fetch open issues matching a JQL query (unresolved cases created in the last year). Pagination is fully handled: issues are retrieved in batches of 25, and the workflow keeps iterating until all open issues are loaded (see the sketch below).
3. Data transformation – Extracts key fields (issue ID, key, summary, description, product, customer, classification, status, registration date).
4. Comments integration – Fetches all comments for each issue, filters out empty/irrelevant ones (images, dots, empty markdown), and merges them with the issue data.
5. Text cleaning – Converts HTML descriptions into clean plain text for processing.
6. Embedding generation – Uses the OpenAI Embeddings node to vectorize text.
7. Vector storage with index recreation – Loads embeddings and metadata into Pinecone under the jira namespace and the openissues index. The namespace is cleared at every run to ensure the index contains only unresolved tickets.
8. Document chunking – Splits long issue texts into smaller chunks (512 tokens, 50 overlap) for better embedding quality.
9. MCP publishing – Exposes the Pinecone index as an MCP tool (openissues), enabling external systems to query Jira issues semantically.

**How to set up**
- Jira – Configure a Jira account and generate a token. Update the Jira node with credentials and adjust the JQL query if needed.
- OpenAI – Set up an OpenAI API key for embeddings. Configure the embedding dimensions (default: 512).
- Pinecone – Create an index (e.g., openissues) with matching dimensions (512). Configure Pinecone API credentials and the namespace (jira). The index is cleared automatically at every run before reloading unresolved issues.
- Schedule – Adjust the cron expression in the Schedule Trigger to fit your update frequency.
- Optional MCP – If you want to query Jira issues via MCP, configure the MCP trigger and tool nodes.

**Requirements**
- Jira account with API access and permissions to read issues and comments.
- OpenAI API key with access to the embedding model.
- Pinecone account with an index created (dimensions = 512).
- n8n instance with credentials set up for Jira, OpenAI, and Pinecone.

**How to customize the workflow**
- **JQL query**: Modify it to control which issues are extracted (e.g., by project, type, or time window).
- **Pagination size**: Adjust the maxResults parameter (default 25) for larger or smaller batches per iteration.
- **Metadata fields**: Add or remove fields in the “Extract Relevant Info” code node.
- **Chunk size**: Adjust the chunk size/overlap in the Document Chunker for different embedding strategies.
- **Embedding model**: Switch to a different embedding provider if preferred.
- **Vector store**: Replace Pinecone with another supported vector database if needed.
- **Downstream use**: Extend with notifications, dashboards, or AI assistants that consume the vector data.
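The paginated extraction in step 2 follows the standard Jira search pattern. A minimal sketch, with placeholder credentials and an illustrative JQL string:

```typescript
// Fetch all open issues page by page via Jira's REST search endpoint.
async function fetchOpenIssues(baseUrl: string, email: string, apiToken: string) {
  const jql = "resolution = Unresolved AND created >= -365d"; // adjust to taste
  const auth = Buffer.from(`${email}:${apiToken}`).toString("base64");
  const all: unknown[] = [];
  let startAt = 0;
  const maxResults = 25; // the template's batch size

  while (true) {
    const res = await fetch(
      `${baseUrl}/rest/api/2/search?jql=${encodeURIComponent(jql)}&startAt=${startAt}&maxResults=${maxResults}`,
      { headers: { Authorization: `Basic ${auth}`, Accept: "application/json" } },
    );
    if (!res.ok) throw new Error(`Jira API error: ${res.status}`);
    const page = await res.json();
    all.push(...page.issues);
    startAt += page.issues.length;
    if (page.issues.length === 0 || startAt >= page.total) break; // no more pages
  }
  return all;
}
```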
AI Chatbot for Jira open tickets with SLA insights

**Who’s it for**
This workflow is designed for commercial teams, customer support, and service managers who need quick, conversational access to unresolved Jira tickets. It lets them check whether a client has open issues, see related details, and understand SLA implications without manually browsing Jira.

**How it works**
- **Chat interface** – Provides a web-based chat that team members can use to ask natural-language questions such as: “Are there any issues from client ACME?” or “Do we have tickets that have been open for a long time?”
- **AI Agent** – Powered by OpenAI, it interprets questions and queries the Pinecone vector store (openissues index, jira namespace), as sketched below.
- **Memory** – Maintains short-term chat history for more natural conversations.
- **Ticket retrieval** – Uses Pinecone embeddings (dimension = 512) to fetch unresolved tickets enriched with metadata: issue key, description, customer, product, severity color, status, AM contract type, and SLA.
- **SLA integration** – Service levels (Basic, Advanced, Full Service, with optional Fast Support) are provided via the SLA node. The agent explains which SLA applies based on ticket severity, registration date, and contract type.
- **AI response** – Returns a friendly, collaborative summary of all tickets found, including: ticket identifier, description, customer and product, severity level (Red, Yellow, Green, White), ticket status, and the contract level with an SLA explanation.

**Setup**
1. Configure the Jira → Pinecone index (openissues, 512 dimensions), already populated with unresolved tickets.
2. Provide OpenAI API credentials.
3. Ensure the SLA node includes the correct service-level definitions.
4. Adjust the chat branding (title, subtitle, CSS) if desired.

**Requirements**
- Jira account with API access.
- Pinecone account with an index (openissues, dimensions = 512).
- OpenAI API key.
- n8n instance with LangChain and chatTrigger nodes enabled.

**How to customize**
- Change the SLA node text if your service levels differ.
- Adjust the chat interface design (colors, title, subtitle).
- Expand the metadata in Pinecone (e.g., add project type, priority, or assigned team).
- Add examples to the system message to refine the AI’s behavior.
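For reference, the vector lookup behind the chatbot resembles the following sketch using the Pinecone TypeScript SDK; treat it as illustrative rather than the template's exact node configuration.

```typescript
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Query the openissues index inside the jira namespace, returning ticket
// metadata (customer, product, severity, status, SLA fields) with each match.
async function findTickets(queryEmbedding: number[]) {
  const index = pc.index("openissues");
  const result = await index.namespace("jira").query({
    vector: queryEmbedding, // 512-dimensional, matching the index
    topK: 5,
    includeMetadata: true,
  });
  return result.matches; // each match: { id, score, metadata }
}
```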
by Ranjan Dailata
**Who is this for?**
This workflow is designed for:
- **Marketing analysts**, **SEO specialists**, and content strategists who want automated intelligence on their online competitors.
- **Growth teams** that need quick insights from SERPs (Search Engine Results Pages) without manual data scraping.
- **Agencies** managing multiple clients’ SEO presence and tracking competitive positioning in real time.

**What problem is this workflow solving?**
Manual competitor research is time-consuming, fragmented, and often lacks actionable insights. This workflow automates the entire process by:
- Fetching SERP results from multiple search engines (Google, Bing, Yandex, DuckDuckGo) using Thordata’s Scraper API.
- Using OpenAI GPT-4.1-mini to analyze, summarize, and extract keyword opportunities, topic clusters, and competitor weaknesses.
- Producing structured, JSON-based insights ready for dashboards or reports.

Essentially, it transforms raw SERP data into strategic marketing intelligence — saving hours of research time.

**What this workflow does**
Here’s a step-by-step overview of how the workflow operates:

Step 1: Manual Trigger — initiates the process on demand when you click “Execute Workflow.”

Step 2: Set the Input Query — the “Set Input Fields” node defines your search query, such as:
> “Top SEO strategies for e-commerce in 2025”

Step 3: Multi-Engine SERP Fetching — four HTTP request tools send the query to the Thordata Scraper API to retrieve results from Google, Bing, Yandex, and DuckDuckGo. Each uses Bearer authentication configured via the “Thordata SERP Bearer Auth Account.”

Step 4: AI Agent Processing — the LangChain AI Agent orchestrates the data flow, combining inputs and preparing them for structured analysis.

Step 5: SEO Analysis — the SEO Analyst node (powered by GPT-4.1-mini) parses SERP results into a structured schema, extracting: competitor domains, page titles & content types, ranking positions, keyword overlaps, traffic share estimations, and strengths and weaknesses.

Step 6: Summarization — the Summarize the content node distills complex data into a concise executive summary using GPT-4.1-mini.

Step 7: Keyword & Topic Extraction — the Keyword and Topic Analysis node extracts primary and secondary keywords, topic clusters and content gaps, SEO strength scores, and competitor insights.

Step 8: Output Formatting — the Structured Output Parser ensures results are clean, validated JSON objects ready for further integration (e.g., Google Sheets, Notion, or dashboards).

**Setup**

Prerequisites:
- **n8n Cloud or self-hosted instance**
- **Thordata Scraper API key** (for SERP data retrieval)
- **OpenAI API key** (for GPT-based reasoning)

Setup steps:
1. Add credentials: go to Credentials → Add New → HTTP Bearer Auth → paste your Thordata API token, then add OpenAI API credentials for the GPT model.
2. Import the workflow: copy the provided JSON or upload it into your n8n instance.
3. Set the input: in the “Set the Input Fields” node, replace the example query with your desired topic, e.g.: “Google Search for Top SEO strategies for e-commerce in 2025.”
4. Execute: click “Execute Workflow” to run the analysis.

**How to customize this workflow to your needs**
- Modify the search query: change the search_query variable in the Set node to any target keyword or topic.
- Change the AI model: in the OpenAI Chat Model nodes, you can switch from gpt-4.1-mini to another model for better quality or lower cost.
- Extend the analysis: edit the JSON schema in the “Information Extractor” nodes to include sentiment analysis of top pages, SERP volatility metrics, or content freshness indicators.
- Export results: connect the output to **Google Sheets / Airtable** for analytics, **Notion / Slack** for team reporting, or a **webhook / database** for automated storage.

**Summary**
This workflow creates an AI-powered Competitor Intelligence System inside n8n by blending:
- Real-time SERP scraping (Thordata)
- Automated AI reasoning (OpenAI GPT-4.1-mini)
- Structured data extraction (LangChain Information Extractors)
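To give a feel for the structured output of steps 5-8, the parsed result is shaped roughly like the interface below. The field names are illustrative, derived from the analysis fields listed above, not the template's exact schema.

```typescript
// Illustrative shape of the validated JSON the Structured Output Parser emits.
// Field names are assumptions based on the analysis steps described above.
interface CompetitorInsights {
  summary: string;                  // executive summary from step 6
  competitors: Array<{
    domain: string;
    pageTitle: string;
    contentType: string;
    rankingPosition: number;
    keywordOverlap: string[];
    estimatedTrafficShare: number;  // e.g., 0-1
    strengths: string[];
    weaknesses: string[];
  }>;
  keywords: { primary: string[]; secondary: string[] };
  topicClusters: string[];
  contentGaps: string[];
  seoStrengthScore: number;         // e.g., 0-100
}
```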
by Alex Gurinovich
AI-powered Automated Crypto Insights with Chart-img and BrowserAI

Tired of paying for costly crypto updates? Or reading long analyses? This n8n workflow automates the delivery of personalized crypto insights, using Chart-img to capture coin graphs of BTC, ETH, SOL, and XRP as base64 images, and BrowserAI for web scraping and gathering news and articles. This setup ensures thorough market coverage and timely updates, without breaking the bank.

**Overview**
Designed for crypto enthusiasts, traders, and analysts, this workflow automates the process of collecting and distributing valuable crypto information. It’s perfect for anyone wanting consistent, accurate updates delivered conveniently.

**Setup Instructions**

Pre-conditions:
- Chart-img account: register for a Chart-img account and obtain an API key.
- BrowserAI account: sign up for BrowserAI and get your API key from your BrowserAI dashboard.

Step-by-step setup:
- 🗓️ **Schedule and date calculation** — Triggers twice daily at 8 AM and 8 PM to ensure up-to-date insights (times can be changed to your liking). Calculates yesterday’s date dynamically for accurate data retrieval.
- 📊 **Coin graph capture with Chart-img** — Uses the Chart-img API to capture 24-hour graphs for BTC, ETH, SOL, and XRP, and converts the images to base64 strings for easy integration into the analysis (see the sketch below).
- 🌐 **Web scraping with BrowserAI** — Creates tasks in BrowserAI to gather the latest crypto news and insights, automating data extraction for comprehensive market analysis.
- ⌛ **Monitor and complete tasks** — Incorporates status checks to ensure BrowserAI tasks complete successfully before proceeding.
- ✏️ **Analyze and synthesize information** — Combines graph data with web-scraped insights for an enriched summary, using AI to generate simple, informative descriptions under 60 words so you’re not overloaded.
- 📩 **Deliver insights efficiently** — Sends the compiled analysis to your Telegram, with easy options to switch to WhatsApp, email, or any other communication channel.

**Customization Guidance**
- **Content personalization**: customize the datasets and keywords for tailored updates.
- **Modify schedule**: adjust the triggering times to your needs using n8n’s scheduling options.

This workflow delivers a seamless and cost-effective approach to staying informed about crypto market trends, combining the latest technology for superior insights.

**Warning**: This template is intended for personal use only and does not constitute financial advice. Any actions taken using this tool are solely the user’s responsibility.
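Conceptually, the chart-capture step boils down to fetching an image and base64-encoding it. In this sketch the endpoint path, parameters, and auth header are assumptions; check Chart-img's documentation for the exact values on your plan.

```typescript
// Sketch of fetching a chart image and encoding it as base64.
// Endpoint path, query parameters, and the auth header name are assumptions;
// verify them against Chart-img's docs for your API version.
async function captureChart(symbol: string, apiKey: string): Promise<string> {
  const res = await fetch(
    `https://api.chart-img.com/v1/tradingview/advanced-chart?symbol=${encodeURIComponent(symbol)}&interval=1D`,
    { headers: { "x-api-key": apiKey } },
  );
  if (!res.ok) throw new Error(`Chart-img error: ${res.status}`);
  const buffer = Buffer.from(await res.arrayBuffer());
  return buffer.toString("base64"); // ready to embed in an AI prompt or message
}
```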
by Angel Menendez
Enhance Security Operations with the Venafi Slack CertBot!

Venafi Presentation - Watch Video

Our Venafi Slack CertBot is strategically designed to facilitate immediate security operations directly from Slack. The tool lets end users request Certificate Signing Requests (CSRs) that are either automatically approved or passed to the SecOps team for manual approval, depending on the VirusTotal analysis of the requested domain. This not only centralizes requests but also helps an organization maintain its security certifications by allowing automated processes to log and analyze requests in real time.

**Workflow Highlights**
- **Interactive modals**: Uses Slack modals to gather user inputs for scan configurations and report generation, providing a user-friendly interface for complex operations.
- **Dynamic workflow execution**: Integrates seamlessly with Venafi to execute CSR generation; if any issues are found, AI generates a custom report that is passed to a Slack team channel for manual approval at the press of a single button.

**Operational Flow**
- **Parse webhook data**: Captures and parses incoming data from Slack to understand user commands accurately (see the sketch below).
- **Execute actions**: Depending on the user’s selection, the workflow triggers further actions within the flow, like automatic VirusTotal scanning.
- **Respond to Slack**: Ensures that every interaction is acknowledged, maintaining a smooth user experience by managing modal popups and sending appropriate responses.

**Setup Instructions**
1. Verify that the Slack, Venafi, and VirusTotal API integrations are correctly configured for seamless interaction.
2. Customize the modal interfaces to align with your organization’s operational protocols and security policies.
3. Test the workflow to ensure that it responds accurately to Slack commands and that the Venafi and VirusTotal integrations are functioning as expected.

**Need Assistance?**
Explore Venafi’s documentation or get help from the n8n community for more detailed guidance on setup and customization.

Deploy this bot within your Slack environment to significantly enhance the efficiency and responsiveness of your security operations, enabling proactive management of CSRs.
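Slack delivers interactive events (modal submissions, button clicks) as a form-encoded POST with a single JSON payload field. A minimal sketch of the parsing step, with illustrative field access:

```typescript
// Slack sends interactivity events as application/x-www-form-urlencoded
// with a single `payload` field containing JSON.
function parseSlackInteraction(rawBody: string) {
  const params = new URLSearchParams(rawBody);
  const payload = JSON.parse(params.get("payload") ?? "{}");

  return {
    type: payload.type,                  // e.g., "view_submission" or "block_actions"
    userId: payload.user?.id,
    triggerId: payload.trigger_id,       // needed to open a follow-up modal
    values: payload.view?.state?.values, // modal inputs, keyed by block/action IDs
  };
}
```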
by Naser Emad
Create sophisticated shortened URLs with advanced targeting and tracking capabilities using the BioURL.link API.

**Overview**
This n8n workflow provides a powerful webhook-based URL shortening service that integrates with BioURL.link’s comprehensive link management platform. Transform any long URL into a smart, trackable short link with advanced targeting features through a simple HTTP POST request.

**Key Features**
- **Smart URL shortening**: Create custom short URLs with optional passwords and expiry dates
- **Geo-targeting**: Redirect users to different URLs based on their geographic location
- **Device targeting**: Serve different content to iPhone, Android, or other device users
- **Language targeting**: Customize redirects based on user language preferences
- **Deep linking**: Automatically redirect mobile users to the app store or native apps
- **Custom meta tags**: Set custom titles, descriptions, and images for social media sharing
- **Analytics & tracking**: Integrate tracking pixels and campaign parameters
- **Splash pages**: Create branded landing pages before redirects

**How It Works**
1. Webhook trigger: send a POST request to the /shorten-link endpoint (see the example call below).
2. API integration: the workflow processes the request and calls BioURL.link’s API.
3. Response: returns the shortened URL with all configured features.

**Setup Requirements**
- BioURL.link account and API key
- Replace "Bearer YOURAPIKEY" in the HTTP Request node with your actual API key
- Customize the JSON body parameters as needed for your use case

**Use Cases**
- Marketing campaigns with geo-specific landing pages
- Mobile app promotion with automatic app store redirects
- A/B testing with device- or language-based targeting
- Social media sharing with custom preview metadata
- Time-sensitive promotions with expiry dates
- Password-protected internal links

**Technical Details**
- **Trigger**: Webhook (POST to /shorten-link)
- **API endpoint**: https://biourl.link/api/url/add
- **Response**: Complete shortened URL data
- **Authentication**: Bearer token required

Perfect for marketers, developers, and businesses needing advanced URL management capabilities beyond basic shortening services.
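Calling the workflow from your own code might look like the sketch below. The payload fields are illustrative guesses based on the features above; adjust them to whatever your HTTP Request node's JSON body actually forwards.

```typescript
// Illustrative call to the n8n webhook; the payload field names are assumptions.
async function shortenLink(n8nBaseUrl: string) {
  const res = await fetch(`${n8nBaseUrl}/webhook/shorten-link`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      url: "https://example.com/very/long/landing-page",
      custom: "spring-sale",  // custom alias
      password: "vip-only",   // optional protection
      expiry: "2025-12-31",   // optional expiry date
    }),
  });
  return res.json(); // shortened URL data returned by the workflow
}
```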
by Lucas Peyrin
**Overview**
This template provides a powerful and configurable utility to convert JSON data into clean, well-structured XML. It is designed for developers, data analysts, and n8n users who need to interface with legacy systems, generate structured reports, or prepare data for consumption by Large Language Models (LLMs), which often exhibit improved understanding and parsing with XML-formatted input.

**Use Cases**
This workflow is ideal for solving several common data transformation problems:
- **Preparing data for AI prompts**: LLMs like GPT-4 often parse XML more reliably than JSON within a prompt. The explicit closing tags and hierarchical nature of XML reduce ambiguity, leading to better and more consistent responses from the AI.
- **Interfacing with legacy systems**: Many enterprise systems, SOAP APIs, and older software exclusively accept or produce data in XML format. This template acts as a bridge, allowing modern JSON-based services to communicate with them seamlessly.
- **Generating structured reports**: Create XML files for reporting or data interchange standards that require a specific, well-defined structure.
- **Improving data readability**: For complex nested data, well-formatted XML can be easier for humans to read and debug than a compact JSON string.

**How it works**
This workflow acts as a powerful, configurable JSON-to-XML converter. It takes a JSON object as input and performs the following steps:
1. Recursively parses the JSON: it intelligently navigates the entire JSON structure, including nested objects and arrays.
2. Handles data types:
   - Primitive arrays (e.g., ["a", "b", "c"]) are joined into a single string with a safe delimiter.
   - Complex arrays (of objects) are converted into indexed XML tags (<0>, <1>, etc.).
   - Dates are automatically detected and formatted into a readable YYYY-MM-DD HH:mm:ss format.
3. Generates the XML string: it constructs the final XML based on the logic and configuration set inside the Code node. The output is provided in a single xml field, ready for use. (A condensed sketch of this conversion logic appears below.)

**Set up steps**
Setup time: ~1 minute.
This workflow is designed to be used as a sub-workflow (or "child workflow"):
1. In your main workflow, add an Execute Workflow node.
2. In the Workflow parameter of that node, select this "JSON to XML Converter" workflow.
3. That's it! You can now send JSON data to the Execute Workflow node and it will return the converted XML string in the xml field.

**Customization Options**
The true power of this template lies in its customizability, all managed within the configuration section at the top of the Code node. This allows you to fine-tune the output XML to your exact needs.
- **REMOVE_EMPTY_VALUES**: Set to true (default) to completely omit tags for null, undefined, or empty-string values, resulting in cleaner XML. Set to false to include empty tags like <myTag></myTag>.
- **Newline formatting**: Control the spacing and readability of the output with four distinct settings:
  - NEWLINES_TOP_LEVEL: adjusts the newlines between root-level elements.
  - NEWLINES_ARRAY_ITEMS: controls spacing between items in a complex array (e.g., between <0> and <1>).
  - NEWLINES_OBJECT_PROPERTIES: manages newlines between the properties of an object.
  - NEWLINES_WITHIN_TAGS: adds newlines between an opening/closing tag and its content for an "indented" look.

**Prerequisites**
- An active n8n instance.
- Basic familiarity with JSON and XML data structures.
- Understanding of how to use the Execute Workflow node to run sub-workflows.
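As a rough picture of the conversion logic (not the template's actual Code node source), a condensed sketch:

```typescript
// Condensed sketch of the JSON-to-XML logic described above; the template's
// Code node adds the configurable delimiters, newlines, and date detection.
function toXml(value: unknown): string {
  if (value instanceof Date) {
    return value.toISOString().replace("T", " ").slice(0, 19); // YYYY-MM-DD HH:mm:ss
  }
  if (Array.isArray(value)) {
    if (value.every((v) => typeof v !== "object" || v === null)) {
      return value.join(" | "); // primitive array -> single delimited string
    }
    // complex array -> indexed tags <0>, <1>, ...
    return value.map((v, i) => `<${i}>${toXml(v)}</${i}>`).join("\n");
  }
  if (typeof value === "object" && value !== null) {
    return Object.entries(value as Record<string, unknown>)
      .filter(([, v]) => v !== null && v !== undefined && v !== "") // REMOVE_EMPTY_VALUES behaviour
      .map(([k, v]) => `<${k}>${toXml(v)}</${k}>`)
      .join("\n");
  }
  return String(value); // primitives pass through
}

// Example: toXml({ user: { name: "Ada", tags: ["a", "b"] } }) yields
// <user><name>Ada</name>
// <tags>a | b</tags></user>
```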