by Hemanth Arety
# Generate AEO strategy from brand input using AI competitor analysis

This workflow automatically creates a comprehensive Answer Engine Optimization (AEO) strategy by identifying your top competitors, analyzing their positioning, and generating custom recommendations to help your brand rank in AI-powered search engines like ChatGPT, Perplexity, and Google SGE.

## Who it's for

This template is perfect for:
- **Digital marketing agencies** offering AEO services to clients
- **In-house marketers** optimizing content for AI search engines
- **Brand strategists** analyzing competitive positioning
- **Content teams** creating AI-optimized content strategies
- **SEO professionals** expanding into Answer Engine Optimization

## What it does

The workflow automates the entire AEO research and strategy process in 6 steps:
1. Collects brand information via a user-friendly web form (brand name, website, niche, product type, email)
2. Identifies the top 3 competitors using Google Gemini AI based on product overlap, market position, digital presence, and geographic factors
3. Scrapes the target brand's website with Firecrawl to extract value propositions, features, and content themes
4. Scrapes competitor websites in parallel to gather competitive intelligence
5. Generates a comprehensive AEO strategy using OpenAI GPT-4 with 15+ actionable recommendations
6. Delivers a formatted report via email with an executive summary, competitive analysis, and implementation roadmap

The entire process runs automatically and takes approximately 5-7 minutes to complete.

## How to set up

### Requirements

You'll need API credentials for:
- **Google Gemini API** (for competitor analysis) - Get API key
- **OpenAI API** (for strategy generation) - Get API key
- **Firecrawl API** (for web scraping) - Get API key
- **Gmail account** (for email delivery) - Use OAuth2 authentication

### Setup steps

1. Import the workflow into your n8n instance
2. Configure credentials:
   - Add your Google Gemini API key to the "Google Gemini Chat Model" node
   - Add your OpenAI API key to the "OpenAI Chat Model" node
   - Add your Firecrawl API key as HTTP Header Auth credentials
   - Connect your Gmail account using OAuth2
3. Activate the workflow and copy the form webhook URL
4. Test the workflow by submitting a real brand through the form
5. Check your email for the generated AEO strategy report

### Credentials setup tips

- For Firecrawl: create HTTP Header Auth credentials with header name `Authorization` and value `Bearer YOUR_API_KEY`
- For Gmail: use OAuth2 to avoid authentication issues with 2FA
- Test each API credential individually before running the full workflow

## How it works

### Competitor identification

The Google Gemini AI agent analyzes your brand based on 4 weighted criteria: product/service overlap (40%), market position (30%), digital presence (20%), and geographic overlap (10%). It returns structured JSON data with competitor names, URLs, overlap percentages, and detailed reasoning.

### Web scraping

Firecrawl extracts structured data from websites using custom schemas. For each site, it captures: company name, products/services, value proposition, target audience, key features, pricing info, and content themes. This runs asynchronously with 60-second waits to allow for complete extraction.
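For reference, the extraction schema lives in the Firecrawl HTTP request body. A sketch of what that request might look like (the endpoint shape and field names here are assumptions based on Firecrawl's v1 scrape API; check your Firecrawl version's docs for the exact contract):

```json
{
  "url": "https://example-brand.com",
  "formats": ["extract"],
  "extract": {
    "schema": {
      "type": "object",
      "properties": {
        "company_name": { "type": "string" },
        "value_proposition": { "type": "string" },
        "products_services": { "type": "array", "items": { "type": "string" } },
        "target_audience": { "type": "string" },
        "key_features": { "type": "array", "items": { "type": "string" } },
        "pricing_info": { "type": "string" },
        "content_themes": { "type": "array", "items": { "type": "string" } }
      }
    }
  }
}
```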
### Strategy generation

OpenAI GPT-4 analyzes the combined brand and competitor data to generate a comprehensive report including: an executive summary, competitive analysis, 15+ specific AEO tactics across 4 categories (content optimization, structural improvements, authority building, answer engine targeting), a content priority matrix with 10 ranked topics, and a detailed implementation roadmap.

### Email delivery

The strategy is formatted as a professional HTML email with clear sections, visual hierarchy, and actionable next steps. Recipients get an immediately implementable roadmap for improving their AEO performance.

## How to customize the workflow

### Change AI models
- **Replace Google Gemini** with Claude, GPT-4, or another LLM in the competitor analysis node
- **Replace OpenAI** with Anthropic Claude or Google Gemini in the strategy generation node
- Both use LangChain agent nodes, making model swapping straightforward

### Modify competitor analysis
- **Find more competitors**: edit the AI prompt to request 5 or 10 competitors instead of 3
- **Add filtering criteria**: include factors like company size, funding stage, or geographic focus
- **Change ranking weights**: adjust the 40/30/20/10 weighting in the prompt

### Enhance data collection
- **Add social media scraping**: include LinkedIn, Twitter/X, or Facebook page analysis
- **Pull review data**: integrate G2, Capterra, or Trustpilot APIs for customer sentiment
- **Include traffic data**: add SimilarWeb or Semrush API calls for competitive metrics

### Change output format
- **Export to Google Docs**: replace Gmail with a Google Docs node to create shareable documents
- **Send to Slack/Discord**: post strategy summaries to team channels for collaboration
- **Save to a database**: store results in Airtable, PostgreSQL, or MongoDB for tracking
- **Create presentations**: generate PowerPoint slides using automation tools

### Add more features
- **Schedule periodic analysis**: run monthly competitive audits for specific brands
- **A/B test strategies**: generate multiple strategies and compare results over time
- **Multi-language support**: add translation nodes for international brands
- **Custom branding**: modify email templates with your agency's logo and colors

### Adjust scraping behavior
- **Change the Firecrawl schema**: customize extracted data fields based on industry needs
- **Add timeout handling**: implement retry logic for failed scraping attempts
- **Scrape more pages**: extend beyond the homepage to include blog, pricing, and about pages
- **Use different scrapers**: replace Firecrawl with Apify, Browserless, or custom solutions

## Tips for best results

- **Provide clear brand information**: the more specific the product type and niche, the better the competitor identification
- **Ensure websites are accessible**: some sites block scrapers; consider adding user agents or rotating IPs
- **Monitor API costs**: Firecrawl and OpenAI charges can add up; set usage limits
- **Review generated strategies**: AI recommendations should be reviewed and customized for your specific context
- **Iterate on prompts**: fine-tune the AI prompts based on output quality over multiple runs

## Common use cases

- **Client onboarding** for marketing agencies - generate initial AEO assessments
- **Content strategy planning** - identify topics and angles competitors are missing
- **Quarterly audits** - track competitive positioning changes over time
- **Product launches** - understand the competitive landscape before entering a market
- **Sales enablement** - equip sales teams with competitive intelligence

Note: This workflow uses community and AI nodes that require external API access.
Make sure your n8n instance can make outbound HTTP requests and has the necessary LangChain nodes installed.
by TOMOMITSU ASANO
{ "name": "IoT Sensor Data Aggregation with AI-Powered Anomaly Detection", "nodes": [ { "parameters": { "content": "## How it works\nThis workflow monitors IoT sensors in real-time. It ingests data via MQTT or a schedule, normalizes the format, and removes duplicates using data fingerprinting. An AI Agent then analyzes readings against defined thresholds to detect anomalies. Finally, it routes alerts to Slack or Email based on severity and logs everything to Google Sheets.\n\n## Setup steps\n1. Configure the MQTT Trigger with your broker details.\n2. Set your specific limits in the Define Sensor Thresholds node.\n3. Connect your OpenAI credential to the Chat Model node.\n4. Authenticate the Gmail, Slack, and Google Sheets nodes.\n5. Create a Google Sheet with headers: timestamp, sensorId, location, readings, analysis.", "height": 484, "width": 360 }, "id": "298da7ff-0e47-4b6c-85f5-2ce77275cdf3", "name": "Main Overview", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -2352, -480 ] }, { "parameters": { "content": "## 1. Data Ingestion\nCaptures sensor data via MQTT for real-time streams or runs on a schedule for batch processing. Both streams are merged for unified handling.", "height": 488, "width": 412, "color": 7 }, "id": "4794b396-cd71-429c-bcef-61780a55d707", "name": "Section: Ingestion", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1822, -48 ] }, { "parameters": { "content": "## 2. Normalization & Deduplication\nSets monitoring thresholds, standardizes the JSON structure, creates a content hash, and filters out duplicate readings to prevent redundant API calls.", "height": 316, "width": 884, "color": 7 }, "id": "339e7cb7-491e-44c9-b561-983e147237d8", "name": "Section: Processing", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1376, 32 ] }, { "parameters": { "content": "## 3. AI Anomaly Detection\nAn AI Agent evaluates sensor data against thresholds to identify anomalies, assigning severity levels and providing actionable recommendations.", "height": 528, "width": 460, "color": 7 }, "id": "ebcb7ca3-f70c-4a90-8a2a-f489e7be4c73", "name": "Section: AI Analysis", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -422, 24 ] }, { "parameters": { "content": "## 4. 
Routing & Archiving\nRoutes alerts based on severity (Critical = Email+Slack, Warning = Slack) and archives all data points to Google Sheets for historical analysis.", "height": 756, "width": 900, "color": 7 }, "id": "7f2b32a5-d3b2-4fea-844f-4b39b8e8a239", "name": "Section: Alerting", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 94, -196 ] }, { "parameters": { "topics": "sensors/+/data", "options": {} }, "id": "bc86720b-9de9-4693-b090-343d3ebad3a3", "name": "MQTT Sensor Trigger", "type": "n8n-nodes-base.mqttTrigger", "typeVersion": 1, "position": [ -1760, 88 ] }, { "parameters": { "rule": { "interval": [ { "field": "minutes", "minutesInterval": 15 } ] } }, "id": "1c38f2d0-aa00-447e-bdae-bffd08c38461", "name": "Batch Process Schedule", "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.2, "position": [ -1760, 280 ] }, { "parameters": { "mode": "chooseBranch" }, "id": "f9b41822-ee61-448b-b324-38483036e0e1", "name": "Merge Triggers", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ -1536, 184 ] }, { "parameters": { "mode": "raw", "jsonOutput": "{\n \"thresholds\": {\n \"temperature\": {\"min\": -10, \"max\": 50, \"unit\": \"C\"},\n \"humidity\": {\"min\": 20, \"max\": 90, \"unit\": \"%\"},\n \"pressure\": {\"min\": 950, \"max\": 1050, \"unit\": \"hPa\"},\n \"co2\": {\"min\": 400, \"max\": 2000, \"unit\": \"ppm\"}\n },\n \"alertConfig\": {\n \"criticalChannel\": \"#iot-critical\",\n \"warningChannel\": \"#iot-alerts\",\n \"emailRecipients\": \"ops@example.com\"\n }\n}", "options": {} }, "id": "308705a8-edc7-4435-9250-487aa528e033", "name": "Define Sensor Thresholds", "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ -1312, 184 ] }, { "parameters": { "jsCode": "const items = $input.all();\nconst thresholds = $('Define Sensor Thresholds').first().json.thresholds;\nconst results = [];\n\nfor (const item of items) {\n let sensorData;\n try {\n sensorData = typeof item.json.message === 'string' \n ? JSON.parse(item.json.message) \n : item.json;\n } catch (e) {\n sensorData = item.json;\n }\n \n const now = new Date();\n const reading = {\n sensorId: sensorData.sensorId || sensorData.topic?.split('/')[1] || 'unknown',\n location: sensorData.location || 'Main Facility',\n timestamp: now.toISOString(),\n readings: {\n temperature: sensorData.temperature ?? null,\n humidity: sensorData.humidity ?? null,\n pressure: sensorData.pressure ?? null,\n co2: sensorData.co2 ?? 
null\n },\n metadata: {\n receivedAt: now.toISOString(),\n source: item.json.topic || 'batch',\n thresholds: thresholds\n }\n };\n \n results.push({ json: reading });\n}\n\nreturn results;" }, "id": "a2008189-5ace-418b-b0db-d51d63dcf2d8", "name": "Parse Sensor Payload", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -1088, 184 ] }, { "parameters": { "type": "SHA256", "value": "={{ $json.sensorId + '-' + $json.timestamp + '-' + JSON.stringify($json.readings) }}", "dataPropertyName": "dataHash" }, "id": "bf8db555-a10e-4468-a44a-cdc4c97e5b80", "name": "Generate Data Fingerprint", "type": "n8n-nodes-base.crypto", "typeVersion": 1, "position": [ -864, 184 ] }, { "parameters": { "compare": "selectedFields", "fieldsToCompare": "dataHash", "options": {} }, "id": "a45405e2-d211-449d-84d7-4538eaf56fcd", "name": "Remove Duplicate Readings", "type": "n8n-nodes-base.removeDuplicates", "typeVersion": 1, "position": [ -640, 184 ] }, { "parameters": { "text": "=Analyze this IoT sensor reading and determine if there are any anomalies:\n\nSensor ID: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTimestamp: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C (Normal: {{ $json.metadata.thresholds.temperature.min }} to {{ $json.metadata.thresholds.temperature.max }})\n- Humidity: {{ $json.readings.humidity }}% (Normal: {{ $json.metadata.thresholds.humidity.min }} to {{ $json.metadata.thresholds.humidity.max }})\n- CO2: {{ $json.readings.co2 }} ppm (Normal: {{ $json.metadata.thresholds.co2.min }} to {{ $json.metadata.thresholds.co2.max }})\n\nProvide your analysis in this exact JSON format:\n{\n \"hasAnomaly\": true/false,\n \"severity\": \"critical\"/\"warning\"/\"normal\",\n \"anomalies\": [\"list of detected issues\"],\n \"reasoning\": \"explanation of your analysis\",\n \"recommendation\": \"suggested action\"\n}", "options": { "systemMessage": "You are an IoT monitoring expert. Analyze sensor data and detect anomalies based on the provided thresholds. Be precise and provide actionable recommendations. Always respond in valid JSON format." } }, "id": "b60194ba-7b99-44e0-b0d7-9f1632dce4d4", "name": "AI Anomaly Detector", "type": "@n8n/n8n-nodes-langchain.agent", "typeVersion": 1.7, "position": [ -416, 184 ] }, { "parameters": { "jsCode": "const item = $input.first();\nconst originalData = $('Remove Duplicate Readings').first().json;\n\nlet aiAnalysis;\ntry {\n const responseText = item.json.output || item.json.text || '';\n const jsonMatch = responseText.match(/\\{[\\s\\S]*\\}/);\n aiAnalysis = jsonMatch ? 
JSON.parse(jsonMatch[0]) : {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Unable to parse AI response',\n recommendation: 'Manual review required'\n };\n} catch (e) {\n aiAnalysis = {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Parse error: ' + e.message,\n recommendation: 'Manual review required'\n };\n}\n\nreturn [{\n json: {\n ...originalData,\n analysis: aiAnalysis,\n alertLevel: aiAnalysis.severity,\n requiresAlert: aiAnalysis.hasAnomaly && aiAnalysis.severity !== 'normal'\n }\n}];" }, "id": "a145a8c7-538c-411a-95c6-9485acdcb969", "name": "Parse AI Analysis", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -64, 184 ] }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "critical", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "critical" } ] }, "renameOutput": true, "outputKey": "Critical" }, { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "warning", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "warning" } ] }, "renameOutput": true, "outputKey": "Warning" } ] }, "options": { "fallbackOutput": "extra" } }, "id": "1ab9785d-9f7f-4840-b1e9-0afc62b00b12", "name": "Route by Severity", "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ 160, 168 ] }, { "parameters": { "sendTo": "={{ $('Define Sensor Thresholds').first().json.alertConfig.emailRecipients }}", "subject": "=CRITICAL IoT Alert: {{ $json.sensorId }} - {{ $json.analysis.anomalies[0] || 'Anomaly Detected' }}", "message": "=CRITICAL IoT SENSOR ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTime: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C\n- Humidity: {{ $json.readings.humidity }}%\n- CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis:\n{{ $json.analysis.reasoning }}\n\nDetected Issues:\n{{ $json.analysis.anomalies.join('\\n- ') }}\n\nRecommendation:\n{{ $json.analysis.recommendation }}", "options": {} }, "id": "28201a6c-10b5-4387-be89-10a57c634622", "name": "Send Critical Email", "type": "n8n-nodes-base.gmail", "typeVersion": 2.1, "position": [ 384, -80 ], "webhookId": "35b9f8fa-4a50-456e-b552-9fd20a25ccc5" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-critical" }, "text": "=🚨 CRITICAL IoT ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\n\nReadings:\n• Temperature: {{ $json.readings.temperature }}°C\n• Humidity: {{ $json.readings.humidity }}%\n• CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis: {{ $json.analysis.reasoning }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": "c5a297be-ccef-40ba-9178-65805262efba", "name": "Slack Critical Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 112 ], "webhookId": "19113595-0208-4b37-b68c-c9788c19f618" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-alerts" }, "text": "=⚠️ IoT Warning\n\nSensor: {{ $json.sensorId }} | Location: {{ $json.location }}\nIssue: {{ $json.analysis.anomalies[0] || 'Threshold approaching' }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": 
"5c3d7acf-0211-44dd-9f4b-a43d3796abb1", "name": "Slack Warning Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 400 ], "webhookId": "37abfb19-f82f-4449-bd69-a65635b99606" }, { "parameters": {}, "id": "6bcbb42f-ec14-4f00-a091-babcc2d2d5c4", "name": "Merge Alert Outputs", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ 608, 184 ] }, { "parameters": { "operation": "append", "documentId": { "__rl": true, "mode": "list", "value": "" }, "sheetName": { "__rl": true, "mode": "list", "value": "" } }, "id": "6243aa23-408d-4928-a512-811eeb3b5f9e", "name": "Archive to Google Sheets", "type": "n8n-nodes-base.googleSheets", "typeVersion": 4.5, "position": [ 832, 184 ] }, { "parameters": { "model": "gpt-4o-mini", "options": { "temperature": 0.3 } }, "id": "61081e8a-ebc9-465f-8beb-88af225e59f2", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "typeVersion": 1.2, "position": [ -344, 408 ] } ], "pinData": {}, "connections": { "MQTT Sensor Trigger": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 0 } ] ] }, "Batch Process Schedule": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 1 } ] ] }, "Merge Triggers": { "main": [ [ { "node": "Define Sensor Thresholds", "type": "main", "index": 0 } ] ] }, "Define Sensor Thresholds": { "main": [ [ { "node": "Parse Sensor Payload", "type": "main", "index": 0 } ] ] }, "Parse Sensor Payload": { "main": [ [ { "node": "Generate Data Fingerprint", "type": "main", "index": 0 } ] ] }, "Generate Data Fingerprint": { "main": [ [ { "node": "Remove Duplicate Readings", "type": "main", "index": 0 } ] ] }, "Remove Duplicate Readings": { "main": [ [ { "node": "AI Anomaly Detector", "type": "main", "index": 0 } ] ] }, "AI Anomaly Detector": { "main": [ [ { "node": "Parse AI Analysis", "type": "main", "index": 0 } ] ] }, "Parse AI Analysis": { "main": [ [ { "node": "Route by Severity", "type": "main", "index": 0 } ] ] }, "Route by Severity": { "main": [ [ { "node": "Send Critical Email", "type": "main", "index": 0 }, { "node": "Slack Critical Alert", "type": "main", "index": 0 } ], [ { "node": "Slack Warning Alert", "type": "main", "index": 0 } ], [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Send Critical Email": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Critical Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Warning Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Merge Alert Outputs": { "main": [ [ { "node": "Archive to Google Sheets", "type": "main", "index": 0 } ] ] }, "OpenAI Chat Model": { "ai_languageModel": [ [ { "node": "AI Anomaly Detector", "type": "ai_languageModel", "index": 0 } ] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "", "meta": { "instanceId": "15d6057a37b8367f33882dd60593ee5f6cc0c59310ff1dc66b626d726083b48d" }, "tags": [] }
by Habeeb Mohammed
## Who's it for

This workflow is perfect for individuals who want to maintain detailed financial records without the overhead of complex budgeting apps. If you prefer natural language over data entry forms and want an AI assistant to handle the bookkeeping, this template is for you.

It's especially useful for:
- People who want to track cash and online transactions separately
- Anyone who lends money to friends/family and needs debt tracking
- Users comfortable with Slack as their primary interface
- Those who prefer conversational interactions over manual spreadsheet updates

## What it does

This AI-powered finance tracker transforms your Slack workspace into a personal finance command center. Simply mention your bot with transactions in plain English (e.g., "₹500 cash food, borrowed ₹1000 from John"), and the AI agent will:
- Parse transactions using natural language understanding via Google Gemini
- Calculate balance changes for cash and online accounts
- Show a preview of changes before saving anything
- Update Google Sheets only after you approve
- Track debts (who owes you, who you owe, repayments)
- Send daily reminders at 11 PM with current balances and active debts

The workflow maintains conversational context using PostgreSQL memory, so you can say things like "yesterday's transactions" or "that payment to Sarah" and it understands the context.

## How it works

### Scheduled daily check-in (11 PM)
- Fetches current balances from Google Sheets
- Retrieves all active debts
- Formats and sends a Slack message with a balance summary
- Prompts you to share the day's transactions

### AI agent transaction processing

When you mention the bot in Slack:

**Phase 1: Parse & Analyze**
- Extracts amount, payment type (cash/online), and category (food, travel, etc.)
- Identifies transaction type (expense, income, borrowed, lent, repaid)
- Stores conversation context in PostgreSQL memory

**Phase 2: Calculate & Preview**
- Reads current balances from Google Sheets
- Calculates new balances based on the transactions
- Shows a formatted preview with projected changes
- Waits for your approval ("yes"/"no")

**Phase 3: Update Database** (only after approval)
- Logs transactions with unique IDs and timestamps
- Updates debt records with person names and status
- Recalculates and stores new balances
- Handles the debt lifecycle (Active → Settled)

**Phase 4: Confirmation**
- Sends a success message with updated balances
- Shows an active debts summary
- Includes the logging timestamp

## Requirements

Essential services:
- n8n instance (self-hosted or cloud)
- Slack workspace with admin access
- Google account
- Google Gemini API key
- PostgreSQL database

Recommended:
- Claude AI model (mentioned in the workflow notes as a better alternative to Gemini)

## How to set up

### 1. Google Sheets Setup

Create a new Google Sheet with three tabs named exactly:

Balances tab:

| Date | Cash_Balance | Online_Balance | Total_Balance |
|------|--------------|----------------|---------------|

Transactions tab:

| Transaction_ID | Date | Time | Amount | Payment_Type | Category | Transaction_Type | Person_Name | Description | Added_At |
|----------------|------|------|--------|--------------|----------|------------------|-------------|-------------|----------|

Debts tab:

| Person_Name | Amount | Type | Date_created | Status | Notes |
|-------------|--------|------|--------------|--------|-------|

Add header rows and one initial balance row in the Balances tab with today's date and starting amounts.
### 2. Slack App Setup

1. Go to api.slack.com/apps and create a new app
2. Under OAuth & Permissions, add these Bot Token Scopes: `app_mentions:read`, `chat:write`, `channels:read`
3. Install the app to your workspace
4. Copy the Bot User OAuth Token
5. Create a dedicated channel (e.g., #personal-finance-tracker)
6. Invite your bot to the channel

### 3. Google Gemini API

1. Visit ai.google.dev
2. Create an API key
3. Save it for the n8n credentials setup

### 4. PostgreSQL Database

Set up a PostgreSQL database (you can use the Supabase free tier):
1. Create a new project
2. Note down the connection details (host, port, database name, user, password)
3. The workflow will auto-create the required table

### 5. n8n Workflow Configuration

Import the workflow and configure:

**A. Credentials**
- **Google Sheets OAuth2**: connect your Google account
- **Slack API**: add your Bot User OAuth Token
- **Google Gemini API**: add your API key
- **PostgreSQL**: add the database connection details

**B. Update node parameters**
- All Google Sheets nodes: select your finance spreadsheet
- Slack nodes: select your finance channel
- Schedule Trigger: adjust the time if you prefer a different check-in hour (default: 11 PM)
- Postgres Chat Memory: change `sessionKey` to something unique (e.g., `finance_tracker_your_name`); keep `tableName` as `n8n_chat_history_finance` or rename it consistently

**C. Slack trigger setup**
1. Activate the "Bot Mention trigger" node
2. Copy the webhook URL from n8n
3. In the Slack App settings, go to Event Subscriptions
4. Enable events and paste the webhook URL
5. Subscribe to the bot event: `app_mention`
6. Save changes

### 6. Test the Workflow

1. Activate both workflow branches (scheduled and agent)
2. In your Slack channel, mention the bot: `@YourBot ₹100 cash snacks`
3. The bot should respond with a preview
4. Reply "yes" to approve
5. Verify the Google Sheets are updated

## How to customize

**Change transaction categories**: edit the AI Agent's system message to add/remove categories. Current categories: travel, food, entertainment, utilities, shopping, health, education, other.

**Modify the daily check-in time**: change the Schedule Trigger's `triggerAtHour` value (0-23 in 24-hour format).

**Add currency support**: replace ₹ with your currency symbol in the Format Daily Message code node and the AI Agent system prompt examples.

**Switch AI models**: the workflow uses Google Gemini, but the notes recommend Claude. To switch, replace the "Google Gemini Chat Model" node, add Claude credentials, and connect it to the AI Agent node.

**Customize debt types**: modify the AI Agent's system prompt to change the debt handling logic. Currently: I_Owe and They_Owe_Me. You can add more types or change the naming.

**Add more payment methods**: current: cash, online. To add more (e.g., credit card), update the AI Agent prompt, modify the Balances sheet structure, and update the balance calculation logic.

**Change approval keywords**: edit the AI Agent's Phase 2 approval logic to recognize different approval phrases.

**Add spending analytics**: extend the daily check-in to calculate weekly/monthly spending summaries and category-wise breakdowns. Use additional Code nodes to process the transaction history.

## Important notes

⚠️ **Never trigger with normal messages** - only use app mentions (@botname) to avoid infinite loops where the bot replies to its own messages.

💡 **Context awareness** - the bot remembers conversation history, so you can reference "yesterday", "last week", or previous transactions naturally.

🔒 **Data privacy** - all your financial data stays in your Google Sheets and PostgreSQL database. The AI only processes transaction text temporarily.

📊 **Back up regularly** - export your Google Sheets periodically as a backup.
**Pro tips:**
- Start with small test transactions to ensure everything works
- Use consistent person names for debt tracking
- The bot understands various formats: "₹500 cash food" = "paid 500 rupees in cash for food"
- You can batch transactions in one message: "₹100 travel, ₹200 food, ₹50 snacks"
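For a message like "borrowed ₹1000 from John", the row the agent appends to the Transactions tab would look something like this (illustrative values; the exact ID and timestamp formats depend on the agent's system prompt):

```json
{
  "Transaction_ID": "TXN-20250115-001",
  "Date": "2025-01-15",
  "Time": "22:45",
  "Amount": 1000,
  "Payment_Type": "cash",
  "Category": "other",
  "Transaction_Type": "borrowed",
  "Person_Name": "John",
  "Description": "Borrowed ₹1000 from John",
  "Added_At": "2025-01-15T22:45:12+05:30"
}
```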
by Charles
# 🚀 Daily IndieHackers Reddit Trend Analysis to Slack

> Transform Reddit chaos into actionable startup intelligence
> Get AI-powered insights from r/indiehackers delivered to your Slack every morning

## 🎯 Who It's For

This template is designed for startup founders, growth teams, and product managers who need to:
- Stay ahead of indie hacker trends without manual Reddit browsing
- Understand what's working in the entrepreneurial community
- Get actionable insights for product and marketing decisions
- Keep their team informed about emerging opportunities

Perfect for teams building products for entrepreneurs or anyone wanting to leverage community intelligence for competitive advantage.

## ✨ What It Does

Transform your morning routine with automated intelligence gathering that delivers structured, AI-powered summaries of the hottest r/indiehackers discussions directly to your Slack channel.

### 🧠 Smart Analysis Features

| Feature | Description |
|---------|-------------|
| 🔥 Hotness Scoring | Calculates engagement scores using time-decay algorithms |
| 📊 Topic Extraction | Identifies key themes and trending subjects |
| 💰 Traction Signals | Spots revenue, metrics, and growth indicators |
| 🎯 Theme Clustering | Groups posts into actionable categories |
| ⚡ Action Items | Generates specific recommendations for your team |

### 📱 Slack Integration

Receive beautifully formatted messages with:
- Executive summaries and key takeaways
- Top 3 hottest posts with engagement metrics
- Interactive buttons for deeper exploration
- Team discussion prompts

## ⚙️ How It Works

```mermaid
graph LR
    A[🕐 Daily 8AM Trigger] --> B[📱 Fetch Reddit Posts]
    B --> C[🔄 Process Data]
    C --> D[🤖 Gemini AI Analysis]
    D --> E[✨ Groq Slack Formatting]
    E --> F[💬 Deliver to Slack]
```

### 🔄 The Complete Process

1. **Automated Trigger** - every morning at 8 AM, the workflow springs into action
2. **Reddit Data Collection** - fetches the latest 5 posts from r/indiehackers with full metadata
3. **Data Processing** - structures raw Reddit data for optimal AI analysis
4. **AI-Powered Analysis** - Gemini AI performs deep analysis, calculating hotness scores, extracting topics, and identifying patterns
5. **Slack Formatting** - a Groq AI Agent transforms the insights into beautiful Slack Block Kit messages
6. **Team Delivery** - your designated Slack channel receives the formatted analysis

## 🛠️ Requirements

You'll need API access for: Reddit (OAuth2), Google Gemini, Groq, and Slack (OAuth2). All have free tiers available.

## 🚀 Setup Guide

### 1️⃣ Configure Your Credentials

Add these credentials in n8n: Reddit OAuth2, Google Gemini, Groq, and Slack OAuth2. The workflow will guide you through each setup.
### 2️⃣ Customize the Schedule

Default: daily at 8:00 AM. To modify, edit the "Daily Schedule" cron trigger node:

```
// Example: Run at 9:30 AM
{ "triggerTimes": { "item": [{ "hour": 9, "minute": 30 }] } }
```

### 3️⃣ Set Your Slack Destination

1. Open the "Send to Slack" node
2. Select your target channel
3. Configure notification preferences

### 4️⃣ Adjust Analysis Parameters

Post limit - change from the default of 5 posts:

```
// In the "Get many posts" Reddit node
"limit": 10  // Recommended: 3-10 posts
```

Context customization:

```json
{
  "channel_type": "team",
  "audience": "Growth, Product, and Founders",
  "cta_link": "https://your-dashboard.com",
  "timeframe_label": "This Week"
}
```

## 🎨 Customization Options

### 🔍 Analysis Focus Areas

Transform the workflow for different insights:
- **SaaS-focused analysis** - add to the Gemini prompt: "Focus on SaaS and B2B insights, prioritizing recurring revenue and product-market fit signals"
- **Geographic targeting** - add: "Prioritize posts relevant to [your region/market]"
- **Stage-specific insights** - add: "Focus on [early-stage/growth-stage] startup challenges"

### 📈 Hotness Algorithm Tweaking

- Default formula: `(ups + 2*num_comments) * freshness_decay`
- Emphasize comments: `(ups + 3*num_comments) * freshness_decay`
- Include upvote ratio: `(ups * upvote_ratio + 2*num_comments) * freshness_decay`

### 🌐 Multi-Subreddit Analysis

Expand beyond r/indiehackers with additional communities:
- r/startups
- r/entrepreneur
- r/SideProject
- r/buildinpublic
- r/nocode

### 💾 Data Storage Extensions

Enhance with historical tracking:

| Node Type | Purpose | Benefit |
|-----------|---------|---------|
| Google Sheets | Trend storage | Historical analysis |
| Airtable | Advanced data management | Rich analytics |
| Webhook | External analytics | Custom dashboards |

## 📊 Expected Output

### 📱 Daily Slack Message Structure

```
🚀 IndieHackers Trends — This Week

📋 TL;DR: [One-sentence key insight]

🔥 Hot Posts (Top 3)
[Post Title] (Hotness: 8.7)
Topics: SaaS launch, pricing strategy
💬 23 comments | 👍 156 ups | 📅 Posted 4 hours ago
[Open Reddit Button]

🧭 Themes Summary
Go-to-market tactics — 3 posts, hotness: 24.1
Product launches — 2 posts, hotness: 18.3

✅ What to Do Now
- Test pricing page variations based on community feedback
- Consider cold email strategies mentioned in hot posts
- Validate product ideas using discussed frameworks

[Open Dashboard Button]
```

## 💡 Pro Tips for Success

### 🎯 Optimization Strategies

- **Week 1-2: Baseline** - monitor output quality and team engagement; note which insights generate the most discussion
- **Week 3-4: Refinement** - adjust AI prompts based on feedback; fine-tune hotness scoring for your needs
- **Month 2+: Advanced usage** - add historical trend analysis, create custom dashboards with stored data, and build feedback loops for continuous improvement

### 🚨 Common Pitfalls to Avoid

| Issue | Solution |
|-------|----------|
| API rate limits | Reduce post count or increase time intervals |
| Poor insight quality | Refine prompts with specific examples |
| Team engagement drop | Rotate focus areas and encourage thread discussions |
| Information overload | Limit to top 3 posts and key themes only |

## 🔧 Troubleshooting

### ❌ Common Issues & Solutions

- **"Model not found" error** - cause: Gemini regional availability. Fix: check supported regions or switch to an alternative AI model.
- **Slack formatting broken** - cause: invalid Block Kit JSON. Fix: validate the JSON structure in the AI Agent output.
- **Missing Reddit data** - cause: API credentials or rate limits. Fix: verify the OAuth2 setup and check usage quotas.
- **AI timeouts** - cause: too much data or overly complex prompts. Fix: reduce the post count or simplify the analysis requests.
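When the Slack formatting breaks, it helps to compare the agent's output against a known-good Block Kit payload. A minimal valid example (a hand-written sketch, not the agent's exact output):

```json
{
  "blocks": [
    { "type": "header", "text": { "type": "plain_text", "text": "🚀 IndieHackers Trends — This Week" } },
    { "type": "section", "text": { "type": "mrkdwn", "text": "*TL;DR:* Pricing experiments dominated today's hottest posts." } },
    {
      "type": "actions",
      "elements": [
        { "type": "button", "text": { "type": "plain_text", "text": "Open Reddit" }, "url": "https://www.reddit.com/r/indiehackers/" }
      ]
    }
  ]
}
```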
### ⚡ Performance Optimization

- Keep analysis under 10 posts for optimal speed
- Monitor execution times in the n8n logs
- Add error handling nodes for production reliability
- Use webhook timeouts for external API calls

## 🌟 Advanced Use Cases

- **📈 Competitive intelligence** - modify prompts to track specific competitors or market segments mentioned in discussions
- **🎯 Product validation** - focus the analysis on posts related to your product category for market research
- **📝 Content strategy** - use trending topics to inform your content calendar and thought leadership
- **🤝 Community engagement** - identify opportunities to participate in discussions and build relationships

Ready to transform your startup intelligence gathering? 🚀 Deploy this workflow and start receiving actionable insights tomorrow morning!
by Manav Desai
# WhatsApp RAG Chatbot with Supabase, Gemini 2.5 Flash, and OpenAI Embeddings

This n8n template demonstrates how to build a WhatsApp-based AI chatbot that answers user questions using document retrieval (RAG), powered by Supabase for storage, OpenAI embeddings for semantic search, and the Gemini 2.5 Flash LLM for generating high-quality responses.

Use cases are many: turn your WhatsApp into a knowledge assistant for FAQs, customer support, or internal company documents — all without coding.

## Good to know

- The workflow uses **OpenAI embeddings** for both document embeddings and query embeddings, ensuring accurate semantic search.
- **Gemini 2.5 Flash LLM** is used to generate user-friendly answers from the retrieved context.
- Messages are processed in real time and sent back directly to WhatsApp.
- The workflow is modular — you can split document ingestion and query handling for large-scale setups.
- Supabase and WhatsApp API credentials must be configured before running.

## How it works

1. **Trigger**: a new WhatsApp message triggers the workflow via webhook.
2. **Message check**: determines whether the message is a query or a document upload.
3. **Document handling**: fetch the file URL from WhatsApp, convert the binary to text, generate embeddings with OpenAI, and store them in Supabase.
4. **Query handling**: generate query embeddings with OpenAI, retrieve relevant context from Supabase, and pass the context to Gemini 2.5 Flash to compose a response.
5. **Response**: send the answer back to the user on WhatsApp.
6. **Optional**: add a Gmail node to forward chat logs or daily summaries.

## How to use

1. Configure the WhatsApp Business API webhook for incoming messages.
2. Add your Supabase and OpenAI credentials in n8n's credentials manager.
3. Upload documents via WhatsApp to populate the Supabase vector store.
4. Ask queries — the bot retrieves context and answers using Gemini 2.5 Flash.

## Requirements

- **WhatsApp Business API** (or Twilio WhatsApp Sandbox)
- **Supabase account** (vector storage for embeddings)
- **OpenAI API key** (for generating embeddings)
- **Gemini API access** (for LLM responses)

## Customising this workflow

- Swap WhatsApp for Telegram, Slack, or email for different chat channels.
- Extend ingestion to other sources like Google Drive or Notion.
- Adjust the number of retrieved documents or the prompt style in Gemini for tone control.
- Add a Gmail output node to send logs or alerts automatically.
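If you are creating the Supabase vector store from scratch, the standard LangChain-style schema that n8n's Supabase Vector Store node expects looks roughly like this (a sketch: 1536 dimensions matches OpenAI's text-embedding-3-small and ada-002; adjust for other embedding models):

```sql
-- Enable pgvector and create the documents table
create extension if not exists vector;

create table documents (
  id bigserial primary key,
  content text,
  metadata jsonb,
  embedding vector(1536)
);

-- Similarity-search function called during query handling
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (id bigint, content text, metadata jsonb, similarity float)
language plpgsql
as $$
begin
  return query
  select d.id, d.content, d.metadata,
         1 - (d.embedding <=> query_embedding) as similarity
  from documents d
  where d.metadata @> filter
  order by d.embedding <=> query_embedding
  limit match_count;
end;
$$;
```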
by Akshay
## Overview

This project is an AI-powered hotel receptionist built using n8n, designed to handle guest queries automatically through WhatsApp. It integrates Google Gemini, Redis, MySQL, and Google Sheets via LangChain to create an intelligent conversational system that understands and answers booking-related questions in real time.

A standout feature of this workflow is its AI model-switching system — it dynamically assigns users to different Gemini models, balancing traffic, improving performance, and reducing API costs.

## How It Works

### WhatsApp Trigger

The workflow starts when a hotel guest sends a message through WhatsApp. The system captures the message text, contact details, and session information for further processing.

### Redis-Based Model Management

The workflow checks Redis for a saved record of the user's previously assigned AI model. If no record exists, a Model Decider node assigns a new model (e.g., Gemini 1 or Gemini 2). Redis then stores this model assignment for an hour, ensuring consistent routing and controlled traffic distribution.

### Model Selector

The Model Selector routes each user's request to the correct Gemini instance, enabling parallel execution across multiple AI models for faster response times and cost optimization.

### AI Agent Logic

The LangChain AI Agent serves as the system's reasoning core. It:
- Interprets guest questions such as "Who checked in today?", "Show me tomorrow's bookings.", or "What's the price for a deluxe suite for two nights?"
- Generates safe, read-only SQL SELECT queries.
- Fetches the requested data from the MySQL database.
- Combines this with dynamic pricing or promotions from Google Sheets, if available.

### Response Delivery

Once the AI Agent formulates an answer, it sends a natural-sounding message back to the guest via WhatsApp, completing the interaction loop.

## Setup & Requirements

### Prerequisites

Before deploying this workflow, ensure the following:
- **n8n instance** (local or hosted)
- **WhatsApp Cloud API** with messaging permissions
- **Google Gemini API key** (for both models)
- **Redis database** for user sessions and model routing
- **MySQL database** for hotel booking and guest data
- **Google Sheets account** (optional, for pricing or offer data)

### Step-by-Step Setup

1. **Configure credentials** - add all API credentials in n8n → Settings → Credentials (WhatsApp, Redis, MySQL, Google).
2. **Prepare databases** - example MySQL tables:
   - `bookings(id, guest_name, room_type, check_in, check_out)`
   - `rooms(id, type, rate, status)`
   Ensure the MySQL user has read-only permissions.
3. **Set up Redis** - create a key for each user, `llm-user:<whatsapp_id> = { "modelIndex": 0 }`, with a TTL of 3600 seconds (1 hour).
4. **Connect Google Sheets (optional)** - add your sheet under Google Sheets OAuth2 and use it to manage room rates, discounts, or seasonal offers dynamically.
5. **WhatsApp webhook configuration** - in Meta's Developer Console, set the webhook URL to your n8n instance and select message updates to trigger the workflow.
6. **Test the workflow** - send messages like "Who booked today?" or a voice message, and confirm that responses include real data from MySQL and contextual replies.

## Key Features

- **Text & voice support** for guest interactions
- **Automatic AI model-switching** using Redis
- **Session memory** for context-aware conversations
- **Read-only SQL query generation** for database safety
- **Google Sheets integration** for live pricing and availability
- **Scalable design** supporting multiple LLM instances
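The model-switching feature above can be implemented with a small Code node ahead of the Redis write. A minimal sketch, assuming a deterministic hash of the WhatsApp ID (the template may instead use round-robin or random assignment, and the exact payload path depends on your WhatsApp trigger):

```javascript
// Model Decider: derive a stable model index from the WhatsApp ID.
// A downstream Redis node stores it under llm-user:<whatsapp_id> with a 3600s TTL.
const MODEL_COUNT = 2; // e.g. Gemini 1 and Gemini 2

const msg = $input.first().json;
// Field path is an assumption; adjust to your WhatsApp Trigger's payload shape.
const waId = msg.contacts?.[0]?.wa_id ?? msg.from ?? 'unknown';

let hash = 0;
for (const ch of String(waId)) {
  hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash
}

return [{ json: { waId, modelIndex: hash % MODEL_COUNT } }];
```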
## Example Guest Queries

| Guest Query | AI Response Example |
|--------------|--------------------|
| "Who checked in today?" | "Two guests have checked in today: Mr. Ahmed (Room 203) and Ms. Priya (Room 410)." |
| "How much is a deluxe room for two nights?" | "A deluxe room costs $120 per night. The total for two nights is $240." |
| "Do you have any discounts this week?" | "Yes! We're offering a 10% weekend discount on all deluxe and suite rooms." |
| "Show me tomorrow's check-outs." | "Three check-outs are scheduled tomorrow: Mr. Khan (101), Ms. Lee (207), and Mr. Singh (309)." |

## Customization Options

### 🧩 Model Assignment Logic

You can modify the Model Decider node to:
- Assign models based on user load, region, or priority level.
- Increase or decrease the TTL in Redis for longer model persistence.

### 🧠 AI Agent Prompt

Adjust the system prompt to control tone and response behavior — for example:
- Add multilingual support.
- Include upselling or booking confirmation messages.

### 🗂️ Database Expansion

Extend MySQL to include:
- Staff schedules
- Maintenance records
- Restaurant reservations

Then link the new queries in the AI Agent node for richer responses.

## Tech Stack

- **n8n** – workflow automation & orchestration
- **Google Gemini (PaLM)** – LLM for reasoning & generation
- **Redis** – model assignment & session management
- **MySQL** – booking & guest data storage
- **Google Sheets** – dynamic pricing reference
- **WhatsApp Cloud API** – messaging interface

## Outcome

This workflow demonstrates how AI automation can transform hotel operations by combining WhatsApp communication, database intelligence, and multi-model AI reasoning. It's a production-ready foundation for scalable, cost-optimized, AI-driven hospitality solutions that deliver fast, accurate, and personalized guest interactions.
by franck fambou
## Overview

This intelligent chatbot workflow enables natural language conversations with your documents, supporting multiple file formats including PDFs, Word documents, Excel spreadsheets, and text files. Built with advanced RAG (Retrieval-Augmented Generation) technology, this chatbot can understand, analyze, and answer questions about your document content with contextual accuracy and intelligent responses.

## How It Works

Intelligent document processing & conversation pipeline:

- **Multi-format document ingestion**: automatically processes and indexes various document formats (PDF, DOCX, XLSX, TXT, etc.)
- **Smart content chunking**: breaks documents down into meaningful segments while preserving context and relationships
- **Vector database storage**: creates searchable embeddings for fast and accurate information retrieval
- **Contextual conversation engine**: uses AI to understand user queries and retrieve relevant document sections
- **Natural language responses**: generates human-like responses with citations and source references
- **Multi-turn conversations**: maintains conversation history and context across multiple interactions
- **Real-time processing**: instant responses with live document updates and dynamic content refresh

## Setup Instructions

Estimated setup time: 15-20 minutes

### Prerequisites

- n8n instance (v0.200.0 or higher recommended)
- OpenAI/Gemini API key for embeddings and chat completion
- Vector database service (optional: Pinecone, Weaviate, or Qdrant)
- File storage service (optional: Google Drive, Dropbox, AWS S3)
- Web server for the chatbot interface (optional)

### Configuration Steps

1. **Configure document input sources**
   - Set up a file upload webhook for direct document submission
   - Configure cloud storage watchers for automatic document processing
   - Add support for multiple file formats and size limits
   - Set up document validation and security checks
2. **Set up the document processing pipeline**
   - Configure text extraction engines for different file types
   - Set up intelligent chunking parameters (chunk size, overlap, boundaries)
   - Add metadata extraction for document categorization
   - Configure OCR for scanned documents (optional)
3. **Configure the vector database**
   - Set up your chosen vector database credentials
   - Configure embedding model settings (Gemini models/text-embedding-004 recommended)
   - Set up the collection/index structure for document storage
   - Configure search parameters and similarity thresholds
4. **Set up the AI chat engine**
   - Add your AI service API credentials (Gemini, Claude, etc.)
   - Configure conversation prompts and system instructions
   - Set up context window management and token optimization
   - Add response formatting and citation rules
5. **Configure the chat interface**
   - Set up webhook endpoints for the chat API
   - Configure session management and conversation history
   - Add authentication and rate limiting (optional)
   - Set up real-time updates and streaming responses
6. **Set up monitoring & analytics**
   - Configure conversation logging and analytics
   - Set up performance monitoring for response times
   - Add usage tracking and cost monitoring
   - Configure error handling and failover mechanisms
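As a reference point for steps 2 and 3, a typical starting configuration for chunking and retrieval looks like this (illustrative values only; tune chunk size and thresholds per document type):

```json
{
  "chunking": { "chunkSize": 1000, "chunkOverlap": 200 },
  "retrieval": { "topK": 4, "similarityThreshold": 0.75 },
  "embeddingModel": "text-embedding-004"
}
```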
## Use Cases

### Business & Enterprise
- **Knowledge base queries**: ask questions about company policies, procedures, and documentation
- **Contract analysis**: query legal documents, contracts, and compliance materials
- **Training materials**: interactive learning with training manuals and educational content
- **Financial reports**: analyze and discuss financial statements, budgets, and forecasts

### Research & Academia
- **Research paper analysis**: discuss findings, methodologies, and citations from academic papers
- **Literature reviews**: compare and contrast multiple research documents
- **Thesis support**: get insights from reference materials and research data
- **Grant proposals**: analyze requirements and optimize proposal content

### Legal & Compliance
- **Legal document review**: query contracts, agreements, and legal texts
- **Regulatory compliance**: understand compliance requirements from regulatory documents
- **Case law research**: analyze legal precedents and court decisions
- **Policy analysis**: interpret organizational policies and procedures

### Technical Documentation
- **API documentation**: interactive queries about technical specifications
- **User manuals**: get help and guidance from product documentation
- **Code documentation**: understand codebases and technical implementations
- **Troubleshooting guides**: interactive problem-solving with technical guides

### Personal Productivity
- **Document summarization**: get quick summaries of long documents
- **Information extraction**: find specific data points across multiple documents
- **Content research**: research topics across your personal document library
- **Meeting notes**: query and analyze meeting transcripts and notes

## Key Features

### Advanced Document Processing
- **Multi-format support**: PDF, DOCX, XLSX, TXT, PPTX, and more
- **Intelligent chunking**: context-aware document segmentation
- **Metadata extraction**: automatic categorization and tagging
- **OCR integration**: process scanned documents and images with text

### Intelligent Conversation
- **Contextual understanding**: maintains conversation context and document relationships
- **Source attribution**: provides citations and references for all answers
- **Multi-document queries**: compare and analyze across multiple documents
- **Follow-up questions**: natural conversation flow with clarifying questions

### Performance & Scalability
- **Fast retrieval**: vector-based semantic search for instant responses
- **Scalable architecture**: handles large document collections efficiently
- **Batch processing**: process multiple documents simultaneously
- **Caching system**: optimized response times with intelligent caching

### Security & Privacy
- **Document encryption**: secure storage and transmission of sensitive documents
- **Access control**: user-based permissions and document access restrictions
- **Audit logging**: complete conversation and access audit trails
- **Data retention**: configurable data retention and deletion policies

## Technical Architecture

### Document Processing Flow

File Upload → Format Detection → Text Extraction → Content Chunking →
Metadata Extraction → Embedding Generation → Vector Storage → Index Creation

### Conversation Flow

User Query → Intent Analysis → Vector Search → Context Retrieval →
Response Generation → Source Attribution → Answer Formatting → Delivery

### Supported File Formats

- **Documents**: PDF, DOC, DOCX, RTF, TXT, MD
- **Spreadsheets**: XLS, XLSX, CSV
- **Presentations**: PPT, PPTX
- **Images**: PNG, JPG (with OCR)
- **Archives**: ZIP (auto-extracts supported formats)
- **Web**: HTML, XML

## Integration Options

### Chat Interfaces
- **Web widget**: embeddable chat widget for websites
- **API endpoints**: RESTful API for custom integrations
- **Slack/Teams**: direct integration with team collaboration tools
- **Mobile apps**: API-first design for mobile application integration

### Data Sources
- **Cloud storage**: Google Drive, Dropbox, OneDrive, AWS S3
- **Document systems**: SharePoint, Confluence, Notion
- **Email**: process attachments from email systems
- **CRM/ERP**: integration with business systems

## Performance Specifications

- **Response time**: < 3 seconds for typical queries
- **Document capacity**: supports collections of 10,000+ documents
- **Concurrent users**: scales to handle multiple simultaneous conversations
- **Accuracy**: >90% relevance for domain-specific queries

## Advanced Configuration Options

### Customization
- **Custom prompts**: tailor AI behavior for specific use cases
- **Branding**: customize the chat interface with your company branding
- **Language support**: multi-language document processing and responses
- **Domain expertise**: fine-tune for specific industries or domains

### Analytics & Monitoring
- **Usage analytics**: track popular queries and document usage
- **Performance metrics**: monitor response times and accuracy
- **User feedback**: collect ratings and improve responses
- **A/B testing**: test different configurations and prompts

## Troubleshooting & Support

### Common Issues
- **Slow responses**: check vector database performance and API limits
- **Inaccurate answers**: review the chunking strategy and embedding quality
- **Format errors**: verify document formats and processing capabilities
- **Memory issues**: monitor token usage and context window limits

### Optimization Tips
- Use clear, specific questions for best results
- Ensure documents are well-formatted with proper headers
- Perform regular vector database maintenance for optimal performance
- Monitor API usage to optimize costs and performance
by Jimleuk
This n8n template shows you how to create an MCP server out of your existing n8n workflows. With this, any connected MCP client can get more done with powerful end-to-end workflows rather than just simple tools.

Designing agent tools for outcome rather than utility has long been a recommended practice of mine, and it applies well to building MCP servers; in gist, agents should make the fewest calls possible to complete a task. This is why n8n can be a great fit for MCP servers!

This template connects your agent/MCP client (like Claude Desktop) to your existing workflows by allowing the AI to discover, manage, and run these workflows indirectly.

## How it works

- An MCP trigger is used and attaches 4 custom workflow tools to discover and manage existing workflows, plus 1 custom workflow tool to execute them.
- We introduce the idea of "available" workflows which the agent is allowed to use. This helps limit and avoid some of the issues of trying to use every workflow, such as clashes or non-production workflows.
- The n8n node is a core node which taps into your n8n instance's API and can retrieve all workflows or filter by tag. For our example, we've tagged the workflows we want to use with "mcp", and these are exposed through the "search workflows" tool.
- Redis is used as our main memory for keeping track of which workflows are "available". The tools we have are "add workflow", "remove workflow", and "list workflows". The agent should be able to manage this autonomously.
- Our approach to letting the agent execute workflows is to use the Subworkflow trigger. The tricky part was figuring out the input schema for each, which was eventually solved by pulling this information out of the workflow's template JSON and adding it to the "available" workflow's description.
- To pass parameters through the Subworkflow trigger, we use the passthrough method: incoming data is used when parameters are not explicitly set within the node.
- When running, the agent will not see the "available" workflows immediately but will need to discover them via "list" and "search". The human will need to make the agent aware that these workflows are preferred when answering queries or completing tasks.

## How to use

1. First, decide which workflows will be made visible to the MCP server. This example uses the tag "mcp", but you can use all workflows or filter in other ways.
2. Next, ensure these workflows have Subworkflow triggers with an input schema set. This is how the MCP server will run them.
3. Set the MCP server to "active", which turns on production mode and makes it available at the production URL.
4. Use this production URL in your MCP client. For Claude Desktop, see the instructions here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop.
5. There is a small learning curve which will shape how you communicate with this MCP server, so be patient and test. The MCP server will work better if there is a focused goal in mind, i.e. research and report, rather than just a collection of unrelated tools.

## Requirements

- n8n API key to filter for selected workflows.
- n8n workflows with Subworkflow triggers!
- Redis for memory and tracking the "available" workflows.
- MCP client or agent, such as Claude Desktop: https://claude.ai/download

## Customising this workflow

If your targeted workflows do not use the Subworkflow trigger, it is possible to amend the `executeTool` to use HTTP requests for webhooks.
Managing available workflows helps if you have many workflows where some may be too similar for the agent. If this isn't a problem for you, however, feel free to remove the concept of "available" and let the agent discover and use all workflows!
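For step 4, wiring the production URL into Claude Desktop happens in `claude_desktop_config.json`. At the time of writing, the n8n docs linked above bridge the HTTP endpoint via mcp-remote; a sketch (replace the URL with your MCP trigger's production URL, and check the docs for the current method):

```json
{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": ["mcp-remote", "https://your-n8n-instance.com/mcp/your-path"]
    }
  }
}
```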
by Tiartyos
# Voice Cloning Workflow - Zyphra Zonos API

## Who is this for?

This workflow is designed for developers, content creators, and businesses looking to automate high-quality voice synthesis using AI voice cloning technology.

## What problem does this solve?

It automates the process of generating natural-sounding speech from text using a sample voice file, eliminating the need for manual voice recording and providing consistent voice output for applications like audiobooks, virtual assistants, or content localization.

## What this workflow does

The workflow receives text and voice cloning parameters via webhook, reads a sample voice file from your storage, sends the data to Zyphra's Zonos API for voice synthesis, and saves the generated audio file to your specified output location.

## Prerequisites

You'll need:
- An API key from Zyphra (obtain from https://playground.zyphra.com/settings/api-keys)
- Account registration at https://playground.zyphra.com
- A sample voice file stored on accessible disk/cloud storage
- An n8n instance running with webhook capabilities

## Setup

1. Configure your Zyphra API key in the "Call Zyphra Clone API" node under Header Parameters (Name: `X-API-Key`, Value: your API key)
2. Ensure your sample voice files are accessible at the paths you'll specify
3. Test that the webhook endpoint is accessible

## Supported Audio Formats

The API supports multiple output formats through the `mime_type` parameter:
- **WebM** (default) - audio/webm
- **Ogg** - audio/ogg
- **WAV** - audio/wav
- **MP3** - audio/mp3 or audio/mpeg
- **MP4/AAC** - audio/mp4 or audio/aac

## Usage Example

Endpoint: `POST http://localhost:5678/webhook-test/voice-clone`

Headers: `Content-Type: application/json`

Request body:

```json
{
  "text": "Hello there! This voice sounds just like the sample!",
  "speaking_rate": 18,
  "sample_voice_path": "/data/output/sampleVoice.wav",
  "output_path": "/data/output/",
  "language_iso_code": "en-us",
  "mime_type": "audio/wav",
  "model": "zonos-v0.1-transformer",
  "emotion": {
    "happiness": 0.8,
    "neutral": 0.3,
    "sadness": 0.05,
    "disgust": 0.05,
    "fear": 0.05,
    "surprise": 0.05,
    "anger": 0.05,
    "other": 0.5
  }
}
```

## Parameters

Required parameters:
- **text**: text to synthesize into speech
- **sample_voice_path**: path to your voice sample file
- **output_path**: directory where the generated audio will be saved

Optional parameters (with defaults):
- **speaking_rate**: 15 - speech speed
- **language_iso_code**: "en-us" - language code
- **mime_type**: "audio/wav" - output audio format
- **model**: "zonos-v0.1-transformer" - AI model to use
- **emotion**: object with emotion levels (0-1 scale)
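Putting the pieces together, a quick test from the command line could look like this (assuming a local n8n instance and the paths above; omitted optional parameters fall back to their defaults):

```bash
curl -X POST http://localhost:5678/webhook-test/voice-clone \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Hello there! This voice sounds just like the sample!",
    "sample_voice_path": "/data/output/sampleVoice.wav",
    "output_path": "/data/output/",
    "mime_type": "audio/wav"
  }'
```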
by Mario
Purpose
Use a lightweight voice interface, for you and your entire organization, to interact with an AI Supervisor: a personal AI assistant which has access to your custom workflows. You can also connect the supervisor to your already existing agents.

Demo & Explanation

How it works
After recording a message in the Vagent App, it gets transcribed and sent, together with a session ID, to the registered webhook.
The Main Agent acts as a router. It interprets the message using the stored chat history (bound to the session ID) and chooses which tool to use to perform the required action. Tools on this level are workflows which contain subordinated agents.
Since the Main Agent interprets the original message, the raw input is passed to the tools/sub-agents as a separate parameter.
Within the sub-agents the actual processing takes place. Each of those has its own chat memory (with a suffix appended to the main session ID) to achieve a clear separation of concerns.
Depending on the required action, an HTTP Request Tool is called. The result is formatted in Markdown and returned to the Main Agent with an additional short prompt, so it does not get reinterpreted by the Main Agent. Drafts are separated from the short message by added indentation (angle brackets). If some information is missing, no tool is called just yet; instead a message is returned to the user.
The Main Agent then outputs the result from the called sub-agent. If a draft is included, it gets separated from the spoken output.
Finally the formatted output is returned as the response to the webhook. The message is split into a spoken and a text version, which keeps the App from reading out loud unnecessary information like drafts in this example.
See the full documentation of Vagent: https://vagent.io/docs

Setup
Import this workflow into your n8n instance
Follow the instructions given in the sticky notes on the canvas
Set up your credentials. OpenAI can be replaced by another LLM in the workflow, but is required for the App to work. Google Calendar and Notion are required for all scenarios to work
Copy the Webhook URL from the Webhook node of the main workflow
Download the Vagent App from https://vagent.io
In the settings, paste your OpenAI API token, the Webhook URL and the password defined for Header Auth
Now you can use the App to interact with the Multi-Agent setup using your voice by tapping the mic symbol in the App to record your message. To use the chat trigger (for testing) properly, temporarily disable the nodes after the Tools Agent.
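The exact webhook response contract is defined by Vagent (see the docs linked above). Purely as a hypothetical illustration of the spoken/text split described here, the response could be modelled like this:

```typescript
// Hypothetical shape only; check https://vagent.io/docs for the real contract.
// The idea: the webhook responds with a short spoken part the app reads aloud,
// plus an optional text part (e.g. a draft) that is displayed but not spoken.
interface VagentResponse {
  spoken: string; // short confirmation, read out loud by the app
  text?: string;  // optional draft/details, shown on screen only
}

const example: VagentResponse = {
  spoken: "I've drafted the reply to Anna. Want me to send it?",
  text: "> Hi Anna,\n> Thanks for the update...", // indented draft, per the pattern above
};
```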
by Jimleuk
This n8n template introduces the Dynamic Prompts AI workflow pattern, which is incredibly useful for certain types of data extraction tasks where attributes are unknown or need to remain flexible.

The general idea behind this pattern is that the prompts describing the attributes to extract live outside the template, so they can be changed at any time without needing to edit the template. This seriously cuts down on maintenance requirements and is reusable for any number of tables at little cost.

Check out the video demo I did for n8n Studio here: https://www.youtube.com/watch?v=_fNAD1u8BZw
Check out the example Airtable here: https://airtable.com/appAyH3GCBJ56cfXl/shrXzR1Tj99kuQbyL
Looking for the Baserow version? https://n8n.io/workflows/2780-ai-data-extraction-with-dynamic-prompts-and-baserow/

How it works
Given we have an "input" field for context and a number of fields for the data we want to extract, this template runs in the background, reacts to any changes to either the "input" or the fields, and automatically updates the rows accordingly.
The key is that Airtable fields have a special property called the "field description". In this pattern, we use this property to let the user store a simple prompt describing the data that should exist in the column. Our n8n template reads these column descriptions, aka "prompts", as instructions for tasks to perform on the "input" (a sketch of reading these descriptions follows at the end of this listing).
In this template, the "input" is a PDF of a resume/CV and the columns are attributes an HR person would want to extract from it, such as full name, address, last position, years of experience etc.

How to use
First publish this template and ensure it's accessible via its webhook URL.
You then have to run the "create airtable webhooks" mini-flow to configure your Airtable to send change events to the n8n template. This mini-flow exists in the template, but you'll have to update the IDs. Check the template for more instructions.

Requirements
Airtable for tables/database
OpenAI for LLM and extraction. Feel free to choose another LLM if preferred.

Customising this workflow
If you're not using files, you can replace the "input" field with anything you like. For example, the "input" could be single-line text.
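The core trick is that Airtable exposes field descriptions programmatically. As a minimal sketch (not the template's exact implementation), here is how the column "prompts" could be read via Airtable's metadata API; the token is a placeholder, and the base ID is the example base linked above:

```typescript
// Minimal sketch: read Airtable field descriptions, which this pattern uses as prompts.
// AIRTABLE_TOKEN is a placeholder; BASE_ID is the example base linked above.
const AIRTABLE_TOKEN = process.env.AIRTABLE_TOKEN ?? "";
const BASE_ID = "appAyH3GCBJ56cfXl";

async function getFieldPrompts(tableName: string) {
  const res = await fetch(
    `https://api.airtable.com/v0/meta/bases/${BASE_ID}/tables`,
    { headers: { Authorization: `Bearer ${AIRTABLE_TOKEN}` } },
  );
  const { tables } = await res.json();
  const table = tables.find((t: { name: string }) => t.name === tableName);
  // Each field's description doubles as the extraction prompt for that column.
  return table.fields
    .filter((f: { description?: string }) => f.description)
    .map((f: { name: string; description?: string }) => ({
      column: f.name,
      prompt: f.description,
    }));
}

getFieldPrompts("Resumes").then(console.log); // "Resumes" is a hypothetical table name
```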
by max e
Turn plain-language chat like "Tomorrow 9 AM: write blog post" into neatly organised Todoist tasks with GPT-4o and n8n, zero code.

🪄 Ultimate Personal Todoist Agent
Turn natural-language requests into perfectly-organized Todoist tasks, all on autopilot inside n8n.

> "Add Finish quarterly report by Friday afternoon" → the agent creates the task, sets the due date & priority, and even drops it into the right project. ✨

🌟 Why this workflow rocks
**All-in-one Todoist super‑powers** – create, update, complete, move, archive… every major Todoist endpoint is wired up (tasks, projects, sections, labels, comments).
**LLM‑powered intent detection** – an OpenAI model interprets plain-English (or emoji‑filled!) messages so you don't have to remember slash‑commands.
**Minimal setup** – just two credentials and you're live.
**Battle‑tested building block** – use it as‑is, or plug the Todoist Agent node into your own agents & chatbots.

🛠️ What you'll need

| Credential | Where it's used | How to set it up |
| --- | --- | --- |
| OpenAI API | Orchestrator & LLM nodes | Paste your OpenAI secret key into an OpenAI credential in n8n. |
| Todoist OAuth2 | Todoist node and HTTP Request node | Log in to Todoist from your browser to set up the credential in n8n. |

> That's it: no webhooks, no extra secrets.
> Tested with *gpt‑4o‑latest*, the fastest & most accurate model in our trials.

⚡ Quick‑start (5 minutes)
Import the JSON template (hit ▶️ Try it out on the n8n template page or drag‑drop the file into your canvas).
Select your credentials in the two credential dropdowns.
Click Test workflow.
In the sample Function node, tweak the message field (e.g. "Tomorrow at 9 am: write blog post").
Run → watch your new Todoist task appear.
(Optional) Swap the Function node for your favourite chat trigger (Telegram, Slack, WhatsApp, Discord, you name it).
Boom, your personal Todoist genie is alive! 🧞♂️

🧩 How it works (under the hood)

[Trigger / Chat message]
        │
        ▼
[🗂️ Orchestrator Agent] ← OpenAI Chat Model + Short‑term Memory
        │ ↳ Parses intent & entities
        ▼
[🤖 Todoist Agent] ← 15+ Todoist endpoints
        │ ↳ Executes the right call (create, update, complete, etc.)
        ▼
[Done ✅]

The Orchestrator is an example. In production you can drop it and simply expose the Todoist Agent as a tool for any other agent workflow. (A sketch of the kind of call the Todoist Agent makes follows at the end of this listing.)

🎛️ Customising & extending

| Idea | How to do it |
| --- | --- |
| Notion / Sheets sync | After the Todoist Agent node, add a Notion or Google Sheets node to log completed items. |
| Voice commands | Swap the chat trigger for a Speech‑to‑Text node (e.g. Whisper). |

🤝 Need custom automations?
Want me to build or tweak something for you? → Email maxemelyanenko@gmail.com and let's make it happen!

⚠️ What's not included (yet)
Shared projects & other Todoist Pro/Business endpoints.
File attachments in the comments.
Editing comments.
Pull requests welcome! 🙌
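For context on what the Todoist Agent's endpoints boil down to, here is a minimal sketch of a raw task-creation call against the Todoist REST v2 API. The token is a placeholder, and in the workflow the content and due string come from the LLM's parsed intent rather than hard-coded values:

```typescript
// Minimal sketch: the kind of call the Todoist Agent makes after intent parsing.
// TODOIST_TOKEN is a placeholder; the agent fills content/due_string from the chat message.
const TODOIST_TOKEN = process.env.TODOIST_TOKEN ?? "";

async function createTask(content: string, dueString: string, priority = 1) {
  const res = await fetch("https://api.todoist.com/rest/v2/tasks", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TODOIST_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ content, due_string: dueString, priority }),
  });
  return res.json(); // the created task, including its id and url
}

// "Tomorrow 9 AM: write blog post" → parsed by the LLM into:
createTask("Write blog post", "tomorrow at 9am").then(console.log);
```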