by Hemanth Arety
Generate AEO strategy from brand input using AI competitor analysis This workflow automatically creates a comprehensive Answer Engine Optimization (AEO) strategy by identifying your top competitors, analyzing their positioning, and generating custom recommendations to help your brand rank in AI-powered search engines like ChatGPT, Perplexity, and Google SGE. Who it's for This template is perfect for: Digital marketing agencies** offering AEO services to clients In-house marketers** optimizing content for AI search engines Brand strategists** analyzing competitive positioning Content teams** creating AI-optimized content strategies SEO professionals** expanding into Answer Engine Optimization What it does The workflow automates the entire AEO research and strategy process in 6 steps: Collects brand information via a user-friendly web form (brand name, website, niche, product type, email) Identifies top 3 competitors using Google Gemini AI based on product overlap, market position, digital presence, and geographic factors Scrapes target brand website with Firecrawl to extract value propositions, features, and content themes Scrapes competitor websites in parallel to gather competitive intelligence Generates comprehensive AEO strategy using OpenAI GPT-4 with 15+ actionable recommendations Delivers formatted report via email with executive summary, competitive analysis, and implementation roadmap The entire process runs automatically and takes approximately 5-7 minutes to complete. How to set up Requirements You'll need API credentials for: Google Gemini API** (for competitor analysis) - Get API key OpenAI API** (for strategy generation) - Get API key Firecrawl API** (for web scraping) - Get API key Gmail account** (for email delivery) - Use OAuth2 authentication Setup Steps Import the workflow into your n8n instance Configure credentials: Add your Google Gemini API key to the "Google Gemini Chat Model" node Add your OpenAI API key to the "OpenAI Chat Model" node Add your Firecrawl API key as HTTP Header Auth credentials Connect your Gmail account using OAuth2 Activate the workflow and copy the form webhook URL Test the workflow by submitting a real brand through the form Check your email for the generated AEO strategy report Credentials Setup Tips For Firecrawl: Create HTTP Header Auth credentials with header name Authorization and value Bearer YOUR_API_KEY For Gmail: Use OAuth2 to avoid authentication issues with 2FA Test each API credential individually before running the full workflow How it works Competitor Identification The Google Gemini AI agent analyzes your brand based on 4 weighted criteria: product/service overlap (40%), market position (30%), digital presence (20%), and geographic overlap (10%). It returns structured JSON data with competitor names, URLs, overlap percentages, and detailed reasoning. Web Scraping Firecrawl extracts structured data from websites using custom schemas. For each site, it captures: company name, products/services, value proposition, target audience, key features, pricing info, and content themes. This runs asynchronously with 60-second waits to allow for complete extraction. 
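To make the scraping step concrete, here is a minimal sketch of the kind of JSON Schema that could back Firecrawl's "custom schemas" extraction. The property names mirror the fields listed above but are assumptions, not the template's exact schema; adjust them in the HTTP Request node that calls Firecrawl.

```javascript
// Hypothetical extraction schema for the Firecrawl call.
// Property names follow the fields described above; adapt as needed.
const extractionSchema = {
  type: "object",
  properties: {
    company_name:      { type: "string" },
    products_services: { type: "array", items: { type: "string" } },
    value_proposition: { type: "string" },
    target_audience:   { type: "string" },
    key_features:      { type: "array", items: { type: "string" } },
    pricing_info:      { type: "string" },
    content_themes:    { type: "array", items: { type: "string" } }
  },
  required: ["company_name", "value_proposition"]
};

console.log(JSON.stringify(extractionSchema, null, 2));
```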
Strategy Generation OpenAI GPT-4 analyzes the combined brand and competitor data to generate a comprehensive report including: executive summary, competitive analysis, 15+ specific AEO tactics across 4 categories (content optimization, structural improvements, authority building, answer engine targeting), content priority matrix with 10 ranked topics, and a detailed implementation roadmap. Email Delivery The strategy is formatted as a professional HTML email with clear sections, visual hierarchy, and actionable next steps. Recipients get an immediately implementable roadmap for improving their AEO performance. How to customize the workflow Change AI Models Replace Google Gemini** with Claude, GPT-4, or other LLM in the competitor analysis node Replace OpenAI** with Anthropic Claude or Google Gemini in the strategy generation node Both use LangChain agent nodes, making model swapping straightforward Modify Competitor Analysis Find more competitors**: Edit the AI prompt to request 5 or 10 competitors instead of 3 Add filtering criteria**: Include factors like company size, funding stage, or geographic focus Change ranking weights**: Adjust the 40/30/20/10 weighting in the prompt Enhance Data Collection Add social media scraping**: Include LinkedIn, Twitter/X, or Facebook page analysis Pull review data**: Integrate G2, Capterra, or Trustpilot APIs for customer sentiment Include traffic data**: Add SimilarWeb or Semrush API calls for competitive metrics Change Output Format Export to Google Docs**: Replace Gmail with Google Docs node to create shareable documents Send to Slack/Discord**: Post strategy summaries to team channels for collaboration Save to database**: Store results in Airtable, PostgreSQL, or MongoDB for tracking Create presentations**: Generate PowerPoint slides using automation tools Add More Features Schedule periodic analysis**: Run monthly competitive audits for specific brands A/B test strategies**: Generate multiple strategies and compare results over time Multi-language support**: Add translation nodes for international brands Custom branding**: Modify email templates with your agency's logo and colors Adjust Scraping Behavior Change Firecrawl schema**: Customize extracted data fields based on industry needs Add timeout handling**: Implement retry logic for failed scraping attempts Scrape more pages**: Extend beyond homepage to include blog, pricing, and about pages Use different scrapers**: Replace Firecrawl with Apify, Browserless, or custom solutions Tips for best results Provide clear brand information**: The more specific the product type and niche, the better the competitor identification Ensure websites are accessible**: Some sites block scrapers; consider adding user agents or rotating IPs Monitor API costs**: Firecrawl and OpenAI charges can add up; set usage limits Review generated strategies**: AI recommendations should be reviewed and customized for your specific context Iterate on prompts**: Fine-tune the AI prompts based on output quality over multiple runs Common use cases Client onboarding** for marketing agencies - Generate initial AEO assessments Content strategy planning** - Identify topics and angles competitors are missing Quarterly audits** - Track competitive positioning changes over time Product launches** - Understand competitive landscape before entering market Sales enablement** - Equip sales teams with competitive intelligence Note: This workflow uses community and AI nodes that require external API access. 
Make sure your n8n instance can make outbound HTTP requests and has the necessary LangChain nodes installed.
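For the competitor identification step, the Gemini agent is prompted to return structured JSON with names, URLs, overlap percentages, and reasoning. The sketch below shows one plausible shape (key names are illustrative, not the template's exact schema); keeping the structure stable matters when you edit the prompt to request more competitors or change the 40/30/20/10 weights, because downstream scraping nodes read these fields.

```javascript
// Illustrative shape only; actual keys depend on how the prompt and
// output parser are written in your copy of the workflow.
const exampleCompetitorOutput = {
  competitors: [
    {
      name: "Example Competitor Inc.",        // hypothetical
      url: "https://competitor.example.com",  // hypothetical
      overlapPercentage: 72,                  // from the 40/30/20/10 weighted criteria
      reasoning: "Strong product overlap and similar market position."
    }
    // ...two more entries when the prompt asks for the top 3
  ]
};

// Quick sanity check before passing data to the scraping step:
const valid = exampleCompetitorOutput.competitors.every(
  (c) => c.name && c.url && typeof c.overlapPercentage === "number"
);
console.log(valid ? "Competitor JSON looks well-formed" : "Check the AI output parser");
```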
by Charles
🚀 Daily IndieHackers Reddit Trend Analysis to Slack > Transform Reddit chaos into actionable startup intelligence > Get AI-powered insights from r/indiehackers delivered to your Slack every morning 🎯 Who's It For This template is designed for startup founders, growth teams, and product managers who need to: Stay ahead of indie hacker trends without manual Reddit browsing Understand what's working in the entrepreneurial community Get actionable insights for product and marketing decisions Keep their team informed about emerging opportunities Perfect for teams building products for entrepreneurs or anyone wanting to leverage community intelligence for competitive advantage. ✨ What It Does Transform your morning routine with automated intelligence gathering that delivers structured, AI-powered summaries of the hottest r/indiehackers discussions directly to your Slack channel. 🧠 Smart Analysis Features | Feature | Description | |---------|-------------| | 🔥 Hotness Scoring | Calculates engagement scores using time-decay algorithms | | 📊 Topic Extraction | Identifies key themes and trending subjects | | 💰 Traction Signals | Spots revenue, metrics, and growth indicators | | 🎯 Theme Clustering | Groups posts into actionable categories | | ⚡ Action Items | Generates specific recommendations for your team | 📱 Slack Integration Receive beautifully formatted messages with: Executive summaries and key takeaways Top 3 hottest posts with engagement metrics Interactive buttons for deeper exploration Team discussion prompts ⚙️ How It Works graph LR A[🕐 Daily 8AM Trigger] --> B[📱 Fetch Reddit Posts] B --> C[🔄 Process Data] C --> D[🤖 Gemini AI Analysis] D --> E[✨ Groq Slack Formatting] E --> F[💬 Deliver to Slack] 🔄 The Complete Process Step 1: Automated Trigger Every morning at 8 AM, the workflow springs into action Step 2: Reddit Data Collection Fetches the latest 5 posts from r/indiehackers with full metadata Step 3: Data Processing Structures raw Reddit data for optimal AI analysis Step 4: AI-Powered Analysis Gemini AI performs deep analysis calculating hotness scores, extracting topics, and identifying patterns Step 5: Slack Formatting Groq AI Agent transforms insights into beautiful Slack Block Kit messages Step 6: Team Delivery Your designated Slack channel receives the formatted analysis 🛠️ Requirements You'll need API access for: Reddit (OAuth2), Google Gemini, Groq, and Slack (OAuth2). All have free tiers available. 🚀 Setup Guide 1️⃣ Configure Your Credentials Add these credentials in n8n: Reddit OAuth2, Google Gemini, Groq, and Slack OAuth2. The workflow will guide you through each setup. 
2️⃣ Customize the Schedule Default: Daily at 8:00 AM To modify: Edit the "Daily Schedule" cron trigger node // Example: Run at 9:30 AM { "triggerTimes": { "item": [{ "hour": 9, "minute": 30 }] } } 3️⃣ Set Your Slack Destination Open the "Send to Slack" node Select your target channel Configure notification preferences 4️⃣ Adjust Analysis Parameters Post Limit: Change from default 5 posts // In "Get many posts" Reddit node "limit": 10 // Recommended: 3-10 posts Context Customization: { "channel_type": "team", "audience": "Growth, Product, and Founders", "cta_link": "https://your-dashboard.com", "timeframe_label": "This Week" } 🎨 Customization Options 🔍 Analysis Focus Areas Transform the workflow for different insights: SaaS-Focused Analysis Add to Gemini prompt: "Focus on SaaS and B2B insights, prioritizing recurring revenue and product-market fit signals" Geographic Targeting Add: "Prioritize posts relevant to [your region/market]" Stage-Specific Insights Add: "Focus on [early-stage/growth-stage] startup challenges" 📈 Hotness Algorithm Tweaking Default Formula: (ups + 2*num_comments) * freshness_decay Emphasize Comments: (ups + 3*num_comments) * freshness_decay Include Upvote Ratio: (ups * upvote_ratio + 2*num_comments) * freshness_decay 🌐 Multi-Subreddit Analysis Expand beyond r/indiehackers: Additional Communities: r/startups r/entrepreneur r/SideProject r/buildinpublic r/nocode 💾 Data Storage Extensions Enhance with historical tracking: | Node Type | Purpose | Benefit | |-----------|---------|---------| | Google Sheets | Trend storage | Historical analysis | | Airtable | Advanced data management | Rich analytics | | Webhook | External analytics | Custom dashboards | 📊 Expected Output 📱 Daily Slack Message Structure 🚀 IndieHackers Trends — This Week 📋 TL;DR: [One-sentence key insight] 🔥 Hot Posts (Top 3) [Post Title] (Hotness: 8.7) Topics: SaaS launch, pricing strategy 💬 23 comments | 👍 156 ups | 📅 Posted 4 hours ago [Open Reddit Button] 🧭 Themes Summary Go-to-market tactics — 3 posts, hotness: 24.1 Product launches — 2 posts, hotness: 18.3 ✅ What to Do Now Test pricing page variations based on community feedback Consider cold email strategies mentioned in hot posts Validate product ideas using discussed frameworks [Open Dashboard Button] 💡 Pro Tips for Success 🎯 Optimization Strategies Week 1-2: Baseline Monitor output quality and team engagement Note which insights generate the most discussion Week 3-4: Refinement Adjust AI prompts based on feedback Fine-tune hotness scoring for your needs Month 2+: Advanced Usage Add historical trend analysis Create custom dashboards with stored data Build feedback loops for continuous improvement 🚨 Common Pitfalls to Avoid | Issue | Solution | |-------|---------| | API Rate Limits | Reduce post count or increase time intervals | | Poor Insight Quality | Refine prompts with specific examples | | Team Engagement Drop | Rotate focus areas and encourage thread discussions | | Information Overload | Limit to top 3 posts and key themes only | 🔧 Troubleshooting ❌ Common Issues & Solutions "Model not found" Error Cause: Gemini regional availability Fix: Check supported regions or switch to alternative AI model Slack Formatting Broken Cause: Invalid Block Kit JSON Fix: Validate JSON structure in AI Agent output Missing Reddit Data Cause: API credentials or rate limits Fix: Verify OAuth2 setup and check usage quotas AI Timeouts Cause: Too much data or complex prompts Fix: Reduce post count or simplify analysis requests ⚡ Performance Optimization Keep analysis 
under 10 posts for optimal speed
Monitor execution times in n8n logs
Add error handling nodes for production reliability
Use webhook timeouts for external API calls

🌟 Advanced Use Cases

📈 Competitive Intelligence
Modify prompts to track specific competitors or market segments mentioned in discussions

🎯 Product Validation
Focus analysis on posts related to your product category for market research

📝 Content Strategy
Use trending topics to inform your content calendar and thought leadership

🤝 Community Engagement
Identify opportunities to participate in discussions and build relationships

Ready to transform your startup intelligence gathering? 🚀 Deploy this workflow and start receiving actionable insights tomorrow morning!
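As a reference for the "Hotness Algorithm Tweaking" section above, here is a minimal sketch of the default score, (ups + 2*num_comments) * freshness_decay, written as plain JavaScript. The exponential half-life decay is an assumption; the template's processing node may compute freshness differently.

```javascript
// Minimal sketch of the default hotness formula with an assumed
// exponential half-life decay for freshness.
function hotness(post, halfLifeHours = 12) {
  const ageHours = (Date.now() / 1000 - post.created_utc) / 3600;
  const freshnessDecay = Math.pow(0.5, ageHours / halfLifeHours);
  return (post.ups + 2 * post.num_comments) * freshnessDecay;
}

// Example with fields as returned by the Reddit node
const post = { ups: 156, num_comments: 23, created_utc: Date.now() / 1000 - 4 * 3600 };
console.log(hotness(post).toFixed(1)); // ≈ 160 for a 4-hour-old post
```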
by TOMOMITSU ASANO
{ "name": "IoT Sensor Data Aggregation with AI-Powered Anomaly Detection", "nodes": [ { "parameters": { "content": "## How it works\nThis workflow monitors IoT sensors in real-time. It ingests data via MQTT or a schedule, normalizes the format, and removes duplicates using data fingerprinting. An AI Agent then analyzes readings against defined thresholds to detect anomalies. Finally, it routes alerts to Slack or Email based on severity and logs everything to Google Sheets.\n\n## Setup steps\n1. Configure the MQTT Trigger with your broker details.\n2. Set your specific limits in the Define Sensor Thresholds node.\n3. Connect your OpenAI credential to the Chat Model node.\n4. Authenticate the Gmail, Slack, and Google Sheets nodes.\n5. Create a Google Sheet with headers: timestamp, sensorId, location, readings, analysis.", "height": 484, "width": 360 }, "id": "298da7ff-0e47-4b6c-85f5-2ce77275cdf3", "name": "Main Overview", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -2352, -480 ] }, { "parameters": { "content": "## 1. Data Ingestion\nCaptures sensor data via MQTT for real-time streams or runs on a schedule for batch processing. Both streams are merged for unified handling.", "height": 488, "width": 412, "color": 7 }, "id": "4794b396-cd71-429c-bcef-61780a55d707", "name": "Section: Ingestion", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1822, -48 ] }, { "parameters": { "content": "## 2. Normalization & Deduplication\nSets monitoring thresholds, standardizes the JSON structure, creates a content hash, and filters out duplicate readings to prevent redundant API calls.", "height": 316, "width": 884, "color": 7 }, "id": "339e7cb7-491e-44c9-b561-983e147237d8", "name": "Section: Processing", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -1376, 32 ] }, { "parameters": { "content": "## 3. AI Anomaly Detection\nAn AI Agent evaluates sensor data against thresholds to identify anomalies, assigning severity levels and providing actionable recommendations.", "height": 528, "width": 460, "color": 7 }, "id": "ebcb7ca3-f70c-4a90-8a2a-f489e7be4c73", "name": "Section: AI Analysis", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ -422, 24 ] }, { "parameters": { "content": "## 4. 
Routing & Archiving\nRoutes alerts based on severity (Critical = Email+Slack, Warning = Slack) and archives all data points to Google Sheets for historical analysis.", "height": 756, "width": 900, "color": 7 }, "id": "7f2b32a5-d3b2-4fea-844f-4b39b8e8a239", "name": "Section: Alerting", "type": "n8n-nodes-base.stickyNote", "typeVersion": 1, "position": [ 94, -196 ] }, { "parameters": { "topics": "sensors/+/data", "options": {} }, "id": "bc86720b-9de9-4693-b090-343d3ebad3a3", "name": "MQTT Sensor Trigger", "type": "n8n-nodes-base.mqttTrigger", "typeVersion": 1, "position": [ -1760, 88 ] }, { "parameters": { "rule": { "interval": [ { "field": "minutes", "minutesInterval": 15 } ] } }, "id": "1c38f2d0-aa00-447e-bdae-bffd08c38461", "name": "Batch Process Schedule", "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.2, "position": [ -1760, 280 ] }, { "parameters": { "mode": "chooseBranch" }, "id": "f9b41822-ee61-448b-b324-38483036e0e1", "name": "Merge Triggers", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ -1536, 184 ] }, { "parameters": { "mode": "raw", "jsonOutput": "{\n \"thresholds\": {\n \"temperature\": {\"min\": -10, \"max\": 50, \"unit\": \"C\"},\n \"humidity\": {\"min\": 20, \"max\": 90, \"unit\": \"%\"},\n \"pressure\": {\"min\": 950, \"max\": 1050, \"unit\": \"hPa\"},\n \"co2\": {\"min\": 400, \"max\": 2000, \"unit\": \"ppm\"}\n },\n \"alertConfig\": {\n \"criticalChannel\": \"#iot-critical\",\n \"warningChannel\": \"#iot-alerts\",\n \"emailRecipients\": \"ops@example.com\"\n }\n}", "options": {} }, "id": "308705a8-edc7-4435-9250-487aa528e033", "name": "Define Sensor Thresholds", "type": "n8n-nodes-base.set", "typeVersion": 3.4, "position": [ -1312, 184 ] }, { "parameters": { "jsCode": "const items = $input.all();\nconst thresholds = $('Define Sensor Thresholds').first().json.thresholds;\nconst results = [];\n\nfor (const item of items) {\n let sensorData;\n try {\n sensorData = typeof item.json.message === 'string' \n ? JSON.parse(item.json.message) \n : item.json;\n } catch (e) {\n sensorData = item.json;\n }\n \n const now = new Date();\n const reading = {\n sensorId: sensorData.sensorId || sensorData.topic?.split('/')[1] || 'unknown',\n location: sensorData.location || 'Main Facility',\n timestamp: now.toISOString(),\n readings: {\n temperature: sensorData.temperature ?? null,\n humidity: sensorData.humidity ?? null,\n pressure: sensorData.pressure ?? null,\n co2: sensorData.co2 ?? 
null\n },\n metadata: {\n receivedAt: now.toISOString(),\n source: item.json.topic || 'batch',\n thresholds: thresholds\n }\n };\n \n results.push({ json: reading });\n}\n\nreturn results;" }, "id": "a2008189-5ace-418b-b0db-d51d63dcf2d8", "name": "Parse Sensor Payload", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -1088, 184 ] }, { "parameters": { "type": "SHA256", "value": "={{ $json.sensorId + '-' + $json.timestamp + '-' + JSON.stringify($json.readings) }}", "dataPropertyName": "dataHash" }, "id": "bf8db555-a10e-4468-a44a-cdc4c97e5b80", "name": "Generate Data Fingerprint", "type": "n8n-nodes-base.crypto", "typeVersion": 1, "position": [ -864, 184 ] }, { "parameters": { "compare": "selectedFields", "fieldsToCompare": "dataHash", "options": {} }, "id": "a45405e2-d211-449d-84d7-4538eaf56fcd", "name": "Remove Duplicate Readings", "type": "n8n-nodes-base.removeDuplicates", "typeVersion": 1, "position": [ -640, 184 ] }, { "parameters": { "text": "=Analyze this IoT sensor reading and determine if there are any anomalies:\n\nSensor ID: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTimestamp: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C (Normal: {{ $json.metadata.thresholds.temperature.min }} to {{ $json.metadata.thresholds.temperature.max }})\n- Humidity: {{ $json.readings.humidity }}% (Normal: {{ $json.metadata.thresholds.humidity.min }} to {{ $json.metadata.thresholds.humidity.max }})\n- CO2: {{ $json.readings.co2 }} ppm (Normal: {{ $json.metadata.thresholds.co2.min }} to {{ $json.metadata.thresholds.co2.max }})\n\nProvide your analysis in this exact JSON format:\n{\n \"hasAnomaly\": true/false,\n \"severity\": \"critical\"/\"warning\"/\"normal\",\n \"anomalies\": [\"list of detected issues\"],\n \"reasoning\": \"explanation of your analysis\",\n \"recommendation\": \"suggested action\"\n}", "options": { "systemMessage": "You are an IoT monitoring expert. Analyze sensor data and detect anomalies based on the provided thresholds. Be precise and provide actionable recommendations. Always respond in valid JSON format." } }, "id": "b60194ba-7b99-44e0-b0d7-9f1632dce4d4", "name": "AI Anomaly Detector", "type": "@n8n/n8n-nodes-langchain.agent", "typeVersion": 1.7, "position": [ -416, 184 ] }, { "parameters": { "jsCode": "const item = $input.first();\nconst originalData = $('Remove Duplicate Readings').first().json;\n\nlet aiAnalysis;\ntry {\n const responseText = item.json.output || item.json.text || '';\n const jsonMatch = responseText.match(/\\{[\\s\\S]*\\}/);\n aiAnalysis = jsonMatch ? 
JSON.parse(jsonMatch[0]) : {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Unable to parse AI response',\n recommendation: 'Manual review required'\n };\n} catch (e) {\n aiAnalysis = {\n hasAnomaly: false,\n severity: 'normal',\n anomalies: [],\n reasoning: 'Parse error: ' + e.message,\n recommendation: 'Manual review required'\n };\n}\n\nreturn [{\n json: {\n ...originalData,\n analysis: aiAnalysis,\n alertLevel: aiAnalysis.severity,\n requiresAlert: aiAnalysis.hasAnomaly && aiAnalysis.severity !== 'normal'\n }\n}];" }, "id": "a145a8c7-538c-411a-95c6-9485acdcb969", "name": "Parse AI Analysis", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ -64, 184 ] }, { "parameters": { "rules": { "values": [ { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "critical", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "critical" } ] }, "renameOutput": true, "outputKey": "Critical" }, { "conditions": { "options": { "caseSensitive": true, "typeValidation": "strict" }, "combinator": "and", "conditions": [ { "id": "warning", "operator": { "type": "string", "operation": "equals" }, "leftValue": "={{ $json.alertLevel }}", "rightValue": "warning" } ] }, "renameOutput": true, "outputKey": "Warning" } ] }, "options": { "fallbackOutput": "extra" } }, "id": "1ab9785d-9f7f-4840-b1e9-0afc62b00b12", "name": "Route by Severity", "type": "n8n-nodes-base.switch", "typeVersion": 3.2, "position": [ 160, 168 ] }, { "parameters": { "sendTo": "={{ $('Define Sensor Thresholds').first().json.alertConfig.emailRecipients }}", "subject": "=CRITICAL IoT Alert: {{ $json.sensorId }} - {{ $json.analysis.anomalies[0] || 'Anomaly Detected' }}", "message": "=CRITICAL IoT SENSOR ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\nTime: {{ $json.timestamp }}\n\nReadings:\n- Temperature: {{ $json.readings.temperature }}°C\n- Humidity: {{ $json.readings.humidity }}%\n- CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis:\n{{ $json.analysis.reasoning }}\n\nDetected Issues:\n{{ $json.analysis.anomalies.join('\\n- ') }}\n\nRecommendation:\n{{ $json.analysis.recommendation }}", "options": {} }, "id": "28201a6c-10b5-4387-be89-10a57c634622", "name": "Send Critical Email", "type": "n8n-nodes-base.gmail", "typeVersion": 2.1, "position": [ 384, -80 ], "webhookId": "35b9f8fa-4a50-456e-b552-9fd20a25ccc5" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-critical" }, "text": "=🚨 CRITICAL IoT ALERT\n\nSensor: {{ $json.sensorId }}\nLocation: {{ $json.location }}\n\nReadings:\n• Temperature: {{ $json.readings.temperature }}°C\n• Humidity: {{ $json.readings.humidity }}%\n• CO2: {{ $json.readings.co2 }} ppm\n\nAI Analysis: {{ $json.analysis.reasoning }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": "c5a297be-ccef-40ba-9178-65805262efba", "name": "Slack Critical Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 112 ], "webhookId": "19113595-0208-4b37-b68c-c9788c19f618" }, { "parameters": { "select": "channel", "channelId": { "__rl": true, "mode": "name", "value": "#iot-alerts" }, "text": "=⚠️ IoT Warning\n\nSensor: {{ $json.sensorId }} | Location: {{ $json.location }}\nIssue: {{ $json.analysis.anomalies[0] || 'Threshold approaching' }}\nRecommendation: {{ $json.analysis.recommendation }}", "otherOptions": {} }, "id": 
"5c3d7acf-0211-44dd-9f4b-a43d3796abb1", "name": "Slack Warning Alert", "type": "n8n-nodes-base.slack", "typeVersion": 2.2, "position": [ 384, 400 ], "webhookId": "37abfb19-f82f-4449-bd69-a65635b99606" }, { "parameters": {}, "id": "6bcbb42f-ec14-4f00-a091-babcc2d2d5c4", "name": "Merge Alert Outputs", "type": "n8n-nodes-base.merge", "typeVersion": 3, "position": [ 608, 184 ] }, { "parameters": { "operation": "append", "documentId": { "__rl": true, "mode": "list", "value": "" }, "sheetName": { "__rl": true, "mode": "list", "value": "" } }, "id": "6243aa23-408d-4928-a512-811eeb3b5f9e", "name": "Archive to Google Sheets", "type": "n8n-nodes-base.googleSheets", "typeVersion": 4.5, "position": [ 832, 184 ] }, { "parameters": { "model": "gpt-4o-mini", "options": { "temperature": 0.3 } }, "id": "61081e8a-ebc9-465f-8beb-88af225e59f2", "name": "OpenAI Chat Model", "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi", "typeVersion": 1.2, "position": [ -344, 408 ] } ], "pinData": {}, "connections": { "MQTT Sensor Trigger": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 0 } ] ] }, "Batch Process Schedule": { "main": [ [ { "node": "Merge Triggers", "type": "main", "index": 1 } ] ] }, "Merge Triggers": { "main": [ [ { "node": "Define Sensor Thresholds", "type": "main", "index": 0 } ] ] }, "Define Sensor Thresholds": { "main": [ [ { "node": "Parse Sensor Payload", "type": "main", "index": 0 } ] ] }, "Parse Sensor Payload": { "main": [ [ { "node": "Generate Data Fingerprint", "type": "main", "index": 0 } ] ] }, "Generate Data Fingerprint": { "main": [ [ { "node": "Remove Duplicate Readings", "type": "main", "index": 0 } ] ] }, "Remove Duplicate Readings": { "main": [ [ { "node": "AI Anomaly Detector", "type": "main", "index": 0 } ] ] }, "AI Anomaly Detector": { "main": [ [ { "node": "Parse AI Analysis", "type": "main", "index": 0 } ] ] }, "Parse AI Analysis": { "main": [ [ { "node": "Route by Severity", "type": "main", "index": 0 } ] ] }, "Route by Severity": { "main": [ [ { "node": "Send Critical Email", "type": "main", "index": 0 }, { "node": "Slack Critical Alert", "type": "main", "index": 0 } ], [ { "node": "Slack Warning Alert", "type": "main", "index": 0 } ], [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Send Critical Email": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Critical Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Slack Warning Alert": { "main": [ [ { "node": "Merge Alert Outputs", "type": "main", "index": 0 } ] ] }, "Merge Alert Outputs": { "main": [ [ { "node": "Archive to Google Sheets", "type": "main", "index": 0 } ] ] }, "OpenAI Chat Model": { "ai_languageModel": [ [ { "node": "AI Anomaly Detector", "type": "ai_languageModel", "index": 0 } ] ] } }, "active": false, "settings": { "executionOrder": "v1" }, "versionId": "", "meta": { "instanceId": "15d6057a37b8367f33882dd60593ee5f6cc0c59310ff1dc66b626d726083b48d" }, "tags": [] }
by Habeeb Mohammed
Who's it for This workflow is perfect for individuals who want to maintain detailed financial records without the overhead of complex budgeting apps. If you prefer natural language over data entry forms and want an AI assistant to handle the bookkeeping, this template is for you. It's especially useful for: People who want to track cash and online transactions separately Anyone who lends money to friends/family and needs debt tracking Users comfortable with Slack as their primary interface Those who prefer conversational interactions over manual spreadsheet updates What it does This AI-powered finance tracker transforms your Slack workspace into a personal finance command center. Simply mention your bot with transactions in plain English (e.g., "₹500 cash food, borrowed ₹1000 from John"), and the AI agent will: Parse transactions using natural language understanding via Google Gemini Calculate balance changes for cash and online accounts Show a preview of changes before saving anything Update Google Sheets only after you approve Track debts (who owes you, who you owe, repayments) Send daily reminders at 11 PM with current balances and active debts The workflow maintains conversational context using PostgreSQL memory, so you can say things like "yesterday's transactions" or "that payment to Sarah" and it understands the context. How it works Scheduled Daily Check-in (11 PM) Fetches current balances from Google Sheets Retrieves all active debts Formats and sends a Slack message with balance summary Prompts you to share the day's transactions AI Agent Transaction Processing When you mention the bot in Slack: Phase 1: Parse & Analyze Extracts amount, payment type (cash/online), category (food, travel, etc.) Identifies transaction type (expense, income, borrowed, lent, repaid) Stores conversation context in PostgreSQL memory Phase 2: Calculate & Preview Reads current balances from Google Sheets Calculates new balances based on transactions Shows formatted preview with projected changes Waits for your approval ("yes"/"no") Phase 3: Update Database (only after approval) Logs transactions with unique IDs and timestamps Updates debt records with person names and status Recalculates and stores new balances Handles debt lifecycle (Active → Settled) Phase 4: Confirmation Sends success message with updated balances Shows active debts summary Includes logging timestamp Requirements Essential Services: n8n instance (self-hosted or cloud) Slack workspace with admin access Google account Google Gemini API key PostgreSQL database Recommended: Claude AI model (mentioned in workflow notes as better alternative to Gemini) How to set up 1. Google Sheets Setup Create a new Google Sheet with three tabs named exactly: Balances Tab: | Date | Cash_Balance | Online_Balance | Total_Balance | |------|--------------|----------------|---------------| Transactions Tab: | Transaction_ID | Date | Time | Amount | Payment_Type | Category | Transaction_Type | Person_Name | Description | Added_At | |----------------|------|------|--------|--------------|----------|------------------|-------------|-------------|----------| Debts Tab: | Person_Name | Amount | Type | Date_created | Status | Notes | |-------------|--------|------|--------------|--------|-------| Add header rows and one initial balance row in the Balances tab with today's date and starting amounts. 2. 
Slack App Setup Go to api.slack.com/apps and create a new app Under OAuth & Permissions, add these Bot Token Scopes: app_mentions:read chat:write channels:read Install the app to your workspace Copy the Bot User OAuth Token Create a dedicated channel (e.g., #personal-finance-tracker) Invite your bot to the channel 3. Google Gemini API Visit ai.google.dev Create an API key Save it for n8n credentials setup 4. PostgreSQL Database Set up a PostgreSQL database (you can use Supabase free tier): Create a new project Note down connection details (host, port, database name, user, password) The workflow will auto-create the required table 5. n8n Workflow Configuration Import the workflow and configure: A. Credentials Google Sheets OAuth2**: Connect your Google account Slack API**: Add your Bot User OAuth Token Google Gemini API**: Add your API key PostgreSQL**: Add database connection details B. Update Node Parameters All Google Sheets nodes: Select your finance spreadsheet Slack nodes: Select your finance channel Schedule Trigger: Adjust time if you prefer a different check-in hour (default: 11 PM) Postgres Chat Memory: Change sessionKey to something unique (e.g., finance_tracker_your_name) Keep tableName as n8n_chat_history_finance or rename consistently C. Slack Trigger Setup Activate the "Bot Mention trigger" node Copy the webhook URL from n8n In Slack App settings, go to Event Subscriptions Enable events and paste the webhook URL Subscribe to bot event: app_mention Save changes 6. Test the Workflow Activate both workflow branches (scheduled and agent) In your Slack channel, mention the bot: @YourBot ₹100 cash snacks Bot should respond with a preview Reply "yes" to approve Verify Google Sheets are updated How to customize Change Transaction Categories Edit the AI Agent's system message to add/remove categories. Current categories: travel, food, entertainment, utilities, shopping, health, education, other Modify Daily Check-in Time Change the Schedule Trigger's triggerAtHour value (0-23 in 24-hour format). Add Currency Support Replace ₹ with your currency symbol in: Format Daily Message code node AI Agent system prompt examples Switch AI Models The workflow uses Google Gemini, but notes recommend Claude. To switch: Replace "Google Gemini Chat Model" node Add Claude credentials Connect to AI Agent node Customize Debt Types Modify AI Agent's system prompt to change debt handling logic: Currently: I_Owe and They_Owe_Me You can add more types or change naming Add More Payment Methods Current: cash, online To add more (e.g., credit card): Update AI Agent prompt Modify Balances sheet structure Update balance calculation logic Change Approval Keywords Edit AI Agent's Phase 2 approval logic to recognize different approval phrases. Add Spending Analytics Extend the daily check-in to calculate: Weekly/monthly spending summaries Category-wise breakdowns Use additional Code nodes to process transaction history Important Notes ⚠️ Never trigger with normal messages - Only use app mentions (@botname) to avoid infinite loops where the bot replies to its own messages. 💡 Context Awareness - The bot remembers conversation history, so you can reference "yesterday", "last week", or previous transactions naturally. 🔒 Data Privacy - All your financial data stays in your Google Sheets and PostgreSQL database. The AI only processes transaction text temporarily. 📊 Backup Regularly - Export your Google Sheets periodically as backup. 
Pro Tips:
Start with small test transactions to ensure everything works
Use consistent person names for debt tracking
The bot understands various formats: "₹500 cash food" = "paid 500 rupees in cash for food"
You can batch transactions in one message: "₹100 travel, ₹200 food, ₹50 snacks"
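To make the target data shape concrete, here is a rough sketch of how a batched message could map onto the Transactions tab columns from the setup section. In the workflow the AI agent does this parsing from natural language; the naive splitting and the Transaction_ID format below are assumptions for illustration only.

```javascript
// Sketch of the row shape expected by the Transactions tab (columns from
// the Google Sheets setup above). Naive parsing for illustration only.
function toTransactionRows(message, paymentType = "cash") {
  const now = new Date();
  return message.split(",").map((part, i) => {
    const amount = Number((part.match(/\d+/) || [0])[0]);
    const category = part.replace(/[₹\d]/g, "").trim() || "other";
    return {
      Transaction_ID: `TXN-${now.getTime()}-${i}`, // hypothetical ID format
      Date: now.toISOString().slice(0, 10),
      Time: now.toTimeString().slice(0, 8),
      Amount: amount,
      Payment_Type: paymentType,
      Category: category,
      Transaction_Type: "expense",
      Person_Name: "",
      Description: part.trim(),
      Added_At: now.toISOString()
    };
  });
}

console.log(toTransactionRows("₹100 travel, ₹200 food, ₹50 snacks"));
```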
by Akshay
Overview This project is an AI-powered hotel receptionist built using n8n, designed to handle guest queries automatically through WhatsApp. It integrates Google Gemini, Redis, MySQL, and Google Sheets via LangChain to create an intelligent conversational system that understands and answers booking-related questions in real time. A standout feature of this workflow is its AI model-switching system — it dynamically assigns users to different Gemini models, balancing traffic, improving performance, and reducing API costs. How It Works WhatsApp Trigger The workflow starts when a hotel guest sends a message through WhatsApp. The system captures the message text, contact details, and session information for further processing. Redis-Based Model Management The workflow checks Redis for a saved record of the user’s previously assigned AI model. If no record exists, a Model Decider node assigns a new model (e.g., Gemini 1 or Gemini 2). Redis then stores this model assignment for an hour, ensuring consistent routing and controlled traffic distribution. Model Selector The Model Selector routes each user’s request to the correct Gemini instance, enabling parallel execution across multiple AI models for faster response times and cost optimization. AI Agent Logic The LangChain AI Agent serves as the system’s reasoning core. It: Interprets guest questions such as: “Who checked in today?” “Show me tomorrow’s bookings.” “What’s the price for a deluxe suite for two nights?” Generates safe, read-only SQL SELECT queries. Fetches the requested data from the MySQL database. Combines this with dynamic pricing or promotions from Google Sheets, if available. Response Delivery Once the AI Agent formulates an answer, it sends a natural-sounding message back to the guest via WhatsApp, completing the interaction loop. Setup & Requirements Prerequisites Before deploying this workflow, ensure the following: n8n Instance** (local or hosted) WhatsApp Cloud API** with messaging permissions Google Gemini API Key** (for both models) Redis Database** for user session and model routing MySQL Database** for hotel booking and guest data Google Sheets Account** (optional, for pricing or offer data) Step-by-Step Setup Configure Credentials Add all API credentials in n8n → Settings → Credentials (WhatsApp, Redis, MySQL, Google). Prepare Databases MySQL Tables Example: bookings(id, guest_name, room_type, check_in, check_out) rooms(id, type, rate, status) Ensure the MySQL user has read-only permissions. Set Up Redis Create Redis keys for each user: llm-user:<whatsapp_id> = { "modelIndex": 0 } TTL: 3600 seconds (1 hour). Connect Google Sheets (Optional) Add your sheet under Google Sheets OAuth2. Use it to manage room rates, discounts, or seasonal offers dynamically. WhatsApp Webhook Configuration In Meta’s Developer Console, set the webhook URL to your n8n instance. Select message updates to trigger the workflow. Testing the Workflow Send messages like “Who booked today?” or a voice message. Confirm responses include real data from MySQL and contextual replies. Key Features Text & voice support** for guest interactions Automatic AI model-switching** using Redis Session memory** for context-aware conversations Read-only SQL query generation** for database safety Google Sheets integration** for live pricing and availability Scalable design** supporting multiple LLM instances Example Guest Queries | Guest Query | AI Response Example | |--------------|--------------------| | “Who checked in today?” | “Two guests have checked in today: Mr. 
Ahmed (Room 203) and Ms. Priya (Room 410).” | | “How much is a deluxe room for two nights?” | “A deluxe room costs $120 per night. The total for two nights is $240.” | | “Do you have any discounts this week?” | “Yes! We’re offering a 10% weekend discount on all deluxe and suite rooms.” | | “Show me tomorrow’s check-outs.” | “Three check-outs are scheduled tomorrow: Mr. Khan (101), Ms. Lee (207), and Mr. Singh (309).” | Customization Options 🧩 Model Assignment Logic You can modify the Model Decider node to: Assign models based on user load, region, or priority level. Increase or decrease TTL in Redis for longer model persistence. 🧠 AI Agent Prompt Adjust the system prompt to control tone and response behavior — for example: Add multilingual support. Include upselling or booking confirmation messages. 🗂️ Database Expansion Extend MySQL to include: Staff schedules Maintenance records Restaurant reservations Then link new queries in the AI Agent node for richer responses. Tech Stack n8n** – Workflow automation & orchestration Google Gemini (PaLM)** – LLM for reasoning & generation Redis** – Model assignment & session management MySQL** – Booking & guest data storage Google Sheets** – Dynamic pricing reference WhatsApp Cloud API** – Messaging interface Outcome This workflow demonstrates how AI automation can transform hotel operations by combining WhatsApp communication, database intelligence, and multi-model AI reasoning. It’s a production-ready foundation for scalable, cost-optimized, AI-driven hospitality solutions that deliver fast, accurate, and personalized guest interactions.
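For illustration, a minimal sketch of the Model Decider logic follows. The Redis key format and one-hour TTL come from the setup steps above; the hash-based assignment is an assumption, since the node could equally use round-robin, load, region, or priority.

```javascript
// Sketch of the Model Decider: pick a Gemini instance per user and keep
// the assignment stable until the Redis key expires.
const MODEL_COUNT = 2; // e.g. Gemini 1 and Gemini 2

function decideModel(whatsappId) {
  let hash = 0;
  for (const ch of whatsappId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % MODEL_COUNT;
}

const userId = "15551234567"; // hypothetical WhatsApp ID
const assignment = {
  key: `llm-user:${userId}`,                               // key format from setup
  value: JSON.stringify({ modelIndex: decideModel(userId) }),
  ttlSeconds: 3600                                          // 1 hour, as in setup
};
console.log(assignment); // store this via the Redis node in n8n
```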
by Manav Desai
WhatsApp RAG Chatbot with Supabase, Gemini 2.5 Flash, and OpenAI Embeddings This n8n template demonstrates how to build a WhatsApp-based AI chatbot that answers user questions using document retrieval (RAG) powered by Supabase for storage, OpenAI embeddings for semantic search, and Gemini 2.5 Flash LLM for generating high-quality responses. Use cases are many: Turn your WhatsApp into a knowledge assistant for FAQs, customer support, or internal company documents — all without coding. Good to know The workflow uses OpenAI embeddings for both document embeddings and query embeddings, ensuring accurate semantic search. Gemini 2.5 Flash LLM** is used to generate user-friendly answers from the retrieved context. Messages are processed in real-time and sent back directly to WhatsApp. Workflow is modular — you can split document ingestion and query handling for large-scale setups. Supabase and WhatsApp API credentials must be configured before running. How it works Trigger: A new WhatsApp message triggers the workflow via webhook. Message Check: Determines if the message is a query or a document upload. Document Handling: Fetch file URL from WhatsApp. Convert binary to text. Generate embeddings with OpenAI and store them in Supabase. Query Handling: Generate query embeddings with OpenAI. Retrieve relevant context from Supabase. Pass context to Gemini 2.5 Flash LLM to compose a response. Response: Send the answer back to the user on WhatsApp. Optional: Add Gmail node to forward chat logs or daily summaries. How to use Configure WhatsApp Business API webhook for incoming messages. Add your Supabase and OpenAI credentials in n8n’s credentials manager. Upload documents via WhatsApp to populate the Supabase vector store. Ask queries — the bot retrieves context and answers using Gemini 2.5 Flash. Requirements WhatsApp Business API** (or Twilio WhatsApp Sandbox) Supabase account** (vector storage for embeddings) OpenAI API key** (for generating embeddings) Gemini API access** (for LLM responses) Customising this workflow Swap WhatsApp with Telegram, Slack, or email for different chat channels. Extend ingestion to other sources like Google Drive or Notion. Adjust the number of retrieved documents or prompt style in Gemini for tone control. Add a Gmail output node to send logs or alerts automatically.
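A rough sketch of the "Message Check" branch is shown below, assuming the standard WhatsApp Cloud API webhook payload (the field paths differ if you use the Twilio sandbox):

```javascript
// Sketch of routing an incoming WhatsApp webhook into the document-upload
// branch or the query branch. Field paths assume the WhatsApp Cloud API.
function classifyIncoming(body) {
  const msg = body?.entry?.[0]?.changes?.[0]?.value?.messages?.[0];
  if (!msg) return { route: "ignore" };

  if (msg.type === "document") {
    // Document upload → ingestion branch (embed and store in Supabase)
    return { route: "ingest", mediaId: msg.document.id, filename: msg.document.filename };
  }
  if (msg.type === "text") {
    // Plain text → query branch (embed, retrieve context, answer with Gemini)
    return { route: "query", question: msg.text.body, from: msg.from };
  }
  return { route: "unsupported", type: msg.type };
}
```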
by franck fambou
Overview This intelligent chatbot workflow enables natural language conversations with your documents, supporting multiple file formats including PDFs, Word documents, Excel spreadsheets, and text files. Built with advanced RAG (Retrieval-Augmented Generation) technology, this chatbot can understand, analyze, and answer questions about your document content with contextual accuracy and intelligent responses. How It Works Intelligent Document Processing & Conversation Pipeline: Multi-Format Document Ingestion**: Automatically processes and indexes various document formats (PDF, DOCX, XLSX, TXT, etc.) Smart Content Chunking**: Breaks down documents into meaningful segments while preserving context and relationships Vector Database Storage**: Creates searchable embeddings for fast and accurate information retrieval Contextual Conversation Engine**: Uses AI to understand user queries and retrieve relevant document sections Natural Language Responses**: Generates human-like responses with citations and source references Multi-Turn Conversations**: Maintains conversation history and context across multiple interactions Real-Time Processing**: Instant responses with live document updates and dynamic content refresh Setup Instructions Estimated Setup Time: 15-20 minutes Prerequisites n8n instance (v0.200.0 or higher recommended) OpenAI/Gemini API key for embeddings and chat completion Vector database service (optional: Pinecone, Weaviate, or Qdrant) File storage service (optional: Google Drive, Dropbox, AWS S3) Web server for chatbot interface (optional) Configuration Steps Configure Document Input Sources Set up file upload webhook for direct document submission Configure cloud storage watchers for automatic document processing Add support for multiple file formats and size limits Set up document validation and security checks Setup Document Processing Pipeline Configure text extraction engines for different file types Set up intelligent chunking parameters (chunk size, overlap, boundaries) Add metadata extraction for document categorization Configure OCR for scanned documents (optional) Configure Vector Database Set up your chosen vector database credentials Configure embedding model settings (Gemini models/text-embedding-004 recommended) Set up collection/index structure for document storage Configure search parameters and similarity thresholds Setup AI Chat Engine Add your AI service API credentials (Gemini, Claude, etc.) 
Configure conversation prompts and system instructions Set up context window management and token optimization Add response formatting and citation rules Configure Chat Interface Set up webhook endpoints for chat API Configure session management and conversation history Add authentication and rate limiting (optional) Set up real-time updates and streaming responses Setup Monitoring & Analytics Configure conversation logging and analytics Set up performance monitoring for response times Add usage tracking and cost monitoring Configure error handling and failover mechanisms Use Cases Business & Enterprise Knowledge Base Queries**: Ask questions about company policies, procedures, and documentation Contract Analysis**: Query legal documents, contracts, and compliance materials Training Materials**: Interactive learning with training manuals and educational content Financial Reports**: Analyze and discuss financial statements, budgets, and forecasts Research & Academia Research Paper Analysis**: Discuss findings, methodologies, and citations from academic papers Literature Reviews**: Compare and contrast multiple research documents Thesis Support**: Get insights from reference materials and research data Grant Proposals**: Analyze requirements and optimize proposal content Legal & Compliance Legal Document Review**: Query contracts, agreements, and legal texts Regulatory Compliance**: Understand compliance requirements from regulatory documents Case Law Research**: Analyze legal precedents and court decisions Policy Analysis**: Interpret organizational policies and procedures Technical Documentation API Documentation**: Interactive queries about technical specifications User Manuals**: Get help and guidance from product documentation Code Documentation**: Understand codebases and technical implementations Troubleshooting Guides**: Interactive problem-solving with technical guides Personal Productivity Document Summarization**: Get quick summaries of long documents Information Extraction**: Find specific data points across multiple documents Content Research**: Research topics across your personal document library Meeting Notes**: Query and analyze meeting transcripts and notes Key Features Advanced Document Processing Multi-Format Support**: PDF, DOCX, XLSX, TXT, PPTX, and more Intelligent Chunking**: Context-aware document segmentation Metadata Extraction**: Automatic categorization and tagging OCR Integration**: Process scanned documents and images with text Intelligent Conversation Contextual Understanding**: Maintains conversation context and document relationships Source Attribution**: Provides citations and references for all answers Multi-Document Queries**: Compare and analyze across multiple documents Follow-up Questions**: Natural conversation flow with clarifying questions Performance & Scalability Fast Retrieval**: Vector-based semantic search for instant responses Scalable Architecture**: Handle large document collections efficiently Batch Processing**: Process multiple documents simultaneously Caching System**: Optimized response times with intelligent caching Security & Privacy Document Encryption**: Secure storage and transmission of sensitive documents Access Control**: User-based permissions and document access restrictions Audit Logging**: Complete conversation and access audit trails Data Retention**: Configurable data retention and deletion policies Technical Architecture Document Processing Flow File Upload → Format Detection → Text Extraction → Content Chunking Metadata 
Extraction → Embedding Generation → Vector Storage → Index Creation Conversation Flow User Query → Intent Analysis → Vector Search → Context Retrieval Response Generation → Source Attribution → Answer Formatting → Delivery Supported File Formats Documents**: PDF, DOC, DOCX, RTF, TXT, MD Spreadsheets**: XLS, XLSX, CSV Presentations**: PPT, PPTX Images**: PNG, JPG (with OCR) Archives**: ZIP (auto-extracts supported formats) Web**: HTML, XML Integration Options Chat Interfaces Web Widget**: Embeddable chat widget for websites API Endpoints**: RESTful API for custom integrations Slack/Teams**: Direct integration with team collaboration tools Mobile Apps**: API-first design for mobile application integration Data Sources Cloud Storage**: Google Drive, Dropbox, OneDrive, AWS S3 Document Systems**: SharePoint, Confluence, Notion Email**: Process attachments from email systems CRM/ERP**: Integration with business systems Performance Specifications Response Time**: < 3 seconds for typical queries Document Capacity**: Supports collections of 10,000+ documents Concurrent Users**: Scales to handle multiple simultaneous conversations Accuracy**: >90% relevance for domain-specific queries Advanced Configuration Options Customization Custom Prompts**: Tailor AI behavior for specific use cases Branding**: Customize chat interface with your company branding Language Support**: Multi-language document processing and responses Domain Expertise**: Fine-tune for specific industries or domains Analytics & Monitoring Usage Analytics**: Track popular queries and document usage Performance Metrics**: Monitor response times and accuracy User Feedback**: Collect ratings and improve responses A/B Testing**: Test different configurations and prompts Troubleshooting & Support Common Issues Slow Responses**: Check vector database performance and API limits Inaccurate Answers**: Review chunking strategy and embedding quality Format Errors**: Verify document formats and processing capabilities Memory Issues**: Monitor token usage and context window limits Optimization Tips Use clear, specific questions for best results Ensure documents are well-formatted with proper headers Regular vector database maintenance for optimal performance Monitor API usage to optimize costs and performance
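To illustrate the chunk size and overlap parameters mentioned in the setup steps, here is a minimal fixed-size chunker. The template's actual splitter likely also respects sentence and paragraph boundaries; this sketch does not.

```javascript
// Minimal sketch of chunking with overlap, the two parameters called out
// in the document-processing setup above.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + chunkSize, text.length);
    chunks.push({ text: text.slice(start, end), start, end });
    if (end === text.length) break;
    start = end - overlap; // overlap keeps context across chunk borders
  }
  return chunks;
}

const sample = "A long document body goes here... ".repeat(100);
console.log(chunkText(sample, 500, 100).length, "chunks");
```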
by phil
AI-Powered SEO Keyword Research Workflow with n8n > automates comprehensive keyword research for content creation Table of Contents Introduction Workflow Architecture NocoDB Integration Data Flow Core Components Setup Requirements Possible Improvements Introduction This n8n workflow automates SEO keyword research using AI and data-driven analytics. It combines OpenAI's language models with DataForSEO's analytics to generate comprehensive keyword strategies for content creation. The workflow is triggered by a webhook from NocoDB, processes the input data through multiple stages, and returns a detailed content brief with optimized keywords. Workflow Architecture The workflow follows a structured process: Input Collection: Receives data via webhook from NocoDB Topic Expansion: Generates keywords using AI Keyword Metrics Analysis: Gathers search volume, CPC, and difficulty metrics Competitor Analysis: Analyzes competitor content for ranking keywords Final Strategy Creation: Combines all data to generate a comprehensive keyword strategy Output Storage: Saves results back to NocoDB and sends notifications NocoDB Integration Database Structure The workflow integrates with two tables in NocoDB: Input Table Schema This table collects the input parameters for the keyword research: | Field Name | Type | Description | | --------------- | ------------- | --------------------------------------------------------------------------- | | ID | Auto Number | Unique identifier | | Primary Topic | Text | The main keyword/topic to research | | Competitor URLs | Text | Comma-separated list of competitor websites | | Target Audience | Single Select | Description of the target audience (Solopreneurs, Marketing Managers, etc.) | | Content Type | Single Select | Type of content (Blog, Product page, etc.) | | Location | Single Select | Target geographic location | | Language | Single Select | Target language for keywords | | Status | Single Select | Workflow status (Pending, Started, Done) | | Start Research | Checkbox | Active Workflow when you set this to true | Output Table Schema This table stores the generated keyword strategy: | Field Name | Type | Description | | ------------------ | ----------- | ------------------------------------------------ | | ID | Auto Number | Unique identifier | | primary_topic_used | Text | The topic that was researched | | report_content | Long Text | The complete keyword strategy in Markdown format | | generatedAt | Datetime | Automatically generated by NocoDb | Webhook Settings NocoDB Webhook Settings Data Flow The workflow handles data in the following sequence: Webhook Trigger: Receives input from NocoDB when a new keyword research request is created Field Extraction: Extracts primary topic, competitor URLs, audience, and other parameters AI Topic Expansion: Uses OpenAI to generate related keywords, categorized by type and intent Keyword Analysis: Sends primary keywords to DataForSEO to get search volume, CPC, and difficulty Competitor Research: Analyzes competitor pages to identify their keyword rankings Strategy Generation: Combines all data to create a comprehensive keyword strategy Storage & Notification: Saves the strategy to NocoDB and sends a notification to Slack Core Components 1. Topic Expansion This component uses OpenAI and a structured output parser to generate: 20 primary keywords 30 long-tail keywords with search intent 15 question-based keywords 10 related topics 2. 
DataForSEO Integration Two API endpoints are used: Search Volume & CPC**: Gets monthly search volume and cost-per-click data Keyword Difficulty**: Evaluates how difficult it would be to rank for each keyword 3. Competitor Analysis This component: Analyzes competitor URLs to identify which keywords they rank for Identifies content gaps or opportunities Determines the search intent their content targets 4. Final Keyword Strategy The AI-generated strategy includes: Top 10 primary keywords with metrics 15 long-tail opportunities with low competition 5 question-based keywords to address in content Content structure recommendations 3 potential content titles optimized for SEO Setup Requirements To use this workflow, you'll need: n8n Instance: Either cloud or self-hosted NocoDB Account: For data input and storage API Keys: OpenAI API key DataForSEO API credentials Slack API token (for notifications) Database Setup: Create the required tables in NocoDB as described above Possible Improvements The workflow could be enhanced with the following improvements: Enhanced Keyword Strategy Add topic clustering to group related keywords Enhance the final output with more specific content structure suggestions Include word count recommendations for each content section Additional Data Sources Integrate Google Search Console data for existing content optimization Add Google Trends data to identify rising topics Include sentiment analysis for different keyword groups Improved Competitor Analysis Analyze content length and structure from top-ranking pages Identify common backlink sources for competitor content Extract content headings to better understand content organization Automation Enhancements Add scheduling capabilities to run updates on existing content Implement content performance tracking over time Create alert thresholds for changes in keyword difficulty or search volume Example Output Here is an example Output the Workflow generated based on the following inputs. Inputs: Primary Topic: AI Automation Competitor URLs: n8n.io, zapier.com, make.com Target Audience: Small Business Owners Content Type: Landing Page Location: United States Language: English Output: Final Keyword Strategy The workflow provides a powerful automation for content marketers and SEO specialists to develop data-driven keyword strategies with minimal manual effort. > Original Workflow: AI-Powered SEO Keyword Research Automation - The vibe Marketer
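For reference, here is an illustrative shape for the Topic Expansion output described above, plus a small helper that flattens it before the DataForSEO metric lookups. The key names are assumptions; match them to the workflow's Structured Output Parser definition.

```javascript
// Illustrative shape for the Topic Expansion output (20 primary,
// 30 long-tail with intent, 15 questions, 10 related topics).
const topicExpansionExample = {
  primary_keywords: ["ai automation"],              // up to 20
  long_tail_keywords: [
    { keyword: "ai automation for small business", intent: "commercial" }
  ],                                                // up to 30
  question_keywords: ["what is ai automation?"],    // up to 15
  related_topics: ["workflow automation"]           // up to 10
};

// Flatten primary + long-tail terms before sending them to the
// search-volume and keyword-difficulty endpoints.
const keywordsForMetrics = [
  ...topicExpansionExample.primary_keywords,
  ...topicExpansionExample.long_tail_keywords.map((k) => k.keyword)
];
console.log(keywordsForMetrics);
```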
by Oneclick AI Squad
In this guide, we’ll walk you through setting up a smart workflow that triggers on new restaurant orders, extracts and formats customer and dish details from Google Sheets, uses Gemini AI to recommend dishes or offers, and sends suggestions via Telegram. Ready to automate your order processing and enhance customer experience? Let’s dive in! What’s the Goal? Automatically trigger the workflow when a new order is placed. Extract and format customer information and order details from Google Sheets. Use Gemini AI to analyze orders and recommend dishes or offers. Send personalized suggestions to customers via Telegram. Enable real-time order processing and customer engagement. By the end, you’ll have a smart system that processes orders and suggests items effortlessly. Why Does It Matter? Manual order processing and suggestion generation are inefficient and miss opportunities. Here’s why this workflow is a game changer: Real-Time Efficiency**: Instantly process orders and suggest items. Personalized Engagement**: AI-driven suggestions enhance customer satisfaction. Time-Saving Automation**: Reduce manual effort in order management. Improved Sales**: Targeted recommendations can boost order value. Think of it as your intelligent assistant for orders and customer delight. How It Works Here’s the step-by-step magic behind the automation: Step 1: New Order Trigger Trigger the workflow when a new order is detected (e.g., via a form submission). Step 2: Extract & Format Order Extract and format dish ordering details from the customer order details sheet for further processing. Step 3: Save Customer Info Save customer information (e.g., ID, name, mobile number) from the customer details sheet. Step 4: Save Dish Info Save dish details (e.g., name, quantity, price) from the customer order details sheet. Step 5: Prepare Dish Details for AI Prepare the dish details for AI analysis to generate recommendations. Step 6: Clean Data for Input to Improve AI Understanding Clean and structure the data to enhance AI comprehension. Step 7: Use Gemini AI to Recommend Dishes or Offers Utilize Gemini AI (via Google Chat Model and Think Tool) to recommend dishes or offers based on order data. Step 8: Format AI Suggestions Format the AI-generated suggestions into a Telegram-friendly message. Step 9: Send Suggestions via Telegram Send the formatted suggestions directly to the customer via Telegram. How to Use the Workflow? Importing a workflow in n8n is a straightforward process that allows you to use pre-built workflows to save time. Below is a step-by-step guide to importing the Smart Restaurant Order & Suggestion System workflow in n8n. Steps to Import a Workflow in n8n Obtain the Workflow JSON Source the Workflow: Workflows are shared as JSON files or code snippets, e.g., from the n8n community, a colleague, or exported from another n8n instance. Format: Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or copied text. Access the n8n Workflow Editor Log in to n8n (via n8n Cloud or self-hosted instance). Navigate to the Workflows tab in the n8n dashboard. Click Add Workflow to create a blank workflow. Import the Workflow Option 1: Import via JSON Code (Clipboard): Click the three dots (⋯) in the top-right corner to open the menu. Select Import from Clipboard. Paste the JSON code into the text box. Click Import to load the workflow. Option 2: Import via JSON File: Click the three dots (⋯) in the top-right corner. Select Import from File. Choose the .json file from your computer. 
- Click Open to import.

Setup Notes
- **Google Sheet Columns**: Customer Details Sheet: Customer id, Customer name, Customer mobile number (e.g., CUST-JW4Z8Y, ajay, 9898989898; CUST-VEITPW, akash, 9898976898). Customer Order Details Sheet: Customer id, Dish name, Dish quantity, Per unit price, Actual price (e.g., CUST-JW4Z8Y, Tandoori Chicken, 1, 250, 250; CUST-VEITPW, Masala Dosa, 1, 150, 150).
- **Google Sheets Credentials**: Configure OAuth2 settings in the extract and save nodes with your Google Sheet ID and credentials.
- **Gemini AI**: Set up the Gemini AI node with Google Chat Model and Think Tool credentials.
- **Telegram Integration**: Authorize the Send Suggestions node with Telegram Bot API credentials and the customer’s chat ID (the Telegram Bot API delivers messages by chat ID, not by mobile number, so map stored mobile numbers to chat IDs if needed).
- **Trigger Setup**: Configure the New Order Trigger node to detect new orders (e.g., via form or webhook).
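For readers who want to see what Steps 7–9 amount to outside n8n, here is a minimal Python sketch of the Gemini recommendation call and the Telegram delivery. The model name, prompt wording, environment variable names, and sample order are illustrative assumptions rather than values taken from the workflow; in the template itself these calls are handled by the Gemini and Telegram nodes.

```python
# Minimal sketch of Steps 7-9 outside n8n: ask Gemini for a recommendation,
# then deliver it via the Telegram Bot API. Model name, env var names, and
# the prompt are illustrative assumptions, not taken from the workflow.
import os
import requests

GEMINI_KEY = os.environ["GEMINI_API_KEY"]          # assumed env var
TELEGRAM_TOKEN = os.environ["TELEGRAM_BOT_TOKEN"]  # assumed env var

order = {"customer": "ajay", "dishes": [{"name": "Tandoori Chicken", "qty": 1, "price": 250}]}

# Step 7: ask Gemini to suggest a complementary dish or offer
prompt = f"A customer ordered: {order['dishes']}. Suggest one complementary dish or offer in one short sentence."
gemini_resp = requests.post(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent",
    params={"key": GEMINI_KEY},
    json={"contents": [{"parts": [{"text": prompt}]}]},
    timeout=30,
)
suggestion = gemini_resp.json()["candidates"][0]["content"]["parts"][0]["text"]

# Steps 8-9: format a Telegram-friendly message and send it to the customer's chat
message = f"Hi {order['customer']}, thanks for your order! {suggestion}"
requests.post(
    f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/sendMessage",
    json={"chat_id": os.environ["CUSTOMER_CHAT_ID"], "text": message},  # chat ID is assumed known
    timeout=30,
)
```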
by HoangSP
SEO Blog Generator with GPT-4o, Perplexity, and Telegram Integration
This workflow helps you automatically generate SEO-optimized blog posts using Perplexity.ai, OpenAI GPT-4o, and optionally Telegram for interaction.

🚀 Features
- 🧠 Topic research via the Perplexity sub-workflow
- ✍️ AI-written blog post generated with GPT-4o
- 📊 Structured output with metadata: title, slug, meta description
- 📩 Telegram integration to trigger workflows or receive outputs (optional)

⚙️ Requirements
- ✅ OpenAI API key (GPT-4o or GPT-3.5)
- ✅ Perplexity API key (with access to /chat/completions)
- ✅ (Optional) Telegram bot token and webhook setup

🛠 Setup Instructions
- Credentials: Add your OpenAI credentials (openAiApi), add your Perplexity credentials under httpHeaderAuth, and optionally set up Telegram credentials under telegramApi.
- Inputs: Use the Form Trigger or Telegram input node to send a Research Query.
- Subworkflow: Import and activate the subworkflow Perplexity_Searcher to fetch recent search results.
- Customization: Edit the prompt texts inside the Blog Content Generator and Metadata Generator to change the writing style or target industry, and add or remove output nodes such as Google Sheets, Notion, etc.

📦 Output Format
The final blog post includes:
- ✅ Blog content (1,500–2,000 words)
- ✅ Metadata: title, slug, and meta description
- ✅ Extracted summary in JSON
- ✅ Delivery to Telegram (if connected)

Need help? Reach out on the n8n community forum.
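As a rough illustration of what the research and writing steps do under the hood, here is a minimal Python sketch that calls the Perplexity and OpenAI chat-completions APIs in sequence. Model names, prompts, and environment variable names are assumptions for illustration; in the template these calls are made by the Perplexity_Searcher subworkflow and the Blog Content Generator node.

```python
# Minimal sketch of the research + writing steps outside n8n:
# Perplexity for recent context, GPT-4o for the draft.
import os
import requests

query = "answer engine optimization trends 2025"  # the Research Query from the form or Telegram

# Topic research via Perplexity's OpenAI-compatible /chat/completions endpoint
research = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json={"model": "sonar",  # assumed model name
          "messages": [{"role": "user", "content": f"Summarize recent findings on: {query}"}]},
    timeout=60,
).json()["choices"][0]["message"]["content"]

# Blog draft via OpenAI GPT-4o, guided by the research summary
draft = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "Write an SEO-optimized blog post of 1500-2000 words with a title, slug, and meta description."},
            {"role": "user", "content": research},
        ],
    },
    timeout=120,
).json()["choices"][0]["message"]["content"]

print(draft)
```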
by David Ashby
🛠️ MailerLite Tool MCP Server
Complete MCP server exposing all MailerLite Tool operations to AI agents. Zero configuration needed - all 4 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every MailerLite Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n MailerLite Tool node with full error handling

📋 Available Operations (4 total)
Every possible MailerLite Tool operation is included:
🔧 Subscriber (4 operations)
• Create a subscriber
• Get a subscriber
• Get many subscribers
• Update a subscriber

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native MailerLite API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every MailerLite Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
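For the "direct HTTP calls to MCP endpoints" option, the sketch below shows what a raw request could look like, assuming the MCP trigger accepts standard MCP JSON-RPC messages over HTTP. The URL, tool name, and argument keys are placeholders, and depending on your n8n version the transport may require an SSE session and an initialize handshake first, so treat this as an assumption rather than a verified call; check the tools/list output for the real tool names and schemas.

```python
# Minimal sketch of talking to the MCP server with standard MCP JSON-RPC messages.
# The URL is a placeholder from the MCP trigger node; transport details (SSE vs.
# plain HTTP) are an assumption and may differ by n8n version.
import requests

MCP_URL = "https://your-n8n-instance/mcp/xxxxx"  # webhook URL copied from the MCP trigger node

def rpc(method: str, params: dict, req_id: int = 1) -> dict:
    resp = requests.post(
        MCP_URL,
        json={"jsonrpc": "2.0", "id": req_id, "method": method, "params": params},
        headers={"Accept": "application/json, text/event-stream"},
        timeout=30,
    )
    return resp.json()

# List the four pre-built MailerLite operations exposed as MCP tools
print(rpc("tools/list", {}))

# Call one of them, e.g. fetching a subscriber; the tool name and argument keys
# are illustrative placeholders, not confirmed names from this workflow
print(rpc("tools/call", {"name": "get_subscriber", "arguments": {"subscriberId": "12345"}}, req_id=2))
```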
by Dvir Sharon
🛒 Monitor Google Shopping Prices with Bright Data & Email Alerts
This template requires a self-hosted n8n instance to run. A comprehensive n8n automation that monitors product prices daily using Bright Data's Google Shopping dataset and sends smart email alerts when price conditions are met.

📋 Overview
This workflow provides an automated price monitoring solution that tracks product prices from Google Shopping daily and sends intelligent email notifications. Perfect for e-commerce monitoring, competitor analysis, deal hunting, and inventory management.

✨ Key Features
- 🕘 Scheduled Monitoring: Daily automated price checks at 9 AM
- 🛍️ Google Shopping Integration: Uses Bright Data's dataset for accurate pricing
- 📊 Smart Price Comparison: Compares current prices with historical data
- 📧 Intelligent Alerts: Sends emails only when prices meet criteria
- 📈 Data Storage: Updates Google Sheets with latest pricing data
- 🔄 Batch Processing: Handles multiple products with rate limiting
- ⚡ Fast & Reliable: Built-in error handling
- 🎯 Customizable Filters: Advanced price comparison logic

🎯 What This Workflow Does
1. Schedule Trigger: Runs daily at 9 AM
2. Data Retrieval: Fetches product list from Google Sheets
3. Price Extraction: Scrapes current prices using Bright Data
4. Data Update: Updates Google Sheets with new prices
5. Price Comparison: Compares new vs. old prices
6. Smart Filtering: Filters products that meet alert criteria
7. Email Notifications: Sends alerts for qualifying changes
8. Rate Limiting: Adds delay between emails

Output Data Points

| Field | Description | Example |
| :------------ | :------------------------- | :------------------------------- |
| Product URL | Original Google Shopping URL | https://shopping.google.com/product/... |
| Product Name | Product title | iPhone 15 Pro Max 256GB |
| Ratings | Product rating score | 4.5 |
| Reviews | Number of reviews | 1,247 |
| Old Price | Previous price | $1,199.00 |
| New Price | Current scraped price | $1,199.00 |
| Timestamp | When the check occurred | 2025-05-30T09:00:00Z |

🚀 Setup Instructions
Prerequisites
- n8n instance (self-hosted or cloud)
- Google account with Sheets access
- Bright Data account with Google Shopping dataset access
- Gmail account for notifications
Steps
1. Import the workflow JSON into n8n
2. Configure Bright Data credentials and dataset access
3. Set up Google Sheets with required columns
4. Configure Gmail OAuth2 credentials
5. Update sheet IDs and schedule settings
6. Test with sample products and activate

📖 Usage Guide
Google Sheet Structure
Your Google Sheet should have the following columns to ensure the workflow functions correctly:
- **Product URL** (Text): The direct URL to the Google Shopping product page. This is the primary identifier for the product.
- **Product Name** (Text): The name of the product. This will be automatically populated or updated by the workflow.
- **Old Price** (Number/Currency): The price of the product from the previous check. This column is crucial for price comparison.
- **New Price** (Number/Currency): The most recently scraped price of the product.
- **Ratings** (Number): The star rating of the product.
- **Reviews** (Number): The total number of reviews for the product.
- **Timestamp** (Datetime): The date and time when the price check was performed.

Adding Products
Add Google Shopping URLs to your Google Sheet. The workflow will fetch product details and track prices. Historical price data builds over time.

Understanding Price Alerts
The default setting for this workflow is to send an email alert when the new price equals the old price.
This might seem counterintuitive, but it's useful for specific scenarios, such as:
- **Monitoring stable pricing**: If you are tracking a product and want to be notified when its price has remained consistent over time, indicating a potential stable buying opportunity or a benchmark.
- **Verifying data consistency**: To confirm that the scraping process is working correctly and consistently retrieving the same price when no changes are expected.
You can easily customize the alert logic to trigger on different conditions as described below (a minimal comparison sketch appears at the end of this template).

Customizing Alert Logic
- **Price drops**: new_price < old_price
- **Significant drops**: new_price < (old_price * 0.9) (e.g., price dropped by more than 10%)
- **Price increases**: new_price > old_price
- **Any change**: new_price != old_price

Reading the Results
- Real-time pricing data
- Historical tracking
- Product metadata
- Timestamps for each check

🔧 Customization Options
- **Add More Data**: Descriptions, availability, seller info, shipping, images
- **Modify Email Templates**: Customize subject and body
- **Multiple Recipients**: Duplicate the email node and change recipients
- **Webhook Integration**: Add real-time triggers or Slack alerts

🚨 Troubleshooting
- **Bright Data connection failed**: Check API credentials and dataset access
- **No price data extracted**: Verify URLs and test with different products
- **Google Sheets permission denied**: Re-authenticate and check sharing
- **Emails not sending**: Re-auth Gmail OAuth and verify recipients
- **Filter not working**: Check price formats and logic
- **Workflow failed**: Check logs, retry logic, and network status

📊 Use Cases & Examples
- **E-commerce Monitoring**: Track competitor pricing and trends
- **Deal Hunting**: Get alerts for price drops on wishlist items
- **Inventory Management**: Monitor supplier pricing for procurement
- **Market Research**: Analyze pricing trends and generate reports

⚙️ Advanced Configuration
- **Batch Processing**: Increase batch size, add delays, use parallel processing
- **Price History**: Store historical data, calculate averages, forecast trends
- **Tool Integration**: CRM, Slack, databases, BI tools (Tableau, Power BI)

📈 Performance & Limits
- **Single URL**: 2–5 seconds
- **Concurrent Requests**: 3–5 (depends on Bright Data plan)
- **Data Accuracy**: 95%+
- **Success Rate**: 90%+
- **Daily Capacity**: 100–500 products
- **Memory**: ~100 MB per execution
- **API Calls**: 1 Bright Data + 2 Google Sheets per product

🤝 Support & Community
- **n8n Forum**: <https://community.n8n.io>
- **Documentation**: <https://docs.n8n.io>
- **Bright Data Support**: Via your Bright Data dashboard
- **GitHub Issues**: Report bugs and request features

🎯 Ready to Use!
Your workflow provides a solid foundation for automated price monitoring. Customize it to fit your specific needs and use cases for maximum effectiveness in tracking Google Shopping prices with intelligent email notifications.
Please note that this template uses Community Nodes. Ensure you understand the risks before using community nodes.
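To make the alert conditions above concrete, here is a minimal Python sketch of the comparison logic the filter step applies, covering the default equal-price behavior and the alternative conditions listed under Customizing Alert Logic. Column names mirror the Google Sheet structure; the function and mode names are illustrative, not node names from the workflow.

```python
# Minimal sketch of the price-comparison and alert-filter logic.
# "equal" mirrors the template's default behavior; the other modes match the
# alternative conditions listed under Customizing Alert Logic.
def should_alert(old_price: float, new_price: float, mode: str = "equal") -> bool:
    if mode == "equal":        # default: alert when the price is unchanged
        return new_price == old_price
    if mode == "drop":         # any price drop
        return new_price < old_price
    if mode == "big_drop":     # dropped by more than 10%
        return new_price < old_price * 0.9
    if mode == "increase":     # any price increase
        return new_price > old_price
    if mode == "any_change":   # any difference in either direction
        return new_price != old_price
    raise ValueError(f"unknown mode: {mode}")

# Example row shaped like the Google Sheet columns
row = {"Product Name": "iPhone 15 Pro Max 256GB", "Old Price": 1199.00, "New Price": 1199.00}
if should_alert(row["Old Price"], row["New Price"]):
    print(f"ALERT: {row['Product Name']} still at ${row['New Price']:.2f}")
```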