by rangelstoilov
This workflow sends your GitHub notifications to a Discord webhook. Since GitHub doesn't send push notifications to mobile devices for anything other than @mentions, this is a great workaround for receiving notifications on Discord. Using a GitHub trigger was not a good option, as there is no trigger for notifications, only for events (which don't work on org repos). Making an HTTP request to the notifications API works much better.

**Tagging a user in the message:** Replace `**` with your Discord ID to get tagged when notifications are sent. To find your own ID, type a backslash in any channel followed by your username and its 4-digit hash code (you can copy this by clicking on your username next to your profile picture). Example: `\@username#9999`

Enjoy!
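Outside of n8n, the core of the setup looks roughly like the sketch below: poll the GitHub notifications API and forward each item to the Discord webhook with a user mention. The environment variable names and the message format are assumptions for illustration, not part of the workflow itself.

```javascript
// Minimal sketch: poll GitHub notifications and forward them to Discord,
// mentioning a user so they receive a mobile push notification.
const GITHUB_TOKEN = process.env.GITHUB_TOKEN;            // personal access token
const DISCORD_WEBHOOK_URL = process.env.DISCORD_WEBHOOK_URL;
const DISCORD_USER_ID = process.env.DISCORD_USER_ID;      // numeric ID from \@username#9999

async function forwardNotifications() {
  const res = await fetch('https://api.github.com/notifications', {
    headers: {
      Authorization: `Bearer ${GITHUB_TOKEN}`,
      Accept: 'application/vnd.github+json',
    },
  });
  const notifications = await res.json();

  for (const n of notifications) {
    // <@id> renders as a mention in Discord and triggers a push notification
    await fetch(DISCORD_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        content: `<@${DISCORD_USER_ID}> ${n.subject.type}: ${n.subject.title} (${n.repository.full_name})`,
      }),
    });
  }
}

forwardNotifications();
```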
by mariskarthick
Reduce human delays between malware detection and remediation in MSSP/SOC environments. This workflow automates a full endpoint antivirus scan immediately after a high-severity Wazuh endpoint infection alert, closing the gap between alerting and action.

**Why Use This Workflow?**
Malware alerts are only effective if acted upon swiftly. Manual follow-ups are slow and often missed, letting threats persist. This workflow automates detection, triage, scan initiation, and notification, all within one minute of alerting, and ensures consistent, auditable actions across endpoints running Linux or Windows.

🔑 **Key Features**
- Listens for high-severity Wazuh AV infection alerts (e.g., rule 52502).
- Uses GPT-4 for AI-powered alert summaries to speed up triage and decision making.
- Extracts exact infected file paths using AI and regex for targeted scanning.
- Runs ClamAV/Defender scans directly on endpoints via SSH with least-privilege credentials.
- Sends real-time scan results and remediation updates through Telegram, Slack, or email.
- Runs locally with limited permissions: no need for elevated Wazuh manager access.

🎯 **Impact**
- Eliminates manual lag: scans start automatically and immediately.
- Standardizes response playbooks for reliable, repeatable remediation.
- Reduces threat dwell time, minimizing risk exposure.
- Provides full event-to-remediation visibility via logs and notifications.

🚀 **Get Started**
1. Configure Wazuh Manager to forward AV alerts to this n8n webhook.
2. Import this workflow JSON into your n8n instance.
3. Set up the required credentials: OpenAI API, SSH access for ClamAV scanning, and notification channels (Telegram/Slack/email).
4. Activate the workflow and monitor alerts triggering automated scans and reports.

📂 **Enjoy customizing**
- Swap ClamAV for your preferred antivirus commands (e.g., Defender) as needed.
- Integrate with your existing communication or ticketing systems.
- Extend or adapt for multi-endpoint orchestration or other alert rules.

Created by Mariskarthick M
Senior Security Analyst | Detection Engineer | Threat Hunter | Open-Source Enthusiast
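As an illustration of the path-extraction step, here is a minimal Code-node sketch. The alert field names and log formats are assumptions; adjust them to match your actual Wazuh alert payload.

```javascript
// Sketch of the regex side of the file-path extraction for an n8n Code node.
// The field name (full_log) and log formats are assumptions about your
// Wazuh payload; the workflow combines this kind of matching with AI.
const alert = $input.first().json;
const logLine = alert.full_log || '';

// Typical ClamAV log line: "/home/user/eicar.com: Eicar-Test-Signature FOUND"
const clamavMatch = logLine.match(/^(\/[^:]+):\s+\S+\s+FOUND/m);

// Typical Windows paths look like "C:\Users\...\malware.exe"
const windowsMatch = logLine.match(/[A-Za-z]:\\(?:[^\\\s"]+\\)*[^\\\s"]+/);

const infectedPath = clamavMatch?.[1] || windowsMatch?.[0] || null;

return [{ json: { infectedPath, os: clamavMatch ? 'linux' : 'windows' } }];
```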
by Jonathan
This workflow takes an alert from Syncro, determines whether it's an `agent_offline_trigger` type, then determines whether it's a new alert or a close for an existing alert, and submits it to OpsGenie. New alerts create a new alert in OpsGenie, and resolved alerts close the corresponding alert in OpsGenie. It doesn't require any kind of lookup table (such as Google Sheets) because OpsGenie allows you to submit a unique ID (known as an alias) along with the alert, which can be referenced later when closing it. The trigger type can be changed to suit your needs. You will need to create an API integration in OpsGenie. In Syncro, in addition to setting up the appropriate notification-to-webhook, you will also need a script that closes the `agent_offline_trigger` alert and an automated remediation to trigger that script when the asset goes offline (the script is queued and run when the asset comes back online).
> This workflow is part of an MSP collection. The original can be found here: https://github.com/bionemesis/n8nsyncro
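The alias is what removes the need for external state. Stripped down, the two OpsGenie calls look roughly like this (a sketch; the Syncro payload fields and the alias format are assumptions):

```javascript
// Sketch of the alias mechanic: create on a new alert, close on resolve.
// OPSGENIE_API_KEY is a placeholder; the alias ties the two calls together.
const OPSGENIE_API_KEY = process.env.OPSGENIE_API_KEY;
const headers = {
  'Content-Type': 'application/json',
  Authorization: `GenieKey ${OPSGENIE_API_KEY}`,
};

// New alert from Syncro: use the Syncro alert ID as the OpsGenie alias
async function createAlert(syncroAlert) {
  await fetch('https://api.opsgenie.com/v2/alerts', {
    method: 'POST',
    headers,
    body: JSON.stringify({
      message: `Agent offline: ${syncroAlert.asset_name}`,
      alias: `syncro-${syncroAlert.id}`, // unique ID referenced later on close
    }),
  });
}

// Resolved alert: close the existing OpsGenie alert by that same alias
async function closeAlert(syncroAlert) {
  await fetch(
    `https://api.opsgenie.com/v2/alerts/syncro-${syncroAlert.id}/close?identifierType=alias`,
    { method: 'POST', headers, body: JSON.stringify({}) }
  );
}
```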
by n8n Team
This workflow demonstrates how to export SQL to XML and present the data nicely formatted using an XSL template. The upper part of the workflow starts with a webhook, fetches several random records from the SQL table, and converts them into an XML string. A final XML file is then created that contains a link to the XML stylesheet file. The lower part of the workflow contains a helper webhook that reads an XSL template from a GitHub gist and serves it back via the Respond to Webhook node. This is required to comply with the CORS rules of modern browsers, which dictate that the XML data and the stylesheet file must come from the same domain.
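The assembly step boils down to something like the following Code-node sketch. The `/webhook/style.xsl` path and the record element names are assumptions for illustration:

```javascript
// Sketch of the final XML assembly: wrap the converted records in a document
// that references the stylesheet served by the helper webhook on the same
// domain, which is what satisfies the browser's same-origin/CORS requirement.
const recordsXml = items
  .map((item) => `  <record><name>${item.json.name}</name></record>`)
  .join('\n');

const xml = `<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/webhook/style.xsl"?>
<records>
${recordsXml}
</records>`;

return [{ json: { xml } }];
```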
by Diptamoy Barman
Lead Qualification & Smart Outreach — Automated Scoring System

Automate your lead intake, scoring, and outreach pipeline. This workflow collects leads from forms, enriches and scores them using Relevance AI, routes them by quality, and triggers the right follow-up, all without manual busywork.

🚀 **What it Does**
- Collects leads from your forms in real time.
- Enriches each lead (individual + company) for better context.
- Scores leads automatically using Relevance AI templates.
- Routes leads into HOT / WARM / COLD tiers for prioritization.
- Drafts or sends personalized outreach emails for each tier.
- Logs all leads and outcomes to your CRM or Google Sheets.
- Notifies your team (e.g., via Slack) when a hot lead arrives.

🧩 **Why Use It**
- **Save time:** stop manually sorting through raw leads.
- **Focus on the best opportunities:** route only top leads to your sales team.
- **Personalized outreach:** automated but tailored by lead quality.
- **Scalable & repeatable:** works for startups, agencies, or larger teams.
- **Adaptable:** swap CRMs, forms, or email providers easily.

🔧 **Prerequisites & Setup**
Before importing or running the workflow, set up these connections:
- **Relevance AI**: clone the tools (resources provided in the workflow) for lead scoring and company scoring, and copy your API key into the HTTP Request nodes.
- **Form intake**: use n8n's built-in form trigger or connect Typeform, Tally, HubSpot Forms, or any webhook-based intake.
- **CRM or database**: start with Google Sheets (included in the sample workflow) or connect HubSpot, Salesforce, Pipedrive, Zoho, Airtable, Notion, or any SQL/NoSQL DB.
- **Email provider**: use Gmail (included), or swap in Outlook, HubSpot Email, SendGrid, Mailgun, etc.
- **Team notifications (optional)**: configure Slack (or other tools) for instant alerts on hot leads.

⚙️ **How It Works (Simplified Flow)**
1. **Lead intake:** collects leads from your form or CRM.
2. **Lead enrichment:** uses Relevance AI to score individual fit (role, expertise, influence) and company fit (size, industry, market relevance).
3. **Scoring & insights:** combines both into a final lead score with labels and notes.
4. **Routing:** splits leads into HOT / WARM / COLD tiers.
5. **Outreach:** HOT leads get a review-ready draft email for your team; WARM and COLD leads get appropriate auto-sent follow-ups.
6. **Logging & alerts:** saves structured data to your CRM or sheet and notifies your team of hot leads.

🙋‍♂️ **Who is This For**
- **Startups & SaaS teams** that need to prioritize a flood of inbound leads.
- **Agencies & consultancies** qualifying prospects from ads or webinars.
- **Small sales teams** that want to spend time only on the best leads.
- **Freelancers or solopreneurs** who want a lightweight but effective qualification process.
- **Automation newbies** who want a production-ready system to sell for $1k-$3k.

💡 **Why It Stands Out**
- **Real intelligence:** uses data-driven Relevance AI scoring rather than static rules.
- **Action-oriented:** routes and triggers the right next step immediately.
- **Personalized yet scalable:** adapts outreach to each lead tier.
- **Flexible integrations:** works with most popular CRMs, forms, and email tools.

🔥 **Best Practices & Tips**
- Adjust the weighting of individual vs. company scores in your Relevance AI template (default: 40% vs. 60%).
- Tune the Router thresholds (e.g., HOT ≥ 80, WARM 60-79, COLD < 60) to match your sales goals; a sketch of this logic follows below.
- Add a human approval step for high-value deals.
- Expand with enrichment APIs (e.g., Clearbit, Apollo) for richer lead data.
- Keep all API keys private and out of screenshots or repos.
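To make the routing concrete, here is a minimal Code-node sketch of the default weighting and thresholds. The input field names are assumptions; in the workflow the raw scores come from the Relevance AI HTTP Request nodes:

```javascript
// Sketch of the scoring + routing logic using the default weights
// (40% individual, 60% company) and thresholds (HOT >= 80, WARM 60-79, COLD < 60).
const { individualScore, companyScore } = $input.first().json; // each 0-100

const finalScore = 0.4 * individualScore + 0.6 * companyScore;

let tier;
if (finalScore >= 80) tier = 'HOT';       // draft a review-ready email for the team
else if (finalScore >= 60) tier = 'WARM'; // auto-send a nurturing follow-up
else tier = 'COLD';                       // auto-send a light-touch follow-up

return [{ json: { finalScore: Math.round(finalScore), tier } }];
```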
🎉 With this workflow, sales teams can focus on building relationships while the system qualifies and organizes leads automatically. Alternatively, you can sell it to sales teams for ~$3k.

Note: Demo data is pinned in some nodes to help you understand what the data looks like. Make sure to unpin those nodes when using this in production.
by Hemanth Arety
Generate AEO strategy from brand input using AI competitor analysis

This workflow automatically creates a comprehensive Answer Engine Optimization (AEO) strategy by identifying your top competitors, analyzing their positioning, and generating custom recommendations to help your brand rank in AI-powered search engines like ChatGPT, Perplexity, and Google SGE.

**Who it's for**
This template is perfect for:
- **Digital marketing agencies** offering AEO services to clients
- **In-house marketers** optimizing content for AI search engines
- **Brand strategists** analyzing competitive positioning
- **Content teams** creating AI-optimized content strategies
- **SEO professionals** expanding into Answer Engine Optimization

**What it does**
The workflow automates the entire AEO research and strategy process in 6 steps:
1. Collects brand information via a user-friendly web form (brand name, website, niche, product type, email)
2. Identifies the top 3 competitors using Google Gemini AI based on product overlap, market position, digital presence, and geographic factors
3. Scrapes the target brand's website with Firecrawl to extract value propositions, features, and content themes
4. Scrapes competitor websites in parallel to gather competitive intelligence
5. Generates a comprehensive AEO strategy using OpenAI GPT-4 with 15+ actionable recommendations
6. Delivers a formatted report via email with an executive summary, competitive analysis, and implementation roadmap

The entire process runs automatically and takes approximately 5-7 minutes to complete.

**How to set up**

Requirements: you'll need API credentials for:
- **Google Gemini API** (for competitor analysis)
- **OpenAI API** (for strategy generation)
- **Firecrawl API** (for web scraping)
- **Gmail account** (for email delivery): use OAuth2 authentication

Setup steps:
1. Import the workflow into your n8n instance
2. Configure credentials:
   - Add your Google Gemini API key to the "Google Gemini Chat Model" node
   - Add your OpenAI API key to the "OpenAI Chat Model" node
   - Add your Firecrawl API key as HTTP Header Auth credentials
   - Connect your Gmail account using OAuth2
3. Activate the workflow and copy the form webhook URL
4. Test the workflow by submitting a real brand through the form
5. Check your email for the generated AEO strategy report

Credentials setup tips:
- For Firecrawl: create HTTP Header Auth credentials with header name `Authorization` and value `Bearer YOUR_API_KEY`
- For Gmail: use OAuth2 to avoid authentication issues with 2FA
- Test each API credential individually before running the full workflow

**How it works**

Competitor identification: the Google Gemini AI agent analyzes your brand based on 4 weighted criteria: product/service overlap (40%), market position (30%), digital presence (20%), and geographic overlap (10%). It returns structured JSON data with competitor names, URLs, overlap percentages, and detailed reasoning, roughly in the shape sketched below.

Web scraping: Firecrawl extracts structured data from websites using custom schemas. For each site it captures company name, products/services, value proposition, target audience, key features, pricing info, and content themes. This runs asynchronously with 60-second waits to allow for complete extraction.
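For orientation, the competitor step's output looks something like this (a hypothetical example; the exact keys depend on your prompt):

```javascript
// Hypothetical shape of the structured JSON returned by the Gemini competitor
// step; downstream nodes expect a name, URL, overlap percentage, and reasoning
// for each of the top 3 competitors.
const competitorAnalysis = {
  competitors: [
    {
      name: 'Example Rival Inc.',
      url: 'https://example-rival.com',
      overlapPercent: 72, // weighted: product 40%, market 30%, digital 20%, geo 10%
      reasoning: 'Same product category and target audience; strong organic presence.',
    },
    // ...two more entries for the top-3 default
  ],
};
```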
Strategy generation: OpenAI GPT-4 analyzes the combined brand and competitor data to generate a comprehensive report including an executive summary, competitive analysis, 15+ specific AEO tactics across 4 categories (content optimization, structural improvements, authority building, answer engine targeting), a content priority matrix with 10 ranked topics, and a detailed implementation roadmap.

Email delivery: the strategy is formatted as a professional HTML email with clear sections, visual hierarchy, and actionable next steps. Recipients get an immediately implementable roadmap for improving their AEO performance.

**How to customize the workflow**

Change AI models:
- **Replace Google Gemini** with Claude, GPT-4, or another LLM in the competitor analysis node
- **Replace OpenAI** with Anthropic Claude or Google Gemini in the strategy generation node
- Both use LangChain agent nodes, making model swapping straightforward

Modify competitor analysis:
- **Find more competitors:** edit the AI prompt to request 5 or 10 competitors instead of 3
- **Add filtering criteria:** include factors like company size, funding stage, or geographic focus
- **Change ranking weights:** adjust the 40/30/20/10 weighting in the prompt

Enhance data collection:
- **Add social media scraping:** include LinkedIn, Twitter/X, or Facebook page analysis
- **Pull review data:** integrate G2, Capterra, or Trustpilot APIs for customer sentiment
- **Include traffic data:** add SimilarWeb or Semrush API calls for competitive metrics

Change output format:
- **Export to Google Docs:** replace Gmail with a Google Docs node to create shareable documents
- **Send to Slack/Discord:** post strategy summaries to team channels for collaboration
- **Save to a database:** store results in Airtable, PostgreSQL, or MongoDB for tracking
- **Create presentations:** generate PowerPoint slides using automation tools

Add more features:
- **Schedule periodic analysis:** run monthly competitive audits for specific brands
- **A/B test strategies:** generate multiple strategies and compare results over time
- **Multi-language support:** add translation nodes for international brands
- **Custom branding:** modify email templates with your agency's logo and colors

Adjust scraping behavior:
- **Change the Firecrawl schema:** customize extracted data fields based on industry needs
- **Add timeout handling:** implement retry logic for failed scraping attempts
- **Scrape more pages:** extend beyond the homepage to include blog, pricing, and about pages
- **Use different scrapers:** replace Firecrawl with Apify, Browserless, or custom solutions

**Tips for best results**
- **Provide clear brand information:** the more specific the product type and niche, the better the competitor identification
- **Ensure websites are accessible:** some sites block scrapers; consider adding user agents or rotating IPs
- **Monitor API costs:** Firecrawl and OpenAI charges can add up, so set usage limits
- **Review generated strategies:** AI recommendations should be reviewed and customized for your specific context
- **Iterate on prompts:** fine-tune the AI prompts based on output quality over multiple runs

**Common use cases**
- **Client onboarding** for marketing agencies: generate initial AEO assessments
- **Content strategy planning:** identify topics and angles competitors are missing
- **Quarterly audits:** track competitive positioning changes over time
- **Product launches:** understand the competitive landscape before entering a market
- **Sales enablement:** equip sales teams with competitive intelligence

Note: This workflow uses community and AI nodes that require external API access. Make sure your n8n instance can make outbound HTTP requests and has the necessary LangChain nodes installed.
by Akshay
**Overview**
This project is an AI-powered hotel receptionist built with n8n, designed to handle guest queries automatically through WhatsApp. It integrates Google Gemini, Redis, MySQL, and Google Sheets via LangChain to create an intelligent conversational system that understands and answers booking-related questions in real time. A standout feature of this workflow is its AI model-switching system: it dynamically assigns users to different Gemini models, balancing traffic, improving performance, and reducing API costs.

**How It Works**
1. **WhatsApp Trigger**: the workflow starts when a hotel guest sends a message through WhatsApp. The system captures the message text, contact details, and session information for further processing.
2. **Redis-Based Model Management**: the workflow checks Redis for a saved record of the user's previously assigned AI model. If no record exists, a Model Decider node assigns a new model (e.g., Gemini 1 or Gemini 2). Redis then stores this model assignment for an hour, ensuring consistent routing and controlled traffic distribution.
3. **Model Selector**: the Model Selector routes each user's request to the correct Gemini instance, enabling parallel execution across multiple AI models for faster response times and cost optimization.
4. **AI Agent Logic**: the LangChain AI Agent serves as the system's reasoning core. It:
   - Interprets guest questions such as "Who checked in today?", "Show me tomorrow's bookings.", or "What's the price for a deluxe suite for two nights?"
   - Generates safe, read-only SQL SELECT queries.
   - Fetches the requested data from the MySQL database.
   - Combines this with dynamic pricing or promotions from Google Sheets, if available.
5. **Response Delivery**: once the AI Agent formulates an answer, it sends a natural-sounding message back to the guest via WhatsApp, completing the interaction loop.

**Setup & Requirements**

Prerequisites (before deploying this workflow, ensure the following):
- **n8n instance** (local or hosted)
- **WhatsApp Cloud API** with messaging permissions
- **Google Gemini API key** (for both models)
- **Redis database** for user sessions and model routing
- **MySQL database** for hotel booking and guest data
- **Google Sheets account** (optional, for pricing or offer data)

**Step-by-Step Setup**
1. **Configure credentials**: add all API credentials in n8n → Settings → Credentials (WhatsApp, Redis, MySQL, Google).
2. **Prepare databases**: example MySQL tables are `bookings(id, guest_name, room_type, check_in, check_out)` and `rooms(id, type, rate, status)`. Ensure the MySQL user has read-only permissions.
3. **Set up Redis**: create a Redis key for each user, `llm-user:<whatsapp_id> = { "modelIndex": 0 }`, with a TTL of 3600 seconds (1 hour). A sketch of the assignment logic follows below.
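For reference, the Model Decider logic might look roughly like this in a Code node sitting between a Redis "Get" and a Redis "Set" node. The key naming follows the example above; the random split between two models is an assumption, since you could also weight by load, region, or priority:

```javascript
// Sketch of the Model Decider: reuse a saved assignment within the 1-hour TTL,
// otherwise assign a new model. The two-model random split is an assumption.
const MODELS = ['gemini-model-1', 'gemini-model-2'];

const saved = $input.first().json; // output of the Redis Get node
let modelIndex;

if (saved && saved.modelIndex !== undefined) {
  // Returning user within the TTL: keep the same model for consistent routing
  modelIndex = Number(saved.modelIndex);
} else {
  // New or expired user: assign a model
  modelIndex = Math.floor(Math.random() * MODELS.length);
}

// Downstream, a Redis Set node writes { "modelIndex": modelIndex } to
// llm-user:<whatsapp_id> with TTL 3600, and the Model Selector routes on it.
return [{ json: { modelIndex, model: MODELS[modelIndex] } }];
```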
4. **Connect Google Sheets (optional)**: add your sheet under Google Sheets OAuth2 and use it to manage room rates, discounts, or seasonal offers dynamically.
5. **Configure the WhatsApp webhook**: in Meta's Developer Console, set the webhook URL to your n8n instance and select message updates to trigger the workflow.
6. **Test the workflow**: send messages like "Who booked today?" or a voice message, and confirm the responses include real data from MySQL and contextual replies.

**Key Features**
- **Text & voice support** for guest interactions
- **Automatic AI model-switching** using Redis
- **Session memory** for context-aware conversations
- **Read-only SQL query generation** for database safety
- **Google Sheets integration** for live pricing and availability
- **Scalable design** supporting multiple LLM instances

**Example Guest Queries**

| Guest Query | AI Response Example |
|--------------|--------------------|
| "Who checked in today?" | "Two guests have checked in today: Mr. Ahmed (Room 203) and Ms. Priya (Room 410)." |
| "How much is a deluxe room for two nights?" | "A deluxe room costs $120 per night. The total for two nights is $240." |
| "Do you have any discounts this week?" | "Yes! We're offering a 10% weekend discount on all deluxe and suite rooms." |
| "Show me tomorrow's check-outs." | "Three check-outs are scheduled tomorrow: Mr. Khan (101), Ms. Lee (207), and Mr. Singh (309)." |

**Customization Options**

🧩 **Model Assignment Logic**: modify the Model Decider node to assign models based on user load, region, or priority level, or increase/decrease the Redis TTL for longer model persistence.

🧠 **AI Agent Prompt**: adjust the system prompt to control tone and response behavior; for example, add multilingual support or include upselling and booking-confirmation messages.

🗂️ **Database Expansion**: extend MySQL to include staff schedules, maintenance records, or restaurant reservations, then link the new queries in the AI Agent node for richer responses.

**Tech Stack**
- **n8n**: workflow automation & orchestration
- **Google Gemini (PaLM)**: LLM for reasoning & generation
- **Redis**: model assignment & session management
- **MySQL**: booking & guest data storage
- **Google Sheets**: dynamic pricing reference
- **WhatsApp Cloud API**: messaging interface

**Outcome**
This workflow demonstrates how AI automation can transform hotel operations by combining WhatsApp communication, database intelligence, and multi-model AI reasoning. It's a production-ready foundation for scalable, cost-optimized, AI-driven hospitality solutions that deliver fast, accurate, and personalized guest interactions.
by Manav Desai
WhatsApp RAG Chatbot with Supabase, Gemini 2.5 Flash, and OpenAI Embeddings

This n8n template demonstrates how to build a WhatsApp-based AI chatbot that answers user questions using document retrieval (RAG), powered by Supabase for storage, OpenAI embeddings for semantic search, and the Gemini 2.5 Flash LLM for generating high-quality responses.

Use cases are many: turn your WhatsApp into a knowledge assistant for FAQs, customer support, or internal company documents, all without coding.

**Good to know**
- The workflow uses OpenAI embeddings for both document embeddings and query embeddings, ensuring accurate semantic search.
- **Gemini 2.5 Flash** is used to generate user-friendly answers from the retrieved context.
- Messages are processed in real time and sent back directly to WhatsApp.
- The workflow is modular: you can split document ingestion and query handling for large-scale setups.
- Supabase and WhatsApp API credentials must be configured before running.

**How it works**
1. **Trigger**: a new WhatsApp message triggers the workflow via webhook.
2. **Message check**: determines whether the message is a query or a document upload.
3. **Document handling**: fetches the file URL from WhatsApp, converts the binary to text, generates embeddings with OpenAI, and stores them in Supabase.
4. **Query handling**: generates query embeddings with OpenAI, retrieves the relevant context from Supabase, and passes the context to Gemini 2.5 Flash to compose a response (sketched below).
5. **Response**: sends the answer back to the user on WhatsApp.
6. **Optional**: add a Gmail node to forward chat logs or daily summaries.

**How to use**
1. Configure the WhatsApp Business API webhook for incoming messages.
2. Add your Supabase and OpenAI credentials in n8n's credentials manager.
3. Upload documents via WhatsApp to populate the Supabase vector store.
4. Ask queries; the bot retrieves context and answers using Gemini 2.5 Flash.

**Requirements**
- **WhatsApp Business API** (or Twilio WhatsApp Sandbox)
- **Supabase account** (vector storage for embeddings)
- **OpenAI API key** (for generating embeddings)
- **Gemini API access** (for LLM responses)

**Customising this workflow**
- Swap WhatsApp for Telegram, Slack, or email for different chat channels.
- Extend ingestion to other sources like Google Drive or Notion.
- Adjust the number of retrieved documents or the prompt style in Gemini for tone control.
- Add a Gmail output node to send logs or alerts automatically.
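The query-handling path condenses to something like the sketch below. The `match_documents` RPC follows the common Supabase vector-store convention and is an assumption about your database setup; the embedding model name is illustrative:

```javascript
// Sketch of query handling: embed the WhatsApp question with OpenAI, then
// retrieve the closest document chunks from Supabase for Gemini to answer from.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_KEY);

async function retrieveContext(question) {
  // 1. Query embedding (same model family as the document embeddings)
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: 'text-embedding-3-small', input: question }),
  });
  const embedding = (await res.json()).data[0].embedding;

  // 2. Semantic search over the stored chunks (RPC name is an assumption)
  const { data: matches } = await supabase.rpc('match_documents', {
    query_embedding: embedding,
    match_count: 4,
  });

  // 3. The matched chunks become the context passed to Gemini 2.5 Flash
  return matches.map((m) => m.content).join('\n---\n');
}
```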
by Habeeb Mohammed
**Who's it for**
This workflow is perfect for individuals who want to maintain detailed financial records without the overhead of complex budgeting apps. If you prefer natural language over data-entry forms and want an AI assistant to handle the bookkeeping, this template is for you. It's especially useful for:
- People who want to track cash and online transactions separately
- Anyone who lends money to friends/family and needs debt tracking
- Users comfortable with Slack as their primary interface
- Those who prefer conversational interactions over manual spreadsheet updates

**What it does**
This AI-powered finance tracker transforms your Slack workspace into a personal finance command center. Simply mention your bot with transactions in plain English (e.g., "₹500 cash food, borrowed ₹1000 from John"), and the AI agent will:
- Parse transactions using natural language understanding via Google Gemini
- Calculate balance changes for cash and online accounts
- Show a preview of changes before saving anything
- Update Google Sheets only after you approve
- Track debts (who owes you, who you owe, repayments)
- Send daily reminders at 11 PM with current balances and active debts

The workflow maintains conversational context using PostgreSQL memory, so you can say things like "yesterday's transactions" or "that payment to Sarah" and it understands the context.

**How it works**

Scheduled daily check-in (11 PM):
- Fetches current balances from Google Sheets
- Retrieves all active debts
- Formats and sends a Slack message with a balance summary
- Prompts you to share the day's transactions

AI agent transaction processing (when you mention the bot in Slack):
- **Phase 1: Parse & analyze**: extracts the amount, payment type (cash/online), and category (food, travel, etc.); identifies the transaction type (expense, income, borrowed, lent, repaid); stores conversation context in PostgreSQL memory.
- **Phase 2: Calculate & preview**: reads current balances from Google Sheets, calculates new balances based on the transactions, shows a formatted preview with the projected changes, and waits for your approval ("yes"/"no").
- **Phase 3: Update database (only after approval)**: logs transactions with unique IDs and timestamps, updates debt records with person names and status, recalculates and stores new balances, and handles the debt lifecycle (Active → Settled).
- **Phase 4: Confirmation**: sends a success message with updated balances, an active-debts summary, and the logging timestamp.

**Requirements**

Essential services:
- n8n instance (self-hosted or cloud)
- Slack workspace with admin access
- Google account
- Google Gemini API key
- PostgreSQL database

Recommended:
- Claude AI model (mentioned in the workflow notes as a better alternative to Gemini)

**How to set up**

1. Google Sheets setup: create a new Google Sheet with three tabs named exactly as follows.

Balances tab:
| Date | Cash_Balance | Online_Balance | Total_Balance |
|------|--------------|----------------|---------------|

Transactions tab:
| Transaction_ID | Date | Time | Amount | Payment_Type | Category | Transaction_Type | Person_Name | Description | Added_At |
|----------------|------|------|--------|--------------|----------|------------------|-------------|-------------|----------|

Debts tab:
| Person_Name | Amount | Type | Date_created | Status | Notes |
|-------------|--------|------|--------------|--------|-------|

Add header rows and one initial balance row in the Balances tab with today's date and starting amounts.
2. Slack app setup:
- Go to api.slack.com/apps and create a new app
- Under OAuth & Permissions, add these Bot Token Scopes: `app_mentions:read`, `chat:write`, `channels:read`
- Install the app to your workspace
- Copy the Bot User OAuth Token
- Create a dedicated channel (e.g., #personal-finance-tracker)
- Invite your bot to the channel

3. Google Gemini API:
- Visit ai.google.dev
- Create an API key
- Save it for the n8n credentials setup

4. PostgreSQL database: set up a PostgreSQL database (you can use the Supabase free tier):
- Create a new project
- Note down the connection details (host, port, database name, user, password)
- The workflow will auto-create the required table

5. n8n workflow configuration: import the workflow and configure the following.

A. Credentials:
- **Google Sheets OAuth2**: connect your Google account
- **Slack API**: add your Bot User OAuth Token
- **Google Gemini API**: add your API key
- **PostgreSQL**: add the database connection details

B. Node parameters:
- All Google Sheets nodes: select your finance spreadsheet
- Slack nodes: select your finance channel
- Schedule Trigger: adjust the time if you prefer a different check-in hour (default: 11 PM)
- Postgres Chat Memory: change `sessionKey` to something unique (e.g., `finance_tracker_your_name`); keep `tableName` as `n8n_chat_history_finance` or rename it consistently

C. Slack trigger setup:
- Activate the "Bot Mention trigger" node
- Copy the webhook URL from n8n
- In the Slack app settings, go to Event Subscriptions
- Enable events and paste the webhook URL
- Subscribe to the bot event `app_mention`
- Save changes

6. Test the workflow:
- Activate both workflow branches (scheduled and agent)
- In your Slack channel, mention the bot: `@YourBot ₹100 cash snacks`
- The bot should respond with a preview
- Reply "yes" to approve
- Verify the Google Sheets are updated

**How to customize**
- **Change transaction categories**: edit the AI Agent's system message to add/remove categories. Current categories: travel, food, entertainment, utilities, shopping, health, education, other.
- **Modify the daily check-in time**: change the Schedule Trigger's `triggerAtHour` value (0-23 in 24-hour format).
- **Add currency support**: replace ₹ with your currency symbol in the Format Daily Message code node and the AI Agent system prompt examples.
- **Switch AI models**: the workflow uses Google Gemini, but the notes recommend Claude. To switch, replace the "Google Gemini Chat Model" node, add Claude credentials, and connect it to the AI Agent node.
- **Customize debt types**: modify the AI Agent's system prompt to change the debt-handling logic (currently `I_Owe` and `They_Owe_Me`); you can add more types or change the naming.
- **Add more payment methods**: current methods are cash and online. To add more (e.g., credit card), update the AI Agent prompt, modify the Balances sheet structure, and update the balance calculation logic.
- **Change approval keywords**: edit the AI Agent's Phase 2 approval logic to recognize different approval phrases.
- **Add spending analytics**: extend the daily check-in to calculate weekly/monthly spending summaries and category-wise breakdowns, using additional Code nodes to process the transaction history.

**Important Notes**

⚠️ **Never trigger with normal messages**: only use app mentions (@botname) to avoid infinite loops where the bot replies to its own messages.
💡 **Context awareness**: the bot remembers conversation history, so you can reference "yesterday", "last week", or previous transactions naturally.
🔒 **Data privacy**: all your financial data stays in your Google Sheets and PostgreSQL database. The AI only processes the transaction text temporarily.
📊 **Back up regularly**: export your Google Sheets periodically as a backup.
**Pro Tips**
- Start with small test transactions to ensure everything works.
- Use consistent person names for debt tracking.
- The bot understands various formats: "₹500 cash food" = "paid 500 rupees in cash for food".
- You can batch transactions in one message: "₹100 travel, ₹200 food, ₹50 snacks".
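In the workflow the parsing is done by the Gemini agent via its system prompt, not by code, but for illustration this is roughly the structure it extracts from a batched message (a hypothetical regex sketch, not the actual mechanism):

```javascript
// Illustration only: the target structure the agent extracts from a message
// like "₹100 travel, ₹200 food, ₹50 snacks". Defaults here are assumptions
// (e.g., payment type falls back to "online" when unspecified).
function parseTransactions(message) {
  return message.split(',').map((p) => p.trim()).map((part) => {
    const amount = Number((part.match(/₹\s*(\d+)/) || [])[1]);
    const paymentType = /\bcash\b/i.test(part) ? 'cash' : 'online';
    // Whatever remains is a rough category guess ("food", "travel", ...)
    const category = part
      .replace(/₹\s*\d+/, '')
      .replace(/\b(cash|online)\b/gi, '')
      .trim();
    return { amount, paymentType, category, transactionType: 'expense' };
  });
}

console.log(parseTransactions('₹100 travel, ₹200 food, ₹50 snacks'));
// [ { amount: 100, paymentType: 'online', category: 'travel', ... }, ... ]
```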
by Charles
🚀 Daily IndieHackers Reddit Trend Analysis to Slack

> Transform Reddit chaos into actionable startup intelligence
> Get AI-powered insights from r/indiehackers delivered to your Slack every morning

🎯 **Who's It For**
This template is designed for startup founders, growth teams, and product managers who need to:
- Stay ahead of indie hacker trends without manual Reddit browsing
- Understand what's working in the entrepreneurial community
- Get actionable insights for product and marketing decisions
- Keep their team informed about emerging opportunities

Perfect for teams building products for entrepreneurs, or anyone wanting to leverage community intelligence for competitive advantage.

✨ **What It Does**
Transform your morning routine with automated intelligence gathering that delivers structured, AI-powered summaries of the hottest r/indiehackers discussions directly to your Slack channel.

🧠 Smart Analysis Features

| Feature | Description |
|---------|-------------|
| 🔥 Hotness Scoring | Calculates engagement scores using time-decay algorithms |
| 📊 Topic Extraction | Identifies key themes and trending subjects |
| 💰 Traction Signals | Spots revenue, metrics, and growth indicators |
| 🎯 Theme Clustering | Groups posts into actionable categories |
| ⚡ Action Items | Generates specific recommendations for your team |

📱 Slack Integration
Receive beautifully formatted messages with:
- Executive summaries and key takeaways
- Top 3 hottest posts with engagement metrics
- Interactive buttons for deeper exploration
- Team discussion prompts

⚙️ **How It Works**

🕐 Daily 8 AM Trigger → 📱 Fetch Reddit Posts → 🔄 Process Data → 🤖 Gemini AI Analysis → ✨ Groq Slack Formatting → 💬 Deliver to Slack

🔄 The Complete Process
1. **Automated trigger**: every morning at 8 AM, the workflow springs into action.
2. **Reddit data collection**: fetches the latest 5 posts from r/indiehackers with full metadata.
3. **Data processing**: structures the raw Reddit data for optimal AI analysis.
4. **AI-powered analysis**: Gemini AI performs a deep analysis, calculating hotness scores, extracting topics, and identifying patterns.
5. **Slack formatting**: the Groq AI Agent transforms the insights into Slack Block Kit messages.
6. **Team delivery**: your designated Slack channel receives the formatted analysis.

🛠️ **Requirements**
You'll need API access for Reddit (OAuth2), Google Gemini, Groq, and Slack (OAuth2). All have free tiers available.

🚀 **Setup Guide**

1️⃣ Configure your credentials
Add these credentials in n8n: Reddit OAuth2, Google Gemini, Groq, and Slack OAuth2. The workflow will guide you through each setup.
2️⃣ Customize the schedule
Default: daily at 8:00 AM. To modify it, edit the "Daily Schedule" cron trigger node. For example, to run at 9:30 AM:

```json
{
  "triggerTimes": {
    "item": [{ "hour": 9, "minute": 30 }]
  }
}
```

3️⃣ Set your Slack destination
- Open the "Send to Slack" node
- Select your target channel
- Configure notification preferences

4️⃣ Adjust analysis parameters
- Post limit: change the default of 5 posts in the "Get many posts" Reddit node, e.g. `"limit": 10` (recommended range: 3-10 posts).
- Context customization:

```json
{
  "channel_type": "team",
  "audience": "Growth, Product, and Founders",
  "cta_link": "https://your-dashboard.com",
  "timeframe_label": "This Week"
}
```

🎨 **Customization Options**

🔍 Analysis focus areas: transform the workflow for different insights.
- SaaS-focused analysis: add to the Gemini prompt: "Focus on SaaS and B2B insights, prioritizing recurring revenue and product-market fit signals"
- Geographic targeting: add "Prioritize posts relevant to [your region/market]"
- Stage-specific insights: add "Focus on [early-stage/growth-stage] startup challenges"

📈 Hotness algorithm tweaking (a runnable sketch appears at the end of this entry):
- Default formula: `(ups + 2*num_comments) * freshness_decay`
- Emphasize comments: `(ups + 3*num_comments) * freshness_decay`
- Include the upvote ratio: `(ups * upvote_ratio + 2*num_comments) * freshness_decay`

🌐 Multi-subreddit analysis: expand beyond r/indiehackers with additional communities such as r/startups, r/entrepreneur, r/SideProject, r/buildinpublic, and r/nocode.

💾 Data storage extensions: enhance with historical tracking.

| Node Type | Purpose | Benefit |
|-----------|---------|---------|
| Google Sheets | Trend storage | Historical analysis |
| Airtable | Advanced data management | Rich analytics |
| Webhook | External analytics | Custom dashboards |

📊 **Expected Output**

📱 Daily Slack message structure:

```
🚀 IndieHackers Trends — This Week
📋 TL;DR: [One-sentence key insight]

🔥 Hot Posts (Top 3)
[Post Title] (Hotness: 8.7)
Topics: SaaS launch, pricing strategy
💬 23 comments | 👍 156 ups | 📅 Posted 4 hours ago
[Open Reddit Button]

🧭 Themes Summary
Go-to-market tactics — 3 posts, hotness: 24.1
Product launches — 2 posts, hotness: 18.3

✅ What to Do Now
- Test pricing page variations based on community feedback
- Consider cold email strategies mentioned in hot posts
- Validate product ideas using discussed frameworks
[Open Dashboard Button]
```

💡 **Pro Tips for Success**

🎯 Optimization strategies:
- Weeks 1-2 (baseline): monitor output quality and team engagement; note which insights generate the most discussion.
- Weeks 3-4 (refinement): adjust the AI prompts based on feedback; fine-tune the hotness scoring for your needs.
- Month 2+ (advanced usage): add historical trend analysis, create custom dashboards with stored data, and build feedback loops for continuous improvement.

🚨 Common pitfalls to avoid:

| Issue | Solution |
|-------|---------|
| API rate limits | Reduce post count or increase time intervals |
| Poor insight quality | Refine prompts with specific examples |
| Team engagement drop | Rotate focus areas and encourage thread discussions |
| Information overload | Limit to top 3 posts and key themes only |

🔧 **Troubleshooting**

❌ Common issues & solutions:
- "Model not found" error. Cause: Gemini regional availability. Fix: check the supported regions or switch to an alternative AI model.
- Slack formatting broken. Cause: invalid Block Kit JSON. Fix: validate the JSON structure in the AI Agent output.
- Missing Reddit data. Cause: API credentials or rate limits. Fix: verify the OAuth2 setup and check usage quotas.
- AI timeouts. Cause: too much data or overly complex prompts. Fix: reduce the post count or simplify the analysis requests.

⚡ Performance optimization:
- Keep the analysis under 10 posts for optimal speed
- Monitor execution times in the n8n logs
- Add error-handling nodes for production reliability
- Use webhook timeouts for external API calls

🌟 **Advanced Use Cases**
- 📈 Competitive intelligence: modify prompts to track specific competitors or market segments mentioned in discussions
- 🎯 Product validation: focus the analysis on posts related to your product category for market research
- 📝 Content strategy: use trending topics to inform your content calendar and thought leadership
- 🤝 Community engagement: identify opportunities to participate in discussions and build relationships

Ready to transform your startup intelligence gathering? 🚀 Deploy this workflow and start receiving actionable insights tomorrow morning!
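As referenced in the customization section, a minimal runnable version of the default hotness formula might look like this. The exponential decay with a 6-hour half-life is an assumption; the template only specifies that a freshness decay is applied:

```javascript
// Sketch of the default hotness score: (ups + 2*num_comments) * freshness_decay,
// using a simple exponential time-decay (6-hour half-life is an assumption).
function hotness(post, now = Date.now() / 1000) {
  const ageHours = (now - post.created_utc) / 3600;
  const freshnessDecay = Math.pow(0.5, ageHours / 6); // halves every 6 hours
  return (post.ups + 2 * post.num_comments) * freshnessDecay;
}

const example = { ups: 156, num_comments: 23, created_utc: Date.now() / 1000 - 4 * 3600 };
console.log(hotness(example).toFixed(1)); // ≈ 127 for a 4-hour-old post
```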
by franck fambou
**Overview**
This intelligent chatbot workflow enables natural-language conversations with your documents, supporting multiple file formats including PDFs, Word documents, Excel spreadsheets, and text files. Built with RAG (Retrieval-Augmented Generation) technology, the chatbot can understand, analyze, and answer questions about your document content with contextual accuracy and intelligent responses.

**How It Works**
Intelligent document processing & conversation pipeline:
- **Multi-format document ingestion**: automatically processes and indexes various document formats (PDF, DOCX, XLSX, TXT, etc.)
- **Smart content chunking**: breaks documents into meaningful segments while preserving context and relationships (see the sketch at the end of this entry)
- **Vector database storage**: creates searchable embeddings for fast and accurate information retrieval
- **Contextual conversation engine**: uses AI to understand user queries and retrieve the relevant document sections
- **Natural language responses**: generates human-like responses with citations and source references
- **Multi-turn conversations**: maintains conversation history and context across multiple interactions
- **Real-time processing**: instant responses with live document updates and dynamic content refresh

**Setup Instructions**
Estimated setup time: 15-20 minutes

Prerequisites:
- n8n instance (v0.200.0 or higher recommended)
- OpenAI/Gemini API key for embeddings and chat completion
- Vector database service (optional: Pinecone, Weaviate, or Qdrant)
- File storage service (optional: Google Drive, Dropbox, AWS S3)
- Web server for the chatbot interface (optional)

Configuration steps:
1. Configure document input sources
   - Set up a file-upload webhook for direct document submission
   - Configure cloud storage watchers for automatic document processing
   - Add support for multiple file formats and size limits
   - Set up document validation and security checks
2. Set up the document processing pipeline
   - Configure text extraction engines for different file types
   - Set up intelligent chunking parameters (chunk size, overlap, boundaries)
   - Add metadata extraction for document categorization
   - Configure OCR for scanned documents (optional)
3. Configure the vector database
   - Set up your chosen vector database credentials
   - Configure embedding model settings (Gemini models/text-embedding-004 recommended)
   - Set up the collection/index structure for document storage
   - Configure search parameters and similarity thresholds
4. Set up the AI chat engine
   - Add your AI service API credentials (Gemini, Claude, etc.)
   - Configure conversation prompts and system instructions
   - Set up context window management and token optimization
   - Add response formatting and citation rules
5. Configure the chat interface
   - Set up webhook endpoints for the chat API
   - Configure session management and conversation history
   - Add authentication and rate limiting (optional)
   - Set up real-time updates and streaming responses
6. Set up monitoring & analytics
   - Configure conversation logging and analytics
   - Set up performance monitoring for response times
   - Add usage tracking and cost monitoring
   - Configure error handling and failover mechanisms

**Use Cases**

Business & enterprise:
- **Knowledge base queries**: ask questions about company policies, procedures, and documentation
- **Contract analysis**: query legal documents, contracts, and compliance materials
- **Training materials**: interactive learning with training manuals and educational content
- **Financial reports**: analyze and discuss financial statements, budgets, and forecasts

Research & academia:
- **Research paper analysis**: discuss findings, methodologies, and citations from academic papers
- **Literature reviews**: compare and contrast multiple research documents
- **Thesis support**: get insights from reference materials and research data
- **Grant proposals**: analyze requirements and optimize proposal content

Legal & compliance:
- **Legal document review**: query contracts, agreements, and legal texts
- **Regulatory compliance**: understand compliance requirements from regulatory documents
- **Case law research**: analyze legal precedents and court decisions
- **Policy analysis**: interpret organizational policies and procedures

Technical documentation:
- **API documentation**: interactive queries about technical specifications
- **User manuals**: get help and guidance from product documentation
- **Code documentation**: understand codebases and technical implementations
- **Troubleshooting guides**: interactive problem-solving with technical guides

Personal productivity:
- **Document summarization**: get quick summaries of long documents
- **Information extraction**: find specific data points across multiple documents
- **Content research**: research topics across your personal document library
- **Meeting notes**: query and analyze meeting transcripts and notes

**Key Features**

Advanced document processing:
- **Multi-format support**: PDF, DOCX, XLSX, TXT, PPTX, and more
- **Intelligent chunking**: context-aware document segmentation
- **Metadata extraction**: automatic categorization and tagging
- **OCR integration**: process scanned documents and images containing text

Intelligent conversation:
- **Contextual understanding**: maintains conversation context and document relationships
- **Source attribution**: provides citations and references for all answers
- **Multi-document queries**: compare and analyze across multiple documents
- **Follow-up questions**: natural conversation flow with clarifying questions

Performance & scalability:
- **Fast retrieval**: vector-based semantic search for instant responses
- **Scalable architecture**: handles large document collections efficiently
- **Batch processing**: processes multiple documents simultaneously
- **Caching system**: optimized response times with intelligent caching

Security & privacy:
- **Document encryption**: secure storage and transmission of sensitive documents
- **Access control**: user-based permissions and document access restrictions
- **Audit logging**: complete conversation and access audit trails
- **Data retention**: configurable data retention and deletion policies

**Technical Architecture**
Document processing flow:
File Upload → Format Detection → Text Extraction → Content Chunking → Metadata Extraction → Embedding Generation → Vector Storage → Index Creation

Conversation flow:
User Query → Intent Analysis → Vector Search → Context Retrieval → Response Generation → Source Attribution → Answer Formatting → Delivery

Supported file formats:
- **Documents**: PDF, DOC, DOCX, RTF, TXT, MD
- **Spreadsheets**: XLS, XLSX, CSV
- **Presentations**: PPT, PPTX
- **Images**: PNG, JPG (with OCR)
- **Archives**: ZIP (auto-extracts supported formats)
- **Web**: HTML, XML

**Integration Options**

Chat interfaces:
- **Web widget**: embeddable chat widget for websites
- **API endpoints**: RESTful API for custom integrations
- **Slack/Teams**: direct integration with team collaboration tools
- **Mobile apps**: API-first design for mobile application integration

Data sources:
- **Cloud storage**: Google Drive, Dropbox, OneDrive, AWS S3
- **Document systems**: SharePoint, Confluence, Notion
- **Email**: process attachments from email systems
- **CRM/ERP**: integration with business systems

**Performance Specifications**
- **Response time**: < 3 seconds for typical queries
- **Document capacity**: supports collections of 10,000+ documents
- **Concurrent users**: scales to handle multiple simultaneous conversations
- **Accuracy**: > 90% relevance for domain-specific queries

**Advanced Configuration Options**

Customization:
- **Custom prompts**: tailor AI behavior for specific use cases
- **Branding**: customize the chat interface with your company branding
- **Language support**: multi-language document processing and responses
- **Domain expertise**: fine-tune for specific industries or domains

Analytics & monitoring:
- **Usage analytics**: track popular queries and document usage
- **Performance metrics**: monitor response times and accuracy
- **User feedback**: collect ratings and improve responses
- **A/B testing**: test different configurations and prompts

**Troubleshooting & Support**

Common issues:
- **Slow responses**: check vector database performance and API limits
- **Inaccurate answers**: review the chunking strategy and embedding quality
- **Format errors**: verify document formats and processing capabilities
- **Memory issues**: monitor token usage and context window limits

Optimization tips:
- Use clear, specific questions for best results
- Ensure documents are well formatted with proper headers
- Perform regular vector database maintenance for optimal performance
- Monitor API usage to optimize costs and performance
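As referenced above, the chunking step can be sketched as fixed-size chunks with overlap so context isn't cut at chunk boundaries. The sizes below are illustrative defaults, not the workflow's settings; real implementations also respect sentence and paragraph boundaries:

```javascript
// Sketch of the "smart content chunking" step with overlapping windows.
// chunkSize and overlap are illustrative; overlap must be < chunkSize.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push({
      content: text.slice(start, start + chunkSize),
      start, // offset metadata kept for citations / source attribution
    });
  }
  return chunks;
}

// Each chunk is then embedded and stored in the vector database along with
// its document ID and offset, so answers can cite their source passage.
```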
by inderjeet Bhambra
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**Who is this for?**
IT teams and support organizations looking to automate Level 1 support with AI-powered assistance while maintaining proper ticket management workflows.

**What problem does this solve?**
Eliminates repetitive manual support tasks by providing instant, context-aware assistance that references organizational knowledge and creates structured tickets when needed.

**What this workflow does**
- **RAG pipeline**: processes PDF/CSV documents into a searchable vector database
- **Intelligent Slack bot**: the AI helpdesk assistant handles support requests with thread-aware conversations
- **Vector knowledge search**: searches embedded knowledge base articles and historical case data
- **JIRA integration**: creates, searches, and manages support tickets automatically
- **Emoji reactions**: users can trigger actions (create tickets, escalate) via emoji reactions

**Requirements**

Required accounts:
- n8n Cloud or self-hosted instance
- Slack workspace with admin access
- Supabase account (vector database)
- JIRA Cloud instance
- OpenAI API key

Technical prerequisites:
- Basic n8n workflow knowledge
- Slack app creation experience
- Understanding of vector databases

**Setup Steps**

1. Slack app configuration
   - Create a new Slack app with Bot Token Scopes: `app_mentions:read`, `channels:history`, `channels:read`, `groups:history`, `groups:read`, `im:history`, `im:read`, `mpim:history`, `mpim:read`, `users:read`
   - Configure Event Subscriptions: `app_mention`, `message.channels`, `message.groups`, `reaction_added`
   - Set the Request URL to your n8n Slack Trigger webhook
2. Supabase vector database setup
   - Create a new Supabase project
   - Enable the pgvector extension
   - Create a documents table with a vector column (1536 dimensions for OpenAI embeddings)
   - Configure RLS policies for secure access
3. JIRA configuration
   - Generate an API token from JIRA Cloud
   - Create a helpdesk project with appropriate issue types
   - Note the project ID and issue type IDs for workflow configuration
4. n8n workflow configuration
   - Import the workflow and configure credentials
   - Update the Slack channel IDs in the trigger nodes
   - Set the OpenAI API key in all OpenAI nodes
   - Configure the Supabase connection in the vector store nodes
   - Update the JIRA project settings in the MCP server nodes
5. Knowledge base data format
   - Supported file formats: PDF, CSV
   - CSV structure: structure your data with columns such as (but not limited to) Ticket#, Issue Description, Issue Summary, Resolution Provided, Case Status, Contact User
   - PDF content: technical documentation, troubleshooting guides, policy documents
   - Upload documents via the form trigger to automatically embed them in the vector database

**Customization Options**

AI agent behavior:
- Modify the system prompt in the AIHelpdesk Agent node
- Adjust the conversation memory window (default: 20 messages)
- Change the AI model (GPT-4o, GPT-3.5-turbo, etc.)
Reaction mappings:
- Customize the emoji-to-action mappings in the Reaction Handler code (see the sketch at the end of this entry)
- Add new reaction types for department-specific workflows
- Configure escalation rules and priority levels

JIRA integration:
- Customize ticket templates and fields
- Add auto-assignment rules based on issue type
- Configure SLA and priority mappings

Vector search:
- Adjust similarity thresholds for knowledge retrieval
- Modify search result limits and relevance scoring
- Add metadata filtering for departmental knowledge bases

**Advanced Features**
- Thread-aware conversation memory
- Automatic bot loop prevention
- Context-preserving ticket creation
- Multi-modal file processing (PDF + CSV)
- Scalable MCP architecture for tool integration

**Use Cases**
- **Level 1 IT support**: automate common troubleshooting workflows
- **Employee onboarding**: answer policy and procedure questions
- **Internal help desk**: route and track internal service requests
- **Knowledge management**: make organizational knowledge searchable and actionable

**Template includes**
- Complete Slack integration with thread support
- RAG pipeline for document processing
- Vector similarity search implementation
- JIRA ticket lifecycle management
- Emoji reaction-based user interactions
- Comprehensive error handling and validation
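As referenced in the customization options, the emoji-to-action mapping inside the Reaction Handler Code node might look roughly like this. The emoji names and action labels are assumptions; extend the map for department-specific workflows:

```javascript
// Sketch of an emoji-to-action mapping for the Reaction Handler Code node.
// Emoji names and action labels are illustrative assumptions.
const ACTION_MAP = {
  ticket: 'create_jira_ticket',      // :ticket: -> open a JIRA issue
  rotating_light: 'escalate',        // :rotating_light: -> escalate to L2
  white_check_mark: 'resolve',       // :white_check_mark: -> mark resolved
};

const event = $input.first().json.event; // Slack reaction_added payload
const action = ACTION_MAP[event.reaction] || null;

return [{
  json: {
    action,
    channel: event.item.channel,
    messageTs: event.item.ts, // thread to read context from before acting
    user: event.user,
  },
}];
```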