by siyad
This n8n workflow automates monitoring of inventory levels for Shopify products, ensuring timely updates and efficient stock management. It alerts users when inventory is low or out of stock, integrating with Shopify's webhook system and delivering notifications through Discord (interchangeable with any messaging platform) with product images and details.

Workflow Overview

- **Webhook Node (Shopify Listener)** - Listens for Shopify's inventory level webhook and triggers the workflow whenever inventory levels change. The webhook is configured in Shopify settings, where the n8n URL is specified to receive inventory level updates.
- **Function Node (Inventory Check)** - Processes the data received from the Shopify webhook. It extracts the available quantity and the inventory item ID, and determines whether the inventory is low (fewer than 4 items) or out of stock.
- **Condition Nodes (Inventory Level Check)** - Two condition nodes follow the function node: one checks whether the inventory is low (low_inventory equals true), the other whether it is out of stock (out_of_stock equals true).
- **GraphQL Node (Product Details Retrieval)** - Connected to the condition nodes, this node fetches detailed product information via Shopify's GraphQL API: the product variant, title, current inventory quantity, and the first product image.
- **HTTP Node (Discord Notification)** - The final node sends a Discord notification: an embed with the product title, a warning message ("This product is running out of stock!"), the remaining inventory quantity, product variant details, and the product image, ensuring relevant stakeholders are immediately informed about critical inventory levels.
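The Function Node's check can be sketched as below. This is a minimal sketch, not the template's actual code: the field names (`available`, `inventory_item_id`) assume the shape of Shopify's `inventory_levels/update` webhook payload, and the low-stock threshold of 4 comes from the description above.

```javascript
// Sketch of the "Inventory Check" Function node.
// ASSUMPTION: payload fields follow Shopify's inventory_levels/update
// webhook (available, inventory_item_id); verify against your store.
function checkInventory(body) {
  const available = body.available;
  return {
    inventory_item_id: body.inventory_item_id,
    available: available,
    out_of_stock: available <= 0,                 // nothing left to sell
    low_inventory: available > 0 && available < 4, // fewer than 4 items
  };
}
```

The downstream condition nodes then simply branch on the `low_inventory` and `out_of_stock` booleans this step emits.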
by Daniel Shashko
How it Works

This workflow accepts meeting transcripts via webhook (Zoom, Google Meet, Teams, Otter.ai, or manual notes) and immediately processes them through an intelligent pipeline that eliminates post-meeting admin work. The system parses multiple input formats (JSON, form data, transcription outputs), extracting meeting metadata including title, date, attendees, transcript content, duration, and recording URLs.

OpenAI analyzes the transcript to extract eight critical dimensions: executive summary, key decisions with ownership, action items with assigned owners and due dates, discussion topics, open questions, next steps, risks/blockers, and follow-up meeting requirements - all returned as structured JSON. The intelligence engine enriches each action item with unique IDs, priority scores (weighing urgency, owner assignment, and due date), status initialization, and meeting context links, then calculates a completeness score (0-100) that penalizes missing owners and undefined deadlines.

Multi-channel distribution ensures visibility: Slack receives formatted summaries with emoji categorization for decisions (✅), action items (🎯) with priority badges and owner assignments, and completeness scores (📊). Notion gets dual-database updates - meeting notes with formatted decisions, plus individual task cards in your action item database with full filtering and kanban capabilities. Task owners receive personalized HTML emails with priority color-coding and meeting context, while Google Calendar creates due-date reminders as calendar events. Every meeting logs to Google Sheets for analytics tracking: attendee count, duration, action items created, priority distribution, decision count, completeness score, and follow-up indicators.

The workflow returns a JSON response confirming successful processing with the meeting ID, action item count, and executive summary. The entire pipeline executes in 8-12 seconds from submission to full distribution.

Who is this for?
- Product and engineering teams drowning in scattered action items across tools
- Remote-first companies where verbal commitments vanish after calls
- Executive teams needing auditable decision records without dedicated note-takers
- Startups juggling 10+ meetings daily without time for manual follow-up
- Operations teams tracking cross-functional initiatives requiring accountability

Setup Steps

**Setup time:** 25-35 minutes
**Requirements:** OpenAI API key, Slack workspace, Notion account, Google Workspace (Calendar/Gmail/Sheets), optional transcription service

1. **Webhook Trigger** - Automatically generates a URL; configure it as a POST endpoint accepting JSON with title, date, attendees, transcript, duration, recording_url, organizer
2. **Transcription Integration** - Connect Otter.ai/Fireflies.ai/Zoom webhooks, or create a manual submission form
3. **OpenAI Analysis** - Add API credentials, configure GPT-4 or GPT-3.5-turbo, temperature 0.3, max tokens 1500
4. **Intelligence Synthesis** - JavaScript calculates priority scores (0-40 range) and completeness metrics (0-100); customize the thresholds
5. **Slack Integration** - Create an app with the chat:write scope, get the bot token, and replace the channel ID placeholder with your #meeting-summaries channel
6. **Notion Databases** - Create a "Meeting Notes" database (title, date, attendees, summary, action items, completeness, recording URL) and an "Action Items" database (title, assigned to, due date, priority, status, meeting relation); share both with the integration and add the token
7. **Email Notifications** - Configure Gmail OAuth2 or SMTP; customize the HTML template with company branding
8. **Calendar Reminders** - Enable the Calendar API; events are created on due dates at 9 AM (adjustable) with the task owner added as an attendee
9. **Analytics Tracking** - Create a Google Sheet with columns for Meeting_ID, Title, Date, Attendees, Duration, Action_Items, High_Priority, Decisions, Completeness, Unassigned_Tasks, Follow_Up_Needed
10. **Test** - POST a sample transcript; verify the Slack message, Notion entries, emails, calendar events, and Sheets logging

Customization Guidance

- **Meeting Types:** Daily standups (reduce tokens to 500, Slack-only), sprint planning (add Jira integration), client calls (add CRM logging), executive reviews (stricter completeness thresholds)
- **Priority Scoring:** Add an urgency multiplier for <48hr due dates, owner seniority weights, customer impact flags
- **AI Prompt:** Customize to emphasize deadlines, blockers, or technical decisions; add date parsing for phrases like "by end of week"
- **Notification Routing:** Critical priority (score >30) → Slack DM + email, High (20-30) → channel + email, Medium/Low → email only
- **Tool Integrations:** Add Jira/Linear for ticket creation, Asana/Monday for project management, Salesforce/HubSpot for CRM logging, GitHub for issue creation
- **Analytics:** Build dashboards for meeting effectiveness scores, action item velocity, recurring topic clustering, team productivity metrics
- **Cost Optimization:** ~1,200 tokens/meeting × $0.002/1K (GPT-3.5) = $0.0024/meeting; use the batch API for a 50% discount and cache common patterns

Once configured, this workflow becomes your team's institutional memory - capturing every commitment and decision while eliminating hours of weekly admin work, making accountability automatic and follow-through reliable.

Built by Daniel Shashko
Connect on LinkedIn
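The Intelligence Synthesis step can be sketched as below. Only the ranges are fixed by the description (priority 0-40 weighing urgency, owner assignment, and due date; completeness 0-100 penalizing missing owners and undefined deadlines); the individual weights here are assumptions to adjust to your own thresholds.

```javascript
// Sketch of the priority/completeness scoring (weights are ASSUMED,
// only the 0-40 and 0-100 ranges come from the workflow description).
function scoreActionItem(item) {
  let score = 0;
  if (item.urgency === "high") score += 20;       // assumed urgency weight
  else if (item.urgency === "medium") score += 10;
  if (item.owner) score += 10;                    // owner assigned
  if (item.due_date) score += 10;                 // deadline defined
  return Math.min(score, 40);                     // clamp to the 0-40 range
}

function completenessScore(items) {
  if (items.length === 0) return 100;
  const missingOwners = items.filter(i => !i.owner).length;
  const missingDates = items.filter(i => !i.due_date).length;
  // Assumed 50/50 split of the penalty between owners and deadlines.
  const penalty = 50 * (missingOwners / items.length)
                + 50 * (missingDates / items.length);
  return Math.round(100 - penalty);
}
```

With these weights, a high-urgency item with both an owner and a due date hits the maximum priority of 40, and a meeting where half the action items lack owners and deadlines scores 50/100 on completeness.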
by Jitesh Dugar
Automated Customer Statement Generator with Risk Analysis & Credit Monitoring

Transform account statement management from hours to minutes - automatically compile transaction histories, calculate aging analysis, monitor credit limits, assess payment risk, and deliver professional PDF statements while syncing with accounting systems and alerting your team about high-risk accounts.

What This Workflow Does

Revolutionizes customer account management with intelligent statement generation, credit monitoring, and risk assessment:

- **Webhook-Triggered Generation** - Automatically creates statements from accounting systems, CRM updates, or scheduled monthly triggers
- **Smart Data Validation** - Verifies transaction data, validates account information, and ensures statement accuracy before generation
- **Running Balance Calculation** - Automatically computes running balances through all transactions with opening and closing balance tracking
- **Comprehensive Aging Analysis** - Calculates outstanding balances by age buckets (Current, 31-60 days, 61-90 days, 90+ days)
- **Overdue Detection & Highlighting** - Automatically identifies overdue amounts with visual color-coded alerts on statements
- **Professional HTML Design** - Creates beautifully branded statements with modern layouts, aging breakdowns, and payment information
- **PDF Conversion** - Transforms HTML into print-ready, professional-quality PDF statements with preserved formatting
- **Automated Email Delivery** - Sends branded emails to customers with PDF attachments and account summary details
- **Google Drive Archival** - Automatically saves statements to organized folders with searchable filenames by account
- **Credit Limit Monitoring** - Tracks credit utilization, detects over-limit accounts, and generates alerts at 75%, 90%, and 100%+ thresholds
- **Risk Scoring System** - Calculates 0-100 risk scores based on payment behavior, aging, credit utilization, and overdue patterns
- **Payment Behavior Analysis** - Tracks days since last payment, average payment time, and payment reliability trends
- **Automated Recommendations** - Generates prioritized action items like "escalate to collections" or "suspend new credit"
- **Accounting System Integration** - Syncs statement delivery, balance updates, and risk assessments to QuickBooks, Xero, or FreshBooks
- **Conditional Team Notifications** - Different Slack alerts for overdue accounts (urgent) vs current accounts (standard), with risk metrics
- **Transaction History Table** - Detailed itemization of all charges, payments, and running balances throughout the statement period
- **Multiple Payment Options** - Includes bank details, online payment links, and account manager contact information

Key Features

- **Automatic Statement Numbering** - Generates unique sequential statement numbers in the format STMT-YYYYMM-AccountNumber for easy tracking and reference
- **Aging Bucket Analysis** - Breaks down outstanding balances into current (0-30 days), 31-60 days, 61-90 days, and 90+ days overdue categories
- **Credit Health Dashboard** - Visual indicators show credit utilization percentage, available credit, and over-limit warnings on the statement
- **Risk Assessment Engine** - Analyzes multiple factors, including overdue amounts, credit utilization, and payment frequency, to calculate a comprehensive risk score
- **Payment Behavior Tracking** - Monitors days since last payment and identifies patterns like "Excellent - Pays on Time" or "Poor - Chronic Late Payment"
- **Intelligent Recommendations** - Automatically generates prioritized action items based on account status, risk level, and payment history
- **Transaction Running Balance** - Shows the balance after each transaction so customers can verify accuracy and reconcile their records
- **Over-Limit Detection** - Immediate alerts when accounts exceed credit limits, with escalation recommendations to suspend new charges
- **Good Standing Indicators** - Visual green checkmarks and positive messaging for accounts with no overdue balances
- **Account Manager Details** - Includes a dedicated contact person for questions, disputes, and payment arrangements
- **Dispute Process Documentation** - Clear instructions on how customers can dispute transactions within the required timeframe
- **Multi-Currency Support** - Handles USD, EUR, GBP, INR with proper currency symbols and formatting throughout the statement
- **Accounting System Sync** - Logs statement delivery, balance updates, and risk assessments in QuickBooks, Xero, FreshBooks, or Wave
- **Conditional Workflow Routing** - Different automation paths for high-risk overdue accounts vs healthy current accounts
- **Activity Notes Generation** - Creates detailed CRM notes with account summary, recommendations, and delivery confirmation
- **Print-Optimized PDFs** - A4 format with proper margins and color preservation for professional printing and digital distribution

Perfect For

- **B2B Companies with Trade Credit** - Manufacturing, wholesale, and distribution businesses offering net-30 or net-60 payment terms
- **Professional Services Firms** - Consulting, legal, and accounting firms with monthly retainer clients and time-based billing
- **Subscription Services (B2B)** - SaaS platforms, software companies, and membership organizations with recurring monthly charges
- **Equipment Rental Companies** - Construction equipment, party rentals, and medical equipment with ongoing rental agreements
- **Import/Export Businesses** - International traders managing accounts receivable across multiple customers and currencies
- **Healthcare Billing Departments** - Medical practices, clinics, and hospitals tracking patient account balances and payment plans
- **Educational Institutions** - Private schools, universities, and training centers with tuition payment plans and installments
- **Telecommunications Providers** - Phone, internet, and cable companies sending monthly account statements to business customers
- **Utilities & Energy Companies** - Electric, gas, and water utilities managing commercial account statements and collections
- **Property Management Companies** - Real estate firms tracking tenant charges, rent payments, and maintenance fees
- **Credit Card Companies & Lenders** - Financial institutions providing detailed account activity and payment due notifications
- **Wholesale Suppliers** - Distributors supplying restaurants, retailers, and contractors on credit terms with monthly settlements
- **Commercial Insurance Agencies** - Agencies tracking premium payments, policy charges, and outstanding balances
- **Construction Contractors** - General contractors billing for progress payments, change orders, and retention releases

What You Will Need

Required Integrations

- **HTML to PDF API** - PDF conversion service (API key required); supports HTML/CSS to PDF API, PDFShift, or similar providers (approximately 1-5 cents per statement)
- **Gmail or SMTP** - Email delivery service for sending statements to customers (OAuth2 or SMTP credentials)
- **Google Drive** - Cloud storage for statement archival and compliance record-keeping (OAuth2 credentials required)

Optional Integrations

- **Slack Webhook** - Team notifications for overdue and high-risk accounts (free incoming webhook)
- **Accounting Software Integration** - QuickBooks, Xero, FreshBooks, or Zoho Books API for automatic statement logging and balance sync
- **CRM Integration** - HubSpot, Salesforce, or Pipedrive for customer activity tracking and collections workflow triggers
- **Payment Gateway** - Stripe, PayPal, or Square payment links for one-click online payment from statements
- **Collections Software** - Integrate with collections management platforms for automatic escalation of high-risk accounts
- **SMS Notifications** - Twilio integration for payment due reminders and overdue alerts via text message

Quick Start

1. **Import Template** - Copy the JSON workflow and import it into your n8n instance
2. **Configure PDF Service** - Add HTML to PDF API credentials in the "HTML to PDF" node
3. **Set up Gmail** - Connect Gmail OAuth2 credentials in the "Send Email to Customer" node and update the sender email
4. **Connect Google Drive** - Add Google Drive OAuth2 credentials and set the folder ID for statement archival
5. **Customize Company Info** - Edit the "Enrich with Company Data" node to add your company name, address, contact details, and bank information
6. **Configure Credit Limits** - Set default credit limits and payment terms for your customer base
7. **Adjust Risk Thresholds** - Modify the risk scoring logic in the "Credit Limit & Risk Analysis" node based on your policies
8. **Update Email Template** - Customize the email message in the Gmail node with your branding and messaging
9. **Configure Slack** - Add Slack webhook URLs in both notification nodes (overdue and current accounts)
10. **Connect Accounting System** - Replace the code in the "Update Accounting System" node with an actual API call to QuickBooks/Xero/FreshBooks
11. **Test Workflow** - Submit sample transaction data via webhook to verify PDF generation, email delivery, and notifications
12. **Schedule Monthly Run** - Set up a scheduled trigger for automatic end-of-month statement generation for all customers

Customization Options

- **Custom Aging Buckets** - Modify aging periods to match your business (e.g., 0-15, 16-30, 31-45, 46-60, 60+ days)
- **Industry-Specific Templates** - Create different statement designs for different customer segments or business units
- **Multi-Language Support** - Translate statement templates for international customers (Spanish, French, German, Mandarin)
- **Dynamic Credit Terms** - Configure different payment terms by customer type (VIP net-45, standard net-30, new customers due on receipt)
- **Late Fee Calculation** - Add automatic late fee calculation and inclusion for overdue balances
- **Payment Plan Tracking** - Track installment payment plans with remaining balance and next payment due
- **Interest Charges** - Calculate and add interest charges on overdue balances based on configurable rates
- **Partial Payment Allocation** - Show how partial payments were applied across multiple invoices
- **Customer Portal Integration** - Generate secure links for customers to view statements and make payments online
- **Batch Processing** - Process statements for hundreds of customers simultaneously with bulk email delivery
- **White-Label Branding** - Create different branded templates for multiple companies or subsidiaries
- **Custom Risk Models** - Adjust risk scoring weights based on your industry and historical payment patterns
- **Collections Workflow Integration** - Automatically create tasks in collections software for high-risk accounts
- **Early Payment Incentives** - Highlight early payment discounts or prompt payment benefits on statements
- **Dispute Management** - Track disputed transactions and adjust balances accordingly with an audit trail

Expected Results

- **90% time savings** - Reduce statement creation from 2-3 hours to 5 minutes per customer
- **100% accuracy** - Eliminate calculation errors and missing transactions through automated processing
- **50% faster payment collection** - Professional statements with clear aging drive faster customer payments
- **Zero filing time** - Automatic Google Drive organization with searchable filenames by account
- **30% reduction in overdue accounts** - Proactive credit monitoring and risk alerts prevent bad debt
- **Real-time risk visibility** - Instant identification of high-risk accounts before they become uncollectible
- **Automated compliance** - Complete audit trail with timestamped statement delivery and accounting sync
- **Better customer communication** - Professional statements improve customer satisfaction and reduce disputes
- **Reduced bad debt write-offs** - Early warning system catches payment issues before they escalate
- **Improved cash flow** - Faster statement delivery and payment reminders accelerate cash collection

Pro Tips

- **Schedule Monthly Batch Generation** - Run the workflow automatically on the last day of the month to generate statements for all customers simultaneously
- **Customize Aging Thresholds** - Adjust credit alert levels (75%, 90%, 100%) based on your risk tolerance and industry norms
- **Segment Customer Communications** - Use different email templates for VIP customers vs standard customers vs delinquent accounts
- **Track Payment Patterns** - Monitor days-to-pay metrics by customer to identify chronic late payers proactively
- **Integrate with Collections** - Connect the workflow to collections software to automatically escalate 90+ day accounts
- **Include Payment Portal Links** - Add unique payment links to each statement for one-click online payment
- **Automate Follow-Up Reminders** - Build a workflow extension to send payment reminders 7 days before the due date
- **Create Executive Dashboards** - Export risk scores and aging data to business intelligence tools for trend analysis
- **Document Dispute Resolutions** - Log all disputed transactions in the accounting system with resolution notes
- **Test with Sample Data First** - Validate aging calculations with known test data before processing real customer accounts
- **Archive Statements for Compliance** - Maintain a 7-year archive in Google Drive organized by year and customer
- **Monitor Credit Utilization Trends** - Track credit utilization changes month-over-month to predict cash flow needs
- **Benchmark Against Industry** - Compare your DSO and bad debt ratios to industry averages to identify improvement areas
- **Personalize Account Manager Info** - Assign dedicated contacts to customers and include their direct phone and email
- **Use Descriptive Transaction Details** - Ensure transaction descriptions clearly explain charges to reduce disputes

Business Impact Metrics

Track these key metrics to measure workflow success:

- **Statement Generation Time** - Measure average minutes from trigger to delivered statement (target: under 5 minutes)
- **Statement Volume Capacity** - Count monthly statements generated through automation (expect a 10-20x increase in capacity)
- **Aging Calculation Accuracy** - Track statements with aging errors (target: 0% error rate)
- **Days Sales Outstanding (DSO)** - Monitor average days to collect payment (expect a 15-30% reduction)
- **Bad Debt Write-Offs** - Track uncollectible accounts as a percentage of revenue (expect a 30-50% reduction)
- **Collection Rate** - Monitor the percentage of invoices collected within terms (expect a 10-20% improvement)
- **Customer Disputes** - Count statement disputes and billing inquiries (expect a 50-70% reduction)
- **Over-Limit Accounts** - Track the number of accounts exceeding credit limits (early detection prevents losses)
- **High-Risk Account Identification** - Measure days between risk detection and collection action (target: within 48 hours)
- **Cash Flow Improvement** - Calculate working capital improvement from faster collections (typical: 20-35% improvement)

Template Compatibility

- Compatible with n8n version 1.0 and above
- Works with n8n Cloud and Self-Hosted instances
- Requires an HTML to PDF API service subscription (1-5 cents per statement)
- No coding required for basic setup
- Fully customizable for industry-specific requirements
- Integrates with major accounting platforms via API
- Multi-currency and multi-language ready
- Supports batch processing for large customer bases
- Compliant with financial record-keeping regulations

Ready to transform your accounts receivable management? Import this template and start generating professional statements with credit monitoring, risk assessment, and automated collections alerts - improving your cash flow, reducing bad debt, and freeing your accounting team to focus on strategic financial management!
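The aging-bucket analysis and statement numbering described above can be sketched as follows. The bucket boundaries (Current/0-30, 31-60, 61-90, 90+) and the STMT-YYYYMM-AccountNumber format come from the template description; the invoice shape (`{ due_date, balance }`) is an assumption for illustration.

```javascript
// Sketch of the aging analysis: sum open balances into age buckets
// measured in days past the due date as of a reference date.
function agingAnalysis(openInvoices, asOf) {
  const buckets = { current: 0, days31to60: 0, days61to90: 0, days90plus: 0 };
  for (const inv of openInvoices) {
    const ageDays = Math.floor((asOf - new Date(inv.due_date)) / 86400000);
    if (ageDays <= 30) buckets.current += inv.balance;         // 0-30 days (or not yet due)
    else if (ageDays <= 60) buckets.days31to60 += inv.balance;
    else if (ageDays <= 90) buckets.days61to90 += inv.balance;
    else buckets.days90plus += inv.balance;
  }
  return buckets;
}

// Statement number in the STMT-YYYYMM-AccountNumber format from the description.
function statementNumber(accountNumber, date) {
  const ym = date.getFullYear() * 100 + (date.getMonth() + 1); // YYYYMM
  return `STMT-${ym}-${accountNumber}`;
}
```

If you adopt the custom aging buckets mentioned under Customization Options (e.g., 0-15, 16-30, 31-45, 46-60, 60+ days), only the boundary constants in the `if` chain change.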
by vinci-king-01
Competitor Price Monitoring Dashboard with AI and Real-time Alerts

🎯 Target Audience

- E-commerce managers and pricing analysts
- Retail business owners monitoring competitor pricing
- Marketing teams tracking market positioning
- Product managers analyzing the competitive landscape
- Data analysts conducting pricing intelligence
- Business strategists making pricing decisions

🚀 Problem Statement

Manual competitor price monitoring is inefficient and often leads to missed opportunities or delayed responses to market changes. This template solves the challenge of automatically tracking competitor prices, detecting significant changes, and providing actionable insights for strategic pricing decisions.

🔧 How it Works

This workflow automatically monitors competitor product prices using AI-powered web scraping, analyzes price trends, and sends real-time alerts when significant changes are detected.

Key Components

- **Scheduled Trigger** - Runs the workflow at specified intervals to maintain up-to-date price data
- **AI-Powered Scraping** - Uses ScrapeGraphAI to intelligently extract pricing information from competitor websites
- **Price Analysis Engine** - Processes historical data to detect trends and anomalies
- **Alert System** - Sends notifications via Slack and email when price changes exceed thresholds
- **Dashboard Integration** - Stores all data in Google Sheets for comprehensive analysis and reporting

📊 Google Sheets Column Specifications

The template creates the following columns in your Google Sheet:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| timestamp | DateTime | When the price was recorded | "2024-01-15T10:30:00Z" |
| competitor_name | String | Name of the competitor | "Amazon" |
| product_name | String | Product name and model | "iPhone 15 Pro 128GB" |
| current_price | Number | Current price in USD | 999.00 |
| previous_price | Number | Previous recorded price | 1099.00 |
| price_change | Number | Absolute price difference | -100.00 |
| price_change_percent | Number | Percentage change | -9.09 |
| product_url | URL | Direct link to product page | "https://amazon.com/iphone15" |
| alert_triggered | Boolean | Whether an alert was sent | true |
| trend_direction | String | Price trend analysis | "Decreasing" |

🛠️ Setup Instructions

Estimated setup time: 15-20 minutes

Prerequisites

- n8n instance with community nodes enabled
- ScrapeGraphAI API account and credentials
- Google Sheets account with API access
- Slack workspace for notifications (optional)
- Email service for alerts (optional)

Step-by-Step Configuration

1. **Install Community Nodes** - Install the required community nodes: `npm install n8n-nodes-scrapegraphai` and `npm install n8n-nodes-slack`
2. **Configure ScrapeGraphAI Credentials** - Navigate to Credentials in your n8n instance, add new ScrapeGraphAI API credentials, enter your API key from the ScrapeGraphAI dashboard, and test the connection to ensure it is working
3. **Set up Google Sheets Connection** - Add Google Sheets OAuth2 credentials, grant the necessary permissions for spreadsheet access, create a new spreadsheet for price monitoring data, and configure the sheet name (default: "Price Monitoring")
4. **Configure Competitor URLs** - Update the websiteUrl parameters in the ScrapeGraphAI nodes, add URLs for each competitor you want to monitor, customize the user prompt to extract specific pricing data, and set appropriate price thresholds for alerts
5. **Set up Notification Channels** - Configure Slack webhook or API credentials, set up email service credentials (SendGrid, SMTP, etc.), define alert thresholds and notification preferences, and test notification delivery
6. **Configure Schedule Trigger** - Set the monitoring frequency (hourly, daily, etc.), choose appropriate time zones for your business hours, and consider competitor website rate limits
7. **Test and Validate** - Run the workflow manually to verify all connections, check Google Sheets for proper data formatting, and test alert notifications with sample data

🔄 Workflow Customization Options

Modify Monitoring Targets
- Add or remove competitor websites
- Change product categories or specific products
- Adjust monitoring frequency based on market volatility

Extend Price Analysis
- Add more sophisticated trend analysis algorithms
- Implement price prediction models
- Include competitor inventory and availability tracking

Customize Alert System
- Set different thresholds for different product categories
- Create tiered alert systems (info, warning, critical)
- Add SMS notifications for urgent price changes

Output Customization
- Add data visualization and reporting features
- Implement price history charts and graphs
- Create executive dashboards with key metrics

📈 Use Cases

- **Dynamic Pricing** - Adjust your prices based on competitor movements
- **Market Intelligence** - Understand competitor pricing strategies
- **Promotion Planning** - Time your promotions based on competitor actions
- **Inventory Management** - Optimize stock levels based on market conditions
- **Customer Communication** - Proactively inform customers about price changes

🚨 Important Notes

- Respect competitor websites' terms of service and robots.txt
- Implement appropriate delays between requests to avoid rate limiting
- Regularly review and update your monitoring parameters
- Monitor API usage to manage costs effectively
- Keep your credentials secure and rotate them regularly
- Consider the legal implications of automated price monitoring

🔧 Troubleshooting

Common Issues:
- ScrapeGraphAI connection errors: verify the API key and account status
- Google Sheets permission errors: check the OAuth2 scope and permissions
- Price parsing errors: review the Code node's JavaScript logic
- Rate limiting: adjust the monitoring frequency and implement delays
- Alert delivery failures: check the notification service credentials

Support Resources:
- ScrapeGraphAI documentation and API reference
- n8n community forums for workflow assistance
- Google Sheets API documentation for advanced configurations
- Slack API documentation for notification setup
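The price-analysis step that feeds the Google Sheets columns above can be sketched as follows. This is a minimal sketch: the 5% alert threshold is an assumption (the template lets you set your own thresholds), and rounding behavior may differ from the actual Code node.

```javascript
// Sketch of the price-analysis step: derive price_change,
// price_change_percent, trend_direction, and alert_triggered
// from the current and previously recorded prices.
function analyzePrice(currentPrice, previousPrice, thresholdPercent = 5) {
  const change = +(currentPrice - previousPrice).toFixed(2);
  const changePercent = previousPrice === 0
    ? 0
    : +((change / previousPrice) * 100).toFixed(2);
  return {
    price_change: change,
    price_change_percent: changePercent,
    trend_direction: change < 0 ? "Decreasing" : change > 0 ? "Increasing" : "Stable",
    alert_triggered: Math.abs(changePercent) >= thresholdPercent, // exceeds alert threshold
  };
}
```

Each result row, merged with `timestamp`, `competitor_name`, `product_name`, and `product_url`, maps directly onto the sheet columns in the table above.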
by Vigh Sandor
Workflow Overview

This advanced n8n workflow provides intelligent email automation with AI-generated responses. It combines four core functions:

- Monitors incoming emails via IMAP (e.g., SOGo)
- Sends instant Telegram notifications for all new emails
- Uses AI (Ollama LLM) to generate contextual, personalized auto-replies
- Sends confirmation notifications when auto-replies are sent

Unlike traditional auto-responders, this workflow analyzes email content and creates a unique, relevant response for each message.

Setup Instructions

Prerequisites

Before setting up this workflow, ensure you have:

- An n8n instance (self-hosted or cloud) with AI/LangChain nodes enabled
- IMAP email account credentials (e.g., SOGo, Gmail, Outlook)
- SMTP server access for sending emails
- Telegram Bot API credentials
- A Telegram Chat ID where notifications will be sent
- Ollama installed locally or accessible via network (for the AI model)
- The llama3.1 model downloaded in Ollama

Step 1: Install and Configure Ollama

Local Installation

1. Install Ollama on your system: visit https://ollama.ai, download the installer for your OS, and follow the installation instructions for your platform
2. Download the llama3.1 model: `ollama pull llama3.1`
3. Verify the model is available: `ollama list`
4. Start the Ollama service (if not already running): `ollama serve`
5. Test the model: `ollama run llama3.1 "Hello, world!"`
Remote Ollama Instance

If using a remote Ollama server:

- Note the server URL (e.g., http://192.168.1.100:11434)
- Ensure network connectivity between n8n and the Ollama server
- Verify the firewall allows connections on port 11434

Step 2: Configure IMAP Credentials

1. Navigate to the n8n Credentials section
2. Create a new IMAP credential with the following information:
   - Host: your IMAP server address
   - Port: usually 993 for SSL/TLS
   - Username: your email address
   - Password: your email password or an app-specific password
   - Enable SSL/TLS: yes (recommended)
   - Security: use STARTTLS or SSL/TLS

Step 3: Configure SMTP Credentials

1. Create a new SMTP credential in n8n
2. Enter the following details:
   - Host: your SMTP server address (e.g., a Postfix server)
   - Port: usually 587 (STARTTLS) or 465 (SSL)
   - Username: your email address
   - Password: your email password or an app-specific password
   - Secure connection: enable based on your server configuration
   - Allow unauthorized certificates: enable if using self-signed certificates

Step 4: Configure Telegram Bot

1. Create a Telegram bot via BotFather: open Telegram and search for @BotFather, send the /newbot command, follow the instructions to create your bot, and save the API token BotFather provides
2. Obtain your Chat ID:
   - Method 1: send a message to your bot, then visit https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
   - Method 2: use a Telegram Chat ID bot like @userinfobot
   - Method 3: for group chats, add the bot to the group and check the updates
   - Note: group chat IDs are negative numbers (e.g., -1234567890123)
3. Add a Telegram API credential in n8n: credential type Telegram API, with the access token from BotFather

Step 5: Configure Ollama API Credential

1. In the n8n Credentials section, create a new Ollama API credential
2. Configure it based on your setup: for local Ollama the base URL is usually http://localhost:11434; for remote Ollama enter the server URL (e.g., http://192.168.1.100:11434)
3. Test the connection to ensure n8n can reach Ollama

Step 6: Import and Configure Workflow

Import the workflow JSON into your n8n instance, then update the following nodes with your specific information:

Check Incoming Emails node
- Verify the IMAP credentials are connected
- Configure the polling interval (optional): by default it checks on the workflow trigger schedule, but it can be set to check every N minutes
- Set the mailbox folder if needed (default is INBOX)

Send Notification from Incoming Email node
- Update the chatId parameter with your Telegram Chat ID (replace -1234567890123 with your actual chat ID)
- Customize the notification message template if desired; the current format includes sender, subject, and date-time

Dedicate Filtering As No-Response node
- Review the spam filter conditions: it blocks emails from addresses containing "noreply" or "no-reply", and emails with "newsletter" in the subject line (case-insensitive)
- Add additional filtering rules as needed: block specific domains, filter by keywords, or whitelist/blacklist specific senders

Ollama Model node
- Verify the Ollama API credential is connected
- Confirm the model name: llama3.1:bf230501 (or adjust to your installed version)
- The context window is set to 4096 tokens (sufficient for most emails) and can be adjusted based on your needs and hardware capabilities

Basic LLM Chain node
- Review the AI prompt engineering (pre-configured but customizable)
- The current prompt instructs the AI to read the email content, identify the main topic in 2-4 words, generate a professional acknowledgment response, and keep responses consistent and concise
- Modify the prompt if you want different response styles

Send Auto-Response in SMTP node
- Verify the SMTP credentials are connected
- Check that fromEmail uses the correct email address: it is currently set to {{ $('Check Incoming Emails - IMAP (example: SOGo)').item.json.to }}, which automatically uses the recipient address (your mailbox)
- The subject automatically includes a "Re: " prefix with the original subject; the message text comes from the AI-generated content

Send Notification from Response node
- Update the chatId parameter (same as the first notification node)
- This sends confirmation that the auto-reply was sent, including the original email details and the AI-generated response text

Step 7: Test the Workflow

1. Perform an initial configuration test: test Ollama connectivity with `curl http://localhost:11434/api/tags`, verify all credentials are properly configured, and check that n8n has access to the required network endpoints
2. Execute a test run: click the "Execute Workflow" button in n8n, send a test email to your monitored inbox, and use a clear subject and body for a better AI response
3. Verify workflow execution: the first Telegram notification arrives (incoming email alert), the AI processes the email content, the auto-reply is sent to the original sender, the second Telegram notification arrives (confirmation with the AI response), and the n8n execution log shows no errors
4. Verify email delivery: check that the auto-reply arrived in the sender's inbox, verify it is not marked as spam, and review the AI-generated content for appropriateness

Step 8: Fine-Tune AI Responses

1. Send various types of test emails: different topics (inquiry, complaint, information request), various lengths (short, medium, long), and different languages if applicable
2. Review the AI-generated responses: check whether topic identification is accurate, verify response appropriateness, and ensure the tone is professional
3. Adjust the prompt if needed: modify the topic word count (currently 2-4 words), change the response template, add language-specific instructions, or include custom sign-offs or branding

Step 9: Activate the Workflow

Once testing is successful and the AI responses are satisfactory:

1. Toggle the workflow to the "Active" state; it will now run automatically on the configured schedule
2. Monitor the initial production runs: review the first few auto-replies carefully, check the Telegram notifications for any issues, and verify SMTP delivery rates
3. Set up monitoring: enable n8n workflow error notifications, monitor Ollama resource usage, and check email server logs periodically

How to Use

Normal Operation

Once activated, the workflow operates fully automatically:

1. Email Monitoring: the workflow continuously checks your IMAP inbox for new messages based on the configured polling interval or trigger schedule.
2. Immediate Incoming Notification: when a new email arrives, you receive an instant Telegram notification containing the sender's email address, the subject line, the date and time received, and a note indicating it is from the IMAP mailbox.
3. Intelligent Filtering: the workflow evaluates each email against the spam filter criteria. Emails from "noreply" or "no-reply" addresses are filtered out, as are emails with "newsletter" in the subject line; filtered emails still produce a notification but no auto-reply, while legitimate emails proceed to AI response generation.
4. AI Response Generation: for emails that pass the filter, the AI reads the full email content, analyzes the main topic or purpose, and generates a personalized acknowledgment: a professional response that thanks the sender, references the specific topic, promises a personal follow-up, and maintains a professional tone.
5. Automatic Reply Delivery: the AI-generated response is sent via SMTP to the original sender with the subject line "Re: [Original Subject]", your monitored mailbox as the from address, and the AI-generated contextual message as the body.
6. Response Confirmation: after the auto-reply is sent, you receive a second Telegram notification showing the original email details (sender, subject, date), the complete AI-generated response text, and confirmation of successful delivery.

Understanding AI Response Generation

The AI analyzes emails intelligently:

Example 1: Business Inquiry
Incoming Email: "I'm interested in your consulting services for our Q4 project..."
AI Topic Identification: "consulting services"
Generated Response: "Dear Correspondent! Thank you for your message regarding consulting services. I will respond with a personal message as soon as possible. Have a nice day!"

Example 2: Technical Support
Incoming Email: "We're experiencing issues with the API integration..."
AI Topic Identification: "API integration issues"
Generated Response: "Dear Correspondent!
Thank you for your message regarding API integration issues. I will respond with a personal message as soon as possible. Have a nice day!" Example 3: General Question Incoming Email: "Could you provide more information about pricing?" AI Topic Identification: "pricing information" Generated Response: "Dear Correspondent! Thank you for your message regarding pricing information. I will respond with a personal message as soon as possible. Have a nice day!" Customizing Filter Rules To modify which emails receive AI-generated auto-replies: Open the "Dedicate Filtering As No-Response" node Modify existing conditions or add new ones: Block specific domains: {{ $json.from.value[0].address }} Operation: does not contain Value: @spam-domain.com Whitelist VIP senders (only respond to specific people): {{ $json.from.value[0].address }} Operation: contains Value: @important-client.com Filter by subject keywords: {{ $json.subject.toLowerCase() }} Operation: does not contain Value: unsubscribe Combine multiple conditions: Use AND logic (all must be true) for stricter filtering Use OR logic (any can be true) for more permissive filtering Customizing AI Prompt To change how the AI generates responses: Open the "Basic LLM Chain" node Modify the prompt text in the "text" parameter Current structure: Context setting (read email, identify topic) Output format specification Rules for AI behavior Example modifications: Add company branding: Return only this response, filling in the [TOPIC]: Dear Correspondent! Thank you for reaching out to [Your Company Name] regarding [TOPIC]. I will respond with a personal message as soon as possible. Best regards, [Your Name] [Your Company Name] Make it more casual: Return only this response, filling in the [TOPIC]: Hi there! Thanks for your email about [TOPIC]. I'll get back to you personally soon. Cheers! Add urgency classification: Read the email and classify urgency (Low/Medium/High). Identify the main topic. Return: Dear Correspondent! 
Thank you for your message regarding [TOPIC]. Priority: [URGENCY] I will respond with a personal message as soon as possible. Customizing Telegram Notifications Incoming Email Notification: Open "Send Notification from Incoming Email" node Modify the "text" parameter Available variables: {{ $json.from }} - Full sender info {{ $json.from.value[0].address }} - Sender email only {{ $json.from.value[0].name }} - Sender name (if available) {{ $json.subject }} - Email subject {{ $json.date }} - Date received {{ $json.textPlain }} - Email body (use cautiously for privacy) {{ $json.to }} - Recipient address Response Confirmation Notification: Open "Send Notification from Response" node Modify to include additional information Reference AI response: {{ $('Basic LLM Chain').item.json.text }} Monitoring and Maintenance Daily Monitoring Check Telegram Notifications**: Review incoming email alerts and response confirmations Verify AI Quality**: Spot-check AI-generated responses for appropriateness Email Delivery**: Confirm auto-replies are being delivered (not caught in spam) Weekly Maintenance Review Execution Logs**: Check n8n execution history for errors or warnings Ollama Performance**: Monitor resource usage (CPU, RAM, disk space) Filter Effectiveness**: Assess if spam filters are working correctly Response Quality**: Review multiple AI responses for consistency Monthly Maintenance Update Ollama Model**: Check for new llama3.1 versions or alternative models Prompt Optimization**: Refine AI prompt based on response quality observations Credential Rotation**: Update passwords and API tokens for security Backup Configuration**: Export workflow and credentials (securely) Advanced Usage Multi-Language Support If you receive emails in multiple languages: Modify the AI prompt to detect language: Detect the email language. Generate response in the SAME language as the email. 
If English: [English template] If Hungarian: [Hungarian template] If German: [German template] Or use language-specific conditions in the filtering node Priority-Based Responses Generate different responses based on sender importance: Add an IF node after filtering to check sender domain Route VIP emails to a different LLM chain with priority messaging Standard emails use the normal AI chain Response Logging To maintain a record of all AI interactions: Add a database node (PostgreSQL, MySQL, etc.) after the auto-reply node Store: timestamp, sender, subject, AI response, delivery status Use for compliance, analytics, or training data A/B Testing AI Prompts Test different prompt variations: Create multiple LLM Chain nodes with different prompts Use a randomizer or round-robin approach Compare response quality and user feedback Optimize based on results Troubleshooting Notifications Not Received Problem: Telegram notifications not appearing Solutions: Verify Chat ID is correct (positive for personal chats, negative for groups) Check if bot has permissions to send messages Ensure bot wasn't blocked or removed from group Test Telegram API credential independently Review n8n execution logs for Telegram API errors AI Responses Not Generated Problem: Auto-replies sent but content is empty or error messages Solutions: Check Ollama service is running: ollama list Verify llama3.1 model is downloaded: ollama list Test Ollama directly: ollama run llama3.1 "Test message" Review Ollama API credential URL in n8n Check network connectivity between n8n and Ollama Increase context window if emails are very long Monitor Ollama logs for errors Poor Quality AI Responses Problem: AI generates irrelevant or inappropriate responses Solutions: Review and refine the prompt engineering Add more specific rules and constraints Provide examples in the prompt of good vs bad responses Adjust topic word count (increase from 2-4 to 3-6 words) Test with different Ollama models (e.g., llama3.1:70b for 
better quality) Ensure email content is being passed correctly to AI Auto-Replies Not Sent Problem: Workflow executes but emails not delivered Solutions: Verify SMTP credentials and server connectivity Check fromEmail address is correct Review SMTP server logs for errors Test SMTP sending independently Ensure "Allow unauthorized certificates" is enabled if needed Check if emails are being filtered by spam filters Verify SPF/DKIM records for your domain High Resource Usage Problem: Ollama consuming excessive CPU/RAM Solutions: Reduce context window size (from 4096 to 2048) Use a smaller model variant (llama3.1:8b instead of default) Limit concurrent workflow executions in n8n Add delay/throttling between email processing Consider using a remote Ollama instance with better hardware Monitor email volume and processing time IMAP Connection Failures Problem: Workflow can't connect to email server Solutions: Verify IMAP credentials are correct Check if IMAP is enabled on email account Ensure SSL/TLS settings match server requirements For Gmail: enable "Less secure app access" or use App Passwords Check firewall allows outbound connections on IMAP port (993) Test IMAP connection using email client (Thunderbird, Outlook) Workflow Not Triggering Problem: Workflow doesn't execute automatically Solutions: Verify workflow is in "Active" state Check trigger node configuration and schedule Review n8n system logs for scheduler issues Ensure n8n instance has sufficient resources Test manual execution to isolate trigger issues Check if n8n workflow execution queue is backed up Workflow Architecture Node Descriptions Check Incoming Emails - IMAP: Polls email server at regular intervals to retrieve new messages from the configured mailbox. Send Notification from Incoming Email: Immediately sends formatted notification to Telegram for every new email detected, regardless of spam status. 
Dedicate Filtering As No-Response: Evaluates emails against spam filter criteria to determine if AI processing should occur. No Operation: Placeholder node for filtered emails that should not receive an auto-reply (spam, newsletters, automated messages). Ollama Model: Provides the AI language model (llama3.1) used for natural language processing and response generation. Basic LLM Chain: Executes the AI prompt against the email content to generate contextual auto-reply text. Send Auto-Response in SMTP: Sends the AI-generated acknowledgment email back to the original sender via SMTP server. Send Notification from Response: Sends confirmation to Telegram showing the auto-reply was successfully sent, including the AI-generated content. AI Processing Pipeline Email Content Extraction: Email body text is extracted from IMAP data Context Loading: Email content is passed to LLM with prompt instructions Topic Analysis: AI identifies main subject or purpose in 2-4 words Template Population: AI fills response template with identified topic Output Formatting: Response is formatted and cleaned for email delivery Quality Assurance: n8n validates response before sending
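The filtering rules described above can be sketched as a simple predicate, roughly what the "Dedicate Filtering As No-Response" node evaluates. The field paths follow n8n's IMAP output (`from.value[0].address`, `subject`); the blocklist terms mirror the defaults described in this template:

```javascript
// Sketch of the no-response filter: returns true when an email
// should be skipped (no AI auto-reply generated).
function shouldSkipAutoReply(email) {
  const sender = (email.from?.value?.[0]?.address || "").toLowerCase();
  const subject = (email.subject || "").toLowerCase();

  // Block automated senders such as noreply@ / no-reply@ addresses
  if (sender.includes("noreply") || sender.includes("no-reply")) return true;

  // Block newsletters (case-insensitive subject match)
  if (subject.includes("newsletter")) return true;

  return false; // legitimate email → proceed to AI response generation
}

// Example usage
shouldSkipAutoReply({
  from: { value: [{ address: "no-reply@shop.example" }] },
  subject: "Your receipt",
}); // → true
```

Extending the filter (domain blocklists, VIP whitelists) amounts to adding more conditions to this predicate, as the customization section above describes.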
by vinci-king-01
Customer Support Analysis Dashboard with AI and Automated Insights 🎯 Target Audience Customer support managers and team leads Customer success teams monitoring satisfaction Product managers analyzing user feedback Business analysts measuring support metrics Operations managers optimizing support processes Quality assurance teams monitoring support quality Customer experience (CX) professionals 🚀 Problem Statement Manual analysis of customer support tickets and feedback is time-consuming and often misses critical patterns or emerging issues. This template solves the challenge of automatically collecting, analyzing, and visualizing customer support data to identify trends, improve response times, and enhance overall customer satisfaction. 🔧 How it Works This workflow automatically monitors customer support channels using AI-powered analysis, processes tickets and feedback, and provides actionable insights for improving customer support operations. Key Components Scheduled Trigger - Runs the workflow at specified intervals to maintain real-time monitoring AI-Powered Ticket Analysis - Uses advanced NLP to categorize, prioritize, and analyze support tickets Multi-Channel Integration - Monitors email, chat, help desk systems, and social media Automated Insights - Generates reports on trends, response times, and satisfaction scores Dashboard Integration - Stores all data in Google Sheets for comprehensive analysis and reporting 📊 Google Sheets Column Specifications The template creates the following columns in your Google Sheets:

| Column | Data Type | Description | Example |
|--------|-----------|-------------|---------|
| timestamp | DateTime | When the ticket was processed | "2024-01-15T10:30:00Z" |
| ticket_id | String | Unique ticket identifier | "SUP-2024-001234" |
| customer_email | String | Customer contact information | "john@example.com" |
| subject | String | Ticket subject line | "Login issues with new app" |
| description | String | Full ticket description | "I can't log into the mobile app..." |
| category | String | AI-categorized ticket type | "Technical Issue" |
| priority | String | Calculated priority level | "High" |
| sentiment_score | Number | Customer sentiment (-1 to 1) | -0.3 |
| urgency_indicator | String | Urgency classification | "Immediate" |
| response_time | Number | Time to first response (hours) | 2.5 |
| resolution_time | Number | Time to resolution (hours) | 8.0 |
| satisfaction_score | Number | Customer satisfaction rating | 4.2 |
| agent_assigned | String | Support agent name | "Sarah Johnson" |
| status | String | Current ticket status | "Resolved" |

🛠️ Setup Instructions Estimated setup time: 20-25 minutes Prerequisites n8n instance with community nodes enabled ScrapeGraphAI API account and credentials Google Sheets account with API access Help desk system API access (Zendesk, Freshdesk, etc.) Email service integration (optional) Step-by-Step Configuration 1. Install Community Nodes Install required community nodes npm install n8n-nodes-scrapegraphai npm install n8n-nodes-slack 2. Configure ScrapeGraphAI Credentials Navigate to Credentials in your n8n instance Add new ScrapeGraphAI API credentials Enter your API key from ScrapeGraphAI dashboard Test the connection to ensure it's working 3. Set up Google Sheets Connection Add Google Sheets OAuth2 credentials Grant necessary permissions for spreadsheet access Create a new spreadsheet for customer support analysis Configure the sheet name (default: "Support Analysis") 4. Configure Support System Integration Update the websiteUrl parameters in ScrapeGraphAI nodes Add URLs for your help desk system or support portal Customize the user prompt to extract specific ticket data Set up categories and priority thresholds 5. Set up Notification Channels Configure Slack webhook or API credentials for alerts Set up email service credentials for critical issues Define alert thresholds for different priority levels Test notification delivery 6.
Configure Schedule Trigger Set analysis frequency (hourly, daily, etc.) Choose appropriate time zones for your business hours Consider support system rate limits 7. Test and Validate Run the workflow manually to verify all connections Check Google Sheets for proper data formatting Test ticket analysis with sample data 🔄 Workflow Customization Options Modify Analysis Targets Add or remove support channels (email, chat, social media) Change ticket categories and priority criteria Adjust analysis frequency based on ticket volume Extend Analysis Capabilities Add more sophisticated sentiment analysis Implement customer churn prediction models Include agent performance analytics Add automated response suggestions Customize Alert System Set different thresholds for different ticket types Create tiered alert systems (info, warning, critical) Add SLA breach notifications Include trend analysis alerts Output Customization Add data visualization and reporting features Implement support trend charts and graphs Create executive dashboards with key metrics Add customer satisfaction trend analysis 📈 Use Cases Support Ticket Management**: Automatically categorize and prioritize tickets Response Time Optimization**: Identify bottlenecks in support processes Customer Satisfaction Monitoring**: Track and improve satisfaction scores Agent Performance Analysis**: Monitor and improve agent productivity Product Issue Detection**: Identify recurring problems and feature requests SLA Compliance**: Ensure support teams meet service level agreements 🚨 Important Notes Respect support system API rate limits and terms of service Implement appropriate delays between requests to avoid rate limiting Regularly review and update your analysis parameters Monitor API usage to manage costs effectively Keep your credentials secure and rotate them regularly Consider data privacy and GDPR compliance for customer data 🔧 Troubleshooting Common Issues: ScrapeGraphAI connection errors: Verify API key and 
account status Google Sheets permission errors: Check OAuth2 scope and permissions Ticket parsing errors: Review the Code node's JavaScript logic Rate limiting: Adjust analysis frequency and implement delays Alert delivery failures: Check notification service credentials Support Resources: ScrapeGraphAI documentation and API reference n8n community forums for workflow assistance Google Sheets API documentation for advanced configurations Help desk system API documentation Customer support analytics best practices
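To make the priority and sentiment columns above concrete, here is an illustrative scoring function of the kind a Code node could use to derive the `priority` column from `sentiment_score` and keyword checks. The thresholds and keyword list are assumptions for illustration, not values taken from the template:

```javascript
// Illustrative priority scoring for a parsed ticket, combining the
// sentiment_score (-1 to 1) with simple keyword checks.
function classifyPriority(ticket) {
  const text = `${ticket.subject} ${ticket.description}`.toLowerCase();
  const urgentKeywords = ["outage", "down", "urgent", "can't log", "cannot log"];

  let score = 0;
  if (ticket.sentiment_score < -0.2) score += 1; // frustrated customer
  if (ticket.sentiment_score < -0.6) score += 1; // very negative
  if (urgentKeywords.some((k) => text.includes(k))) score += 2;

  if (score >= 3) return "Critical";
  if (score === 2) return "High";
  if (score === 1) return "Medium";
  return "Low";
}

classifyPriority({
  subject: "Login issues with new app",
  description: "I can't log into the mobile app...",
  sentiment_score: -0.3,
}); // → "Critical"
```

In practice the AI node can return the category directly; a deterministic fallback like this keeps prioritization working even when the model response is malformed.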
by WeblineIndia
IPA Size Tracker with Trend Alerts – Automated iOS Apps Size Monitoring This workflow runs on a daily schedule and monitors IPA file sizes from configured URLs. It stores historical size data in Google Sheets, compares current vs. previous builds and sends email alerts only when significant size changes occur (default: ±10%). A DRY_RUN toggle allows safe testing before real notifications go out. Who’s it for iOS developers tracking app binary size growth over time. DevOps teams monitoring build artifacts and deployment sizes. Product managers ensuring app size budgets remain acceptable. QA teams detecting unexpected size changes in release builds. Mobile app teams optimizing user experience by keeping apps lightweight. How it works Schedule Trigger (daily at 09:00 UTC) kicks off the workflow. Configuration: Define monitored apps with {name, version, build, ipa_url}. HTTP Request downloads the IPA file from its URL. Size Calculation: Compute file sizes in bytes, KB, MB and attach timestamp metadata. Google Sheets: Append size data to the IPA Size History sheet. Trend Analysis: Compare current vs. previous build sizes. Alert Logic: Evaluate thresholds (>10% increase or >10% decrease). Email Notification: Send formatted alerts with comparisons and trend indicators. Rate Limit: Space out notifications to avoid spamming recipients. How to set up 1. Spreadsheet Create a Google Sheet with a tab named IPA Size History containing: Date, Timestamp, App_Name, Version, Build_Number, Size_Bytes, Size_KB, Size_MB, IPA_URL 2. Credentials Google Sheets (OAuth)** → for reading/writing size history. Gmail** → for sending alert emails (use App Password if 2FA is enabled). 3. 
Open “Set: Configuration” node Define your workflow variables: APP_CONFIGS = array of monitored apps ({name, version, build, ipa_url}) SPREADSHEET_ID = Google Sheet ID SHEET_NAME = IPA Size History SMTP_FROM = sender email (e.g., devops@company.com) ALERT_RECIPIENTS = comma-separated emails SIZE_INCREASE_THRESHOLD = 0.10 (10%) SIZE_DECREASE_THRESHOLD = 0.10 (10%) LARGE_APP_WARNING = 300 (MB) SCHEDULE_TIME = 09:00 TIMEZONE = UTC DRY_RUN = false (set true to test without sending emails) 4. File Hosting Host IPA files on Google Drive, Dropbox or a web server. Ensure direct download URLs are used (not preview links). 5. Activate the workflow Once configured, it will run automatically at the scheduled time. Requirements Google Sheet with the IPA Size History tab. Accessible IPA file URLs. SMTP / gmail account (Gmail recommended). n8n (cloud or self-hosted) with Google Sheets + Email nodes. Sufficient local storage for IPA file downloads. How to customize the workflow Multiple apps**: Add more configs to APP_CONFIGS. Thresholds**: Adjust SIZE_INCREASE_THRESHOLD / SIZE_DECREASE_THRESHOLD. Notification templates**: Customize subject/body with variables: {{app_name}}, {{current_size}}, {{previous_size}}, {{change_percent}}, {{trend_status}}. Schedule**: Change Cron from daily to hourly, weekly, etc. Large app warnings**: Adjust LARGE_APP_WARNING. Trend analysis**: Extend beyond one build (7-day, 30-day averages). Storage backend**: Swap Google Sheets for CSV, DB or S3. Add-ons to level up Slack Notifications**: Add Slack webhook alerts with emojis & formatting. Size History Charts**: Generate trend graphs with Chart.js or Google Charts API. Environment separation**: Monitor dev/staging/prod builds separately. Regression detection**: Statistical anomaly checks. Build metadata**: Log bundle ID, SDK versions, architectures. Archive management**: Auto-clean old records to save space. Dashboards**: Connect to Grafana, DataDog or custom BI. 
**CI/CD triggers**: Integrate with pipelines via webhook trigger. Common Troubleshooting **No size data** → check URLs return binary IPA (not HTML error). **Download failures** → confirm hosting permissions & direct links. **Missing alerts** → ensure thresholds & prior history exist. **Google Sheets errors** → check sheet/tab names & OAuth credentials. **Email issues** → validate SMTP credentials, spam folder, sender reputation. **Large file timeouts** → raise HTTP timeout for >100MB files. **Trend errors** → make sure at least 2 builds exist. **No runs** → confirm workflow is active and timezone is correct. Need Help? If you'd like to customize this workflow to suit your app development process, then simply reach out to us here and we'll help you tailor the template to your exact use case.
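The trend/alert logic described in "How it works" can be sketched as follows; the default threshold values (±10%, 300 MB large-app warning) match the configuration variables listed above, while the function shape itself is an illustrative assumption:

```javascript
// Compare the current build's size to the previous build and decide
// whether an alert should fire. Defaults mirror the template's
// SIZE_INCREASE_THRESHOLD / SIZE_DECREASE_THRESHOLD / LARGE_APP_WARNING.
function evaluateSizeChange(currentMB, previousMB, opts = {}) {
  const {
    increaseThreshold = 0.10,
    decreaseThreshold = 0.10,
    largeAppWarningMB = 300,
  } = opts;

  const change = (currentMB - previousMB) / previousMB;
  return {
    changePercent: Math.round(change * 1000) / 10, // e.g. 12.5 (%)
    alert: change > increaseThreshold || change < -decreaseThreshold,
    largeAppWarning: currentMB > largeAppWarningMB,
  };
}

evaluateSizeChange(112, 100);
// → { changePercent: 12, alert: true, largeAppWarning: false }
```

Because both directions are checked, a sudden size *drop* (often a sign of missing assets in the build) triggers an alert just like an unexpected increase.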
by Growth AI
Who's it for Marketing teams, business intelligence professionals, competitive analysts, and executives who need consistent industry monitoring with AI-powered analysis and automated team distribution via Discord. What it does This intelligent workflow automatically monitors multiple industry topics, scrapes and analyzes relevant news articles using Claude AI, and delivers professionally formatted intelligence reports to your Discord channel. The system provides weekly automated monitoring cycles with personalized bot communication and comprehensive content analysis. How it works The workflow follows a sophisticated 7-phase automation process: Scheduled Activation: Triggers weekly monitoring cycles (default: Mondays at 9 AM) Query Management: Retrieves monitoring topics from centralized Google Sheets configuration News Discovery: Executes comprehensive Google News searches using SerpAPI for each configured topic Content Extraction: Scrapes full article content from top 3 sources per topic using Firecrawl AI Analysis: Processes scraped content using Claude 4 Sonnet for intelligent synthesis and formatting Discord Optimization: Automatically segments content to comply with Discord's 2000-character message limits Automated Delivery: Posts formatted intelligence reports to Discord channel with branded "Claptrap" bot personality Requirements Google Sheets account for query management SerpAPI account for Google News access Firecrawl account for article content extraction Anthropic API access for Claude 4 Sonnet Discord bot with proper channel permissions Scheduled execution capability (cron-based trigger) How to set up Step 1: Configure Google Sheets query management Create monitoring sheet: Set up Google Sheets document with "Query" sheet Add search topics: Include industry keywords, competitor names, and relevant search terms Sheet structure: Simple column format with "Query" header containing search terms Access permissions: Ensure n8n has read access to the Google 
Sheets document Step 2: Configure API credentials Set up the following credentials in n8n: Google Sheets OAuth2: For accessing query configuration sheet SerpAPI: For Google News search functionality with proper rate limits Firecrawl API: For reliable article content extraction across various websites Anthropic API: For Claude 4 Sonnet access with sufficient token limits Discord Bot API: With message posting permissions in target channel Step 3: Customize scheduling settings Cron expression: Default set to "0 9 * * 1" (Mondays at 9 AM) Frequency options: Adjust for daily, weekly, or custom monitoring cycles Timezone considerations: Configure according to team's working hours Execution timing: Ensure adequate processing time for multiple topics Step 4: Configure Discord integration Set up Discord delivery settings: Guild ID: Target Discord server (currently: 919951151888236595) Channel ID: Specific monitoring channel (currently: 1334455789284364309) Bot permissions: Message posting, embed suppression capabilities Brand personality: Customize "Claptrap" bot messaging style and tone Step 5: Customize content analysis Configure AI analysis parameters: Analysis depth: Currently processes top 3 articles per topic Content format: Structured markdown format with consistent styling Language settings: Currently configured for French output (easily customizable) Quality controls: Error handling for inaccessible articles and content How to customize the workflow Query management expansion Topic categories: Organize queries by industry, competitor, or strategic focus areas Keyword optimization: Refine search terms based on result quality and relevance Dynamic queries: Implement time-based or event-triggered query modifications Multi-language support: Add international keyword variations for global monitoring Advanced content processing Article quantity: Modify from 3 to more articles per topic based on analysis needs Content filtering: Add quality scoring and relevance filtering 
for article selection Source preferences: Implement preferred publisher lists or source quality weighting Content enrichment: Add sentiment analysis, trend identification, or competitive positioning Discord delivery enhancements Rich formatting: Implement Discord embeds, reactions, or interactive elements Multi-channel distribution: Route different topics to specialized Discord channels Alert levels: Add priority-based messaging for urgent industry developments Archive functionality: Create searchable message threads or database storage Integration expansions Slack compatibility: Replace or supplement Discord with Slack notifications Email reports: Add formatted email distribution for executive summaries Database storage: Implement persistent storage for historical analysis and trending API endpoints: Create webhook endpoints for third-party system integration AI analysis customization Analysis templates: Create topic-specific analysis frameworks and formatting Competitive focus: Enhance competitor mention detection and analysis depth Trend identification: Implement cross-topic trend analysis and strategic insights Summary levels: Create executive summaries alongside detailed technical analysis Advanced monitoring features Intelligent content curation The system provides sophisticated content management: Relevance scoring: Automatic ranking of articles by topic relevance and publication authority Duplicate detection: Prevents redundant coverage of the same story across different sources Content quality assessment: Filters low-quality or promotional content automatically Source diversity: Ensures coverage from multiple perspectives and publication types Error handling and reliability Graceful degradation: Continues processing even if individual articles fail to scrape Retry mechanisms: Automatic retry logic for temporary API failures or network issues Content fallbacks: Uses article snippets when full content extraction fails Notification continuity: Ensures Discord 
delivery even with partial content processing Results interpretation Intelligence report structure Each monitoring cycle delivers: Topic-specific summaries: Individual analysis for each configured search query Source attribution: Complete citation with publication date, source, and URL Structured formatting: Consistent presentation optimized for quick scanning Professional analysis: AI-generated insights maintaining factual accuracy and business context Performance analytics Monitor system effectiveness through: Processing metrics: Track successful article extraction and analysis rates Content quality: Assess relevance and usefulness of delivered intelligence Team engagement: Monitor Discord channel activity and report utilization System reliability: Track execution success rates and error patterns Use cases Competitive intelligence Market monitoring: Track competitor announcements, product launches, and strategic moves Industry trends: Identify emerging technologies, regulatory changes, and market shifts Partnership tracking: Monitor alliance formations, acquisitions, and strategic partnerships Leadership changes: Track executive movements and organizational restructuring Strategic planning support Market research: Continuous intelligence gathering for strategic decision-making Risk assessment: Early warning system for industry disruptions and regulatory changes Opportunity identification: Spot emerging markets, technologies, and business opportunities Brand monitoring: Track industry perception and competitive positioning Team collaboration enhancement Knowledge sharing: Centralized distribution of relevant industry intelligence Discussion facilitation: Provide common information baseline for strategic discussions Decision support: Deliver timely intelligence for business planning and strategy sessions Competitive awareness: Keep teams informed about competitive landscape changes Workflow limitations Language dependency: Currently optimized for French analysis 
output (easily customizable) Processing capacity: Limited to 3 articles per query (configurable based on API limits) Platform specificity: Configured for Discord delivery (adaptable to other platforms) Scheduling constraints: Fixed weekly schedule (customizable via cron expressions) Content access: Dependent on article accessibility and website compatibility with Firecrawl API dependencies: Requires active subscriptions and proper rate limit management for all integrated services
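The Discord-optimization phase above (segmenting reports to stay under Discord's 2000-character message limit) can be sketched as a splitting function; preferring paragraph boundaries is an assumption about how the report is formatted, not a requirement of the Discord API:

```javascript
// Split a long report into chunks under Discord's 2000-character
// message limit, preferring to break on paragraph boundaries so
// formatting stays readable.
function splitForDiscord(report, limit = 2000) {
  const chunks = [];
  let current = "";
  for (const paragraph of report.split("\n\n")) {
    const candidate = current ? `${current}\n\n${paragraph}` : paragraph;
    if (candidate.length <= limit) {
      current = candidate;
    } else {
      if (current) chunks.push(current);
      // A single paragraph longer than the limit is hard-split.
      let rest = paragraph;
      while (rest.length > limit) {
        chunks.push(rest.slice(0, limit));
        rest = rest.slice(limit);
      }
      current = rest;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each returned chunk is then posted as a separate Discord message, so a long per-topic analysis arrives as an ordered sequence rather than being truncated.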
by Cheng Siong Chin
**How It Works**
This workflow automates end-to-end contract and invoice management using AI. It processes proposals through intelligent contract generation, approval workflows, and automated invoicing. OpenAI analyzes proposal content while the system routes approvals intelligently. Upon approval, contracts are generated, invoices created, and notifications sent. The workflow also monitors payment status, generates financial forecasts, and manages follow-up tasks, eliminating manual contract-generation delays and approval bottlenecks while ensuring accurate financial record-keeping.

**Setup Steps**
1. Configure OpenAI API credentials in the n8n credentials manager.
2. Connect a Google Sheets account for invoice and forecast storage.
3. Set up Gmail for approval notifications and client communications.
4. Input Stripe/payment-processor credentials for payment tracking.
5. Map proposal form inputs to the workflow's start node.

**Prerequisites**: OpenAI API key, Google Sheets account, Gmail account, Stripe/payment-processor access
**Use Cases**: Multi-stage approval workflows, SaaS contract management, professional services invoicing
**Customization**: Modify the approval logic in the conditional nodes, or replace OpenAI with the Anthropic API
**Benefits**: Reduces contract processing time by 80%, eliminates approval delays, prevents invoicing errors
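As a rough illustration of the kind of routing the approval conditional nodes could implement — the thresholds, tier names, and field names below are assumptions for the sketch, not the template's actual values:

```javascript
// Hypothetical approval routing for the conditional nodes. Thresholds and
// field names are illustrative assumptions, not the template's real logic.
function routeApproval(proposal) {
  if (proposal.amount > 50000) return "executive"; // high-value: executive sign-off
  if (proposal.amount > 10000) return "manager";   // mid-value: manager review
  return "auto";                                   // small contracts: auto-approve
}
```

In n8n this maps naturally onto a Switch node (or chained IF nodes), with each output wired to a different Gmail notification branch.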
by Jay Emp0
**Overview**
Fetch multiple Google Analytics GA4 metrics daily, post them to Discord, and update the previous days' entries as GA data finalizes over seven days.

**Benefits**
- Automates daily traffic reporting
- Maintains a single message per day, avoiding channel clutter
- Provides near real-time updates by editing prior messages

**Use Case**
Teams tracking website performance via Discord (or any chat tool) without manual copy-paste: marketing managers, community moderators, growth hackers. If your manager asks you for a daily marketing report every morning, you can now automate it.

**Notes**
- The Google Analytics node in n8n does not provide real-time data; GA keeps revising the previous seven days' values.
- The Discord node in n8n has no feature to update an existing message by message ID, so the workflow calls the Discord API directly for edits.
- Most businesses use multiple Google Analytics properties across their digital platforms.

**Core Logic**
1. Schedule trigger fires once a day.
2. Google Analytics node retrieves metrics for the past 7 days.
3. Aggregate node collates all records.
4. Discord node fetches the last 10 messages in the broadcast channel.
5. Code node maps existing Discord messages to the Google Analytics records using the date fields.
6. For each GA record: if no message exists, send a new POST to the Discord channel; if a message exists and the metrics changed, send a PATCH to update the existing message.
7. Batch loops and wait nodes prevent rate-limiting.

**Setup Instructions**
1. Import the workflow JSON into n8n.
2. Follow the n8n guide to create a Google Analytics OAuth2 credential with access to all required GA accounts.
3. Follow the n8n guide to create a Discord OAuth2 credential for "Get Messages" operations.
4. Follow the Discord guide to create an HTTP Header Auth credential named "Discord-Bot" with header key `Authorization` and value `Bot <your-bot-token>`.
5. In the two Set nodes at the beginning of the flow, assign `discord_channel_id` and `google_analytics_id`.
To find your Discord channel ID, send a message in the channel, then choose Copy Message Link; the link has the form https://discord.com/channels/server_id/channel_id/message_id, and the channel_id is the middle number. To find your Google Analytics property ID, open the Google Analytics dashboard, open the property selector in the top right, and copy that number into the flow. Adjust the schedule trigger to your preferred report hour, then activate the workflow.

**Customization**
Replace the Discord HTTP Request nodes with Slack, ClickUp, WhatsApp, or Telegram integrations by swapping the POST/PATCH endpoints and authentication.
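The matching step in the core logic — mapping Discord messages to GA records by date and deciding whether to POST or PATCH — can be sketched roughly as follows. The field names (`content`, `id`, `date`, `rendered`) are assumptions based on the description, not the template's exact schema:

```javascript
// Sketch of the Code node that matches Discord messages to GA records by date.
// Message/record shapes are assumed for illustration.
function planDiscordOps(gaRecords, discordMessages) {
  // Index existing messages by the ISO date embedded in their text.
  const byDate = new Map();
  for (const msg of discordMessages) {
    const m = msg.content.match(/\d{4}-\d{2}-\d{2}/);
    if (m) byDate.set(m[0], msg);
  }
  return gaRecords.map((rec) => {
    const existing = byDate.get(rec.date);
    if (!existing) {
      return { op: "POST", date: rec.date };                       // no message yet: create one
    }
    if (existing.content !== rec.rendered) {
      return { op: "PATCH", messageId: existing.id, date: rec.date }; // metrics revised: edit in place
    }
    return { op: "SKIP", date: rec.date };                         // unchanged: leave alone
  });
}
```

Downstream, the POST branch would call `POST /channels/{channel_id}/messages` and the PATCH branch `PATCH /channels/{channel_id}/messages/{message_id}` via the "Discord-Bot" HTTP credential.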
by Airtop
Extract Facebook Group Posts with Airtop

**Use Case**
Extracting content from Facebook Groups allows community managers, marketers, and researchers to gather insights, monitor discussions, and collect engagement metrics efficiently. This automation streamlines the process of retrieving non-sponsored post data from group feeds.

**What This Automation Does**
This automation extracts key post details from a Facebook Group feed using the following input parameters:
- **Facebook Group URL**: The URL of the Facebook Group feed you want to scrape.
- **Airtop Profile**: The name of your Airtop Profile authenticated to Facebook.

It returns up to 5 non-sponsored posts with the following attributes for each: post text, post URL, page/profile URL, timestamp, number of likes, number of shares, number of comments, page or profile details, and post thumbnail.

**How It Works**
1. Form Trigger: Collects the Facebook Group URL and Airtop Profile via a form.
2. Browser Automation: Initiates a new browser session using Airtop, navigates to the provided Facebook Group feed, and uses an AI prompt to extract post data, including interaction metrics and profile information.
3. Structured Output: The results are returned in a defined JSON schema, ready for downstream use.

**Setup Requirements**
- Airtop API Key (free to generate)
- An Airtop Profile logged into Facebook

**Next Steps**
- **Integrate With Analytics Tools**: Feed the output into dashboards or analytics platforms to monitor community engagement.
- **Automate Alerts**: Trigger notifications for posts matching certain criteria (e.g., high engagement, keywords).
- **Combine With Comment Automation**: Extend this to reply to posts or engage with users using other Airtop automations.

Read more about how to extract posts from Facebook groups.
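To make the structured output concrete, here is an illustrative shape for one extracted post. The field names and values are assumptions mirroring the attribute list above, not Airtop's exact schema:

```javascript
// Illustrative example of one extracted post (assumed field names and
// made-up values; Airtop's real JSON schema may differ).
const examplePost = {
  post_text: "Sample post body",
  post_url: "https://www.facebook.com/groups/123/posts/456",
  profile_url: "https://www.facebook.com/some.profile",
  timestamp: "2024-05-01T10:30:00Z",
  likes: 42,
  shares: 3,
  comments: 7,
  profile_details: { name: "Some Profile" },
  thumbnail_url: "https://example.com/thumb.jpg",
};
```

A downstream node can then aggregate or filter on the numeric engagement fields (`likes`, `shares`, `comments`) directly.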
by Anna Bui
Automatically monitor LinkedIn posts from your community members and create AI-powered content digests for efficient social media curation.

This template is perfect for community managers, content creators, and social media teams who need to track LinkedIn activity from their network without spending hours manually checking profiles. It fetches recent posts, extracts key information, and creates digestible summaries using AI.

**Good to know**
- **API costs apply**: LinkedIn API calls ($0.01–0.05 per profile check) and OpenAI processing ($0.001–0.01 per post)
- **Rate limiting included**: Built-in random delays prevent API throttling issues
- **Flexible scheduling**: Easy to switch from a daily schedule to webhook triggers for real-time processing
- **Requires API setup**: Need RapidAPI access for LinkedIn data and OpenAI for content processing

**How it works**
- **Daily profile scanning**: Automatically checks each LinkedIn profile in your Airtable for posts from yesterday
- **Smart data extraction**: Pulls post content, engagement metrics, author information, and timestamps
- **AI-powered summarization**: Creates 30-character previews of posts for quick content scanning
- **Duplicate prevention**: Checks existing records to avoid storing the same post multiple times
- **Structured storage**: Saves all processed data to Airtable with clean formatting and metadata
- **Batch processing**: Handles multiple profiles efficiently with proper error handling and delays

**How to use**
1. **Set up Airtable base**: Create tables for LinkedIn profiles and processed posts using the provided structure
2. **Configure API credentials**: Add your RapidAPI LinkedIn access and OpenAI API key to n8n credentials
3. **Import LinkedIn profiles**: Add community members' LinkedIn URLs and URNs to your profiles table
4. **Test the workflow**: Run manually with a few profiles to ensure everything works correctly
5. **Activate schedule**: Enable daily automation or switch to webhook triggers for real-time processing

**Requirements**
- **Airtable account**: For storing profile lists and managing processed posts with the proper field structure
- **RapidAPI Professional Network Data API**: Access to LinkedIn post data (requires a subscription)
- **OpenAI API account**: For intelligent content summarization and preview generation
- **LinkedIn profile URNs**: Properly formatted LinkedIn profile identifiers for API calls

**Customising this workflow**
- **Change monitoring frequency**: Switch from daily to hourly checks or use webhook triggers for real-time updates
- **Expand data extraction**: Add company information, hashtag analysis, or engagement trending
- **Integrate notification systems**: Add Slack, email, or Discord alerts for high-engagement posts
- **Connect content tools**: Link to Buffer, Hootsuite, or other social media management platforms for direct publishing
- **Add filtering logic**: Set up conditions to only process posts with minimum engagement thresholds
- **Scale with multiple communities**: Duplicate the workflow for different LinkedIn communities or industry segments
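Two of the steps above — the 30-character previews and duplicate prevention — can be sketched in a few lines. Note the template generates previews with OpenAI; the truncation below is a simple non-AI stand-in, and the field names are assumptions, not the template's schema:

```javascript
// Sketch of the 30-character preview and the duplicate check. The preview
// here is plain truncation (the template uses OpenAI); field names assumed.
function makePreview(text, maxLen = 30) {
  // Truncate long posts, ending with an ellipsis so the cut is visible.
  return text.length <= maxLen ? text : text.slice(0, maxLen - 1) + "…";
}

function isDuplicate(post, existingUrls) {
  // Skip posts whose URL is already stored in the Airtable posts table.
  return existingUrls.has(post.post_url);
}
```

In the workflow, `existingUrls` would be built from an Airtable "Search records" node before the insert step, so reruns never store the same post twice.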