by Evoort Solutions
**Job Search Automation with Job Search Global API & Google Sheet Logging**

**Description:** Automate your job search process by querying the Job Search Global API via RapidAPI every 6 hours for a specified keyword like "Web Developer." This workflow extracts job listings and saves them directly to Google Sheets, with alerts sent for any API failures.

**Workflow Overview**

1. **Schedule Trigger** - Runs the workflow automatically every 6 hours to ensure up-to-date job listings.
2. **Set Search Term** - Defines the dynamic job keyword, e.g., "Web Developer," used in API requests.
3. **Fetch Job Listings** - Sends a POST request to the Job Search Global API (via RapidAPI) to retrieve job listings with pagination.
4. **Check API Response** - Validates the API response status, branching the workflow on success or failure.
5. **Extract Job Data** - Parses the job listings array from the API response for processing.
6. **Save to Google Sheet** - Appends or updates job listings in Google Sheets, avoiding duplicates by matching job titles.
7. **Send Failure Notification Email** - Sends an alert email if the API response fails or returns an error.

**How to Obtain Your RapidAPI Key (Quick Steps)**

1. Go to the Job Search Global API page on RapidAPI.
2. Sign up or log in to your RapidAPI account.
3. Subscribe to the API plan that suits your needs.
4. Copy your unique X-RapidAPI-Key from the dashboard.
5. Insert this key into your workflow's HTTP Request node headers.

**How to Configure Google Sheets**

1. Create a new Google Sheet for job listings.
2. Share the sheet with your Google Service Account email to enable API access.
3. Use the sheet URL in the Google Sheets node within your workflow.
4. Map columns correctly based on the job data fields.
**Google Sheet Columns Used**

| Column Name | Description |
| ----------- | ----------------------------------- |
| title | Job title |
| url | Job posting URL |
| company | Company name |
| postDate | Date job was posted |
| jobSource | Source of the job listing |
| slug | Unique job identifier or slug |
| sentiment | Sentiment analysis score (if any) |
| dateAdded | Date the job was added to the sheet |
| tags | Associated tags or keywords |
| viewCount | Number of views for the job post |

**Use Cases & Benefits**

- **Automated Job Tracking:** Get fresh job listings without manual searching by automatically querying the Job Search Global API multiple times per day.
- **Centralized Job Data:** Save and update listings in Google Sheets for easy filtering, sharing, and tracking.
- **Failure Alerts:** Get notified immediately if API calls fail, helping maintain workflow reliability.
- **Customizable Search:** Change keywords anytime to tailor job searches for different roles or industries.

**Who Is This Workflow For?**

- **Recruiters** looking to monitor job market trends in real time.
- **Job Seekers** who want to automate job discovery for specific roles like "Web Developer."
- **HR Teams** managing talent pipelines and job postings.
- **Data Analysts** needing structured job market data for research or reporting.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n

Save time, stay consistent, and keep your job-listing data fresh effortlessly!
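The "Fetch Job Listings" step above boils down to one authenticated POST. As a minimal sketch, the request can be assembled like this — note the host, path, and body field names (`search`, `page`) are placeholders; copy the real values from the API's RapidAPI "Endpoints" tab before use.

```javascript
// Sketch of the "Fetch Job Listings" HTTP request. The host, path, and body
// field names are ASSUMPTIONS -- check the RapidAPI endpoint docs for the
// actual values.
function buildJobSearchRequest(apiKey, keyword, page = 1) {
  const host = "job-search-global.p.rapidapi.com"; // hypothetical host
  return {
    url: `https://${host}/jobs`, // hypothetical path
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-RapidAPI-Key": apiKey, // your key from the RapidAPI dashboard
        "X-RapidAPI-Host": host,
      },
      body: JSON.stringify({ search: keyword, page }), // pagination via `page`
    },
  };
}

// Usage: const { url, options } = buildJobSearchRequest(key, "Web Developer");
//        const data = await fetch(url, options).then((r) => r.json());
```

A non-2xx response from this call is what the "Check API Response" node would route to the failure-notification branch.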
by Avkash Kakdiya
**How it works**

This workflow creates a Slack-based CRM assistant that allows users to query HubSpot data using natural language. When a user mentions the bot in Slack, the message is cleaned and processed to remove Slack-specific formatting. The workflow then retrieves and filters relevant data from HubSpot (deals, companies, and contacts). Finally, an AI agent formats the response and sends a structured reply back to Slack.

**Step-by-step**

**Trigger on Slack mention**
- Slack Trigger - Listens for app mentions in Slack channels.
- Code in JavaScript - Cleans the message by removing Slack IDs and formatting.

**Fetch and filter CRM data**
- Get Deals - Retrieves deals from HubSpot.
- Filter Deals - Filters deals based on the user query.
- Get many companies - Fetches company records from HubSpot.
- Filter Companies - Matches companies against the query.
- Get Contacts - Retrieves contact data from HubSpot.
- Filter Contacts - Filters contacts using name-based matching.
- Merge - Combines filtered deals, companies, and contacts into one dataset.

**Generate and send AI response**
- AI Agent - Uses AI to format and structure the CRM data into a readable response.
- Google Gemini Chat Model - Provides the language model for the AI agent.
- Send a message - Sends the final response back to the Slack channel.

**Why use this?**

- Enables instant CRM access directly from Slack without logging into HubSpot
- Simplifies data lookup using natural language queries
- Combines multiple CRM objects into a single intelligent response
- Improves team productivity with faster decision-making
- Easily customizable for additional fields, filters, or AI formatting
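The "Code in JavaScript" cleanup step can be sketched as a small function — Slack wraps mentions as `<@U…>` and links as `<url|label>`, so stripping those leaves only the user's actual question for the AI agent. This is a minimal sketch of that idea, not the template's exact node code.

```javascript
// Minimal sketch of the Slack-message cleanup step: strip the bot mention,
// unwrap Slack-formatted links, and collapse whitespace so the LLM sees
// only the user's actual question.
function cleanSlackMessage(text) {
  return text
    .replace(/<@[A-Z0-9]+>/g, "")          // drop <@U…> user/bot mentions
    .replace(/<([^|>]+)\|([^>]+)>/g, "$2") // <url|label> -> label
    .replace(/<([^>]+)>/g, "$1")           // <url> -> bare url
    .replace(/\s+/g, " ")
    .trim();
}

// cleanSlackMessage("<@U0ABC123> show deals for <https://x.com|Acme Corp>")
// -> "show deals for Acme Corp"
```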
by Rahul Joshi
📘 **Description**

This workflow automates the incident response lifecycle — from creation to communication and archival. It instantly creates Jira tickets for new incidents, alerts the on-call Slack team, generates timeline reports, logs the status in Google Sheets, and archives documentation to Google Drive — all automatically. It helps engineering and DevOps teams respond faster, maintain audit trails, and ensure no incident details are lost, even after Slack or Jira history expires.

⚙️ **What This Workflow Does (Step-by-Step)**

- 🟢 **Manual Trigger** - Starts the incident creation and alerting process manually on demand.
- 🏷️ **Define Incident Metadata** - Sets up standardized incident data (Service, Severity, Description) used across Jira, Slack, and Sheets for consistent processing.
- 🎫 **Create Jira Incident Ticket** - Automatically creates a Jira task with service, severity, and description fields. Returns a unique Jira key and link for tracking.
- ✅ **Validate Jira Ticket Creation Success** - Confirms the Jira ticket was successfully created before continuing. True path: proceeds to Slack alerts and the documentation flow. False path: logs the failure details to Google Sheets for debugging.
- 🚨 **Log Jira Creation Failures to Error Sheet** - Records any Jira API errors, permission issues, or timeouts to an error log sheet for reliability monitoring.
- 🔗 **Combine Incident & Jira Data** - Merges incident context with Jira ticket data so all details are unified for downstream notifications.
- 💬 **Format Incident Alert for Slack** - Generates a rich Slack message containing the Jira key, service, severity, and description with clickable Jira links.
- 📢 **Alert On-Call Team in Slack** - Posts the formatted message directly to the #oncall Slack channel to instantly notify engineers.
- 📋 **Generate Incident Timeline Report** - Parses Slack message content to create a detailed incident timeline including timestamps, service, severity, and placeholders for postmortem tracking.
- 📄 **Convert Timeline to Text File** - Converts the generated timeline into a structured .txt file for archival and compliance.
- ☁️ **Archive Incident Timeline to Drive** - Uploads the finalized incident report to Google Drive ("Incident Reports" folder) with timestamped filenames for traceability.
- 📊 **Log Incident to Status Tracking Sheet** - Appends the Jira key, service, severity, and timestamp to the "status update" Google Sheet to build a live incident dashboard and enable SLA tracking.

🧩 **Prerequisites**

- Jira account with API access
- Google Sheets for "status update" and "error log" tracking
- Slack workspace connected via API credentials
- Google Drive access for archival

💡 **Key Benefits**

✅ Instant Slack alerts for new incidents
✅ Centralized Jira ticketing and tracking
✅ Automated timeline documentation for audits
✅ Seamless Google Drive archival and status logging
✅ Reduced MTTR through faster communication

👥 **Perfect For**

- DevOps and SRE teams managing production incidents
- Engineering managers overseeing uptime and reliability
- Organizations needing automated post-incident documentation
- Teams focused on SLA adherence and compliance reporting
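The Drive archival step hinges on timestamped filenames so reports sort chronologically. One possible convention (the exact format used by the template is not specified, so this is an assumption) keys the file on the Jira issue key plus an ISO timestamp with filesystem-safe separators:

```javascript
// Sketch of a timestamped filename for the "Incident Reports" Drive folder.
// The "incident-<KEY>-<STAMP>.txt" pattern is an ASSUMED convention, not the
// template's documented one.
function buildReportFilename(jiraKey, date = new Date()) {
  const stamp = date
    .toISOString()
    .replace(/\.\d+Z$/, "") // drop milliseconds and the Z suffix
    .replace(/:/g, "-");    // colons are unsafe in many filesystems
  return `incident-${jiraKey}-${stamp}.txt`;
}

// buildReportFilename("OPS-42", new Date("2024-05-01T12:30:00Z"))
// -> "incident-OPS-42-2024-05-01T12-30-00.txt"
```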
by Prueba
Think of this workflow as your personal news assistant that:

- Monitors multiple technology websites 24/7
- Uses AI to read and understand each article
- Filters out low-quality content automatically
- Saves the best articles to Notion with summaries
- Sends you Telegram alerts for must-read content

**Prerequisites (What You Need Before Starting)**

1. **Notion Account (Free)** - Sign up at notion.so. You'll store all curated articles here.
2. **OpenAI Account (Paid)** - Sign up at platform.openai.com. Cost: $0.01-0.02 per article ($15-30/month for 50 articles/day). Needed for AI content analysis.
3. **Telegram Account (Free)** - Download the Telegram app. Needed for instant notifications.
4. **RSS Feed URLs (Free)** - Included: TechCrunch, Dev.to, The Verge. Optional: add your own favorite tech blogs.

**Step-by-Step Configuration**

**STEP 1: Create Your Notion Database**

1.1 Create New Database
- Open Notion (notion.so)
- Click "+ New Page"
- Select "Table" view
- Name it "Content Curator"

1.2 Add Required Properties
Click "+ New Property" and add each:
- **Title** (Rich Text) - Already exists by default
- **URL** (URL type)
- **Summary** (Rich Text)
- **Category** (Select) - Add options: Technology, AI, Business, DevOps, Other
- **Tags** (Rich Text)
- **Sentiment** (Select) - Add options: Positive, Neutral, Negative
- **Priority** (Number)
- **Source** (Rich Text)
- **Published** (Date)
- **Added Date** (Date)
- **Status** (Select) - Add options: New, Read, Archived

1.3 Get Database ID
- Open your Notion database
- Click the "Share" button (top right)
- Click "Copy Link"
- Extract the ID from the URL:
  - URL: https://www.notion.so/28e3a42420b2801e8ef5c680e49afc2e
  - ID: 28e3a42420b2801e8ef5c680e49afc2e
- Save this ID - you'll need it multiple times

**STEP 2: Create Notion Integration**

2.1 Create Integration
- Go to: notion.so/my-integrations
- Click "+ New Integration"
- Name: "RSS Content Curator"
- Select your workspace
- Click "Submit"
- Copy the "Internal Integration Token" (starts with secret_)
- IMPORTANT: Save this token safely!
2.2 Connect Integration to Database
- Open your "Content Curator" database in Notion
- Click "•••" (three dots, top right)
- Scroll to "Add connections"
- Select "RSS Content Curator"
- Click "Confirm"

**STEP 3: Set Up OpenAI**
- Go to: platform.openai.com
- Sign up or log in
- Add a payment method (required)
- Navigate to: platform.openai.com/api-keys
- Click "+ Create new secret key"
- Name: "n8n-content-curator"
- Copy the key (starts with sk-)
- IMPORTANT: You can't see this key again - save it now!

**STEP 4: Create Telegram Bot**

4.1 Create the Bot
- Open the Telegram app
- Search for "@BotFather" and start a chat
- Send: /newbot
- Follow the prompts: Bot name: "My Content Curator"; Username: "my_content_curator_bot"
- Copy the HTTP API token and save it

4.2 Get Your Chat ID
- Search for your bot in Telegram
- Start a chat and send any message
- Open in a browser: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates (replace <YOUR_BOT_TOKEN> with your actual token)
- Find "chat":{"id": followed by a number
- Copy that number (e.g., 8346316847) and save this Chat ID

**STEP 5: Import Workflow to n8n**
- Log in to n8n
- Click "Workflows" → "Add Workflow"
- Click the three dots (⋮) → "Import from File"
- Select RSS_AI_Content_Curator_Enhanced.json
- Click "Import"

**STEP 6: Configure Notion Credentials**

Four nodes need a Notion connection: Get Existing Articles, Get Old Articles (>30 days), Save to Notion, and Archive Old Articles.

For EACH node:
- Click the node
- Find "Credential to connect with" and click "Create New"
- Select "Notion API"
- Name: "My Notion Curator"
- Paste your Integration Token from Step 2.1
- Click "Save"

Update Database IDs:
- In each Notion node, find the "Database ID" field
- Click the refresh icon next to it
- Select "From list" and choose your "Content Curator" database

Test: Click "Execute Node" - success shows a green checkmark ✓

**STEP 7: Configure OpenAI Credentials**
- Click on the "AI Content Analysis" node
- Find the "Authentication" section
- Under "HTTP Header Auth", click "Create New"
- Fill in: Name: Authorization; Value: Bearer sk-YOUR_API_KEY_HERE (replace with your actual OpenAI key from Step 3)
- Click "Save"

**STEP 8: Configure Telegram Credentials**
- Click on the "Telegram Notification" node
- Under "Credential to connect with", click "Create New"
- Select "Telegram API"
- Name: "My Telegram Bot"
- Paste the Bot Token from Step 4.1
- Click "Save"

Update Chat ID:
- In the "Telegram Notification" node, find the "Chat ID" field
- Paste your Chat ID from Step 4.2

Test: Click "Execute Node"

**STEP 9: Test the Workflow**

Manual Test Run:
- Verify all nodes show green (no errors)
- Click "Execute Workflow" (top right)
- Watch the nodes light up in sequence and check for red error nodes

If all green:
- Open Notion - you should see new articles
- Check Telegram - you may see a notification if a high-priority article was found

Expected Results:
- Workflow fetches from 3 RSS feeds
- Removes duplicate articles
- Checks which articles are new (not already in Notion)
- AI analyzes each article (summary, category, priority)
- Articles with priority ≥60 are saved to Notion
- High-priority articles trigger a Telegram notification

**STEP 10: Activate Automatic Execution**

Replace the Manual Trigger:
- Click the "Manual Trigger" node and press Delete
- Click "+" to add a node, search for "Schedule Trigger", and select it

Configure the Schedule - choose one:
- **Every 4 hours:** More updates, higher cost
- **Every 6 hours:** Balanced (recommended)
- **Every 12 hours:** Lower cost
- **Daily at a specific time:** Minimal cost

Example for every 6 hours: Mode: "Every X"; Value: 6; Unit: Hours

Activate:
- Click the toggle switch (top right); it should turn green: "Active"
- The workflow now runs automatically!
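The curation gate in the expected results — dedupe articles, then keep only those the AI scored at priority ≥60 — can be sketched as a few lines of Code-node-style JavaScript. Field names (`url`, `priority`) mirror the Notion columns from Step 1.2; the actual node code may differ.

```javascript
// Sketch of the curation gate: drop duplicate feed items by URL, then keep
// only articles at or above the priority threshold (default 60, per the
// "Expected Results" above).
function curateArticles(articles, minPriority = 60) {
  const seen = new Set();
  return articles.filter((a) => {
    if (seen.has(a.url)) return false; // duplicate feed item -> skip
    seen.add(a.url);
    return a.priority >= minPriority;  // below threshold -> not saved to Notion
  });
}
```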
by Jitesh Dugar
**Enterprise Legal Vault: AI Classification & Multi-Jurisdictional Security**

🎯 **Description**

This enterprise-grade legal document management system provides end-to-end automation for securing, classifying, and managing the lifecycle of sensitive documents. It is designed to handle high-stakes data across multiple jurisdictions (GDPR, CCPA, HIPAA) with AI-powered intelligence.

✨ **What This Workflow Does**

1. **Intelligent Intake & Deduplication** - Monitors Google Drive, Email, and Webhooks. It uses SHA-256 fingerprinting to ensure every document is unique and to prevent redundant processing.
2. **AI-Powered Analysis Engine** - Utilizes OpenAI/Claude to classify documents (NDA, MSA, Employment Agreements), detect jurisdiction, and assign a composite Risk Score (1-100).
3. **Dynamic Security Enforcement** - Translates business rules from Airtable into technical enforcement using the HTML to PDF (Lock) node:
   - Ultra-High Risk (90+): AES-256 encryption with disabled Print, Copy, and Modify permissions.
   - High Risk (50-89): AES-128 encryption with dynamic recipient-specific watermarking.
   - Public Filings: Timestamped watermarking with no encryption.
4. **Lifecycle & Retention Automation** - Automatically calculates retention expiry dates based on legal requirements. It sets Google Calendar renewal reminders at 90-, 60-, and 30-day intervals.
5. **Intelligent Distribution Matrix**
   - Verified: Attached to HubSpot Deals and synced to Dropbox client folders.
   - Review: Flagged items are sent to a manual review queue with a Slack preview.
   - Failed: Isolated in a Quarantine folder with incident logging in Airtable.

💡 **Key Features**

- **Jurisdictional Logic:** Automatically adapts security and retention rules based on regional laws fetched from Airtable.
- **Forensic Watermarking:** Embeds tracking data (User ID, Timestamp) to deter and identify data leaks.
- **Fail-Safe Quarantine:** Prevents unverified or risky documents from ever entering the primary company storage.

📦 **Requirements**

- **HTML to PDF Node** - Essential for tiered **Lock** operations.
- **Airtable** - Acts as the central "Business Rules Engine."
- **Google Drive & HubSpot** - For storage and CRM synchronization.
- **OpenAI/Claude API** - For document classification and risk assessment.

🚀 **Benefits**

✅ **Zero Manual Security Setup** - Documents are locked according to corporate policy the moment they are uploaded.
✅ **Regulatory Compliance** - Meets strict GDPR "Right to be Forgotten" and data-localization standards automatically.
✅ **Proactive Risk Management** - High-liability clauses and auto-renewals are flagged instantly to the legal team.

Tags: #legal #compliance #pdf-lock #encryption #gdpr #sec-ops #hubspot #airtable
Category: Legal & Compliance
Difficulty: Advanced
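The tiered security enforcement described above is, at heart, a mapping from risk score to policy. A minimal sketch, using the thresholds stated in the description (the returned object shape is illustrative, not the actual HTML-to-PDF node parameters):

```javascript
// Sketch of the risk-score -> security-tier mapping. Thresholds (90+, 50-89)
// come from the description; the field names in the returned policy object
// are ASSUMED for illustration.
function securityPolicy(riskScore) {
  if (riskScore >= 90) {
    // Ultra-High Risk: hard lock
    return { encryption: "AES-256", permissions: { print: false, copy: false, modify: false } };
  }
  if (riskScore >= 50) {
    // High Risk: lighter encryption plus forensic watermark
    return { encryption: "AES-128", watermark: "recipient-specific" };
  }
  // Public filings: traceability only, no encryption
  return { encryption: null, watermark: "timestamp" };
}
```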
by Sergei Byvshev
Automatically classify and route DevOps requests from your team chat using an LLM + on-call calendar lookup.

**What it does**

This workflow turns your Mattermost channel into a smart DevOps intake system. When someone mentions @devops-duty, the workflow:

1. Receives the message via a Mattermost outgoing webhook
2. Classifies the request into one of 8 categories using an LLM
3. Looks up the current on-call engineer from Google Calendar
4. Routes the request through a Switch node based on category
5. Acknowledges in a Mattermost thread with the classification result

**Categories**

- `create_resource` - Provision new databases, secrets, services, DNS records
- `incident` - Something is broken — production or staging issues
- `question` - Information requests, status checks, clarifications
- `ci_cd_error` - Build failures, deployment issues, GitHub Actions problems
- `limits` - Billing limits, quotas exceeded
- `change_request` - Modify existing infrastructure or configuration
- `code_approve` - Code review and merge request approvals
- `other` - Anything that doesn't fit above

**Extending**

Each Switch output is an independent branch — connect sub-workflows or additional nodes per category. For example:

- `incident` → trigger an AI investigation sub-workflow with MCP tools (Kubernetes, Grafana, etc.)
- `create_resource` → run a provisioning playbook
- `ci_cd_error` → fetch GitHub Actions logs and analyze failures
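Because LLM output can be noisy ("Incident", trailing newlines, extra punctuation), a Switch node downstream benefits from a normalization step. A minimal sketch of that guard, assuming the classifier returns the category as plain text (the template's actual parsing may differ):

```javascript
// Sketch of hardening the LLM classification output: normalize whatever the
// model returns to one of the 8 categories, falling back to "other" so the
// Switch node always has a valid route.
const CATEGORIES = [
  "create_resource", "incident", "question", "ci_cd_error",
  "limits", "change_request", "code_approve", "other",
];

function normalizeCategory(llmOutput) {
  const cleaned = String(llmOutput).trim().toLowerCase().replace(/[^a-z_]/g, "");
  return CATEGORIES.includes(cleaned) ? cleaned : "other";
}
```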
by Gaetano Abbaticchio
Automatically monitor billable Kimai projects every weekday morning and receive a formatted HTML email when a project deadline is approaching or its hour budget is running low. If nothing requires attention, no email is sent, keeping your inbox clean and focused.

**Who it's for**

Teams and freelancers using Kimai to track billable hours who want to stay on top of project deadlines and budget consumption without checking manually every day. Particularly useful for agencies managing multiple concurrent projects with fixed-hour contracts or purchase orders with expiry dates.

**How it works**

The workflow runs Monday-Friday at 9 AM via a Schedule Trigger. It fetches all visible billable projects from Kimai, then, in parallel, retrieves full project details (end date, time budget, customer name) and all timesheet records for each project. Total logged hours are calculated and merged with the project data. Each project is then evaluated: it gets flagged if its end date falls within the next 10 days, or if logged hours have exceeded 80% of the allocated budget. Flagged projects are assigned a color-coded urgency level (expired, urgent, warning, on track, or missing data) and sorted by days remaining. A rich HTML email is generated with one card per project, showing the deadline status and a visual progress bar for hour consumption. The email is sent only if at least one project qualifies; otherwise the workflow exits silently.
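The per-project evaluation described above can be sketched as a single function. The 10-day and 80% thresholds come from the description; the field names and the exact cutoff between "urgent" and "warning" (here, 3 days) are assumptions — the real logic lives in the workflow's Calculate expiration node.

```javascript
// Sketch of the per-project evaluation. Thresholds (10 days, 80%) are from
// the description; field names and the 3-day "urgent" cutoff are ASSUMED.
function evaluateProject(p, now = new Date(), daysThreshold = 10, budgetPct = 0.8) {
  const daysLeft = p.endDate
    ? Math.ceil((new Date(p.endDate) - now) / 86400000)
    : null;
  const budgetUsed = p.timeBudgetHours ? p.loggedHours / p.timeBudgetHours : null;

  let level = "on track";
  if (daysLeft === null && budgetUsed === null) level = "missing data";
  else if (daysLeft !== null && daysLeft < 0) level = "expired";
  else if (
    (daysLeft !== null && daysLeft <= daysThreshold) ||
    (budgetUsed !== null && budgetUsed >= budgetPct)
  ) {
    level = daysLeft !== null && daysLeft <= 3 ? "urgent" : "warning";
  }
  return { ...p, daysLeft, budgetUsed, level };
}
```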
**How to set up**

1. Add your Kimai Bearer Token as an HTTP Bearer Auth credential in n8n
2. Add your SMTP credentials for outgoing email
3. Replace https://kimai with your actual Kimai instance URL in the three HTTP Request nodes and in the email button link inside the Build Email HTML - Report node
4. Update fromEmail and toEmail in the Send an Email node

**Requirements**

- Self-hosted or cloud Kimai instance with API access
- Kimai service account Bearer Token
- SMTP account for outgoing email

**How to customize**

| What | Where |
|---|---|
| Days threshold (default: 10) | Calculate expiration → line 1 |
| Budget alert % (default: 80%) | Calculate expiration → getBudgetInfo() |
| Schedule | Every Day at 9:00 trigger node |
| Sender / recipient email | Send an Email node |
by Cheng Siong Chin
**How It Works**

This workflow automates legal case tracking, deadline management, and exception handling for law firms, corporate legal departments, and court systems managing complex litigation portfolios. Designed for attorneys, paralegals, and legal operations teams, it solves the challenge of monitoring court filings, tracking critical deadlines, identifying case exceptions, and coordinating multi-stakeholder responses while preventing costly missed deadlines and procedural violations.

The system schedules regular monitoring (every 15 minutes for time-sensitive matters) and fetches court case data from legal databases. It validates filings through AI agents: a Classifier categorizes case types and urgency, and a Validation agent confirms data accuracy. It then checks for exceptions requiring immediate attention and orchestrates specialized responses through an Administration Orchestration Agent coordinating multiple sub-agents: an Admin Agent manages administrative tasks, Deadline Tracking monitors critical dates, and Exception Escalation handles urgent matters with Gmail and Slack alerts. Findings are routed by validation status: validated cases are stored normally, while exceptions trigger multi-channel notifications and specialized handling.

Organizations reduce missed-deadline risk by 95%, automate routine case administration, ensure consistent procedural compliance, and enable attorneys to focus on legal strategy rather than docket management.
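The routing rule at the heart of the flow — validated cases are stored, exceptions fan out to alerts — can be sketched as a small branch. Field and channel names here are illustrative, not the workflow's actual node configuration:

```javascript
// Sketch of routing by validation status: store validated cases normally;
// exceptions escalate to multi-channel alerts. Field/channel names ASSUMED.
function routeCase(c) {
  if (c.validated && !c.exception) {
    return { action: "store" };
  }
  return {
    action: "escalate",
    channels: ["gmail", "slack"],          // urgent matters alert both channels
    reason: c.exception || "validation_failed",
  };
}
```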
**Setup Steps**

1. Connect the Schedule Trigger and set the monitoring frequency
2. Configure court data sources with API credentials
3. Add AI model API keys to the Classifier, Validation Agent, and Administration Orchestration Agent nodes
4. Define case classification rules and exception criteria in the agent prompts based on jurisdiction requirements
5. Set deadline thresholds for alert triggers
6. Link Gmail credentials for attorney and client notifications with templated messages
7. Configure Slack webhooks for urgent exception alerts to legal team channels

**Prerequisites:** Court system API access (PACER, state portals), case management system integration

**Use Cases:** Litigation deadline tracking, court filing monitoring, statute of limitations management

**Customization:** Modify classification rules for practice area specializations (patent, corporate, criminal)

**Benefits:** Reduces missed-deadline risk by 95%, automates routine case administration tasks
by Rahul Joshi
📊 **Description**

This workflow automatically classifies new Stack Overflow questions by topic, generates structured FAQ content using GPT-4o-mini, logs each entry in Google Sheets, saves formatted FAQs in Notion, and notifies your team on Slack — ensuring your product and support teams stay aligned with real-world developer discussions. 🤖💬📚

⚙️ **What This Template Does**

- **Step 1:** Monitors Stack Overflow RSS feeds for new questions related to your selected tags. ⏱️
- **Step 2:** Filters out irrelevant or incomplete questions before processing. 🧹
- **Step 3:** Uses OpenAI GPT-4o-mini to classify each question into a topic category (Frontend, Backend, DevOps, etc.). 🧠
- **Step 4:** Generates structured FAQ content including summaries, technical insights, and internal guidance. 📄
- **Step 5:** Saves formatted entries into your Notion knowledge-base database. 📚
- **Step 6:** Logs all FAQ data into a connected Google Sheet for analytics and tracking. 📊
- **Step 7:** Sends real-time Slack notifications with quick links to the new FAQ and the original Stack Overflow post. 🔔
- **Step 8:** Provides automatic error detection — any failed AI or Notion step triggers an instant Slack alert. 🚨

💡 **Key Benefits**

✅ Builds a continuously updated, AI-driven knowledge base
✅ Reduces repetitive support and documentation work
✅ Keeps product and dev teams aware of trending community issues
✅ Enhances internal docs with verified Stack Overflow insights
✅ Maintains an audit trail via Google Sheets
✅ Alerts your team instantly on errors or new FAQs

🧩 **Features**

- Automatic Stack Overflow RSS monitoring
- Dual-layer OpenAI integration (topic classification + FAQ generation)
- Structured Notion database integration
- Google Sheets logging for analytics
- Slack notifications for new FAQs and error alerts
- Custom tag-based question filtering
- Near real-time updates (every minute)
- Built-in error handling for reliability

🔐 **Requirements**

- OpenAI API key (GPT-4o-mini access)
- Notion API credentials with database access
- Google Sheets OAuth2 credentials
- Slack bot token with chat:write permissions
- Stack Overflow RSS feed URL for your preferred tags

👥 **Target Audience**

- SaaS or product teams building internal FAQ and knowledge systems
- Developer relations and documentation teams
- Customer-support teams automating knowledge reuse
- Technical communities curating content from Stack Overflow

🧭 **Setup Instructions**

1. Add your OpenAI API credentials in n8n.
2. Connect your Notion database and update the page or database ID.
3. Connect Google Sheets credentials and select your tracking sheet.
4. Connect your Slack account and specify your notification channel.
5. Update the RSS feed URL with your chosen Stack Overflow tags.
6. Run the workflow manually once to test connectivity, then enable automation.
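Step 2's relevance filter can be sketched as a predicate over feed items: drop anything missing a title or link, and keep only questions whose tags overlap your watched tag list. The field names below follow typical RSS-node output and are assumptions — adjust them to your feed's actual shape.

```javascript
// Sketch of the tag-based question filter (Step 2). Field names (title,
// link, tags) are ASSUMED from typical RSS-node output.
function filterQuestions(items, watchedTags) {
  const watched = new Set(watchedTags.map((t) => t.toLowerCase()));
  return items.filter(
    (q) =>
      q.title && q.link &&                                // drop incomplete items
      (q.tags || []).some((t) => watched.has(t.toLowerCase())) // tag overlap
  );
}
```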
by WeblineIndia
**WooCommerce Failed Order Fetch, Airtable Logging & Slack Alerts**

This workflow automatically checks WooCommerce for failed orders on a schedule, processes each order individually, prevents duplicate entries using Airtable, stores new failed orders centrally, and sends clear AI-generated Slack alerts. It ensures clean data, avoids duplicate records, and helps teams act quickly on failed payments.

**Quick Implementation Steps**

1. Set your WooCommerce domain in the Set WooCommerce Domain node.
2. Add the WooCommerce API Key + Secret in the Fetch Failed Orders From WooCommerce node.
3. Connect your Airtable Base/Table in the Search Records and Save Failed Order to Airtable nodes.
4. Add your OpenAI API key to the AI node.
5. Connect your Slack account + target channel.
6. Enable the workflow and let it run automatically.

**What It Does**

This workflow continuously monitors your WooCommerce store for failed orders without relying on webhooks. On every scheduled run, it fetches all orders marked as failed, processes them one by one, and checks Airtable using the order_id to see whether the order has already been logged. If the order already exists, the workflow safely stops processing for that order and optionally sends an informational Slack message. If the order is new, the workflow formats the data, saves it into Airtable, generates a clean AI-written summary, and sends a Slack alert to the team. This approach ensures data accuracy and prevents duplicate records.
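The duplicate gate described above is handled by an Airtable Search Records node feeding an IF node; as a plain predicate it amounts to checking whether any returned record already carries the current order_id. A minimal sketch (the Airtable record shape follows the API's standard `fields` envelope):

```javascript
// Sketch of the duplicate check: does the Airtable search result already
// contain this order_id? In the real workflow an IF node makes this decision.
function isDuplicate(airtableRecords, orderId) {
  return airtableRecords.some(
    (r) => String(r.fields && r.fields.order_id) === String(orderId)
  );
}

// isDuplicate(searchResult, order.id) === true  -> skip (optionally notify)
// isDuplicate(searchResult, order.id) === false -> format, save, alert
```

Comparing as strings makes the check idempotent whether WooCommerce returns the ID as a number or Airtable stores it as text.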
**Who's It For**

- WooCommerce store owners needing reliable failed-payment tracking
- Finance teams monitoring recovery opportunities
- Support teams requiring instant alerts
- Developers building reusable, idempotent workflows
- Agencies managing multiple WooCommerce stores
- Ops teams using Airtable for reporting and audits

**Requirements to Use This Workflow**

- Active n8n instance (cloud or self-hosted)
- WooCommerce store with REST API access
- Airtable account with a Base and Table
- Slack workspace with API access
- OpenAI API key (for AI-generated messages)
- Permission to write data to Airtable and Slack

**How It Works & How To Set Up**

Step 1: Configure the Scheduler
Set how often the workflow runs in Check Failed Orders (Scheduler) (e.g., every 5 minutes, 15 minutes, or hourly).

Step 2: Set Your WooCommerce Domain
In Set WooCommerce Domain, enter your store domain:

```
yourstore.com
```

This value is reused across the workflow.

Step 3: Fetch Failed Orders
In Fetch Failed Orders From WooCommerce, configure Basic Authentication using the Consumer Key and Consumer Secret. The workflow fetches:

```
https://{{wc_domain}}/wp-json/wc/v3/orders?status=failed
```

Step 4: Loop & Duplicate Check
Each failed order is processed individually using Loop Over Items. The workflow searches Airtable using Search Records to check whether the order_id already exists.
A Merge node ensures safe data handling, and the IF node decides whether the order is a duplicate or a new entry.

Step 5: Format New Order Data
The Format Order Data node normalizes WooCommerce data, maps failure reasons, builds admin and retry URLs, and prepares the data for storage.

Step 6: Save to Airtable
New failed orders are saved in Airtable using Save Failed Order to Airtable. Duplicate orders are skipped to prevent data duplication.

Step 7: Generate & Send Slack Alerts
For new failed orders, the workflow generates a concise AI-based summary and sends it to Slack. Duplicate orders can optionally trigger an informational Slack message.

**How To Customize**

- **Polling Frequency:** Change the scheduler interval
- **Duplicate Logic:** Modify the Airtable search or IF condition
- **Stored Fields:** Adjust Airtable field mappings
- **Formatting Rules:** Edit the JavaScript in **Format Order Data**
- **Slack Message Style:** Update the AI prompt

**Optional Enhancements**

- Retry-payment tracking with an attempts count
- Customer notification via email or SMS
- Jira/Trello ticket creation
- Google Sheets or BI dashboard sync
- Multi-store WooCommerce support

**Example Use Cases**

- Centralized failed-payment tracking in Airtable
- Instant Slack alerts for support and finance teams
- Clean reporting without duplicate records
- Faster issue resolution with AI summaries
- Scalable foundation for recovery automation

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| No orders fetched | Wrong WooCommerce domain or API URL | Check Set WooCommerce Domain and the HTTP Request URL |
| 401 Unauthorized | Invalid API key/secret | Regenerate keys from WooCommerce → REST API |
| Airtable record not created | Field mismatch | Confirm column names and types in Airtable |
| Slack message empty | AI node prompt or path mismatch | Confirm the output path: $json.output[0].content[0].text |
| Workflow not running | Scheduler disabled | Ensure the workflow is Active |
| API timeout | Store too slow or blocked | Whitelist the server IP or increase the timeout in the HTTP node |

**Need Help?**

If you need assistance customizing this workflow, adding new features, or integrating more systems, feel free to reach out. The n8n automation team at WeblineIndia can help with:

- Advanced WooCommerce automations
- Multi-store workflows
- Airtable/Slack/OpenAI integrations
- Custom logic, validations, and data pipelines

And many such advanced automation solutions. We're here to support you in scaling your automation journey.
by WeblineIndia
Weekly WooCommerce Finance KPI Automation with HTTP APIs & Slack This workflow automatically gathers weekly WooCommerce order and refund data, calculates essential financial KPIs, detects potential refund-related risks and sends a clear weekly finance summary to Slack. Once configured, it runs on a schedule and delivers leadership-ready insights without any manual reporting. Quick Implementation Steps Import the workflow into your automation platform. Update the WooCommerce store domain in the configuration step. Add WooCommerce Consumer Key and Consumer Secret for API access. Connect your Slack account and choose a destination channel. Enable the workflow to receive weekly finance updates automatically. What It Does This workflow automates the weekly finance reporting process for WooCommerce stores by combining sales and refund data into a single, structured summary. It collects completed orders, cleans and standardizes the data and processes refund records to ensure accurate totals and counts. Using this data, the workflow calculates key metrics such as total sales amount, number of orders, total refunds and refund ratios. These KPIs help teams quickly assess store performance and identify refund patterns that may require attention. The workflow concludes by sending a well-formatted, executive-friendly digest to Slack, ensuring that finance and leadership teams always have timely and reliable insights. Who’s It For This workflow is designed for: Finance and accounting teams CFOs and business leaders WooCommerce store owners Operations and revenue managers Agencies managing WooCommerce stores Requirements to Use This Workflow To use this workflow, you need: A workflow automation platform A WooCommerce store with REST API access enabled WooCommerce Consumer Key and Consumer Secret Access to a Slack workspace Permission to configure API credentials and Slack integrations How It Works & How To Set Up 1. 
Weekly Schedule Trigger Automatically runs the workflow once every week. Controls when KPI data is generated.

2. WooCommerce Store Configuration Defines the WooCommerce domain used for all API calls. Makes it easy to reuse or update the workflow for another store.

3. Fetch WooCommerce Orders Retrieves order data using the WooCommerce Orders API. Pulls data relevant to the weekly reporting period. Uses HTTP Basic Authentication.

4. Filter Completed Orders Keeps only orders with a completed status. Ensures only successful sales are included.

5. Normalize Order Data Extracts essential finance fields: order ID, order date, order total and line items. Creates a clean data structure for KPI calculations.

6. Fetch WooCommerce Refunds Retrieves refund records using the WooCommerce Refunds API. Ensures refunds are analyzed alongside sales data.

7. Normalize Refund Data Extracts refund ID, parent order ID and refund amount. Standardizes refund information for accurate aggregation.

8. Combine Orders & Refunds Merges sales and refund datasets into a single input. Prepares the data for KPI calculations.

9. Calculate Finance KPIs Calculates: total sales amount, total order count, total refund amount, total refund count, refund-to-sales ratio and refund-to-order ratio. Removes duplicate refunds. Adds automatic risk flags when thresholds are exceeded.

10. Send Weekly KPI Digest to Slack Posts a formatted summary message to Slack. Users can select any Slack channel for delivery. Designed for quick review by leadership teams.

How To Customize Nodes

**Schedule**: Change the weekly run day or time.
**Order Filters**: Include additional order statuses if required.
**KPI Logic**: Modify ratios, thresholds or calculations.
**Slack Message**: Adjust formatting, wording or emojis.
**Store Setup**: Reuse the workflow for different WooCommerce stores.
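The aggregation in step 9 ("Calculate Finance KPIs") can be sketched as a small function of the kind you would write in an n8n Code node. This is a minimal illustration under simplified assumptions: the field names (`total`, `id`, `amount`), the input shapes and the 10% risk threshold are illustrative, not the exact WooCommerce payload or the template's actual logic.

```javascript
// Sketch of the KPI math described in step 9 (illustrative field names).
function calculateKpis(orders, refunds) {
  // De-duplicate refunds by id before aggregating.
  const seen = new Set();
  const uniqueRefunds = refunds.filter(r => {
    if (seen.has(r.id)) return false;
    seen.add(r.id);
    return true;
  });

  const totalSales = orders.reduce((sum, o) => sum + Number(o.total), 0);
  const totalRefunds = uniqueRefunds.reduce(
    (sum, r) => sum + Math.abs(Number(r.amount)), 0);

  const kpis = {
    totalSales,
    orderCount: orders.length,
    totalRefunds,
    refundCount: uniqueRefunds.length,
    refundToSalesRatio: totalSales ? totalRefunds / totalSales : 0,
    refundToOrderRatio: orders.length ? uniqueRefunds.length / orders.length : 0,
  };

  // Example risk flag: refund-to-sales ratio above 10% (threshold is an assumption).
  kpis.riskFlag = kpis.refundToSalesRatio > 0.10;
  return kpis;
}
```

The Slack digest node then only has to format the returned object; keeping the math in one place makes the thresholds easy to adjust.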
Add-Ons (Optional Enhancements) This workflow can be extended with: explicit weekly date filters, spreadsheet or database exports, email delivery in addition to Slack, multi-store KPI reporting, product-level or category-level metrics, and automated alerts for unusual refund activity.

Use Case Examples Common use cases include: weekly WooCommerce finance performance reporting, refund trend monitoring for leadership teams, automated CFO-level summaries, operations and revenue review meetings, and agency reporting for managed WooCommerce stores. There are many additional business-specific use cases where this workflow can be applied.

Troubleshooting Guide

| Issue | Possible Cause | Solution |
| --------------------------- | ---------------------------------- | -------------------------------------------- |
| Slack message not received | Slack integration not configured | Reconnect Slack account and select a channel |
| Sales totals appear as zero | No completed orders for the period | Verify order status and store activity |
| Refund data missing | API permission issue | Confirm WooCommerce API access |
| Authentication error | Invalid credentials | Regenerate Consumer Key and Secret |
| Workflow not running | Automation not activated | Enable the workflow |

Need Help? Need assistance setting up this workflow, customizing KPIs or extending it with advanced reporting features? WeblineIndia can help you: configure and deploy automation workflows, customize finance and reporting logic, integrate WooCommerce with Slack and other tools, and build similar workflows tailored to your business. 👉 Reach out to our n8n automation experts at WeblineIndia for expert support and custom automation solutions.
by Vigh Sandor
Setup Instructions

Overview This n8n workflow monitors your Proxmox VE server and sends automated reports to Telegram every 15 minutes. It tracks VM status, host resource usage, temperature sensors, and detects recently stopped VMs.

Prerequisites

Required Software: n8n instance (self-hosted or cloud), Proxmox VE server with API access, a Telegram account with a bot created via BotFather, and the lm-sensors package installed on the Proxmox host.

Required Access: Proxmox admin credentials (username and password), SSH access to the Proxmox server, a Telegram Bot API token, and your Telegram Chat ID.

Installation Steps

Step 1: Install Temperature Sensors on Proxmox SSH into your Proxmox server and run:

apt-get update
apt-get install -y lm-sensors
sensors-detect

Press ENTER to accept default answers during sensors-detect setup. Test that sensors work:

sensors | grep -E 'Package|Core'

Step 2: Create Telegram Bot Open Telegram and search for BotFather. Send the /newbot command and follow the prompts to create your bot, then save the API token provided. Get your Chat ID by sending a message to your bot, then visiting:

https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates

Look for "chat":{"id": YOUR_CHAT_ID in the response.

Step 3: Configure n8n Credentials

SSH Password Credential: In n8n, go to the Credentials menu and create a new SSH Password credential. Enter: Host: Your Proxmox IP address; Port: 22; Username: root (or your admin user); Password: Your Proxmox password.

Telegram API Credential: Create a new Telegram API credential and enter the Bot Token from BotFather.

Step 4: Import and Configure Workflow Import the JSON workflow into n8n and open the "Set Variables" node. Update the following values: PROXMOX_IP: Your Proxmox server IP address; PROXMOX_PORT: API port (default: 8006); PROXMOX_NODE: Node name (default: pve); TELEGRAM_CHAT_ID: Your Telegram chat ID; PROXMOX_USER: Proxmox username with realm (e.g., root@pam); PROXMOX_PASSWORD: Proxmox password. Connect credentials: SSH - Get Sensors node: Select your SSH credential; Send Telegram Report node: Select
your Telegram credential Save the workflow Activate the workflow Configuration Options Adjust Monitoring Interval Edit the "Schedule Every 15min" node: Change minutesInterval value to desired interval (in minutes) Recommended: 5-30 minutes Adjust Recently Stopped VM Detection Window Edit the "Process Data" node: Find line: const fifteenMinutesAgo = now - 900; Change 900 to desired seconds (900 = 15 minutes) Modify Temperature Warning Threshold The workflow uses the "high" threshold defined by sensors. To manually set threshold, edit "Process Data" node: Modify the temperature parsing logic Change comparison: if (current >= high) to use custom value Testing Test Individual Components Execute "Set Variables" node manually - verify output Execute "Proxmox Login" node - check for valid ticket Execute "API - VM List" - confirm VM data received Execute complete workflow - check Telegram for message Troubleshooting Login fails: Verify PROXMOX_USER format includes realm (e.g., root@pam) Check password is correct Ensure allowUnauthorizedCerts is enabled for self-signed certificates No temperature data: Verify lm-sensors is installed on Proxmox Run sensors command manually via SSH Check SSH credentials are correct Recently stopped VMs not detected: Check task log API endpoint returns data Verify VM was stopped within detection window Ensure task types qmstop or qmshutdown are logged Telegram not receiving messages: Verify bot token is correct Confirm chat ID is accurate Check bot was started (send /start to bot) Verify parse_mode is set to HTML in Telegram node How It Works Workflow Architecture The workflow executes in a sequential chain of nodes that gather data from multiple sources, process it, and deliver a formatted report. 
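The "recently stopped VM" detection performed in the Process Data node can be sketched roughly as follows. This is a simplified, illustrative stand-in, not the template's exact code: it assumes Proxmox task entries expose `type`, `starttime` (epoch seconds) and a UPID string, and relies on the colon-delimited UPID layout (`UPID:node:pid:pstart:starttime:type:id:user...`) that the workflow uses to extract the VM ID.

```javascript
// Illustrative helper: find VMs stopped within the detection window.
function recentlyStoppedVmIds(tasks, nowSeconds, windowSeconds = 900) {
  const cutoff = nowSeconds - windowSeconds; // 900 s = 15 minutes

  return tasks
    // Keep only stop/shutdown tasks inside the window.
    .filter(t => (t.type === 'qmstop' || t.type === 'qmshutdown')
      && t.starttime >= cutoff)
    // UPID format: UPID:node:pid:pstart:starttime:type:id:user...
    // The 7th colon-separated field carries the VM ID.
    .map(t => t.upid.split(':')[6]);
}
```

Matching the returned IDs against the VM list from `/api2/json/nodes/{node}/qemu` then yields the names shown in the Telegram report; widening `windowSeconds` is the same change as editing the `const fifteenMinutesAgo = now - 900;` line described above.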
Execution Flow Schedule Trigger (15min) Set Variables Proxmox Login (get authentication ticket) Prepare Auth (prepare credentials for API calls) API - VM List (get all VMs and their status) API - Node Tasks (get recent task log) API - Node Status (get host CPU, memory, uptime) SSH - Get Sensors (get temperature data) Process Data (analyze and structure all data) Generate Formatted Message (create Telegram message) Send Telegram Report (deliver via Telegram) Data Collection VM Information (Proxmox API) Endpoint: /api2/json/nodes/{node}/qemu Retrieves: Total VM count Running VM count Stopped VM count VM names and IDs Task Log (Proxmox API) Endpoint: /api2/json/nodes/{node}/tasks?limit=100 Retrieves recent tasks to detect: qmstop operations (VM stop commands) qmshutdown operations (VM shutdown commands) Task timestamps Task status Host Status (Proxmox API) Endpoint: /api2/json/nodes/{node}/status Retrieves: CPU usage percentage Memory total and used (in GB) System uptime (in seconds) Temperature Data (SSH) Command: sensors | grep -E 'Package|Core' Retrieves: CPU package temperature Individual core temperatures High and critical thresholds Data Processing VM Status Analysis Counts total, running, and stopped VMs Queries task log for stop/shutdown operations Filters tasks within 15-minute window Extracts VM ID from task UPID string Matches VM ID to VM name from VM list Calculates time elapsed since stop operation Temperature Intelligence The workflow implements smart temperature reporting: Normal Operation (all temps below high threshold): Calculates average temperature across all cores Displays min, max, and average values Example: "Average: 47.5 C (Min: 44.0 C, Max: 52.0 C)" Warning State (any temp at or above high threshold): Displays all temperature readings in detail Shows full sensor output with thresholds Changes section title to "Temperature Warning" Adds fire emoji indicator Resource Calculation CPU Usage: API returns decimal (0.0 to 1.0) Converted to 
percentage: cpu * 100 Memory: API returns bytes Converted to GB: bytes / (1024^3) Calculates percentage: (used / total) * 100 Uptime: API returns seconds Converted to days and hours: days = seconds / 86400, hours = (seconds % 86400) / 3600 Report Generation Message Structure The Telegram message uses HTML formatting for structure: Header Section Report title Generation timestamp Virtual Machines Section Total VM count Running VMs with checkmark Stopped VMs with stop sign Recently stopped count with warning Detailed list if VMs stopped in last 15 minutes Host Resources Section CPU usage percentage Memory used/total with percentage Host uptime in days and hours Temperature Section Smart display (summary or detailed) Warning indicator if thresholds exceeded Monospace formatting for sensor output HTML Formatting Features Bold tags for headers and labels Italic for timestamps Code blocks for temperature data Unicode separators for visual structure Emoji indicators for status (checkmark, stop, warning, fire) Security Considerations Credential Storage Passwords stored in n8n Set node (encrypted in database) Alternative: Use n8n environment variables Recommendation: Use Proxmox API tokens instead of passwords API Communication HTTPS with self-signed certificate acceptance Authentication via session tickets (15-minute validity) CSRF token validation for API requests SSH Access Password-based authentication (can use key-based) Commands limited to read-only operations No privilege escalation required Performance Impact API Load 3 API calls per execution (VM list, tasks, status) Lightweight endpoints with minimal data 15-minute interval reduces server load Execution Time Typical workflow execution: 5-10 seconds Login: 1-2 seconds API calls: 2-3 seconds SSH command: 1-2 seconds Processing: less than 1 second Resource Usage Minimal CPU impact on Proxmox Small memory footprint Negligible network bandwidth Extensibility Adding Additional Metrics To monitor additional data points: 
Add a new API call node after "Prepare Auth", update the "Process Data" node to include the new data, and modify "Generate Formatted Message" for display.

Integration with Other Services The workflow can be extended to: send to Discord, Slack, or email; write to a database or log file; trigger alerts based on thresholds; or generate charts or graphs.

Multi-Node Monitoring To monitor multiple Proxmox nodes: duplicate the API call nodes, update the node names in the URLs, merge the data in the processing step, and generate a combined report.
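The unit conversions described in the "Resource Calculation" section above can be sketched as one small function. This is a minimal illustration assuming the field layout of the Proxmox node status response (`cpu` as a 0.0–1.0 fraction, `memory.used`/`memory.total` in bytes, `uptime` in seconds); the function name is illustrative.

```javascript
// Sketch of the resource conversions: CPU fraction -> percent,
// bytes -> GB, seconds -> days and hours.
function formatHostResources(status) {
  const cpuPercent = status.cpu * 100;                       // cpu * 100
  const memUsedGb = status.memory.used / (1024 ** 3);        // bytes / 1024^3
  const memTotalGb = status.memory.total / (1024 ** 3);
  const memPercent = (status.memory.used / status.memory.total) * 100;
  const days = Math.floor(status.uptime / 86400);            // seconds / 86400
  const hours = Math.floor((status.uptime % 86400) / 3600);
  return { cpuPercent, memUsedGb, memTotalGb, memPercent, days, hours };
}
```

The Generate Formatted Message node only needs to round these values and wrap them in the HTML tags Telegram expects.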