by noda
Price Anomaly Detection & News Alert (Marketstack + HN + DeepL + Slack)

Overview
This workflow monitors a stock's closing price via Marketstack. It computes a 20-day moving average and standard deviation (±2σ). If the latest close is outside ±2σ, it flags an anomaly, fetches related headlines from Hacker News, translates them to Japanese with DeepL, and posts both original and translated text to Slack. When no anomaly is detected, it sends a concise "normal" report.

How it works
1) Daily trigger at 09:00 JST
2) Marketstack: fetch EOD data
3) Code: compute mean/σ and classify (normal/high/low)
4) IF: anomaly? → yes = news path / no = normal report
5) Hacker News: search related items
6) DeepL: translate EN → JA
7) Slack: send bilingual notification

Requirements
- Marketstack API key
- DeepL API key
- Slack OAuth2 (bot token / channel permission)

Notes
- Edit the ticker in Get Stock Data.
- Adjust N (days) and k (sigma multiplier) in Calculate Deviation.
- Keep credentials out of HTTP nodes (use n8n Credentials).
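A minimal sketch of what the Calculate Deviation Code node could look like, assuming the Marketstack EOD response has already been split into one item per day (newest first) with a numeric `close` field; the variable names `N`, `k`, and `status` are illustrative, not the exact node contents.

```javascript
// n8n Code node (JavaScript) – illustrative sketch only.
// Assumes incoming items carry json.close for each trading day, newest first.
const N = 20;   // moving-average window (days)
const k = 2;    // sigma multiplier

const closes = items.map(i => i.json.close).slice(0, N);
const latest = closes[0];

const mean = closes.reduce((s, v) => s + v, 0) / closes.length;
const variance = closes.reduce((s, v) => s + (v - mean) ** 2, 0) / closes.length;
const sigma = Math.sqrt(variance);

let status = 'normal';
if (latest > mean + k * sigma) status = 'high';
else if (latest < mean - k * sigma) status = 'low';

return [{ json: { latest, mean, sigma, upper: mean + k * sigma, lower: mean - k * sigma, status } }];
```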
by Rahul Joshi
Description
Automatically detect customer churn risks from Zendesk tickets, log them into Google Sheets for tracking, and send instant Slack alerts to your customer success team. This workflow helps you spot unhappy customers early and take proactive action to reduce churn. 🚨📊💬

What This Template Does
- Fetches Zendesk tickets daily on schedule (8:00 PM). ⏰
- Processes and formats ticket data into clean JSON (priority, age, urgency). 🧠
- Identifies churn risks based on negative satisfaction ratings. ⚠️
- Logs churn risk tickets into Google Sheets for analysis and reporting. 📈
- Sends formatted Slack alerts with ticket details to the CS team channel. 📢

Key Benefits
- Detects unhappy customers before they churn. 🚨
- Centralized churn tracking for reporting and team reviews. 🧾
- Proactive alerts to reduce response delays. ⏱️
- Clean, structured ticket data for analytics and filtering. 🔄
- Strengthens customer success strategy with real-time visibility. 🌐

Features
- Schedule Trigger – Runs every weekday at 8:00 PM. 🗓️
- Zendesk Integration – Fetches all tickets automatically. 🎫
- Smart Data Processing – Adds ticket age, urgency, and priority mapping. 🧮
- Churn Risk Filter – Flags tickets with negative satisfaction scores. 🚩
- Google Sheets Logging – Saves churn risk details with metadata. 📊
- Slack Alerts – Sends formatted messages with ID, subject, rating, and action steps. 💬

Requirements
- n8n instance (cloud or self-hosted).
- Zendesk API credentials with ticket read access.
- Google Sheets OAuth2 credentials with write permissions.
- Slack Bot API credentials with channel posting permissions.
- Pre-configured Google Sheet for churn risk logging.

Target Audience
- Customer Success teams monitoring churn risk. 👩‍💻
- SaaS companies tracking customer health. 🚀
- Support managers who want proactive churn alerts. 🛠️
- SMBs improving retention through automation. 🏢
- Remote CS teams needing instant notifications. 🌐

Step-by-Step Setup Instructions
1. Connect your Zendesk, Google Sheets, and Slack credentials in n8n. 🔑
2. Update the Schedule Trigger (default: daily at 8:00 PM) if needed. ⏰
3. Replace the Google Sheet ID with your churn risk tracking sheet. 📊
4. Confirm the Slack channel ID for alerts (default: zendesk-churn-alerts). 💬
5. Adjust churn filter logic (default: satisfaction_score = "bad"). 🎯
6. Run a test to fetch Zendesk tickets and validate Sheets + Slack outputs. ✅
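A rough sketch of the "Smart Data Processing" step as an n8n Code node, assuming Zendesk-style ticket fields (`created_at`, `priority`, `satisfaction_rating.score`); the urgency thresholds are illustrative, not the template's exact logic.

```javascript
// n8n Code node (JavaScript) – illustrative sketch, not the shipped node.
// Assumes each incoming item is a Zendesk ticket object.
const now = Date.now();

return items.map(item => {
  const t = item.json;
  const ageDays = Math.floor((now - new Date(t.created_at).getTime()) / 86400000);

  // Simple urgency mapping from priority + age – thresholds are assumptions.
  const priorityWeight = { urgent: 3, high: 2, normal: 1, low: 0 }[t.priority] ?? 1;
  const urgency = priorityWeight >= 2 || ageDays > 7 ? 'high' : 'normal';

  // Churn risk flag: negative satisfaction rating (the template's default filter).
  const churnRisk = t.satisfaction_rating?.score === 'bad';

  return { json: { ...t, ageDays, urgency, churnRisk } };
});
```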
by 福壽一貴
Who is this for?
- **Dream journaling enthusiasts** who want to visualize and record their dreams
- **Self-improvement practitioners** interested in dream analysis and psychology
- **Content creators** looking for unique, AI-generated dream-based content
- **Wellness coaches and therapists** who use dream work with clients

What it does
- Receives dream descriptions via Telegram bot commands
- Parses visual style selection from 8 options (cinematic, ghibli, surreal, vintage, horror, abstract, watercolor, cyberpunk)
- Analyzes the dream using AI to extract themes, symbols, and psychological meaning
- Generates an optimized video prompt tailored to the selected style with audio descriptions
- Creates an AI video with native audio using Google Veo3 (single API call)
- Logs to Google Sheets as a searchable dream journal
- Sends video + analysis back to the user via Telegram

How to set up
Estimated setup time: 15 minutes

Step 1: Create Telegram Bot
1. Message @BotFather on Telegram
2. Send /newbot and follow the prompts
3. Copy the API token

Step 2: Get fal.ai API Key
1. Sign up at fal.ai
2. Generate an API key from the dashboard
3. In n8n, create a Header Auth credential:
   - Name: Authorization
   - Value: Key YOUR_FAL_API_KEY

Step 3: Get OpenRouter API Key
1. Sign up at openrouter.ai
2. Generate an API key
3. Add it to n8n as an OpenRouter credential

Step 4: Set up Google Sheets (Optional)
1. Create a new spreadsheet with columns: Timestamp, Username, Style, Dream, Theme, Emotion, Type, Meaning, Video URL
2. Connect a Google Sheets credential in n8n
3. Select your document and sheet in the "Log to Google Sheets" node

Step 5: Connect Credentials
- Add the Telegram credential to all Telegram nodes
- Add the fal.ai Header Auth to both HTTP Request nodes
- Add the OpenRouter credential to the LLM node

Requirements
| Service | Purpose | Cost |
|---------|---------|------|
| Telegram Bot | User interface | Free |
| fal.ai (Veo3) | Video + audio generation | ~$0.10-0.15/video |
| OpenRouter | LLM for dream analysis | ~$0.01-0.03/request |
| Google Sheets | Dream journal storage | Free |

How to customize
- **Change LLM**: Replace OpenRouter with OpenAI, Anthropic, or other providers
- **Add styles**: Edit the STYLES object in the "Parse Dream Command" node
- **Modify analysis**: Edit the system prompt in the "AI Dream Analyzer Agent" node
- **Change video model**: Replace the Veo3 URL with Kling, Luma, or other fal.ai models
- **Skip logging**: Remove or disable the Google Sheets node

Commands
| Command | Description |
|---------|-------------|
| /dream [text] | Generate video in cinematic style |
| /dream [style] [text] | Generate with specific style |
| /styles | Show all available styles |

Example
Input: /dream ghibli I was flying over a forest where the trees had glowing leaves
Output: 8-second AI video with magical Ghibli-style visuals, ambient soundtrack, plus psychological analysis of flight symbolism and nature connection themes.
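A minimal sketch of how the "Parse Dream Command" Code node could parse the command formats above; the STYLES keys mirror the 8 listed styles, while the Telegram payload path (`$json.message.text`) and the fallback to the cinematic default are assumptions to adapt to your trigger output.

```javascript
// Illustrative Code node (JavaScript) – not the exact node shipped with the template.
const STYLES = ['cinematic', 'ghibli', 'surreal', 'vintage', 'horror', 'abstract', 'watercolor', 'cyberpunk'];

const text = ($json.message?.text || '').trim();
const match = text.match(/^\/dream\s+(.*)$/s);
if (!match) {
  return [{ json: { valid: false, reply: 'Usage: /dream [style] [text]' } }];
}

const words = match[1].split(/\s+/);
const maybeStyle = words[0].toLowerCase();
const hasStyle = STYLES.includes(maybeStyle);

return [{
  json: {
    valid: true,
    style: hasStyle ? maybeStyle : 'cinematic',          // default style
    dream: hasStyle ? words.slice(1).join(' ') : match[1],
    username: $json.message?.from?.username,
  },
}];
```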
by Rahul Joshi
📘 Description:
This workflow automates the incident response lifecycle — from creation to communication and archival. It instantly creates Jira tickets for new incidents, alerts the on-call Slack team, generates timeline reports, logs the status in Google Sheets, and archives documentation to Google Drive — all automatically. It helps engineering and DevOps teams respond faster, maintain audit trails, and ensure no incident details are lost, even after Slack or Jira history expires.

⚙️ What This Workflow Does (Step-by-Step)
- 🟢 Manual Trigger – Start the incident creation and alerting process manually on demand.
- 🏷️ Define Incident Metadata – Sets up standardized incident data (Service, Severity, Description) used across Jira, Slack, and Sheets for consistent processing.
- 🎫 Create Jira Incident Ticket – Automatically creates a Jira task with service, severity, and description fields. Returns a unique Jira key and link for tracking.
- ✅ Validate Jira Ticket Creation Success – Confirms the Jira ticket was successfully created before continuing. True path: proceeds to Slack alerts and the documentation flow. False path: logs the failure details to Google Sheets for debugging.
- 🚨 Log Jira Creation Failures to Error Sheet – Records any Jira API errors, permission issues, or timeouts to an error log sheet for reliability monitoring.
- 🔗 Combine Incident & Jira Data – Merges incident context with Jira ticket data to ensure all details are unified for downstream notifications.
- 💬 Format Incident Alert for Slack – Generates a rich Slack message containing the Jira key, service, severity, and description with clickable Jira links.
- 📢 Alert On-Call Team in Slack – Posts the formatted message directly to the #oncall Slack channel to instantly notify engineers.
- 📋 Generate Incident Timeline Report – Parses the Slack message content to create a detailed incident timeline including timestamps, service, severity, and placeholders for postmortem tracking.
- 📄 Convert Timeline to Text File – Converts the generated timeline into a structured .txt file for archival and compliance.
- ☁️ Archive Incident Timeline to Drive – Uploads the finalized incident report to Google Drive ("Incident Reports" folder) with timestamped filenames for traceability.
- 📊 Log Incident to Status Tracking Sheet – Appends the Jira key, service, severity, and timestamp to the "status update" Google Sheet to build a live incident dashboard and enable SLA tracking.

🧩 Prerequisites
- Jira account with API access
- Google Sheets for "status update" and "error log" tracking
- Slack workspace connected via API credentials
- Google Drive access for archival

💡 Key Benefits
✅ Instant Slack alerts for new incidents
✅ Centralized Jira ticketing and tracking
✅ Automated timeline documentation for audits
✅ Seamless Google Drive archival and status logging
✅ Reduced MTTR through faster communication

👥 Perfect For
- DevOps and SRE teams managing production incidents
- Engineering managers overseeing uptime and reliability
- Organizations needing automated post-incident documentation
- Teams focused on SLA adherence and compliance reporting
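A rough sketch of the "Format Incident Alert for Slack" step as a Code node; the field names (`service`, `severity`, `description`, `key`) and the Jira base URL are assumptions for illustration, to be aligned with the merged incident + Jira item in your instance.

```javascript
// Illustrative Code node (JavaScript) – adjust field names and URL to your setup.
const { service, severity, description } = $json;
const jiraKey = $json.key;                                              // e.g. "OPS-123" (assumed field)
const jiraUrl = `https://your-domain.atlassian.net/browse/${jiraKey}`;  // hypothetical Jira base URL

const text = [
  `:rotating_light: *New Incident* <${jiraUrl}|${jiraKey}>`,
  `*Service:* ${service}`,
  `*Severity:* ${severity}`,
  `*Description:* ${description}`,
].join('\n');

return [{ json: { channel: '#oncall', text } }];
```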
by Evoort Solutions
Job Search Automation with Job Search Global API & Google Sheet Logging

Description:
Automate your job search process by querying the Job Search Global API via RapidAPI every 6 hours for a specified keyword like "Web Developer." This workflow extracts job listings and saves them directly to Google Sheets, with alerts sent for any API failures.

Workflow Overview
1. Schedule Trigger – Runs the workflow automatically every 6 hours to ensure up-to-date job listings.
2. Set Search Term – Defines the dynamic job keyword, e.g., "Web Developer," used in API requests.
3. Fetch Job Listings – Sends a POST request to the Job Search Global API (via RapidAPI) to retrieve job listings with pagination.
4. Check API Response – Validates the API response status, branching the workflow on success or failure.
5. Extract Job Data – Parses the job listings array from the API response for processing.
6. Save to Google Sheet – Appends or updates job listings in Google Sheets, avoiding duplicates by matching job titles.
7. Send Failure Notification Email – Sends an alert email if the API response fails or returns an error.

How to Obtain Your RapidAPI Key (Quick Steps)
1. Go to the RapidAPI Job Search Global API page.
2. Sign up or log in to your RapidAPI account.
3. Subscribe to the API plan that suits your needs.
4. Copy your unique X-RapidAPI-Key from the dashboard.
5. Insert this key into your workflow's HTTP Request node headers.

How to Configure Google Sheets
1. Create a new Google Sheet for job listings.
2. Share the sheet with your Google Service Account email to enable API access.
3. Use the sheet URL in the Google Sheets node within your workflow.
4. Map columns correctly based on the job data fields.

Google Sheet Columns Used
| Column Name | Description |
| ----------- | ----------------------------------- |
| title | Job title |
| url | Job posting URL |
| company | Company name |
| postDate | Date job was posted |
| jobSource | Source of the job listing |
| slug | Unique job identifier or slug |
| sentiment | Sentiment analysis score (if any) |
| dateAdded | Date the job was added to the sheet |
| tags | Associated tags or keywords |
| viewCount | Number of views for the job post |

Use Cases & Benefits
- **Automated Job Tracking:** Get fresh job listings without manual searching by automatically querying the Job Search Global API multiple times per day.
- **Centralized Job Data:** Save and update listings in Google Sheets for easy filtering, sharing, and tracking.
- **Failure Alerts:** Get notified immediately if API calls fail, helping maintain workflow reliability.
- **Customizable Search:** Change keywords anytime to tailor job searches for different roles or industries.

Who Is This Workflow For?
- **Recruiters** looking to monitor job market trends in real time.
- **Job Seekers** who want to automate job discovery for specific roles like "Web Developer."
- **HR Teams** managing talent pipelines and job postings.
- **Data Analysts** needing structured job market data for research or reporting.

Create your free n8n account and set up the workflow in just a few minutes using the link below:
👉 Start Automating with n8n
Save time, stay consistent, and keep your job pipeline up to date effortlessly!
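A hedged sketch of the "Fetch Job Listings" request written as plain JavaScript: the `X-RapidAPI-Key` / `X-RapidAPI-Host` header pattern is standard for RapidAPI, but the host, path, and body fields below are placeholders to copy from the API's RapidAPI page rather than the workflow's exact values.

```javascript
// Illustrative request sketch – replace the <job-search-global-host> placeholders
// with the values shown on the API's RapidAPI page.
const response = await fetch('https://<job-search-global-host>.p.rapidapi.com/search', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'X-RapidAPI-Key': process.env.RAPIDAPI_KEY,            // keep the key in a credential/env var
    'X-RapidAPI-Host': '<job-search-global-host>.p.rapidapi.com',
  },
  body: JSON.stringify({ query: 'Web Developer', page: 1 }), // keyword set in "Set Search Term"
});

const data = await response.json();
if (!response.ok) throw new Error(`API error ${response.status}`); // routes to the failure email path
```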
by malcolm
Inspiration & Notes
This workflow was born out of a very real problem. While writing a book, I found the process of discovering suitable literary agents and managing outreach to be manual and surprisingly difficult to scale. Researching agents, checking submission rules, personalizing emails, tracking submissions, and staying organized quickly became a full-time job on its own. So instead of doing it manually, I automated it. I built this entire workflow in 3 days — and the goal of publishing it is to show that you can do the same. With the right structure and intent, complex sales and marketing workflows don't have to take months to build.

Contact & Collaboration
If you have questions, business inquiries, or would like help setting up automation workflows, feel free to reach out:
📩 malcolm95authoring@gmail.com

I genuinely enjoy designing workflows and automation systems, especially when they support meaningful projects. I work primarily from interest and impact rather than purely financial motivation. Whether I take on a project for free or paid depends on the following:
- I LOVE setting up workflows and automation.
- I work for meaningfulness, not for money.
- I may do the work for **free**, depending on how meaningful the project is. If the problem statement matters, the motivation follows.
- It also depends on the **value I bring to the table** — if I can contribute significant value through system design, I'm more inclined to get involved.

If you're building something thoughtful and need help automating it, I'm always happy to have a conversation. Enjoy~!

0. Overview
Automates the end-to-end literary agent outreach pipeline, from data ingestion and eligibility filtering to deep agent research, personalized email generation, submission tracking, and analytics.

Architecture
The system is modular and organized into four logical domains:
--> Data Engineering
--> Marketing & Research
--> Sales (Outreach)
--> Data Analysis
Each domain operates independently and passes structured data downstream.

1. Data Engineering
Purpose: Ingest and normalize agent data from multiple sources into a single source of truth.

Inputs
- Google BigQuery
- Azure Blob Storage
- AWS S3
- Google Sheets
- (Optional) HTTP sources

Key Steps
- Scheduled ingestion trigger
- Merge and normalize heterogeneous data formats (CSV, tables)
- Deduplication and validation
- AI-assisted enrichment for missing metadata
- Append-only writes to a central Google Sheet

Output
Clean, normalized agent records ready for eligibility evaluation

2. Marketing & Research
Purpose: Decide who to contact and how to personalize outreach.

Eligibility Evaluation
An AI agent evaluates each record against strict rules:
- Email submissions enabled
- Not QueryTracker-only or QueryManager-only
- Genre fit (e.g. Memoir, Spiritual, Self-help, Psychology, Relationships, Family)

Outputs
- send_email (boolean)
- reason (auditable explanation)

Deep Research
For eligible agents only:
- Public research from agency sites, interviews, Manuscript Wish List, and LinkedIn (if public)
- Extracts: professional background, editorial interests, genres represented, notable clients/books (if publicly listed), public statements, and source-backed personalization angles

Strict rule: All claims must be explicitly cited; no inference or hallucination is allowed.

3. Sales (Outreach)
Purpose: Execute personalized email outreach and maintain clean submission tracking.

Steps
- AI generates agent-specific email copy
- Copy is normalized for tone and clarity
- Email is sent (e.g. Gmail)
- Submission metadata is logged: Submission Completed, Submission Timestamp, Channel used

Result
Consistent, traceable outreach with CRM-style hygiene

4. Data Analysis
Purpose: Measure pipeline health and outreach effectiveness.

Features
- Append-only decision and submission logs
- QuickChart visualizations for fast validation (e.g. TRUE vs FALSE completion rates)
- Optional integration with Power BI and Google Analytics 4

Supports
- Completion rate analysis
- Funnel tracking
- Source/platform performance
- Decision auditing

Design Principles
- **Separation of concerns** (ingestion ≠ decision ≠ outreach ≠ analytics)
- **AI with hard guardrails** (strict schemas, source-only facts)
- **Append-only logging** (analytics-safe, debuggable)
- **Modular & extensible** (plug-and-play data sources)
- **Human-readable + machine-usable outputs**

Constraints & Notes
- Only public, professional information is used
- No private or speculative data
- HTTP scraping avoided unless necessary
- Power BI Embedded is not required
- Workflow designed and implemented end-to-end in ~3 days

Use Cases
- Marketing: audience discovery, agent segmentation, personalization at scale, campaign readiness, funnel automation
- Sales: lead qualification, deduplication, outreach execution, status tracking, pipeline hygiene

Tech Stack
- **Automation:** n8n
- **AI:** OpenAI (GPT)
- **Scripting:** JavaScript
- **Data Stores:** Google Sheets
- **Email:** Gmail
- **Visualization:** QuickChart
- **BI (optional):** Power BI, Google Analytics 4
- **Cloud Sources:** AWS S3, Azure Blob, BigQuery

Status
This workflow is production-ready, modular, and designed for extension into other sales or marketing domains beyond literary outreach.
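As a concrete illustration of the "AI with hard guardrails" principle and the eligibility outputs listed above, here is a minimal sketch of the kind of strict output contract the eligibility agent could be held to; only `send_email` and `reason` come from the description, everything else is an assumption.

```javascript
// Illustrative output contract and guardrail check (JavaScript) – not the shipped schema.
const eligibilitySchema = {
  type: 'object',
  required: ['send_email', 'reason'],
  additionalProperties: false,
  properties: {
    send_email: { type: 'boolean' },            // true only if all strict rules pass
    reason: { type: 'string', minLength: 10 },  // auditable explanation, grounded in source fields
  },
};

// A simple validation step before the decision is logged (append-only):
function validateDecision(decision) {
  return typeof decision.send_email === 'boolean'
    && typeof decision.reason === 'string'
    && decision.reason.length >= 10;
}
```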
by Rahul Joshi
📊 Description
This workflow automatically classifies new Stack Overflow questions by topic, generates structured FAQ content using GPT-4o-mini, logs each entry in Google Sheets, saves formatted FAQs in Notion, and notifies your team on Slack — ensuring your product and support teams stay aligned with real-world developer discussions. 🤖💬📚

⚙️ What This Template Does
- Step 1: Monitors Stack Overflow RSS feeds for new questions related to your selected tags. ⏱️
- Step 2: Filters out irrelevant or incomplete questions before processing. 🧹
- Step 3: Uses OpenAI GPT-4o-mini to classify each question into a topic category (Frontend, Backend, DevOps, etc.). 🧠
- Step 4: Generates structured FAQ content including summaries, technical insights, and internal guidance. 📄
- Step 5: Saves formatted entries into your Notion knowledge-base database. 📚
- Step 6: Logs all FAQ data into a connected Google Sheet for analytics and tracking. 📊
- Step 7: Sends real-time Slack notifications with quick links to the new FAQ and the original Stack Overflow post. 🔔
- Step 8: Provides automatic error detection — any failed AI or Notion step triggers an instant Slack alert. 🚨

💡 Key Benefits
✅ Builds a continuously updated, AI-driven knowledge base
✅ Reduces repetitive support and documentation work
✅ Keeps product and dev teams aware of trending community issues
✅ Enhances internal docs with verified Stack Overflow insights
✅ Maintains an audit trail via Google Sheets
✅ Alerts your team instantly on errors or new FAQs

🧩 Features
- Automatic Stack Overflow RSS monitoring
- Dual-layer OpenAI integration (Topic Classification + FAQ Generation)
- Structured Notion database integration
- Google Sheets logging for analytics
- Slack notifications for new FAQs and error alerts
- Custom tag-based question filtering
- Near real-time updates (every minute)
- Built-in error handling for reliability

🔐 Requirements
- OpenAI API Key (GPT-4o-mini access)
- Notion API credentials with database access
- Google Sheets OAuth2 credentials
- Slack bot token with chat:write permissions
- Stack Overflow RSS feed URL for your preferred tags

👥 Target Audience
- SaaS or product teams building internal FAQ and knowledge systems
- Developer relations and documentation teams
- Customer-support teams automating knowledge reuse
- Technical communities curating content from Stack Overflow

🧭 Setup Instructions
1. Add your OpenAI API credentials in n8n.
2. Connect your Notion database and update the page or database ID.
3. Connect Google Sheets credentials and select your tracking sheet.
4. Connect your Slack account and specify your notification channel.
5. Update the RSS Feed URL with your chosen Stack Overflow tags.
6. Run the workflow manually once to test connectivity, then enable automation.
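A minimal sketch of the Step 2 filter as a Code node. The item fields (`title`, `contentSnippet`) follow the typical output of n8n's RSS Read node, and the length threshold and keyword list are assumptions; the commented feed URL shows the common Stack Overflow tag-feed pattern, which you should verify for your tags.

```javascript
// Illustrative Code node (JavaScript) for filtering irrelevant or incomplete questions.
// Common tag feed pattern (verify): https://stackoverflow.com/feeds/tag?tagnames=your-tag&sort=newest
const MIN_BODY_LENGTH = 80;    // skip near-empty questions (illustrative threshold)
const REQUIRED_KEYWORDS = [];  // optionally require extra keywords beyond the tag

return items.filter(item => {
  const { title = '', contentSnippet = '' } = item.json;
  if (!title || contentSnippet.length < MIN_BODY_LENGTH) return false;
  return REQUIRED_KEYWORDS.every(k =>
    (title + ' ' + contentSnippet).toLowerCase().includes(k.toLowerCase()));
});
```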
by Vigh Sandor
Setup Instructions

Overview
This n8n workflow monitors your Proxmox VE server and sends automated reports to Telegram every 15 minutes. It tracks VM status, host resource usage, and temperature sensors, and detects recently stopped VMs.

Prerequisites

Required Software
- n8n instance (self-hosted or cloud)
- Proxmox VE server with API access
- Telegram account with a bot created via BotFather
- lm-sensors package installed on the Proxmox host

Required Access
- Proxmox admin credentials (username and password)
- SSH access to the Proxmox server
- Telegram Bot API token
- Telegram Chat ID

Installation Steps

Step 1: Install Temperature Sensors on Proxmox
SSH into your Proxmox server and run:
apt-get update
apt-get install -y lm-sensors
sensors-detect
Press ENTER to accept the default answers during sensors-detect setup.
Test that sensors work:
sensors | grep -E 'Package|Core'

Step 2: Create Telegram Bot
1. Open Telegram and search for BotFather
2. Send the /newbot command
3. Follow the prompts to create your bot
4. Save the API token provided
5. Get your Chat ID by sending a message to your bot, then visiting:
   https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
6. Look for "chat":{"id": YOUR_CHAT_ID in the response

Step 3: Configure n8n Credentials

SSH Password Credential
1. In n8n, go to the Credentials menu
2. Create a new credential: SSH Password
3. Enter:
   - Host: Your Proxmox IP address
   - Port: 22
   - Username: root (or your admin user)
   - Password: Your Proxmox password

Telegram API Credential
1. Create a new credential: Telegram API
2. Enter the Bot Token from BotFather

Step 4: Import and Configure Workflow
1. Import the JSON workflow into n8n
2. Open the "Set Variables" node
3. Update the following values:
   - PROXMOX_IP: Your Proxmox server IP address
   - PROXMOX_PORT: API port (default: 8006)
   - PROXMOX_NODE: Node name (default: pve)
   - TELEGRAM_CHAT_ID: Your Telegram chat ID
   - PROXMOX_USER: Proxmox username with realm (e.g., root@pam)
   - PROXMOX_PASSWORD: Proxmox password
4. Connect credentials:
   - SSH - Get Sensors node: select your SSH credential
   - Send Telegram Report node: select your Telegram credential
5. Save the workflow
6. Activate the workflow

Configuration Options

Adjust Monitoring Interval
Edit the "Schedule Every 15min" node:
- Change the minutesInterval value to the desired interval (in minutes)
- Recommended: 5-30 minutes

Adjust Recently Stopped VM Detection Window
Edit the "Process Data" node:
- Find the line: const fifteenMinutesAgo = now - 900;
- Change 900 to the desired number of seconds (900 = 15 minutes)

Modify Temperature Warning Threshold
The workflow uses the "high" threshold defined by sensors.
To set the threshold manually, edit the "Process Data" node:
- Modify the temperature parsing logic
- Change the comparison if (current >= high) to use a custom value

Testing

Test Individual Components
1. Execute the "Set Variables" node manually – verify the output
2. Execute the "Proxmox Login" node – check for a valid ticket
3. Execute "API - VM List" – confirm VM data is received
4. Execute the complete workflow – check Telegram for the message

Troubleshooting

Login fails:
- Verify the PROXMOX_USER format includes the realm (e.g., root@pam)
- Check the password is correct
- Ensure allowUnauthorizedCerts is enabled for self-signed certificates

No temperature data:
- Verify lm-sensors is installed on Proxmox
- Run the sensors command manually via SSH
- Check the SSH credentials are correct

Recently stopped VMs not detected:
- Check the task log API endpoint returns data
- Verify the VM was stopped within the detection window
- Ensure task types qmstop or qmshutdown are logged

Telegram not receiving messages:
- Verify the bot token is correct
- Confirm the chat ID is accurate
- Check the bot was started (send /start to the bot)
- Verify parse_mode is set to HTML in the Telegram node

How It Works

Workflow Architecture
The workflow executes a sequential chain of nodes that gather data from multiple sources, process it, and deliver a formatted report.

Execution Flow
1. Schedule Trigger (15min)
2. Set Variables
3. Proxmox Login (get authentication ticket)
4. Prepare Auth (prepare credentials for API calls)
5. API - VM List (get all VMs and their status)
6. API - Node Tasks (get recent task log)
7. API - Node Status (get host CPU, memory, uptime)
8. SSH - Get Sensors (get temperature data)
9. Process Data (analyze and structure all data)
10. Generate Formatted Message (create Telegram message)
11. Send Telegram Report (deliver via Telegram)

Data Collection

VM Information (Proxmox API)
Endpoint: /api2/json/nodes/{node}/qemu
Retrieves: total VM count, running VM count, stopped VM count, VM names and IDs

Task Log (Proxmox API)
Endpoint: /api2/json/nodes/{node}/tasks?limit=100
Retrieves recent tasks to detect: qmstop operations (VM stop commands), qmshutdown operations (VM shutdown commands), task timestamps, task status

Host Status (Proxmox API)
Endpoint: /api2/json/nodes/{node}/status
Retrieves: CPU usage percentage, memory total and used (in GB), system uptime (in seconds)

Temperature Data (SSH)
Command: sensors | grep -E 'Package|Core'
Retrieves: CPU package temperature, individual core temperatures, high and critical thresholds

Data Processing

VM Status Analysis
- Counts total, running, and stopped VMs
- Queries the task log for stop/shutdown operations
- Filters tasks within the 15-minute window
- Extracts the VM ID from the task UPID string
- Matches the VM ID to the VM name from the VM list
- Calculates the time elapsed since the stop operation

Temperature Intelligence
The workflow implements smart temperature reporting:
- Normal operation (all temps below the high threshold): calculates the average temperature across all cores and displays min, max, and average values. Example: "Average: 47.5 C (Min: 44.0 C, Max: 52.0 C)"
- Warning state (any temp at or above the high threshold): displays all temperature readings in detail, shows the full sensor output with thresholds, changes the section title to "Temperature Warning", and adds a fire emoji indicator

Resource Calculation
- CPU usage: the API returns a decimal (0.0 to 1.0), converted to a percentage: cpu * 100
- Memory: the API returns bytes, converted to GB: bytes / (1024^3); percentage: (used / total) * 100
- Uptime: the API returns seconds, converted to days and hours: days = seconds / 86400, hours = (seconds % 86400) / 3600

Report Generation

Message Structure
The Telegram message uses HTML formatting and is structured as follows:
- Header Section: report title, generation timestamp
- Virtual Machines Section: total VM count, running VMs with checkmark, stopped VMs with stop sign, recently stopped count with warning, detailed list if VMs stopped in the last 15 minutes
- Host Resources Section: CPU usage percentage, memory used/total with percentage, host uptime in days and hours
- Temperature Section: smart display (summary or detailed), warning indicator if thresholds are exceeded, monospace formatting for sensor output

HTML Formatting Features
- Bold tags for headers and labels
- Italic for timestamps
- Code blocks for temperature data
- Unicode separators for visual structure
- Emoji indicators for status (checkmark, stop, warning, fire)

Security Considerations

Credential Storage
- Passwords are stored in the n8n Set node (encrypted in the database)
- Alternative: use n8n environment variables
- Recommendation: use Proxmox API tokens instead of passwords

API Communication
- HTTPS with self-signed certificate acceptance
- Authentication via session tickets (15-minute validity)
- CSRF token validation for API requests

SSH Access
- Password-based authentication (key-based is also possible)
- Commands limited to read-only operations
- No privilege escalation required

Performance Impact

API Load
- 3 API calls per execution (VM list, tasks, status)
- Lightweight endpoints with minimal data
- The 15-minute interval reduces server load

Execution Time
- Typical workflow execution: 5-10 seconds
- Login: 1-2 seconds
- API calls: 2-3 seconds
- SSH command: 1-2 seconds
- Processing: less than 1 second

Resource Usage
- Minimal CPU impact on Proxmox
- Small memory footprint
- Negligible network bandwidth

Extensibility

Adding Additional Metrics
To monitor additional data points:
1. Add a new API call node after "Prepare Auth"
2. Update the "Process Data" node to include the new data
3. Modify "Generate Formatted Message" for display

Integration with Other Services
The workflow can be extended to:
- Send to Discord, Slack, or email
- Write to a database or log file
- Trigger alerts based on thresholds
- Generate charts or graphs

Multi-Node Monitoring
To monitor multiple Proxmox nodes:
1. Duplicate the API call nodes
2. Update the node names in the URLs
3. Merge the data in the processing step
4. Generate a combined report
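As a reference for the formulas in the Data Processing section above, here is a condensed sketch of the resource-calculation and recently-stopped-VM logic from the "Process Data" node; the input mappings (`$json.nodeStatus`, `$json.tasks`) and the use of the task `id` field instead of full UPID parsing are simplifications, not the exact node contents.

```javascript
// Illustrative excerpt of the "Process Data" Code node (JavaScript).
const nodeStatus = $json.nodeStatus;   // { cpu, memory: { used, total }, uptime } – assumed mapping
const tasks = $json.tasks || [];       // array of { type, starttime, id } – assumed mapping

const now = Math.floor(Date.now() / 1000);
const fifteenMinutesAgo = now - 900;   // detection window in seconds (see configuration above)

// Host resources, using the conversions described above
const cpuPercent = (nodeStatus.cpu * 100).toFixed(1);                    // 0.0–1.0 -> %
const memUsedGb = (nodeStatus.memory.used / 1024 ** 3).toFixed(1);       // bytes -> GB
const memTotalGb = (nodeStatus.memory.total / 1024 ** 3).toFixed(1);
const memPercent = ((nodeStatus.memory.used / nodeStatus.memory.total) * 100).toFixed(1);
const uptimeDays = Math.floor(nodeStatus.uptime / 86400);
const uptimeHours = Math.floor((nodeStatus.uptime % 86400) / 3600);

// Recently stopped VMs: qmstop / qmshutdown tasks inside the window
// (the real node extracts the VM ID from the UPID string; t.id is a simplification)
const recentlyStopped = tasks
  .filter(t => ['qmstop', 'qmshutdown'].includes(t.type) && t.starttime >= fifteenMinutesAgo)
  .map(t => ({ vmid: t.id, minutesAgo: Math.round((now - t.starttime) / 60) }));

return [{ json: { cpuPercent, memUsedGb, memTotalGb, memPercent, uptimeDays, uptimeHours, recentlyStopped } }];
```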
by Rahul Joshi
Description
This workflow automates the evaluation of interviewer feedback using AI. It retrieves raw notes from Google Sheets, processes them through GPT-4o-mini for structured scoring, validates outputs, and calculates weighted quality scores. The system provides real-time Slack feedback to interviewers, logs AI errors for transparency, and recommends training if the feedback quality is low.

What This Template Does (Step-by-Step)
- ⚡ Manual Trigger – Runs the workflow manually to start evaluation.
- 📋 Fetch Raw Feedback Data (Google Sheets) – Reads all feedback entries (Role, Stage, Interviewer Email, Feedback Text, row_number).
- 🧠 AI Quality Evaluator (Azure GPT-4o-mini) – Processes feedback into structured JSON across 5 dimensions.
- 🔍 Analyze Feedback Quality (LLM Chain) – Applies scoring rules (Specificity, STAR, Bias-Free, Actionability, Depth) and outputs structured JSON.
- ✅ Validate AI Response – Ensures the AI output isn't undefined or malformed.
- 🚨 Log AI Errors (Google Sheets) – Records invalid AI responses for debugging and auditing.
- 🔄 Parse AI JSON Output (Code Node) – Converts AI JSON text into structured n8n objects with error handling.
- 🧮 Calculate Weighted Quality Score (Code Node) – Computes the final weighted score (0–100), generates flags, formats vague phrases, and preserves context.
- 💾 Save Scores to Spreadsheet (Google Sheets) – Updates the original feedback row with Score, Flags, and AI JSON.
- 💬 Send Feedback Summary to Interviewer (Slack) – Sends interviewers a structured Slack report (score, flags, vague phrases, STAR improvement tips).
- 🎯 Check if Training Needed – Applies threshold logic: if score < 50, route to training recommendations.
- 📚 Send Training Recommendations (Slack) – Delivers STAR method guides and bias-free interviewing resources to low scorers.

Prerequisites
- Google Sheets (Raw_Feedback + Error Log Sheet)
- Azure OpenAI API credentials (for GPT-4o-mini)
- Slack API credentials (for sending feedback & training notifications)
- n8n instance (cloud or self-hosted)

Key Benefits
✅ Automated interview feedback quality scoring
✅ Bias detection and vague feedback flagging
✅ Real-time Slack feedback to interviewers
✅ Error logging for AI reliability tracking
✅ Training recommendations for low scorers
✅ Audit trail maintained in Google Sheets

Perfect For
- HR & Recruitment teams ensuring structured interviewer feedback
- Organizations enforcing STAR method & bias-free hiring
- Teams seeking continuous interviewer coaching
- Companies needing audit-ready records of interview quality
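A minimal sketch of the "Calculate Weighted Quality Score" Code node. The five dimensions and the 0–100 scale and <50 training threshold come from the description above; the specific weights, the 0–5 dimension scale, and the field names are assumptions to align with whatever JSON your LLM chain actually returns.

```javascript
// Illustrative Code node (JavaScript) – weights and field names are assumptions.
const weights = {
  specificity: 0.25,
  star: 0.25,          // STAR structure
  biasFree: 0.20,
  actionability: 0.15,
  depth: 0.15,
};

const scores = $json.ai ?? {};          // e.g. { specificity: 4, star: 3, ... } on a 0–5 scale (assumed)

let weighted = 0;
for (const [dim, w] of Object.entries(weights)) {
  weighted += ((scores[dim] ?? 0) / 5) * w;     // normalize each dimension to 0–1
}
const finalScore = Math.round(weighted * 100);  // 0–100

const flags = [];
if ((scores.biasFree ?? 5) < 3) flags.push('possible bias');
if ((scores.specificity ?? 5) < 3) flags.push('vague feedback');
const needsTraining = finalScore < 50;          // threshold used by the training branch

return [{ json: { ...$json, finalScore, flags, needsTraining } }];
```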
by n8n Automation Expert | Template Creator | 2+ Years Experience
🎯 What This Workflow Does
Transform your digital payment business with a fully featured Telegram bot that handles everything from product listings to transaction processing. Perfect for entrepreneurs looking to automate their PPOB (mobile credit, data packages, bill payments) business operations without coding expertise.

✨ Key Features

📱 Complete Transaction Management
- **Prepaid Services**: Mobile credit, data packages, PLN tokens
- **Gaming**: Game vouchers for popular platforms
- **E-Wallet**: OVO, DANA, GoPay, ShopeePay top-ups
- **Bill Payments**: PLN postpaid, Telkom, cable TV, internet, credit cards

💰 Smart Business Operations
- Real-time balance checking with low-balance alerts
- Automated transaction processing with MD5 security
- Interactive product catalog with categorized browsing
- Transaction history and status tracking
- Deposit request management

🤖 User-Friendly Interface
- Intuitive inline keyboard navigation
- Multi-step transaction flows with validation
- Comprehensive error handling and user feedback
- Professional messaging with emojis and formatting

🛠️ Technical Highlights

Robust Architecture
- **Switch-based routing** for efficient command handling
- **MD5 signature authentication** for secure API communications
- **Session management** for multi-step user interactions
- **Comprehensive error handling** with user-friendly messages

API Integrations
- **Digiflazz API**: Balance checking, product listings, transactions, bill inquiries
- **Telegram Bot API**: Message handling, inline keyboards, callback queries
- **Secure credential management** with environment variables

📋 Setup Requirements

Prerequisites
- Active Digiflazz account with API credentials
- Telegram Bot Token from @BotFather
- n8n instance (cloud or self-hosted)

Environment Variables
DIGIFLAZZ_USERNAME=your_digiflazz_username
DIGIFLAZZ_API_KEY=your_digiflazz_api_key

🎮 How to Use

Customer Commands
- /start - Welcome message and main menu
- /menu - Access main navigation
- /balance - Check account balance
- /products - Browse product catalog
- /topup - Process prepaid transactions
- /checkbill - Inquire about postpaid bills
- /paybill - Pay postpaid services
- /deposit - Request balance deposit
- /history - View transaction history

Business Features
- **Automated balance monitoring** with threshold alerts
- **Product categorization** for easy browsing
- **Transaction confirmation** with detailed receipts
- **Multi-payment type support** across various service providers

🔒 Security & Compliance
- **MD5 signature verification** for all API calls
- **Input validation** and sanitization
- **Session timeout management**
- **Error logging** and monitoring
- **HTTPS-only communications**

💡 Business Benefits

For PPOB Entrepreneurs
- **Reduce manual work** by 90% through automation
- **24/7 customer service** without human intervention
- **Professional presentation** builds customer trust
- **Scalable operations** handle unlimited transactions

For Customers
- **Instant transactions** with real-time confirmations
- **Easy navigation** through intuitive menus
- **Multiple service options** in one convenient bot
- **Reliable service** with comprehensive error handling

📊 Performance Features
- **Sub-second response times** for balance checks
- **Concurrent transaction processing**
- **Automatic retry logic** for failed operations
- **Detailed logging** for business analytics

🎯 Perfect For
- **Digital payment entrepreneurs** starting PPOB businesses
- **Existing businesses** looking to automate customer service
- **Resellers** wanting professional transaction interfaces
- **Developers** seeking proven automation templates

📱 Supported Services

Prepaid Products
- Mobile credit (all Indonesian operators)
- Data packages and internet vouchers
- PLN electricity tokens
- Game vouchers (Mobile Legends, Free Fire, PUBG, etc.)

Postpaid Services
- PLN electricity bills
- Telkom phone bills
- Cable TV subscriptions (First Media, MNC, etc.)
- Internet service providers
- Credit card payments
- Multifinance installments

🚀 Getting Started
1. Import the workflow JSON into your n8n instance
2. Configure Telegram and Digiflazz credentials
3. Set up environment variables
4. Activate the workflow
5. Test with your Telegram bot
6. Start serving customers immediately!

💎 Premium Features
- **Comprehensive documentation** with setup guides
- **Error handling** for all edge cases
- **Professional UI/UX** design
- **Scalable architecture** for business growth
- **Community support** and updates

Transform your digital payment business today with this production-ready Telegram bot automation. No coding required – just configure and launch! Perfect for the Indonesian PPOB market with full Digiflazz integration and professional customer experience.
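For the MD5 signature authentication mentioned under Technical Highlights, here is a hedged sketch of how such a signature could be computed in a Code node: Digiflazz-style APIs sign requests with an MD5 hash of the username, API key, and a reference value, but the exact concatenation order and reference string should be verified against the official Digiflazz API documentation before use.

```javascript
// Illustrative MD5 signature helper (JavaScript, Node.js crypto) – a sketch, not the shipped node.
// Username and API key come from the environment variables listed above; refId is an assumed field.
const crypto = require('crypto');

function digiflazzSign(username, apiKey, ref) {
  // Verify the exact field order against the Digiflazz API docs.
  return crypto.createHash('md5').update(username + apiKey + ref).digest('hex');
}

const username = process.env.DIGIFLAZZ_USERNAME;
const apiKey = process.env.DIGIFLAZZ_API_KEY;
const refId = $json.refId ?? `trx-${Date.now()}`;   // per-transaction reference (assumption)

return [{ json: { username, ref_id: refId, sign: digiflazzSign(username, apiKey, refId) } }];
```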
by Alexandra Spalato
Short Description
This LinkedIn automation workflow monitors post comments for specific trigger words and automatically sends direct messages with lead magnets to engaged users. The system checks connection status, handles non-connected users with connection requests, and prevents duplicate outreach by tracking all interactions in a database.

Key Features
- **Comment Monitoring**: Scans LinkedIn post comments for customizable trigger words
- **Connection Status Check**: Determines if users are 1st-degree connections
- **Automated DMs**: Sends personalized messages with lead magnet links to connected users
- **Connection Requests**: Asks non-connected users to connect via comment replies
- **Duplicate Prevention**: Tracks interactions in NocoDB to avoid repeat messages
- **Message Rotation**: Uses different comment reply variations for authenticity
- **Batch Processing**: Handles multiple comments with built-in delays

Who This Workflow Is For
- Content creators looking to convert post engagement into leads
- Coaches and consultants sharing valuable LinkedIn content
- Anyone wanting to automate lead capture from LinkedIn posts

How It Works
1. Setup: Configure the post ID, trigger word, and lead magnet link via form
2. Comment Extraction: Retrieves all comments from the specified post using Unipile
3. Trigger Detection: Filters comments containing the specified trigger word
4. Connection Check: Determines if commenters are 1st-degree connections
5. Smart Routing: Connected users receive DMs, others get connection requests
6. Database Logging: Records all interactions to prevent duplicates

Setup Requirements

Required Credentials
- **Unipile API Key**: For LinkedIn API access
- **NocoDB API Token**: For database tracking

Database Structure
Table: leads
- linkedin_id: LinkedIn user ID
- name: User's full name
- headline: LinkedIn headline
- url: Profile URL
- date: Interaction date
- posts_id: Post reference
- connection_status: Network distance
- dm_status: Interaction type (sent/connection request)

Customization Options
- **Message Templates**: Modify DM and connection request messages
- **Trigger Words**: Change the words that activate the workflow
- **Timing**: Adjust delays between messages (8-12 seconds default)
- **Reply Variations**: Add more comment reply options for authenticity

Installation Instructions
1. Import the workflow into your n8n instance
2. Set up a NocoDB database with the required table structure
3. Configure Unipile and NocoDB credentials
4. Set environment variables for the Unipile root URL and LinkedIn account ID
5. Test with a sample post before full use
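A minimal sketch of the trigger-detection and smart-routing step as a Code node. The comment fields (`text`, `author.network_distance`) follow typical Unipile comment payloads but are assumptions here, and the trigger word is hard-coded for illustration rather than read from the setup form.

```javascript
// Illustrative Code node (JavaScript) – adapt field names to the actual Unipile output.
const TRIGGER_WORD = 'guide';   // replace with the trigger word collected by the setup form

return items
  .filter(item => (item.json.text || '').toLowerCase().includes(TRIGGER_WORD.toLowerCase()))
  .map(item => {
    const firstDegree = item.json.author?.network_distance === 'FIRST_DEGREE'; // assumed field/value
    return {
      json: {
        ...item.json,
        route: firstDegree ? 'send_dm' : 'reply_with_connection_request',
      },
    };
  });
```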
by Growth AI
Google Ads automated reporting to spreadsheets with Airtable

Who's it for
Digital marketing agencies, PPC managers, and marketing teams who manage multiple Google Ads accounts and need automated monthly performance reporting organized by campaign types and conversion metrics.

What it does
This workflow automatically retrieves Google Ads performance data from multiple client accounts and populates organized spreadsheets with campaign metrics. It differentiates between e-commerce (conversion value) and lead generation (conversion count) campaigns, then organizes data by advertising channel (Performance Max, Search, Display, etc.) with monthly tracking for budget and performance analysis.

How it works
The workflow follows an automated data collection and reporting process:
1. Account Retrieval: Fetches client information from Airtable (project names, Google Ads IDs, campaign types)
2. Active Filter: Processes only accounts marked as "Actif" for budget reporting
3. Campaign Classification: Routes accounts through e-commerce or lead generation workflows based on "Typologie ADS"
4. Google Ads Queries: Executes different API calls depending on campaign type (conversion value vs. conversion count)
5. Data Processing: Organizes metrics by advertising channel (Performance Max, Search, Display, Video, Shopping, Demand Gen)
6. Dynamic Spreadsheet Updates: Automatically fills the correct monthly column in client spreadsheets
7. Sequential Processing: Handles multiple accounts with wait periods to avoid API rate limits

Requirements
- Airtable account with client database
- Google Ads API access with developer token
- Google Sheets API access
- Client-specific spreadsheet templates (provided)

How to set up

Step 1: Prepare your reporting template
- Copy the Google Sheets reporting template
- Create individual copies for each client
- Ensure the proper column structure (months B-M for January-December)
- Link template URLs in your Airtable database

Step 2: Configure your Airtable database
Set up the following fields in your Airtable:
- Project names: Client project identifiers
- ID GADS: Google Ads customer IDs
- Typologie ADS: Campaign classification ("Ecommerce" or "Lead")
- Status - Prévisionnel budgétaire: Account status ("Actif" for active accounts)
- Automation budget: URLs to client-specific reporting spreadsheets

Step 3: Set up API credentials
Configure the following authentication:
- Airtable Personal Access Token: For client database access
- Google Ads OAuth2: For advertising data retrieval
- Google Sheets OAuth2: For spreadsheet updates
- Developer Token: Required for Google Ads API access
- Login Customer ID: Manager account identifier

Step 4: Configure Google Ads API settings
Update the HTTP request nodes with your credentials:
- Developer Token: Replace "[Your token]" with your actual developer token
- Login Customer ID: Replace "[Your customer id]" with your manager account ID
- API Version: Currently using v18 (update as needed)

Step 5: Set up scheduling
- Default schedule: Runs on the 3rd of each month at 5 AM
- Cron expression: 0 5 3 * *
- Recommended timing: Early-month execution for complete previous-month data
- Processing delay: 1-minute waits between accounts to respect API limits

How to customize the workflow

Campaign type customization
E-commerce campaigns:
- Tracks: Cost and conversion value metrics
- Query: metrics.conversions_value for revenue tracking
- Use case: Online stores, retail businesses

Lead generation campaigns:
- Tracks: Cost and conversion count metrics
- Query: metrics.conversions for lead quantity
- Use case: Service businesses, B2B companies
Advertising channel expansion
Current channels tracked:
- Performance Max: Automated campaign type
- Search: Text ads on search results
- Display: Visual ads on partner sites
- Video: YouTube and video partner ads
- Shopping: Product listing ads
- Demand Gen: Audience-focused campaigns
Add new channels by modifying the data processing code nodes.

Reporting period adjustment
- Current setting: Last month's data (DURING LAST_MONTH)
- Alternative periods: Last 30 days, specific date ranges, quarterly reports
- Custom timeframes: Modify the Google Ads query date parameters

Multi-account management
- Sequential processing: Handles multiple accounts automatically
- Error handling: Continues processing if individual accounts fail
- Rate limiting: Built-in waits prevent API quota issues
- Batch size: No limit on the number of accounts processed

Data organization features

Dynamic monthly columns
- Automatic detection: Determines the previous month's column (B-M)
- Column mapping: January=B, February=C, ..., December=M
- Data placement: Updates the correct month automatically
- Multi-year support: Handles year transitions seamlessly

Campaign performance breakdown
Each account populates 10 rows of data:
- Performance Max Cost (Row 2)
- Performance Max Conversions/Value (Row 3)
- Demand Gen Cost (Row 4)
- Demand Gen Conversions/Value (Row 5)
- Search Cost (Row 6)
- Search Conversions/Value (Row 7)
- Video Cost (Row 8)
- Video Conversions/Value (Row 9)
- Shopping Cost (Row 10)
- Shopping Conversions/Value (Row 11)

Data processing logic
- Cost conversion: Automatically converts micros to euros (÷1,000,000)
- Precision rounding: Rounds to 2 decimal places for clean presentation
- Zero handling: Shows 0 for campaign types with no activity
- Data validation: Handles missing or null values gracefully

Results interpretation

Monthly performance tracking
- Historical data: Year-over-year comparison across all channels
- Channel performance: Identify best-performing advertising types
- Budget allocation: Data-driven decisions for campaign investments
- Trend analysis: Month-over-month growth or decline patterns

Account-level insights
- Multi-client view: Consolidated reporting across all managed accounts
- Campaign diversity: Understand which channels clients use most
- Performance benchmarks: Compare similar account types and industries
- Resource allocation: Focus on high-performing accounts and channels

Use cases

Agency reporting automation
- Client dashboards: Automated population of monthly performance reports
- Budget planning: Historical data for next month's budget recommendations
- Performance reviews: Ready-to-present data for client meetings
- Trend identification: Spot patterns across multiple client accounts

Internal performance tracking
- Team productivity: Track account management efficiency
- Campaign optimization: Identify underperforming channels for improvement
- Growth analysis: Monitor client account growth and expansion
- Forecasting: Use historical data for future performance predictions

Strategic planning
- Budget allocation: Data-driven distribution across advertising channels
- Channel strategy: Determine which campaign types to emphasize
- Client retention: Proactive identification of declining accounts
- New business: Performance data to support proposals and pitches

Workflow limitations
- Monthly execution: Designed for monthly reporting (not real-time)
- API dependencies: Requires stable Google Ads and Sheets API access
- Rate limiting: Sequential processing prevents parallel account handling
- Template dependency: Requires the specific spreadsheet structure for proper data placement
- Previous month focus: Optimized for completed-month data (run early in the new month)
- Manual credential setup: Requires individual configuration of API tokens and customer IDs
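A minimal sketch of the dynamic column mapping and unit conversions described under "Data organization features"; the micros-to-euros division and the January=B … December=M mapping come from the description above, while the Google Ads response shape (`results[].metrics.cost_micros`, `metrics.conversions`) is simplified for illustration.

```javascript
// Illustrative Code node (JavaScript) – adapt the response shape to your HTTP nodes.
const now = new Date();
const prevMonthIndex = (now.getMonth() + 11) % 12;                 // 0 = January
const targetColumn = String.fromCharCode('B'.charCodeAt(0) + prevMonthIndex); // B..M

// Convert cost micros to euros and round to 2 decimal places
function microsToEuros(micros) {
  return Math.round((micros / 1_000_000) * 100) / 100;
}

// Aggregate one channel's rows into the two values written to the sheet
const rows = $json.results ?? [];                                  // assumed response shape
const cost = microsToEuros(rows.reduce((s, r) => s + Number(r.metrics?.cost_micros ?? 0), 0));
const conversions = Math.round(rows.reduce((s, r) => s + Number(r.metrics?.conversions ?? 0), 0) * 100) / 100;

return [{ json: { targetColumn, cost, conversions } }];
```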