by Greg Evseev
This workflow template provides a robust solution for efficiently sending multiple prompts to Anthropic's Claude models in a single batch request and retrieving the results. It leverages the Anthropic Batch API endpoint (/v1/messages/batches) for optimized processing and outputs each result as a separate item.

**Core Functionality & Example Usage Included**

This template includes:

- **The Core Batch Processing Workflow:** Designed to be called by another n8n workflow.
- **An Example Usage Workflow:** A separate branch demonstrating how to prepare data and trigger the core workflow, including examples using simple strings and n8n's Langchain Chat Memory nodes.

**Who is this for?**

This template is designed for:

- **Developers, data scientists, and researchers** who need to process large volumes of text prompts using Claude models via n8n.
- **Content creators** looking to generate multiple pieces of content (e.g., summaries, Q&As, creative text) based on different inputs simultaneously.
- **n8n users** who want to automate interactions with the Anthropic API beyond single requests, improve efficiency, and integrate batch processing into larger automation sequences.
- Anyone needing to perform bulk text generation or analysis tasks with Claude programmatically.

**What problem does this workflow solve?**

Sending prompts to language models one by one is slow and inefficient, especially when dealing with hundreds or thousands of requests. This workflow addresses that by:

- **Batching:** Grouping multiple prompts into a single API call to Anthropic's dedicated batch endpoint (/v1/messages/batches).
- **Efficiency:** Significantly reducing the time required compared to sequential processing.
- **Scalability:** Handling large numbers of prompts (up to API limits) systematically.
- **Automation:** Providing a ready-to-use, callable n8n structure for batch interactions with Claude.
- **Structured Output:** Parsing the results and outputting each individual prompt's result as a separate n8n item.

**Use Cases:**

- Bulk content generation (e.g., product descriptions, summaries).
- Large-scale question answering based on different contexts.
- Sentiment analysis or data extraction across multiple text snippets.
- Running the same prompt against many different inputs for research or testing.

**What the Core Workflow does** (Triggered by the 'When Executed by Another Workflow' node)

1. **Receive Input:** The workflow starts when called by another workflow (e.g., using the 'Execute Workflow' node). It expects input data containing:
   - anthropic-version (string, e.g., "2023-06-01")
   - requests (JSON array, where each object represents a single prompt request conforming to the Anthropic Batch API schema).
2. **Submit Batch Job:** Sends the formatted requests data via POST to the Anthropic API /v1/messages/batches endpoint to create a new batch job. Requires Anthropic credentials.
3. **Wait & Poll:** Enters a loop:
   - Checks if the processing_status of the batch job is ended.
   - If not ended, waits for a set interval (10 seconds by default in the 'Batch Status Poll Interval' node).
   - Checks the batch job status again via GET to /v1/messages/batches/{batch_id}. Requires Anthropic credentials.
   - This loop continues until the status is ended.
4. **Retrieve Results:** Once the batch job is complete, fetches the results file by making a GET request to the results_url provided in the batch status response. Requires Anthropic credentials.
5. **Parse Results:** The results are typically returned in JSON Lines (.jsonl) format. The 'Parse response' Code node splits the response text by newlines and parses each line into a separate JSON object, storing them in an array field (e.g., parsed).
6. **Split Output:** The 'Split Out Parsed Results' node takes the array of parsed results and outputs each result object as an individual item from the workflow. A sketch of the parsing logic follows this list.
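As a rough illustration of the Parse Results and Split Output steps, here is a minimal sketch of the kind of JavaScript the 'Parse response' Code node can run. The text and parsed field names are assumptions, not the template's verbatim code; adjust them to match the HTTP Request node's actual output:

```javascript
// Minimal JSONL parser sketch for an n8n Code node.
// Assumes the previous node put the raw results file text in item.json.text;
// change the field name to match your HTTP Request node's output.
const raw = $input.first().json.text || '';

const parsed = raw
  .split('\n')                 // one JSON object per line (JSONL)
  .filter(line => line.trim()) // skip blank lines
  .map(line => JSON.parse(line));

// Return a single item holding the array; the Split Out node can then
// emit each parsed result as its own item.
return [{ json: { parsed } }];
```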
**Prerequisites**

- An active n8n instance (Cloud or self-hosted).
- An Anthropic API account with access granted to Claude models and the Batch API.
- Your Anthropic API Key.
- Basic understanding of n8n concepts (nodes, workflows, credentials, expressions, 'Execute Workflow' node).
- Familiarity with JSON data structures for providing input prompts and understanding the output.
- Understanding of the Anthropic Batch API request/response structure.
- (For Example Usage Branch) Familiarity with n8n's Langchain nodes (@n8n/n8n-nodes-langchain) if you plan to adapt that part.

**Setup**

1. **Import Template:** Add this template to your n8n instance.
2. **Configure Credentials:** Navigate to the 'Credentials' section in your n8n instance, click 'Add Credential', search for 'Anthropic', and select the Anthropic API credential type. Enter your Anthropic API Key and save the credential (e.g., name it "Anthropic account").
3. **Assign Credentials:** Open the workflow and locate the three HTTP Request nodes in the core workflow: Submit batch, Check batch status, and Get results. In each of these nodes, select the Anthropic credential you just configured from the 'Credential for Anthropic API' dropdown.
4. **Review Input Format:** Understand the required input structure for the When Executed by Another Workflow trigger node. The primary inputs are anthropic-version (string) and requests (array). Refer to the Sticky Notes in the template and the Anthropic Batch API documentation for the exact schema required within the requests array; a hedged example payload is shown below, after the customization notes.
5. **Activate Workflow:** Save and activate the core workflow so it can be called by other workflows.

➡️ **Quick Start & Input/Output Examples:** Look for the Sticky Notes within the workflow canvas! They provide crucial information, including examples of the required input JSON structure and the expected output format.

**How to customize this workflow**

- **Input Source:** The core workflow is designed to be called. You will build *another* workflow that prepares the anthropic-version and requests array and then uses the 'Execute Workflow' node to trigger this template. The included example branch shows how to prepare this data.
- **Model Selection & Parameters:** Model (claude-3-opus-20240229, etc.), max_tokens, temperature, and other parameters are defined *within* each object inside the requests array you pass to the workflow trigger. You configure these in the workflow calling this template.
- **Polling Interval:** Modify the 'Wait' node ('Batch Status Poll Interval') duration if you need faster or slower status checks (default is 10 seconds). Be mindful of potential rate limits.
- **Parsing Logic:** If Anthropic changes the result format or you have specific needs, modify the JavaScript code within the 'Parse response' Code node.
- **Error Handling:** Enhance the workflow with more specific error handling for API failures (e.g., using 'Error Trigger' or checking HTTP status codes) or batch processing issues (batch.status === 'failed').
- **Output Processing:** In the workflow that *calls* this template, add nodes after the 'Execute Workflow' node to process the individual result items returned (e.g., save to a database, spreadsheet, send notifications).
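For orientation, here is an example of the shape the two trigger inputs take. The requests entries follow Anthropic's Message Batches schema (a custom_id plus a params object mirroring a normal /v1/messages call); check the Sticky Notes and Anthropic's documentation for the authoritative schema:

```json
{
  "anthropic-version": "2023-06-01",
  "requests": [
    {
      "custom_id": "prompt-001",
      "params": {
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "messages": [
          { "role": "user", "content": "Summarize the attached product notes." }
        ]
      }
    },
    {
      "custom_id": "prompt-002",
      "params": {
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "messages": [
          { "role": "user", "content": "Write a two-sentence product description." }
        ]
      }
    }
  ]
}
```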
**Example Usage Branch (Manual Trigger)**

This template also contains a separate branch starting with the Run example Manual Trigger node.

- **Purpose:** This branch demonstrates how to construct the necessary anthropic-version and requests array payload.
- **Methods Shown:** It includes steps for:
  - Creating a request object from a simple query string.
  - Creating a request object using data from n8n's Langchain Chat Memory nodes (@n8n/n8n-nodes-langchain).
- **Execution:** It merges these examples, constructs the final payload, and then uses the Execute Workflow node to call the main batch processing logic described above. It finishes by filtering the results for demonstration.
- **Note:** This branch is for demonstration and testing. You would typically build your own data preparation logic in a separate workflow. The use of Langchain nodes is optional for the core batch functionality.

**Notes**

- **API Limits:** According to the Anthropic API documentation, batches can contain up to 100,000 requests and be up to 256 MB in total size. Ensure your n8n instance has sufficient resources for large batches.
- **API Costs:** Using the Anthropic API, including the Batch API, incurs costs based on token usage. Monitor your usage via the Anthropic dashboard.
- **Completion Time:** Batch processing time depends on the number and complexity of prompts and on current API load. The polling mechanism accounts for this variability.
- **Versioning:** Always include the anthropic-version header in your requests, as shown in the workflow and examples. Refer to the Anthropic API versioning documentation.
by Romain Jouhannet
Linear Project/Issue Status and End Date to Productboard Feature Sync

Sync project and issue data between Linear and Productboard to keep teams aligned. This workflow updates Productboard features with the status and end date from Linear projects, or the due date from Linear issues. It ensures consistent data and sends a Slack notification whenever changes are made.

**Features**

- Listens for updates in Linear projects/issues.
- Maps Linear statuses to Productboard feature statuses (see the sketch below).
- Updates Productboard feature details, including timeframe.
- Sends a Slack notification summarizing the updates.

**Setup**

1. **Linear Credentials:** Add your Linear API credentials in n8n.
2. **Productboard Credentials:** Configure the Productboard API credentials in n8n.
3. **Linear Projects or Issues:** Select the Linear project(s) or issue(s) you want to monitor for updates.
4. **Productboard Custom Field:** Create a custom field in Productboard named "Linear". This field should store the URL of the Linear project or issue you want to sync. Retrieve the UUID of the custom field in Productboard and set it up in the "Get Productboard Feature ID" node.
5. **Slack Notification:** Update the Slack node with the desired Slack channel ID.
6. **Activate the Workflow:** Enable the workflow to automatically sync data when triggered by updates in Linear.
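As a sketch of the status mapping, a Code node can translate Linear states to Productboard statuses with a simple lookup. The status names on both sides below are illustrative, since both tools let you customize them:

```javascript
// Illustrative Linear → Productboard status map for an n8n Code node.
// Replace the keys and values with the status names your workspaces use.
const statusMap = {
  backlog: 'candidate',
  planned: 'planned',
  started: 'in-progress',
  completed: 'released',
  canceled: 'wont-do',
};

const linearStatus = $input.first().json.state; // assumed field name
return [{ json: { productboardStatus: statusMap[linearStatus] ?? 'candidate' } }];
```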
by Frank Chen
Automatically fetch existing domains from a Notion database and verify the validity of their SSL certificates through SSL-Checker. If a certificate's validity period is less than 14 days, send a Telegram warning and trigger an automatic refresh via SSH; a success notification is then sent through Telegram. This guards against failures in the server-side auto-renewal program, which can otherwise cause unexpected service interruptions.

**Main use cases:**

- Notion stores the domains.
- Telegram receives warning messages.
- Remotely trigger Certbot to refresh SSL.

**How it works:**

1. Record who triggered this workflow, because if a credential is about to expire, this workflow will be triggered repeatedly.
2. After getting all the domains from Notion, send an HTTP request to SSL-Checker.
3. After getting all the SSL-Checker results, add a validity label (see the sketch below) and use an IF node to check whether any certificates are about to expire. From there the workflow branches:
   - If a certificate is about to expire: send an SSH command to the remote server to refresh the certificate, notify via Telegram, and call this workflow again to re-verify the certificate's validity.
   - If the SSL validity period is normal: refresh the data in Notion, and if a re-called workflow is detected, notify via Telegram that the SSL has been updated.
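A minimal sketch of the validity-label step in a Code node; the days_left field name is an assumption about the SSL-Checker response, so adjust it to the payload your HTTP Request node actually returns:

```javascript
// Label each domain by how close its certificate is to expiry.
// The days_left field is an assumed name; match it to the actual
// SSL-Checker response structure.
const THRESHOLD_DAYS = 14;

return $input.all().map(item => {
  const daysLeft = Number(item.json.days_left);
  item.json.validity = daysLeft < THRESHOLD_DAYS ? 'expiring' : 'ok';
  return item;
});
```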
by Blockia Labs
Time Logging on Clockify Using Slack

**How it works**

This workflow simplifies time tracking for teams and agencies by integrating Slack with Clockify. It enables users to log, update, or delete time entries directly within Slack, leveraging an AI-powered assistant for seamless and conversational interactions. Key features include:

- **Effortless Time Logging:** Create and manage time entries in Clockify without leaving Slack.
- **AI-Powered Assistant:** Get step-by-step guidance to ensure accurate and efficient time logging.
- **Project and Client Management:** Retrieve project and client information from Clockify effortlessly.
- **Overlap Prevention:** Avoid overlapping entries with built-in time validation (see the sketch below).
- **Automated Descriptions:** Generate ethical, grammatically correct descriptions for time logs.

**Set up steps**

1. **Prepare your integrations:** Ensure you have active accounts for both Slack and Clockify, and generate your Clockify API credentials for integration.
2. **Import the workflow:** Download and import the workflow template into your n8n instance, then configure it to connect with your Slack and Clockify accounts.
3. **Configure the workflow:** Add your Clockify API credentials in the workflow settings and set up the Slack Trigger to listen for app mentions or specific commands.
4. **Test the workflow:** Use Slack to create a time entry and verify it in Clockify. Test updating and deleting existing entries to ensure smooth functionality, and check for any overlapping time logs or incorrect data entries.

**Why use this workflow?**

- **Efficiency:** Eliminate the need to switch between tools for time tracking.
- **Accuracy:** AI-driven validation ensures error-free entries.
- **Automation:** Simplify repetitive tasks like updating or deleting time logs.
- **Proactive Guidance:** Conversational assistant ensures smooth operations.
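As an illustration of the overlap check, the validation reduces to a standard interval-intersection test against existing entries. This is a hedged sketch, not the template's exact code; the timeInterval.start/end field shape is an assumption to verify against Clockify's API response:

```javascript
// Two intervals [aStart, aEnd) and [bStart, bEnd) overlap
// iff aStart < bEnd && bStart < aEnd.
function overlaps(aStart, aEnd, bStart, bEnd) {
  return new Date(aStart) < new Date(bEnd) && new Date(bStart) < new Date(aEnd);
}

// Existing entries fetched from Clockify; a new entry proposed via Slack.
const existing = $input.all().map(i => i.json);
const proposed = { start: '2024-05-01T09:00:00Z', end: '2024-05-01T10:30:00Z' };

const conflict = existing.some(e =>
  overlaps(proposed.start, proposed.end, e.timeInterval.start, e.timeInterval.end)
);

return [{ json: { conflict } }];
```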
by A Z
Automatically scrape Meta Threads for posts hiring specific roles (e.g., automation engineers, video editors, graphic designers), filter for true hiring intent, deduplicate, and send alerts. Automation roles are used as the example here.

**What it does**

This workflow continuously scans Threads for fresh posts mentioning the roles you care about. It uses AI to filter out self-promotion and service ads, keeping only posts where the author is hiring. Qualified posts are saved into Google Sheets for tracking and sent to Telegram for instant alerts. It's ideal for freelancers, agencies, and job seekers who want a steady radar of opportunities.

**How it works (step by step)**

1. **Schedule trigger** – Runs on a set interval (e.g., every 12 hours).
2. **Scrape Threads posts** – Fetches recent posts for multiple keywords (e.g., "n8n expert", "hire video editor", "graphic designer") via Apify.
3. **Merge results** – Combines posts into a single stream.
4. **Normalize fields** – Maps raw data into clean fields: text, author, URL, timestamp, profile link (see the sketch below).
5. **AI filter** – Uses an AI Agent to:
   - Accept only posts where someone is hiring (rejects "hire me" style self-promo).
   - Apply simple geography rules (e.g., allow US, UK, UAE, CA; pass unknowns).
   - Exclude roles outside your scope.
6. **Deduplication** – Checks Google Sheets to skip posts already seen.
7. **Save to Google Sheets** – Writes qualified posts with full details.
8. **Telegram alerts** – Sends you the matched post instantly so you can act.

**Who it's for**

- **Freelancers:** Get first dibs on gigs before others spot them.
- **Agencies:** Build a client pipeline by tracking hiring signals.
- **Job seekers:** Spot hidden opportunities in your target field.

**Customization ideas**

- Swap keywords to monitor roles you care about (e.g., "UI/UX designer", "motion graphics editor", "copywriter").
- Add Slack or Discord notifications instead of Telegram.
- Expand geo rules to match your region.
- Use Sheets as a CRM—add columns for status, outreach date, etc.
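A hedged sketch of the "Normalize fields" Code node; the raw field names coming from the Apify scraper (text, username, url, publishedAt) are assumptions here, so map them to the actual scraper output:

```javascript
// Map raw scraped posts into the clean fields used downstream.
// All raw field names are assumed; adjust to your Apify actor's output.
return $input.all().map(item => {
  const raw = item.json;
  return {
    json: {
      text: raw.text ?? '',
      author: raw.username ?? 'unknown',
      url: raw.url ?? '',
      timestamp: raw.publishedAt ?? new Date().toISOString(),
      profileLink: raw.username ? `https://www.threads.net/@${raw.username}` : '',
    },
  };
});
```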
by Billy Christi
**Who is this for?**

This workflow is perfect for:

- **HR professionals** seeking to automate employee and department management
- **Startups and SMBs** that want an AI-powered HR assistant on Telegram
- **Internal operations teams** that want to simplify onboarding and employee data tracking

**What problem is this workflow solving?**

Managing employee databases manually is error-prone and inefficient—especially for growing teams. This workflow solves that by:

- Enabling natural language-based HR operations directly through Telegram
- Automating the creation, retrieval, and deletion of employee records in Airtable
- Dynamically managing related data such as departments and job titles
- Handling data consistency and linking across relational tables automatically
- Providing a conversational interface backed by OpenAI for smart decision-making

**What this workflow does**

Using Telegram as the interface and Airtable as the backend database, this intelligent HR workflow allows users to:

- Chat in natural language (e.g., "Show me all employees" or "Create employee: Sarah, Marketing…")
- Interpret and route requests via an AI Agent that acts as the orchestrator
- Query employee, department, and job title data from Airtable
- Create or update records as needed:
  - Add new departments and job titles automatically if they don't exist
  - Create new employees and link them to the correct department and job title
- Delete employees based on ID
- Respond directly in Telegram, providing user-friendly feedback

**Setup**

1. View and copy the Airtable base here: 👉 Employee Database Management – Airtable Base Template
2. **Telegram Bot:** Set up a Telegram bot and connect it to the Telegram Trigger node.
3. **Airtable:** Prepare three Airtable tables:
   - Employees, with links to Departments and Job Titles
   - Departments, with Name & Description
   - Job Titles, with Title & Description
4. Connect your Airtable API key and base/table IDs in the appropriate Airtable nodes.
5. Add your OpenAI API key to the AI Agent nodes.
6. Deploy both workflows: the main chatbot workflow and the employee creation sub-workflow.
7. Test with sample messages like:
   - "Create employee: John Doe, john@company.com, Engineering, Software Engineer"
   - "Remove employee ID rec123xyz"

**How to customize this workflow to your needs**

- **Switch databases:** Replace Airtable with Notion, PostgreSQL, or Google Sheets if desired
- **Enhance security:** Add authentication and validation before allowing deletion
- **Add approval flows:** Integrate Telegram button-based approvals for sensitive actions
- **Multi-language support:** Expand system prompts to support multiple languages
- **Add logging:** Store every user action in a log table for auditability
- **Expand capabilities:** Integrate payroll, time tracking, or Slack notifications

**Extra Tips**

- This is a two-workflow setup. Make sure the sub-workflow is deployed and accessible from the main agent.
- Use Simple Memory per chat ID to preserve context across user queries.
- You can expand the orchestration logic by adding more tools to the main agent—such as "Get active employees only" or "List employees by job title."
by Joey D’Anna
This template is an error handler that will log n8n workflow errors to a Monday.com board for troubleshooting and tracking.

**Prerequisites**

- Monday account and Monday credential
- A board on Monday for error logging, with the following columns and types:
  - Timestamp (text)
  - Error Message (text)
  - Stack Trace (long text)
- The column IDs, determined using Monday's instructions

**Setup**

1. Edit the Monday nodes to use your credential.
2. Edit the node labeled CREATE ERROR ITEM to point to your error log board and group name.
3. Edit the column IDs in the "Column Values" field of the UPDATE node to match the IDs of the fields on your error log board (see the example below).
4. To trigger error logging, select this workflow as the error workflow on any workflow.

For more detailed logging, add Stop and Error nodes in your workflow to send specific error messages to your board.
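As a reference for step 3, Monday's Column Values field takes a JSON object keyed by column ID. The IDs below (timestamp_1, error_message_1, stack_trace_1) are placeholders for your board's actual IDs, and the long-text value shape should be checked against Monday's current API docs:

```json
{
  "timestamp_1": "2024-05-01T09:15:00Z",
  "error_message_1": "Node 'HTTP Request' failed: 401 Unauthorized",
  "stack_trace_1": { "text": "Error: Request failed with status 401\n    at Object.execute (...)" }
}
```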
by Kumar Shivam
Complete AI Product Description Generator

Transforms product images into high-converting copy with GPT-4o Vision + Claude 3.5

The Shopify AI Product Description Factory is a production-grade n8n workflow that converts product images and metadata into refined, SEO-aware descriptions—fully automated and region-agnostic. It blends GPT-4o vision for visible attribute extraction, Claude 3.5 Sonnet for premium copy, Perplexity research for verified brand context, Google Sheets for orchestration and audit trails, plus automated daily sales analytics enrichment. Link-header pagination and structured output enforcement ensure reliable scale. To refine it for your use case, connect via my profile @connect.

**Key Advantages**

- **Vision-first copywriting:** Uses gpt-4o to identify only visible physical attributes (closure, heel, materials, sole) from product images—no guesses.
- **Premium copy generation:** anthropic/claude-3.5-sonnet crafts concise, benefit-led descriptions with consistent tone, length control, and clean formatting.
- **Research-assisted accuracy:** perplexityTool verifies vendor/brand context from official sources to avoid speculation or fabricated claims.
- **Pagination you can trust:** Automates Shopify REST pagination via Link headers and persists page_info for resumable runs.
- **Google Sheets orchestration:** Centralized staging, status tracking, and QA in Products, with ProcessingState for batch/page markers and Error_log for diagnostics.
- **Bulletproof error feedback:** errorTrigger + AI diagnosis logs clear non-technical and technical explanations to Error_log for fast recovery.
- **Automated sales analytics:** Daily sales tracking automatically captures and enriches total sales data for comprehensive business intelligence and performance monitoring.

**How It Works**

*Intake and filtering*

- httpRequest fetches /admin/api/2024-04/products.json?limit=200&{page_info}
- code filters only items with:
  - Image present
  - Empty body_html
  - The currSeas:SS2025 tag
- Extracts tag metadata such as x-styleCode, country_of_origin, and gender when available

*Pagination controller*

- code parses Link headers for rel="next" and extracts page_info (see the sketch after this section)
- googleSheets updates ProcessingState with page_info_next and increments the batch number for resumable polling

*Generation pipeline*

- googleSheets pulls rows with Status = Ready for AI Description; limit throttles batch size
- openAi Analyze image (model gpt-4o) returns strictly visible features
- lmChatOpenRouter (Claude 3.5) composes the SEO description, optionally blending verified vendor context from perplexityTool
- outputParserStructured guarantees strict JSON: product_id, product_title (normalized), generated_description, status
- googleSheets writes results back to Products for review/publish

*Sales analytics enrichment*

- **Schedule Trigger** runs daily at 2:01 PM to capture the previous day's sales
- httpRequest fetches paid orders from the Shopify REST API with date-range filtering
- splitOut and summarize nodes calculate total daily sales
- Automatic Google Sheets logging with date stamps and totals
- Zero-sale days are properly recorded for complete analytics continuity

*Reliability and insight*

- errorTrigger routes failures to an AI agent that explains the root cause and appends a concise note to Error_log.
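As a sketch of the pagination controller, parsing Shopify's Link response header for the rel="next" page_info can look like this in a Code node (assuming the preceding HTTP Request node is configured to return response headers; the template's actual code may differ):

```javascript
// Extract page_info from a Shopify Link response header, e.g.:
// <https://shop.myshopify.com/admin/api/2024-04/products.json?page_info=abc123&limit=200>; rel="next"
const linkHeader = $input.first().json.headers?.link || '';

const nextPart = linkHeader
  .split(',')
  .find(part => part.includes('rel="next"'));

const match = nextPart ? nextPart.match(/[?&]page_info=([^&>]+)/) : null;

// page_info_next is persisted to ProcessingState so runs can resume.
return [{ json: { page_info_next: match ? match[1] : null } }];
```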
**What's Inside (Node Map)**

*Data + API*

- httpRequest (Shopify REST 2024-04 for products and orders)
- googleSheets (multiple sheet operations)
- googleSheetsTool (error logging)

*AI models*

- openAi (gpt-4o vision analysis)
- lmChatOpenRouter (anthropic/claude-3.5-sonnet for content generation)
- AI Agent (intelligent error diagnosis)

*Analytics & processing*

- splitOut (order data processing)
- summarize (sales totals calculation)
- set nodes (data field mapping)

*Tools and guards*

- perplexityTool (brand research)
- outputParserStructured (JSON validation)
- memoryBufferWindow (conversation context)

*Control & scheduling*

- scheduleTrigger (multiple time-based triggers)
- cron (periodic execution)
- limit (batch size control)
- if (conditional logic)
- code (custom filtering and pagination logic)

*Observability*

- errorTrigger + AI diagnosis to Error_log
- Processing state tracking
- Sales analytics logging

**Content & Compliance Rules**

- **Locale-agnostic copy;** brand voice is configurable per store
- **Only image-verifiable attributes** (no guesses); clean HTML suitable for Shopify themes
- Optional normalization rules (e.g., color/branding cleanup, title sanitization)
- Style code inclusion supported when x-styleCode is present
- Gender-aware content generation when the gender tag is present
- **Strict JSON output** and schema consistency for safe downstream publishing

**Setup Steps**

*Core integrations*

- **Shopify Access Token** — Products read + Orders read (REST 2024-04)
- **OpenAI API** — gpt-4o vision
- **OpenRouter API** — Claude Sonnet (3.5)
- **Perplexity API** — vendor/market verification via perplexityTool
- **Google Sheets OAuth** — Products, ProcessingState, Error_log, Sales analytics

*Configure sheets*

- **ProcessingState** with fields: batch number, page_info_next
- **Products** with: Product ID, Product Title, Product Type, Vendor, Image url, Status, country of origin, x_style_code, gender, Generated Description
- **Error_log** with: timestamp, Reason of Error
- **Sales Analytics Sheet** with: Date, Total Sales

**Workflow Capabilities**

- **Discovery and staging:** Auto-paginate Shopify; stage eligible products in Sheets with reasons and timestamps.
- **Vision-grounded copywriting:** Descriptions reflect only visible attributes plus verified brand context; concise, mobile-friendly structure with gender-aware tone.
- **Metadata awareness:** Auto-injects x-styleCode, country_of_origin, and gender when present; natural SEO for brand and product type.
- **Sales intelligence:** Automated daily sales tracking with Melbourne timezone support; handles zero-sale days and maintains complete historical records.
- **Error analytics:** Layman + technical diagnosis logged to Error_log to shorten MTTR.
- **Safe output:** Structured JSON via outputParserStructured for predictable row updates (see the example below).
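For reference, the enforced output schema produces one JSON object per product along the lines of the following (the values here are illustrative):

```json
{
  "product_id": "8457219876543",
  "product_title": "Leather Chelsea Boot",
  "generated_description": "<p>Crafted from smooth leather with an elastic side gusset and a low stacked heel.</p>",
  "status": "Generated"
}
```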
**Credentials Required**

- **Shopify Access Token** (Products + Orders read permissions)
- **OpenAI API Key** (GPT-4o vision)
- **OpenRouter API Key** (Claude Sonnet)
- **Perplexity API Key**
- **Google Sheets OAuth**

**Ideal For**

- **E-commerce teams** scaling compliant, on-brand product copy with comprehensive sales insights
- **Agencies and SEO specialists** standardizing image-grounded descriptions with performance tracking and analytics
- **Stores** needing resumable pagination, auditable content operations, and automated daily sales reporting in Sheets

**Advanced Features**

- **Dual-workflow architecture:** content generation + sales analytics in one system
- Link-header pagination with page_info persistence in ProcessingState
- Title/content normalization (e.g., color removal) configurable per brand
- **Gender-aware copywriting** based on product tags
- Memory windows (memoryBufferWindow) to keep multi-step prompts consistent
- **Melbourne timezone support** for accurate daily sales cutoffs
- **Zero-sales handling** ensures complete analytics continuity
- Structured Output enforcement for downstream safety
- **AI-powered error diagnosis** with technical and layman explanations

**Time & Scheduling (Universal)**

The workflow includes two independent schedules:

- **Content Generation:** every 5 minutes (configurable) for product processing
- **Sales Analytics:** daily at 2:01 PM Melbourne time for the previous day's sales

For globally distributed teams, schedule triggers and timestamps can be standardized on UTC to avoid regional drift.

**Pro Tip**

Start with small batches (limit set to 10 or fewer) to validate both copy generation and sales tracking flows. The workflow handles the two operations independently: content generation failures won't affect sales analytics and vice versa. Monitor the Error_log sheet for any issues and use the ProcessingState sheet to track pagination progress.
by WeblineIndia
Facebook Page Comment Moderation Scoreboard → Team Report

This workflow automatically monitors Facebook Page comments, analyzes them with AI for intent, toxicity, and spam, stores moderation results in a database, and sends a clear summary report to Slack and Telegram.

It runs every few hours to fetch Facebook Page comments and analyze them using OpenAI. Each comment is classified as positive, neutral, or negative; checked for toxicity, spam, and abusive language; and then stored in Supabase. A simple moderation summary is sent to Slack and Telegram. You receive:

- Automated Facebook comment moderation
- AI-based intent, toxicity, and spam detection
- Database logging of all moderated comments
- Clean Slack & Telegram summary reports

Ideal for teams that want visibility into comment quality without manually reviewing every message.

**Quick Start – Implementation Steps**

1. Import the workflow JSON into n8n.
2. Add your Facebook Page access token to the HTTP Request node.
3. Connect your OpenAI API key for comment analysis.
4. Configure your Supabase table for storing moderation data.
5. Connect Slack and Telegram credentials and choose target channels.
6. Activate the workflow — moderation runs automatically.

**What It Does**

This workflow automates Facebook comment moderation by:

- Running on a scheduled interval (every 6 hours).
- Fetching recent comments from a Facebook Page.
- Preparing each comment for AI processing.
- Sending comments to OpenAI for moderation analysis.
- Extracting structured moderation data: comment intent, toxicity score, spam detection, abusive language detection (an example record is shown after the setup steps).
- Flagging risky comments based on defined rules.
- Storing moderation results in Supabase.
- Generating a summary report.
- Sending the report to Slack and Telegram.

This ensures consistent, repeatable moderation with no manual effort.

**Who's It For**

This workflow is ideal for:

- Social media teams
- Community managers
- Marketing teams
- Customer support teams
- Moderation and trust & safety teams
- Businesses managing high-volume Facebook Pages
- Anyone wanting AI-assisted comment moderation

**Requirements to Use This Workflow**

- **n8n instance** (cloud or self-hosted)
- **Facebook Page access token**
- **OpenAI API key**
- **Supabase project and table**
- **Slack workspace** with API access
- **Telegram bot** and chat ID
- Basic understanding of APIs and JSON (helpful but not required)

**How It Works**

1. **Scheduled Trigger** – The workflow starts automatically every 6 hours.
2. **Fetch Comments** – Facebook Page comments are retrieved.
3. **Prepare Data** – Comments are formatted for processing.
4. **AI Moderation** – OpenAI analyzes each comment.
5. **Normalize Results** – AI output is cleaned and standardized.
6. **Store Data** – Moderation results are saved in Supabase.
7. **Aggregate Stats** – Summary statistics are calculated.
8. **Send Alerts** – Reports are sent to Slack and Telegram.

**Setup Steps**

1. Import the workflow JSON into n8n.
2. Open the Fetch Facebook Page Comments node and add your Page ID and access token.
3. Connect your OpenAI account in the AI moderation node.
4. Create a Supabase table and map fields correctly.
5. Connect Slack and select a reporting channel.
6. Connect Telegram and set the chat ID.
7. Activate the workflow.
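To make the Supabase field mapping concrete, here is a hedged example of what one stored moderation record might look like. Every field name is illustrative and should match your own table schema and AI prompt output:

```json
{
  "comment_id": "1234567890_987654321",
  "comment_text": "This product broke after two days. Total scam!!!",
  "intent": "negative",
  "toxicity_score": 0.62,
  "is_spam": false,
  "is_abusive": false,
  "flagged": true,
  "moderated_at": "2024-05-01T12:00:00Z"
}
```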
**How To Customize Nodes**

*Customize Flagging Rules:* Update the normalization logic to:

- Change toxicity thresholds
- Flag only spam or abusive comments
- Add custom moderation rules

*Customize Storage:* You can extend Supabase fields to include:

- Language
- AI confidence score
- Reviewer notes
- Resolution status

*Customize Notifications:* Slack and Telegram messages can include:

- Emojis
- Mentions (@channel)
- Links to Facebook comments
- Severity labels

**Add-Ons (Optional Enhancements)**

You can extend this workflow to:

- Auto-hide or delete toxic comments
- Reply automatically to positive comments
- Detect language and region
- Generate daily or weekly moderation reports
- Build dashboards using Supabase or BI tools
- Add escalation alerts for high-risk comments
- Track trends over time

**Use Case Examples**

1. **Community Moderation:** Automatically identify harmful or spam comments.
2. **Brand Reputation Monitoring:** Spot negative sentiment early and respond faster.
3. **Support Oversight:** Detect complaints or frustration in comments.
4. **Marketing Insights:** Measure positive vs. negative engagement.
5. **Compliance & Auditing:** Keep historical moderation logs in a database.

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No comments fetched | Invalid Facebook token | Refresh token & permissions |
| AI output invalid | Prompt formatting issue | Use strict JSON prompt |
| Data not saved | Supabase mapping mismatch | Verify table fields |
| Slack message missing | Channel or credential error | Recheck Slack config |
| Telegram alert fails | Wrong chat ID | Confirm bot permissions |
| Workflow not running | Trigger disabled | Enable Cron node |

**Need Help?**

If you need help customizing, scaling, or extending this workflow (such as advanced moderation logic, dashboards, auto-actions, or production hardening), our n8n workflow development team at WeblineIndia can assist with expert automation solutions.
by Rajeet Nair
**Overview**

This workflow implements an AI-powered incident investigation and root cause analysis system that automatically analyzes operational signals when a system incident occurs. When an incident is triggered via webhook, the workflow gathers operational context including application logs, system metrics, recent deployments, and feature flag changes. These signals are processed to detect error patterns, cluster similar failures, and correlate them with recent system changes.

The workflow uses vector embeddings to group similar log messages, allowing it to detect dominant failure patterns across services. It then aligns these failures with contextual events such as deployments, configuration changes, or traffic spikes to identify potential causal relationships. An AI agent analyzes all available evidence and generates structured root cause hypotheses, including confidence scores, supporting evidence, and recommended remediation actions. Finally, the workflow posts a detailed incident report directly to Slack, enabling engineering teams to quickly understand the issue and respond faster. This architecture helps teams reduce mean time to resolution (MTTR) by automating the early stages of incident investigation.

**How It Works**

*1. Incident Trigger*

The workflow begins when an incident alert is received through a webhook endpoint. The webhook payload may include information such as:

- incident ID
- severity level
- timestamp
- affected service

This event starts the automated investigation process.

*2. Workflow Configuration*

A configuration node defines the operational parameters used throughout the workflow, including:

- Logs API endpoint
- Metrics API endpoint
- Deployments API endpoint
- Feature flags API endpoint
- Time window for analysis
- Slack channel for incident notifications

This allows the workflow to be easily adapted to different observability stacks.

*3. Incident Context Collection*

The workflow collects system context from multiple sources:

- application logs
- infrastructure or service metrics
- recent deployments
- active feature flags

Gathering this information provides the signals required to understand what happened before and during the incident.

*4. Log Normalization and Denoising*

Raw logs are processed to remove low-value entries such as debug or informational messages. The workflow extracts structured error information including:

- timestamps
- log severity
- services involved
- request or session IDs
- error messages and stack traces

This step ensures that only relevant failure signals are analyzed.

*5. Failure Pattern Clustering*

Error messages are converted into embeddings using OpenAI. The workflow stores these embeddings in an in-memory vector store to group similar log messages together. This clustering step identifies dominant failure patterns that may appear across multiple sessions or services.

*6. Failure Pattern Analysis*

Clustered log data is analyzed to detect recurring error types and dominant failure clusters. The workflow calculates statistics such as:

- total error volume
- most common error types
- error distribution across clusters
- dominant failure patterns

These insights help highlight the primary issues affecting the system.

*7. Event Correlation Analysis*

Failure patterns are then aligned with contextual events such as:

- deployments
- configuration changes
- traffic spikes

The workflow calculates correlation scores based on temporal proximity and assigns likelihood scores to potential causes. This allows the system to identify events that may have triggered the incident; a sketch of this scoring idea follows.
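A minimal sketch of the temporal-proximity idea; the exponential decay and the field names are illustrative assumptions, not the template's exact scoring logic:

```javascript
// Score candidate events by how close in time they are to the first
// observed failure: closer events get scores nearer to 1.
const DECAY_MINUTES = 30; // assumed decay window

function proximityScore(eventTime, failureTime) {
  const gapMinutes = Math.abs(new Date(failureTime) - new Date(eventTime)) / 60000;
  return Math.exp(-gapMinutes / DECAY_MINUTES);
}

const failureStart = '2024-05-01T10:05:00Z';
const events = [
  { type: 'deployment', service: 'checkout', at: '2024-05-01T09:58:00Z' },
  { type: 'flag_change', service: 'checkout', at: '2024-05-01T07:10:00Z' },
];

const scored = events
  .map(e => ({ ...e, score: proximityScore(e.at, failureStart) }))
  .sort((a, b) => b.score - a.score);

return [{ json: { rankedCauses: scored } }];
```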
*8. AI Root Cause Analysis*

An AI agent analyzes the collected signals and generates structured root cause hypotheses. The agent considers:

- error clusters
- deployment timing
- configuration changes
- traffic patterns
- system metrics

The output includes:

- multiple root cause hypotheses
- confidence scores
- supporting evidence
- recommended remediation actions

*9. Incident Ticket Creation*

The final analysis is formatted into a structured incident report and posted to Slack. The Slack message contains:

- incident metadata
- root cause hypotheses
- confidence scores
- evidence
- recommended actions
- affected services

This enables engineers to quickly review the investigation results and take action.

**Setup Instructions**

*1. Configure Observability APIs*

Update the Workflow Configuration node with API endpoints for the Logs API, Metrics API, Deployments API, and Feature Flags API. These APIs should return JSON responses containing recent operational data.

*2. Configure OpenAI Credentials*

Add OpenAI credentials for OpenAI Embeddings and the OpenAI Chat Model. These are used for log clustering and root cause analysis.

*3. Configure Slack Integration*

Add Slack credentials and specify the Slack channel ID in the configuration node. Incident reports will be posted automatically to this channel.

*4. Configure the Incident Trigger*

Deploy the webhook endpoint generated by the Incident Trigger node. Your monitoring or alerting system (PagerDuty, Grafana, Datadog, etc.) can call this webhook when incidents occur.

*5. Activate the Workflow*

Once configured, activate the workflow in n8n. When incidents are triggered, the workflow will automatically run the investigation pipeline and generate a Slack incident report.

**Use Cases**

- **Automated Incident Investigation:** Automatically analyze operational signals when alerts are triggered to identify possible causes.
- **AI-Assisted Site Reliability Engineering:** Provide engineers with AI-generated root cause hypotheses and investigation insights.
- **Deployment Impact Detection:** Detect whether a recent deployment or configuration change caused a system failure.
- **Observability Signal Correlation:** Combine logs, metrics, and system events to produce a unified incident analysis.
- **Faster Incident Response:** Reduce mean time to resolution (MTTR) by automating the early stages of incident debugging.

**Requirements**

- n8n with LangChain nodes enabled
- OpenAI API credentials
- Slack credentials
- APIs for retrieving system logs, service metrics, deployment history, and feature flag status
by SpaGreen Creative
WhatsApp Number Verify & Confirmation System with Rapiwa API and Google Sheets

**Who is this for?**

This n8n workflow makes it easy to verify WhatsApp numbers submitted through a form. When someone fills out the form, the automation kicks in: capturing the data via a webhook, checking the WhatsApp number using the Rapiwa API, and sending a confirmation message if the number is valid. All submissions, whether verified or not, are logged into a Google Sheet with a clear status. It's a great solution for businesses, marketers, or developers who need a reliable way to verify leads, manage event signups, or onboard customers using WhatsApp.

**How it works**

This n8n automation listens for form submissions via a webhook, validates the provided WhatsApp number using the Rapiwa API, sends a confirmation message if the number is verified, and then appends the submission data to a Google Sheet, marking each entry as verified or unverified.

**Features**

- **Webhook Trigger:** Captures form submissions via HTTP POST
- **Data Cleaning:** Formats and sanitizes the WhatsApp number
- **Rapiwa API Integration:** Checks if the number is registered on WhatsApp
- **Conditional Messaging:** Sends confirmation messages only to verified WhatsApp users
- **Google Sheets Integration:** Appends all submissions with a validity status
- **Auto Timestamping:** Adds the submission date in YYYY-MM-DD format
- **Throttling Support:** Built-in delay to avoid hitting API or sheet rate limits
- **Separation of Verified/Unverified:** Distinct handling for both types of entries

**Nodes Used in the Workflow**

- **Webhook**
- **Format Webhook Response Data** (Code)
- **Loop Over Items** (Split In Batches)
- **Cleane Number** (Code)
- **check valid whatsapp number** (HTTP Request)
- **If** (Conditional)
- **Send Message Using Rapiwa**
- **verified append row in sheet** (Google Sheets)
- **unverified append row in sheet** (Google Sheets)
- **Wait1**

**How to set up**

*Webhook*

1. Add a Webhook node to the canvas.
2. Set HTTP Method to POST.
3. Copy the Webhook URL path (/a9b6a936-e5f2-4xxxxxxxxxe0a970d5).
4. In your frontend form or app, make a POST request to it. The request body should include:

```json
{
  "business_name": "ABC Corp",
  "location": "New York",
  "whatsapp": "+1 234-567-8901",
  "email": "user@example.com",
  "name": "John Doe"
}
```

*Format Webhook Response Data*

Add a Code node after the Webhook node and use this JavaScript:

```javascript
const result = $input.all().map(item => {
  const body = item.json.body || {};
  const submitted_date = new Date().toISOString().split('T')[0];
  return {
    business_name: body.business_name,
    location: body.location,
    whatsapp: body.whatsapp,
    email: body.email,
    name: body.name,
    submitted_date: submitted_date
  };
});
return result;
```

*Loop Over Items*

1. Insert a SplitInBatches node after the data formatting.
2. Set the Batch Size to a reasonable number (e.g., 1 or 10).

This is useful for processing multiple submissions at once, especially if your webhook receives arrays of entries.

> Note: If you expect only one submission at a time, it still helps future-proof your workflow.

*Cleane Number*

Add a Code node named Cleane Number and paste the following JavaScript:

```javascript
const items = $input.all();
const updatedItems = items.map((item) => {
  const waNo = item?.json["whatsapp"];
  const waNoStr = typeof waNo === 'string'
    ? waNo
    : (waNo !== undefined && waNo !== null ? String(waNo) : "");
  const cleanedNumber = waNoStr.replace(/\D/g, ""); // strip every non-digit
  item.json["whatsapp"] = cleanedNumber;
  return item;
});
return updatedItems;
```

*Check WhatsApp Number using Rapiwa*

Add an HTTP Request node.
Configure it:

- Method: POST
- URL: https://app.rapiwa.com/api/verify-whatsapp
- Authentication: HTTP Bearer, selecting or creating your Rapiwa token credential
- Body Parameters: number: ={{ $json.whatsapp }}

This API call checks if the WhatsApp number exists and is valid. Expected output:

```json
{
  "success": true,
  "data": {
    "number": "+88017XXXXXXXX",
    "exists": true,
    "jid": "88017XXXXXXXXXXXXX",
    "message": "✅ Number is on WhatsApp"
  }
}
```

*Conditional If Check*

1. Add an If node after the Rapiwa validation.
2. Configure the condition:
   - Left Value: ={{ $json.data.exists }}
   - Operation: true
3. If true → valid number → go to messaging and append as "verified".
4. If false → go directly to the unverified sheet.

> Note: This step branches the flow based on the WhatsApp verification result.

*Send WhatsApp Message (Rapiwa)*

Add an HTTP Request node under the TRUE branch of the If node and set:

- Method: POST
- URL: https://app.rapiwa.com/api/send-message
- Authentication: HTTP Bearer, using the same Rapiwa token
- Body Parameters:
  - number: ={{ $json.data.phone }}
  - message_type: text
  - message: Hi {{ $('Cleane Number').item.json.name }}, Thanks! Your form has been submitted successfully.

This sends a confirmation message via WhatsApp to the verified number.

*Google Sheets – Verified Data*

Add a Google Sheets node under the TRUE branch (after the message is sent) and set:

- Operation: Append
- Document ID: choose your connected Google Sheet
- Sheet Name: set to your active sheet (e.g., Sheet1)
- Column mapping:
  - Business Name: ={{ $('Cleane Number').item.json.business_name }}
  - Location: ={{ $('Cleane Number').item.json.location }}
  - WhatsApp Number: ={{ $('Cleane Number').item.json.whatsapp }}
  - Email : ={{ $('Cleane Number').item.json.email }}
  - Name: ={{ $('Cleane Number').item.json.name }}
  - Date: ={{ $('Cleane Number').item.json.submitted_date }}
  - validity: verified

Use OAuth2 Google Sheets credentials for access.

> Note: Make sure the sheet has matching column headers.

*Google Sheets – Unverified Data*

Add a Google Sheets node under the FALSE branch of the If node. Use the same settings as the verified node, but set validity: unverified. This stores entries with unverified WhatsApp numbers in the same Google Sheet.

*Wait Node*

Add a Wait node after both Google Sheets nodes and set the wait time to 2 seconds. This delay prevents API throttling and adds buffer time before processing the next item in the batch.

**Google Sheet Column Reference**

A Google Sheet formatted like this ➤ Sample Sheet

| Business Name | Location | WhatsApp Number | Email | Name | validity | Date |
|---------------|----------|-----------------|-------|------|----------|------|
| SpaGreen Creative | Dhaka, Bangladesh | 8801322827799 | contact@spagreen.net | Abdul Mannan | unverified | 2025-09-14 |
| SpaGreen Creative | Bangladesh | 8801322827799 | contact@spagreen.net | Abdul Mannan | verified | 2025-09-14 |

> Note: The "Email " column header includes a trailing space. Ensure your column headers match exactly to prevent data misalignment.
**How to customize the workflow**

- Modify the confirmation message with your brand tone
- Add input validation for missing or malformed fields
- Route unverified submissions to a separate spreadsheet or alert channel
- Add Slack or email notifications on new verified entries

**Notes & Warnings**

- Ensure your Google Sheets credential has access to the target sheet
- Rapiwa requires an active subscription for API access
- Monitor Rapiwa API limits and adjust the wait time as needed
- Keep your webhook URL protected to avoid misuse

**Support & Community**

- WhatsApp Support: Chat Now
- Discord: Join SpaGreen Community
- Facebook Group: SpaGreen Support
- Website: spagreen.net
- Developer Portfolio: Codecanyon SpaGreen
by Cheng Siong Chin
**How It Works**

This workflow automates student progress monitoring and academic intervention orchestration through AI-driven analysis. Designed for educational institutions, learning management systems, and academic advisors, it addresses the challenge of identifying at-risk students while coordinating timely interventions across faculty and support services.

The system receives student data via webhook, fetches historical learning records, and merges these sources for comprehensive progress analysis. It employs a dual-agent AI framework for student progress validation and academic orchestration, detecting performance gaps, engagement issues, and intervention opportunities. The workflow routes findings based on validation status, triggering orchestration actions for students requiring support while logging compliant progress for successful learners. By executing multi-channel interventions through HTTP APIs and email notifications, it ensures educators and students receive timely guidance while maintaining complete audit trails for academic accountability and accreditation compliance.

**Setup Steps**

1. Configure the Student Data Webhook trigger endpoint (an example payload is sketched below).
2. Connect the Workflow Configuration node with academic performance parameters.
3. Set up the Fetch Student Learning History node with LMS API credentials.
4. Configure the Merge Student Data node for data consolidation.
5. Connect the Student Progress Validation Agent with Claude/OpenAI API credentials.
6. Set up the AI processing nodes.
7. Configure the Route by Validation Status node with performance thresholds.
8. Connect the Academic Orchestration Agent with AI API credentials for intervention planning.
9. Set up orchestration processing.

**Prerequisites**

Claude/OpenAI API credentials for the AI agents; learning management system API access.

**Use Cases**

Universities identifying students requiring academic support; online learning platforms detecting engagement drops.

**Customization**

Adjust validation thresholds to match institutional academic standards.

**Benefits**

Reduces student identification lag by 75% and eliminates manual progress tracking.
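For orientation, the incoming student-data webhook payload might look like the following; every field name here is illustrative, as the template defines its own schema:

```json
{
  "student_id": "stu-10482",
  "course_id": "MATH-201",
  "current_grade": 62.5,
  "attendance_rate": 0.71,
  "last_login": "2024-04-28T16:42:00Z",
  "assignments_overdue": 3
}
```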