by Jimleuk
This n8n template summarizes individual team member activity on MS Teams for the past week and generates a report. For remote teams, chat is a crucial communication tool for getting work done, but with so many conversations happening at once and across multiple threads, ideas, information and decisions live in the moment, get lost just as quickly - and are altogether forgotten by the weekend! With this template, that doesn't have to be the case. Have AI crawl through last week's activity, summarize all messages and replies, and generate a casual, snappy report to bring the team back into focus for the current week. A project manager's dream!

### How it works

- A scheduled trigger runs every Monday at 6am to gather all team channel messages from the last week. Messages are grouped by user (see the sketch below).
- AI analyses the raw messages and replies to pull out interesting observations and highlights. These are the individual reports.
- All individual reports are then combined and summarized into what becomes the team weekly report. This surfaces group-level and overlapping activities.
- Finally, the team weekly report is posted back to the channel. The timing is important: it should be the first message of the week, ready for the team to glance over with their morning coffee.

### How to use

- Works best per project, where most of the comms happen in a single channel. Avoid combining channels; instead, duplicate this workflow for each additional channel.
- You may need to filter for specific team members if you only want updates for part of the team.
- Customise the report to suit your organisation, team or channel. You may prefer a more formal tone if clients or external stakeholders are also present.

### Requirements

- MS Teams for chat platform
- OpenAI for LLM

### Customising this workflow

- If the Teams channel is already busy enough, consider sending the final report by email instead.
- Pull in project metrics to include in your report. As extra context, it may be interesting to tie the messages to production performance.
- Use an AI Agent to query a knowledgebase or tickets relevant to the messages. This can be useful for attaching links or references that add context.
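For reference, the grouping step mentioned under "How it works" could be implemented with a Code node along these lines. This is a minimal sketch, not the template's exact code: field names such as `from.user.displayName` follow the Microsoft Graph chat message shape and may need adjusting to your node's actual output.

```javascript
// n8n Code node: group channel messages by author.
// Assumes each input item is one Teams message as returned by the
// Microsoft Graph API (from.user.displayName, body.content, createdDateTime).
const byUser = {};

for (const item of $input.all()) {
  const msg = item.json;
  const user = msg.from?.user?.displayName ?? 'Unknown';
  if (!byUser[user]) byUser[user] = [];
  byUser[user].push({
    text: msg.body?.content ?? '',
    createdAt: msg.createdDateTime,
  });
}

// Emit one item per user so the AI step can build one individual report each.
return Object.entries(byUser).map(([user, messages]) => ({
  json: { user, messageCount: messages.length, messages },
}));
```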
by Nikan Noorafkan
## AI-Powered Content Marketing Research Tool

> Transform your content strategy with automated competitor intelligence

### What It Does

Never miss a competitor move again. This workflow automatically:

- Monitors competitor content across multiple domains
- Tracks trending keywords by region
- Extracts audience pain points from Reddit & forums
- Generates AI strategy recommendations via OpenAI
- Outputs to Airtable, Notion & Slack for instant action

### Perfect For

- **Growth marketers** tracking competitor strategies
- **Content teams** discovering trending topics
- **SEO specialists** finding keyword opportunities
- **Marketing agencies** managing multiple clients

### Technical Setup

Required APIs & credentials:

| Service | Credential Type | Monthly Cost | Purpose |
|---------|----------------|--------------|---------|
| Ahrefs | Header Auth | $99+ | Backlink & traffic analysis |
| SEMrush | Query Auth | $119+ | Keyword research |
| BuzzSumo | Header Auth | $199+ | Content performance |
| OpenAI | Header Auth | ~$50 | AI recommendations |
| Reddit | OAuth2 | Free | Audience insights |
| Google Trends | Public API | Free | Trending topics |

### Database Schema

Airtable base: `content-research-base`

Table 1: `competitor-intelligence`
- timestamp (Date)
- domain (Single line text)
- traffic_estimate (Number)
- backlinks (Number)
- content_gaps (Long text)
- publishing_frequency (Single line text)

Table 2: `keyword-opportunities`
- timestamp (Date)
- trending_keywords (Long text)
- top_questions (Long text)
- content_opportunities (Long text)

### Quick Start Guide

Step 1: Import & Configure
1. Import the workflow JSON
2. Update competitor domains in Configuration Settings
3. Map all API credentials

Step 2: Setup Storage
- **Airtable:** Create a base with the exact schema above
- **Notion:** Create a database with the properties listed
- **Slack:** Create a #content-research-alerts channel

Step 3: Test & Deploy
The first run populates:
- Airtable tables with competitor data
- Notion database with AI insights
- Slack channel with formatted alerts

### Example Output

AI recommendations format:

```json
{
  "action_items": [
    {
      "topic": "Copy trading explainer",
      "format": "Video",
      "region": "UK",
      "priority": "High"
    }
  ],
  "publishing_calendar": [
    {"week": "W34", "posts": 3}
  ],
  "alerts": [
    "eToro gained 8 .edu backlinks this week"
  ]
}
```

Slack alert preview:

Content Research Alert

Top Findings:
- Sustainable packaging solutions
- Circular economy trends
- Eco-friendly manufacturing

Trending Keywords:
- forex trading basics (+45%)
- social trading platforms (+32%)
- copy trading strategies (+28%)

AI Recommendations: Focus on educational content in the UK market...
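To make the Airtable schema concrete (and to help avoid the 422 field-mismatch error noted in Troubleshooting below), a record written to `competitor-intelligence` would look roughly like this. All values are illustrative:

```json
{
  "timestamp": "2024-08-19",
  "domain": "etoro.com",
  "traffic_estimate": 1250000,
  "backlinks": 48210,
  "content_gaps": "No beginner-friendly copy trading guides; weak video coverage",
  "publishing_frequency": "3 posts/week"
}
```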
### Advanced Features

Data quality validation:
- **Automatic retry** for failed API calls
- **Data validation** before storage
- **Error notifications** via Slack

Scalability options:
- **Multi-region support** (US, UK, DE, FR, JP)
- **Batch processing** for large competitor lists
- **Rate limiting** to respect API quotas

Customization ready:
- **Modular design** - disable unused APIs
- **Industry templates** - forex, ecommerce, SaaS
- **Custom scoring** algorithms

### ROI & Performance

Cost analysis:
- **Setup time:** ~2 hours
- **Monthly API costs:** $400-500
- **Time saved:** 15+ hours/week
- **ROI:** 300%+ within the first month

Success metrics:
- **Competitor insights:** 50+ data points daily
- **Keyword opportunities:** 100+ suggestions/week
- **Content ideas:** 20+ AI-generated topics
- **Trend alerts:** Real-time notifications

### Troubleshooting

Common issues & solutions:

| Symptom | Cause | Fix |
|---------|-------|-----|
| OpenAI timeout | Large data payload | Reduce batch size / split processing |
| Airtable 422 error | Field mismatch | Copy the schema exactly |
| Reddit 401 | OAuth expired | Re-authorize the application |

Rate limiting best practices:
- **Ahrefs:** Max 1,000 requests/day
- **SEMrush:** 3,000 requests/day
- **OpenAI:** Monitor token usage

### Why Choose This Template?

> "From manual research to automated intelligence in 15 minutes"

- **Production-ready** - no additional coding required
- **Cost-optimized** - uses free tiers where possible
- **Scalable** - add competitors with one click
- **Actionable** - AI outputs ready for immediate use
- **Community-tested** - 500+ successful deployments

Start your competitive intelligence today. Built with ❤️ for the n8n community.
by Greg Evseev
This workflow template provides a robust solution for efficiently sending multiple prompts to Anthropic's Claude models in a single batch request and retrieving the results. It leverages the Anthropic Batch API endpoint (/v1/messages/batches) for optimized processing and outputs each result as a separate item.

### Core Functionality & Example Usage Included

This template includes:
- **The Core Batch Processing Workflow:** designed to be called by another n8n workflow.
- **An Example Usage Workflow:** a separate branch demonstrating how to prepare data and trigger the core workflow, including examples using simple strings and n8n's Langchain Chat Memory nodes.

### Who is this for?

This template is designed for:
- **Developers, data scientists, and researchers** who need to process large volumes of text prompts using Claude models via n8n.
- **Content creators** looking to generate multiple pieces of content (e.g., summaries, Q&As, creative text) based on different inputs simultaneously.
- **n8n users** who want to automate interactions with the Anthropic API beyond single requests, improve efficiency, and integrate batch processing into larger automation sequences.
- Anyone needing to perform bulk text generation or analysis tasks with Claude programmatically.

### What problem does this workflow solve?

Sending prompts to language models one by one can be slow and inefficient, especially when dealing with hundreds or thousands of requests. This workflow addresses that by:
- **Batching:** grouping multiple prompts into a single API call to Anthropic's dedicated batch endpoint (/v1/messages/batches).
- **Efficiency:** significantly reducing the time required compared to sequential processing.
- **Scalability:** handling large numbers of prompts (up to API limits) systematically.
- **Automation:** providing a ready-to-use, callable n8n structure for batch interactions with Claude.
- **Structured Output:** parsing the results and outputting each individual prompt's result as a separate n8n item.

Use cases:
- Bulk content generation (e.g., product descriptions, summaries).
- Large-scale question answering based on different contexts.
- Sentiment analysis or data extraction across multiple text snippets.
- Running the same prompt against many different inputs for research or testing.

### What the Core Workflow does

(Triggered by the 'When Executed by Another Workflow' node.)

1. **Receive Input:** the workflow starts when called by another workflow (e.g., using the 'Execute Workflow' node). It expects input data containing:
   - anthropic-version (string, e.g., "2023-06-01")
   - requests (JSON array, where each object represents a single prompt request conforming to the Anthropic Batch API schema)
2. **Submit Batch Job:** sends the formatted requests data via POST to the Anthropic API /v1/messages/batches endpoint to create a new batch job. Requires Anthropic credentials.
3. **Wait & Poll:** enters a loop that checks whether the processing_status of the batch job is ended. If not, it waits for a set interval (10 seconds by default in the 'Batch Status Poll Interval' node), then checks the batch job status again via GET to /v1/messages/batches/{batch_id}. Requires Anthropic credentials. This loop continues until the status is ended.
4. **Retrieve Results:** once the batch job is complete, it fetches the results file by making a GET request to the results_url provided in the batch status response. Requires Anthropic credentials.
5. **Parse Results:** the results are typically returned in JSON Lines (.jsonl) format.
   The 'Parse response' Code node splits the response text by newlines and parses each line into a separate JSON object, storing them in an array field (e.g., parsed).
6. **Split Output:** the 'Split Out Parsed Results' node takes the array of parsed results and outputs each result object as an individual item from the workflow.

### Prerequisites

- An active n8n instance (Cloud or self-hosted).
- An Anthropic API account with access granted to Claude models and the Batch API.
- Your Anthropic API Key.
- Basic understanding of n8n concepts (nodes, workflows, credentials, expressions, the 'Execute Workflow' node).
- Familiarity with JSON data structures for providing input prompts and understanding the output.
- Understanding of the Anthropic Batch API request/response structure.
- (For the Example Usage branch) Familiarity with n8n's Langchain nodes (@n8n/n8n-nodes-langchain) if you plan to adapt that part.

### Setup

1. **Import Template:** add this template to your n8n instance.
2. **Configure Credentials:** navigate to the 'Credentials' section in your n8n instance, click 'Add Credential', search for 'Anthropic' and select the Anthropic API credential type, then enter your Anthropic API Key and save the credential (e.g., name it "Anthropic account").
3. **Assign Credentials:** open the workflow and locate the three HTTP Request nodes in the core workflow: Submit batch, Check batch status, and Get results. In each of these nodes, select the Anthropic credential you just configured from the 'Credential for Anthropic API' dropdown.
4. **Review Input Format:** understand the required input structure for the When Executed by Another Workflow trigger node. The primary inputs are anthropic-version (string) and requests (array); an example payload is shown below. Refer to the Sticky Notes in the template and the Anthropic Batch API documentation for the exact schema required within the requests array.
5. **Activate Workflow:** save and activate the core workflow so it can be called by other workflows.

➡️ **Quick Start & Input/Output Examples:** look for the Sticky Notes within the workflow canvas! They provide crucial information, including examples of the required input JSON structure and the expected output format.

### How to customize this workflow

- **Input Source:** the core workflow is designed to be called. You build *another* workflow that prepares the anthropic-version and requests array and then uses the 'Execute Workflow' node to trigger this template. The included example branch shows how to prepare this data.
- **Model Selection & Parameters:** model (claude-3-opus-20240229, etc.), max_tokens, temperature, and other parameters are defined *within* each object inside the requests array you pass to the workflow trigger. You configure these in the workflow calling this template.
- **Polling Interval:** modify the 'Wait' node ('Batch Status Poll Interval') duration if you need faster or slower status checks (default is 10 seconds). Be mindful of potential rate limits.
- **Parsing Logic:** if Anthropic changes the result format or you have specific needs, modify the JavaScript code within the 'Parse response' Code node (a sketch of that logic is shown at the end of this section).
- **Error Handling:** enhance the workflow with more specific error handling for API failures (e.g., using 'Error Trigger' or checking HTTP status codes) or batch processing issues (batch.status === 'failed').
- **Output Processing:** in the workflow that *calls* this template, add nodes after the 'Execute Workflow' node to process the individual result items returned (e.g., save to a database or spreadsheet, send notifications).
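To illustrate the input expected by the trigger node, here is a minimal `requests` payload following the Anthropic Batch API schema (two prompts; the custom_id values are your own correlation keys, and the prompt texts are placeholders). Check the Sticky Notes and the Anthropic documentation for the authoritative schema:

```json
{
  "anthropic-version": "2023-06-01",
  "requests": [
    {
      "custom_id": "prompt-001",
      "params": {
        "model": "claude-3-opus-20240229",
        "max_tokens": 512,
        "messages": [
          { "role": "user", "content": "Summarize the attached release notes in three bullet points." }
        ]
      }
    },
    {
      "custom_id": "prompt-002",
      "params": {
        "model": "claude-3-opus-20240229",
        "max_tokens": 512,
        "messages": [
          { "role": "user", "content": "Write a one-sentence product description for a stainless steel water bottle." }
        ]
      }
    }
  ]
}
```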
### Example Usage Branch (Manual Trigger)

This template also contains a separate branch starting with the Run example Manual Trigger node.

- **Purpose:** this branch demonstrates how to construct the necessary anthropic-version and requests array payload.
- **Methods Shown:** it includes steps for creating a request object from a simple query string, and creating a request object using data from n8n's Langchain Chat Memory nodes (@n8n/n8n-nodes-langchain).
- **Execution:** it merges these examples, constructs the final payload, and then uses the Execute Workflow node to call the main batch processing logic described above. It finishes by filtering the results for demonstration.
- **Note:** this branch is for demonstration and testing. You would typically build your own data preparation logic in a separate workflow. The use of Langchain nodes is optional for the core batch functionality.

### Notes

- **API Limits:** according to the Anthropic API documentation, batches can contain up to 100,000 requests and be up to 256 MB in total size. Ensure your n8n instance has sufficient resources for large batches.
- **API Costs:** using the Anthropic API, including the Batch API, incurs costs based on token usage. Monitor your usage via the Anthropic dashboard.
- **Completion Time:** batch processing time depends on the number and complexity of prompts and current API load. The polling mechanism accounts for this variability.
- **Versioning:** always include the anthropic-version header in your requests, as shown in the workflow and examples. Refer to the Anthropic API versioning documentation.
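As referenced under "Parsing Logic" above, here is a minimal sketch of what the 'Parse response' Code node does with the JSONL results file. The field holding the raw response body (`data` here) depends on how the Get results HTTP Request node is configured, so treat this as an assumption:

```javascript
// n8n Code node: parse the Anthropic batch results file (.jsonl).
// Each non-empty line is one JSON object containing a custom_id and result.
const raw = $input.first().json.data ?? '';

const parsed = raw
  .split('\n')
  .map((line) => line.trim())
  .filter((line) => line.length > 0)
  .map((line) => JSON.parse(line));

// Store the array in one field; the downstream 'Split Out Parsed Results'
// node then emits each parsed result as its own item.
return [{ json: { parsed } }];
```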
by Blockia Labs
## Time Logging on Clockify Using Slack

### How it works

This workflow simplifies time tracking for teams and agencies by integrating Slack with Clockify. It enables users to log, update, or delete time entries directly within Slack, leveraging an AI-powered assistant for seamless and conversational interactions.

Key features include:
- **Effortless Time Logging**: create and manage time entries in Clockify without leaving Slack.
- **AI-Powered Assistant**: get step-by-step guidance to ensure accurate and efficient time logging.
- **Project and Client Management**: retrieve project and client information from Clockify effortlessly.
- **Overlap Prevention**: avoid overlapping entries with built-in time validation.
- **Automated Descriptions**: generate ethical, grammatically correct descriptions for time logs.

### Set up steps

1. **Prepare your integrations**
   - Ensure you have active accounts for both Slack and Clockify.
   - Generate your Clockify API credentials for the integration.
2. **Import the workflow**
   - Download and import the workflow template into your n8n instance.
   - Configure the workflow to connect with your Slack and Clockify accounts.
3. **Configure the workflow**
   - Add your Clockify API credentials in the workflow settings.
   - Set up the Slack Trigger to listen for app mentions or specific commands.
4. **Test the workflow**
   - Use Slack to create a time entry and verify it in Clockify.
   - Test updating and deleting existing entries to ensure smooth functionality.
   - Check for any overlapping time logs or incorrect data entries.

### Why use this workflow?

- **Efficiency**: eliminate the need to switch between tools for time tracking.
- **Accuracy**: AI-driven validation ensures error-free entries.
- **Automation**: simplify repetitive tasks like updating or deleting time logs.
- **Proactive Guidance**: a conversational assistant ensures smooth operations.
by A Z
Automatically scrape Meta Threads for posts hiring specific roles (e.g. automation engineers, video editors, graphic designers), filter for true hiring intent, deduplicate, and send alerts. We use automation roles as the example here.

### What it does

This workflow continuously scans Threads for fresh posts mentioning the roles you care about. It uses AI to filter out self-promotion and service ads, keeping only posts where the author is hiring. Qualified posts are saved to Google Sheets for tracking and sent to Telegram for instant alerts. It's ideal for freelancers, agencies, and job seekers who want a steady radar of opportunities.

### How it works (step by step)

1. **Schedule trigger** - runs on a set interval (e.g. every 12 hours).
2. **Scrape Threads posts** - fetches recent posts for multiple keywords (e.g., "n8n expert", "hire video editor", "graphic designer") via Apify.
3. **Merge results** - combines posts into a single stream.
4. **Normalize fields** - maps raw data into clean fields: text, author, URL, timestamp, profile link (see the sketch after this list).
5. **AI filter** - uses an AI Agent to:
   - accept only posts where someone is hiring (rejects "hire me" style self-promotion);
   - apply simple geography rules (e.g., allow US, UK, UAE, CA; pass unknowns);
   - exclude roles outside your scope.
6. **Deduplication** - checks Google Sheets to skip posts already seen.
7. **Save to Google Sheets** - writes qualified posts with full details.
8. **Telegram alerts** - sends you the matched post instantly so you can act.

### Who it's for

- Freelancers: get first dibs on gigs before others spot them.
- Agencies: build a client pipeline by tracking hiring signals.
- Job seekers: spot hidden opportunities in your target field.

### Customization ideas

- Swap keywords to monitor the roles you care about (e.g., "UI/UX designer", "motion graphics editor", "copywriter").
- Add Slack or Discord notifications instead of Telegram.
- Expand the geo rules to match your region.
- Use Sheets as a CRM: add columns for status, outreach date, etc.
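A minimal sketch of the "Normalize fields" step from the list above, written as an n8n Code node. The raw field names (`text`, `username`, `url`, `timestamp`) are assumptions about the Apify scraper's output and may need mapping to your actual dataset:

```javascript
// n8n Code node: normalize raw Apify Threads items into clean fields.
return $input.all().map((item) => {
  const post = item.json;
  return {
    json: {
      text: post.text ?? '',
      author: post.username ?? 'unknown',
      url: post.url ?? '',
      timestamp: post.timestamp ?? null,
      // Derive a profile link from the username when one is present.
      profileLink: post.username ? `https://www.threads.net/@${post.username}` : '',
    },
  };
});
```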
by Billy Christi
### Who is this for?

This workflow is perfect for:
- **HR professionals** seeking to automate employee and department management
- **Startups and SMBs** that want an AI-powered HR assistant on Telegram
- **Internal operations teams** that want to simplify onboarding and employee data tracking

### What problem is this workflow solving?

Managing employee databases manually is error-prone and inefficient, especially for growing teams. This workflow solves that by:
- Enabling natural language-based HR operations directly through Telegram
- Automating the creation, retrieval, and deletion of employee records in Airtable
- Dynamically managing related data such as departments and job titles
- Handling data consistency and linking across relational tables automatically
- Providing a conversational interface backed by OpenAI for smart decision-making

### What this workflow does

Using Telegram as the interface and Airtable as the backend database, this intelligent HR workflow allows users to:
- Chat in natural language (e.g. "Show me all employees" or "Create employee: Sarah, Marketing…")
- Interpret and route requests via an AI Agent that acts as the orchestrator
- Query employee, department, and job title data from Airtable
- Create or update records as needed:
  - Add new departments and job titles automatically if they don't exist
  - Create new employees and link them to the correct department and job title
- Delete employees based on ID
- Respond directly in Telegram, providing user-friendly feedback

### Setup

1. View & copy the Airtable base here: Employee Database Management - Airtable Base Template
2. Telegram Bot: set up a Telegram bot and connect it to the Telegram Trigger node
3. Airtable: prepare three Airtable tables:
   - Employees, with links to Departments and Job Titles
   - Departments, with Name & Description
   - Job Titles, with Title & Description
4. Connect your Airtable API key and base/table IDs in the appropriate Airtable nodes (a sample create payload is shown at the end of this section)
5. Add your OpenAI API key to the AI Agent nodes
6. Deploy both workflows: the main chatbot workflow and the employee creation sub-workflow
7. Test with sample messages like:
   - "Create employee: John Doe, john@company.com, Engineering, Software Engineer"
   - "Remove employee ID rec123xyz"

### How to customize this workflow to your needs

- **Switch databases**: replace Airtable with Notion, PostgreSQL, or Google Sheets if desired
- **Enhance security**: add authentication and validation before allowing deletion
- **Add approval flows**: integrate Telegram button-based approvals for sensitive actions
- **Multi-language support**: expand system prompts to support multiple languages
- **Add logging**: store every user action in a log table for auditability
- **Expand capabilities**: integrate payroll, time tracking, or Slack notifications

### Extra Tips

- This is a two-workflow setup. Make sure the sub-workflow is deployed and accessible from the main agent.
- Use Simple Memory per chat ID to preserve context across user queries.
- You can expand the orchestration logic by adding more tools to the main agent, such as "Get active employees only" or "List employees by job title."
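When the sub-workflow creates an employee, Airtable's linked-record fields take arrays of record IDs from the related tables. A hedged example of what the create body could look like (the field names mirror the base template, but your table and record IDs will differ):

```json
{
  "fields": {
    "Name": "John Doe",
    "Email": "john@company.com",
    "Department": ["recDept123Engineering"],
    "Job Title": ["recTitle456SoftwareEng"]
  }
}
```

This is why the workflow first looks up (or creates) the department and job title: it needs their record IDs before it can link the new employee to them.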
by Joey D'Anna
This template is an error handler that will log n8n workflow errors to a Monday.com board for troubleshooting and tracking.

### Prerequisites

- A Monday account and Monday credential
- A board on Monday for error logging, with the following columns and types:
  - Timestamp (text)
  - Error Message (text)
  - Stack Trace (long text)
- The column IDs, determined using Monday's instructions

### Setup

1. Edit the Monday nodes to use your credential.
2. Edit the node labeled CREATE ERROR ITEM to point to your error log board and group name.
3. Edit the column IDs in the "Column Values" field of the UPDATE node to match the IDs of the fields on your error log board (see the example below).
4. To trigger error logging, select this workflow as the error workflow on any other workflow.
5. For more detailed logging, add Stop and Error nodes in your workflows to send specific error messages to your board.
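For orientation, Monday's column values are expressed as JSON keyed by column ID. A sketch of what the "Column Values" field might contain is below; the column IDs (`timestamp_text`, etc.) are placeholders for illustration, so substitute the IDs you looked up on your own board, and note that long-text columns generally take an object with a `text` property:

```json
{
  "timestamp_text": "2024-08-19T14:32:07Z",
  "error_message_text": "Cannot read properties of undefined (reading 'data')",
  "stack_trace_long_text": { "text": "NodeOperationError at HTTP Request node ..." }
}
```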
by Kumar Shivam
## Complete AI Product Description Generator

Transforms product images into high-converting copy with GPT-4o Vision + Claude 3.5.

The Shopify AI Product Description Factory is a production-grade n8n workflow that converts product images and metadata into refined, SEO-aware descriptions - fully automated and region-agnostic. It blends GPT-4o vision for visible attribute extraction, Claude 3.5 Sonnet for premium copy, Perplexity research for verified brand context, Google Sheets for orchestration and audit trails, plus automated daily sales analytics enrichment. Link-header pagination and structured output enforcement ensure reliable scale. To refine it for your use case, connect via my profile @connect.

### Key Advantages

- **Vision-first copywriting** - uses gpt-4o to identify only visible physical attributes (closure, heel, materials, sole) from product images; no guesses.
- **Premium copy generation** - anthropic/claude-3.5-sonnet crafts concise, benefit-led descriptions with consistent tone, length control, and clean formatting.
- **Research-assisted accuracy** - perplexityTool verifies vendor/brand context from official sources to avoid speculation or fabricated claims.
- **Pagination you can trust** - automates Shopify REST pagination via Link headers and persists page_info for resumable runs.
- **Google Sheets orchestration** - centralized staging, status tracking, and QA in Products, with ProcessingState for batch/page markers and Error_log for diagnostics.
- **Bulletproof error feedback** - errorTrigger + AI diagnosis logs clear non-technical and technical explanations to Error_log for fast recovery.
- **Automated sales analytics** - daily sales tracking automatically captures and enriches total sales data for comprehensive business intelligence and performance monitoring.

### How It Works

**Intake and filtering**
- httpRequest fetches /admin/api/2024-04/products.json?limit=200&{page_info}.
- A code node filters only items with:
  - an image present
  - an empty body_html
  - the currSeas:SS2025 tag
- It extracts tag metadata such as x-styleCode, country_of_origin, and gender when available.

**Pagination controller**
- A code node parses Link headers for rel="next" and extracts page_info (see the sketch below).
- googleSheets updates ProcessingState with page_info_next and increments the batch number for resumable polling.

**Generation pipeline**
- googleSheets pulls rows with Status = Ready for AI Description; limit throttles the batch size.
- openAi Analyze image (model gpt-4o) returns strictly visible features.
- lmChatOpenRouter (Claude 3.5) composes the SEO description, optionally blending verified vendor context from perplexityTool.
- outputParserStructured guarantees strict JSON: product_id, product_title (normalized), generated_description, status.
- googleSheets writes results back to Products for review/publish.

**Sales analytics enrichment**
- A Schedule Trigger runs daily at 2:01 PM to capture the previous day's sales.
- httpRequest fetches paid orders from the Shopify REST API with date-range filtering.
- splitOut and summarize nodes calculate total daily sales.
- Totals are logged to Google Sheets automatically with date stamps.
- Zero-sale days are properly recorded for complete analytics continuity.

**Reliability and insight**
- errorTrigger routes failures to an AI agent that explains the root cause and appends a concise note to Error_log.
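A minimal sketch of the Link-header pagination controller described above. The template's actual Code node may differ; this assumes the HTTP Request node is set to return full response details so headers are available on the incoming item:

```javascript
// n8n Code node: pull the next page_info token out of a Shopify Link header.
// Shopify Link headers look like:
// <https://shop.myshopify.com/admin/api/2024-04/products.json?page_info=abc123&limit=200>; rel="next"
const linkHeader = $input.first().json.headers?.link ?? '';

const nextPart = linkHeader
  .split(',')
  .map((part) => part.trim())
  .find((part) => part.includes('rel="next"'));

const pageInfoNext = nextPart
  ? new URL(nextPart.slice(nextPart.indexOf('<') + 1, nextPart.indexOf('>')))
      .searchParams.get('page_info')
  : null;

// Persist page_info_next to ProcessingState; null means the last page was reached.
return [{ json: { page_info_next: pageInfoNext } }];
```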
### What's Inside (Node Map)

**Data + API**
- httpRequest (Shopify REST 2024-04 for products and orders)
- googleSheets (multiple sheet operations)
- googleSheetsTool (error logging)

**AI models**
- openAi (gpt-4o vision analysis)
- lmChatOpenRouter (anthropic/claude-3.5-sonnet for content generation)
- AI Agent (intelligent error diagnosis)

**Analytics & processing**
- splitOut (order data processing)
- summarize (sales totals calculation)
- set nodes (data field mapping)

**Tools and guards**
- perplexityTool (brand research)
- outputParserStructured (JSON validation)
- memoryBufferWindow (conversation context)

**Control & scheduling**
- scheduleTrigger (multiple time-based triggers)
- cron (periodic execution)
- limit (batch size control)
- if (conditional logic)
- code (custom filtering and pagination logic)

**Observability**
- errorTrigger + AI diagnosis to Error_log
- Processing state tracking
- Sales analytics logging

### Content & Compliance Rules

- **Locale-agnostic copy**; brand voice is configurable per store.
- **Only image-verifiable attributes** (no guesses); clean HTML suitable for Shopify themes.
- Optional normalization rules (e.g., color/branding cleanup, title sanitization).
- Style code inclusion supported when x-styleCode is present.
- Gender-aware content generation when a gender tag is present.
- **Strict JSON output** and schema consistency for safe downstream publishing.

### Setup Steps

**Core integrations**
- **Shopify Access Token** - Products read + Orders read (REST 2024-04)
- **OpenAI API** - gpt-4o vision
- **OpenRouter API** - Claude 3.5 Sonnet
- **Perplexity API** - vendor/market verification via perplexityTool
- **Google Sheets OAuth** - Products, ProcessingState, Error_log, Sales analytics

**Configure sheets**
- **ProcessingState** with fields: batch number, page_info_next
- **Products** with: Product ID, Product Title, Product Type, Vendor, Image url, Status, country of origin, x_style_code, gender, Generated Description
- **Error_log** with: timestamp, Reason of Error
- **Sales Analytics Sheet** with: Date, Total Sales

### Workflow Capabilities

- **Discovery and staging** - auto-paginates Shopify and stages eligible products in Sheets with reasons and timestamps.
- **Vision-grounded copywriting** - descriptions reflect only visible attributes plus verified brand context, in a concise, mobile-friendly structure with gender-aware tone.
- **Metadata awareness** - auto-injects x-styleCode, country_of_origin, and gender when present; natural SEO for brand and product type.
- **Sales intelligence** - automated daily sales tracking with Melbourne timezone support; handles zero-sale days and maintains complete historical records.
- **Error analytics** - layman + technical diagnoses logged to Error_log to shorten MTTR.
- **Safe output** - structured JSON via outputParserStructured for predictable row updates (see the example below).
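For orientation, the strict JSON enforced by outputParserStructured carries the four fields named above. A row update would contain something like this (all values are illustrative, including the status label):

```json
{
  "product_id": "8349182273",
  "product_title": "Leather Chelsea Boot",
  "generated_description": "<p>Crafted from smooth leather with elastic side gussets and a stacked heel for easy on-off wear.</p>",
  "status": "Ready for Review"
}
```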
### Credentials Required

- **Shopify Access Token** (Products + Orders read permissions)
- **OpenAI API Key** (GPT-4o vision)
- **OpenRouter API Key** (Claude 3.5 Sonnet)
- **Perplexity API Key**
- **Google Sheets OAuth**

### Ideal For

- **E-commerce teams** scaling compliant, on-brand product copy with comprehensive sales insights
- **Agencies and SEO specialists** standardizing image-grounded descriptions with performance tracking and analytics
- **Stores** needing resumable pagination, auditable content operations, and automated daily sales reporting in Sheets

### Advanced Features

- **Dual-workflow architecture**: content generation + sales analytics in one system
- Link-header pagination with page_info persistence in ProcessingState
- Title/content normalization (e.g., color removal) configurable per brand
- **Gender-aware copywriting** based on product tags
- Memory windows (memoryBufferWindow) to keep multi-step prompts consistent
- **Melbourne timezone support** for accurate daily sales cutoffs
- **Zero-sales handling** to ensure complete analytics continuity
- Structured output enforcement for downstream safety
- **AI-powered error diagnosis** with technical and layman explanations

### Time & Scheduling (Universal)

The workflow includes two independent schedules:
- **Content generation**: every 5 minutes (configurable) for product processing
- **Sales analytics**: daily at 2:01 PM Melbourne time for the previous day's sales

For globally distributed teams, schedule triggers and timestamps can be standardized on UTC to avoid regional drift.

### Pro Tip

Start with small batches (limit set to 10 or fewer) to validate both the copy generation and sales tracking flows. The workflow handles the two operations independently: content generation failures won't affect sales analytics, and vice versa. Monitor the Error_log sheet for any issues and use the ProcessingState sheet to track pagination progress.
by Incrementors
### Description

Add TikTok video URLs to a Google Sheet, and every morning at 8AM the workflow automatically processes each one, skipping cleanly if nothing is queued. WayinVideo summarizes each video, then all summaries are combined and sent to GPT-4o-mini in one call, which writes a 5-section daily digest: trend overview, per-video summaries, top 3 content patterns, action recommendations, and trending tags. A formatted Telegram message is sent to your team channel with auto-truncation at 4,000 characters, and everything is logged to Google Sheets. Built for social media teams, content agencies, and brand managers who want to track what is trending in their niche every morning without watching hours of videos.

### What This Workflow Does

- **Stops cleanly when the queue is empty** - an early IF check detects whether there are any pending videos and exits gracefully if there are none; no errors, no failed runs.
- **Summarizes each TikTok video via WayinVideo** - each video URL is submitted to WayinVideo's Summarization API, which returns a structured summary, key highlights, and tags.
- **Combines all video summaries before writing the digest** - an aggregation step collects every processed video into one combined text block so GPT sees all the content at once.
- **Writes a 5-section daily digest in one GPT call** - GPT produces a trend overview, one-line summaries per video, top 3 content patterns, 2-3 action recommendations, and top trending tags.
- **Sends a formatted Telegram message with auto-truncation** - the digest is built in Telegram Markdown with section headers and emoji icons, auto-truncated at 4,000 characters if it is too long.
- **Logs the digest to Google Sheets** - overview, top patterns, action recommendations, tags, and send status are saved to the Digest Log tab for your records.
- **Marks all processed videos in the queue** - every Video Queue row is updated with Processed status and today's date so the same videos are never processed again.

### Setup Requirements

Tools needed:
- n8n instance (self-hosted or cloud)
- WayinVideo account with API access
- OpenAI account with GPT-4o-mini API access
- Telegram Bot and a team channel or group chat
- Google Sheets (one spreadsheet with two tabs: Video Queue and Digest Log)

Credentials required:
- WayinVideo API key (pasted into "4. WayinVideo - Submit Summarization" and "6. WayinVideo - Get Summary Results")
- OpenAI API key
- Telegram Bot credential + Chat ID (used in "14. Telegram - Send Daily Digest")
- Google Sheets OAuth2 (used in "2. Google Sheets - Read Pending Videos", "15. Google Sheets - Log Digest", and "16. Google Sheets - Mark Videos Processed")

> ⚠️ The WayinVideo API key appears in 2 steps: replace YOUR_WAYINVIDEO_API_KEY in both "4. WayinVideo - Submit Summarization" and "6. WayinVideo - Get Summary Results". Missing either one will cause the workflow to fail.

> ⚠️ The Google Sheet ID appears in 3 steps: replace YOUR_TREND_SHEET_ID in "2. Google Sheets - Read Pending Videos", "15. Google Sheets - Log Digest", and "16. Google Sheets - Mark Videos Processed". All three must use the same Sheet ID.

Estimated setup time: 25-30 minutes

### Step-by-Step Setup

1. **Import the workflow** - open n8n, go to Workflows, choose Import from JSON, paste the workflow JSON, and click Import.
2. **Get your WayinVideo API key** - log in to your WayinVideo account, go to Account Settings, and copy your API key.
3. **Add your WayinVideo API key to node 4** - open "4. WayinVideo - Submit Summarization", find the Authorization header value Bearer YOUR_WAYINVIDEO_API_KEY, and replace YOUR_WAYINVIDEO_API_KEY with your actual key.
4. **Add your WayinVideo API key to node 6** - open "6. WayinVideo - Get Summary Results", find the same Authorization header, and replace YOUR_WAYINVIDEO_API_KEY with the same key.
5. **Connect OpenAI** - open "12. OpenAI - GPT-4o-mini Model", click the credential dropdown, add your OpenAI API key, and test the connection.
6. **Create a Telegram Bot** - open Telegram, search for @BotFather, send /newbot, follow the prompts, and copy the Bot Token BotFather gives you.
7. **Get your Telegram Chat ID** - add your bot to your team channel or group, send a message in the chat, then open this URL in a browser, replacing YOUR_BOT_TOKEN: https://api.telegram.org/botYOUR_BOT_TOKEN/getUpdates. Find the chat.id value in the response; it is a number (negative for groups).
8. **Connect Telegram in n8n** - open "14. Telegram - Send Daily Digest", click the credential dropdown, add a new Telegram credential, paste your Bot Token, and replace YOUR_TELEGRAM_CHAT_ID in the Chat ID field with your actual Chat ID.
9. **Create your Google Sheet** - open a new or existing Google Sheet, add a tab named exactly Video Queue, add these 6 headers: Video URL, Video Title, Niche / Category, Date Added, Status, Processed Date. Add your first TikTok URLs in the rows below, leaving Status blank.
10. **Add the Digest Log tab** - in the same spreadsheet, add a second tab named exactly Digest Log with these 9 headers: Digest Date, Niche, Videos Processed, Overview, Top Patterns, Action Recommendations, Top Tags, Telegram Sent, Sent On.
11. **Get your Google Sheet ID** - open the spreadsheet in a browser and copy the string between /d/ and /edit in the URL; this is your Sheet ID.
12. **Connect Google Sheets for reading** - open "2. Google Sheets - Read Pending Videos", replace YOUR_TREND_SHEET_ID with your actual Sheet ID, click the credential dropdown, add Google Sheets OAuth2, and authorize access.
13. **Connect Google Sheets for logging and marking** - open "15. Google Sheets - Log Digest", replace YOUR_TREND_SHEET_ID with the same Sheet ID, and confirm OAuth2 is selected. Repeat the same Sheet ID replacement in "16. Google Sheets - Mark Videos Processed".
14. **Activate the workflow** - toggle the workflow to Active; it will run automatically every day at 8AM. To test immediately, click node "1. Schedule - Every Day 8AM" and use the manual Execute option.

### How It Works (Step by Step)

**Step 1 - Schedule: Every Day 8AM.** The workflow fires automatically every day at 8AM using the cron expression 0 8 * * *. It can also be triggered manually at any time using the Execute option in n8n.

**Step 2 - Google Sheets: Read Pending Videos.** All rows from your Video Queue tab are read. Each row contains a TikTok video URL, title, niche, and date added. Every row, regardless of status, is passed forward for the empty queue check.

**Step 3 - IF: Any Pending Videos Today?** This is the empty queue gate. If the total number of rows is greater than zero (YES path), the workflow has videos to process and continues. If the sheet is empty or has no rows (NO path), the workflow stops cleanly: no error is thrown and no Telegram message is sent. This prevents the workflow from failing on mornings when no new videos have been added.

**Step 4 - HTTP: WayinVideo - Submit Summarization.** Each video URL is submitted to WayinVideo's Summarization API. Each submission returns a task ID for tracking.
If multiple videos are in the queue, this step runs once per video, sequentially.

**Step 5 - Wait: 60 Seconds.** The workflow pauses 60 seconds before the first status check. TikTok videos are typically shorter than webinar recordings, so 60 seconds is used here instead of 90.

**Step 6 - HTTP: WayinVideo - Get Summary Results.** A GET request checks the summarization results endpoint using the task ID from step 4. It returns the current status and, once complete, the summary text, highlights array, and tags array.

**Step 7 - IF: Summary Complete?** This is the polling gate. If the status equals SUCCEEDED (YES path), the summary is ready and the workflow moves to extraction. If still processing (NO path), the workflow routes to "8. Wait - 30 Seconds Retry", which pauses 30 seconds and then loops back to step 6 to check again. The retry loop runs automatically until SUCCEEDED.

**Step 8 - Wait: 30 Seconds Retry.** When the summary is not yet ready, the workflow waits 30 seconds and then returns to step 6 for another check.

**Step 9 - Code: Extract Summary Per Video.** The completed summary, highlights array, and tags array are extracted from the WayinVideo response. Highlights are joined as a pipe-separated string and tags as a comma-separated string. The video URL, title, niche, and date added from the sheet row are also packaged. This produces one clean data object per processed video.

**Step 10 - Code: Aggregate All Summaries.** After all videos have been processed individually, this step collects them into a single combined text block. Each video's data is formatted as a labeled block (Video N, URL, Summary, Key Highlights, Tags), separated by dashes. The total video count and the niche from the first video are also extracted. This single combined output is what GPT receives.

**Step 11 - AI Agent: Write Daily Digest.** GPT-4o-mini receives the combined summary block, today's date, the niche being tracked, and the total number of videos. It writes a 300-400 word digest in five labeled sections: DIGEST_OVERVIEW (1-2 sentence trend overview), VIDEO_SUMMARIES (one bullet per video: topic and why it is trending), TOP_PATTERNS (3 bullets of content patterns being used today), ACTION_RECOMMENDATIONS (2-3 specific things the team should create or do), and TOP_TAGS (top 8 tags from today's videos, comma-separated). Emojis are kept to section headers only.

**Step 12 - OpenAI: GPT-4o-mini Model.** This is the language model powering the digest writing.

**Step 13 - Code: Format Telegram Message.** All five labeled sections are extracted from the AI output using regex. A Telegram Markdown message is assembled with section headers, emoji icons, and the niche and video count as a subtitle line. If the full message exceeds 4,000 characters (Telegram's limit), it is auto-truncated at 3,900 characters with a note that the full version is in the Digest Log sheet. A sketch of this logic follows the walkthrough.

**Step 14 - Telegram: Send Daily Digest.** The formatted Markdown message is sent to your Telegram channel or group chat via the Bot API using your Chat ID. Markdown formatting is enabled so bold and italic text render correctly.

**Step 15 - Google Sheets: Log Digest.** One row is appended to your Digest Log tab with all 9 columns: digest date, niche, videos processed count, overview, top patterns, action recommendations, top tags, Telegram Sent set to Yes, and the current timestamp.

**Step 16 - Google Sheets: Mark Videos Processed.** Every Video Queue row that was processed today is updated with Status set to Processed and today's date in the Processed Date column. This prevents the same videos from appearing in tomorrow's digest.
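As referenced in step 13, here is a minimal sketch of the formatting and truncation logic. The section extraction assumes the labeled headings described in step 11; the exact regexes, field names (`output`, `niche`, `videoCount`), and message layout in the template may differ:

```javascript
// n8n Code node: assemble the Telegram digest and truncate if needed.
const ai = $input.first().json.output ?? '';

// Pull one labeled section out of the AI text (e.g. DIGEST_OVERVIEW: ...).
const section = (label) => {
  const m = ai.match(new RegExp(`${label}:?\\s*([\\s\\S]*?)(?=\\n[A-Z_]+:|$)`));
  return m ? m[1].trim() : '';
};

let message =
  `*Daily Trend Digest*\n` +
  `_${$json.niche ?? 'All niches'} - ${$json.videoCount ?? 0} videos_\n\n` +
  `*Overview*\n${section('DIGEST_OVERVIEW')}\n\n` +
  `*Video Summaries*\n${section('VIDEO_SUMMARIES')}\n\n` +
  `*Top Patterns*\n${section('TOP_PATTERNS')}\n\n` +
  `*Actions*\n${section('ACTION_RECOMMENDATIONS')}\n\n` +
  `*Top Tags*\n${section('TOP_TAGS')}`;

// Telegram rejects messages over 4096 characters; stay well under the limit.
if (message.length > 4000) {
  message = message.slice(0, 3900) +
    '\n\n_Truncated - the full digest is in the Digest Log sheet._';
}

return [{ json: { message } }];
```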
### Key Features

- ✅ **Graceful empty queue handling** - the workflow checks whether any videos are queued before doing anything, so it runs daily without failing on days when nothing is added.
- ✅ **Aggregation step before GPT** - all per-video summaries are combined into one input so GPT can write cross-video trend analysis rather than analyzing each video in isolation.
- ✅ **5-section digest in one GPT call** - overview, video summaries, content patterns, action recommendations, and tags are all produced together, saving API cost and keeping the digest coherent.
- ✅ **Auto-truncation at 4,000 characters** - Telegram messages have a hard character limit; the workflow handles this automatically and tells your team to check the sheet for the full version.
- ✅ **Separate Digest Log and Video Queue tabs** - the two tabs serve different purposes: the queue manages what to process, the log archives every digest sent, and both live in the same spreadsheet.
- ✅ **60-second initial wait for TikTok videos** - shorter than the 90-second wait used for longer recordings, matched to the typical length of TikTok content.
- ✅ **Niche and date context passed to GPT** - the system prompt includes today's date and the niche being tracked so the digest feels current and category-specific rather than generic.

### Customisation Options

- **Change the daily run time** - in node "1. Schedule - Every Day 8AM", edit the cron expression from 0 8 * * * to a different time, for example 0 7 * * * for 7AM or 0 9 * * 1-5 for weekdays only at 9AM.
- **Add a retry limit to stop infinite polling** - before node "8. Wait - 30 Seconds Retry", add a Set step that increments a poll counter, then add a second IF check to stop after 15 polls and send a Telegram error message instead of looping indefinitely.
- **Track multiple niches in separate sheet tabs** - duplicate the Video Queue tab and name the new tabs by niche (e.g. Beauty, Finance, Fitness), then duplicate the workflow and point each copy at a different tab so your team gets a separate digest per niche.
- **Send the digest to multiple Telegram channels** - after node "14. Telegram - Send Daily Digest", duplicate the Telegram step with a different Chat ID to send the same digest to a second channel, for example a client-facing channel alongside your internal team channel.
- **Add a Slack message alongside the Telegram digest** - after node "14. Telegram - Send Daily Digest", add a Slack step that posts the overview section and top tags to a #trends channel so team members who are not on Telegram also receive the key highlights.

### Troubleshooting

**Workflow running daily but no digest being sent:**
- The most common cause is an empty Video Queue. If no rows exist in the sheet, step 3 stops the workflow cleanly with no output and no error; this is expected behavior.
- Confirm the workflow is Active and your n8n instance is running at 8AM; self-hosted instances that are off will not fire.
- To test immediately, click node "1. Schedule - Every Day 8AM" and use the manual Execute option.

**WayinVideo API key errors:**
- Confirm YOUR_WAYINVIDEO_API_KEY is replaced in both "4. WayinVideo - Submit Summarization" and "6. WayinVideo - Get Summary Results"; missing either one causes a 401 error.
- Check the execution log for the specific step that failed: node 4 errors mean the submission failed, node 6 errors mean the polling request failed.
- Confirm your WayinVideo account is active and the key has not expired.

**Polling loop getting stuck:**
- Check that each TikTok URL in the sheet is publicly accessible; private TikTok videos, deleted posts, or region-blocked content will not be processed by WayinVideo.
- Open the execution log of node "6. WayinVideo - Get Summary Results" and check the raw response; WayinVideo may have returned FAILED with a specific error.
- If a single video causes the loop to run indefinitely, remove it from the sheet, reactivate the workflow, and resubmit.

**Telegram message not sending:**
- Confirm YOUR_TELEGRAM_CHAT_ID in node "14. Telegram - Send Daily Digest" is replaced with your actual Chat ID; group chat IDs are negative numbers (e.g. -1001234567890).
- Confirm your Telegram Bot credential is connected and the Bot Token is valid; regenerate the token via @BotFather if needed.
- Make sure your bot has been added to the target channel or group and has permission to post messages.

**Google Sheets not logging or marking rows:**
- Confirm YOUR_TREND_SHEET_ID is replaced in all three steps: "2. Google Sheets - Read Pending Videos", "15. Google Sheets - Log Digest", and "16. Google Sheets - Mark Videos Processed".
- Confirm the tab names match exactly: Video Queue and Digest Log; capitalization and spacing must be exact.
- Check that the Google Sheets OAuth2 credential is connected in all three steps; it is easy to authorize in node 2 but forget nodes 15 and 16.

### Support

Need help setting this up, or want a custom version built for your team or agency?

- Email: info@incrementors.com
- Website: https://www.incrementors.com/
by Rajeet Nair
### Overview

This workflow implements an AI-powered incident investigation and root cause analysis system that automatically analyzes operational signals when a system incident occurs. When an incident is triggered via webhook, the workflow gathers operational context including application logs, system metrics, recent deployments, and feature flag changes. These signals are processed to detect error patterns, cluster similar failures, and correlate them with recent system changes.

The workflow uses vector embeddings to group similar log messages, allowing it to detect dominant failure patterns across services. It then aligns these failures with contextual events such as deployments, configuration changes, or traffic spikes to identify potential causal relationships. An AI agent analyzes all available evidence and generates structured root cause hypotheses, including confidence scores, supporting evidence, and recommended remediation actions. Finally, the workflow posts a detailed incident report directly to Slack, enabling engineering teams to quickly understand the issue and respond faster. This architecture helps teams reduce mean time to resolution (MTTR) by automating the early stages of incident investigation.

### How It Works

**1. Incident Trigger.** The workflow begins when an incident alert is received through a webhook endpoint. The webhook payload may include information such as:
- incident ID
- severity level
- timestamp
- affected service

This event starts the automated investigation process. (A sample payload is shown at the end of this section.)

**2. Workflow Configuration.** A configuration node defines the operational parameters used throughout the workflow, including:
- Logs API endpoint
- Metrics API endpoint
- Deployments API endpoint
- Feature flags API endpoint
- Time window for analysis
- Slack channel for incident notifications

This allows the workflow to be easily adapted to different observability stacks.

**3. Incident Context Collection.** The workflow collects system context from multiple sources:
- application logs
- infrastructure or service metrics
- recent deployments
- active feature flags

Gathering this information provides the signals required to understand what happened before and during the incident.

**4. Log Normalization and Denoising.** Raw logs are processed to remove low-value entries such as debug or informational messages. The workflow extracts structured error information including:
- timestamps
- log severity
- services involved
- request or session IDs
- error messages and stack traces

This step ensures that only relevant failure signals are analyzed.

**5. Failure Pattern Clustering.** Error messages are converted into embeddings using OpenAI. The workflow stores these embeddings in an in-memory vector store to group similar log messages together. This clustering step identifies dominant failure patterns that may appear across multiple sessions or services.

**6. Failure Pattern Analysis.** Clustered log data is analyzed to detect recurring error types and dominant failure clusters. The workflow calculates statistics such as:
- total error volume
- most common error types
- error distribution across clusters
- dominant failure patterns

These insights help highlight the primary issues affecting the system.

**7. Event Correlation Analysis.** Failure patterns are then aligned with contextual events such as:
- deployments
- configuration changes
- traffic spikes

The workflow calculates correlation scores based on temporal proximity and assigns likelihood scores to potential causes. This allows the system to identify events that may have triggered the incident.
**8. AI Root Cause Analysis.** An AI agent analyzes the collected signals and generates structured root cause hypotheses. The agent considers:
- error clusters
- deployment timing
- configuration changes
- traffic patterns
- system metrics

The output includes:
- multiple root cause hypotheses
- confidence scores
- supporting evidence
- recommended remediation actions

**9. Incident Ticket Creation.** The final analysis is formatted into a structured incident report and posted to Slack. The Slack message contains:
- incident metadata
- root cause hypotheses
- confidence scores
- evidence
- recommended actions
- affected services

This enables engineers to quickly review the investigation results and take action.

### Setup Instructions

**1. Configure Observability APIs.** Update the Workflow Configuration node with API endpoints for:
- Logs API
- Metrics API
- Deployments API
- Feature Flags API

These APIs should return JSON responses containing recent operational data.

**2. Configure OpenAI Credentials.** Add OpenAI credentials for:
- OpenAI Embeddings
- OpenAI Chat Model

These are used for log clustering and root cause analysis.

**3. Configure Slack Integration.** Add Slack credentials and specify the Slack channel ID in the configuration node. Incident reports will be posted automatically to this channel.

**4. Configure the Incident Trigger.** Deploy the webhook endpoint generated by the Incident Trigger node. Your monitoring or alerting system (PagerDuty, Grafana, Datadog, etc.) can call this webhook when incidents occur.

**5. Activate the Workflow.** Once configured, activate the workflow in n8n. When incidents are triggered, the workflow will automatically run the investigation pipeline and generate a Slack incident report.

### Use Cases

- **Automated Incident Investigation** - automatically analyze operational signals when alerts are triggered to identify possible causes.
- **AI-Assisted Site Reliability Engineering** - provide engineers with AI-generated root cause hypotheses and investigation insights.
- **Deployment Impact Detection** - detect whether a recent deployment or configuration change caused a system failure.
- **Observability Signal Correlation** - combine logs, metrics, and system events to produce a unified incident analysis.
- **Faster Incident Response** - reduce mean time to resolution (MTTR) by automating the early stages of incident debugging.

### Requirements

- n8n with LangChain nodes enabled
- OpenAI API credentials
- Slack credentials
- APIs for retrieving:
  - system logs
  - service metrics
  - deployment history
  - feature flag status
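As mentioned in step 1, the webhook payload carries the incident metadata that seeds the investigation. A sample body your alerting system might POST to the n8n webhook URL (the field names here are illustrative; use whatever your monitoring tool emits and map accordingly):

```json
{
  "incident_id": "INC-2024-0193",
  "severity": "critical",
  "timestamp": "2024-08-19T14:32:07Z",
  "affected_service": "checkout-api"
}
```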
by WeblineIndia
## Facebook Page Comment Moderation Scoreboard - Team Report

This workflow automatically monitors Facebook Page comments, analyzes them with AI for intent, toxicity and spam, stores moderation results in a database, and sends a clear summary report to Slack and Telegram.

The workflow runs every few hours to fetch Facebook Page comments and analyze them using OpenAI. Each comment is classified as positive, neutral or negative; checked for toxicity, spam and abusive language; and then stored in Supabase. A simple moderation summary is sent to Slack and Telegram.

You receive:
- Automated Facebook comment moderation
- AI-based intent, toxicity, and spam detection
- Database logging of all moderated comments
- Clean Slack & Telegram summary reports

Ideal for teams that want visibility into comment quality without manually reviewing every message.

### Quick Start - Implementation Steps

1. Import the workflow JSON into n8n.
2. Add your Facebook Page access token to the HTTP Request node.
3. Connect your OpenAI API key for comment analysis.
4. Configure your Supabase table for storing moderation data.
5. Connect Slack and Telegram credentials and choose target channels.
6. Activate the workflow; moderation runs automatically.

### What It Does

This workflow automates Facebook comment moderation by:
- Running on a scheduled interval (every 6 hours).
- Fetching recent comments from a Facebook Page.
- Preparing each comment for AI processing.
- Sending comments to OpenAI for moderation analysis.
- Extracting structured moderation data:
  - comment intent
  - toxicity score
  - spam detection
  - abusive language detection
- Flagging risky comments based on defined rules.
- Storing moderation results in Supabase.
- Generating a summary report.
- Sending the report to Slack and Telegram.

This ensures consistent, repeatable moderation with no manual effort.

### Who It's For

This workflow is ideal for:
- Social media teams
- Community managers
- Marketing teams
- Customer support teams
- Moderation and trust & safety teams
- Businesses managing high-volume Facebook Pages
- Anyone wanting AI-assisted comment moderation

### Requirements to Use This Workflow

To run this workflow, you need:
- **n8n instance** (cloud or self-hosted)
- **Facebook Page access token**
- **OpenAI API key**
- **Supabase project and table**
- **Slack workspace** with API access
- **Telegram bot** and chat ID
- Basic understanding of APIs and JSON (helpful but not required)

### How It Works

1. **Scheduled Trigger** - the workflow starts automatically every 6 hours.
2. **Fetch Comments** - Facebook Page comments are retrieved.
3. **Prepare Data** - comments are formatted for processing.
4. **AI Moderation** - OpenAI analyzes each comment.
5. **Normalize Results** - AI output is cleaned and standardized (an example of the normalized shape is shown below).
6. **Store Data** - moderation results are saved in Supabase.
7. **Aggregate Stats** - summary statistics are calculated.
8. **Send Alerts** - reports are sent to Slack and Telegram.

### Setup Steps

1. Import the workflow JSON into n8n.
2. Open the Fetch Facebook Page Comments node and add your Page ID and access token.
3. Connect your OpenAI account in the AI moderation node.
4. Create a Supabase table and map the fields correctly.
5. Connect Slack and select a reporting channel.
6. Connect Telegram and set the chat ID.
7. Activate the workflow.
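As a rough sketch of the normalized record that ends up in Supabase, one row could look like this. The field names are illustrative, so match them to your own table schema and to the JSON your moderation prompt returns:

```json
{
  "comment_id": "1234567890_987654321",
  "comment_text": "This product broke after two days, total scam!",
  "intent": "negative",
  "toxicity_score": 0.62,
  "is_spam": false,
  "is_abusive": false,
  "flagged": true,
  "moderated_at": "2024-08-19T14:00:00Z"
}
```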
### How To Customize Nodes

**Customize flagging rules** - update the normalization logic to:
- Change toxicity thresholds
- Flag only spam or abusive comments
- Add custom moderation rules

**Customize storage** - you can extend the Supabase fields to include:
- Language
- AI confidence score
- Reviewer notes
- Resolution status

**Customize notifications** - Slack and Telegram messages can include:
- Emojis
- Mentions (@channel)
- Links to the Facebook comments
- Severity labels

### Add-Ons (Optional Enhancements)

You can extend this workflow to:
- Auto-hide or delete toxic comments
- Reply automatically to positive comments
- Detect language and region
- Generate daily or weekly moderation reports
- Build dashboards using Supabase or BI tools
- Add escalation alerts for high-risk comments
- Track trends over time

### Use Case Examples

1. **Community Moderation** - automatically identify harmful or spam comments.
2. **Brand Reputation Monitoring** - spot negative sentiment early and respond faster.
3. **Support Oversight** - detect complaints or frustration in comments.
4. **Marketing Insights** - measure positive vs negative engagement.
5. **Compliance & Auditing** - keep historical moderation logs in a database.

### Troubleshooting Guide

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| No comments fetched | Invalid Facebook token | Refresh the token & permissions |
| AI output invalid | Prompt formatting issue | Use a strict JSON prompt |
| Data not saved | Supabase mapping mismatch | Verify the table fields |
| Slack message missing | Channel or credential error | Recheck the Slack config |
| Telegram alert fails | Wrong chat ID | Confirm the bot's permissions |
| Workflow not running | Trigger disabled | Enable the Cron node |

### Need Help?

If you need help customizing, scaling or extending this workflow - such as advanced moderation logic, dashboards, auto-actions or production hardening - our n8n workflow development team at WeblineIndia can assist with expert automation solutions.
by WeblineIndia
## AI-Powered Smart Deal Close Prediction and Salesforce CRM Auto-Update Workflow

This workflow acts as an automated, intelligent sales operations assistant. It continuously monitors your Salesforce account for newly updated opportunities, compares them against your historical win data, and uses a powerful AI (Groq Llama-3) to predict realistic close dates and win probabilities. If the AI is highly confident in its prediction, it automatically updates the deal in Salesforce. If the AI is uncertain, it emails a manager to review the deal manually. Everything is neatly logged in a Google Spreadsheet for easy tracking.

### Quick Implementation Steps

1. **Connect Credentials:** authenticate your Salesforce, Groq, Gmail and Google Sheets accounts within your n8n account.
2. **Prepare the Audit Sheet:** create a new Google Sheet and copy its Document ID into the two Google Sheets nodes.
3. **Set the Schedule:** adjust the Schedule Trigger to run at your preferred interval (the default is optimized for frequent checks).
4. **Activate:** turn on the workflow and watch your pipeline automatically clean itself.

### What It Does

First, the workflow wakes up on a set schedule and looks for two things in Salesforce: a small batch of your recently won deals (to understand what success looks like) and any open opportunities that were modified recently. It filters these to ensure it only spends time on active deals that actually have a dollar amount attached.

Next, it acts like a data scientist. It grabs the recent task history for each deal and calculates custom metrics, such as how fast the deal is moving, how long it has been open, and a "Risk Score" based on user engagement. All this data is packaged up and securely sent to a Groq LLM agent. The AI acts as a seasoned sales strategist, weighing these factors to predict a realistic timeline and the actual chance of winning the deal.

Finally, the workflow makes a decision based on the AI's confidence score. If the AI is 70% or more confident in its assessment, it goes straight into Salesforce and updates the target close date to keep your pipeline accurate. If the confidence is lower, it sends a formatted email via Gmail to alert a sales manager that a deal needs human attention. Whichever path is taken, every prediction and action is logged into a Google Sheet for your RevOps team to review.

### Who It's For

- **Sales Managers & Directors** who want an unbiased, data-driven view of when deals will actually close, rather than relying on gut feelings.
- **Revenue Operations (RevOps)** teams who need accurate pipeline data and want to automate the tedious process of "pipeline scrubbing."
- **CRM Administrators** who want to reduce the administrative burden on sales reps by automatically updating stagnant close dates.

### Requirements

To use this workflow, you will need an n8n account plus the following active accounts:
- **Salesforce:** with API access enabled to read opportunities and tasks, and update opportunities.
- **Groq:** an API key to access the Llama-3.3-70b AI model.
- **Gmail:** to send the low-confidence alerts.
- **Google Workspace / Sheets:** to maintain the automated audit logs.

### How It Works & Set Up

**1. App Authentication.** Before doing anything, ensure you have added your credentials for Salesforce (OAuth2), Groq (API Key), Gmail (OAuth2) and Google Sheets (OAuth2) in your n8n environment.

**2. Configure the Google Sheet.** You need a destination for the audit logs.
4. Review the AI Prompt
Open the "AI Deal Timeline Predictor" LangChain node and review the System Message. If your company has specific sales stages or unique risk factors, you can type them directly into the prompt to make the AI's predictions even smarter for your business.

How To Customize Nodes
- **Adjusting the Confidence Threshold:** Open the "Check Confidence Score" If node. It is currently set to 70. If you want the AI to be more aggressive with automatic updates, lower this number; if you want more manual reviews, raise it to 80 or 90.
- **Modifying Risk Calculations:** The "Calculate Deal Risk & Velocity" Code node contains JavaScript that assigns risk based on how long a deal has been open and how many tasks are associated with it. You can tweak the numbers in this code to better fit your typical sales cycle length (see the sketch after this list).
- **Changing the Alert System:** If you don't use Gmail, you can delete the Gmail node and replace it with a Slack or Microsoft Teams node to send the review alerts directly to a sales channel.
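As a starting point for those tweaks, here is a minimal sketch of the kind of scoring the "Calculate Deal Risk & Velocity" Code node applies. The exact thresholds, weights and field names (CreatedDate, taskCount) are assumptions, so check the node itself before editing.

```javascript
// Sketch of "Calculate Deal Risk & Velocity"-style scoring for an n8n Code node.
// Thresholds, weights and field names (CreatedDate, taskCount) are assumptions.
const results = [];

for (const item of $input.all()) {
  const opp = item.json;
  const daysOpen = (Date.now() - new Date(opp.CreatedDate).getTime()) / 86400000;
  const taskCount = opp.taskCount ?? 0;

  // Older deals with little logged activity accumulate more risk
  let risk = 0;
  if (daysOpen > 90) risk += 40;      // stale deal
  else if (daysOpen > 45) risk += 20; // aging deal
  if (taskCount < 3) risk += 30;      // low engagement
  if (taskCount === 0) risk += 20;    // no activity logged at all

  results.push({
    json: {
      ...opp,
      risk_score: Math.min(risk, 100),
      risk_label: risk >= 60 ? 'High' : risk >= 30 ? 'Medium' : 'Low',
    },
  });
}

return results;
```

If your sales cycle is long (say, six months), shift the daysOpen breakpoints up so healthy enterprise deals aren't penalized for normal aging.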
Add-ons
You can easily extend this workflow to do even more:
- **Push AI Advice to CRM:** Add another Salesforce update node to push the AI's next_best_action directly into a custom field on the Opportunity, giving the sales rep instant coaching.
- **Urgent SMS Alerts:** Connect a Twilio node alongside the Gmail node to text the VP of Sales if a massive deal (e.g., over $100k) receives a high risk score.
- **Weekly Summary:** Create a separate, simple workflow that reads the Google Sheet every Friday and emails a summary of all AI predictions to the executive team.

Use Case Examples
- **Automated Pipeline Scrubbing:** Automatically push out the close dates of neglected deals to the next quarter, ensuring the current quarter's forecast remains mathematically realistic without nagging sales reps.
- **Early Warning System for Stalled Deals:** Instantly alert managers when a high-value opportunity shows a sudden drop in engagement or task activity, allowing leadership to step in before the deal is lost.
- **Data-Driven Sales Coaching:** Use the AI's generated reasoning and recommended next steps to help junior account executives figure out how to unblock a complex negotiation.
- **Historical Win-Rate Benchmarking:** Compare the current active pipeline against deals that actually won in the past, giving RevOps a clear picture of whether current pipeline quality is better or worse than the previous quarter.
- **Enforcing CRM Hygiene:** Identify and flag opportunities that sit at a 90% win probability but haven't had a single phone call or email logged in three weeks.

Troubleshooting Guide
| Issue | Possible Cause | Solution |
| :--- | :--- | :--- |
| Workflow isn't processing any deals | Schedule and lookback timeframes don't match, or no deals were modified recently | Ensure the minutes in the Schedule node match the subtraction in the "Set Lookback Timeframe" node |
| "Invalid JSON returned from AI" error | The LLM ignored instructions and added extra conversational text (like "Here is your data:") | The workflow already has a "Parse AI Output" cleanup node (see the sketch at the end of this entry). If it still fails, adjust the Groq prompt to strictly enforce JSON-only responses |
| Google Sheets node fails to write data | The Google Sheet ID is missing, or the column headers in your sheet do not perfectly match the node | Verify the Document ID and ensure the headers in your sheet exactly match the 14 fields listed in the setup instructions above |
| Salesforce API limit errors | Fetching too much data too frequently | Increase the interval on the Schedule trigger (e.g., run every 30 minutes instead of every 5) to reduce API calls |
| AI close dates are completely wrong | The AI lacks context about your average sales cycle length | Edit the AI's System Message to state your average sales cycle (e.g., "Our standard enterprise deal takes 90 days to close") |

Need Help?
Building dynamic, AI-driven automation workflows can transform your business, but getting the data logic perfectly tuned sometimes requires an expert touch. If you need help setting up this workflow, customizing the JavaScript risk scoring, integrating it with a different CRM or building more advanced automation tailored to your operations, reach out to our n8n workflow developers at WeblineIndia for expert assistance.
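For reference, here is a minimal sketch of what a "Parse AI Output" cleanup step can do, as mentioned in the troubleshooting table above. The input field name (item.json.text) is an assumption; the shipped node may read a different property.

```javascript
// Sketch of a "Parse AI Output" cleanup step for an n8n Code node. Assumes the
// raw LLM response lives in item.json.text; adjust to the actual field name.
return $input.all().map((item) => {
  const raw = item.json.text ?? '';

  // Drop any conversational preamble/epilogue by keeping only the outermost {...}
  const start = raw.indexOf('{');
  const end = raw.lastIndexOf('}');
  if (start === -1 || end === -1) {
    throw new Error('No JSON object found in the AI response');
  }

  return { json: JSON.parse(raw.slice(start, end + 1)) };
});
```

This tolerates chatty responses like "Here is your data: {...}" while still failing loudly when no JSON object is present at all.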