by Joseph LePage
n8n Creators Leaderboard Workflow

Why Use This Workflow?

The n8n Creators Leaderboard Workflow is a powerful tool for analyzing and presenting detailed statistics about workflow creators and their contributions within the n8n community. It provides actionable insights into popular workflows, community trends, and top contributors, all while automating data retrieval and report generation.

Benefits

- **Discover Popular Workflows**: Identify workflows with the most unique visitors and inserters (weekly and monthly).
- **Understand Community Trends**: Gain insights into which workflows are resonating with the community.
- **Recognize Top Contributors**: Highlight impactful creators to foster collaboration and inspiration.
- **Save Time with Automation**: Automates data fetching, processing, and reporting.

Use Cases

- **For Workflow Creators**: Track performance metrics of your workflows to optimize them for better engagement.
- **For Community Managers**: Identify trends and recognize top contributors to improve community resources.
- **For New Users**: Explore popular workflows as inspiration for building your own automations.

How It Works

This workflow aggregates data from GitHub repositories containing statistics about workflow creators and their templates. It processes this data, filters it based on user input, and generates a detailed Markdown report using an AI agent.

Key Features

- **Data Aggregation**: Fetches creator and workflow statistics from GitHub JSON files.
- **Custom Filtering**: Focuses on specific creators based on a username provided via chat.
- **AI-Powered Reports**: Generates comprehensive Markdown reports with summaries, tables, and insights.
- **Output Flexibility**: Saves reports locally with timestamps for easy access.

Data Retrieval & Processing

- **Creators Data**: Retrieved via an HTTP Request node from a JSON file containing aggregated statistics about creators.
- **Workflows Data**: Pulled from another JSON file with workflow metrics like visitor counts and inserter statistics.
- **Data Merging**: Combines creator and workflow data by matching usernames to provide enriched statistics.

Report Generation

The AI agent generates a Markdown report that includes:
- A summary of the creator's contributions.
- A table of workflows with key metrics (e.g., unique visitors, inserters).
- Insights into trends or community feedback.

The report is saved locally as a file with a timestamp for tracking purposes.

Quick Start Guide

Prerequisites

- Ensure your n8n instance is running.
- Verify that the GitHub base URL and file variables are correctly set in the Global Variables node.
- Confirm that your OpenAI credentials are configured for the AI Agent node.

How to Start

1. Activate the Workflow: Make sure the workflow is active in your n8n environment.
2. Trigger via Chat: Use the Chat Trigger node to initiate the workflow by sending a message like `show me stats for username [desired_username]`, replacing `[desired_username]` with the username you want to analyze.
3. Processing & Report Generation: The workflow fetches data, processes it, and generates a Markdown report.
4. View Output: The final report is saved locally as a timestamped file, which you can review to explore leaderboard insights.
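The merge step above can be sketched as a small Code-node function. This is a minimal illustration, assuming each creators entry and each workflows entry carries a `username` field (the description says records are matched on username; the other field names are illustrative):

```javascript
// Merge creator stats with their workflow metrics, keyed by username.
function mergeByUsername(creators, workflows) {
  const byUser = new Map(creators.map((c) => [c.username, { ...c, workflows: [] }]));
  for (const wf of workflows) {
    const owner = byUser.get(wf.username);
    if (owner) owner.workflows.push(wf); // enrich the creator with workflow metrics
  }
  return [...byUser.values()];
}
```

A username filter from the chat message can then be applied to the merged array before the report is generated.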
by Rahi
WABA Message Journey Flow Documentation

This document outlines the automated workflow for sending WhatsApp messages to contacts, triggered hourly and managed through disposition and message-count logic. The workflow ensures contacts receive messages based on their status and the frequency of previous interactions.

Trigger and Data Retrieval

The journey begins with a time-based trigger and data retrieval from the Supabase contacts table.
- Trigger: A "Schedule Trigger3" node initiates the workflow every hour, ensuring the system regularly checks for contacts requiring messages.
- Get Contacts: The "Get many rows1" node (Supabase) retrieves all relevant contact data from the contacts_ampere table, including name, phone, Disposition, Count, and last_message_sent.

Disposition-Based Segregation

After retrieving the contacts, the workflow segregates them by their Disposition status.
- Disposition Switch: The "Disposition Switch" node acts as the primary routing mechanism. It evaluates each contact's Disposition field and directs them to different branches of the workflow based on predefined categories.
  - Case 0: new_lead: Contacts with the disposition new_lead are routed to the "Count Switch" for further processing.
  - Cases 1-4: The workflow also includes branches for test_ride, Booking, walk_in, and Sale dispositions, though the detailed logic for these branches is not fully laid out in the provided JSON beyond the switch nodes ("Switch2", "Switch3", "Switch4", "Switch5"). This documentation focuses on the new_lead disposition's detailed flow, which can be replicated for the others.

Message Count Logic (for new_lead Disposition)

For contacts identified as new_lead, the workflow uses a "Count Switch" to determine which message in the sequence should be sent.
- Count Switch: This node evaluates the Count field for each new_lead contact.
This Count likely represents the number of messages already sent to the contact within this specific journey.
  - Count = 0: Directs to "Loop Over Items1" (first message in sequence).
  - Count = 1: Directs to "Loop Over Items2" (second message in sequence).
  - Count = 2: Directs to "Loop Over Items3" (third message in sequence).
  - Count = 3: Directs to "Loop Over Items4" (fourth message in sequence).

Looping and Interval Check

Each "Loop Over Items" node processes contacts in batches and incorporates an "If Interval" check (except for Loop Over Items1).
- Loop Over Items ("Loop Over Items1" through "Loop Over Items4"): These nodes iterate through the contacts received from the "Count Switch" output.
- Interval Logic:
  - "If Interval" (for Count = 1 from "Loop Over Items2"): Checks if the interval is greater than or equal to 4. This interval value is maintained by a separate Supabase cron job, which updates it every minute as (current time - last API hit time) in hours.
  - "If Interval1" (for Count = 2 from "Loop Over Items3"): Checks if the interval is exactly 24 hours.
  - "If2" (for Count = 3 from "Loop Over Items4"): Checks if the interval is exactly 24 hours.

Sending WhatsApp Messages

If a contact passes the interval check (or immediately for Count = 0), a WhatsApp message is sent using the Gallabox API.
- HTTP Request Nodes (e.g., "new_lead_0", "new_lead_", "new_lead_3", "new_lead_2"): These nodes send the actual WhatsApp messages via the Gallabox API. They are configured with:
  - Method: POST
  - URL: https://server.gallabox.com/devapi/messages/whatsapp
  - Authentication: apiKey and apiSecret in the headers.
  - Body: Contains channelId, channelType (whatsapp), and recipient (including name and phone).
  - WhatsApp Message Content: Includes type: "template" and templateName (e.g., testing_rahi, wu_2, testing_rahi_1). The bodyValues dynamically insert the contact's name and other details.
Some messages also include buttonValues for quick replies (e.g., "Show me Brochure").

Logging and Updating Contact Status

After a message is sent (or attempted), the workflow logs the interaction and updates the contact's record.
- Create Logs ("Create Logs", "Create Logs1", "Create Logs2", "Create Logs3"): These Supabase nodes record details of each send attempt in the logs_nurture_ampere table, including:
  - message_id (from the Gallabox API response body)
  - phone and name of the contact
  - disposition and mes_count (which is Count + 1 from the contacts table)
  - last_sent (timestamp from the Gallabox API response headers)
  - status_code and status_message (from the Gallabox API response or error).
  These nodes are configured to "continueRegularOutput" on error, meaning the workflow proceeds even if logging fails.
- Status Code Check ("If StatusCode", "If StatusCode 202", "If StatusCode 203", "If StatusCode 204"): Immediately after a log is created, an "If" node checks whether the status_code from the send attempt is "202" (indicating acceptance by the messaging service).
- Update Contact Row ("Update a row1" through "Update a row4"): If the status code is 202, these Supabase nodes update the contacts_ampere table for the specific contact:
  - Count is incremented by 1 (Count + 1).
  - last_message_sent is updated with the date from the Gallabox API response headers.
  These nodes are also configured to "continueRegularOutput" on error.

This structured flow nurtures contacts through a sequence of WhatsApp messages, with each interaction logged and the contact's status updated for future reference and continuation of the journey.
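The payload assembled for the Gallabox call and the interval computation can be sketched as follows. The field names come from the description above, but the exact nesting of the template fields is an assumption, and `intervalHours` mirrors the Supabase cron job's formula rather than the job itself:

```javascript
// Assemble the Gallabox send-message body for one contact (nesting is assumed).
function buildGallaboxPayload(contact, channelId, templateName) {
  return {
    channelId,
    channelType: "whatsapp",
    recipient: { name: contact.name, phone: contact.phone },
    whatsapp: {
      type: "template",
      template: {
        templateName,
        bodyValues: { name: contact.name }, // contact details inserted dynamically
      },
    },
  };
}

// Interval used by the "If Interval" checks: hours since the last API hit.
function intervalHours(lastMessageSentIso, now = new Date()) {
  return (now - new Date(lastMessageSentIso)) / 36e5; // 36e5 ms per hour
}
```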
by Juan Carlos Cavero Gracia
This automation template is an AI-powered booking agent that schedules property viewings and reserves restaurant tables for you, all coordinated through Telegram. It checks your calendar to avoid conflicts, places the calls on your behalf, negotiates times, confirms details, and delivers a crisp summary back to Telegram, hands-free.

*Note: This workflow uses a voice-calling provider for outbound calls, your calendar for availability, and Telegram for notifications. Usage costs depend on your telephony provider, call duration, and any API usage.*

Who Is This For?

- **Home Buyers & Renters**: Queue up and confirm viewings without calling around.
- **Real Estate Agents & Property Managers**: Automate client viewing scheduling and confirmations.
- **Relocation Specialists & Assistants**: Coordinate multi-property tours with calendar-aware logic.
- **Busy Professionals**: Let AI handle restaurant bookings and post-viewing meals.
- **Concierge & Ops Teams**: Standardize bookings with structured logs and Telegram updates.

What Problem Does This Workflow Solve?

Scheduling property viewings and restaurant tables often means endless calls, conflicts, and coordination. This workflow removes the friction with:
- **AI Phone Calls on Your Behalf**: Natural voice calls to agents/venues to secure slots.
- **Calendar-Aware Booking**: Checks your real-time availability and avoids overlaps.
- **Preference Handling**: Location, budget, party size, time windows, language, and notes.
- **Instant Telegram Summaries**: Clear outcomes (confirmed, waitlist, action needed) and quick next steps.
- **Scalable Coordination**: Handles multiple properties and dining options with fallback logic.

How It Works

1. Intent Capture (Telegram): You send a simple message (e.g., "Viewings tomorrow 17:00–20:00, Eixample, 2-bed; table for 4 at 21:30 near there").
2. Calendar Check: Reads free/busy blocks and suggests viable windows or alternatives.
3. AI Calling: Places outbound calls to listing agents/restaurants, negotiates slots, and confirms.
4. Result Parsing: Extracts confirmed time, address, contact name, reservation name, and special instructions.
5. Telegram Delivery: Sends a concise recap plus optional quick-reply buttons (confirm/cancel/map/nav).
6. Optional Calendar Hold: Adds confirmed bookings to your calendar and blocks time.
7. Logging (Optional): Writes outcomes to a sheet/database for tracking and analytics.

Setup

1. Telephony Provider: Connect your AI calling service (API key). Configure voice/language.
2. Calendar Access: Link Google Calendar (or similar). Grant read (and optionally write) access.
3. Telegram Bot: Create a bot with BotFather and add the bot token to credentials.
4. Environment/Credentials: Store the calling API token, calendar credentials, and Telegram token in n8n.
5. Preferences: Set defaults for viewings (areas, price range, time windows) and dining (party size, cuisine, budget).
6. Test Run: Trial a single booking to verify calling, calendar sync, and Telegram delivery.

Requirements

- **Accounts**: n8n, telephony provider, calendar account, Telegram bot.
- **API Keys**: Telephony API token, calendar credentials, Telegram bot token.
- **Permissions**: Calendar read (and write if auto-blocking); outbound calls enabled.
- **Budget**: Telephony per-minute fees and minor API costs where applicable.

Features

- **Dual-Domain Booking**: Property viewings + restaurant tables in one flow.
- **Calendar Intelligence**: Checks conflicts and proposes best-fit time slots.
- **Telegram-Native UX**: Simple prompts, instant confirmations, and quick actions.
- **Preference Profiles**: Time windows, neighborhoods, max budget, language, and notes.
- **Fallbacks & Alternatives**: Suggests nearby times/venues if the first choice is unavailable.
- **Multilingual Voice**: Configure voice and language to match region/venue.
- **Structured Logs**: Optional recording of outcomes for reporting and audits.
- **Extensible**: Add CRM, maps, SMS, or payment links as needed.
Automate your day from tours to tables: message your intent on Telegram, and let the AI book, confirm, and keep your calendar clean—so you just show up on time.
by Rahul Joshi
📘 Description: This workflow automates the Developer Q&A Classification and Documentation process using Slack, Azure OpenAI GPT-4o, Notion, Airtable, and Google Sheets. Whenever a new message is posted in a specific Slack channel, the workflow automatically:

- Captures and validates the message data
- Uses GPT-4o (Azure OpenAI) to check if the question matches any existing internal FAQs
- Logs answered questions into Notion as new FAQ entries
- Sends unanswered ones to Airtable for human follow-up
- Records any workflow or API errors into Google Sheets

This ensures that every developer query is instantly categorized, documented, and tracked, turning daily Slack discussions into a continuously improving knowledge base.

⚙️ What This Workflow Does (Step-by-Step)

🟢 Slack Channel Trigger – Developer Q&A
Triggers the workflow whenever a new message is posted in a specific Slack channel. Captures message text, user ID, timestamp, and channel info.

🧩 Validate Slack Message Payload (IF Node)
Ensures the incoming message payload contains valid user and text data.
- ✅ True Path → Continues to extract and process the message
- ❌ False Path → Logs the error to Google Sheets

💻 Extract Question Metadata (JavaScript)
Cleans and structures the Slack message into a standardized JSON format, removing unnecessary characters and preparing a clean "question object" for AI processing.

🧠 Classify Developer Question (AI)
(Powered by Azure OpenAI GPT-4o) Uses GPT-4o to semantically compare the question with an internal FAQ dataset.
- If a match is found → Marks it as answered and generates a canonical response
- If not → Flags it as unanswered

🧾 Parse AI JSON Output (Code Node)
Converts GPT-4o's text output into structured JSON so that workflow logic can reference fields like status, answer_quality, and canonical_answer.

⚖️ Check If Question Was Answered (IF Node)
If status == "answered", the question is routed to Notion for documentation; otherwise, it's logged in Airtable for review.
📘 Save Answered Question to Notion FAQ
Creates a new Notion page under the "FAQ" database containing the question, the AI's canonical answer, and an answer-quality rating, automatically building a self-updating internal FAQ.

📋 Log Unanswered Question to Airtable
Adds unresolved or new questions to Airtable for manual review by the developer support team. These records later feed back into the FAQ training loop.

🚨 Log Workflow Errors to Google Sheets
Any missing payloads, parsing errors, or failed integrations are logged in Google Sheets (error log sheet) for transparent tracking and debugging.

🧩 Prerequisites:

- Slack API credentials (for the message trigger)
- Azure OpenAI GPT-4o API credentials
- Notion API connection (for the FAQ database)
- Airtable API credentials (for unresolved questions)
- Google Sheets OAuth connection (for error logging)

💡 Key Benefits:

✅ Automates Slack Q&A classification
✅ Builds and updates internal FAQs with zero manual input
✅ Ensures all developer queries are tracked
✅ Reduces redundant questions in Slack
✅ Maintains transparency with error logs

👥 Perfect For:

- Engineering or support teams using Slack for developer communication
- Organizations maintaining internal FAQs in Notion
- Teams wanting to automatically capture and reuse knowledge from real developer interactions
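The "Parse AI JSON Output" step can be sketched as a small Code-node function. This is a minimal illustration, assuming the model sometimes wraps its JSON in a markdown fence (a common failure mode such nodes guard against); the field names come from the description above:

```javascript
// Turn the model's text output into the structured fields the IF node reads.
function parseAiOutput(text) {
  // Strip ```json ... ``` fences if present, then parse.
  const cleaned = text.replace(/```(?:json)?/g, "").trim();
  const parsed = JSON.parse(cleaned);
  return {
    status: parsed.status === "answered" ? "answered" : "unanswered",
    answer_quality: parsed.answer_quality ?? null,
    canonical_answer: parsed.canonical_answer ?? "",
  };
}
```

The downstream IF node then only has to compare `status == "answered"` rather than inspect raw model text.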
by Guy
🎯 General Principles

This workflow automates the import of leads into the Company table of a CRM built with Airtable. Its originality lies in leveraging the new "Data Table" node (an internal table within n8n) to generate an execution report.

📚 Why Data Tables: This approach eliminates the need for read/write operations on a Google Sheet file or an external database.

🧩 It is structured around 3 main steps:
1. Reading leads whose email address validity has been verified.
2. Creating or updating company information.
3. Generating the execution report.

This workflow enables precise tracking of marketing actions while keeping a historical record of interactions with prospects and clients.

Prerequisites

- Leads file: A prior validation check on email address accuracy is required.
- Airtable: Must contain at least a Company table with the following fields:
  - Company: company name
  - Business Leader: name of the executive
  - Activity: business sector (notary, accountant, plumber, electrician, etc.)
  - Address: main company address
  - Zip Code: postal code
  - City: city
  - Phone Number: phone number
  - Email: email address of a manager
  - URL Site: company website URL
  - Opt-in: company's consent for commercial prospecting
  - Campaign: reserved for future marketing campaigns
  - Valid Email: indicator confirming email verification

⚙️ Step-by-Step Description

1️⃣ Initialization and Lead Selection
- Data Table Initialization: An internal n8n table is created to build the execution report.
- Lead Selection: The workflow selects leads from the Google Sheet file (Sheet1 tab) where "Valid Email" equals OK.

2️⃣ Iterative Loop
- Company Existence Check: The Search Company node is configured with Always Output Data enabled. A JavaScript code node then distinguishes three possibilities:
  - Company does not exist: create a new record and increment the created-records counter.
  - Company exists once: update the record and increment the updated-records counter.
  - Company appears multiple times: log the issue in the Leads file under the Logs tab, requiring a data quality procedure.

3️⃣ Execution Report Generation

An execution report is generated and emailed, for example:

Leads Import Report:
- Number of records read: 2392
- Number of records created: 2345
- Number of records updated: 42

If the sum of records created and updated differs from the total records read, duplicates are present. A counter for duplicated companies could be added.

✅ Benefits of this template

- Exception Management and Logging: Identification and traceability of inconsistencies during import, with dedicated logs for issues.
- Data Quality and Structuring: Built-in checks for duplicate detection, validation, and mapping to ensure accurate analysis and compliance.
- Automated Reporting: Systematic production and delivery of a detailed execution report covering records read, created, and updated.

📬 Contact

Need help customizing this (e.g., expanding Data Tables, connecting multiple surveys, or automating follow-ups)?
📧 smarthome.smartelec@gmail.com
🔗 guy.salvatore
🌐 smarthome-smartelec.fr
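The three-way branch in the iterative loop can be sketched as follows. This is an illustrative version of the code node's decision, assuming the Search Company node returns an array of matches; the counter and field names are assumptions:

```javascript
// Decide create/update/log from the number of company matches found.
function routeLead(matches, counters) {
  if (matches.length === 0) {
    counters.created += 1;
    return { action: "create" };
  }
  if (matches.length === 1) {
    counters.updated += 1;
    return { action: "update", recordId: matches[0].id };
  }
  // Duplicate companies: flag for the Logs tab / data quality procedure.
  return { action: "log_duplicate", count: matches.length };
}
```

The counters feed the execution report, which is why created + updated should equal records read when no duplicates exist.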
by Rahul Joshi
Description:

Make your SDK documentation localization-ready before translation with this n8n automation template. The workflow pulls FAQ content from Notion, evaluates each entry using Azure OpenAI GPT-4o-mini, and scores its localization readiness based on jargon density, cultural context, and translation risk. It logs results into Google Sheets and notifies your team on Slack if an FAQ scores poorly (≤5).

Perfect for developer documentation teams, localization managers, and globalization leads who want to identify high-risk content early and ensure smooth translation for multi-language SDKs.

✅ What This Template Does (Step-by-Step)

⚙️ Step 1: Fetch FAQs from Notion
Retrieves all FAQ entries from your Notion database, including question, answer, and unique ID fields for tracking.

🤖 Step 2: AI Localization Review (GPT-4o-mini)
Uses Azure OpenAI GPT-4o-mini to evaluate each FAQ for localization challenges such as:
- Heavy use of technical or cultural jargon
- Region-specific policy or legal references
- Non-inclusive or ambiguous phrasing
- Potential mistranslation risk

Outputs a detailed report including:
- Score (1–10) – overall localization readiness
- Detected Issues – list of problematic elements
- Priority – high, medium, or low for translation sequencing
- Recommendations – actionable rewrite suggestions

🧩 Step 3: Parse AI Response
Converts the raw AI output into structured JSON (score, issues, priority, recommendations) for clean logging and filtering.

📊 Step 4: Log Results to Google Sheets
Appends one row per FAQ, storing fields like Question, Score, Priority, and Recommendations, creating a long-term localization quality tracker.

🚦 Step 5: Filter High-Risk Content (Score ≤5)
Flags FAQs with low localization readiness for further review, ensuring that potential translation blockers are addressed first.
📢 Step 6: Send Slack Alerts
Sends a Slack message with summary details for all high-risk FAQs, including their score and key issues, keeping localization teams informed in real time.

🧠 Key Features

🌍 AI-powered localization scoring for SDK FAQs
🤖 Azure OpenAI GPT-4o-mini integration
📊 Google Sheets-based performance logging
📢 Slack notifications for at-risk FAQs
⚙️ Automated Notion-to-AI-to-Sheets pipeline

💼 Use Cases

🧾 Audit SDK documentation before translation
🌐 Prioritize localization tasks based on content risk
🧠 Identify FAQs that need rewriting for non-native audiences
📢 Keep global documentation teams aligned on translation readiness

📦 Required Integrations

- Notion API – to fetch FAQ entries
- Azure OpenAI (GPT-4o-mini) – for AI evaluation
- Google Sheets API – for logging structured results
- Slack API – for sending alerts on high-risk FAQs

🎯 Why Use This Template?

✅ Detect localization blockers early in your SDK documentation
✅ Automate readiness scoring across hundreds of FAQs
✅ Reduce translation rework and cultural misinterpretation
✅ Ensure a globally inclusive developer experience
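Steps 3 and 5 can be sketched together as one Code-node function plus a filter. This is a minimal illustration, assuming the model is prompted to reply in JSON; the four field names come from the description, and the fallback score of 0 is an assumed safety choice that forces unparseable output into the high-risk path:

```javascript
// Parse the AI localization review into the four tracked fields.
function parseReview(text) {
  try {
    const r = JSON.parse(text);
    return {
      score: Number(r.score) || 0,
      issues: r.issues ?? [],
      priority: r.priority ?? "high",
      recommendations: r.recommendations ?? [],
    };
  } catch {
    // Unparseable output is treated as maximally risky.
    return { score: 0, issues: ["unparseable AI output"], priority: "high", recommendations: [] };
  }
}

// Step 5 filter: anything scoring 5 or below is flagged for Slack alerting.
const isHighRisk = (review) => review.score <= 5;
```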
by Rahul Joshi
📘 Description: This workflow automates sales performance tracking and motivational updates by integrating HighLevel CRM, Notion, GPT-4o, and Slack. It pulls all deals from HighLevel, cleans and summarizes sales data per representative, creates performance dashboards in Notion, and uses GPT-powered AI to generate personalized motivational Slack messages. It eliminates manual leaderboard tracking and boosts sales engagement with real-time insights and daily motivation, ensuring every sales rep stays informed, recognized, and inspired.

What This Workflow Does (Step-by-Step)

🟢 Manual Trigger – Starts the automation manually for data refresh or testing.

📦 Fetch All Deals from HighLevel CRM – Retrieves all opportunities from HighLevel CRM, including deal names, reps, values, and stages for full visibility.

🔍 Validate Deal Fetch Success (IF Node) – Verifies that the fetched data contains valid deal IDs.
- ✅ True Path: Continues to data cleaning.
- ❌ False Path: Logs failed records to Google Sheets for debugging.

🧹 Clean & Structure Deal Data – Normalizes raw deal data into a consistent schema (deal ID, rep ID, client name, value, status). Ensures clean inputs for analytics.

📊 Summarize Sales by Representative – Aggregates deals per sales rep and computes:
- Total deals handled
- Total deal value
- Total deals won
- Average deal value

🧾 Generate Notion Performance Dashboard – Creates personalized Notion dashboards for each rep with daily updated performance summaries and motivation metrics.

⚙️ Transform Data for AI Input – Converts summarized data into an AI-readable format for GPT-4o processing.

🧠 GPT-4o Model Configuration – Sets up the Azure OpenAI GPT-4o model to generate motivational and contextual Slack messages.

🤖 AI-Generated Motivational Slack Messages – Uses LangChain + GPT-4o to create energetic, emoji-filled messages that celebrate rep achievements and encourage improvement.
💬 Notify Sales Team in Slack – Sends the AI-generated performance summaries and motivational blurbs directly to each rep or the team Slack channel for transparency and engagement.

🚨 Log Fetch or Validation Errors (Error Handling) – Records any fetch or validation failures in the Google Sheets "error log sheet" for easy review and error management.

Prerequisites

- HighLevel CRM API credentials
- Google Sheets for "Error Log" tracking
- Notion API integration for dashboards
- Azure OpenAI (GPT-4o) credentials
- Slack API connection for notifications

Key Benefits

✅ Fully automated daily performance tracking
✅ Personalized AI-powered motivation in Slack
✅ Transparent visibility for managers and reps
✅ Improved accountability and sales performance
✅ Seamless integration across CRM, Notion, and Slack

👥 Perfect For

- Sales teams seeking real-time motivation and transparency
- Managers who want automated performance dashboards
- Teams using HighLevel CRM and Slack for daily operations
- Companies aiming to gamify sales productivity
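The per-rep aggregation can be sketched as a Code-node function over the cleaned deals. The deal shape (`repId`, `value`, `status`) follows the normalized schema named in the cleaning step; treating `status === "won"` as a won deal is an assumption:

```javascript
// Group cleaned deals by rep and compute the four summary metrics.
function summarizeByRep(deals) {
  const byRep = {};
  for (const d of deals) {
    const s = (byRep[d.repId] ??= { totalDeals: 0, totalValue: 0, dealsWon: 0 });
    s.totalDeals += 1;
    s.totalValue += d.value;
    if (d.status === "won") s.dealsWon += 1;
  }
  for (const s of Object.values(byRep)) {
    s.avgDealValue = s.totalValue / s.totalDeals;
  }
  return byRep;
}
```

Each summary object then becomes one Notion dashboard entry and one input to the GPT-4o prompt.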
by Jitesh Dugar
Automated Invoice Intelligence: PDF-to-JSON Financial Orchestrator

🎯 Description

This is an enterprise-grade solution for Accounts Payable and Finance Ops teams. It automates high-volume extraction of unstructured data from PDF invoices using the HTML to PDF (Parse PDF to JSON) node, transforming raw email attachments into validated, audit-ready financial records across multiple platforms.

✨ The Processing Lifecycle

1. Intelligent Intake & Validation – Monitors Gmail for incoming invoices. A strict validation layer ensures only valid PDF binaries enter the pipeline, filtering out noise and non-invoice attachments.
2. Atomic JSON Transformation – Leverages the HTML to PDF (Parse PDF to JSON) node to decompose monolithic PDF files into structured data objects, so the system can treat document text as actionable data points.
3. AI Ledger Mapping – A hybrid-intelligence Code node acts as a virtual "Financial Controller." It uses advanced pattern matching to extract key fields (Vendor Name, Invoice Number, Tax, and Total Amount) and calculates a Confidence Score for every entry.
4. Conditional Approval Gating – Implements a "Trust-but-Verify" architecture:
   - Auto-Process: High-confidence extractions of standard-value invoices are posted immediately.
   - Human-in-the-Loop (HITL): Low-confidence results (<0.7) or high-value invoices (>$5,000) are automatically diverted to a Review Queue in Google Sheets.
5. Multi-Platform Ledger Sync – Simultaneously synchronizes data across Google Sheets (for real-time reporting) and Airtable (for project management), ensuring a single source of truth.
6. Forensic Archival & Alerts – Moves original files to a secure Google Drive archive. High-priority items trigger instant Slack and Gmail alerts for the finance team.

💡 Key Technical Features

- **Heuristic Pattern Matching**: Dynamically handles various invoice layouts by analyzing financial context and keywords.
- **Math Integrity Check**: Automatically verifies that (Subtotal + Tax = Total) to boost extraction confidence.
- **Bi-Directional Governance**: Tracks every document with a unique Processing ID and metadata log for regulatory compliance.

🚀 Benefits

✅ 95% Reduction in Data Entry – Shifts human effort from manual typing to high-level oversight.
✅ Financial Risk Mitigation – Automatically flags high-value transactions and extraction anomalies before they hit the books.
✅ Real-Time Visibility – Instant updates to financial dashboards the moment an invoice is received.

Tags: #finance #accounts-payable #pdf-to-json #automation #fintech #google-sheets #airtable
Category: Finance & Operations | Difficulty: Advanced
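The math-integrity check and the approval gate can be sketched together. The thresholds (<0.7 confidence, >$5,000 value) come from the description; the extracted-invoice field names and the size of the confidence boost are illustrative assumptions:

```javascript
// Subtotal + Tax should equal Total (within a rounding tolerance).
function checkMathIntegrity(inv, tolerance = 0.01) {
  return Math.abs(inv.subtotal + inv.tax - inv.total) <= tolerance;
}

// "Trust-but-Verify": low confidence or high value goes to the review queue.
function routeInvoice(inv) {
  let confidence = inv.confidence;
  if (checkMathIntegrity(inv)) confidence = Math.min(1, confidence + 0.1); // assumed boost
  const needsReview = confidence < 0.7 || inv.total > 5000;
  return needsReview ? "review_queue" : "auto_process";
}
```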
by noda
Price Anomaly Detection & News Alert (Marketstack + HN + DeepL + Slack)

Overview

This workflow monitors a stock's closing price via Marketstack. It computes a 20-day moving average and standard deviation (±2σ). If the latest close is outside ±2σ, it flags an anomaly, fetches related headlines from Hacker News, translates them to Japanese with DeepL, and posts both the original and translated text to Slack. When no anomaly is detected, it sends a concise "normal" report.

How it works

1) Daily trigger at 09:00 JST
2) Marketstack: fetch EOD data
3) Code: compute mean/σ and classify (normal/high/low)
4) IF: anomaly? → yes = news path / no = normal report
5) Hacker News: search related items
6) DeepL: translate EN → JA
7) Slack: send bilingual notification

Requirements

- Marketstack API key
- DeepL API key
- Slack OAuth2 (bot token / channel permission)

Notes

- Edit the ticker in Get Stock Data.
- Adjust N (days) and k (sigma multiplier) in Calculate Deviation.
- Keep credentials out of HTTP nodes (use n8n Credentials).
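The classification in step 3 can be sketched as follows, matching the N = 20, k = 2 defaults mentioned in the Notes. This is a minimal illustration of the Code node's logic, assuming `closes` is a chronological array of EOD closing prices ending with the latest close:

```javascript
// Classify the latest close against a ±k·sigma band around the N-day mean.
function classifyClose(closes, n = 20, k = 2) {
  const window = closes.slice(-n - 1, -1); // the N closes before the latest one
  const latest = closes[closes.length - 1];
  const mean = window.reduce((a, b) => a + b, 0) / window.length;
  const variance = window.reduce((a, b) => a + (b - mean) ** 2, 0) / window.length;
  const sigma = Math.sqrt(variance);
  if (latest > mean + k * sigma) return "high";
  if (latest < mean - k * sigma) return "low";
  return "normal";
}
```

"high" and "low" feed the anomaly/news path in the IF node; "normal" triggers the concise report.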
by Rahul Joshi
Description

Automatically detect customer churn risks from Zendesk tickets, log them into Google Sheets for tracking, and send instant Slack alerts to your customer success team. This workflow helps you spot unhappy customers early and take proactive action to reduce churn. 🚨📊💬

What This Template Does

- Fetches Zendesk tickets daily on schedule (8:00 PM). ⏰
- Processes and formats ticket data into clean JSON (priority, age, urgency). 🧠
- Identifies churn risks based on negative satisfaction ratings. ⚠️
- Logs churn risk tickets into Google Sheets for analysis and reporting. 📈
- Sends formatted Slack alerts with ticket details to the CS team channel. 📢

Key Benefits

- Detects unhappy customers before they churn. 🚨
- Centralized churn tracking for reporting and team reviews. 🧾
- Proactive alerts to reduce response delays. ⏱️
- Clean, structured ticket data for analytics and filtering. 🔄
- Strengthens customer success strategy with real-time visibility. 🌐

Features

- Schedule Trigger – Runs every weekday at 8:00 PM. 🗓️
- Zendesk Integration – Fetches all tickets automatically. 🎫
- Smart Data Processing – Adds ticket age, urgency, and priority mapping. 🧮
- Churn Risk Filter – Flags tickets with negative satisfaction scores. 🚩
- Google Sheets Logging – Saves churn risk details with metadata. 📊
- Slack Alerts – Sends formatted messages with ID, subject, rating, and action steps. 💬

Requirements

- n8n instance (cloud or self-hosted).
- Zendesk API credentials with ticket read access.
- Google Sheets OAuth2 credentials with write permissions.
- Slack Bot API credentials with channel posting permissions.
- Pre-configured Google Sheet for churn risk logging.

Target Audience

- Customer Success teams monitoring churn risk. 👩💻
- SaaS companies tracking customer health. 🚀
- Support managers who want proactive churn alerts. 🛠️
- SMBs improving retention through automation. 🏢
- Remote CS teams needing instant notifications. 🌐

Step-by-Step Setup Instructions

1. Connect your Zendesk, Google Sheets, and Slack credentials in n8n. 🔑
2. Update the Schedule Trigger (default: daily at 8:00 PM) if needed. ⏰
3. Replace the Google Sheet ID with your churn risk tracking sheet. 📊
4. Confirm the Slack channel ID for alerts (default: zendesk-churn-alerts). 💬
5. Adjust the churn filter logic (default: satisfaction_score = "bad"). 🎯
6. Run a test to fetch Zendesk tickets and validate Sheets + Slack outputs. ✅
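The processing and churn-filter steps can be sketched together as one function over a raw Zendesk ticket. The "bad" satisfaction default comes from the setup instructions; the 7-day urgency threshold and the exact ticket field paths are illustrative assumptions:

```javascript
// Enrich one ticket with age/urgency and flag churn risk from its rating.
function processTicket(ticket, now = new Date()) {
  const ageDays = (now - new Date(ticket.created_at)) / 864e5; // ms per day
  return {
    id: ticket.id,
    subject: ticket.subject,
    priority: ticket.priority ?? "normal",
    ageDays: Math.floor(ageDays),
    urgency: ageDays > 7 ? "high" : "normal", // assumed threshold
    churnRisk: ticket.satisfaction_rating?.score === "bad",
  };
}
```

Tickets where `churnRisk` is true are the ones logged to Google Sheets and sent to the Slack channel.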
by 福壽一貴
**Who is this for?**

- **Dream journaling enthusiasts** who want to visualize and record their dreams
- **Self-improvement practitioners** interested in dream analysis and psychology
- **Content creators** looking for unique, AI-generated dream-based content
- **Wellness coaches and therapists** who use dream work with clients

**What it does**

1. Receives dream descriptions via Telegram bot commands
2. Parses the visual style selection from 8 options (cinematic, ghibli, surreal, vintage, horror, abstract, watercolor, cyberpunk)
3. Analyzes the dream using AI to extract themes, symbols, and psychological meaning
4. Generates an optimized video prompt tailored to the selected style, with audio descriptions
5. Creates an AI video with native audio using Google Veo3 (a single API call)
6. Logs the dream to Google Sheets as a searchable dream journal
7. Sends the video and analysis back to the user via Telegram

**How to set up**

Estimated setup time: 15 minutes

Step 1: Create a Telegram bot
1. Message @BotFather on Telegram
2. Send /newbot and follow the prompts
3. Copy the API token

Step 2: Get a fal.ai API key
1. Sign up at fal.ai
2. Generate an API key from the dashboard
3. In n8n, create a Header Auth credential:
   - Name: Authorization
   - Value: Key YOUR_FAL_API_KEY

Step 3: Get an OpenRouter API key
1. Sign up at openrouter.ai
2. Generate an API key
3. Add it to n8n as an OpenRouter credential

Step 4: Set up Google Sheets (optional)
1. Create a new spreadsheet with the columns: Timestamp, Username, Style, Dream, Theme, Emotion, Type, Meaning, Video URL
2. Connect a Google Sheets credential in n8n
3. Select your document and sheet in the "Log to Google Sheets" node

Step 5: Connect credentials
- Add the Telegram credential to all Telegram nodes
- Add the fal.ai Header Auth credential to both HTTP Request nodes
- Add the OpenRouter credential to the LLM node

**Requirements**

| Service | Purpose | Cost |
|---------|---------|------|
| Telegram Bot | User interface | Free |
| fal.ai (Veo3) | Video + audio generation | ~$0.10-0.15/video |
| OpenRouter | LLM for dream analysis | ~$0.01-0.03/request |
| Google Sheets | Dream journal storage | Free |

**How to customize**

- **Change the LLM**: Replace OpenRouter with OpenAI, Anthropic, or another provider
- **Add styles**: Edit the STYLES object in the "Parse Dream Command" node
- **Modify the analysis**: Edit the system prompt in the "AI Dream Analyzer Agent" node
- **Change the video model**: Replace the Veo3 URL with Kling, Luma, or another fal.ai model
- **Skip logging**: Remove or disable the Google Sheets node

**Commands**

| Command | Description |
|---------|-------------|
| /dream [text] | Generate a video in the default cinematic style |
| /dream [style] [text] | Generate a video in a specific style |
| /styles | Show all available styles |

**Example**

Input: `/dream ghibli I was flying over a forest where the trees had glowing leaves`

Output: An 8-second AI video with magical Ghibli-style visuals and an ambient soundtrack, plus a psychological analysis of flight symbolism and nature-connection themes.
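The command parsing described above can be sketched in a few lines of JavaScript, as it might appear in an n8n Code node. This is a hypothetical reconstruction of the "Parse Dream Command" logic: the eight style keys come from the template description, but the function name and exact behavior are assumptions, not the workflow's actual node code.

```javascript
// Hypothetical sketch of the "Parse Dream Command" node logic.
// Style keys are taken from the template description; everything else
// is an illustrative assumption.
const STYLES = [
  'cinematic', 'ghibli', 'surreal', 'vintage',
  'horror', 'abstract', 'watercolor', 'cyberpunk',
];

function parseDreamCommand(text) {
  // Expect "/dream [style] [text]"; the style is optional and
  // defaults to cinematic, matching the Commands table above.
  const match = text.trim().match(/^\/dream\s+(.+)$/s);
  if (!match) return null;

  const [firstWord, ...rest] = match[1].split(/\s+/);
  if (STYLES.includes(firstWord.toLowerCase())) {
    return { style: firstWord.toLowerCase(), dream: rest.join(' ') };
  }
  return { style: 'cinematic', dream: match[1] };
}

console.log(parseDreamCommand('/dream ghibli I was flying over a forest'));
// → { style: 'ghibli', dream: 'I was flying over a forest' }
```

Because the first word is only treated as a style when it matches the STYLES list, adding a new style (as suggested under "How to customize") is a one-line change.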
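The Header Auth credential from Step 2 is what the HTTP Request nodes attach when calling fal.ai. A minimal sketch of how such a request could be assembled is below; note that the model path (`fal-ai/veo3`) and the `prompt` field are assumptions based on fal.ai's typical queue API shape, so check the model's page on fal.ai for the exact endpoint and input schema.

```javascript
// Sketch of the request an HTTP Request node might send to fal.ai.
// The endpoint path and body schema are assumptions; only the
// "Authorization: Key ..." header format is taken from Step 2 above.
function buildVeo3Request(videoPrompt, apiKey) {
  return {
    method: 'POST',
    url: 'https://queue.fal.run/fal-ai/veo3', // hypothetical model path
    headers: {
      Authorization: `Key ${apiKey}`, // matches the Header Auth credential
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ prompt: videoPrompt }),
  };
}
```

Keeping the request construction in one place makes "Change the video model" a matter of swapping the URL, as the customization list suggests.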
by Rahul Joshi
📘 Description

This workflow automates guest upsell discovery and recommendation for hotels by combining Google Sheets, OpenAI, and Slack. It is designed to help hospitality teams proactively identify the single best upsell opportunity for each guest, before arrival or during the stay, without manual analysis or guesswork.

The workflow runs on a fixed daily schedule and reads guest records from Google Sheets, which acts as the operational source of truth. Guests are automatically categorized based on stay status (upcoming arrivals vs. currently checked in). For each guest, relevant context such as room type, repeat status, spend level, preferences, and special occasions is prepared and passed to an AI engine. The AI deterministically recommends one high-confidence upsell (e.g., room upgrade, airport pickup, spa, dining, or experience), returns a structured JSON response, and explains the reasoning behind the recommendation. The selected upsell type is written back to the spreadsheet for tracking, and a clear, actionable Slack notification is sent to the team so they can act immediately. Any workflow failure triggers a Slack alert, ensuring reliability and operational visibility.

⚙️ What This Workflow Does (Step-by-Step)

1. ⏰ **Daily Scheduled Trigger**: Runs automatically every day at a fixed time.
2. 📊 **Read Guest Data from Google Sheets**: Fetches all guest records from the central spreadsheet.
3. 🔀 **Split by Stay Status**: Routes guests into two paths: before arrival or during stay.
4. 🎯 **Prepare Guest Context**: Extracts guest attributes (room type, spend level, preferences, occasion, stay phase).
5. 🤖 **AI Upsell Recommendation**: Uses OpenAI to recommend one best upsell per guest and return structured JSON.
6. 🧹 **Parse AI Response**: Cleans and validates the AI output to ensure reliability.
7. 💾 **Update Guest Record**: Writes the selected upsell type back into Google Sheets.
8. 💬 **Notify Team in Slack**: Posts a formatted upsell notification with reasoning for immediate action.
9. 🚨 **Error Handling → Slack Alert**: Sends an instant alert if any step in the workflow fails.

🧩 Prerequisites

- Google Sheets OAuth2 credential (read/write)
- OpenAI API key (GPT-4o-mini)
- Slack API credentials
- Self-hosted n8n recommended

💡 Key Benefits

✔ Identifies high-value upsell opportunities automatically
✔ Context-aware AI recommendations (not generic offers)
✔ Clear before-arrival vs. during-stay logic
✔ Real-time team visibility via Slack
✔ Centralized tracking in Google Sheets
✔ Built-in error monitoring

👥 Perfect For

- Hotels and resorts
- Revenue and upsell teams
- Front-desk and concierge teams
- Hospitality operators focused on guest experience and ARPU growth
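The "Split by Stay Status" routing described above amounts to comparing today's date against each guest's stay window. A minimal sketch follows; the field names (`check_in`, `check_out`) are assumed spreadsheet columns, not necessarily the ones the workflow actually uses.

```javascript
// Hypothetical sketch of the "Split by Stay Status" routing.
// Column names check_in/check_out are illustrative assumptions.
function classifyStayStatus(guest, today = new Date()) {
  const checkIn = new Date(guest.check_in);
  const checkOut = new Date(guest.check_out);
  if (today < checkIn) return 'before_arrival';   // upcoming arrival path
  if (today < checkOut) return 'during_stay';     // currently checked in
  return 'checked_out';                           // no upsell path
}
```

In n8n this branching would typically live in a Switch or If node; the point is that the before-arrival vs. during-stay distinction is purely date arithmetic on the sheet data.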
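The "Parse AI Response" step matters because LLMs sometimes wrap JSON in markdown fences or surround it with prose. A sketch of the kind of cleaning and validation involved is below; the field names (`upsell_type`, `reasoning`) are assumptions inferred from the description (one upsell plus its reasoning), not the workflow's actual schema.

```javascript
// Hypothetical sketch of the "Parse AI Response" step: strip markdown
// fences, isolate the JSON object, parse it, and validate the fields
// downstream nodes depend on. Field names are illustrative assumptions.
function parseUpsellResponse(raw) {
  const cleaned = raw.replace(/```(?:json)?/g, '').trim();
  const start = cleaned.indexOf('{');
  const end = cleaned.lastIndexOf('}');
  if (start === -1 || end === -1) {
    throw new Error('No JSON object found in AI response');
  }

  const parsed = JSON.parse(cleaned.slice(start, end + 1));

  for (const key of ['upsell_type', 'reasoning']) {
    if (!(key in parsed)) throw new Error(`Missing required field: ${key}`);
  }
  return parsed;
}
```

Throwing on malformed output is deliberate: a failed parse then flows into the workflow's error path and triggers the Slack alert instead of silently writing bad data back to the sheet.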