by Ahmad Bukhari
**Who is this for?**

This workflow is built for n8n admins, automation agencies, solopreneurs, and ops teams running multiple workflows in production who need to know the moment something breaks. If you're manually checking your n8n execution logs every day to catch failures (or worse, finding out about broken workflows days later), this template gives you real-time monitoring with zero effort.

**What problem does this solve?**

- Failed workflows go unnoticed for hours or days because nobody checks the execution log
- When you do find a failure, you have to dig through execution data to figure out what went wrong
- No centralized error history: you can't spot patterns or recurring issues
- Alert fatigue from generic monitoring tools that don't tell you why something failed or how to fix it

**What this workflow does**

This workflow monitors your entire n8n instance for failed executions and handles the full error lifecycle automatically:

- **Continuous monitoring** runs every minute on a schedule (or on-demand via webhook)
- **Smart filtering** only processes failures from the last 5 minutes and excludes its own executions to prevent alert loops
- **Automatic error classification** categorizes every failure into one of 7 types: Auth Error, Rate Limit, Network, Data/Config Error, Not Found, Server Error, and Permission
- **Severity assignment** tags each error as 🔴 Critical, 🟠 High, or 🟡 Medium
- **Suggested fixes** generates an actionable fix suggestion for each error category (e.g., "Re-authenticate credential for: [node name]")
- **Google Sheets logging** appends a detailed row to your error log with timestamp, workflow name, error category, severity, message, suggested fix, and retry status
- **Color-coded Slack alerts** sends a formatted message to your alerts channel with the workflow name, failed node, error type, error message, suggested fix, and clickable links to the workflow and execution

**Setup**

Credentials needed:

- **n8n API Key**: HTTP Header Auth credential with header name X-N8N-API-KEY
- **Google Sheets**: OAuth credentials
- **Slack**: OAuth credentials

Configuration:

1. Create an API key in your n8n instance (Settings → API)
2. Add it as an HTTP Header Auth credential in n8n with the header name X-N8N-API-KEY
3. Open the Get Failures and Get Execution Detail nodes and replace YOUR-N8N-INSTANCE-URL with your actual n8n domain (e.g., n8n.yourcompany.com)
4. Create a Google Sheet with these columns: Timestamp, Workflow Name, Workflow ID, Execution ID, Failed Node, Error Category, Severity, Error Message, Suggested Fix, Retryable, Status, Notes
5. Open the Log to Sheet node and update the spreadsheet ID
6. Open the Slack Alert node and set the channel to your alerts channel (e.g., #n8n-errors)

**Test it**

Manually trigger the webhook, or intentionally break a test workflow and wait one minute for the scheduled check. You should see a Slack alert and a new row in your Google Sheet.
**How to customize this workflow**

- **Different alerting channel?** Replace or duplicate the Slack node to send alerts to Microsoft Teams, Discord, email, or PagerDuty
- **Auto-retry?** Add a retry branch after the Classify Error node that automatically re-runs retryable executions (Rate Limit, Network, Server Error) via the n8n API
- **Different logging?** Replace the Google Sheets node with Airtable, Notion, or a database insert for more structured tracking
- **Adjust the schedule?** Change the Schedule Trigger from every minute to every 5 minutes, 15 minutes, or hourly depending on your needs
- **Add more error categories?** Extend the classification logic in the Classify Error code node to handle domain-specific errors (e.g., Stripe payment failures, Shopify API errors); a sketch of that logic follows below
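For reference, here is a minimal sketch of what such classification logic might look like inside a Code node. The seven category names come from the template description above; the match patterns, severities, and fix texts are illustrative assumptions, not the template's actual code.

```javascript
// Minimal Classify Error sketch. Category names follow the template
// description; patterns, severities, and fixes are assumptions.
function classifyError(message) {
  const msg = (message || '').toLowerCase();
  const rules = [
    { test: /401|unauthorized|invalid.*credential/, category: 'Auth Error',   severity: '🔴 Critical', fix: 'Re-authenticate credential for the failed node', retryable: false },
    { test: /403|forbidden/,                        category: 'Permission',   severity: '🔴 Critical', fix: 'Check API scopes and account permissions',       retryable: false },
    { test: /429|rate limit|too many requests/,     category: 'Rate Limit',   severity: '🟠 High',     fix: 'Add a Wait node or reduce polling frequency',    retryable: true  },
    { test: /econnrefused|etimedout|network|socket/,category: 'Network',      severity: '🟠 High',     fix: 'Verify endpoint availability; retry later',     retryable: true  },
    { test: /404|not found/,                        category: 'Not Found',    severity: '🟡 Medium',   fix: 'Confirm the resource ID or URL in the node',    retryable: false },
    { test: /5\d\d|internal server/,                category: 'Server Error', severity: '🟠 High',     fix: 'Upstream issue; safe to retry',                  retryable: true  },
  ];
  const hit = rules.find(r => r.test.test(msg));
  // Anything unmatched falls through to the Data/Config bucket,
  // so every failure gets exactly one of the seven categories.
  return hit || { category: 'Data/Config Error', severity: '🟡 Medium', fix: 'Inspect node input data and parameters', retryable: false };
}
```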
by Rahul Joshi
**Description**

This workflow automates the evaluation of interviewer feedback using AI. It retrieves raw notes from Google Sheets, processes them through GPT-4o-mini for structured scoring, validates outputs, and calculates weighted quality scores. The system provides real-time Slack feedback to interviewers, logs AI errors for transparency, and recommends training if the feedback quality is low.

**What This Template Does (Step-by-Step)**

1. ⚡ **Manual Trigger** – Runs the workflow manually to start evaluation.
2. 📋 **Fetch Raw Feedback Data (Google Sheets)** – Reads all feedback entries (Role, Stage, Interviewer Email, Feedback Text, row_number).
3. 🧠 **AI Quality Evaluator (Azure GPT-4o-mini)** – Processes feedback into structured JSON across 5 dimensions.
4. 🔍 **Analyze Feedback Quality (LLM Chain)** – Applies scoring rules (Specificity, STAR, Bias-Free, Actionability, Depth) and outputs structured JSON.
5. ✅ **Validate AI Response** – Ensures AI output isn't undefined or malformed.
6. 🚨 **Log AI Errors (Google Sheets)** – Records invalid AI responses for debugging and auditing.
7. 🔄 **Parse AI JSON Output (Code Node)** – Converts AI JSON text into structured n8n objects with error handling.
8. 🧮 **Calculate Weighted Quality Score (Code Node)** – Computes the final weighted score (0–100), generates flags, formats vague phrases, and preserves context (see the sketch at the end of this section).
9. 💾 **Save Scores to Spreadsheet (Google Sheets)** – Updates the original feedback row with Score, Flags, and AI JSON.
10. 💬 **Send Feedback Summary to Interviewer (Slack)** – Sends interviewers a structured Slack report (score, flags, vague phrases, STAR improvement tips).
11. 🎯 **Check if Training Needed** – Applies threshold logic: if score < 50, route to training recommendations.
12. 📚 **Send Training Recommendations (Slack)** – Delivers STAR method guides and bias-free interviewing resources to low scorers.

**Prerequisites**

- Google Sheets (Raw_Feedback + Error Log sheet)
- Azure OpenAI API credentials (for GPT-4o-mini)
- Slack API credentials (for sending feedback & training notifications)
- n8n instance (cloud or self-hosted)

**Key Benefits**

✅ Automated interview feedback quality scoring
✅ Bias detection and vague feedback flagging
✅ Real-time Slack feedback to interviewers
✅ Error logging for AI reliability tracking
✅ Training recommendations for low scorers
✅ Audit trail maintained in Google Sheets

**Perfect For**

- HR & recruitment teams ensuring structured interviewer feedback
- Organizations enforcing the STAR method & bias-free hiring
- Teams seeking continuous interviewer coaching
- Companies needing audit-ready records of interview quality
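To make the scoring step concrete, here is a minimal sketch of a weighted-score Code node. The template does not publish its actual weights, flag rules, or threshold internals, so treat every constant below as a placeholder; only the five dimension names and the score-below-50 training rule come from the description above.

```javascript
// Hypothetical weighted-score calculation across the five dimensions.
// Weights and flag thresholds are illustrative assumptions.
const WEIGHTS = { specificity: 0.25, star: 0.25, biasFree: 0.2, actionability: 0.2, depth: 0.1 };

function weightedScore(dims) {
  // dims: each dimension assumed scored 0 to 10 by the LLM chain
  const raw = Object.entries(WEIGHTS)
    .reduce((sum, [key, w]) => sum + w * (dims[key] ?? 0), 0);
  const score = Math.round(raw * 10); // scale to the 0 to 100 range

  const flags = [];
  if ((dims.biasFree ?? 10) < 5) flags.push('possible-bias');
  if ((dims.specificity ?? 10) < 5) flags.push('vague-feedback');

  // The score < 50 training threshold comes from the workflow description.
  return { score, flags, needsTraining: score < 50 };
}
```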
by n8n Automation Expert | Template Creator | 2+ Years Experience
🎯 **What This Workflow Does**

Transform your digital payment business with a fully featured Telegram bot that handles everything from product listings to transaction processing. Perfect for entrepreneurs looking to automate their PPOB (mobile credit, data packages, bill payments) business operations without coding expertise.

✨ **Key Features**

📱 **Complete Transaction Management**

- **Prepaid Services**: Mobile credit, data packages, PLN tokens
- **Gaming**: Game vouchers for popular platforms
- **E-Wallet**: OVO, DANA, GoPay, ShopeePay top-ups
- **Bill Payments**: PLN postpaid, Telkom, cable TV, internet, credit cards

💰 **Smart Business Operations**

- Real-time balance checking with low-balance alerts
- Automated transaction processing with MD5 security
- Interactive product catalog with categorized browsing
- Transaction history and status tracking
- Deposit request management

🤖 **User-Friendly Interface**

- Intuitive inline keyboard navigation
- Multi-step transaction flows with validation
- Comprehensive error handling and user feedback
- Professional messaging with emojis and formatting

🛠️ **Technical Highlights**

Robust Architecture:

- **Switch-based routing** for efficient command handling
- **MD5 signature authentication** for secure API communications (see the sketch at the end of this section)
- **Session management** for multi-step user interactions
- **Comprehensive error handling** with user-friendly messages

API Integrations:

- **Digiflazz API**: Balance checking, product listings, transactions, bill inquiries
- **Telegram Bot API**: Message handling, inline keyboards, callback queries
- **Secure credential management** with environment variables

📋 **Setup Requirements**

Prerequisites:

- Active Digiflazz account with API credentials
- Telegram Bot Token from @BotFather
- n8n instance (cloud or self-hosted)

Environment Variables:

- DIGIFLAZZ_USERNAME=your_digiflazz_username
- DIGIFLAZZ_API_KEY=your_digiflazz_api_key

🎮 **How to Use**

Customer Commands:

- /start - Welcome message and main menu
- /menu - Access main navigation
- /balance - Check account balance
- /products - Browse product catalog
- /topup - Process prepaid transactions
- /checkbill - Inquire about postpaid bills
- /paybill - Pay postpaid services
- /deposit - Request balance deposit
- /history - View transaction history

Business Features:

- **Automated balance monitoring** with threshold alerts
- **Product categorization** for easy browsing
- **Transaction confirmation** with detailed receipts
- **Multi-payment type support** across various service providers

🔒 **Security & Compliance**

- **MD5 signature verification** for all API calls
- **Input validation** and sanitization
- **Session timeout management**
- **Error logging** and monitoring
- **HTTPS-only communications**

💡 **Business Benefits**

For PPOB Entrepreneurs:

- **Reduce manual work** by 90% through automation
- **24/7 customer service** without human intervention
- **Professional presentation** builds customer trust
- **Scalable operations** handle unlimited transactions

For Customers:

- **Instant transactions** with real-time confirmations
- **Easy navigation** through intuitive menus
- **Multiple service options** in one convenient bot
- **Reliable service** with comprehensive error handling

📊 **Performance Features**

- **Sub-second response times** for balance checks
- **Concurrent transaction processing**
- **Automatic retry logic** for failed operations
- **Detailed logging** for business analytics

🎯 **Perfect For**

- **Digital payment entrepreneurs** starting PPOB businesses
- **Existing businesses** looking to automate customer service
- **Resellers** wanting professional transaction interfaces
- **Developers** seeking proven automation templates

📱 **Supported Services**

Prepaid Products:

- Mobile credit (all Indonesian operators)
- Data packages and internet vouchers
- PLN electricity tokens
- Game vouchers (Mobile Legends, Free Fire, PUBG, etc.)

Postpaid Services:

- PLN electricity bills
- Telkom phone bills
- Cable TV subscriptions (First Media, MNC, etc.)
- Internet service providers
- Credit card payments
- Multifinance installments

🚀 **Getting Started**

1. Import the workflow JSON into your n8n instance
2. Configure Telegram and Digiflazz credentials
3. Set up environment variables
4. Activate the workflow
5. Test with your Telegram bot
6. Start serving customers immediately!

💎 **Premium Features**

- **Comprehensive documentation** with setup guides
- **Error handling** for all edge cases
- **Professional UI/UX** design
- **Scalable architecture** for business growth
- **Community support** and updates

Transform your digital payment business today with this production-ready Telegram bot automation. No coding required: just configure and launch! Perfect for the Indonesian PPOB market with full Digiflazz integration and professional customer experience.
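As an illustration of the MD5 signing mentioned above, the sketch below follows Digiflazz's commonly documented convention of hashing username + API key + reference ID. Verify the exact field names and concatenation order against the current Digiflazz API docs before relying on it; the SKU and customer number are placeholders.

```javascript
// Sketch of MD5 request signing for a Digiflazz-style API call.
// Concatenation order (username + apiKey + refId) is an assumption
// based on Digiflazz's published convention; verify before use.
const crypto = require('crypto');

function signRequest(username, apiKey, refId) {
  return crypto.createHash('md5')
    .update(username + apiKey + refId)
    .digest('hex');
}

// Example: building a top-up payload inside an n8n Code node.
const payload = {
  username: process.env.DIGIFLAZZ_USERNAME,
  buyer_sku_code: 'PLN20',            // illustrative SKU
  customer_no: '08123456789',         // placeholder customer number
  ref_id: `trx-${Date.now()}`,        // unique reference per transaction
};
payload.sign = signRequest(payload.username, process.env.DIGIFLAZZ_API_KEY, payload.ref_id);
```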
by malcolm
**Inspiration & Notes**

This workflow was born out of a very real problem. While writing a book, I found the process of discovering suitable literary agents and managing outreach to be manual and surprisingly difficult to scale. Researching agents, checking submission rules, personalizing emails, tracking submissions, and staying organized quickly became a full-time job on its own.

So instead of doing it manually, I automated it. I built this entire workflow in 3 days, and the goal of publishing it is to show that you can do the same. With the right structure and intent, complex sales and marketing workflows don't have to take months to build.

**Contact & Collaboration**

If you have questions, business inquiries, or would like help setting up automation workflows, feel free to reach out:

📩 malcolm95authoring@gmail.com

I genuinely enjoy designing workflows and automation systems, especially when they support meaningful projects. I work primarily from interest and impact rather than purely financial motivation. Whether I take on a project for free or paid depends on the following:

- I LOVE setting up workflows and automation.
- I work for meaningfulness, not for money.
- I may do the work for **free**, depending on how meaningful the project is. If the problem statement matters, the motivation follows.
- It also depends on the **value I bring to the table**: if I can contribute significant value through system design, I'm more inclined to get involved.

If you're building something thoughtful and need help automating it, I'm always happy to have a conversation. Enjoy~!

**0. Overview**

Automates the end-to-end literary agent outreach pipeline, from data ingestion and eligibility filtering to deep agent research, personalized email generation, submission tracking, and analytics.

**Architecture**

The system is modular and organized into four logical domains, each operating independently and passing structured data downstream:

Data Engineering → Marketing & Research → Sales (Outreach) → Data Analysis

**1. Data Engineering**

Purpose: Ingest and normalize agent data from multiple sources into a single source of truth.

Inputs:

- Google BigQuery
- Azure Blob Storage
- AWS S3
- Google Sheets
- (Optional) HTTP sources

Key Steps:

- Scheduled ingestion trigger
- Merge and normalize heterogeneous data formats (CSV, tables)
- Deduplication and validation
- AI-assisted enrichment for missing metadata
- Append-only writes to a central Google Sheet

Output: Clean, normalized agent records ready for eligibility evaluation

**2. Marketing & Research**

Purpose: Decide who to contact and how to personalize outreach.

Eligibility Evaluation: an AI agent evaluates each record against strict rules:

- Email submissions enabled
- Not QueryTracker-only or QueryManager-only
- Genre fit (e.g. Memoir, Spiritual, Self-help, Psychology, Relationships, Family)

Outputs (see the validation sketch at the end of this section):

- send_email (boolean)
- reason (auditable explanation)

Deep Research, for eligible agents only:

- Public research from agency sites, interviews, Manuscript Wish List, and LinkedIn (if public)
- Extracts: professional background, editorial interests, genres represented, notable clients/books (if publicly listed), public statements, and source-backed personalization angles

Strict rule: all claims must be explicitly cited; no inference or hallucination is allowed.

**3. Sales (Outreach)**

Purpose: Execute personalized email outreach and maintain clean submission tracking.

Steps:

- AI generates agent-specific email copy
- Copy is normalized for tone and clarity
- Email is sent (e.g. Gmail)
- Submission metadata is logged: Submission Completed, Submission Timestamp, Channel used

Result: Consistent, traceable outreach with CRM-style hygiene

**4. Data Analysis**

Purpose: Measure pipeline health and outreach effectiveness.

Features:

- Append-only decision and submission logs
- QuickChart visualizations for fast validation (e.g. TRUE vs FALSE completion rates)
- Optional integration with Power BI and Google Analytics 4

Supports:

- Completion rate analysis
- Funnel tracking
- Source/platform performance
- Decision auditing

**Design Principles**

- **Separation of concerns** (ingestion ≠ decision ≠ outreach ≠ analytics)
- **AI with hard guardrails** (strict schemas, source-only facts)
- **Append-only logging** (analytics-safe, debuggable)
- **Modular & extensible** (plug-and-play data sources)
- **Human-readable + machine-usable outputs**

**Constraints & Notes**

- Only public, professional information is used
- No private or speculative data
- HTTP scraping avoided unless necessary
- Power BI Embedded is not required
- Workflow designed and implemented end-to-end in ~3 days

**Use Cases**

- Marketing: audience discovery, agent segmentation, personalization at scale, campaign readiness, funnel automation
- Sales: lead qualification, deduplication, outreach execution, status tracking, pipeline hygiene

**Tech Stack**

- **Automation:** n8n
- **AI:** OpenAI (GPT)
- **Scripting:** JavaScript
- **Data Stores:** Google Sheets
- **Email:** Gmail
- **Visualization:** QuickChart
- **BI (optional):** Power BI, Google Analytics 4
- **Cloud Sources:** AWS S3, Azure Blob, BigQuery

**Status**

This workflow is production-ready, modular, and designed for extension into other sales or marketing domains beyond literary outreach.
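As a concrete example of the "AI with hard guardrails" principle, here is a minimal validation sketch for the eligibility agent's output. Only the send_email and reason field names come from the description above; everything else is an assumed implementation.

```javascript
// Illustrative guardrail: validate the eligibility agent's JSON before
// any record moves downstream. Field names follow the workflow's schema
// (send_email, reason); the rest is an assumption.
function validateDecision(raw) {
  let parsed;
  try {
    parsed = typeof raw === 'string' ? JSON.parse(raw) : raw;
  } catch {
    return { valid: false, error: 'non-JSON output' };
  }
  const hasBoolean = typeof parsed.send_email === 'boolean';
  const hasReason = typeof parsed.reason === 'string' && parsed.reason.trim() !== '';
  if (!hasBoolean || !hasReason) {
    return { valid: false, error: 'schema violation' };
  }
  return { valid: true, decision: parsed };
}

// Invalid outputs would be routed to the append-only decision log
// rather than advancing to the outreach domain.
```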
by Cheng Siong Chin
**How It Works**

This workflow automates clinical trial signal validation and regulatory governance through intelligent AI-driven oversight. Designed for clinical research organizations, pharmaceutical companies, and regulatory affairs teams, it solves the critical challenge of ensuring trial compliance while managing post-market surveillance obligations across multiple regulatory frameworks.

The system operates on scheduled intervals, fetching data from clinical trial databases and laboratory production signals, then merging these sources for comprehensive analysis. It employs dual AI agents for clinical signal validation and governance assessment, detecting protocol deviations, safety signals, and compliance violations. The workflow intelligently routes findings based on governance action requirements, orchestrating parallel processes for regulatory reporting, batch result documentation, and post-market surveillance logging. By maintaining synchronized audit trails across regulatory reports, batch records, post-market surveillance, and comprehensive action logs, it ensures complete traceability while automating escalation to quality teams when intervention is required.

**Setup Steps**

1. Configure Schedule Trigger with monitoring frequency for trial oversight
2. Connect Workflow Configuration node with trial parameters and compliance rules
3. Set up Fetch Clinical Trial Data and Fetch Lab & Production Signals nodes
4. Configure Merge Signal Sources node for data consolidation logic
5. Connect Clinical Signal Validation Agent with OpenAI/Nvidia API credentials
6. Set up parallel AI processing
7. Configure Regulatory Governance Agent with AI API credentials for compliance assessment
8. Connect Route by Governance Action node with classification logic (see the sketch at the end of this section)

**Prerequisites**

- OpenAI or Nvidia API credentials for AI validation agents
- Clinical trial database API access

**Use Cases**

- Pharmaceutical companies managing Phase III trial monitoring
- CROs overseeing multi-site clinical studies

**Customization**

Adjust signal validation criteria for therapeutic area-specific protocols.

**Benefits**

Reduces regulatory review cycles by 70% and eliminates manual signal triage.
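To illustrate the routing step, here is a hypothetical sketch of the classification logic behind the Route by Governance Action node. The field names and branch labels are assumptions; the template does not publish its schema, only the branches described above.

```javascript
// Hypothetical routing helper for the Route by Governance Action node.
// Field and branch names are illustrative assumptions.
function routeGovernanceAction(assessment) {
  const { requiresAction, severity } = assessment; // produced by the governance agent
  const branches = ['action-log'];                 // every finding lands in the action log
  if (!requiresAction) return branches;            // audit trail only, no intervention
  if (severity === 'critical') {
    branches.push('escalate-quality-team');        // immediate human escalation
  }
  // Parallel documentation paths described in the workflow overview.
  branches.push('regulatory-reporting', 'batch-record', 'post-market-surveillance');
  return branches;
}
```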
by Muh Resky Adiansyah
**Meta Ads Insights to Google Sheets (Backfill & Weekly Sync ETL)**

This workflow provides a structured way to extract Meta Ads performance data and store it in Google Sheets for reporting, dashboarding, or further analysis. It is designed as a lightweight, reliable ETL pipeline focused on stability, clarity, and ease of use, rather than building a full data warehouse solution.

**What This Workflow Does**

At a high level, the system:

- Pulls Meta Ads Insights data via API
- Supports both historical backfill and automated incremental sync
- Splits large date ranges into manageable weekly chunks
- Handles pagination and retries automatically
- Filters out zero-spend records before storage
- Stores clean, structured data in Google Sheets
- Logs skipped or empty responses for traceability

**Architecture Overview**

Core Components:

- n8n
- Meta Ads API
- Google Sheets

Primary Data Outputs:

- Account_A → Campaign-level data (weekly)
- Account_B → Ad-level data (daily breakdown)
- Account_A_Log / Account_B_Log → Logging for skipped or empty responses

**End-to-End Flow**

A) Dual Entry Points

The workflow supports two execution modes:

1. Historical Backfill (Manual Trigger), used to populate past data:
   - Define start_date and end_date
   - Workflow generates 7-day chunks
   - Each chunk is processed sequentially
2. Incremental Sync (Scheduled Trigger), which runs automatically every 7 days:
   - Dynamically pulls the last 7 days
   - No manual input required

B) Period Chunking

Large date ranges are split into weekly intervals (see the sketch at the end of this section). This:

- Prevents API overload
- Reduces risk of timeouts
- Ensures consistent data retrieval

C) Data Extraction (Per Account)

Each period is processed for two separate data streams:

- Account A: campaign level, weekly granularity
- Account B: ad level, daily granularity (time_increment=1)

Both use pagination handling and fail-safe response handling.

D) Response Validation

Each API response is validated:

- Must contain a non-empty data array
- Invalid or empty responses are redirected to logging

This prevents corrupted or empty data from entering the dataset.

E) Data Transformation

API responses are:

- Split into individual rows
- Normalized (numeric fields converted properly)
- Preserved in full structure (no schema trimming)

F) Filtering Logic

Only meaningful data is stored:

- Records where spend != 0 are allowed
- Zero-spend rows are discarded

This keeps the dataset lean and relevant for reporting.

G) Data Loading

Valid records are appended into Google Sheets:

- Account A → campaign-level table
- Account B → ad-level table

Each run adds new rows without overwriting previous data.

H) Logging & Traceability

If a period returns empty data or an API anomaly, the workflow logs: status, reason, account, date range, execution ID, and timestamp. This creates a lightweight audit trail for debugging and monitoring.

**Safeguards Built In**

- Pagination handling (auto-follow next page)
- Fail-safe handling for unstable API responses
- Execution-level traceability via logs
- Separation between transformation and filtering logic

**Google Sheets Schema**

Account_A / Account_B include:

- Date range (start & stop)
- Account, campaign, adset, and ad identifiers
- Performance metrics (spend, impressions, clicks, etc.)
- Action arrays and ranking fields

Log sheet columns: status, reason, account, since, until, execution_id, timestamp

**Limitations (By Design)**

- Append-only system (no deduplication)
- Re-running the same period will create duplicate rows
- No transactional guarantees (a Google Sheets limitation)
- No concurrency control for parallel executions
- Not designed for real-time reporting

These constraints are intentional to keep the workflow simple and portable.

**When This Design Works Well**

- Marketing reporting pipelines
- Looker Studio / dashboard data sources
- Small to medium datasets
- Teams without a data warehouse
- Lightweight ETL needs

**Setup Requirements**

- Meta Ads API access (ads_read permission)
- Google Sheets (with the required tabs)
- n8n instance (cloud or self-hosted)

**Summary**

This workflow focuses on clarity over complexity, reliability over completeness, and practical ETL over perfect data modeling. It is a solid foundation for building marketing data pipelines without heavy infrastructure.
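For reference, the period-chunking step could be implemented in a Code node roughly like the sketch below. This is an assumed implementation of the 7-day windowing the workflow describes, not its actual code.

```javascript
// Sketch of period chunking: split [startDate, endDate] into 7-day
// windows so each Meta Ads Insights call stays small.
function weeklyChunks(startDate, endDate) {
  const chunks = [];
  let since = new Date(startDate);
  const end = new Date(endDate);
  while (since <= end) {
    const until = new Date(since);
    until.setDate(until.getDate() + 6);            // 7-day window, inclusive
    if (until > end) until.setTime(end.getTime()); // clamp the last partial window
    chunks.push({
      since: since.toISOString().slice(0, 10),
      until: until.toISOString().slice(0, 10),
    });
    since = new Date(until);
    since.setDate(since.getDate() + 1);            // next window starts the day after
  }
  return chunks;
}

// weeklyChunks('2024-01-01', '2024-01-20') yields three windows:
// Jan 1-7, Jan 8-14, and the clamped Jan 15-20.
```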
by Cheng Siong Chin
**How It Works**

This workflow automates end-to-end legal contract review and compliance governance for legal teams, contract managers, and risk officers. It solves the problem of manually reviewing uploaded contracts for regulatory compliance, risk classification, and approval routing, a process that is time-consuming, inconsistent, and difficult to audit at scale.

Contracts are ingested via a webhook upload trigger and text is extracted immediately. The extracted content is passed to a Legal Governance Agent backed by shared memory, which coordinates three specialist components: a Contract Review Agent (using a dedicated review model and memory), a Compliance Validation Agent (referencing a Regulatory Database Tool and compliance model), and a Slack Alert Tool with structured output parsing.

The agent classifies each contract by risk level and routes accordingly: critical alerts, high-risk alerts, and standard reviews each follow distinct paths (see the sketch at the end of this section). All contracts generate an audit record. High and critical cases trigger contract review tracking, approval requirement checks, approval log preparation, and a final summary. Risk clauses are extracted in parallel, split, and stored for downstream reference.

**Setup Steps**

1. Import the workflow; configure the Contract Upload Webhook trigger URL.
2. Add AI model credentials to the Legal Governance Agent, Contract Review Agent, and Compliance Validation Agent nodes.
3. Connect Slack credentials to the Slack Alert Tool node.
4. Link Google Sheets credentials; set sheet IDs for the Contract Reviews, Approval Log, and Risk Clauses tabs.
5. Configure the Regulatory Database Tool with your compliance database API endpoint and credentials.

**Prerequisites**

- OpenAI API key (or compatible LLM)
- Slack workspace with bot credentials
- Google Sheets with review and risk log tabs pre-created
- Regulatory database API endpoint access

**Use Cases**

Legal teams auto-triaging uploaded vendor contracts by compliance risk level.

**Customisation**

Swap the Regulatory Database Tool endpoint to target jurisdiction-specific compliance frameworks (GDPR, CCPA, MAS).

**Benefits**

Eliminates manual contract triage, significantly reducing review cycle time.
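As an illustration of the risk-level routing described above, here is a hypothetical sketch. The riskLevel field and branch names are assumptions rather than the template's actual schema; only the three paths and the universal audit record come from the description.

```javascript
// Hypothetical routing by contract risk level. Branch names are
// illustrative; the workflow's real node names may differ.
function routeContract(review) {
  const { riskLevel } = review;        // assumed: 'critical' | 'high' | 'standard'
  const branches = ['audit-record'];   // every contract generates an audit record
  if (riskLevel === 'critical') {
    branches.push('critical-alert', 'approval-check');
  } else if (riskLevel === 'high') {
    branches.push('high-risk-alert', 'approval-check');
  } else {
    branches.push('standard-review');
  }
  return branches;
}
```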
by Alexandra Spalato
**Short Description**

This LinkedIn automation workflow monitors post comments for specific trigger words and automatically sends direct messages with lead magnets to engaged users. The system checks connection status, handles non-connected users with connection requests, and prevents duplicate outreach by tracking all interactions in a database.

**Key Features**

- **Comment Monitoring**: Scans LinkedIn post comments for customizable trigger words
- **Connection Status Check**: Determines if users are 1st-degree connections
- **Automated DMs**: Sends personalized messages with lead magnet links to connected users
- **Connection Requests**: Asks non-connected users to connect via comment replies
- **Duplicate Prevention**: Tracks interactions in NocoDB to avoid repeat messages
- **Message Rotation**: Uses different comment reply variations for authenticity (see the sketch at the end of this section)
- **Batch Processing**: Handles multiple comments with built-in delays

**Who This Workflow Is For**

- Content creators looking to convert post engagement into leads
- Coaches and consultants sharing valuable LinkedIn content
- Anyone wanting to automate lead capture from LinkedIn posts

**How It Works**

1. **Setup**: Configure post ID, trigger word, and lead magnet link via form
2. **Comment Extraction**: Retrieves all comments from the specified post using Unipile
3. **Trigger Detection**: Filters comments containing the specified trigger word
4. **Connection Check**: Determines if commenters are 1st-degree connections
5. **Smart Routing**: Connected users receive DMs; others get connection requests
6. **Database Logging**: Records all interactions to prevent duplicates

**Setup Requirements**

Required Credentials:

- **Unipile API Key**: For LinkedIn API access
- **NocoDB API Token**: For database tracking

Database Structure (table: leads):

- linkedin_id: LinkedIn user ID
- name: User's full name
- headline: LinkedIn headline
- url: Profile URL
- date: Interaction date
- posts_id: Post reference
- connection_status: Network distance
- dm_status: Interaction type (sent/connection request)

**Customization Options**

- **Message Templates**: Modify DM and connection request messages
- **Trigger Words**: Change the words that activate the workflow
- **Timing**: Adjust delays between messages (8–12 seconds by default)
- **Reply Variations**: Add more comment reply options for authenticity

**Installation Instructions**

1. Import the workflow into your n8n instance
2. Set up the NocoDB database with the required table structure
3. Configure Unipile and NocoDB credentials
4. Set environment variables for the Unipile root URL and LinkedIn account ID
5. Test with a sample post before full use
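To make the trigger detection and message rotation concrete, here is a minimal sketch. The trigger word and reply texts are placeholders you would configure via the setup form; the rotation strategy is an assumed implementation.

```javascript
// Sketch of the trigger-word filter and rotating reply picker.
// TRIGGER and REPLIES are placeholders set via the workflow's form.
const TRIGGER = 'guide';
const REPLIES = [
  "Thanks for your interest! I've sent you a connection request so I can DM the link.",
  'Great to see you here! Accept my connection request and the guide is yours.',
  "Appreciate the comment! Connect with me and I'll send it right over.",
];

function matchesTrigger(comment) {
  // Case-insensitive substring match against the comment text.
  return (comment.text || '').toLowerCase().includes(TRIGGER.toLowerCase());
}

function pickReply(index) {
  // Rotating through variations keeps replies from looking templated.
  return REPLIES[index % REPLIES.length];
}
```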
by Growth AI
**Google Ads automated reporting to spreadsheets with Airtable**

**Who's it for**

Digital marketing agencies, PPC managers, and marketing teams who manage multiple Google Ads accounts and need automated monthly performance reporting organized by campaign types and conversion metrics.

**What it does**

This workflow automatically retrieves Google Ads performance data from multiple client accounts and populates organized spreadsheets with campaign metrics. It differentiates between e-commerce (conversion value) and lead generation (conversion count) campaigns, then organizes data by advertising channel (Performance Max, Search, Display, etc.) with monthly tracking for budget and performance analysis.

**How it works**

The workflow follows an automated data collection and reporting process:

1. **Account Retrieval**: Fetches client information from Airtable (project names, Google Ads IDs, campaign types)
2. **Active Filter**: Processes only accounts marked as "Actif" for budget reporting
3. **Campaign Classification**: Routes accounts through e-commerce or lead generation workflows based on "Typologie ADS"
4. **Google Ads Queries**: Executes different API calls depending on campaign type (conversion value vs. conversion count)
5. **Data Processing**: Organizes metrics by advertising channel (Performance Max, Search, Display, Video, Shopping, Demand Gen)
6. **Dynamic Spreadsheet Updates**: Automatically fills the correct monthly column in client spreadsheets
7. **Sequential Processing**: Handles multiple accounts with wait periods to avoid API rate limits

**Requirements**

- Airtable account with client database
- Google Ads API access with developer token
- Google Sheets API access
- Client-specific spreadsheet templates (provided)

**How to set up**

Step 1: Prepare your reporting template

- Copy the Google Sheets reporting template
- Create individual copies for each client
- Ensure the proper column structure (months B–M for January–December)
- Link template URLs in your Airtable database

Step 2: Configure your Airtable database with the following fields:

- Project names: Client project identifiers
- ID GADS: Google Ads customer IDs
- Typologie ADS: Campaign classification ("Ecommerce" or "Lead")
- Status - Prévisionnel budgétaire: Account status ("Actif" for active accounts)
- Automation budget: URLs to client-specific reporting spreadsheets

Step 3: Set up API credentials

- Airtable Personal Access Token: For client database access
- Google Ads OAuth2: For advertising data retrieval
- Google Sheets OAuth2: For spreadsheet updates
- Developer Token: Required for Google Ads API access
- Login Customer ID: Manager account identifier

Step 4: Configure Google Ads API settings by updating the HTTP request nodes with your credentials:

- Developer Token: Replace "[Your token]" with your actual developer token
- Login Customer ID: Replace "[Your customer id]" with your manager account ID
- API Version: Currently using v18 (update as needed)

Step 5: Set up scheduling

- Default schedule: Runs on the 3rd of each month at 5 AM
- Cron expression: 0 5 3 * *
- Recommended timing: Early-month execution for complete previous-month data
- Processing delay: 1-minute waits between accounts to respect API limits

**How to customize the workflow**

Campaign type customization:

E-commerce campaigns:

- Tracks: Cost and conversion value metrics
- Query: metrics.conversions_value for revenue tracking
- Use case: Online stores, retail businesses

Lead generation campaigns:

- Tracks: Cost and conversion count metrics
- Query: metrics.conversions for lead quantity
- Use case: Service businesses, B2B companies
Advertising channel expansion; current channels tracked:

- Performance Max: Automated campaign type
- Search: Text ads on search results
- Display: Visual ads on partner sites
- Video: YouTube and video partner ads
- Shopping: Product listing ads
- Demand Gen: Audience-focused campaigns

Add new channels by modifying the data processing code nodes.

Reporting period adjustment:

- Current setting: Last month's data (DURING LAST_MONTH)
- Alternative periods: Last 30 days, specific date ranges, quarterly reports
- Custom timeframes: Modify the Google Ads query date parameters

Multi-account management:

- Sequential processing: Handles multiple accounts automatically
- Error handling: Continues processing if individual accounts fail
- Rate limiting: Built-in waits prevent API quota issues
- Batch size: No limit on the number of accounts processed

**Data organization features**

Dynamic monthly columns:

- Automatic detection: Determines the previous month's column (B–M)
- Column mapping: January=B, February=C, ..., December=M (see the sketch at the end of this section)
- Data placement: Updates the correct month automatically
- Multi-year support: Handles year transitions seamlessly

Campaign performance breakdown; each account populates 10 rows of data:

- Performance Max Cost (Row 2)
- Performance Max Conversions/Value (Row 3)
- Demand Gen Cost (Row 4)
- Demand Gen Conversions/Value (Row 5)
- Search Cost (Row 6)
- Search Conversions/Value (Row 7)
- Video Cost (Row 8)
- Video Conversions/Value (Row 9)
- Shopping Cost (Row 10)
- Shopping Conversions/Value (Row 11)

Data processing logic:

- Cost conversion: Automatically converts micros to euros (÷1,000,000)
- Precision rounding: Rounds to 2 decimal places for clean presentation
- Zero handling: Shows 0 for campaign types with no activity
- Data validation: Handles missing or null values gracefully

**Results interpretation**

Monthly performance tracking:

- Historical data: Year-over-year comparison across all channels
- Channel performance: Identify best-performing advertising types
- Budget allocation: Data-driven decisions for campaign investments
- Trend analysis: Month-over-month growth or decline patterns

Account-level insights:

- Multi-client view: Consolidated reporting across all managed accounts
- Campaign diversity: Understanding which channels clients use most
- Performance benchmarks: Compare similar account types and industries
- Resource allocation: Focus on high-performing accounts and channels

**Use cases**

Agency reporting automation:

- Client dashboards: Automated population of monthly performance reports
- Budget planning: Historical data for next month's budget recommendations
- Performance reviews: Ready-to-present data for client meetings
- Trend identification: Spot patterns across multiple client accounts

Internal performance tracking:

- Team productivity: Track account management efficiency
- Campaign optimization: Identify underperforming channels for improvement
- Growth analysis: Monitor client account growth and expansion
- Forecasting: Use historical data for future performance predictions

Strategic planning:

- Budget allocation: Data-driven distribution across advertising channels
- Channel strategy: Determine which campaign types to emphasize
- Client retention: Proactive identification of declining accounts
- New business: Performance data to support proposals and pitches

**Workflow limitations**

- Monthly execution: Designed for monthly reporting (not real-time)
- API dependencies: Requires stable Google Ads and Sheets API access
- Rate limiting: Sequential processing prevents parallel account handling
- Template dependency: Requires a specific spreadsheet structure for proper data placement
- Previous-month focus: Optimized for completed-month data (run early in the new month)
- Manual credential setup: Requires individual configuration of API tokens and customer IDs
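For reference, the two data-processing rules above (micros-to-euros conversion and dynamic monthly columns) could look roughly like this in a Code node. This is an assumed implementation, not the template's actual code; only the ÷1,000,000 conversion, 2-decimal rounding, and the January=B through December=M mapping come from the description.

```javascript
// Sketch of two helpers behind the data-processing nodes.

// Google Ads reports cost in micros; divide by 1,000,000 and round
// to 2 decimal places, as described above.
function microsToEuros(micros) {
  return Math.round((micros / 1_000_000) * 100) / 100;
}

// Resolve last month's sheet column: January = B ... December = M.
function lastMonthColumn(now = new Date()) {
  const prev = (now.getMonth() + 11) % 12; // 0 = January ... 11 = December
  return String.fromCharCode('B'.charCodeAt(0) + prev);
}

// Run early in March: lastMonthColumn() returns 'C', February's column.
// Run in January: returns 'M', December's column (year transition).
```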
by Rayan Koemi Karuby
🛠️ **How It Works: System Architecture**

This workflow runs through four main processing layers that are fully integrated and automated:

**Input Processing & Routing**

- Telegram Trigger: Captures every incoming message from students.
- Smart Routing (Switch): The Switch node analyzes the input type. Depending on whether the input is a command (/help, /cekstatus), plain text (a question), or an image file (a form), the workflow routes it to the relevant processing path (see the sketch at the end of this section).

**Document Intelligence (OCR Path)**

- Image Extraction: When a student sends a photo of a form, the "from File" node prepares the binary data for analysis.
- AI Vision Analysis: Uses a Gemini model (via HTTP Request) to read the text in the image and extract it into structured JSON (Name, NIM, Title, etc.).
- Database Sync: Successfully extracted data is automatically saved as a new row in Google Sheets, which serves as the study program's primary database.

**Knowledge Retrieval (RAG Path)**

- Knowledge Base Setup: Academic guideline files in Google Drive are monitored by a trigger. Whenever a new file appears, the system downloads it, performs embedding, and stores it in a Supabase Vector Store.
- Contextual Answers: When a student asks a question, the AI Agent first searches Supabase for the most relevant information, then composes an answer grounded in those official documents so responses stay accurate and don't "hallucinate".

**Status Tracking & Memory**

- Personalized Status: When the /cekstatus command is invoked, the AI Agent pulls that student's specific data from Google Sheets and summarizes their administrative progress in plain language.
- Conversational Memory: Uses Postgres Chat Memory to store conversation history, so students can ask follow-up questions without losing context.
- Admin Logging: All chat activity is also logged back to the spreadsheet for auditing and monitoring by admin staff.
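For reference, the Switch routing described above could be expressed roughly as follows. The path names are illustrative assumptions; only the three input types and the /cekstatus and /help commands come from the description.

```javascript
// Illustrative input-routing logic behind the Switch node: commands,
// plain-text questions, and photo uploads each take a different path.
function routeTelegramUpdate(message) {
  if (message.photo?.length) return 'ocr-path';      // form image, goes to Gemini OCR
  const text = message.text || '';
  if (text.startsWith('/cekstatus')) return 'status-path'; // personalized status lookup
  if (text.startsWith('/')) return 'command-path';   // /help and other commands
  return 'rag-path';                                 // free-text question, RAG answer
}
```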
by Grigory Frolov
📊 **YouTube Personal Channel Videos → Google Sheets**

Automatically sync your YouTube videos (title, description, tags, publish date, captions, etc.) into Google Sheets: perfect for creators and marketers who want a clean content database for analysis or reporting.

🚀 **What this workflow does**

- ✅ Connects to your personal YouTube channel via Google OAuth
- 🔁 Fetches all uploaded videos automatically, with pagination (see the sketch at the end of this section)
- 🏷 Extracts metadata: title, description, tags, privacy status, upload status, thumbnail, etc.
- 🧾 Retrieves captions (SRT format) if available
- 📈 Writes or updates data in your Google Sheets document
- ⚙️ Can be run manually or scheduled via Cron

🧩 **Nodes used**

- **Manual Trigger**: to start manually or connect with Cron
- **HTTP Request (YouTube API v3)**: fetches channel, uploads, and captions
- **Code nodes**: manage pagination and collect IDs
- **SplitOut**: iterates through video lists
- **Google Sheets (appendOrUpdate)**: stores data neatly
- **If conditions**: control data flow and prevent empty responses

⚙️ **Setup guide**

1. Connect your Google account. It is used for both the YouTube API and Google Sheets; make sure the credentials are set up in the Google OAuth2 API and Google Sheets OAuth2 API nodes.
2. Create a Google Sheet. Add a tab named Videos with these columns: youtube_id | title | description | tags | privacyStatus | uploadStatus | thumbnail | captions. You can also include categoryId, maxres, or published if you'd like.
3. Replace the sample Sheet ID. In each Google Sheets node, open the "Spreadsheet" field and choose your own document. Make sure the sheet name matches the tab name (Videos).
4. Run the workflow. Execute it manually first to pull your latest uploads. Optionally add a Cron Trigger node for daily sync (e.g., once per day).
5. Check your Sheet. Your data should appear instantly, with each video's metadata and captions (if available).

🧠 **Notes & tips**

- ⚙️ The flow loops through all pages of your upload playlist automatically; no manual pagination needed.
- 🕒 The workflow uses YouTube's contentDetails.relatedPlaylists.uploads to ensure you only fetch your own uploads.
- 💡 Captions fetch may fail for private videos; use "Continue on Fail" if you want the rest to continue.
- 🧮 Ideal for dashboards, reporting sheets, SEO analysis, or automation triggers.
- 💾 To improve speed, you can disable the "Captions" branch if you only need metadata.

👥 **Ideal for**

- 🎬 YouTube creators maintaining a video database
- 📊 Marketing teams tracking SEO performance
- 🧠 Digital professionals building analytics dashboards
- ⚙️ Automation experts using YouTube data in other workflows

💛 **Credits**

Created by Grigory Frolov
YouTube: @gregfrolovpersonal
More workflows and guides → ozwebexpert.com/n8n
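For the curious, the pagination handled by the Code nodes could look roughly like the sketch below. It is an assumed implementation using the public playlistItems endpoint of YouTube Data API v3, with placeholder values; the workflow's actual Code nodes may differ.

```javascript
// Sketch of a pagination loop around the playlistItems endpoint:
// collects every video ID from the channel's uploads playlist.
// uploadsPlaylistId comes from contentDetails.relatedPlaylists.uploads.
async function fetchAllUploadIds(uploadsPlaylistId, accessToken) {
  const ids = [];
  let pageToken = '';
  do {
    const url = new URL('https://www.googleapis.com/youtube/v3/playlistItems');
    url.searchParams.set('part', 'contentDetails');
    url.searchParams.set('playlistId', uploadsPlaylistId);
    url.searchParams.set('maxResults', '50');           // API maximum per page
    if (pageToken) url.searchParams.set('pageToken', pageToken);

    const res = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
    const data = await res.json();
    ids.push(...(data.items || []).map(i => i.contentDetails.videoId));
    pageToken = data.nextPageToken || '';               // empty token ends the loop
  } while (pageToken);
  return ids;
}
```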
by Stephan Koning
**Real-Time ClickUp Time Tracking to HubSpot Project Sync**

This workflow automates the synchronization of time tracked on ClickUp tasks directly to a custom project object in HubSpot, ensuring your project metrics are always accurate and up to date.

**Use Case & Problem**

This workflow is designed for teams that use a custom object in HubSpot for high-level project overviews (tracking scoped vs. actual hours per sprint) but manage daily tasks and time logging in ClickUp. The primary challenge is the constant, manual effort required to transfer tracked hours from ClickUp to HubSpot, a process that is both time-consuming and prone to errors. This automation eliminates that manual work entirely.

**How It Works**

1. **Triggers on Time Entry** ⏱️: The workflow instantly starts whenever a user updates the time tracked on any task in a specified ClickUp space.
2. **Fetches Task & Time Details**: It immediately retrieves all relevant data about the task (like its name and custom fields) and the specific time entry that was just updated.
3. **Identifies the Project & Sprint**: The workflow processes the task data to determine which HubSpot project it belongs to and categorizes the work into the correct sprint (e.g., Sprint 1, Sprint 2, Additional Requests).
4. **Updates HubSpot in Real-Time**: It finds the corresponding project record in HubSpot and updates the master actual_hours_tracked property. It then intelligently updates the specific field for the corresponding sprint (e.g., actual_sprint_1_hours), ensuring your reporting remains granular and accurate. A sketch of this mapping appears at the end of this section.

**Requirements**

✅ ClickUp account with the following custom fields on your tasks:

- A Dropdown custom field named Sprint to categorize tasks.
- A Short Text custom field named HubSpot Deal ID (or similar) to link to the HubSpot record.

✅ HubSpot account with:

- A custom object used for project tracking.
- **Custom properties** on that object to store total and sprint-specific hours (e.g., actual_hours_tracked, actual_sprint_1_hours, total_time_remaining, etc.).

> Note: Since this workflow interacts with a custom HubSpot object, it uses flexible HTTP Request nodes instead of the standard n8n HubSpot nodes.

**Setup Instructions**

1. **Configure Credentials**: Add your ClickUp (OAuth2) and HubSpot (Header Auth with a Private App Token) credentials to the respective nodes in the workflow.
2. **Set ClickUp Trigger**: In the Time Tracked Update Trigger node, select your ClickUp team and the specific space you want to monitor for time updates.
3. **Update HubSpot Object ID**: Find the objectTypeId of your custom project object in HubSpot, then replace the placeholder objectTypeId in the URL of the HubSpot HTTP Request nodes (e.g., OnProjectFolder) with your own.

**How to Customize**

- Adjust the Extract Sprint & Task Data Code node to change how sprint names are mapped or how time is calculated.
- Update the URLs in the HubSpot HTTP Request nodes if your custom object or property names differ.
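As a sketch of the sprint mapping inside the Extract Sprint & Task Data node: the actual_hours_tracked and actual_sprint_1_hours property names mirror the examples above, while the "Additional Requests" property name and the accumulation logic are assumptions about how your custom object might be configured.

```javascript
// Hypothetical sprint-name to HubSpot-property mapping. Extend the map
// if your Sprint dropdown has different options; 'actual_additional_hours'
// is an assumed property name.
const SPRINT_PROPERTY_MAP = {
  'Sprint 1': 'actual_sprint_1_hours',
  'Sprint 2': 'actual_sprint_2_hours',
  'Additional Requests': 'actual_additional_hours',
};

// Build the body for a HubSpot CRM-object PATCH request, adding the new
// hours to both the master total and the sprint-specific property.
function buildHubSpotUpdate(sprintName, hoursTracked, previous = {}) {
  const sprintProp = SPRINT_PROPERTY_MAP[sprintName];
  if (!sprintProp) throw new Error(`Unmapped sprint: ${sprintName}`);
  return {
    properties: {
      actual_hours_tracked: (previous.actual_hours_tracked || 0) + hoursTracked,
      [sprintProp]: (previous[sprintProp] || 0) + hoursTracked,
    },
  };
}
```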