by keisha kalra
Try It Out!
This n8n template helps you analyze Google Maps reviews for a list of restaurants, summarize them with AI, and identify optimization opportunities, all in one automated workflow. Whether you're managing multiple locations, helping local restaurants improve their digital presence, or conducting a competitor analysis, this workflow extracts insights from dozens of reviews in minutes.

How It Works
- Start with a pre-filled list of restaurants in Google Sheets.
- The workflow uses SerpAPI to scrape Google Maps reviews for each listing.
- Reviews with content are passed to ChatGPT for summarization (see the filtering sketch at the end of this description).
- Empty or failed reviews are logged in a separate tab for easy follow-up.
- Results are stored back in your Google Sheet for analysis or sharing.

How To Use
- Customize the input list in Google Sheets with your own restaurants.
- Update the OpenAI prompt if you want a different style of summary.
- You can trigger the workflow manually or swap in a schedule, webhook, or other event.

Requirements
- A SerpAPI account to fetch reviews
- An OpenAI account for ChatGPT summarization
- Access to Google Sheets and n8n

Who Is It For?
This workflow is helpful for anyone who needs to analyze a large batch of Google reviews in a short amount of time. It can also be used to compare restaurants and see where each can be optimized.

How To Set Up
Use a SerpAPI endpoint in the HTTP Request node. Refer to the n8n documentation for more help: https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/. Happy Automating!
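As a rough sketch of the review-filtering step, here is what a Code node sitting between the SerpAPI HTTP Request and the ChatGPT summarization might look like. The `reviews`, `snippet`, and `place_info` field names are assumptions about the SerpAPI Google Maps Reviews response and should be checked against its docs.

```javascript
// n8n Code node sketch (assumed field names; adjust to the actual SerpAPI response).
// Splits the reviews into ones that have text and ones that are empty or failed,
// mirroring the two Google Sheets tabs described above.
const out = [];
for (const item of $input.all()) {
  const reviews = item.json.reviews ?? [];          // SerpAPI review array (assumed)
  for (const r of reviews) {
    const text = (r.snippet ?? '').trim();          // review text (assumed field)
    out.push({
      json: {
        restaurant: item.json.place_info?.title ?? 'unknown',
        rating: r.rating,
        text,
        hasContent: text.length > 0,                // route on this flag with an IF node
      },
    });
  }
}
return out;
```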
by Wyeth
Are you writing complex Code nodes and need IntelliSense support? Follow this simple pattern to get autocomplete for any n8n or custom classes.
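One common pattern, not necessarily the exact one this template uses, is to draft the Code node body in an external editor (for example VS Code) with JSDoc annotations, which most editors use for type inference and autocomplete. A minimal sketch, with illustrative type and field names:

```javascript
// Sketch of a JSDoc-typed helper you can paste into a Code node.
// The type and field names are illustrative; point them at whatever
// n8n or custom classes you actually use so your editor can autocomplete them.

/**
 * @typedef {Object} OrderItem
 * @property {string} sku
 * @property {number} quantity
 * @property {number} unitPrice
 */

/**
 * Sum the value of an order.
 * @param {OrderItem[]} items
 * @returns {number}
 */
function orderTotal(items) {
  return items.reduce((sum, i) => sum + i.quantity * i.unitPrice, 0);
}

// $input.all() returns the execution items; the JSDoc cast below gives the
// editor enough information to autocomplete `items` inside orderTotal().
const items = /** @type {OrderItem[]} */ ($input.first().json.items ?? []);
return [{ json: { total: orderTotal(items) } }];
```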
by David Ashby
Complete MCP server exposing 27 Amazon CloudWatch Application Insights API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Amazon CloudWatch Application Insights credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Amazon CloudWatch Application Insights API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://applicationinsights.{region}.amazonaws.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example at the end of this listing)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (27 total)
• POST /#X-Amz-Target=EC2WindowsBarleyService.CreateApplication: Adds an application that is created from a resource group.
• POST /#X-Amz-Target=EC2WindowsBarleyService.CreateComponent: Creates a custom component by grouping similar standalone instances to monitor.
• POST /#X-Amz-Target=EC2WindowsBarleyService.CreateLogPattern: Adds a log pattern to a LogPatternSet.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteApplication: Removes the specified application from monitoring. Does not delete the application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteComponent: Ungroups a custom component. When you ungroup custom components, all applicable monitors that are set up for the component are removed and the instances revert to their standalone status.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DeleteLogPattern: Removes the specified log pattern from a LogPatternSet.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeApplication: Describes the application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponent: Describes a component and lists the resources that are grouped together in a component.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfiguration: Describes the monitoring configuration of the component.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeComponentConfigurationRecommendation: Describes the recommended monitoring configuration of the component.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeLogPattern: Describes a specific log pattern from a LogPatternSet.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeObservation: Describes an anomaly or error with the application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblem: Describes an application problem.
• POST /#X-Amz-Target=EC2WindowsBarleyService.DescribeProblemObservations: Describes the anomalies or errors associated with the problem.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListApplications: Lists the IDs of the applications that you are monitoring.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListComponents: Lists the auto-grouped, standalone, and custom components of the application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListConfigurationHistory: Lists the INFO, WARN, and ERROR events for periodic configuration updates performed by Application Insights. Examples of events represented are: INFO: creating a new alarm or updating an alarm threshold. WARN: alarm not created due to insufficient data points used to predict thresholds. ERROR: alarm not created due to permission errors or exceeding quotas.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatternSets: Lists the log pattern sets in the specific application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListLogPatterns: Lists the log patterns in the specific LogPatternSet.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListProblems: Lists the problems with your application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.ListTagsForResource: Retrieves a list of the tags (keys and values) that are associated with a specified application. A tag is a label that you optionally define and associate with an application. Each tag consists of a required tag key and an optional associated tag value. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key.
• POST /#X-Amz-Target=EC2WindowsBarleyService.TagResource: Adds one or more tags (keys and values) to a specified application. A tag is a label that you optionally define and associate with an application. Tags can help you categorize and manage applications in different ways, such as by purpose, owner, environment, or other criteria. Each tag consists of a required tag key and an associated tag value, both of which you define. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key.
• POST /#X-Amz-Target=EC2WindowsBarleyService.UntagResource: Removes one or more tags (keys and values) from a specified application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateApplication: Updates the application.
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponent: Updates the custom component name and/or the list of resources that make up the component.
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateComponentConfiguration: Updates the monitoring configurations for the component. The configuration input parameter is an escaped JSON of the configuration and should match the schema of what is returned by DescribeComponentConfigurationRecommendation.
• POST /#X-Amz-Target=EC2WindowsBarleyService.UpdateLogPattern: Adds a log pattern to a LogPatternSet.

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Amazon CloudWatch Application Insights API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
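To illustrate how the $fromAI() placeholders work, a hypothetical JSON body for the DescribeApplication HTTP Request node might look like the snippet below. The ResourceGroupName field follows the AWS API, but the description string is illustrative and the exact $fromAI() arguments should be checked against the imported workflow.

```json
{
  "ResourceGroupName": "{{ $fromAI('ResourceGroupName', 'Name of the application resource group to describe', 'string') }}"
}
```

At call time, the connected AI agent fills in the ResourceGroupName value, so no static parameter mapping is needed.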
by jason
Nathan is a proof-of-concept framework for creating a personal assistant that can handle various day-to-day functions for you.
by The { AI } rtist
Tutorial: https://comunidad-n8n.com/bot-multi-idioma-no-code/
Telegram community: https://t.me/comunidadn8n
Bot: https://t.me/NocodeTranslateBot
by Jason Foster
Gets Google Calendar events for the day (12 hours from execution time), and filters out in-person meetings, Signal meetings, and meetings canceled by Calendly ("transparent").
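A rough sketch of how that filtering could look in a Code node follows. The transparency, location, and hangoutLink fields come from the Google Calendar API, but the Signal and in-person heuristics are assumptions, not necessarily this template's exact rules.

```javascript
// Sketch of the event filter, assuming standard Google Calendar event fields.
// "Transparent" events are the ones Calendly marks as free/cancelled.
const keep = [];
for (const item of $input.all()) {
  const ev = item.json;
  const location = (ev.location ?? '').toLowerCase();
  const isTransparent = ev.transparency === 'transparent';   // cancelled by Calendly
  const isSignal = location.includes('signal');              // Signal call link in location (assumed heuristic)
  const hasVideoLink = Boolean(ev.hangoutLink) || location.startsWith('http');
  const isInPerson = !hasVideoLink && location.length > 0;   // physical address, no meeting link (assumed heuristic)
  if (!isTransparent && !isSignal && !isInPerson) {
    keep.push(item);
  }
}
return keep;
```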
by manohar
This workflow assigns a user to an issue if they include "assign me" when opening or commenting. To use this workflow, you will need to update the credentials used for the GitHub nodes.
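For illustration, a Code node performing the "assign me" check might look roughly like this. The field names assume a standard GitHub issues/issue_comment webhook payload; the template's own nodes may differ.

```javascript
// Sketch of the "assign me" check, assuming a GitHub issues/issue_comment webhook payload.
const first = $input.first().json;
const payload = first.body ?? first;   // webhook nodes nest the payload under `body`; triggers may not
const text = (payload.comment?.body ?? payload.issue?.body ?? '').toLowerCase();
const username = payload.comment?.user?.login ?? payload.issue?.user?.login;
return [{
  json: {
    shouldAssign: text.includes('assign me'),   // route with an IF node
    username,                                   // pass to the GitHub node as the assignee
    issueNumber: payload.issue?.number,
    repo: payload.repository?.name,
    owner: payload.repository?.owner?.login,
  },
}];
```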
by Ronald
Sometimes you need the rich text field to be in HTML instead of Markdown. This template syncs either a single record or all records at once. YouTube tutorial
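As a rough illustration only (the template may rely on n8n's built-in Markdown node or a proper Markdown library instead), a minimal Code node conversion could look like this; the richText field name is hypothetical.

```javascript
// Rough Markdown-to-HTML sketch for a Code node. It only covers a few common
// constructs (bold, italic, links, line breaks); a real conversion should use
// n8n's Markdown node or a dedicated library.
function mdToHtml(md) {
  return md
    .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
    .replace(/\*(.+?)\*/g, '<em>$1</em>')
    .replace(/\[(.+?)\]\((.+?)\)/g, '<a href="$2">$1</a>')
    .replace(/\n/g, '<br>');
}

return $input.all().map((item) => ({
  json: { ...item.json, richTextHtml: mdToHtml(item.json.richText ?? '') },  // `richText` is a placeholder field name
}));
```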
by Jordan Hoyle
Description
Automate the discovery and analysis of PDF files across a deeply nested OneDrive folder structure. This workflow recursively searches folders, filters for new or updated PDFs, extracts text, and uses a Mistral AI agent to generate a concise Executive Summary, Key Findings, and Structured Metadata (Date, Location, etc.), storing all insights in an n8n Data Table for easy access and further automation.

Key Features & How It Works
- Scheduled Trigger & Recursive Folder Search: The workflow runs automatically (scheduled for 8 PM in this template) to monitor a specified main folder on OneDrive. It performs a deep, multi-level search (up to 8 layers) across subfolders to ensure no documents are missed.
- Smart Deduplication & Filtering: It checks new files against an internal n8n Data Table using the Compare Datasets node, ensuring only new or unique PDF files are processed, saving AI credits and processing time. A size check is also included, preventing attempts to process excessively large files (see the filter sketch after this description).
- AI-Powered Document Intelligence (Mistral LLM): For each new PDF, the workflow extracts the text and passes it to a Mistral AI model for dual-stream analysis:
  - Overview Agent: Generates an impartial, professional Executive Summary, a list of Key Findings & Data Points, and the document's Scope/Context.
  - Document Information Agent: Extracts crucial metadata, including the single most relevant date, location (City/State/Country), and professional information (Name, Title, Organization).
- Structured Output and Archiving: AI outputs are validated and reformatted into a clean JSON object using Structured Output Parsers. The complete analysis, along with the original file name and path, is then logged as a new row in an n8n Data Table.

Setup Notes
- OneDrive Folder: You must specify the exact name of your main folder in the 'Search for Main Folder' node.
- Data Table: Ensure your n8n Data Table exists with the required columns: Summary, Key_Findings, Scope, Date, Location, File_Name, and Path.
- Deep Folder Structure: The current configuration supports up to 8 levels of subfolders. If your files go deeper, you may need to add more "Get items in a folder" and "If" nodes.
- AI Customization: Review the AI agent prompts and the structured output schemas to customize the fields you want to extract or the summary style you require.

Extend This Workflow
The final output is organized data. You can easily extend this workflow to:
- Send daily/weekly digest emails with new summaries.
- Sync the extracted data to a Google Sheet, Airtable, or another database.
- Add a secondary AI agent to perform follow-up actions based on the "Key Findings."
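For illustration, the PDF/size filtering step could be sketched in a Code node like this. The name and size fields follow the OneDrive item schema, and the 20 MB cap is an assumption, not the template's actual limit.

```javascript
// Sketch of the PDF filter step, assuming Microsoft OneDrive item fields (name, size).
// The size cap below is illustrative; adjust it to whatever the workflow actually uses.
const MAX_BYTES = 20 * 1024 * 1024;
return $input.all().filter((item) => {
  const f = item.json;
  const isPdf = (f.name ?? '').toLowerCase().endsWith('.pdf');
  const smallEnough = (f.size ?? 0) <= MAX_BYTES;
  return isPdf && smallEnough;
});
```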
by Vigh Sandor
Setup Instructions

Overview
This n8n workflow monitors your Proxmox VE server and sends automated reports to Telegram every 15 minutes. It tracks VM status, host resource usage, temperature sensors, and detects recently stopped VMs.

Prerequisites
Required Software
- n8n instance (self-hosted or cloud)
- Proxmox VE server with API access
- Telegram account with a bot created via BotFather
- lm-sensors package installed on the Proxmox host
Required Access
- Proxmox admin credentials (username and password)
- SSH access to the Proxmox server
- Telegram Bot API token
- Telegram Chat ID

Installation Steps

Step 1: Install Temperature Sensors on Proxmox
SSH into your Proxmox server and run:
apt-get update
apt-get install -y lm-sensors
sensors-detect
Press ENTER to accept the default answers during sensors-detect setup. Test that the sensors work:
sensors | grep -E 'Package|Core'

Step 2: Create Telegram Bot
1. Open Telegram and search for BotFather
2. Send the /newbot command
3. Follow the prompts to create your bot
4. Save the API token provided
5. Get your Chat ID by sending a message to your bot, then visiting: https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates
6. Look for "chat":{"id": YOUR_CHAT_ID in the response

Step 3: Configure n8n Credentials
SSH Password Credential
1. In n8n, go to the Credentials menu
2. Create a new credential: SSH Password
3. Enter: Host (your Proxmox IP address), Port (22), Username (root or your admin user), Password (your Proxmox password)
Telegram API Credential
1. Create a new credential: Telegram API
2. Enter the Bot Token from BotFather

Step 4: Import and Configure Workflow
1. Import the JSON workflow into n8n
2. Open the "Set Variables" node and update the following values:
   - PROXMOX_IP: Your Proxmox server IP address
   - PROXMOX_PORT: API port (default: 8006)
   - PROXMOX_NODE: Node name (default: pve)
   - TELEGRAM_CHAT_ID: Your Telegram chat ID
   - PROXMOX_USER: Proxmox username with realm (e.g., root@pam)
   - PROXMOX_PASSWORD: Proxmox password
3. Connect credentials: select your SSH credential in the "SSH - Get Sensors" node and your Telegram credential in the "Send Telegram Report" node
4. Save the workflow
5. Activate the workflow

Configuration Options

Adjust Monitoring Interval
Edit the "Schedule Every 15min" node and change the minutesInterval value to the desired interval (in minutes). Recommended: 5-30 minutes.

Adjust Recently Stopped VM Detection Window
Edit the "Process Data" node, find the line const fifteenMinutesAgo = now - 900; and change 900 to the desired number of seconds (900 = 15 minutes).

Modify Temperature Warning Threshold
The workflow uses the "high" threshold defined by sensors.
To set the threshold manually, edit the "Process Data" node: modify the temperature parsing logic and change the comparison if (current >= high) to use a custom value.

Testing
Test Individual Components
1. Execute the "Set Variables" node manually and verify the output
2. Execute the "Proxmox Login" node and check for a valid ticket
3. Execute "API - VM List" and confirm VM data is received
4. Execute the complete workflow and check Telegram for the message

Troubleshooting
Login fails:
- Verify the PROXMOX_USER format includes the realm (e.g., root@pam)
- Check the password is correct
- Ensure allowUnauthorizedCerts is enabled for self-signed certificates
No temperature data:
- Verify lm-sensors is installed on Proxmox
- Run the sensors command manually via SSH
- Check the SSH credentials are correct
Recently stopped VMs not detected:
- Check the task log API endpoint returns data
- Verify the VM was stopped within the detection window
- Ensure task types qmstop or qmshutdown are logged
Telegram not receiving messages:
- Verify the bot token is correct
- Confirm the chat ID is accurate
- Check the bot was started (send /start to the bot)
- Verify parse_mode is set to HTML in the Telegram node

How It Works

Workflow Architecture
The workflow executes a sequential chain of nodes that gather data from multiple sources, process it, and deliver a formatted report.

Execution Flow
1. Schedule Trigger (15min)
2. Set Variables
3. Proxmox Login (get authentication ticket)
4. Prepare Auth (prepare credentials for API calls)
5. API - VM List (get all VMs and their status)
6. API - Node Tasks (get recent task log)
7. API - Node Status (get host CPU, memory, uptime)
8. SSH - Get Sensors (get temperature data)
9. Process Data (analyze and structure all data)
10. Generate Formatted Message (create Telegram message)
11. Send Telegram Report (deliver via Telegram)

Data Collection
VM Information (Proxmox API)
- Endpoint: /api2/json/nodes/{node}/qemu
- Retrieves: total VM count, running VM count, stopped VM count, VM names and IDs
Task Log (Proxmox API)
- Endpoint: /api2/json/nodes/{node}/tasks?limit=100
- Retrieves recent tasks to detect: qmstop operations (VM stop commands), qmshutdown operations (VM shutdown commands), task timestamps, task status
Host Status (Proxmox API)
- Endpoint: /api2/json/nodes/{node}/status
- Retrieves: CPU usage percentage, memory total and used (in GB), system uptime (in seconds)
Temperature Data (SSH)
- Command: sensors | grep -E 'Package|Core'
- Retrieves: CPU package temperature, individual core temperatures, high and critical thresholds

Data Processing
VM Status Analysis
- Counts total, running, and stopped VMs
- Queries the task log for stop/shutdown operations
- Filters tasks within the 15-minute window
- Extracts the VM ID from the task UPID string
- Matches the VM ID to the VM name from the VM list
- Calculates the time elapsed since the stop operation
Temperature Intelligence
The workflow implements smart temperature reporting:
- Normal operation (all temperatures below the high threshold): calculates the average temperature across all cores and displays min, max, and average values. Example: "Average: 47.5 C (Min: 44.0 C, Max: 52.0 C)"
- Warning state (any temperature at or above the high threshold): displays all temperature readings in detail, shows the full sensor output with thresholds, changes the section title to "Temperature Warning", and adds a fire emoji indicator
Resource Calculation (a code sketch follows this description)
- CPU usage: the API returns a decimal (0.0 to 1.0), converted to a percentage: cpu * 100
- Memory: the API returns bytes, converted to GB: bytes / (1024^3); percentage: (used / total) * 100
- Uptime: the API returns seconds, converted to days and hours: days = seconds / 86400, hours = (seconds % 86400) / 3600

Report Generation
Message Structure
The Telegram message uses HTML formatting for structure:
- Header section: report title, generation timestamp
- Virtual Machines section: total VM count, running VMs with a checkmark, stopped VMs with a stop sign, recently stopped count with a warning, and a detailed list if VMs stopped in the last 15 minutes
- Host Resources section: CPU usage percentage, memory used/total with percentage, host uptime in days and hours
- Temperature section: smart display (summary or detailed), warning indicator if thresholds are exceeded, monospace formatting for sensor output
HTML Formatting Features
- Bold tags for headers and labels
- Italic for timestamps
- Code blocks for temperature data
- Unicode separators for visual structure
- Emoji indicators for status (checkmark, stop, warning, fire)

Security Considerations
Credential Storage
- Passwords are stored in the n8n Set node (encrypted in the database)
- Alternative: use n8n environment variables
- Recommendation: use Proxmox API tokens instead of passwords
API Communication
- HTTPS with self-signed certificate acceptance
- Authentication via session tickets (15-minute validity)
- CSRF token validation for API requests
SSH Access
- Password-based authentication (key-based is also possible)
- Commands limited to read-only operations
- No privilege escalation required

Performance Impact
API Load
- 3 API calls per execution (VM list, tasks, status)
- Lightweight endpoints with minimal data
- The 15-minute interval reduces server load
Execution Time
- Typical workflow execution: 5-10 seconds (login: 1-2 s, API calls: 2-3 s, SSH command: 1-2 s, processing: less than 1 s)
Resource Usage
- Minimal CPU impact on Proxmox
- Small memory footprint
- Negligible network bandwidth

Extensibility
Adding Additional Metrics
To monitor additional data points:
1. Add a new API call node after "Prepare Auth"
2. Update the "Process Data" node to include the new data
3. Modify "Generate Formatted Message" for display
Integration with Other Services
The workflow can be extended to:
- Send to Discord, Slack, or email
- Write to a database or log file
- Trigger alerts based on thresholds
- Generate charts or graphs
Multi-Node Monitoring
To monitor multiple Proxmox nodes:
1. Duplicate the API call nodes
2. Update the node names in the URLs
3. Merge the data in the processing step
4. Generate a combined report
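To make the Process Data calculations concrete, here is a minimal sketch. Node names and response fields follow the description above (Proxmox /nodes/{node}/status and /nodes/{node}/tasks); the template's actual code may differ.

```javascript
// Sketch of the Process Data calculations described above.
const status = $('API - Node Status').first().json.data;
const tasks = $('API - Node Tasks').first().json.data ?? [];

const cpuPercent = status.cpu * 100;                                  // API returns 0.0-1.0
const memUsedGb = status.memory.used / (1024 ** 3);                   // bytes -> GB
const memTotalGb = status.memory.total / (1024 ** 3);
const memPercent = (status.memory.used / status.memory.total) * 100;
const uptimeDays = Math.floor(status.uptime / 86400);
const uptimeHours = Math.floor((status.uptime % 86400) / 3600);

// VMs stopped within the last 15 minutes (900 seconds)
const now = Math.floor(Date.now() / 1000);
const fifteenMinutesAgo = now - 900;
const recentlyStopped = tasks.filter(
  (t) => ['qmstop', 'qmshutdown'].includes(t.type) && t.starttime >= fifteenMinutesAgo
);

return [{ json: { cpuPercent, memUsedGb, memTotalGb, memPercent, uptimeDays, uptimeHours, recentlyStopped } }];
```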
by isaWOW
Description
Paste any video URL in chat (YouTube, podcast, webinar, anything) and this n8n workflow automatically finds the best short clips using WayinVideo AI, adds captions, reframes to 9:16 vertical format, and uploads everything directly to your Google Drive. No editing software, no manual work. Just paste and done. Built for content creators, marketers, and agencies who want to repurpose long-form video into viral short clips on autopilot.

What This Workflow Does
This automation handles your complete video-to-clips pipeline:
- Chat-triggered: the user pastes any video URL and the workflow starts instantly
- AI clip detection: WayinVideo AI finds the most engaging moments automatically
- Auto-captions: adds styled captions to every clip without manual effort
- Smart reframing: converts horizontal video to 9:16 vertical (perfect for Reels, Shorts, TikTok)
- Smart polling loop: waits and retries every 30 seconds until processing is complete
- Batch download: downloads all generated clips automatically
- Google Drive upload: saves every clip with its title directly to your chosen folder

Setup Requirements
Tools you'll need:
- Active n8n instance (self-hosted or n8n Cloud)
- WayinVideo account with API access
- Google Drive with OAuth2 access
Estimated setup time: 10-15 minutes

Step-by-Step Setup

1. Get Your WayinVideo API Key
WayinVideo is the AI engine that generates short clips from your videos.
- Go to WayinVideo and create a free/paid account
- Navigate to Dashboard → API section
- Copy your Bearer API token
- Open the "🎬 Submit Video to WayinVideo API" node in n8n
- Replace YOUR_WAYINVIDEO_API_KEY with your actual token (in the Authorization header)
- Do the same in the "🔄 Poll for Clip Results" node: replace the same placeholder there too
> ⚠️ The API key appears in two places, the Submit node and the Poll node. Replace both!

2. Connect Google Drive
In n8n:
- Go to Credentials → Add Credential → Google Drive OAuth2 API
- Complete the Google OAuth authentication
- Open the "☁️ Upload Clip to Google Drive" node
- Select your Google Drive credential
- Replace YOUR_GOOGLE_DRIVE_FOLDER_ID with your actual folder ID
How to find your Google Drive folder ID:
- Open Google Drive in a browser
- Navigate to your target folder
- Look at the URL: drive.google.com/drive/folders/THIS_IS_YOUR_FOLDER_ID
- Copy just that last part and paste it in the node

3. Configure the Chat Trigger
This workflow is triggered when a user sends a video URL in the n8n chat interface.
- The trigger node receives $json.chatInput; this is the video URL the user pastes
- Make sure your n8n instance has the chat trigger enabled
- No additional setup needed; it works out of the box
4. Customise Clip Settings (Optional)
Open the "🎬 Submit Video to WayinVideo API" node to customise:

| Parameter | Default Value | What It Does |
|---|---|---|
| target_duration | DURATION_30_60 | Clip length (30-60 seconds) |
| limit | 3 | Number of clips to generate |
| resolution | HD_720 | Video quality (720p) |
| ratio | RATIO_9_16 | Vertical format (for Reels/Shorts) |
| enable_caption | true | Auto-captions on/off |
| caption_display | original | Caption language/style |
| cc_style_tpl | temp-7 | Caption design template |
| enable_ai_reframe | true | Auto-reframe to vertical |

To change clip length options:
- DURATION_15_30 → 15 to 30 second clips
- DURATION_30_60 → 30 to 60 second clips (default)
- DURATION_60_90 → 60 to 90 second clips
To change ratio:
- RATIO_9_16 → Vertical (TikTok, Reels, Shorts)
- RATIO_16_9 → Horizontal (YouTube, LinkedIn)
- RATIO_1_1 → Square (Instagram feed)
To change resolution:
- HD_720 → 720p (faster processing)
- FULL_HD_1080 → 1080p (higher quality)

5. Test & Activate
- Open n8n and go to this workflow
- Click the "Chat" button to open the chat interface
- Paste any YouTube video URL and press send
- Watch the workflow run step by step in the execution view
- Check your Google Drive folder: clips should appear within 1-5 minutes
- Once confirmed, toggle the workflow Active at the top ✅

How It Works (Step by Step)

Step 1: Chat Trigger
The user pastes a video URL in the n8n chat. The URL is captured as $json.chatInput and passed to the next node.

Step 2: Submit Video to WayinVideo API
The workflow sends a POST request to the WayinVideo API (/api/v2/clips) with:
- The video URL
- All your clip preferences (duration, ratio, captions, resolution)
- A project name auto-generated with today's date (e.g., Podcast Clips - 2025-01-15)
The API returns a Job ID which is used to track processing status.

Step 3: Wait 30 Seconds
The workflow pauses for 30 seconds to give WayinVideo time to start processing. This avoids hitting the API too early and getting an empty response.

Step 4: Poll for Clip Results
Using the Job ID from Step 2, the workflow calls GET /api/v2/clips/results/{job_id} to check if the clips are ready.

Step 5: Clips Ready? (Smart Loop)
The IF node checks if the clips array in the response is non-empty:
- YES (clips ready) → moves forward to extract clip details
- NO (still processing) → loops back to "Wait 30 Seconds" and tries again
> This smart retry loop runs automatically every 30 seconds until your clips are done. No manual retries needed.

Step 6: Extract Clip Details
The Code node loops through all generated clips and extracts (a sketch of this node follows below):
- title: the AI-generated clip title
- export_link: the direct download URL for the clip
- score: the AI virality/engagement score
- tags: topic tags for the clip
- desc: a short clip description
- begin_ms / end_ms: the clip's timestamps in the original video
Each clip becomes a separate item for parallel processing.

Step 7: Download Clip File
For each clip, the workflow downloads the video file from the export_link URL. The file is stored as binary data (responseFormat: file) ready for upload.

Step 8: Upload to Google Drive
Each downloaded clip is uploaded to your specified Google Drive folder. The file is named using the clip's AI-generated title automatically.
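A minimal sketch of what the Step 6 Code node might look like is below. The clips, title, export_link, and other field names follow the description above but should be verified against the actual WayinVideo response.

```javascript
// Sketch of the "Extract Clip Details" Code node. Field names mirror the
// workflow description; verify them against the real WayinVideo response.
const clips = $input.first().json.clips ?? [];
return clips.map((clip) => ({
  json: {
    title: clip.title,             // AI-generated clip title (used as the Drive file name)
    export_link: clip.export_link, // direct download URL for the rendered clip
    score: clip.score,             // AI virality/engagement score
    tags: clip.tags,
    desc: clip.desc,
    begin_ms: clip.begin_ms,       // clip start within the original video
    end_ms: clip.end_ms,           // clip end within the original video
  },
}));
```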
Key Features
✅ Zero manual editing: AI handles clip selection, captions, and reframing
✅ Smart polling loop: auto-retries every 30 seconds, no timeout issues
✅ Batch processing: handles multiple clips from one video in one run
✅ AI virality scoring: each clip comes with an engagement score
✅ Auto-named files: clips are saved with meaningful AI-generated titles
✅ Flexible formats: supports 9:16, 16:9, and 1:1 ratios
✅ Caption support: multiple caption style templates available
✅ Google Drive ready: organised storage with zero manual uploads

Customisation Options
- Generate more clips per video: change limit from 3 to 5 or 10 in the Submit node.
- Sort clips by score before uploading: add a Sort node after "Extract Clip Details" and sort by score descending; this ensures your best clips are uploaded first.
- Save clip metadata to Google Sheets: add a Google Sheets node after "Extract Clip Details" to log the title, score, tags, and timestamp of each clip for tracking.
- Add a response message back to chat: add a "Respond to Webhook" node at the end to send a confirmation message like: "✅ 3 clips have been uploaded to your Google Drive!"
- Process multiple videos in batch: modify the chat trigger to accept comma-separated URLs and add a Split node to process each video sequentially.

Troubleshooting
Clips not generating / API returns empty:
- Verify your WayinVideo API key is correct in both the Submit and Poll nodes
- Check your WayinVideo account has active credits
- Ensure the video URL is publicly accessible (private YouTube videos won't work)
- Try a shorter video first (under 20 minutes)
Workflow stuck in the polling loop:
- Very long videos (1+ hour) can take 5-10 minutes to process; this is normal
- Check the WayinVideo dashboard to see if your job is still running
- If stuck for more than 15 minutes, check the API status at wayinvideo.com
Google Drive upload failing:
- Re-authenticate your Google Drive OAuth credential
- Verify the folder ID is correct (copy it directly from the URL)
- Check that your Google account has write access to that folder
- Ensure the folder is not inside a Shared Drive (use My Drive for the simplest setup)
Video URL not accepted:
- Make sure you paste only the URL with no extra text
- YouTube Shorts URLs work; use the full URL format: https://www.youtube.com/watch?v=...
- Some region-locked videos may not be accessible
Clips uploading with wrong names:
- The file name comes from clip.title in the Extract node
- If titles look odd, you can add a Code node to clean/rename them before upload

Support
Need help setting this up or want a custom version for your use case?
📧 Email: info@isawow.com
🌐 Website: https://isawow.com
by Ahmad Bukhari
Who is this for?
This workflow is built for n8n admins, automation agencies, solopreneurs, and ops teams running multiple workflows in production who need to know the moment something breaks. If you're manually checking your n8n execution logs every day to catch failures, or worse, finding out about broken workflows days later, this template gives you real-time monitoring with zero effort.

What problem does this solve?
- Failed workflows go unnoticed for hours or days because nobody checks the execution log
- When you do find a failure, you have to dig through execution data to figure out what went wrong
- With no centralized error history, you can't spot patterns or recurring issues
- Alert fatigue from generic monitoring tools that don't tell you why something failed or how to fix it

What this workflow does
This workflow monitors your entire n8n instance for failed executions and handles the full error lifecycle automatically:
- Continuous monitoring: runs every minute on a schedule (or on demand via webhook)
- Smart filtering: only processes failures from the last 5 minutes and excludes its own executions to prevent alert loops
- Automatic error classification: categorizes every failure into one of 7 types: Auth Error, Rate Limit, Network, Data/Config Error, Not Found, Server Error, and Permission
- Severity assignment: tags each error as 🔴 Critical, 🟠 High, or 🟡 Medium
- Suggested fixes: generates an actionable fix suggestion for each error category (e.g., "Re-authenticate credential for: [node name]")
- Google Sheets logging: appends a detailed row to your error log with timestamp, workflow name, error category, severity, message, suggested fix, and retry status
- Color-coded Slack alerts: sends a formatted message to your alerts channel with the workflow name, failed node, error type, error message, suggested fix, and clickable links to the workflow and execution

Setup
Credentials needed
- n8n API key: HTTP Header Auth credential with the header name X-N8N-API-KEY
- Google Sheets: OAuth credentials
- Slack: OAuth credentials
Configuration
1. Create an API key in your n8n instance (Settings → API)
2. Add it as an HTTP Header Auth credential in n8n with the header name X-N8N-API-KEY
3. Open the Get Failures and Get Execution Detail nodes and replace YOUR-N8N-INSTANCE-URL with your actual n8n domain (e.g., n8n.yourcompany.com)
4. Create a Google Sheet with these columns: Timestamp, Workflow Name, Workflow ID, Execution ID, Failed Node, Error Category, Severity, Error Message, Suggested Fix, Retryable, Status, Notes
5. Open the Log to Sheet node and update the spreadsheet ID
6. Open the Slack Alert node and set the channel to your alerts channel (e.g., #n8n-errors)
Test it
Manually trigger the webhook, or intentionally break a test workflow and wait one minute for the scheduled check. You should see a Slack alert and a new row in your Google Sheet.
How to customize this workflow
- Different alerting channel? Replace or duplicate the Slack node to send alerts to Microsoft Teams, Discord, email, or PagerDuty
- Auto-retry? Add a retry branch after the Classify Error node that automatically re-runs retryable executions (Rate Limit, Network, Server Error) via the n8n API
- Different logging? Replace the Google Sheets node with Airtable, Notion, or a database insert for more structured tracking
- Adjust the schedule? Change the Schedule Trigger from every minute to every 5 minutes, 15 minutes, or hourly, depending on your needs
- Add more error categories? Extend the classification logic in the Classify Error code node to handle domain-specific errors (e.g., Stripe payment failures, Shopify API errors); a sketch of such logic follows below
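For reference, a keyword-based mapping of the kind the Classify Error node could use is sketched below. The category names and severities mirror the description above, but the matching rules, field names, and fix texts are illustrative, not the template's exact logic.

```javascript
// Sketch of a keyword-based error classifier. Categories and severities follow
// the description above; the patterns and suggested fixes are illustrative.
const rules = [
  { pattern: /401|unauthorized|invalid.*credential/i, category: 'Auth Error',   severity: '🔴 Critical', fix: 'Re-authenticate the credential for the failed node' },
  { pattern: /403|forbidden|permission/i,             category: 'Permission',   severity: '🔴 Critical', fix: 'Check API scopes and sharing permissions' },
  { pattern: /429|rate limit|too many requests/i,     category: 'Rate Limit',   severity: '🟠 High',     fix: 'Add a Wait node or reduce the schedule frequency' },
  { pattern: /econnrefused|etimedout|network|dns/i,   category: 'Network',      severity: '🟠 High',     fix: 'Retry; verify the endpoint URL and connectivity' },
  { pattern: /404|not found/i,                        category: 'Not Found',    severity: '🟡 Medium',   fix: 'Verify the resource ID or URL used by the node' },
  { pattern: /5\d\d|internal server error/i,          category: 'Server Error', severity: '🟠 High',     fix: 'Retry later; check the provider status page' },
];

return $input.all().map((item) => {
  const message = item.json.errorMessage ?? '';   // assumed field from the execution detail
  const match = rules.find((r) => r.pattern.test(message));
  const result = match ?? { category: 'Data/Config Error', severity: '🟡 Medium', fix: 'Review the node parameters and input data' };
  const retryable = ['Rate Limit', 'Network', 'Server Error'].includes(result.category);
  return { json: { ...item.json, ...result, retryable } };
});
```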