by Jatin Khatri
This template lets you poll multiple Gmail accounts from a single workflow using n8n's credential-aware execution. Instead of creating separate workflows for every inbox, this setup loops through all accounts stored in your data table and runs Gmail operations dynamically using the correct OAuth2 credential. It's built for cases like cold-email systems, multi-client mail monitoring, or anything that needs centralized polling and logging.

### What It Does

Pulls all email account records from the cold_email_accounts table. For every account, the workflow:

- Runs the Gmail "Get Many Messages" node for that account using the "Run Node With Credentials X" community node.
- Reformats each message into a clean JSON object (from, subject, preview text, body, labels, attachments, etc.); a sketch of this step appears after the setup section.
- Saves the messages into the All Emails table using an upsert (avoids duplicates).
- Sends a Discord notification like "15 new emails arrived".

### How to Set It Up

1. Create Gmail OAuth2 credentials in n8n, one for each email account you want to poll.
2. Add those credentials into a Data Table named cold_email_accounts with the columns: cred_id, credentials_name, email, last_polled (datetime).
3. Import this workflow template into n8n.
4. Update:
   - Data Table references (if your names/IDs differ)
   - Discord channel/server IDs
   - Any domain filters inside the Gmail search query
5. Activate the workflow and it will automatically:
   - Poll each Gmail inbox every hour
   - Save all new emails
   - Notify you on Discord
   - Keep everything synced via last_polled
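As a rough illustration of the reformatting step, here is a minimal Code-node sketch, assuming the Gmail node returns standard Gmail API fields (id, snippet, labelIds, payload.headers); the output field names and the commented upstream node name are illustrative, not part of the template.

```javascript
// Hypothetical reshaping step: turn raw Gmail items into the flat object
// that gets upserted into the "All Emails" table.
return items.map((item) => {
  const msg = item.json;
  // Gmail returns headers as an array of { name, value } pairs
  const headers = Object.fromEntries(
    (msg.payload?.headers || []).map((h) => [h.name.toLowerCase(), h.value])
  );
  return {
    json: {
      message_id: msg.id,
      thread_id: msg.threadId,
      from: headers['from'] || '',
      subject: headers['subject'] || '',
      preview: msg.snippet || '',
      body: msg.text || msg.html || '', // depends on the Gmail node's output format
      labels: (msg.labelIds || []).join(','),
      has_attachments: Boolean(msg.payload?.parts?.some((p) => p.filename)),
      // account_email: $('Loop Over Accounts').first().json.email, // hypothetical node name
    },
  };
});
```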
by Jitesh Dugar
Verified Product Return Guide Generator

A comprehensive n8n workflow template for automating e-commerce return processes with fraud prevention and professional document generation.

### Who's It For

- **E-commerce businesses** preventing fraudulent returns
- **Customer service teams** automating return processing
- **Online retailers** needing professional documentation
- **Dropshipping stores** fighting chargeback abuse
- **SMB merchants** reducing manual workload

### How It Works

1. Captures return requests via Webhook from forms/apps
2. Validates customer emails with VerifiEmail to prevent fraud
3. Blocks disposable emails and non-existent domains
4. Generates QR codes for quick processing at drop-off points
5. Creates professional HTML return guides with branding
6. Converts to both PDF (printing) and PNG (email preview)
7. Calculates return deadlines automatically (7-day default)
8. Sends automated emails with download links to customers
9. Tracks return authorization with unique order IDs
10. Provides customer instructions and contact information

Offers: fraud prevention, dual formats, QR integration, automation.

### How to Set Up

1. Connect your return form to the Webhook (POST to https://[your-n8n-url]/webhook/return-guide)
2. Sign up for the VerifiEmail API (100 free verifications/month)
3. Get HtmlCssToPdf and HtmlCssToImage API keys
4. Configure Gmail OAuth2 via Google Cloud Console
5. Set up all credentials in n8n: VerifiEmail, Gmail, PDF/Image APIs
6. Customize HTML template colors and branding in the Code node
7. Test with sample data: {"customer_name": "Test User", "customer_email": "test@gmail.com", "order_id": "ORD123"}

### Requirements

- **n8n instance** (cloud or self-hosted)
- **API credentials**: VerifiEmail (~$0.01/check), HtmlCssToPdf, HtmlCssToImage, Gmail OAuth2
- **Return form** or e-commerce integration to send webhook data
- **Email delivery** capability for customer notifications

### Core Features

- **Email Fraud Detection**: Blocks fake and disposable emails
- **Dual Format Output**: PDF for printing, PNG for previews
- **QR Code Generation**: Quick processing at shipping locations
- **Professional Templates**: Branded HTML with modern styling
- **Automated Deadlines**: 7-day return window calculation
- **Customer Communication**: Plain-text emails with download links

### Use Cases & Applications

- **E-commerce Returns**: Automate the return authorization process
- **Customer Service**: Reduce manual return guide creation
- **Fraud Prevention**: Stop fake return attempts before processing
- **Brand Consistency**: Professional documentation across all returns
- **Operational Efficiency**: Handle high return volumes automatically

### Key Benefits

- **Fraud Reduction**: Email validation prevents 60%+ of fake returns
- **Time Savings**: Eliminates manual return guide creation
- **Professional Image**: Branded documents improve customer experience
- **Cost Control**: Prevents processing costs for invalid returns
- **Scalability**: Handles unlimited return requests automatically

### Customization Options

- Adjust the return deadline in the Set node (default 7 days; a sketch of this calculation appears at the end of this description)
- Modify HTML styling and colors in the Code node
- Change QR code size and format in Set node expressions
- Edit email templates and company branding
- Add tracking integration with shipping APIs
- Customize validation rules for different email types

### Technical Specifications

- **Execution Time**: 15-30 seconds per return request
- **Success Rate**: 95%+ for valid email addresses
- **File Sizes**: PDF 300 KB, PNG 120 KB on average
- **Retention**: Files hosted 30 days on service providers
- **Rate Limits**: Respects all API provider limitations

### Cost Breakdown

- **VerifiEmail**: $0.01 per email verification after free tier
- **PDF Generation**: $0.001 per document
- **Image Generation**: $0.001 per image
- **Gmail**: Free (subject to Google's sending limits)
- **Estimated**: $0.012 per return request after free tiers

### Integration Examples

- **Shopify**: Webhook from a return app to the n8n endpoint
- **WooCommerce**: PHP form submission to the webhook URL
- **Custom Forms**: Direct POST request with customer data
- **Customer Portals**: Integration via REST API calls

### Sample Webhook Data

```json
{
  "customer_name": "Jane Doe",
  "customer_email": "test.user@gmail.com",
  "order_id": "ORD123456",
  "return_reason": "Wrong size",
  "product_name": "Blue Cotton T-Shirt",
  "purchase_date": "2025-01-15"
}
```

### Installation

1. Import the workflow JSON file into your n8n instance
2. Set up the required credentials (see setup instructions above)
3. Activate the workflow
4. Test with sample data to ensure proper configuration

### Important Disclaimers

- **Email validation** accuracy depends on the third-party service
- **Test thoroughly** with your specific use case and volume
- **Monitor API quotas** to avoid service interruptions
- **Backup processes** recommended for critical return periods
- **Compliance**: Ensure adherence to your return policy terms

### Support

For issues with this template:

1. Check n8n execution logs for detailed error messages
2. Verify all API credentials are properly configured
3. Test individual nodes to isolate problems
4. Review the n8n community forums for similar issues

### License

This template is provided as-is for educational and commercial use. Users are responsible for ensuring compliance with all applicable laws and service provider terms of use.
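As a concrete illustration of the automated deadline step, here is a minimal Code-node sketch that computes the 7-day return window from the incoming webhook payload; the input fields match the sample webhook data above, while the output field names (return_requested_at, return_deadline, return_window_days) are illustrative, not the template's exact names.

```javascript
// Sketch: compute the return deadline (default 7 days) from the incoming request.
const RETURN_WINDOW_DAYS = 7; // adjust to match your return policy

const body = $json.body || $json; // webhook payload, e.g. { customer_name, order_id, ... }
const requestedAt = new Date();
const deadline = new Date(requestedAt);
deadline.setDate(deadline.getDate() + RETURN_WINDOW_DAYS);

return [{
  json: {
    ...body,
    return_requested_at: requestedAt.toISOString().slice(0, 10),
    return_deadline: deadline.toISOString().slice(0, 10), // e.g. "2025-01-22"
    return_window_days: RETURN_WINDOW_DAYS,
  },
}];
```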
by Elodie Tasia
Create centralized, structured logs directly from your n8n workflows, using Supabase as your scalable log database. Whether you're debugging a workflow, monitoring execution status, or tracking error events, this template makes it easy to log messages in a consistent, structured format inspired by Log4j2 levels (DEBUG, INFO, WARN, ERROR, FATAL). You'll get a reusable sub-workflow that lets you log any message with optional metadata, tied to a workflow execution and a specific node.

### What this template does

Provides a sub-workflow that inserts log entries into Supabase. Each log entry supports the following fields:

- workflow_name: your n8n workflow identifier
- node_name: the last executed node
- execution_id: the n8n execution ID, for correlation
- log_level: one of DEBUG, INFO, WARN, ERROR, FATAL
- message: the textual log message
- metadata: optional JSON metadata (flexible format)

It comes with examples for different log levels. Simply call the sub-workflow from any step with an Execute Workflow node and pass dynamic parameters.

### Use Cases

- Debug complex workflows without relying on internal n8n logs.
- Catch and trace errors with contextual metadata.
- Integrate logs into external dashboards or monitoring tools via Supabase SQL or APIs.
- Analyze logs by level, time, or workflow.

### Requirements

To use this template, you'll need:

- A Supabase project with:
  - A log_level_type enum
  - A logs table matching the expected structure
- A service role key or Supabase credentials available in n8n.

The table schema and SQL scripts are provided in the template file.

### How to Use This Template

1. Clone the sub-workflow into your n8n instance.
2. Set up Supabase credentials (in the Supabase node).
3. Call the sub-workflow using the Execute Workflow node.
4. Provide input values like:

   ```
   {
     "workflow_name": "sync_crm_to_airtable",
     "execution_id": {{$execution.id}},
     "node_name": "Airtable Insert",
     "log_level": "INFO",
     "message": "New contact pushed to Airtable successfully",
     "metadata": {
       "recordId": "rec123",
       "fields": ["email", "firstName"]
     }
   }
   ```

5. Repeat anywhere you need to log custom events.
by Rivers Colyer
Who's it for: Sales teams and agencies that want to automatically send call summaries to their CRM. This flow is best suited for agencies and businesses that use ZoomInfo Chorus AI to capture and analyze calls, and HubSpot as their main CRM system for managing client data.

How it works / What it does: The flow runs every hour and checks Chorus for new call summaries. If new summaries are found, they are added as notes to the corresponding HubSpot company record. If the company does not exist in HubSpot, the summary is skipped. If the engagement already exists as a note in HubSpot, it is also skipped. In short: every hour, Chorus → HubSpot notes sync (only when unique & matched).

### Workflow Overview

Here's how the logic works, step by step:

1. Triggers: Run every hour (scheduled) or When clicking Execute workflow (manual testing).
2. Fetch from Chorus.AI:
   - Get Chorus Engagement per last day: requests new engagements from the last 24h. Pagination is handled automatically (up to 15 requests).
   - Merge paginated engagements: combines all pages into a single list.
   - Filter items with empty meeting_summary: removes calls without summaries.
3. Loop through engagements: Loop Over Engagements processes each engagement one by one.
4. Find matching HubSpot company:
   - Search Company In HubSpot By Name: looks for an exact match with account_name.
   - If company exists: continues only if a match is found; otherwise → Skip, if company not found.
5. Check for existing notes:
   - Search notes: searches HubSpot for a note containing the engagement_id.
   - If note does not exist: continues only if no duplicate is found; otherwise → Skip, if note already exists.
6. Create a new HubSpot note:
   - Create Note Payload: builds a formatted note body with the call date/time, summary text, a link to the call recording, and the engagement ID (a sketch follows at the end of this description).
   - Create Note: inserts the note into HubSpot, associated with the matched company.
7. Loop continues: after processing, the flow moves back to Loop Over Engagements until all engagements are handled.

### Requirements

- Chorus AI account or an existing API token.
- HubSpot account with the ability to create applications, OR an existing App Token.

### How to set up

1. Generate a Chorus.AI API Token: go to Settings → Personal Settings → API Access → Generate API Token. Save this token in a secure place.
2. In your n8n instance, create new credentials with any name you prefer (e.g., Chorus.AI - API Token):
   - Type: Header Auth
   - Header Name: Authorization
   - Header Value: your generated Chorus API Token
   Save the credentials.
3. Create a HubSpot Application with the following access scopes: crm.objects.contacts.write, crm.objects.companies.read. Save the Access Token and Client Secret securely. The Access Token will be used in this flow.
4. In your n8n instance, create new credentials with any name you prefer (e.g., HubSpot App Token):
   - Type: HubSpot App Token
   - App Token: paste the token from step 3
   Save the credentials.
5. Install this flow in your n8n instance (cloud or self-hosted).
6. After installation, some nodes—Get Chorus Engagement per last day, Search Company In HubSpot By Name, Search notes, Create Note—will show errors. This is expected, as they need to be linked to your new credentials.
7. Open the node Get Chorus Engagement per last day and update the credentials:
   - Authentication: Generic Credential Type
   - Generic Auth Type: Header Auth
   - Header Auth: select the Chorus credentials you created in step 2.
8. Open the node Search Company In HubSpot By Name and update the credentials:
   - Authentication: Predefined Credential Type
   - Credential Type: HubSpot App Token
   - HubSpot App Token: select the credentials from step 4.
9. Repeat step 8 for the node Search notes.
10. Repeat step 8 for the node Create Note.
11. Save your updated flow by clicking Save in the top-right corner.
12. Run the workflow by clicking Execute Workflow to test that all credentials work correctly.
13. Once confirmed, activate your flow by toggling the switch in the top-right corner.

That's it! 🎉 Your flow will now run every hour, syncing new Chorus call summaries to HubSpot—only when the company names match and the engagement isn't already logged.
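To make the Create Note Payload step concrete, here is a minimal Code-node sketch of how the note body could be assembled; the fields meeting_summary, engagement_id, and account_name come from the description above, while date_time, recording_url, and the company id field are assumptions to adapt to your Chorus and HubSpot data.

```javascript
// Sketch of the "Create Note Payload" step: build the HubSpot note body
// from one Chorus engagement. Fields not named in the description
// (date_time, recording_url, .json.id) are assumptions.
const e = $json;

const noteBody = [
  `Chorus call summary — ${e.date_time || 'unknown date'}`,
  '',
  e.meeting_summary,
  '',
  e.recording_url ? `Recording: ${e.recording_url}` : '',
  `Engagement ID: ${e.engagement_id}`,
].filter(Boolean).join('\n');

return [{
  json: {
    companyId: $('Search Company In HubSpot By Name').first().json.id, // company id field name may differ
    noteBody,
    engagementId: e.engagement_id, // stored so the duplicate check can find it later
  },
}];
```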
by automedia
Monitor RSS Feeds, Extract Full Articles, and Save to Supabase

### Overview

This workflow solves a common problem with RSS feeds: they often only provide a short summary or snippet of the full article. This template automatically monitors a list of your favorite blog RSS feeds, filters for new content, visits the article page to extract the entire blog post, and then saves the structured data into a Supabase database. It's designed for content creators, marketers, researchers, and anyone who needs to build a personal knowledge base, conduct competitive analysis, or power a content aggregation system without manual copy-pasting.

### Use Cases

- Content Curation: Automatically gather full-text articles for a newsletter or social media content.
- Personal Knowledge Base: Create a searchable archive of articles from experts in your field.
- Competitive Analysis: Track what competitors are publishing without visiting their blogs every day.
- AI Model Training: Collect a clean, structured dataset of full-text articles to fine-tune an AI model.

### How It Works

1. Scheduled Trigger: The workflow runs automatically on a set schedule (default is once per day).
2. Fetch RSS Feeds: It takes the list of RSS feed URLs you provide in the "blogs to track" node.
3. Filter for New Posts: It checks the publication date of each article and only continues if the article is newer than a specified age (e.g., published within the last 60 days).
4. Extract Full Content: For each new article, the workflow uses the Jina AI Reader URL (https://r.jina.ai/) to scrape the full, clean text from the blog post's webpage. This is a free and powerful way to get past the RSS snippet limit (a sketch of this step follows the setup instructions).
5. Save to Supabase: Finally, it organizes the extracted data and saves it to your chosen Supabase table. The following data is saved by default:
   - title
   - source_url (the link to the original article)
   - content_snippet (the full extracted article text)
   - published_date
   - creator (the author)
   - status (a static value you can set, e.g., "new")
   - content_type (a static value you can set, e.g., "blog")

### Setup Instructions

You can get this template running in about 10-15 minutes.

1. Set Up Your RSS Feed List: Navigate to the "blogs to track" Set node. In the source_identifier field, replace the example URLs with the RSS feed URLs for the blogs you want to monitor. You can add as many as you like. Tip: The best way to find a site's RSS feed is to use a tool like Perplexity or a web-browsing enabled LLM.

   ```javascript
   // Example list of RSS feeds
   ['https://blog.n8n.io/rss', 'https://zapier.com/blog/feeds/latest/']
   ```

2. Configure the Content Age Filter: Go to the "max_content_age_days" Set node. Change the value from the default 60 to your desired timeframe (e.g., 7 to only get articles from the last week).
3. Connect Your Storage Destination: The template uses the "Save Blog Data to Database" Supabase node. First, ensure you have a table in your Supabase project with columns that match the data (e.g., title, source_url, content_snippet, published_date, creator, etc.). In the n8n node, create new credentials using your Supabase Project URL and Service Role Key. Select your table from the list and map the data fields from the workflow to your table columns. Want to use something else? You can easily replace the Supabase node with a Google Sheets, Airtable, or the built-in n8n Table node. Just drag the final connection to your new node and configure the field mapping.
4. Set Your Schedule: Click on the first node, "Schedule Trigger". Adjust the trigger interval to your needs. The default is every day at noon.
5. Activate Workflow: Click the "Save" button, then toggle the workflow to Active. You're all set!
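For clarity, here is a standalone JavaScript sketch (not the workflow's exact node code) of what the age filter and extraction steps do: keep only articles newer than the configured age, then prefix the article URL with the Jina AI Reader endpoint to fetch clean full text. Variable and field names are illustrative.

```javascript
// Standalone sketch of the age-filter + Jina AI Reader extraction logic.
const MAX_CONTENT_AGE_DAYS = 60; // mirrors the "max_content_age_days" Set node

async function extractArticle(rssItem) {
  const published = new Date(rssItem.pubDate);
  const ageDays = (Date.now() - published.getTime()) / (1000 * 60 * 60 * 24);
  if (ageDays > MAX_CONTENT_AGE_DAYS) return null; // too old, skip

  // Jina AI Reader: prefix the original URL to get clean, full-text content back
  const response = await fetch(`https://r.jina.ai/${rssItem.link}`);
  const fullText = await response.text();

  return {
    title: rssItem.title,
    source_url: rssItem.link,
    content_snippet: fullText, // full extracted article text
    published_date: published.toISOString(),
    creator: rssItem.creator || null,
    status: 'new',
    content_type: 'blog',
  };
}
```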
by Stephan Koning
🌊 What it Does

This workflow automatically classifies uploaded files (PDFs or images) as floorplans or non-floorplans. It filters out junk files, then analyzes valid floorplans to extract room sizes and measurements.

👥 Who it's For

Built for real estate platforms, property managers, and automation builders who need a trustworthy way to detect invalid uploads while quickly turning true floorplans into structured, reusable data.

⚙️ How it Works

1. User uploads a file (PDF, JPG, PNG, etc.).
2. The workflow routes the file based on type for specialized processing.
3. A two-layer quality check is applied using heuristics and AI classification.
4. A confidence score determines if the file is a valid floorplan.
5. Valid floorplans are passed to a powerful OCR/AI for deep analysis.
6. Results are returned as JSON and a user-friendly HTML table.

🧠 The Technology Behind the Demo

This MVP is a glimpse into a more advanced commercial system. It runs on a custom n8n workflow that leverages Mistral AI's latest OCR technology. Here's what makes it powerful:

- Structured Data Extraction: The AI is forced to return data in a clean, predictable JSON Schema. This isn't just text scraping; it's a reliable data pipeline.
- Intelligent Data Enrichment: The workflow doesn't just extract data—it enriches it. A custom script automatically calculates crucial metrics like wall surface area from the floor dimensions, even using fallback estimates if needed (an illustrative sketch appears at the end of this description).
- Automated Aggregation: It goes beyond individual rooms by automatically calculating totals per floor level and per room type, providing immediate, actionable insights.

While this demo shows the core classification and measurement (Step 1), the full commercial version includes Steps 2 & 3 (Automated Offer Generation), currently in use by a client in the construction industry. Test the Live MVP.

📋 Requirements

- Jigsaw Stack API Key
- n8n Instance
- Webhook Endpoint

🎨 Customization

Adjust thresholds, fine-tune heuristics, or swap OCR providers to better match your business needs and downstream integrations.
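To illustrate the enrichment step, here is a minimal sketch of how wall surface area could be derived from extracted floor dimensions; the 2.4 m fallback ceiling height and all field names are assumptions for illustration, not the commercial implementation.

```javascript
// Illustrative enrichment: estimate wall surface area for one room from its
// extracted floor dimensions. The fallback ceiling height is an assumption.
const DEFAULT_CEILING_HEIGHT_M = 2.4; // used when the floorplan gives no height

function enrichRoom(room) {
  const { length_m, width_m, ceiling_height_m } = room;
  const height = ceiling_height_m || DEFAULT_CEILING_HEIGHT_M;

  const floorArea = length_m * width_m;               // m²
  const wallArea = 2 * (length_m + width_m) * height; // perimeter × height, ignoring openings

  return {
    ...room,
    floor_area_m2: Math.round(floorArea * 100) / 100,
    wall_area_m2: Math.round(wallArea * 100) / 100,
    wall_area_estimated: !ceiling_height_m, // true when the fallback height was used
  };
}

// Example: a 4 m × 3 m room with unknown ceiling height
console.log(enrichRoom({ name: 'Bedroom', length_m: 4, width_m: 3 }));
// → floor_area_m2: 12, wall_area_m2: 33.6, wall_area_estimated: true
```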
by Sean Spaniel
Predict Housing Prices with a Neural Network

This n8n template demonstrates how a simple Multi-Layer Perceptron (MLP) neural network can predict housing prices. The prediction is based on four key features, processed through a three-layer model.

- Input Layer: Receives the initial data via a webhook that accepts four query parameters.
- Hidden Layer: Composed of two neurons. Each neuron calculates a weighted sum of the inputs, adds a bias, and applies the ReLU activation function.
- Output Layer: Contains one neuron that calculates the weighted sum of the hidden layer's outputs, adds its bias, and returns the final price prediction.

### Setup

This template works out of the box and requires no special configuration or prerequisites. Just import the workflow to get started.

### How to Use

Trigger this workflow by sending a GET request to the webhook endpoint. Include the house features as query parameters in the URL.

Endpoint: /webhook/regression/house/price

Query Parameters:
- square_feet: The total square footage of the house.
- number_rooms: The total number of rooms.
- age_in_years: The age of the house in years.
- distance_to_city_in_km: The distance to the nearest city center in kilometers.

### Example

Here's an example curl request for a 1,500 sq ft, 3-room house that is 10 years old and 5 km from the city.

Request:

```
curl "https://your-n8n-instance.com/webhook/regression/house/price?square_feet=1500&number_rooms=3&age_in_years=10&distance_to_city_in_km=5"
```

Response:

```json
{
  "price": 53095.832123960805
}
```
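The three-layer model described above boils down to a few lines of arithmetic. Here is a minimal JavaScript sketch of the forward pass; the weights and biases are placeholders for illustration, not the template's trained values.

```javascript
// Forward pass of the described MLP: 4 inputs → 2 hidden neurons (ReLU) → 1 output.
const relu = (x) => Math.max(0, x);
const dot = (a, b) => a.reduce((sum, ai, i) => sum + ai * b[i], 0);

function predictPrice(features, model) {
  // features = [square_feet, number_rooms, age_in_years, distance_to_city_in_km]
  const hidden = model.hidden.map((n) => relu(dot(n.weights, features) + n.bias));
  return dot(model.output.weights, hidden) + model.output.bias;
}

// Placeholder weights and biases (the workflow ships its own values).
const model = {
  hidden: [
    { weights: [0.03, 1.2, -0.5, -0.8], bias: 1.0 },
    { weights: [0.01, 0.7, -0.2, -0.4], bias: 0.5 },
  ],
  output: { weights: [1.5, 2.0], bias: 5000 },
};

console.log(predictPrice([1500, 3, 10, 5], model)); // placeholder weights → placeholder price
```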
by David Olusola
🎓 n8n Learning Hub — AI-Powered YouTube Educator Directory

📋 Overview

This workflow demonstrates how to use n8n Data Tables to create a searchable database of educational YouTube content. Users can search for videos by topic (e.g., "voice", "scraping", "lead gen") and receive formatted recommendations from top n8n educators.

What This Workflow Does:
- **Receives search queries** via webhook (e.g., topic: "voice agents")
- **Processes keywords** using JavaScript to normalize search terms
- **Queries a Data Table** to find matching educational videos
- **Returns formatted results** with video titles, educators, difficulty levels, and links
- **Populates the database** with a one-time setup workflow

🎯 Key Features

✅ Data Tables Introduction - Learn how to store and query structured data
✅ Webhook Integration - Accept external requests and return JSON responses
✅ Keyword Processing - Simple text normalization and keyword matching
✅ Batch Operations - Use Split in Batches to populate tables efficiently
✅ Frontend Ready - Easy to connect with Lovable, Replit, or custom UIs

🛠️ Setup Guide

Step 1: Import the Workflow
1. Copy the workflow JSON
2. In n8n, go to Workflows → Import from File or Import from URL
3. Paste the JSON and click Import

Step 2: Create the Data Table
The workflow uses a Data Table called n8n_Educator_Videos with these columns:
- **Educator** (text) - Creator name
- **video_title** (text) - Video title
- **Difficulty** (text) - Beginner/Intermediate/Advanced
- **YouTubeLink** (text) - Full YouTube URL
- **Description** (text) - Video summary for search matching

To create it:
1. Go to Data Tables in your n8n instance
2. Click + Create Data Table
3. Name it n8n_Educator_Videos
4. Add the 5 columns listed above

Step 3: Populate the Database
1. Click on the "When clicking 'Execute workflow'" node (bottom branch)
2. Click Execute Node to run the setup
3. This will insert all 9 educational videos into your Data Table

Step 4: Activate the Webhook
1. Click on the Webhook node (top branch)
2. Copy the Production URL (looks like: https://your-n8n.app.n8n.cloud/webhook/1799531d-...)
3. Click Activate on the workflow
4. Test it with a POST request:

```
curl -X POST https://your-n8n.app.n8n.cloud/webhook/YOUR-WEBHOOK-ID \
  -H "Content-Type: application/json" \
  -d '{"topic": "voice"}'
```

🔍 How the Search Works

Keyword Processing Logic
The JavaScript node normalizes search queries:
- **"voice", "audio", "talk"** → Matches voice agent tutorials
- **"lead", "lead gen"** → Matches lead generation content
- **"scrape", "data", "scraping"** → Matches web scraping tutorials

The Data Table query uses LIKE matching on the Description field, so partial matches work great. (A sketch of this normalization step appears after the Troubleshooting section.)

Example Queries:
```
{"topic": "voice"}     // Returns Eleven Labs Voice Agent
{"topic": "scraping"}  // Returns 2 scraping tutorials
{"topic": "avatar"}    // Returns social media AI avatar videos
{"topic": "advanced"}  // Returns all advanced-level content
```

🎨 Building a Frontend with Lovable or Replit

Option 1: Lovable (lovable.dev)
Lovable is an AI-powered frontend builder perfect for quick prototypes.

Prompt for Lovable:
Create a modern search interface for an n8n YouTube learning hub:
- Title: "🎓 n8n Learning Hub"
- Search bar with placeholder "Search for topics: voice, scraping, RAG..."
- Submit button that POSTs to webhook: [YOUR_WEBHOOK_URL]
- Display results as cards showing:
  - 🎥 Video Title (bold)
  - 👤 Educator name
  - 🧩 Difficulty badge (color-coded)
  - 🔗 YouTube link button
  - 📝 Description
- Design: Dark mode, modern glassmorphism style, responsive grid layout

Implementation Steps:
1. Go to lovable.dev and start a new project
2. Paste the prompt above
3. Replace [YOUR_WEBHOOK_URL] with your actual webhook
4. Export the code or deploy directly

Option 2: Replit (replit.com)
Use Replit's HTML/CSS/JS template for more control.

HTML Structure:
```html
<!DOCTYPE html>
<html>
<head>
  <title>n8n Learning Hub</title>
  <style>
    body { font-family: Arial; max-width: 900px; margin: 50px auto; }
    #search { padding: 10px; width: 70%; font-size: 16px; }
    button { padding: 10px 20px; font-size: 16px; }
    .video-card { border: 1px solid #ddd; padding: 20px; margin: 20px 0; }
  </style>
</head>
<body>
  <h1>🎓 n8n Learning Hub</h1>
  <input id="search" placeholder="Search: voice, scraping, RAG..." />
  <button onclick="searchVideos()">Search</button>
  <div id="results"></div>
  <script>
    async function searchVideos() {
      const topic = document.getElementById('search').value;
      const response = await fetch('YOUR_WEBHOOK_URL', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({topic})
      });
      const data = await response.json();
      document.getElementById('results').innerHTML = data.Message || 'No results';
    }
  </script>
</body>
</html>
```

Option 3: Base44 (No-Code Tool)
If using Base44 or similar no-code tools:
1. Create a Form with a text input (name: topic)
2. Add a Submit Action → HTTP Request
3. Set Method: POST, URL: your webhook
4. Map form data: {"topic": "{{topic}}"}
5. Display the response in a Text Block using {{response.Message}}

📊 Understanding Data Tables

Why Data Tables?
- **Persistent Storage** - Data survives workflow restarts
- **Queryable** - Use conditions (equals, like, greater than) to filter
- **Scalable** - Handle thousands of records efficiently
- **No External DB** - Everything stays within n8n

Common Operations:
- Insert Row - Add new records (used in the setup branch)
- Get Row(s) - Query with filters (used in the search branch)
- Update Row - Modify existing records by ID
- Delete Row - Remove records

Best Practices:
- Use descriptive column names
- Include a searchable text field (like Description)
- Keep data normalized (avoid duplicate entries)
- Use the "Split in Batches" node for bulk operations

🚀 Extending This Workflow

Ideas to Try:
- Add More Educators - Expand the video database
- Category Filtering - Add a Category column (Automation, AI, Scraping)
- Difficulty Sorting - Let users filter by skill level
- Vote System - Add upvote/downvote columns
- Analytics - Track which topics are searched most
- Admin Panel - Build a form to add new videos via webhook

Advanced Features:
- **AI-Powered Search** - Use OpenAI embeddings for semantic search
- **Thumbnail Scraping** - Fetch YouTube thumbnails via API
- **Auto-Updates** - Periodically check for new videos from educators
- **Personalization** - Track user preferences in a separate table

🐛 Troubleshooting

- Problem: Webhook returns empty results
  Solution: Check that the Description field contains searchable keywords
- Problem: Database is empty
  Solution: Run the "When clicking 'Execute workflow'" branch to populate data
- Problem: Frontend not connecting
  Solution: Verify the webhook is activated and the URL is correct (use Test mode first)
- Problem: Search too broad/narrow
  Solution: Adjust the keyword logic in the "Load Video DB" node
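For reference, here is a minimal Code-node sketch of the keyword-normalization logic described under "How the Search Works"; the synonym map and field names are illustrative and should mirror whatever is configured in the "Load Video DB" node.

```javascript
// Illustrative keyword normalization: map a free-text topic to a search term
// used in the Data Table's LIKE filter on the Description column.
const SYNONYMS = {
  voice: ['voice', 'audio', 'talk'],
  'lead generation': ['lead', 'lead gen', 'leads'],
  scraping: ['scrape', 'scraping', 'data'],
};

function normalizeTopic(rawTopic) {
  const topic = (rawTopic || '').toLowerCase().trim();
  for (const [canonical, variants] of Object.entries(SYNONYMS)) {
    if (variants.some((v) => topic.includes(v))) return canonical;
  }
  return topic; // fall back to the raw query for LIKE matching
}

return [{ json: { searchTerm: normalizeTopic($json.body?.topic) } }];
```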
📚 Learning Resources

Want to learn more about the concepts in this workflow?

- **Data Tables:** n8n Data Tables Documentation
- **Webhooks:** Webhook Node Guide
- **JavaScript in n8n:** "Every N8N JavaScript Function Explained" (see database)

🎓 What You Learned

By completing this workflow, you now understand:
✅ How to create and populate Data Tables
✅ How to query tables with conditional filters
✅ How to build webhook-based APIs in n8n
✅ How to process and normalize user input
✅ How to format data for frontend consumption
✅ How to connect n8n with external UIs

Happy Learning! 🚀 Built with ❤️ using n8n Data Tables
by System Admin
No description available
by as311
Regional Prospecting for Registered Companies in Germany

Find and qualify registered companies in specific regions using the Implisense Search API (Handelsregister). This API provides access to all officially registered companies in Germany (about 2.5 million).

Input Parameters:
- query: Search terms (e.g., "software OR it")
- regionCode: ZIP/postal code region (e.g., "de-10")
- industryCode: NACE industry code (e.g., "J62")
- pageSize: Max results (1-1000)

Quality Levels:
- **High:** Score ≥ 15 (active, website, full address)
- **Medium:** Score < 15

### How it works

- Phase 1: Init
- Phase 2: Search
- Phase 3: Vetting

### Setup steps

1. Configure Credentials: Set up RapidAPI credentials. Create an account on RapidAPI (free tier available) and insert your RapidAPI x-rapidapi-key as the password.
2. Configure Search Parameters: see the input parameters above.
3. Connect CRM/Database: after the "Merge & Log Results" node, add an HTTP Request node for a REST API, a database node for direct insertion, or a CRM-specific integration node.
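To make the search phase concrete, here is a hedged JavaScript sketch of the kind of request the workflow sends; the host and path below are placeholders (copy the real ones from the Implisense Search API listing on RapidAPI), while the parameter names come from the description above.

```javascript
// Hedged sketch of the Phase 2 search request. Replace RAPIDAPI_HOST and the
// path with the values shown on the Implisense Search API page on RapidAPI.
const RAPIDAPI_HOST = 'implisense-search.example.rapidapi.com'; // placeholder host
const RAPIDAPI_KEY = process.env.RAPIDAPI_KEY;                  // your RapidAPI key

async function searchCompanies() {
  const params = {
    query: 'software OR it', // search terms
    regionCode: 'de-10',     // ZIP/postal code region
    industryCode: 'J62',     // NACE industry code
    pageSize: 100,           // max results (1-1000)
  };

  const response = await fetch(`https://${RAPIDAPI_HOST}/search`, { // placeholder path
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-rapidapi-key': RAPIDAPI_KEY,
      'x-rapidapi-host': RAPIDAPI_HOST,
    },
    body: JSON.stringify(params),
  });
  return response.json();
}

searchCompanies().then((companies) => console.log(companies));
```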
by Vigh Sandor
### Workflow Overview

This n8n workflow provides automated monitoring of YouTube channels and sends real-time notifications to RocketChat when new videos are published. It supports all YouTube URL formats, uses dual-source video fetching for reliability, and intelligently filters videos to prevent duplicate notifications.

### Key Features

- **Multi-Format URL Support**: Handles @handle, /user/, and /channel/ URL formats
- **Dual Fetching Strategy**: Uses both RSS feeds and HTML scraping for maximum reliability
- **Smart Filtering**: Only notifies about videos published in the last hour
- **Shorts Exclusion**: Automatically excludes YouTube Shorts from notifications
- **Rate Limiting**: 30-second delay between notifications to prevent spam
- **Batch Processing**: Processes multiple channels sequentially
- **Error Handling**: Continues execution even if one channel fails
- **Customizable Schedule**: Default hourly checks, adjustable as needed

### Use Cases

Monitor competitor channels, track favorite creators, aggregate content from multiple channels, build content curation workflows, stay updated on educational channels, monitor brand mentions, track news channels for breaking updates.

### Setup Instructions

Prerequisites:
- n8n instance (self-hosted or cloud), version 1.0+
- RocketChat server with admin or bot access
- RocketChat API credentials
- Internet connectivity for YouTube access

Step 1: Obtain RocketChat Credentials

Create a bot user:
1. Log in to RocketChat as administrator
2. Navigate to Administration → Users → New
3. Fill in the details: Name (YouTube Monitor Bot), Username (youtube-bot), Email, Password, Roles (bot)
4. Click Save

Get API credentials:
1. Log in as the bot user
2. Navigate to My Account → Personal Access Tokens
3. Click Generate New Token
4. Enter the token name: n8n YouTube Monitor
5. Copy the generated token immediately
6. Note the User ID from the account settings

Step 2: Configure RocketChat in n8n
1. Open the n8n web interface
2. Navigate to the Credentials section
3. Click Add Credential → RocketChat API
4. Fill in:
   - Domain: your RocketChat URL (e.g., https://rocket.yourdomain.com)
   - User: bot username (e.g., youtube-bot)
   - Password: bot password or personal access token
5. Click Save and test the connection

Step 3: Prepare the RocketChat Channel
1. Create a new channel in RocketChat: youtube-notifications
2. Add the bot user to the channel: click the channel menu → Members → Add Users, search for the bot username, and click Add

Step 4: Collect YouTube Channel URLs
- Handle format: https://www.youtube.com/@ChannelHandle
- User format: https://www.youtube.com/user/Username
- Channel ID format: https://www.youtube.com/channel/UCxxxxxxxxxx

All formats are supported. Find the channel ID in the page source or use a browser extension.

Step 5: Import the Workflow
1. Copy the workflow JSON
2. In n8n: Workflows → Import from File/URL
3. Paste the JSON or upload the file
4. Click Import

Step 6: Configure the Channel List
1. Locate the Channel List node
2. Enter YouTube URLs in the channel_urls field, one per line:
   https://www.youtube.com/@NoCopyrightSounds/videos
   https://www.youtube.com/@chillnation/videos
3. Include the /videos suffix, or the workflow adds it automatically

Step 7: Configure the RocketChat Notification
1. Locate the RocketChat Notification node
2. Replace YOUR-CHANNEL-NAME with your channel name
3. Select the RocketChat credential
4. Customize the message template if needed

Step 8: Configure the Schedule (Optional)
Default: every 1 hour. To change it, open the Hourly Check node and modify the interval (Minutes, Hours, Days).

Recommended intervals:
- Every hour (default): good balance
- Every 30 minutes: more frequent
- Every 2 hours: less frequent
- Avoid intervals shorter than 15 minutes

Important: YouTube RSS updates every 15 minutes. Hourly checks match the 1-hour filter window.

Step 9: Test the Workflow
1. Click the Execute Workflow button
2. Monitor the execution (green = success, red = errors)
3. Check node outputs:
   - Channel List: shows URLs
   - Filter New Videos: shows found videos (may be empty)
   - RocketChat Notification: shows sent messages
4. Verify notifications in RocketChat

No notifications is normal if no videos were posted in the last hour.

Step 10: Activate the Workflow
1. Toggle the Active switch in the top-right
2. The workflow runs on schedule automatically
3. Monitor the RocketChat channel for notifications

### How to Use

Understanding Workflow Execution

Default schedule: hourly
- Executes every hour
- Checks all channels
- Processes videos from the last 60 minutes
- Prevents duplicate notifications

Execution duration: 1-5 minutes for 10 channels. Rate limiting adds 30 seconds per video.

Adding New Channels
1. Open the Channel List node
2. Add the new URL on a new line
3. Save (Ctrl+S)
4. The change takes effect on the next run

Removing Channels
1. Open the Channel List node
2. Delete the line, or comment it out with # at the start
3. Save your changes

Changing Check Frequency
1. Open the Hourly Check node
2. Modify the interval
3. If changing from hourly, update the Filter New Videos node: find cutoffDate.setHours(cutoffDate.getHours() - 1); and change -1 to match the new interval (-2 for 2 hours, -6 for 6 hours)

Important: the time window should match or exceed the check interval.

Understanding Video Sources

RSS Feed (primary):
- Official YouTube RSS
- Fast and reliable
- 5-15 minute delay for new videos
- Structured data

HTML Scraping (fallback):
- Immediate results
- Works when RSS is unavailable
- More fragile

Benefits of the dual approach:
- Reliability: if one fails, the other works
- Speed: scraping catches videos immediately
- Completeness: RSS ensures nothing is missed
- Videos are deduplicated automatically

Excluding YouTube Shorts

Shorts are filtered by checking the video URL for a /shorts/ path. To include Shorts:
1. Open the Filter New Videos node
2. Find: if (videoUrl && !videoUrl.includes('/shorts/'))
3. Remove the !videoUrl.includes('/shorts/') check

(A combined sketch of the time-window and Shorts filter appears under Technical Reference.)

Rate Limiting

A 30-second wait between notifications:
- Prevents flooding RocketChat
- Allows users to read each notification
- Avoids rate limits

Impact: 5 videos = 2.5 minutes, 10 videos = 5 minutes.
To adjust: open the Wait 30 sec node and change the amount field (15-60 seconds recommended).

Handling Multiple Channels

Channels are processed sequentially:
- Prevents overwhelming the workflow
- Ensures reliable execution
- One failed channel doesn't stop the others

Recommended: 20-50 channels per workflow.

### FAQ

Q: How many channels can I monitor?
A: Recommended 20-50 per workflow. Split into multiple workflows for more.

Q: Why use both RSS and scraping?
A: RSS is reliable but delayed. Scraping is immediate but fragile. Using both ensures no videos are missed.

Q: Can I exclude specific video types?
A: Yes, add filtering logic in the Filter New Videos node. Shorts are already excluded.

Q: Will this get my IP blocked?
A: Unlikely with hourly checks. Don't check more often than every 15 minutes.

Q: How do I prevent duplicate notifications?
A: Ensure the time window matches the schedule interval. Already implemented.

Q: What if a channel changes its handle?
A: Update the URL in the Channel List node. YouTube maintains redirects.

Q: Can I monitor playlists?
A: Not directly. This would need modifications for playlist RSS feeds.

### Technical Reference

YouTube URL formats:
- Handle: https://www.youtube.com/@handlename
- User: https://www.youtube.com/user/username
- Channel ID: https://www.youtube.com/channel/UCxxxxxx

RSS feed format:
https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxx
Contains up to 15 recent videos with title, link, publish date, and thumbnail.
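Putting the two checks described above together, a minimal sketch of the "Filter New Videos" logic looks roughly like this; the input field names (publishedAt, videoUrl) are illustrative and may differ from the actual node code.

```javascript
// Sketch of the filtering step: keep only videos from the last hour and drop Shorts.
const cutoffDate = new Date();
cutoffDate.setHours(cutoffDate.getHours() - 1); // match your check interval (-2 for 2 hours, etc.)

const newVideos = items.filter((item) => {
  const { publishedAt, videoUrl } = item.json; // field names are illustrative
  const isRecent = new Date(publishedAt) > cutoffDate;
  const isShort = videoUrl && videoUrl.includes('/shorts/');
  return isRecent && videoUrl && !isShort;
});

return newVideos;
```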
APIs Used: YouTube RSS (public), RocketChat API (requires auth)

License: Open for modification and commercial use
by ZeroBounce
ZeroBounce Email Validation, A.I. Scoring & Sending with Gmail

This workflow automates the process of validating, scoring, and emailing leads from a Google Sheet. It ensures you only send emails to high-quality contacts, protecting your sender reputation.

🚀 How it Works

1. Trigger: Watches a Google Sheet for new or updated rows (contacts).
2. Validate: Uses ZeroBounce to check if the email address is valid. If invalid, updates the sheet with the reason and marks "Emailed" as false.
3. Score: If valid, uses ZeroBounce A.I. Scoring to grade the lead quality (0-10). If the score is low or medium (<9), updates the sheet with the score and marks "Emailed" as false.
4. Send: If the score is high (9-10), sends an email via Gmail.
5. Update: Writes the final status back to the Google Sheet, preventing duplicate sends.

📋 Setup Requirements

- **Google Sheet:** A sheet with columns for Email, Validated, Scored, Score, Emailed, Reason, etc.
- **ZeroBounce Account:** API key for validation and scoring.
- **Gmail Account:** Connected via OAuth2 to send emails.

💡 Key Features

- **Cost Efficient:** Checks existing Validated and Scored columns to avoid re-processing contacts.
- **Risk Protection:** Filters out valid but low-quality leads (e.g., catch-alls or low scores).
- **Status Tracking:** Keeps your Google Sheet updated with rich data (Sub-status, Domain Age, etc.).
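For illustration, here is a minimal Code-node sketch of the send/skip decision described above; the validity check, the 0-10 score scale, and the threshold of 9 follow the description, while the input field names (validationStatus, aiScore) are assumptions to adapt to your sheet and node outputs.

```javascript
// Sketch of the gating logic: only contacts that are valid AND score 9-10 get emailed.
const { validationStatus, aiScore } = $json; // illustrative field names

const isValid = validationStatus === 'valid';
const isHighQuality = Number(aiScore) >= 9; // 0-10 scale from ZeroBounce A.I. Scoring

return [{
  json: {
    ...$json,
    Emailed: isValid && isHighQuality,
    Reason: !isValid ? 'invalid email' : (!isHighQuality ? 'low/medium score' : ''),
  },
}];
```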