by Oriol Seguí
Secret Santa (Amigo Invisible) Automation 🎁

This workflow fully automates a Secret Santa (Amic Invisible) event — perfect for friends, families, or office teams who want to make their gift exchange fun, fair, and completely private. It takes a simple list of participant names and emails, then automatically pairs everyone using a non-repeating random assignment system (a derangement algorithm), meaning no one is ever assigned to themselves and every participant both gives and receives a gift exactly once.

Once the pairs are generated, the workflow sends each participant a beautifully formatted email revealing only their assigned recipient. After each email is sent, it is automatically deleted from the "Sent" folder to ensure the organizer cannot see or trace who got whom, keeping the surprise intact.

At the end of the process, the workflow generates a numerical summary like "1 sent to 2, 2 sent to 3…", allowing the organizer to verify that everything worked correctly without revealing the actual names or emails of the participants.

This automation combines creativity, privacy, and transparency — making it perfect for anyone who wants to manage a Secret Santa game with ease, fairness, and a touch of automation magic. Created by oxsr11 — explore more original and innovative workflows on my n8n profile.
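The template does not publish its pairing code, but the non-repeating assignment it describes can be sketched in a few lines of JavaScript, as it might appear in an n8n Code node: shuffle the participants, then chain the shuffled list into a single gift-giving cycle, so nobody draws themselves and everyone gives and receives exactly once. This is one way to produce a derangement; the template's actual implementation may differ.

```javascript
// Sketch of a single-cycle derangement for Secret Santa pairing.
function assignSecretSanta(participants) {
  if (participants.length < 2) throw new Error('Need at least 2 participants');
  const order = [...participants];
  // Fisher–Yates shuffle for a uniformly random ordering.
  for (let i = order.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [order[i], order[j]] = [order[j], order[i]];
  }
  // Chain the shuffled list into one cycle: each person gives to the next,
  // and the last gives to the first. No one can be matched with themselves.
  return order.map((giver, i) => ({
    giver,
    receiver: order[(i + 1) % order.length],
  }));
}

const pairs = assignSecretSanta(['Anna', 'Biel', 'Carla', 'David']);
```

Because the pairing is one closed cycle, the anonymized summary ("1 sent to 2, 2 sent to 3…") always visits every participant exactly once.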
by Jitesh Dugar
Verified Product Return Guide Generator

A comprehensive n8n workflow template for automating e-commerce return processes with fraud prevention and professional document generation.

Who's It For

- **E-commerce businesses** preventing fraudulent returns
- **Customer service teams** automating return processing
- **Online retailers** needing professional documentation
- **Dropshipping stores** fighting chargeback abuse
- **SMB merchants** reducing manual workload

How It Works

- Captures return requests via Webhook from forms/apps
- Validates customer emails with VerifiEmail to prevent fraud
- Blocks disposable emails and non-existent domains
- Generates QR codes for quick processing at drop-off points
- Creates professional HTML return guides with branding
- Converts to both PDF (printing) and PNG (email preview)
- Calculates return deadlines automatically (7-day default)
- Sends automated emails with download links to customers
- Tracks return authorization with unique order IDs
- Provides customer instructions and contact information

Offers: fraud prevention, dual formats, QR integration, automation.

How to Set Up

1. Connect your return form to the Webhook (POST to https://[your-n8n-url]/webhook/return-guide)
2. Sign up for the VerifiEmail API (100 free verifications/month)
3. Get HtmlCssToPdf and HtmlCssToImage API keys
4. Configure Gmail OAuth2 via the Google Cloud Console
5. Set up all credentials in n8n: VerifiEmail, Gmail, PDF/Image APIs
6. Customize the HTML template colors and branding in the Code node
7. Test with sample data: {"customer_name": "Test User", "customer_email": "test@gmail.com", "order_id": "ORD123"}

Requirements

- **n8n instance** (cloud or self-hosted)
- **API credentials**: VerifiEmail (~$0.01/check), HtmlCssToPdf, HtmlCssToImage, Gmail OAuth2
- **Return form** or e-commerce integration to send webhook data
- **Email delivery** capability for customer notifications

Core Features

- **Email Fraud Detection**: blocks fake and disposable emails
- **Dual Format Output**: PDF for printing, PNG for previews
- **QR Code Generation**: quick processing at shipping locations
- **Professional Templates**: branded HTML with modern styling
- **Automated Deadlines**: 7-day return window calculation
- **Customer Communication**: plain-text emails with download links

Use Cases & Applications

- **E-commerce Returns**: automate the return authorization process
- **Customer Service**: reduce manual return guide creation
- **Fraud Prevention**: stop fake return attempts before processing
- **Brand Consistency**: professional documentation across all returns
- **Operational Efficiency**: handle high return volumes automatically

Key Benefits

- **Fraud Reduction**: email validation prevents 60%+ of fake returns
- **Time Savings**: eliminates manual return guide creation
- **Professional Image**: branded documents improve customer experience
- **Cost Control**: prevents processing costs for invalid returns
- **Scalability**: handles unlimited return requests automatically

Customization Options

- Adjust the return deadline in the Set node (default 7 days)
- Modify HTML styling and colors in the Code node
- Change QR code size and format in the Set node expressions
- Edit email templates and company branding
- Add tracking integration with shipping APIs
- Customize validation rules for different email types

Technical Specifications

- **Execution Time**: 15-30 seconds per return request
- **Success Rate**: 95%+ for valid email addresses
- **File Sizes**: PDF ~300 KB, PNG ~120 KB on average
- **Retention**: files hosted 30 days on the service providers
- **Rate Limits**: respects all API provider limitations

Cost Breakdown

- **VerifiEmail**: $0.01 per email verification after the free tier
- **PDF Generation**: $0.001 per document
- **Image Generation**: $0.001 per image
- **Gmail**: free (subject to Google's sending limits)
- **Estimated**: $0.012 per return request after free tiers

Integration Examples

- **Shopify**: webhook from a return app to the n8n endpoint
- **WooCommerce**: PHP form submission to the webhook URL
- **Custom Forms**: direct POST request with customer data
- **Customer Portals**: integration via REST API calls

Sample Webhook Data

{
  "customer_name": "Jane Doe",
  "customer_email": "test.user@gmail.com",
  "order_id": "ORD123456",
  "return_reason": "Wrong size",
  "product_name": "Blue Cotton T-Shirt",
  "purchase_date": "2025-01-15"
}

Installation

1. Import the workflow JSON file into your n8n instance
2. Set up the required credentials (see setup instructions above)
3. Activate the workflow
4. Test with sample data to ensure proper configuration

Important Disclaimers

- **Email validation** accuracy depends on the third-party service
- **Test thoroughly** with your specific use case and volume
- **Monitor API quotas** to avoid service interruptions
- **Backup processes** are recommended for critical return periods
- **Compliance**: ensure adherence to your return policy terms

Support

For issues with this template:

- Check the n8n execution logs for detailed error messages
- Verify all API credentials are properly configured
- Test individual nodes to isolate problems
- Review the n8n community forums for similar issues

License

This template is provided as-is for educational and commercial use. Users are responsible for ensuring compliance with all applicable laws and service provider terms of use.
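The automated deadline step described above can be illustrated with a small Code-node sketch. The field names come from the sample webhook data; `requested_at` is an assumed extra field, and the real template's expressions may differ.

```javascript
// Sketch: compute a return-by date a configurable number of days out
// (default 7, matching the template's default return window).
function buildReturnAuthorization(payload, windowDays = 7) {
  const requestedAt = new Date(payload.requested_at || Date.now());
  const deadline = new Date(
    requestedAt.getTime() + windowDays * 24 * 60 * 60 * 1000
  );
  return {
    order_id: payload.order_id,
    customer_name: payload.customer_name,
    return_deadline: deadline.toISOString().slice(0, 10), // YYYY-MM-DD
  };
}

const auth = buildReturnAuthorization({
  customer_name: 'Jane Doe',
  order_id: 'ORD123456',
  requested_at: '2025-01-20T10:00:00Z',
});
// auth.return_deadline === '2025-01-27'
```

In the actual workflow the equivalent logic lives in a Set node expression, which is where you would adjust the window length.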
by automedia
Monitor RSS Feeds, Extract Full Articles, and Save to Supabase

Overview

This workflow solves a common problem with RSS feeds: they often provide only a short summary or snippet of the full article. This template automatically monitors a list of your favorite blog RSS feeds, filters for new content, visits each article page to extract the entire blog post, and then saves the structured data into a Supabase database. It's designed for content creators, marketers, researchers, and anyone who needs to build a personal knowledge base, conduct competitive analysis, or power a content aggregation system without manual copy-pasting.

Use Cases

- Content Curation: automatically gather full-text articles for a newsletter or social media content.
- Personal Knowledge Base: create a searchable archive of articles from experts in your field.
- Competitive Analysis: track what competitors are publishing without visiting their blogs every day.
- AI Model Training: collect a clean, structured dataset of full-text articles to fine-tune an AI model.

How It Works

1. Scheduled Trigger: the workflow runs automatically on a set schedule (default is once per day).
2. Fetch RSS Feeds: it takes a list of RSS feed URLs you provide in the "blogs to track" node.
3. Filter for New Posts: it checks the publication date of each article and only continues if the article is newer than a specified age (e.g., published within the last 60 days).
4. Extract Full Content: for each new article, the workflow uses the Jina AI Reader URL (https://r.jina.ai/) to scrape the full, clean text from the blog post's webpage. This is a free and powerful way to get past the RSS snippet limit.
5. Save to Supabase: finally, it organizes the extracted data and saves it to your chosen Supabase table.
The following data is saved by default:

- title
- source_url (the link to the original article)
- content_snippet (the full extracted article text)
- published_date
- creator (the author)
- status (a static value you can set, e.g., "new")
- content_type (a static value you can set, e.g., "blog")

Setup Instructions

You can get this template running in about 10-15 minutes.

1. Set Up Your RSS Feed List: Navigate to the "blogs to track" Set node. In the source_identifier field, replace the example URLs with the RSS feed URLs for the blogs you want to monitor. You can add as many as you like. Tip: the best way to find a site's RSS feed is to use a tool like Perplexity or a web-browsing enabled LLM.

   // Example list of RSS feeds
   ['https://blog.n8n.io/rss', 'https://zapier.com/blog/feeds/latest/']

2. Configure the Content Age Filter: Go to the "max_content_age_days" Set node. Change the value from the default 60 to your desired timeframe (e.g., 7 to only get articles from the last week).

3. Connect Your Storage Destination: The template uses the "Save Blog Data to Database" Supabase node. First, ensure you have a table in your Supabase project with columns that match the data (e.g., title, source_url, content_snippet, published_date, creator, etc.). In the n8n node, create new credentials using your Supabase Project URL and Service Role Key. Select your table from the list and map the data fields from the workflow to your table columns. Want to use something else? You can easily replace the Supabase node with a Google Sheets, Airtable, or built-in n8n Data Table node. Just drag the final connection to your new node and configure the field mapping.

4. Set Your Schedule: Click on the first node, "Schedule Trigger", and adjust the trigger interval to your needs. The default is every day at noon.

5. Activate Workflow: Click "Save", then toggle the workflow to Active. You're all set!
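The age filter and the Jina AI Reader step above can be sketched in a few lines of Code-node JavaScript. The item field names (`pubDate`, `link`) follow the usual output of an RSS read node; the template's exact expressions may differ.

```javascript
// Sketch: keep only feed items published within the last N days
// (60 by default, matching the max_content_age_days setting).
function filterRecentItems(items, maxAgeDays = 60, now = new Date()) {
  const cutoff = now.getTime() - maxAgeDays * 24 * 60 * 60 * 1000;
  return items.filter((item) => new Date(item.pubDate).getTime() >= cutoff);
}

// The Jina AI Reader URL is built by simply prefixing the article link.
const readerUrl = (link) => `https://r.jina.ai/${link}`;

const now = new Date('2025-06-01T00:00:00Z');
const recent = filterRecentItems(
  [
    { pubDate: '2025-05-30T00:00:00Z', link: 'https://blog.n8n.io/post-a' },
    { pubDate: '2025-01-01T00:00:00Z', link: 'https://blog.n8n.io/post-b' },
  ],
  60,
  now
);
```

Fetching `readerUrl(item.link)` with a plain HTTP Request node then returns the cleaned full text of the article.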
by David Olusola
🎓 n8n Learning Hub — AI-Powered YouTube Educator Directory

📋 Overview

This workflow demonstrates how to use n8n Data Tables to create a searchable database of educational YouTube content. Users can search for videos by topic (e.g., "voice", "scraping", "lead gen") and receive formatted recommendations from top n8n educators.

What This Workflow Does:

- **Receives search queries** via webhook (e.g., topic: "voice agents")
- **Processes keywords** using JavaScript to normalize search terms
- **Queries a Data Table** to find matching educational videos
- **Returns formatted results** with video titles, educators, difficulty levels, and links
- **Populates the database** with a one-time setup workflow

🎯 Key Features

- ✅ Data Tables Introduction - learn how to store and query structured data
- ✅ Webhook Integration - accept external requests and return JSON responses
- ✅ Keyword Processing - simple text normalization and keyword matching
- ✅ Batch Operations - use Split in Batches to populate tables efficiently
- ✅ Frontend Ready - easy to connect with Lovable, Replit, or custom UIs

🛠️ Setup Guide

Step 1: Import the Workflow

1. Copy the workflow JSON
2. In n8n, go to Workflows → Import from File or Import from URL
3. Paste the JSON and click Import

Step 2: Create the Data Table

The workflow uses a Data Table called n8n_Educator_Videos with these columns:

- **Educator** (text) - creator name
- **video_title** (text) - video title
- **Difficulty** (text) - Beginner/Intermediate/Advanced
- **YouTubeLink** (text) - full YouTube URL
- **Description** (text) - video summary for search matching

To create it:

1. Go to Data Tables in your n8n instance
2. Click + Create Data Table
3. Name it n8n_Educator_Videos
4. Add the 5 columns listed above

Step 3: Populate the Database

1. Click on the "When clicking 'Execute workflow'" node (bottom branch)
2. Click Execute Node to run the setup
3. This will insert all 9 educational videos into your Data Table

Step 4: Activate the Webhook

1. Click on the Webhook node (top branch)
2. Copy the Production URL (looks like: https://your-n8n.app.n8n.cloud/webhook/1799531d-...)
3. Click Activate on the workflow
4. Test it with a POST request:

curl -X POST https://your-n8n.app.n8n.cloud/webhook/YOUR-WEBHOOK-ID \
  -H "Content-Type: application/json" \
  -d '{"topic": "voice"}'

🔍 How the Search Works

Keyword Processing Logic

The JavaScript node normalizes search queries:

- **"voice", "audio", "talk"** → matches voice agent tutorials
- **"lead", "lead gen"** → matches lead generation content
- **"scrape", "data", "scraping"** → matches web scraping tutorials

The Data Table query uses LIKE matching on the Description field, so partial matches work great.

Example Queries:

{"topic": "voice"}     // Returns Eleven Labs Voice Agent
{"topic": "scraping"}  // Returns 2 scraping tutorials
{"topic": "avatar"}    // Returns social media AI avatar videos
{"topic": "advanced"}  // Returns all advanced-level content

🎨 Building a Frontend with Lovable or Replit

Option 1: Lovable (lovable.dev)

Lovable is an AI-powered frontend builder perfect for quick prototypes.

Prompt for Lovable:

Create a modern search interface for an n8n YouTube learning hub:
- Title: "🎓 n8n Learning Hub"
- Search bar with placeholder "Search for topics: voice, scraping, RAG..."
- Submit button that POSTs to webhook: [YOUR_WEBHOOK_URL]
- Display results as cards showing:
  - 🎥 Video Title (bold)
  - 👤 Educator name
  - 🧩 Difficulty badge (color-coded)
  - 🔗 YouTube link button
  - 📝 Description
- Design: dark mode, modern glassmorphism style, responsive grid layout

Implementation Steps:

1. Go to lovable.dev and start a new project
2. Paste the prompt above
3. Replace [YOUR_WEBHOOK_URL] with your actual webhook
4. Export the code or deploy directly

Option 2: Replit (replit.com)

Use Replit's HTML/CSS/JS template for more control.

HTML Structure:

<!DOCTYPE html>
<html>
<head>
  <title>n8n Learning Hub</title>
  <style>
    body { font-family: Arial; max-width: 900px; margin: 50px auto; }
    #search { padding: 10px; width: 70%; font-size: 16px; }
    button { padding: 10px 20px; font-size: 16px; }
    .video-card { border: 1px solid #ddd; padding: 20px; margin: 20px 0; }
  </style>
</head>
<body>
  <h1>🎓 n8n Learning Hub</h1>
  <input id="search" placeholder="Search: voice, scraping, RAG..." />
  <button onclick="searchVideos()">Search</button>
  <div id="results"></div>
  <script>
    async function searchVideos() {
      const topic = document.getElementById('search').value;
      const response = await fetch('YOUR_WEBHOOK_URL', {
        method: 'POST',
        headers: {'Content-Type': 'application/json'},
        body: JSON.stringify({topic})
      });
      const data = await response.json();
      document.getElementById('results').innerHTML = data.Message || 'No results';
    }
  </script>
</body>
</html>

Option 3: Base44 (No-Code Tool)

If using Base44 or similar no-code tools:

1. Create a Form with a text input (name: topic)
2. Add a Submit Action → HTTP Request
3. Set Method: POST, URL: your webhook
4. Map form data: {"topic": "{{topic}}"}
5. Display the response in a Text Block using {{response.Message}}

📊 Understanding Data Tables

Why Data Tables?

- **Persistent Storage** - data survives workflow restarts
- **Queryable** - use conditions (equals, like, greater than) to filter
- **Scalable** - handle thousands of records efficiently
- **No External DB** - everything stays within n8n

Common Operations:

- Insert Row - add new records (used in the setup branch)
- Get Row(s) - query with filters (used in the search branch)
- Update Row - modify existing records by ID
- Delete Row - remove records

Best Practices:

- Use descriptive column names
- Include a searchable text field (like Description)
- Keep data normalized (avoid duplicate entries)
- Use the "Split in Batches" node for bulk operations

🚀 Extending This Workflow

Ideas to Try:

1. Add More Educators - expand the video database
2. Category Filtering - add a Category column (Automation, AI, Scraping)
3. Difficulty Sorting - let users filter by skill level
4. Vote System - add upvote/downvote columns
5. Analytics - track which topics are searched most
6. Admin Panel - build a form to add new videos via webhook

Advanced Features:

- **AI-Powered Search** - use OpenAI embeddings for semantic search
- **Thumbnail Scraping** - fetch YouTube thumbnails via API
- **Auto-Updates** - periodically check for new videos from educators
- **Personalization** - track user preferences in a separate table

🐛 Troubleshooting

- Problem: Webhook returns empty results. Solution: check that the Description field contains searchable keywords.
- Problem: Database is empty. Solution: run the "When clicking 'Execute workflow'" branch to populate data.
- Problem: Frontend not connecting. Solution: verify the webhook is activated and the URL is correct (use Test mode first).
- Problem: Search too broad/narrow. Solution: adjust the keyword logic in the "Load Video DB" node.

📚 Learning Resources

Want to learn more about the concepts in this workflow?

- **Data Tables:** n8n Data Tables Documentation
- **Webhooks:** Webhook Node Guide
- **JavaScript in n8n:** "Every N8N JavaScript Function Explained" (see database)

🎓 What You Learned

By completing this workflow, you now understand:

- ✅ How to create and populate Data Tables
- ✅ How to query tables with conditional filters
- ✅ How to build webhook-based APIs in n8n
- ✅ How to process and normalize user input
- ✅ How to format data for frontend consumption
- ✅ How to connect n8n with external UIs

Happy Learning! 🚀

Built with ❤️ using n8n Data Tables
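The keyword-processing step described in this workflow can be sketched as a small Code-node function. The synonym groups below mirror the examples given in the description; the template's actual mapping logic may differ.

```javascript
// Sketch: normalize the incoming topic and map synonyms onto the
// canonical terms stored in the Description column, falling back to a
// plain LIKE match on the raw topic.
const SYNONYMS = {
  voice: ['voice', 'audio', 'talk'],
  'lead generation': ['lead', 'lead gen'],
  scraping: ['scrape', 'data', 'scraping'],
};

function normalizeTopic(rawTopic) {
  const topic = String(rawTopic || '').trim().toLowerCase();
  for (const [canonical, aliases] of Object.entries(SYNONYMS)) {
    if (aliases.some((alias) => topic.includes(alias))) return canonical;
  }
  return topic; // no synonym group matched; use the raw (lowercased) topic
}
```

The returned term is then used as the LIKE condition in the Get Row(s) query against the Description field.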
by Elodie Tasia
Create centralized, structured logs directly from your n8n workflows, using Supabase as your scalable log database. Whether you're debugging a workflow, monitoring execution status, or tracking error events, this template makes it easy to log messages in a consistent, structured format inspired by Log4j2 levels (DEBUG, INFO, WARN, ERROR, FATAL). You'll get a reusable sub-workflow that lets you log any message with optional metadata, tied to a workflow execution and a specific node.

What this template does

Provides a sub-workflow that inserts log entries into Supabase. Each log entry supports the following fields:

- workflow_name: your n8n workflow identifier
- node_name: the last executed node
- execution_id: the n8n execution ID, for correlation
- log_level: one of DEBUG, INFO, WARN, ERROR, FATAL
- message: the textual message for the log
- metadata: optional JSON metadata (flexible format)

Comes with examples for different log levels: easily call the sub-workflow from any step with an Execute Workflow node and pass dynamic parameters.

Use Cases

- Debug complex workflows without relying on internal n8n logs.
- Catch and trace errors with contextual metadata.
- Integrate logs into external dashboards or monitoring tools via Supabase SQL or APIs.
- Analyze logs by level, time, or workflow.

Requirements

To use this template, you'll need:

- A Supabase project with:
  - a log_level_type enum
  - a logs table matching the expected structure
- A service role key or Supabase credentials available in n8n.

The table schema and SQL scripts are given in the template file.

How to Use This Template

1. Clone the sub-workflow into your n8n instance.
2. Set up Supabase credentials (in the Supabase node).
3. Call the sub-workflow using the Execute Workflow node.
Provide input values like:

{
  "workflow_name": "sync_crm_to_airtable",
  "execution_id": {{$execution.id}},
  "node_name": "Airtable Insert",
  "log_level": "INFO",
  "message": "New contact pushed to Airtable successfully",
  "metadata": {
    "recordId": "rec123",
    "fields": ["email", "firstName"]
  }
}

Repeat anywhere you need to log custom events.
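As an illustration, the input above could be assembled by a small helper in a Code node before calling the sub-workflow. The helper and its validation are a sketch, not part of the template; only the field names and the Log4j2-style level set come from this description.

```javascript
// Sketch: build and validate a log entry matching the sub-workflow's
// expected input fields.
const LOG_LEVELS = ['DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL'];

function buildLogEntry({ workflowName, executionId, nodeName, level, message, metadata }) {
  if (!LOG_LEVELS.includes(level)) {
    throw new Error(`Unknown log_level: ${level}`);
  }
  return {
    workflow_name: workflowName,
    execution_id: String(executionId), // n8n execution IDs correlate log rows
    node_name: nodeName,
    log_level: level,
    message,
    metadata: metadata ?? {}, // optional, flexible JSON
  };
}

const entry = buildLogEntry({
  workflowName: 'sync_crm_to_airtable',
  executionId: 42,
  nodeName: 'Airtable Insert',
  level: 'INFO',
  message: 'New contact pushed to Airtable successfully',
});
```

Rejecting unknown levels up front keeps the rows consistent with the Supabase log_level_type enum, so inserts never fail on a typo'd level.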
by Rivers Colyer
Who's it for: sales teams and agencies that want to automatically send call summaries to their CRM. This flow is best suited for agencies and businesses that use ZoomInfo Chorus AI to capture and analyze calls, and HubSpot as their main CRM system for managing client data.

How it works / What it does:

The flow runs every hour and checks Chorus for new call summaries. If new summaries are found, they are added as notes to the corresponding HubSpot company record. If the company does not exist in HubSpot, the summary is skipped. If the engagement already exists as a note in HubSpot, it is also skipped. In short: every hour, Chorus → HubSpot notes sync (only when unique and matched).

Workflow Overview

Here's how the logic works, step by step:

1. Triggers: Run every hour (scheduled) or When clicking Execute workflow (manual testing).
2. Fetch from Chorus.AI: Get Chorus Engagement per last day requests new engagements from the last 24 h; pagination is handled automatically (up to 15 requests). Merge paginated engagements combines all pages into a single list. Filter items with empty meeting_summary removes calls without summaries.
3. Loop through engagements: Loop Over Engagements processes each engagement one by one.
4. Find the matching HubSpot company: Search Company In HubSpot By Name looks for an exact match with account_name. If company exists continues only if a match is found; otherwise → Skip, if company not found.
5. Check for existing notes: Search notes searches HubSpot for a note containing the engagement_id. If note not exist continues only if no duplicate is found; otherwise → Skip, if note already exists.
6. Create a new HubSpot note: Create Note Payload builds a formatted note body with the call date/time, summary text, a link to the call recording, and the engagement ID. Create Note inserts the note into HubSpot, associated with the matched company.
7. Loop continues: after processing, the flow moves back to Loop Over Engagements until all engagements are handled.
Requirements

- Chorus AI account or an existing API token.
- HubSpot account with the ability to create applications, OR an existing app token.

How to set up:

1. Generate a Chorus.AI API token: go to Settings → Personal Settings → API Access → Generate API Token. Save this token in a secure place.
2. In your n8n instance, create new credentials with any name you prefer (e.g., Chorus.AI - API Token). Type: Header Auth. Header Name: Authorization. Header Value: your generated Chorus API token. Save the credentials.
3. Create a HubSpot application with the following access scopes: crm.objects.contacts.write and crm.objects.companies.read. Save the Access Token and Client Secret securely. The Access Token will be used in this flow.
4. In your n8n instance, create new credentials with any name you prefer (e.g., HubSpot App Token). Type: HubSpot App Token. App Token: paste the token from step 3. Save the credentials.
5. Install this flow in your n8n instance (cloud or self-hosted). After installation, some nodes — Get Chorus Engagement per last day, Search Company In HubSpot By Name, Search notes, Create Note — will show errors. This is expected, as they need to be linked to your new credentials.
6. Open the node Get Chorus Engagement per last day and update the credentials: Authentication: Generic Credential Type. Generic Auth Type: Header Auth. Header Auth: select the Chorus credentials you created in step 2.
7. Open the node Search Company In HubSpot By Name and update the credentials: Authentication: Predefined Credential Type. Credential Type: HubSpot App Token. HubSpot App Token: select the credentials from step 4.
8. Repeat step 7 for the node Search notes.
9. Repeat step 7 for the node Create Note.
10. Save your updated flow by clicking Save in the top-right corner.
11. Run the workflow by clicking Execute Workflow to test that all credentials work correctly. Once confirmed, activate your flow by toggling the switch in the top-right corner.

That's it!
🎉 Your flow will now run every hour, syncing new Chorus call summaries to HubSpot—only when the company names match and the engagement isn’t already logged.
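The note-payload and duplicate-check steps can be sketched as follows. Embedding the engagement_id in the note body is what makes the later "Search notes" lookup reliable; the field names other than engagement_id, meeting_summary, and account_name are illustrative assumptions.

```javascript
// Sketch: build a HubSpot note body that embeds the Chorus engagement_id,
// and detect duplicates by searching existing note bodies for that ID.
function buildNoteBody(engagement) {
  return [
    `Call: ${engagement.date_time}`,
    '',
    engagement.meeting_summary,
    '',
    `Recording: ${engagement.recording_url}`,
    `Engagement ID: ${engagement.engagement_id}`,
  ].join('\n');
}

function isDuplicate(existingNotes, engagementId) {
  return existingNotes.some((note) =>
    note.body.includes(`Engagement ID: ${engagementId}`)
  );
}

const body = buildNoteBody({
  date_time: '2025-05-01T10:00:00Z',
  meeting_summary: 'Intro call with prospect, discussed pricing.',
  recording_url: 'https://chorus.example.com/rec/abc123',
  engagement_id: 'abc123',
});
```

Because the ID is written into every note, re-running the flow never creates a second note for the same call.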
by Stephan Koning
🌊 What it Does

This workflow automatically classifies uploaded files (PDFs or images) as floorplans or non-floorplans. It filters out junk files, then analyzes valid floorplans to extract room sizes and measurements.

👥 Who it's For

Built for real estate platforms, property managers, and automation builders who need a trustworthy way to detect invalid uploads while quickly turning true floorplans into structured, reusable data.

⚙️ How it Works

1. A user uploads a file (PDF, JPG, PNG, etc.).
2. The workflow routes the file based on type for specialized processing.
3. A two-layer quality check is applied using heuristics and AI classification.
4. A confidence score determines if the file is a valid floorplan.
5. Valid floorplans are passed to a powerful OCR/AI for deep analysis.
6. Results are returned as JSON and a user-friendly HTML table.

🧠 The Technology Behind the Demo

This MVP is a glimpse into a more advanced commercial system. It runs on a custom n8n workflow that leverages Mistral AI's latest OCR technology. Here's what makes it powerful:

- Structured Data Extraction: the AI is forced to return data in a clean, predictable JSON Schema. This isn't just text scraping; it's a reliable data pipeline.
- Intelligent Data Enrichment: the workflow doesn't just extract data — it enriches it. A custom script automatically calculates crucial metrics like wall surface area from the floor dimensions, even using fallback estimates if needed.
- Automated Aggregation: it goes beyond individual rooms by automatically calculating totals per floor level and per room type, providing immediate, actionable insights.

While this demo shows the core classification and measurement (Step 1), the full commercial version includes Steps 2 & 3 (Automated Offer Generation), currently in use by a client in the construction industry.
Test the Live MVP

📋 Requirements

- Jigsaw Stack API Key
- n8n Instance
- Webhook Endpoint

🎨 Customization

Adjust thresholds, fine-tune heuristics, or swap OCR providers to better match your business needs and downstream integrations.
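The wall-surface enrichment mentioned above can be sketched like this. The ceiling height, the square-room fallback, and all field names are assumptions for illustration; the commercial workflow's actual heuristics are not published.

```javascript
// Sketch: estimate wall surface per room from floor dimensions,
// with a fallback estimate when only the floor area is known.
function estimateWallArea(room, defaultHeightM = 2.6) {
  const { lengthM, widthM, heightM } = room;
  if (lengthM > 0 && widthM > 0) {
    // Wall area = room perimeter × ceiling height.
    const perimeter = 2 * (lengthM + widthM);
    return perimeter * (heightM || defaultHeightM);
  }
  // Fallback: only the floor area was extracted; assume a square room.
  const side = Math.sqrt(room.floorAreaM2);
  return 4 * side * defaultHeightM;
}
```

Summing these per-room estimates by floor level or room type gives the kind of aggregate totals the description refers to.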
by Sean Spaniel
Predict Housing Prices with a Neural Network

This n8n template demonstrates how a simple Multi-Layer Perceptron (MLP) neural network can predict housing prices. The prediction is based on four key features, processed through a three-layer model.

- Input Layer: receives the initial data via a webhook that accepts four query parameters.
- Hidden Layer: composed of two neurons. Each neuron calculates a weighted sum of the inputs, adds a bias, and applies the ReLU activation function.
- Output Layer: contains one neuron that calculates the weighted sum of the hidden layer's outputs, adds its bias, and returns the final price prediction.

Setup

This template works out of the box and requires no special configuration or prerequisites. Just import the workflow to get started.

How to Use

Trigger this workflow by sending a GET request to the webhook endpoint, including the house features as query parameters in the URL.

Endpoint: /webhook/regression/house/price

Query Parameters:

- square_feet: the total square footage of the house.
- number_rooms: the total number of rooms.
- age_in_years: the age of the house in years.
- distance_to_city_in_km: the distance to the nearest city center in kilometers.

Example

Here's an example curl request for a 1,500 sq ft, 3-room house that is 10 years old and 5 km from the city.

Request:

curl "https://your-n8n-instance.com/webhook/regression/house/price?square_feet=1500&number_rooms=3&age_in_years=10&distance_to_city_in_km=5"

Response JSON:

{ "price": 53095.832123960805 }
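The three-layer forward pass described above can be written out in a few lines of JavaScript. The weights and biases below are illustrative placeholders, not the template's actual values, so the resulting price differs from the sample response.

```javascript
// Sketch of a 4-input → 2-hidden (ReLU) → 1-output MLP forward pass.
const relu = (x) => Math.max(0, x);
const dot = (w, x) => w.reduce((sum, wi, i) => sum + wi * x[i], 0);

function predictPrice(features, model) {
  // Hidden layer: two neurons, each a weighted sum + bias through ReLU.
  const hidden = model.hidden.map((n) => relu(dot(n.weights, features) + n.bias));
  // Output layer: one linear neuron over the hidden activations.
  return dot(model.output.weights, hidden) + model.output.bias;
}

// Placeholder parameters (NOT the template's trained values).
const model = {
  hidden: [
    { weights: [0.03, 5.0, -0.2, -0.5], bias: 1.0 },
    { weights: [0.01, 2.0, -0.1, -0.3], bias: 0.5 },
  ],
  output: { weights: [500, 800], bias: 10000 },
};

// Feature order: [square_feet, number_rooms, age_in_years, distance_to_city_in_km]
const price = predictPrice([1500, 3, 10, 5], model);
```

Everything the webhook does is this arithmetic: two weighted sums with ReLU, then one more weighted sum for the final price.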
by Luis Hernandez
Custom Interface for GLPI with n8n

Transform your GLPI system's user experience with a modern, optimized web interface that simplifies technical support ticket creation.

How it works

1. Unified entry portal: users access an intuitive web form where they select between "Request" or "Incident".
2. Custom data collection: specific forms adapt their fields based on the ticket type, requesting only the relevant information.
3. Automatic processing: the system automatically maps categories and priorities, and creates tickets in GLPI via the REST API.
4. File management: allows document attachments that are automatically linked to the created ticket.
5. Confirmation and tracking: provides the generated ticket ID for future follow-up.

Key benefits

- ✅ More user-friendly interface than the native GLPI interface
- ✅ Adaptive forms based on request type
- ✅ Error reduction through automatic validations
- ✅ Mobile-optimized user experience
- ✅ Seamless integration with existing GLPI

Setup steps

⏱️ Estimated time: 15-20 minutes

Prerequisites:

- GLPI server with the REST API enabled
- User with application administrator privileges
- Application token generated in GLPI

Basic configuration:

1. Connection variables: update the GLPI server URL and application token in the "Configuration Variables" node.
2. Authentication credentials: configure HTTP Basic Auth credentials for the GLPI connection.
3. Category IDs: identify and map the ITIL category IDs in the processing nodes.
4. Form customization: adapt fields, options, and categories according to organizational needs.

Activation:

1. Activate the workflow and obtain the web form URLs.
2. Share the portal link with end users.
3. Perform ticket creation tests.

Ideal use cases

- Companies looking to improve the GLPI user experience
- Organizations needing more intuitive support forms
- IT teams wanting to reduce miscategorized tickets
- Companies requiring mobile-friendly support interfaces

Technologies used

- n8n (orchestration and web forms)
- GLPI REST API
- HTTP Basic Authentication
- Automatic session management
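The automatic-processing step can be illustrated with the kind of payload GLPI's REST API expects for ticket creation. The mapping function below is a sketch of what the workflow's processing nodes might build, with hypothetical form field names; in GLPI, ticket type 1 is an Incident and type 2 is a Request.

```javascript
// Sketch: map the web form's fields onto GLPI's Ticket creation payload.
function buildTicketInput(form) {
  return {
    input: {
      name: form.title,                        // ticket subject
      content: form.description,               // ticket body
      type: form.ticketType === 'incident' ? 1 : 2, // GLPI: 1=Incident, 2=Request
      itilcategories_id: form.categoryId,      // mapped ITIL category ID
      priority: form.priority,                 // GLPI priority scale
    },
  };
}

// This payload is POSTed to <glpi-url>/apirest.php/Ticket with the
// App-Token header and the Session-Token obtained from initSession.
const payload = buildTicketInput({
  title: 'Printer not working',
  description: 'Office printer shows error E05',
  ticketType: 'incident',
  categoryId: 12,
  priority: 3,
});
```

GLPI responds with the new ticket's id, which is the value the portal shows the user for follow-up.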