by David Olusola
# n8n Learning Hub: AI-Powered YouTube Educator Directory

## Overview

This workflow demonstrates how to use n8n Data Tables to create a searchable database of educational YouTube content. Users can search for videos by topic (e.g., "voice", "scraping", "lead gen") and receive formatted recommendations from top n8n educators.

**What This Workflow Does:**

- **Receives search queries** via webhook (e.g., topic: "voice agents")
- **Processes keywords** using JavaScript to normalize search terms
- **Queries a Data Table** to find matching educational videos
- **Returns formatted results** with video titles, educators, difficulty levels, and links
- **Populates the database** with a one-time setup workflow

## Key Features

- **Data Tables Introduction** - Learn how to store and query structured data
- **Webhook Integration** - Accept external requests and return JSON responses
- **Keyword Processing** - Simple text normalization and keyword matching
- **Batch Operations** - Use Split in Batches to populate tables efficiently
- **Frontend Ready** - Easy to connect with Lovable, Replit, or custom UIs

## Setup Guide

### Step 1: Import the Workflow

1. Copy the workflow JSON
2. In n8n, go to Workflows → Import from File or Import from URL
3. Paste the JSON and click Import

### Step 2: Create the Data Table

The workflow uses a Data Table called `n8n_Educator_Videos` with these columns:

- **Educator** (text) - Creator name
- **video_title** (text) - Video title
- **Difficulty** (text) - Beginner/Intermediate/Advanced
- **YouTubeLink** (text) - Full YouTube URL
- **Description** (text) - Video summary for search matching

To create it:

1. Go to Data Tables in your n8n instance
2. Click + Create Data Table
3. Name it `n8n_Educator_Videos`
4. Add the 5 columns listed above

### Step 3: Populate the Database

1. Click on the "When clicking 'Execute workflow'" node (bottom branch)
2. Click Execute Node to run the setup
3. This will insert all 9 educational videos into your Data Table

### Step 4: Activate the Webhook

1. Click on the Webhook node (top branch)
2. Copy the Production URL (looks like: `https://your-n8n.app.n8n.cloud/webhook/1799531d-...`)
3. Click Activate on the workflow
4. Test it with a POST request:

```bash
curl -X POST https://your-n8n.app.n8n.cloud/webhook/YOUR-WEBHOOK-ID \
  -H "Content-Type: application/json" \
  -d '{"topic": "voice"}'
```

## How the Search Works

### Keyword Processing Logic

The JavaScript node normalizes search queries:

- **"voice", "audio", "talk"** → Matches voice agent tutorials
- **"lead", "lead gen"** → Matches lead generation content
- **"scrape", "data", "scraping"** → Matches web scraping tutorials

The Data Table query uses LIKE matching on the Description field, so partial matches work great. A minimal sketch of this normalization step follows the example queries below.

### Example Queries

```js
{"topic": "voice"}     // Returns Eleven Labs Voice Agent
{"topic": "scraping"}  // Returns 2 scraping tutorials
{"topic": "avatar"}    // Returns social media AI avatar videos
{"topic": "advanced"}  // Returns all advanced-level content
```
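The description above summarizes the Code node's behavior without showing it. As a rough illustration, a normalization step like this could live in that node; the synonym map, field names, and return shape here are assumptions for the sketch, not the template's actual code:

```javascript
// n8n Code node sketch: normalize the incoming topic into a search keyword.
// The synonym groups are illustrative; extend them to match your data.
const synonyms = {
  voice: ['voice', 'audio', 'talk'],
  lead: ['lead', 'lead gen', 'leads'],
  scraping: ['scrape', 'scraping', 'data'],
};

// Webhook payloads arrive under the item's `body` property.
const raw = ($json.body?.topic || '').toString().toLowerCase().trim();

// Map the query to a canonical keyword when a synonym matches;
// otherwise keep the raw text for the LIKE filter on Description.
let keyword = raw;
for (const [canonical, words] of Object.entries(synonyms)) {
  if (words.some((w) => raw.includes(w))) {
    keyword = canonical;
    break;
  }
}

return [{ json: { keyword } }];
```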
## Building a Frontend with Lovable or Replit

### Option 1: Lovable (lovable.dev)

Lovable is an AI-powered frontend builder perfect for quick prototypes.

**Prompt for Lovable:**

```
Create a modern search interface for an n8n YouTube learning hub:
- Title: "n8n Learning Hub"
- Search bar with placeholder "Search for topics: voice, scraping, RAG..."
- Submit button that POSTs to webhook: [YOUR_WEBHOOK_URL]
- Display results as cards showing:
  - Video Title (bold)
  - Educator name
  - Difficulty badge (color-coded)
  - YouTube link button
  - Description
- Design: Dark mode, modern glassmorphism style, responsive grid layout
```

**Implementation Steps:**

1. Go to lovable.dev and start a new project
2. Paste the prompt above
3. Replace [YOUR_WEBHOOK_URL] with your actual webhook
4. Export the code or deploy directly

### Option 2: Replit (replit.com)

Use Replit's HTML/CSS/JS template for more control.

**HTML Structure:**

```html
<!DOCTYPE html>
<html>
<head>
  <title>n8n Learning Hub</title>
  <style>
    body { font-family: Arial; max-width: 900px; margin: 50px auto; }
    #search { padding: 10px; width: 70%; font-size: 16px; }
    button { padding: 10px 20px; font-size: 16px; }
    .video-card { border: 1px solid #ddd; padding: 20px; margin: 20px 0; }
  </style>
</head>
<body>
  <h1>n8n Learning Hub</h1>
  <input id="search" placeholder="Search: voice, scraping, RAG..." />
  <button onclick="searchVideos()">Search</button>
  <!-- Container where search results are rendered -->
  <div id="results"></div>
  <script>
    async function searchVideos() {
      const topic = document.getElementById('search').value;
      const response = await fetch('YOUR_WEBHOOK_URL', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ topic })
      });
      const data = await response.json();
      document.getElementById('results').innerHTML = data.Message || 'No results';
    }
  </script>
</body>
</html>
```

### Option 3: Base44 (No-Code Tool)

If using Base44 or similar no-code tools:

1. Create a Form with a text input (name: topic)
2. Add a Submit Action → HTTP Request
3. Set Method: POST, URL: your webhook
4. Map form data: {"topic": "{{topic}}"}
5. Display the response in a Text Block using {{response.Message}} (see the example response shape below)
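All three frontends read fields off the webhook's JSON response. The template exposes at least a `Message` field (used in the Replit example above); if you also return the matching rows, the payload might look like this illustrative shape, which is an assumption, not the template's guaranteed output. Adjust the frontend mapping to whatever your Respond to Webhook node actually sends:

```json
{
  "Message": "Found 1 video for 'voice'",
  "results": [
    {
      "Educator": "Example Educator",
      "video_title": "Building a Voice Agent with Eleven Labs",
      "Difficulty": "Intermediate",
      "YouTubeLink": "https://www.youtube.com/watch?v=...",
      "Description": "Step-by-step voice agent tutorial."
    }
  ]
}
```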
## Understanding Data Tables

### Why Data Tables?

- **Persistent Storage** - Data survives workflow restarts
- **Queryable** - Use conditions (equals, like, greater than) to filter
- **Scalable** - Handle thousands of records efficiently
- **No External DB** - Everything stays within n8n

### Common Operations

- **Insert Row** - Add new records (used in the setup branch)
- **Get Row(s)** - Query with filters (used in the search branch)
- **Update Row** - Modify existing records by ID
- **Delete Row** - Remove records

### Best Practices

- Use descriptive column names
- Include a searchable text field (like Description)
- Keep data normalized (avoid duplicate entries)
- Use the "Split in Batches" node for bulk operations

## Extending This Workflow

### Ideas to Try

1. Add More Educators - Expand the video database
2. Category Filtering - Add a Category column (Automation, AI, Scraping)
3. Difficulty Sorting - Let users filter by skill level
4. Vote System - Add upvote/downvote columns
5. Analytics - Track which topics are searched most
6. Admin Panel - Build a form to add new videos via webhook

### Advanced Features

- **AI-Powered Search** - Use OpenAI embeddings for semantic search
- **Thumbnail Scraping** - Fetch YouTube thumbnails via API
- **Auto-Updates** - Periodically check for new videos from educators
- **Personalization** - Track user preferences in a separate table

## Troubleshooting

- **Problem:** Webhook returns empty results. **Solution:** Check that the Description field contains searchable keywords.
- **Problem:** Database is empty. **Solution:** Run the "When clicking 'Execute workflow'" branch to populate data.
- **Problem:** Frontend not connecting. **Solution:** Verify the webhook is activated and the URL is correct (use Test mode first).
- **Problem:** Search too broad/narrow. **Solution:** Adjust the keyword logic in the "Load Video DB" node.

## Learning Resources

Want to learn more about the concepts in this workflow?

- **Data Tables:** n8n Data Tables Documentation
- **Webhooks:** Webhook Node Guide
- **JavaScript in n8n:** "Every N8N JavaScript Function Explained" (see database)

## What You Learned

By completing this workflow, you now understand:

- How to create and populate Data Tables
- How to query tables with conditional filters
- How to build webhook-based APIs in n8n
- How to process and normalize user input
- How to format data for frontend consumption
- How to connect n8n with external UIs

Happy Learning! Built with ❤️ using n8n Data Tables
by automedia
# Monitor RSS Feeds, Extract Full Articles, and Save to Supabase

## Overview

This workflow solves a common problem with RSS feeds: they often provide only a short summary or snippet of the full article. This template automatically monitors a list of your favorite blog RSS feeds, filters for new content, visits each article page to extract the entire blog post, and then saves the structured data into a Supabase database.

It's designed for content creators, marketers, researchers, and anyone who needs to build a personal knowledge base, conduct competitive analysis, or power a content aggregation system without manual copy-pasting.

## Use Cases

- **Content Curation:** Automatically gather full-text articles for a newsletter or social media content.
- **Personal Knowledge Base:** Create a searchable archive of articles from experts in your field.
- **Competitive Analysis:** Track what competitors are publishing without visiting their blogs every day.
- **AI Model Training:** Collect a clean, structured dataset of full-text articles to fine-tune an AI model.

## How It Works

1. **Scheduled Trigger:** The workflow runs automatically on a set schedule (default is once per day).
2. **Fetch RSS Feeds:** It takes a list of RSS feed URLs you provide in the "blogs to track" node.
3. **Filter for New Posts:** It checks the publication date of each article and only continues if the article is newer than a specified age (e.g., published within the last 60 days).
4. **Extract Full Content:** For each new article, the workflow uses the Jina AI Reader URL (https://r.jina.ai/) to scrape the full, clean text from the blog post's webpage. This is a free and powerful way to get past the RSS snippet limit (a minimal sketch of this call appears at the end of this section).
5. **Save to Supabase:** Finally, it organizes the extracted data and saves it to your chosen Supabase table. The following data is saved by default:
   - title
   - source_url (the link to the original article)
   - content_snippet (the full extracted article text)
   - published_date
   - creator (the author)
   - status (a static value you can set, e.g., "new")
   - content_type (a static value you can set, e.g., "blog")

## Setup Instructions

You can get this template running in about 10-15 minutes.

1. **Set Up Your RSS Feed List:** Navigate to the "blogs to track" Set node. In the source_identifier field, replace the example URLs with the RSS feed URLs for the blogs you want to monitor. You can add as many as you like. Tip: the best way to find a site's RSS feed is to use a tool like Perplexity or a web-browsing enabled LLM.

   ```javascript
   // Example list of RSS feeds
   ['https://blog.n8n.io/rss', 'https://zapier.com/blog/feeds/latest/']
   ```

2. **Configure the Content Age Filter:** Go to the "max_content_age_days" Set node. Change the value from the default 60 to your desired timeframe (e.g., 7 to only get articles from the last week).

3. **Connect Your Storage Destination:** The template uses the "Save Blog Data to Database" Supabase node. First, ensure you have a table in your Supabase project with columns that match the data (e.g., title, source_url, content_snippet, published_date, creator, etc.). In the n8n node, create new credentials using your Supabase Project URL and Service Role Key. Select your table from the list and map the data fields from the workflow to your table columns. Want to use something else? You can easily replace the Supabase node with a Google Sheets, Airtable, or the built-in n8n Data Table node. Just drag the final connection to your new node and configure the field mapping.

4. **Set Your Schedule:** Click on the first node, "Schedule Trigger". Adjust the trigger interval to your needs. The default is every day at noon.
5. **Activate Workflow:** Click the "Save" button, then toggle the workflow to Active. You're all set!
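In the template the Jina AI Reader call happens in an HTTP Request node, but the pattern is easy to see in plain JavaScript (Node 18+ for global `fetch`). A minimal sketch, assuming the RSS item exposes its URL in a `link` field (field names vary per feed):

```javascript
// Prepend the Jina AI Reader prefix to an article URL to get the page
// back as clean, readable text instead of the RSS snippet.
async function extractFullArticle(articleUrl) {
  const response = await fetch(`https://r.jina.ai/${articleUrl}`);
  if (!response.ok) {
    throw new Error(`Jina Reader failed: ${response.status}`);
  }
  return response.text();
}

// Example usage:
// const fullText = await extractFullArticle('https://blog.n8n.io/some-post');
```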
by Stephan Koning
## What it Does

This workflow automatically classifies uploaded files (PDFs or images) as floorplans or non-floorplans. It filters out junk files, then analyzes valid floorplans to extract room sizes and measurements.

## Who it's For

Built for real estate platforms, property managers, and automation builders who need a trustworthy way to detect invalid uploads while quickly turning true floorplans into structured, reusable data.

## How it Works

1. User uploads a file (PDF, JPG, PNG, etc.).
2. The workflow routes the file based on type for specialized processing.
3. A two-layer quality check is applied using heuristics and AI classification.
4. A confidence score determines if the file is a valid floorplan.
5. Valid floorplans are passed to a powerful OCR/AI for deep analysis.
6. Results are returned as JSON and a user-friendly HTML table.

## The Technology Behind the Demo

This MVP is a glimpse into a more advanced commercial system. It runs on a custom n8n workflow that leverages Mistral AI's latest OCR technology. Here's what makes it powerful:

- **Structured Data Extraction:** The AI is forced to return data in a clean, predictable JSON Schema. This isn't just text scraping; it's a reliable data pipeline.
- **Intelligent Data Enrichment:** The workflow doesn't just extract data; it enriches it. A custom script automatically calculates crucial metrics like wall surface area from the floor dimensions, even using fallback estimates if needed (see the sketch at the end of this description).
- **Automated Aggregation:** It goes beyond individual rooms by automatically calculating totals per floor level and per room type, providing immediate, actionable insights.

While this demo shows the core classification and measurement (Step 1), the full commercial version includes Steps 2 and 3 (Automated Offer Generation), currently in use by a client in the construction industry.

Test the Live MVP

## Requirements

- Jigsaw Stack API Key
- n8n Instance
- Webhook Endpoint

## Customization

Adjust thresholds, fine-tune heuristics, or swap OCR providers to better match your business needs and downstream integrations.
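The enrichment script itself isn't included in this demo description. As a sketch of the calculation it describes, wall surface area can be derived from floor dimensions roughly like this; the 2.5 m ceiling fallback and the square-room approximation are assumptions for illustration, not the commercial system's actual logic:

```javascript
// Estimate wall surface area for one room from its floor dimensions:
// wallArea = perimeter * ceilingHeight (openings like doors are ignored here).
function wallSurfaceArea(room) {
  const height = room.ceilingHeight ?? 2.5; // fallback: assume 2.5 m ceilings

  let perimeter;
  if (room.length && room.width) {
    perimeter = 2 * (room.length + room.width);
  } else if (room.floorArea) {
    // Fallback: no wall dimensions extracted, so approximate the room
    // as a square with the same floor area.
    const side = Math.sqrt(room.floorArea);
    perimeter = 4 * side;
  } else {
    return null; // not enough data to estimate
  }

  return perimeter * height;
}

// Example: a 5 m x 4 m room -> perimeter 18 m -> 45 m² of wall surface.
console.log(wallSurfaceArea({ length: 5, width: 4 })); // 45
```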
by Sean Spaniel
# Predict Housing Prices with a Neural Network

This n8n template demonstrates how a simple Multi-Layer Perceptron (MLP) neural network can predict housing prices. The prediction is based on four key features, processed through a three-layer model.

- **Input Layer:** Receives the initial data via a webhook that accepts four query parameters.
- **Hidden Layer:** Composed of two neurons. Each neuron calculates a weighted sum of the inputs, adds a bias, and applies the ReLU activation function.
- **Output Layer:** Contains one neuron that calculates the weighted sum of the hidden layer's outputs, adds its bias, and returns the final price prediction.

(A JavaScript sketch of this forward pass appears after the example below.)

## Setup

This template works out of the box and requires no special configuration or prerequisites. Just import the workflow to get started.

## How to Use

Trigger this workflow by sending a GET request to the webhook endpoint, including the house features as query parameters in the URL.

**Endpoint:** `/webhook/regression/house/price`

**Query Parameters:**

- `square_feet`: The total square footage of the house.
- `number_rooms`: The total number of rooms.
- `age_in_years`: The age of the house in years.
- `distance_to_city_in_km`: The distance to the nearest city center in kilometers.

## Example

Here's an example curl request for a 1,500 sq ft, 3-room house that is 10 years old and 5 km from the city.

**Request:**

```bash
curl "https://your-n8n-instance.com/webhook/regression/house/price?square_feet=1500&number_rooms=3&age_in_years=10&distance_to_city_in_km=5"
```

**Response:**

```json
{ "price": 53095.832123960805 }
```
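The template ships with its own weights inside the workflow. Purely to illustrate the three-layer structure described above, here is a self-contained forward pass with placeholder numbers (the real template's weights and biases differ):

```javascript
// Forward pass of a 4-2-1 MLP with ReLU on the hidden layer.
// All weights and biases below are placeholders, not the template's values.
const relu = (x) => Math.max(0, x);
const dot = (w, x) => w.reduce((sum, wi, i) => sum + wi * x[i], 0);

function predictPrice(features) {
  // features = [square_feet, number_rooms, age_in_years, distance_to_city_in_km]
  const hiddenWeights = [
    [0.03, 1.2, -0.5, -0.8], // hidden neuron 1
    [0.01, 0.7, -0.2, -0.3], // hidden neuron 2
  ];
  const hiddenBiases = [5, 2];

  // Each hidden neuron: weighted sum + bias, then ReLU.
  const hidden = hiddenWeights.map((w, i) => relu(dot(w, features) + hiddenBiases[i]));

  // Output neuron: weighted sum of hidden activations + bias.
  const outputWeights = [800, 300];
  const outputBias = 1000;
  return dot(outputWeights, hidden) + outputBias;
}

console.log(predictPrice([1500, 3, 10, 5])); // a price estimate in dollars
```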
by Luis Hernandez
# Custom Interface for GLPI with n8n

Transform your GLPI system's user experience with a modern, optimized web interface that simplifies technical support ticket creation.

## How it works

1. **Unified entry portal:** Users access an intuitive web form where they select between "Request" or "Incident".
2. **Custom data collection:** Specific forms adapt fields based on ticket type, requesting only relevant information.
3. **Automatic processing:** The system automatically maps categories and priorities and creates tickets in GLPI via the REST API (a minimal sketch of this call appears at the end of this description).
4. **File management:** Allows document attachments that are automatically linked to the created ticket.
5. **Confirmation and tracking:** Provides the generated ticket ID for future follow-up.

## Key benefits

- More user-friendly than the native GLPI interface
- Adaptive forms based on request type
- Error reduction through automatic validations
- Mobile-optimized user experience
- Seamless integration with existing GLPI

## Setup steps

Estimated time: 15-20 minutes

### Prerequisites

- GLPI server with REST API enabled
- User with application administrator privileges
- Application token generated in GLPI

### Basic configuration

1. **Connection variables:** Update the GLPI server URL and application token in the "Configuration Variables" node.
2. **Authentication credentials:** Configure HTTP Basic Auth credentials for the GLPI connection.
3. **Category IDs:** Identify and map ITIL category IDs in the processing nodes.
4. **Form customization:** Adapt fields, options, and categories according to organizational needs.

### Activation

1. Activate the workflow and obtain the web form URLs.
2. Share the portal link with end users.
3. Perform ticket creation tests.

## Ideal use cases

- Companies looking to improve the GLPI user experience
- Organizations needing more intuitive support forms
- IT teams wanting to reduce miscategorized tickets
- Companies requiring mobile-friendly support interfaces

## Technologies used

- n8n (orchestration and web forms)
- GLPI REST API
- HTTP Basic Authentication
- Automatic session management
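The workflow performs these calls from HTTP Request nodes. As a plain JavaScript sketch of the same session flow (the endpoints follow GLPI's documented REST API, but verify the payload fields against your GLPI version; in GLPI, ticket type 1 is an Incident and type 2 a Request):

```javascript
// Create a GLPI ticket via the REST API: open a session, create the
// ticket, then close the session. Replace the placeholders with your values.
const GLPI_URL = 'https://glpi.example.com/apirest.php'; // your GLPI server
const APP_TOKEN = 'YOUR_APP_TOKEN';
const BASIC_AUTH = Buffer.from('user:password').toString('base64');

async function createTicket(title, description, type) {
  // 1. Open a session (returns a session_token).
  const init = await fetch(`${GLPI_URL}/initSession`, {
    headers: { 'App-Token': APP_TOKEN, Authorization: `Basic ${BASIC_AUTH}` },
  });
  const { session_token } = await init.json();

  // 2. Create the ticket (type 1 = Incident, 2 = Request).
  const created = await fetch(`${GLPI_URL}/Ticket`, {
    method: 'POST',
    headers: {
      'App-Token': APP_TOKEN,
      'Session-Token': session_token,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ input: { name: title, content: description, type } }),
  });
  const ticket = await created.json(); // e.g., { id: 123 }

  // 3. Close the session.
  await fetch(`${GLPI_URL}/killSession`, {
    headers: { 'App-Token': APP_TOKEN, 'Session-Token': session_token },
  });

  return ticket.id;
}
```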