by Tiartyos
# Voice Cloning Workflow - Zyphra Zonos API

## Who is this for?
This workflow is designed for developers, content creators, and businesses looking to automate high-quality voice synthesis using AI voice cloning technology.

## What problem does this solve?
It automates the process of generating natural-sounding speech from text using a sample voice file, eliminating the need for manual voice recording and providing consistent voice output for applications like audiobooks, virtual assistants, or content localization.

## What this workflow does
The workflow receives text and voice cloning parameters via webhook, reads a sample voice file from your storage, sends the data to Zyphra's Zonos API for voice synthesis, and saves the generated audio file to your specified output location.

## Prerequisites
You'll need:
- An API key from Zyphra (obtain from https://playground.zyphra.com/settings/api-keys)
- An account registered at https://playground.zyphra.com
- A sample voice file stored on accessible disk/cloud storage
- An n8n instance running with webhook capabilities

## Setup
1. Configure your Zyphra API key in the "Call Zyphra Clone API" node under Header Parameters (Name: `X-API-Key`, Value: your API key).
2. Ensure your sample voice files are accessible at the paths you'll specify.
3. Verify that the webhook endpoint is reachable.

## Supported Audio Formats
The API supports multiple output formats through the `mime_type` parameter:
- **WebM** (default) - `audio/webm`
- **Ogg** - `audio/ogg`
- **WAV** - `audio/wav`
- **MP3** - `audio/mp3` or `audio/mpeg`
- **MP4/AAC** - `audio/mp4` or `audio/aac`

## Usage Example
Endpoint: `POST http://localhost:5678/webhook-test/voice-clone`
Headers: `Content-Type: application/json`

Request body:

```json
{
  "text": "Hello there! This voice sounds just like the sample!",
  "speaking_rate": 18,
  "sample_voice_path": "/data/output/sampleVoice.wav",
  "output_path": "/data/output/",
  "language_iso_code": "en-us",
  "mime_type": "audio/wav",
  "model": "zonos-v0.1-transformer",
  "emotion": {
    "happiness": 0.8,
    "neutral": 0.3,
    "sadness": 0.05,
    "disgust": 0.05,
    "fear": 0.05,
    "surprise": 0.05,
    "anger": 0.05,
    "other": 0.5
  }
}
```

## Parameters
### Required Parameters
- **text**: Text to synthesize into speech
- **sample_voice_path**: Path to your voice sample file
- **output_path**: Directory where generated audio will be saved

### Optional Parameters (with defaults)
- **speaking_rate**: `15` - Speech speed
- **language_iso_code**: `"en-us"` - Language code
- **mime_type**: `"audio/wav"` - Output audio format
- **model**: `"zonos-v0.1-transformer"` - AI model to use
- **emotion**: Object with emotion levels (0-1 scale)
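To trigger the workflow programmatically, a call like the following should work. This is a minimal sketch, assuming the default local webhook URL and the file paths from the usage example above; adjust both for your instance.

```javascript
// Minimal sketch: call the voice-clone webhook from Node.js 18+ (ESM, built-in fetch).
// URL and paths mirror the usage example above; adjust them for your setup.
const response = await fetch("http://localhost:5678/webhook-test/voice-clone", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    text: "Hello there! This voice sounds just like the sample!",
    sample_voice_path: "/data/output/sampleVoice.wav",
    output_path: "/data/output/",
    mime_type: "audio/wav",
  }),
});
console.log(response.status, await response.text());
```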
by David Olusola
# ⚙️ How It Works: LocalRAG.AI

> ⚠️ Note: This system only works for self-hosted n8n instances. It will not function on n8n.cloud or other remote setups.

LocalRAG.AI is a private, on-prem AI assistant that uses your own documents to answer questions intelligently. It combines LangChain, Ollama, Qdrant, and Postgres into a powerful AI pipeline — all running locally for maximum data privacy.

## 🔄 What It Does
1. Monitors your Google Drive folders for new or updated files.
2. Downloads the file, extracts the text, and prepares it.
3. Generates embeddings using your local Ollama model (e.g., LLaMA 3).
4. Stores them in Qdrant, your local vector database.
5. During a chat, it:
   - Uses vector search to retrieve relevant chunks.
   - Combines them with chat history stored in Postgres.
   - Responds via a LangChain AI agent using your local model.

## 🛠️ Setup Steps (Self-hosted Only)
1. Install and self-host n8n (e.g., via Docker).
2. Set up your Ollama instance locally and load your desired LLM (e.g., llama3).
3. Deploy Qdrant locally for vector storage.
4. Connect a Postgres DB to store chat history.
5. Create and import the workflow in n8n.
6. Authenticate Google Drive to monitor folders.
7. Connect credentials for Ollama, Qdrant, and Postgres in the n8n workflow.
8. Start chatting through the Webhook Trigger or a custom UI.

## 🧠 Perfect For:
- Research teams handling confidential data
- Internal documentation Q&A
- AI chatbots that don't rely on OpenAI or cloud
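For reference, the embedding step in this pipeline corresponds to a call like the one below against a local Ollama server. A minimal sketch, assuming Ollama's default port (11434) and the llama3 model mentioned above:

```javascript
// Minimal sketch: generate an embedding with a local Ollama instance.
// Assumes Ollama is listening on its default port and llama3 is pulled.
const res = await fetch("http://localhost:11434/api/embeddings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",
    prompt: "Text chunk extracted from your document...",
  }),
});
const { embedding } = await res.json(); // numeric vector to store in Qdrant
console.log(embedding.length);
```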
by David Ashby
Complete MCP server exposing 2 Wayback API operations to AI agents.

## ⚡ Quick Setup

> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Credentials: Add Wayback API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the Wayback API into an MCP-compatible interface for AI agents.
- **MCP Trigger**: Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes**: Handle API calls to https://api.archive.org
- **AI Expressions**: Automatically populate parameters via `$fromAI()` placeholders
- **Native Integration**: Returns responses directly to the AI agent

## 📋 Available Operations (2 total)
🔧 Wayback (2 endpoints)
- `GET /wayback/v1/available`
- `POST /wayback/v1/available`

## 🤖 AI Integration
**Parameter Handling**: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

**Response Format**: Native Wayback API responses with full data structure
**Error Handling**: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop**: Add the MCP server URL to its configuration
- **Cursor**: Add the MCP server SSE URL to its configuration
- **Custom AI Apps**: Use the MCP URL as a tool endpoint
- **API Integration**: Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup**: No parameter mapping or configuration needed
- **AI-Ready**: Built-in `$fromAI()` expressions for all parameters
- **Production Ready**: Native n8n HTTP request handling and logging
- **Extensible**: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
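For orientation, the underlying availability endpoint can also be exercised directly. A minimal sketch, using the base URL and path listed above; the `url` query parameter is an assumption based on the public Wayback availability API:

```javascript
// Minimal sketch: query the Wayback availability endpoint directly.
// The "url" query parameter is an assumption based on the public availability API.
const params = new URLSearchParams({ url: "example.com" });
const res = await fetch(`https://api.archive.org/wayback/v1/available?${params}`);
const data = await res.json();
console.log(data); // expected to describe the closest archived snapshot, if any
```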
by Ranjan Dailata
## Who this is for?
This workflow is designed for professionals and teams who need real-time, structured insights from Perplexity Search results without manual effort.

## What problem is this workflow solving?
This n8n workflow solves the problem of automating Perplexity Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, such as sending the results to a webhook or another system.

## What this workflow does
**Automates Perplexity Search via Bright Data**
- Uses Bright Data's proxy-based SERP API to run a Google Search query programmatically.
- Makes the process repeatable and scriptable with different search terms and regions/zones.

**Cleans and Extracts Useful Content**
- The Readable Data Extractor uses LLM-based cleaning to remove HTML/CSS/JS from the response and extract pure text data.
- Converts messy, unstructured web content into a structured, machine-readable format.

**Summarizes Search Results**
- Through the Gemini Flash + Summarization Chain, it generates a concise summary of the search results.
- Ideal for users who don't have time to read full pages of search results.

**Formats Data Using AI Agent**
The AI Agent acts like a virtual assistant that:
- Understands search results
- Formats them in a readable, JSON-compatible form
- Prepares them for webhook delivery

**Delivers Results to Webhook**
Sends the final summary + structured search result to a webhook (your app, a Slack bot, Google Sheets, or a CRM).

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token. (A sketch of the resulting request appears at the end of this listing.)
4. In n8n, configure the Google Gemini (PaLM) API account with your Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perplexity Search Request node with the prompt you wish to search for.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
**1. Change the Perplexity Search Input**
- Default: It searches a fixed query or dataset.
- Customize: Accept input from a Google Sheet, Airtable, or a form. Auto-trigger searches based on keywords or schedules.

**2. Customize the Summarization Style (LLM Output)**
- Default: General summary using Google Gemini or OpenAI.
- Customize: Add a tone: formal, casual, technical, executive summary, etc. Focus on specific sections: pricing, competitors, FAQs, etc. Translate the summaries into multiple languages. Add bullet points, pros/cons, or insight tags.

**3. Choose Where the Results Go**
- Options: Email, Slack, Notion, Airtable, Google Docs, or a dashboard.
- Auto-create content drafts for WordPress or newsletters.
- Feed into CRM notes or attach to Salesforce leads.
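To illustrate the header-auth setup from step 3, the authenticated request the workflow makes looks roughly like this. A minimal sketch: the endpoint, zone name, and request fields are illustrative assumptions based on Bright Data's Web Unlocker API, so check your Bright Data dashboard for the exact values:

```javascript
// Minimal sketch of a Web Unlocker request with header authentication.
// Endpoint, zone name, and token are illustrative placeholders.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: "Bearer XXXXXXXXXXXXXX", // your Web Unlocker token
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1", // your Web Unlocker zone name
    url: "https://www.google.com/search?q=perplexity+ai",
    format: "raw", // return the raw page body
  }),
});
console.log(await res.text());
```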
by max e
Turn plain-language chat like “Tomorrow 9 AM: write blog post” into neatly organised Todoist tasks with GPT-4o and n8n—zero code.

# 🪄 Ultimate Personal Todoist Agent

Turn natural-language requests into perfectly-organized Todoist tasks—all on autopilot inside n8n.

> “Add Finish quarterly report by Friday afternoon” → the agent creates the task, sets the due date & priority, and even drops it into the right project. ✨

## 🌟 Why this workflow rocks
- **All-in-one Todoist super‑powers** – create, update, complete, move, archive… every major Todoist endpoint is wired up (tasks, projects, sections, labels, comments).
- **LLM‑powered intent detection** – an OpenAI model interprets plain-English (or emoji‑filled!) messages so you don't have to remember slash‑commands.
- **Minimal setup** – just two credentials and you're live.
- **Battle‑tested building block** – use it as‑is, or plug the Todoist Agent node into your own agents & chatbots.

## 🛠️ What you'll need

| Credential | Where it's used | How to set it up |
| --- | --- | --- |
| OpenAI API | Orchestrator & LLM nodes | Paste your OpenAI secret key into an OpenAI credential in n8n. |
| Todoist OAuth2 | Todoist node and HTTP Request node | Log in to Todoist from your browser to set up the credential in n8n. |

> That's it—no webhooks, no extra secrets.
> Tested with *gpt‑4o‑latest* – the fastest & most accurate model in our trials.

## ⚡ Quick‑start (5 minutes)
1. Import the JSON template (hit ▶️ Try it out on the n8n template page or drag‑drop the file into your canvas).
2. Select your credentials in the two credential dropdowns.
3. Click Test workflow.
4. In the sample Function node, tweak the message field (e.g. “Tomorrow at 9 am: write blog post”).
5. Run → watch your new Todoist task appear.
6. (Optional) Swap the Function node for your favourite chat trigger (Telegram, Slack, WhatsApp, Discord, you name it).

Boom—your personal Todoist genie is alive! 🧞‍♂️

## 🧩 How it works (under the hood)

```
[Trigger / Chat message]
        │
        ▼
[🗂️ Orchestrator Agent] ← OpenAI Chat Model + Short‑term Memory
        │ ↳ Parses intent & entities
        ▼
[🤖 Todoist Agent] ← 15+ Todoist endpoints
        │ ↳ Executes the right call (create, update, complete, etc.)
        ▼
     [Done ✅]
```

The Orchestrator is an example. In production you can drop it and simply expose the Todoist Agent as a tool for any other agent workflow.

## 🎛️ Customising & extending

| Idea | How to do it |
| --- | --- |
| Notion / Sheets sync | After the Todoist Agent node, add a Notion or Google Sheets node to log completed items. |
| Voice commands | Swap the chat trigger for a Speech‑to‑Text node (e.g. Whisper). |

## 🤝 Need custom automations?
Want me to build or tweak something for you? → Email maxemelyanenko@gmail.com and let's make it happen!

## ⚠️ What's not included (yet)
- Shared projects & other Todoist Pro/Business endpoints.
- File attachments in the comments.
- Editing comments.

Pull requests welcome! 🙌
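Under the hood, a "create task" request maps onto Todoist's REST API. A minimal sketch, assuming the public REST v2 endpoint (the agent's actual node configuration may differ):

```javascript
// Minimal sketch: create a Todoist task via the REST v2 API.
// The token is a placeholder; the agent's HTTP Request node does the equivalent.
const res = await fetch("https://api.todoist.com/rest/v2/tasks", {
  method: "POST",
  headers: {
    Authorization: "Bearer YOUR_TODOIST_TOKEN",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    content: "Write blog post",
    due_string: "tomorrow at 9am", // Todoist parses natural-language due dates
    priority: 3, // 1 (normal) to 4 (urgent)
  }),
});
console.log(await res.json()); // the created task, including its id
```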
by David Ashby
Complete MCP server exposing 2 CarbonDoomsDay API operations to AI agents.

## ⚡ Quick Setup

> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Credentials: Add CarbonDoomsDay credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the CarbonDoomsDay API into an MCP-compatible interface for AI agents.
- **MCP Trigger**: Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes**: Handle API calls to https://api.carbondoomsday.com/api
- **AI Expressions**: Automatically populate parameters via `$fromAI()` placeholders
- **Native Integration**: Returns responses directly to the AI agent

## 📋 Available Operations (2 total)
🔧 Co2 (2 endpoints)
- `GET /co2/`: Get CO2 measurements by date
- `GET /co2/{date}/`: CO2 measurements from the Mauna Loa observatory

This data is made available through the good work of the people at the Mauna Loa observatory. Their release notes say: "These data are made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights." We currently scrape the following sources:

- [co2_mlo_weekly.csv]
- [co2_mlo_surface-insitu_1_ccgg_DailyData.txt]
- [weekly_mlo.csv]

We have daily CO2 measurements as far back as 1958. Learn about using pagination via [the 3rd party documentation].

[co2_mlo_weekly.csv]: https://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_mlo_weekly.csv
[co2_mlo_surface-insitu_1_ccgg_DailyData.txt]: ftp://aftp.cmdl.noaa.gov/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_DailyData.txt
[weekly_mlo.csv]: http://scrippsco2.ucsd.edu/sites/default/files/data/in_situ_co2/weekly_mlo.csv
[the 3rd party documentation]: http://www.django-rest-framework.org/api-guide/pagination/#pagenumberpagination

## 🤖 AI Integration
**Parameter Handling**: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

**Response Format**: Native CarbonDoomsDay API responses with full data structure
**Error Handling**: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop**: Add the MCP server URL to its configuration
- **Cursor**: Add the MCP server SSE URL to its configuration
- **Custom AI Apps**: Use the MCP URL as a tool endpoint
- **API Integration**: Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup**: No parameter mapping or configuration needed
- **AI-Ready**: Built-in `$fromAI()` expressions for all parameters
- **Production Ready**: Native n8n HTTP request handling and logging
- **Extensible**: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
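A direct call to the date endpoint looks like the sketch below. The ISO date path format (YYYY-MM-DD) is an assumption based on the `{date}` placeholder above, and the response fields are illustrative:

```javascript
// Minimal sketch: fetch the CO2 measurement for a specific date.
// The ISO date format is an assumption based on the {date} placeholder.
const res = await fetch("https://api.carbondoomsday.com/api/co2/2020-01-01/");
if (res.ok) {
  console.log(await res.json()); // e.g. a record with the date and ppm reading
} else {
  console.error("Request failed:", res.status);
}
```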
by David Ashby
Complete MCP server exposing 3 Background Removal API operations to AI agents.

## ⚡ Quick Setup

> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Credentials: Add Background Removal API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

## 🔧 How it Works
This workflow converts the Background Removal API into an MCP-compatible interface for AI agents.
- **MCP Trigger**: Serves as your server endpoint for AI agent requests
- **HTTP Request Nodes**: Handle API calls to https://api.remove.bg/v1.0
- **AI Expressions**: Automatically populate parameters via `$fromAI()` placeholders
- **Native Integration**: Returns responses directly to the AI agent

## 📋 Available Operations (3 total)
🔧 Account (1 endpoint)
- `GET /account`: Fetch Account Balance

🔧 Improve (1 endpoint)
- `POST /improve`: Submit Image for Improvement

🔧 Removebg (1 endpoint)
- `POST /removebg`: Remove Image Background

## 🤖 AI Integration
**Parameter Handling**: AI agents automatically provide values for:
- Path parameters and identifiers
- Query parameters and filters
- Request body data
- Headers and authentication

**Response Format**: Native Background Removal API responses with full data structure
**Error Handling**: Built-in n8n HTTP request error management

## 💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
- **Claude Desktop**: Add the MCP server URL to its configuration
- **Cursor**: Add the MCP server SSE URL to its configuration
- **Custom AI Apps**: Use the MCP URL as a tool endpoint
- **API Integration**: Direct HTTP calls to MCP endpoints

## ✨ Benefits
- **Zero Setup**: No parameter mapping or configuration needed
- **AI-Ready**: Built-in `$fromAI()` expressions for all parameters
- **Production Ready**: Native n8n HTTP request handling and logging
- **Extensible**: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
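For reference, the core background-removal call can be made directly. A minimal sketch, assuming remove.bg's documented JSON body with an `image_url` field (check the remove.bg docs for the full parameter list; the key is a placeholder):

```javascript
// Minimal sketch: remove an image background via the remove.bg API (Node.js).
// The image_url/size fields follow remove.bg's public docs; the key is a placeholder.
const res = await fetch("https://api.remove.bg/v1.0/removebg", {
  method: "POST",
  headers: {
    "X-Api-Key": "YOUR_REMOVE_BG_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    image_url: "https://example.com/photo.jpg",
    size: "auto",
  }),
});
const png = Buffer.from(await res.arrayBuffer()); // binary image result
console.log("Received", png.length, "bytes");
```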
by Abhishek Patoliya
This workflow allows you to scrape website content, clean the HTML, extract structured information using GPT-4o-mini, and store the results along with SEO keywords in Airtable. Ideal for building keyword lists and organizing web content for SEO research.

## Setup Instructions

### 1. Prerequisites
- n8n Community or Cloud instance
- Airtable account with a base and table ready
- OpenAI API key with access to GPT-4o-mini

### 2. Airtable Structure
Ensure your Airtable table has the following fields:

| Field Name | Type | Notes |
| --- | --- | --- |
| Website Name | String | Name or URL of the website |
| Data | String | Cleaned website text |
| Keyword | String | Extracted SEO keyword list |
| Status | Options | Values: Todo, In progress, Done |

### 3. Node Setup
- ✅ Form Trigger: Collects the website URL from the user.
- ✅ HTTP Request: Fetches the website content.
- ✅ HTML Cleaner (Code Node): Strips out styles, tags, and whitespace to get clean text (see the sketch at the end of this listing).
- ✅ Topic Extractor (AI Agent + GPT-4o-mini): Extracts topic-wise information from the cleaned website content.
- ✅ Text Cleaner (Code Node): Removes unwanted symbols like `###` and `**`.
- ✅ Keyword Extractor (AI Agent + GPT-4o-mini): Generates a list of 90 important SEO keywords.
- ✅ Airtable Upsert: Stores the cleaned data, keywords, and status in Airtable.

### 4. Key Features
- ✅ Automatic website content scraping
- ✅ Clean HTML and extract plain text
- ✅ Use GPT-4o-mini for topic-wise information extraction
- ✅ Generate 90-keyword SEO lists
- ✅ Store and manage data in Airtable

### 5. Use Cases
- SEO keyword research
- Competitor website content analysis
- Structured website data collection

## Additional Workflow Recommendations

### ✅ Rename Nodes for Clarity

| Current Name | Suggested Name |
| --- | --- |
| Website Name | Website URL Input Form |
| HTTP Request | Fetch Website Content |
| Code | HTML to Plain Text Cleaner |
| Split Out1 | Clean Text Splitter |
| AI Agent1 | Topic Extractor (GPT-4o-mini) |
| Code1 | Text Cleanup Formatter |
| Split Out2 | Final Text Splitter |
| AI Agent | Keyword Extractor (GPT-4o-mini) |
| Airtable | Airtable Data Upsert |
| Wait1 | Delay Before Merge |
| Merge | Combine Data for Airtable |
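The HTML Cleaner step can be as simple as a few regex passes in an n8n Code node. A minimal sketch, assuming the HTTP Request node's page body arrives in a field named `data` (your field name may differ):

```javascript
// Minimal sketch of the HTML Cleaner Code node (run once for all items).
// Assumes the fetched page body is in item.json.data; adjust to your field name.
const html = $input.first().json.data;

const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, " ") // drop inline scripts
  .replace(/<style[\s\S]*?<\/style>/gi, " ")   // drop inline styles
  .replace(/<[^>]+>/g, " ")                    // strip remaining tags
  .replace(/&nbsp;/g, " ")                     // common HTML entity
  .replace(/\s+/g, " ")                        // collapse whitespace
  .trim();

return [{ json: { text } }];
```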
by Tom Cao
# 🔐 Advanced SSL Health Monitor

## 👤 Who is this for?
This workflow is designed for DevOps engineers, IT administrators, and security professionals who need comprehensive SSL certificate monitoring and health assessment across multiple domains — featuring dual verification and professional reporting without relying on expensive monitoring services.

## 🧩 What It Does
1. **Daily Trigger** runs the workflow every morning for proactive monitoring.
2. **URL Collection** fetches the list of website URLs to monitor from your data source.
3. **Dual SSL Analysis**:
   - Free SSL assessment script — from sysadmin-toolkit on GitHub
   - SSL-Checker.io API — external verification for cross-validation
4. **Comprehensive Health Check**:
   - Certificate expiration monitoring (customizable threshold)
   - SSL configuration security assessment
   - Protocol support analysis (TLS 1.3, 1.2, deprecated protocols)
   - Cipher suite strength evaluation
   - Vulnerability scanning (POODLE, BEAST, etc.)
   - Compliance checking (PCI DSS, NIST, FIPS)
5. **Smart Alert System** sends Discord notifications when (see the sketch at the end of this listing):
   - Certificates expire within the threshold (default: 30 days)
   - SSL configuration issues are detected (weak ciphers, deprecated protocols)
   - Security vulnerabilities are found
   - Compliance standards are not met
   - The grade drops below an acceptable level (configurable)

## 🎯 Key Features
- 🔄 **Dual Verification**: Cross-checks results between the internal scanner and an external API
- 📊 **SSL Labs-Style Grading**: A+ to F rating system with detailed analysis
- 🛡️ **Security Assessment**: Vulnerability detection and compliance checking
- 📱 **Discord Integration**: Rich embed notifications with color-coded alerts

## ⚙️ Setup Instructions
1. **Data Source**:
   - Configure your URL source from Notion
   - Ensure it contains a URL column with the domains to monitor
2. **Credentials**:
   - Set up a Discord webhook for alert notifications
   - Configure any required API credentials for data sources
3. **Customize Thresholds**:
   - Expiration alert: days before expiry (default: 30 days)
   - Grade threshold: minimum acceptable SSL grade (default: B)
   - Alert severity: choose which issues trigger notifications
4. **Advanced Configuration**:
   - Modify vulnerability checks based on your security requirements
   - Adjust compliance standards for your industry needs
   - Customize Discord message formatting and alert channels

## 🧠 Technical Notes
- **Dual-Check Reliability**: Combines the custom Bubobot scanner with ssl-checker.io for maximum accuracy
- **No Vendor Lock-in**: Uses free public APIs and open-source tools
- **Professional Reporting**: Generates SSL Labs-quality assessments
- **Security-First Approach**: Comprehensive vulnerability and compliance checking
- **Flexible Alerting**: Discord integration with rich formatting and conditional logic

This workflow provides a comprehensive SSL security monitoring solution that rivals enterprise-grade tools while remaining completely open-source and free.
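The expiration check behind the alert system boils down to a date comparison. A minimal sketch, assuming the scanner reports the certificate's expiry as an ISO date string (the field name `notAfter` is illustrative):

```javascript
// Minimal sketch of the expiry-threshold check (field names are illustrative).
const THRESHOLD_DAYS = 30; // matches the default alert threshold above

function shouldAlert(cert) {
  const msLeft = new Date(cert.notAfter).getTime() - Date.now();
  const daysLeft = Math.floor(msLeft / (1000 * 60 * 60 * 24));
  return { daysLeft, alert: daysLeft <= THRESHOLD_DAYS };
}

console.log(shouldAlert({ notAfter: "2025-01-15T00:00:00Z" }));
```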
by DataAnts
# Dynamically Run SuiteQL Queries in NetSuite via HTTP Webhook in n8n

> Important: This template uses a NetSuite community node, so it only works on self-hosted n8n. Cloud-based n8n instances currently do not support community nodes.

## Summary
This workflow template allows you to dynamically run SuiteQL queries in NetSuite by sending an HTTP request to an n8n Webhook node. Once triggered, the workflow uses token-based authentication to execute your SuiteQL query and returns the results as JSON. This makes it easy to integrate real-time NetSuite data into dashboards, reporting tools, or other applications.

## Who Is This For?
- **Developers & Integrators**: Easily embed NetSuite data retrieval into custom apps or internal tools.
- **Enterprises & Consultants**: Integrate dynamic reporting or data enrichment from NetSuite without manual exports.
- **System Administrators**: Automate routine queries and reduce manual intervention.

## Use Cases & Benefits
1. **Dynamic Data Access**: Send any SuiteQL query on demand instead of hardcoding queries or manually running reports.
2. **Seamless Integration**: Quickly pull NetSuite data into front-end systems (like Excel dashboards, custom web apps, or internal tools) by calling the webhook endpoint.
3. **Simplified Reporting**: Automate data extraction and formatting, reducing the need for manual exports and improving efficiency.

## How It Works
1. **Trigger**: An HTTP request to the webhook node initiates the workflow.
2. **Input Processing**: The workflow reads the SuiteQL query from the incoming request parameter (`suiteql`).
3. **Query Execution**: The NetSuite node uses your token-based authentication credentials to run the SuiteQL query.
4. **Response**: Results are returned as JSON in the HTTP response, ready for further processing or immediate consumption.

## Prerequisites & Setup
- **NetSuite Community Node**: This workflow requires the [NetSuite community node](https://www.npmjs.com/package/n8n-nodes-netsuite). Make sure your self-hosted n8n instance supports community nodes.
- **NetSuite Token-Based Authentication**: Enable TBA in NetSuite and obtain the required consumer key, consumer secret, token ID, and token secret.
- **n8n Webhook**: Copy the auto-generated webhook URL (e.g. `http://<your-n8n-domain>/webhook/unique-id`) from the Webhook node.

## Usage
Send an HTTP GET or POST request to the webhook with your SuiteQL query. For example:

```
curl "http://<your-n8n-domain>/webhook/unique-id?suiteql=SELECT%20*%20FROM%20account%20LIMIT%2010"
```

The workflow will execute the query and return JSON data.

## Customization
- **Change the Query**: Simply adjust the `suiteql` parameter in your HTTP request to run different SuiteQL statements.
- **Data Transformation**: Insert nodes (e.g., Function, Set, or Format) to modify or reformat the data before returning it.
- **Extend Integration**: Chain additional nodes to push the retrieved data to other services (Google Sheets, Slack, custom dashboards, etc.).

## Additional Notes
Remember that this template is only compatible with self-hosted n8n because it uses a community node. If you have questions, suggestions, or need support, contact us at support@dataants.org.
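The same call from JavaScript, sketched with the query passed in the `suiteql` URL parameter exactly as in the curl example (replace the host placeholder with your own n8n domain):

```javascript
// Minimal sketch: invoke the SuiteQL webhook from Node.js 18+.
// Replace the base URL with your own webhook URL.
const base = "http://your-n8n-domain"; // placeholder: your n8n host
const url = new URL("/webhook/unique-id", base);
url.searchParams.set("suiteql", "SELECT * FROM account LIMIT 10");

const res = await fetch(url);
console.log(await res.json()); // query results as JSON
```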
by Ficky
# Build a Redis-Powered CRUD App with HTML Frontend

This workflow demonstrates how to use n8n to build a complete, self-contained CRUD (Create, Read, Update, Delete) application without relying on any external server or hosting. It not only acts as the backend, handling all CRUD operations through Webhook endpoints, but also serves a fully functional HTML Single Page Application (SPA) directly via a webhook response. Redis is used as a lightweight data store, providing fast and simple key-value storage with auto-incremented IDs.

Because both the frontend (HTML app) and backend (API endpoints) are managed entirely within a single n8n workflow, you can quickly prototype or deploy small tools without additional infrastructure. This approach is ideal for:
- Rapidly creating no-code or low-code applications
- Running fully browser-based tools served directly from n8n
- Teaching or demonstrating n8n + Redis integration in a single workflow

## Features
- Add new items with an auto-incremented ID
- Edit existing items
- Delete specific items
- Reset all data (clear storage and reset the auto-increment ID)
- Single HTML frontend for demonstration (no framework required)

## Setup Instructions

### 1. Prerequisites
Before importing and running the workflow, make sure you have:
- A running n8n instance (self-hosted or cloud)
- A running Redis server (local or remote)

### 2. API Path Setup
For the REST API, use a consistent path. For example, if you choose `items` as the path:
- **2a. Get All Items**: Method GET, Endpoint `items`
- **2b. Add Item**: Method POST, Endpoint `items`
- **2c. Edit Item**: Method PUT, Endpoint `items`
- **2d. Delete Item**: Method DELETE, Endpoint `items`
- **2e. Reset Items**: Method POST, Endpoint `items-reset`

### 3. Configure the API URL
Set the API URL in the SET API URL node. Use your n8n webhook URL, for example: `https://yourn8n.com/webhook/items`

### 4. Run the HTML App
Once everything is set:
1. Open the webhook URL for the HTML app in a browser.
2. The CRUD interface will load and connect to the API endpoints automatically.
3. You can now add, edit, delete, or reset items directly from the web interface.

## Workflows

### 1. Render the HTML CRUD App
This webhook serves a self-contained HTML Single Page Application (SPA) for basic CRUD operations. The HTML content is returned directly in the webhook response. This setup is ideal for lightweight, browser-based tools without external hosting.

How to use:
- Open the webhook URL in a browser
- The CRUD interface will load and connect to the data source via API calls
- Before using, make sure to edit the `api_url` in the SET API URL node to match your webhook endpoint

### 2a. REST API: Get All Items
This webhook handles retrieving all saved items from Redis. Each item is returned with its corresponding ID and associated data (e.g., name). This endpoint is used by the HTML CRUD App to display the full list of items.
- **Method**: GET
- **Function**: Fetches all items stored in Redis and returns them as a JSON array

### 2b. REST API: Add Item
This webhook handles the Add Item functionality. This endpoint is typically called by the HTML CRUD App when adding a new item.
- **Method**: POST
- **Request Body**:
```json
{ "name": "item name" }
```
- **Function**: Generates an auto-incremented ID using Redis and saves the data under that ID

### 2c. REST API: Edit Item
This webhook handles updating an existing item in Redis.
- **Method**: PUT
- **Request Body**:
```json
{ "id": 1, "name": "Updated Item Name" }
```
- **Function**: Finds the item by the given `id` and updates its data in Redis

### 2d. REST API: Delete Item
This webhook handles deleting a specific item from Redis.
- **Method**: DELETE
- **Request Body**:
```json
{ "id": 1 }
```
- **Function**: Removes the item with the given `id` from Redis

### 2e. REST API: Reset Items
This webhook handles resetting all data in the application.
- **Method**: POST
- **Function**:
  - Deletes all stored items from Redis
  - Resets the auto-increment ID by deleting the data in Redis
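As a quick client-side test of the endpoints above, something like this should work once the workflow is active. A minimal sketch, assuming the example webhook URL from step 3:

```javascript
// Minimal sketch: exercise the CRUD endpoints (URL from the setup example).
const API = "https://yourn8n.com/webhook/items";

// Add an item
await fetch(API, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "First item" }),
});

// Rename item 1
await fetch(API, {
  method: "PUT",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ id: 1, name: "Renamed item" }),
});

// List everything
console.log(await (await fetch(API)).json());
```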
by David Olusola
# 🎯 JavaScript Master Class - Interactive Code Tutorial

## 📚 How It Works
This tutorial is designed as a self-paced learning experience where you explore working JavaScript code examples. Unlike traditional tutorials, you learn by examining real implementations and understanding how they work.

### 🔍 The Learning Method:
1. Execute first - See the workflow in action
2. Open each node - This is where the real learning happens!
3. Study the code - Read JavaScript implementations and comments
4. Understand the flow - See how data transforms between nodes
5. Experiment - Modify code to test your understanding

### 🎮 The "Game" Concept:
- It's not a real game - it's a gamified learning experience
- Uses RPG elements (XP, levels, achievements) to make learning engaging
- Simulates progression through 3 difficulty levels
- **The main learning happens when you open nodes and read the code!**

## 🚀 Setup Steps

### Step 1: Import the Template
1. Copy the JSON template provided
2. Open your n8n instance
3. Create a new workflow
4. Press Ctrl+A (or Cmd+A on Mac) to select all
5. Press Ctrl+V (or Cmd+V) to paste the JSON
6. Click "Save" and name it: JavaScript Master Class - Interactive Tutorial

### Step 2: Execute the Workflow
1. Click "Test workflow" or "Execute workflow"
2. Watch it run through all nodes automatically
3. See the final results and progression simulation

### Step 3: Start Learning (The Important Part!)
Now the real learning begins - you must open each node manually.

🔍 For each Code node:
1. Double-click the node to open it
2. Read the JavaScript code carefully
3. Study the comments - they explain key concepts
4. Understand the logic - how input becomes output
5. Note the techniques used in each challenge

📖 For each sticky note:
- Read the explanations and context
- Understand the learning objectives
- Note the skills being taught

## 🎯 Learning Path

### Level 1: Data Warrior (Beginner)
📂 Open node: 🎲 Level 1: Data Warrior
- **Focus**: Data deduplication using filter() and findIndex()
- **Key Skills**: Array methods, duplicate detection
- **What to Study**: How the deduplication algorithm works (a sketch of the pattern appears at the end of this listing)

### Level 2: API Ninja (Intermediate)
📂 Open node: ⚔️ Level 2: API Ninja
- **Focus**: Data transformation and validation
- **Key Skills**: String manipulation, validation logic, error handling
- **What to Study**: How to clean and validate messy API data

### Level 3: Automation Master (Advanced)
📂 Open node: 🏆 Final Boss: Automation Master
- **Focus**: Complex workflow processing
- **Key Skills**: Task orchestration, priority sorting, error handling
- **What to Study**: How to build robust automation systems

## 💡 Learning Tips

### 🔍 Active Exploration:
- **Don't just run it** - open every single node!
- **Read all comments** - they contain key insights
- **Compare approaches** - see how complexity increases
- **Try modifications** - change values and see what happens

### 📝 Study Techniques:
- **Take notes** on patterns you see
- **Copy interesting code** snippets for reference
- **Try to explain** each function to yourself
- **Test your understanding** by modifying the code

### 🧪 Experimentation:
- **Change filter conditions** in Level 1
- **Modify validation rules** in Level 2
- **Adjust workflow logic** in Level 3
- **Break something** and fix it - great for learning!

## ⚠️ Important Notes

### 🎮 "Game" Reality Check:
- This is NOT an interactive game where you make choices
- It's a code tutorial with game-like progression themes
- The "game" runs automatically when executed
- **Real learning happens when you manually open and study each node**

### 📚 Educational Value:
- **Primary learning**: Understanding JavaScript implementations
- **Secondary learning**: n8n workflow patterns
- **Bonus learning**: Problem-solving approaches

### 🔧 Technical Requirements:
- Working n8n instance
- Basic JavaScript knowledge helpful but not required
- Willingness to explore and experiment

## 🎯 Success Metrics
You'll know you're learning when you can:
- ✅ Explain how each deduplication algorithm works
- ✅ Identify the validation patterns used
- ✅ Understand the workflow orchestration logic
- ✅ Modify the code to handle different scenarios
- ✅ Apply these patterns to your own projects

## 🤔 Next Steps
After completing this tutorial:
1. Apply the patterns to your own workflows
2. Experiment with variations
3. Build something using these techniques
4. Share your learnings with the community

Remember: The magic happens when you open each node and study the code! 🔍
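For readers who want a preview before opening the Level 1 node, the filter()/findIndex() deduplication pattern it teaches looks roughly like this (the field name `email` and the sample data are illustrative; the node's actual data will differ):

```javascript
// Minimal sketch of the Level 1 deduplication pattern (illustrative data).
const records = [
  { id: 1, email: "a@example.com" },
  { id: 2, email: "b@example.com" },
  { id: 3, email: "a@example.com" }, // duplicate of id 1
];

// Keep a record only if its index matches the first index where its key appears.
const unique = records.filter(
  (record, index, all) => index === all.findIndex((r) => r.email === record.email)
);

console.log(unique); // keeps the first occurrence of each email
```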