by Oneclick AI Squad
This workflow uses data from Philips IntelliVue devices to automatically track patient vitals such as heart rate and oxygen levels. It quickly spots critical health issues and sends alerts to healthcare staff for fast action. The system saves data for records and helps improve patient care with real-time updates. It is simple to set up and adjust for different needs.

Essential Information
- Processes data from Philips IntelliVue devices to monitor vitals instantly.
- Filters and categorizes conditions as critical or non-critical based on thresholds.
- Sends clinical alerts for critical conditions and logs data for review.
- Runs every 30 seconds to ensure timely updates.

System Architecture
Data Collection Pipeline:
- **Poll Device Data Every 30s**: Continuously retrieves vitals from Philips IntelliVue devices.
- **Fetch from IntelliVue Gateway**: Retrieves data via HTTP GET requests.
Processing Pipeline:
- **Process Device Data**: Analyzes and validates the data stream.
Alert Generation Flow:
- **Validate & Enrich Data**: Ensures accuracy and adds patient context.
- **Save to Patient Database**: Stores data for records.
- **Check Alert Status**: Applies rules to trigger alerts.
- **Send Clinical Alert**: Notifies staff for critical conditions.

Implementation Guide
1. Import the workflow JSON into n8n.
2. Configure the Philips IntelliVue gateway URL and test with sample data.
3. Set up alert credentials (e.g., email).
4. Test and adjust rule thresholds.

Technical Dependencies
- Philips IntelliVue devices for vitals data.
- n8n for automation.
- Email or messaging API for alerts.
- Database for data storage.

Customization Possibilities
- Adjust Switch node rules for critical thresholds (see the sketch below).
- Customize alert messages.
- Modify the database schema.
- Add logging for analysis.
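The critical/non-critical split described above comes down to comparing each vital against an allowed range. A minimal JavaScript sketch of that kind of rule logic, with hypothetical threshold values (not the template's actual Switch node configuration), that you could adapt inside a Code node:

```javascript
// Hypothetical thresholds -- adjust to your clinical guidelines.
const THRESHOLDS = {
  heartRate: { min: 50, max: 120 },   // beats per minute
  spo2:      { min: 92, max: 100 },   // oxygen saturation, %
};

// Returns "critical" if any vital falls outside its allowed range.
function classifyVitals(vitals) {
  const breaches = Object.entries(THRESHOLDS)
    .filter(([name, range]) => {
      const value = vitals[name];
      return value !== undefined && (value < range.min || value > range.max);
    })
    .map(([name]) => name);
  return { status: breaches.length ? 'critical' : 'non-critical', breaches };
}

// Example reading from a device poll
console.log(classifyVitals({ heartRate: 134, spo2: 95 }));
// -> { status: 'critical', breaches: [ 'heartRate' ] }
```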
by Rahul Joshi
Description
This workflow acts as a CI/CD-style test harness for validating other n8n workflows. It executes a target workflow (here: Archive Payment Receipts), evaluates pass/fail outcomes, and generates structured reports. Results are automatically archived to Google Drive, logged in Google Sheets, and synced with ClickUp for visibility. Both success and failure scenarios are handled with standardized formatting.

What This Template Does (Step-by-Step)
- ⚡ Manual Trigger – Start the test run manually.
- ▶️ Execute Target Workflow Under Test – Calls the specified workflow (Archive Payment Receipts) and captures its output, even if it errors.
- ✅ Test Result Evaluation (If Node) – Checks whether the output contains errors. Pass path → success formatting + archival. Fail path → failure formatting + logging.
- 📄 Format Success Test Result (Set Node) – Creates a structured result object with: Status: ✅ Passed, Workflow Name, Timestamp.
- 📄 Format Failed Test Result (Set Node) – Same as above, but with ❌ Failed status.
- 📝 Generate Success/Failure Report Text (Code Node) – Converts structured data into a human-readable report string (see the sketch below).
- 📦 Convert Report to Text File – Transforms the text into a .txt file for archiving.
- ☁️ Archive Reports to Google Drive – Saves .txt files (success/failure) into the report storage folder with timestamped filenames.
- ✏️ Update ClickUp Task (Success/Failure) – Posts results directly into a ClickUp task for visibility.
- 📊 Log Error Details to Error Tracking Sheet (Google Sheets) – Appends raw error objects to an error log sheet for debugging and trend analysis.

Prerequisites
- Target workflow to test (e.g., Archive Payment Receipts)
- Google Drive folder for report storage
- Google Sheets (Error Log tab)
- ClickUp API credentials
- n8n instance

Key Benefits
✅ Automates workflow regression testing
✅ Captures pass/fail outcomes with a full audit trail
✅ Maintains error logs for debugging and reliability improvements
✅ Keeps stakeholders updated via ClickUp integration
✅ Supports compliance with archived test reports

Perfect For
- Teams running workflow QA & testing
- Organizations needing audit-ready test reports
- DevOps pipelines with continuous validation of automations
- Stakeholders requiring real-time visibility into workflow health
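The report-generation step is essentially string templating over the structured result object. A rough JavaScript sketch of what that Code node logic might look like; the field names here are illustrative and not taken from the workflow JSON:

```javascript
// Illustrative result object, as the Set node might produce it.
const result = {
  status: 'Failed',                       // 'Passed' or 'Failed'
  workflowName: 'Archive Payment Receipts',
  timestamp: new Date().toISOString(),
  error: 'HTTP 403 from Google Drive',    // only present on failure
};

// Turn the structured result into a human-readable report string.
function buildReport(r) {
  const lines = [
    `Test report for: ${r.workflowName}`,
    `Status: ${r.status === 'Passed' ? '✅ Passed' : '❌ Failed'}`,
    `Run at: ${r.timestamp}`,
  ];
  if (r.error) lines.push(`Error: ${r.error}`);
  return lines.join('\n');
}

console.log(buildReport(result));
```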
by Jitesh Dugar
Automate your post-event Instagram carousel using a fan-out and merge pattern. One Code node splits the photos array into individual n8n items. Every photo then flows through HTTP Fetch, Upload to URL, and a child container Code node independently. A Merge node collects all completed items, a final Code node sorts them and builds the children parameter, then the carousel is assembled and published. Upload to URL is a real visible node on the canvas, running once per photo.

What This Workflow Does

Intake and Fan-Out
- **Webhook - Receive Event Payload** -- accepts a POST to /ig-event-recap. Responds after the full workflow completes. Typically triggered by a photographer tool, event CMS, or manual call immediately after event photos are ready.
- **Code - Validate Caption and Split Photos** -- validates the photos array (2-10 items, HTTPS URLs, deduplicated), builds a multi-block storytelling caption from all event metadata, then returns one item per photo. Each item carries the photo URL, slideIndex, fullCaption, eventName, and IG credentials. This fans the downstream nodes out to process all photos in parallel.

Per-Photo Pipeline (runs once per item)
- **HTTP - Fetch Photo Binary** -- downloads each photoUrl as a binary file. Runs once per item. Works with any public HTTPS image source.
- **Upload to URL** -- real node on the canvas, runs once per photo item. Uploads the binary and returns a stable public CDN URL. This step is mandatory because Instagram child container endpoints require a direct public HTTPS image URL and reject binary payloads.
- **Code - Create Child Container** -- calls the Instagram Graph API per item with is_carousel_item set to true. Returns childContainerId and cdnUrl alongside the slideIndex for sort ordering downstream.

Fan-In and Carousel Assembly
- **Merge All Child Items** -- waits for all per-photo items to complete and collects them into a single execution. This is the fan-in point. Mode: Merge by Position.
- **Code - Build Children Param** -- receives all merged items, sorts by slideIndex to guarantee photo order is preserved, joins child container IDs as a comma-separated string, and re-attaches the caption and event metadata from the first item (see the sketch after this section).
- **HTTP - Create Carousel Container** -- POSTs to the Instagram Graph API /media endpoint with media_type CAROUSEL, the sorted children ID string, and the caption. The caption is set only on the parent container.
- **Wait 8s** -- buffer before the publish call to allow Instagram to validate all child assets.
- **HTTP - Publish Carousel** -- calls /media_publish. Returns the live Post ID.
- **HTTP - Fetch Post Metadata** -- retrieves permalink and timestamp.

Logging and Notification
- **Airtable - Log Published Post** -- creates a record with event name, Post ID, permalink, slide count, and publish timestamp. For client reporting and campaign tracking.
- **Slack - Notify Team** -- sends event name, slide count, permalink, and publish timestamp.
- **Respond to Webhook** -- returns a structured JSON payload with mediaId, permalink, slideCount, eventName, and publishedAt.

Key Features
- **Fan-out and merge pattern** -- no Split In Batches loop. Photos are processed in parallel via item splitting, not serial iteration.
- **Upload to URL is a real canvas node** -- runs once per photo item between HTTP Fetch and Code Create Child. Fully visible and configurable in the workflow editor.
- **SlideIndex sorting** -- the final Code node sorts merged items by slideIndex before joining IDs, preserving the exact photo order from the original payload regardless of completion order.
- **Conditional caption blocks** -- every metadata field is optional. Missing fields are skipped without errors. The caption is auto-truncated to 2,200 characters.
- **Airtable logging** -- every published carousel is recorded for reporting without writing back to a source system.

What You Will Need
Credentials
- **Upload to URL** -- configured in n8n
- **Instagram Graph API** -- Business or Creator account access token
- **Airtable Personal Access Token** -- for the event posts log base
- **Slack OAuth2** -- for team notifications

Perfect For
- **Nightlife venues and clubs** -- post the recap carousel automatically after the photographer delivers the final selects
- **Music festivals and brand activations** -- trigger from any webhook-capable CMS or content tool the moment photos are approved
- **Event photographers and agencies** -- deliver the published carousel as part of the photo handoff, not as a separate manual step
- **Corporate events teams** -- keep an Airtable log of every carousel post with full metadata for quarterly reporting
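As a rough illustration of the two Code nodes that do the fan-out and fan-in, here is a JavaScript sketch; the property names follow the description above but are assumptions, not the exact fields in the workflow:

```javascript
// Fan-out: one item per photo, carrying the slide order and shared caption.
function splitPhotos(payload) {
  return payload.photos.map((photoUrl, i) => ({
    photoUrl,
    slideIndex: i,
    fullCaption: payload.caption,
    eventName: payload.eventName,
  }));
}

// Fan-in: sort merged items by slideIndex and join child container IDs.
function buildChildrenParam(items) {
  const ordered = [...items].sort((a, b) => a.slideIndex - b.slideIndex);
  return {
    children: ordered.map((it) => it.childContainerId).join(','),
    caption: ordered[0].fullCaption,
    eventName: ordered[0].eventName,
  };
}

// Fan-out example: two photos become two independent items.
console.log(splitPhotos({
  photos: ['https://cdn.example.com/a.jpg', 'https://cdn.example.com/b.jpg'],
  caption: 'Recap',
  eventName: 'Launch',
}).length); // -> 2

// Fan-in example: items may finish uploading out of order; the sort restores it.
console.log(buildChildrenParam([
  { slideIndex: 2, childContainerId: 'C3', fullCaption: 'Recap', eventName: 'Launch' },
  { slideIndex: 0, childContainerId: 'C1', fullCaption: 'Recap', eventName: 'Launch' },
  { slideIndex: 1, childContainerId: 'C2', fullCaption: 'Recap', eventName: 'Launch' },
]));
// -> { children: 'C1,C2,C3', caption: 'Recap', eventName: 'Launch' }
```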
by 寳田 武
Turn your n8n instance into a personal "Planetary Defense System." This workflow monitors NASA's data daily for hazardous asteroids, generates sci-fi style warnings using OpenAI, translates them via DeepL, and notifies you through LINE.

Who is it for
This template is perfect for space enthusiasts, sci-fi fans, or anyone interested in learning how to combine data analysis with AI text generation and translation services in n8n.

What it does
1. Fetches Data: Retrieves the daily "Near Earth Objects" list from the NASA NeoWs API.
2. Analyzes Threats: A Code node filters for "potentially hazardous" asteroids and calculates their distance relative to the Moon (see the sketch below).
3. Smart Branching:
   - If a threat exists: OpenAI generates a dramatic, sci-fi style warning based on the asteroid's size and distance, and DeepL translates this alert into your preferred language (default: Japanese).
   - If no threat exists: a pre-set "Peace Report" is prepared.
4. Notifies: Sends the final message to your LINE account via LINE Notify.

How to set up
1. NASA API: Sign up for a free API key at api.nasa.gov and configure the Get Asteroid Data node credential.
2. OpenAI & DeepL: Add your API keys to the respective nodes.
3. LINE Notify: Generate an access token from the LINE Notify website and add it to the Send Danger Alert and Send Peace Report nodes.
4. Configure Language: In the Translate Alert node, set the "Translate To" field to your desired language code (e.g., JA, EN, DE).

Requirements
- n8n version 1.0 or later
- NASA API Key (Free)
- OpenAI API Key (Paid)
- DeepL API Key (Free or Pro)
- LINE Account & Notify Token

How to customize
- **Change the Vibe:** Edit the System Prompt in the **Generate SF Alert** node to change the persona (e.g., "Scientific Analyst" instead of "Sci-Fi System").
- **Switch Messenger:** Replace the LINE nodes with Slack, Discord, or Email nodes to receive alerts on your preferred platform.
- **Adjust Thresholds:** Modify the JavaScript in the **Filter & Calculate Distance** node to change the definition of a "threat" (e.g., closer than 10 lunar distances).
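A simplified sketch of the threat-filtering logic, following the NeoWs response shape (is_potentially_hazardous_asteroid, miss_distance.lunar); the 20-lunar-distance cutoff is an assumption you would tune in the real node:

```javascript
// Keep only potentially hazardous asteroids and express how close they come
// in lunar distances (1 LD = average Earth-Moon distance).
function findThreats(nearEarthObjects, maxLunarDistances = 20) {
  return nearEarthObjects
    .filter((neo) => neo.is_potentially_hazardous_asteroid)
    .map((neo) => ({
      name: neo.name,
      diameterMeters: neo.estimated_diameter.meters.estimated_diameter_max,
      lunarDistances: Number(neo.close_approach_data[0].miss_distance.lunar),
    }))
    .filter((neo) => neo.lunarDistances <= maxLunarDistances);
}

// Example with a single made-up object in NeoWs shape
console.log(findThreats([{
  name: '(2099 XY1)',
  is_potentially_hazardous_asteroid: true,
  estimated_diameter: { meters: { estimated_diameter_max: 310 } },
  close_approach_data: [{ miss_distance: { lunar: '7.4' } }],
}]));
```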
by Jitesh Dugar
Branded Social Proof Automation via Bannerbear and uploadtourl

Convert your customer satisfaction into high-converting social media content with this fully automated social proof pipeline. This workflow scans your database for top-tier reviews, generates a branded quote card, and publishes it directly to Instagram, ensuring a consistent stream of credibility for your brand.

🎯 What This Workflow Does
This template manages the entire lifecycle of a testimonial post, from data retrieval to final notification:

🔄 Review Dispatch Automation
- **Schedule Trigger:** Automatically fires daily at 10:00 AM; the cadence can be adjusted via cron expression.
- **Airtable — Fetch Review:** Retrieves the oldest 5-star, unposted record using a specific filter formula to prevent duplicates.
- **IF — Has Valid Review?:** Validates the data; the workflow exits gracefully if no new reviews are found and only proceeds when a 5-star review is ready.

✍️🎨 Dynamic Asset Generation
- **Code — Prepare Payload:** Formats review data into a JSON body, mapping fields like name and truncated text to Bannerbear layers while generating the final Instagram caption (see the sketch at the end of this description).
- **HTTP — Create Image Job:** Submits the request to the Bannerbear API and retrieves a unique job uid for asynchronous processing.

🔁 Status Verification & Media Hosting
- **HTTP — Poll Status:** Regularly checks the job status via the Bannerbear API to see if the rendering is finished.
- **IF — Image Ready?:** Confirms completion; if still processing, it triggers a "Wait 3s + re-poll" loop for up to 5 retries before passing the image_url forward.
- **uploadtourl Bridge:** Mandatory CDN step that uploads the rendered image binary and returns a stable public URL, which is required for Instagram's API to access the file.

📸 Instagram Publishing & Tracking
- **IG — Create & Publish:** Executes the two-step Instagram Graph API flow to create a media container and publish it to your feed after a safe 6-second buffer.
- **Airtable — Mark as Posted:** Updates the original record with the Post ID and timestamp to prevent duplicate posting.
- **Slack Notification:** Sends a final team alert with a preview of the card and the live link.

✨ Key Features
- **Adaptive Polling:** Instead of a static wait time, the workflow intelligently polls Bannerbear until the image is confirmed ready.
- **Automated CDN Bridge:** Uses uploadtourl to bypass Instagram's rejection of base64/binary payloads by providing a direct public URL.
- **Intelligent Truncation:** Automatically shortens long reviews to 180 characters to ensure perfect readability on your branded quote card.
- **Full Audit Trail:** Every post is logged back to Airtable with its live Instagram ID and CDN URL for easy reporting.

💼 Perfect For
- **SaaS Companies:** Showcasing user feedback and "love letters" from customers.
- **E-commerce Brands:** Sharing 5-star product reviews to build buyer confidence.
- **Service Providers:** Highlighting client testimonials on a regular schedule.
- **Digital Marketers:** Automating the "social proof" pillar of a social media strategy.

🔧 What You'll Need
Required Integrations
- **Bannerbear:** API key and a Template ID with layers named reviewer_name, review_text, and star_label.
- **Instagram Graph API:** A Business or Creator account access token.
- **uploadtourl:** Credentials configured in n8n for mandatory media hosting.
- **Airtable:** A base with a Reviews table containing fields for the name, text, and rating.
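A rough JavaScript sketch of the Prepare Payload step: truncating the review to 180 characters and mapping fields onto the Bannerbear layer names listed above. The Airtable field names and template ID are placeholders, not the workflow's exact values:

```javascript
// Truncate long reviews so they fit the quote-card layout.
function truncate(text, max = 180) {
  return text.length <= max ? text : text.slice(0, max - 1).trimEnd() + '…';
}

// Map an Airtable review record to a Bannerbear-style modifications payload.
// Layer names (reviewer_name, review_text, star_label) match the template
// described above; the record field names are assumptions.
function buildImagePayload(record, templateId) {
  return {
    template: templateId,
    modifications: [
      { name: 'reviewer_name', text: record.name },
      { name: 'review_text', text: truncate(record.text) },
      { name: 'star_label', text: '★★★★★' },
    ],
  };
}

console.log(buildImagePayload(
  { name: 'Dana K.', text: 'Absolutely love this product, support was fantastic and shipping was fast!' },
  'tmpl_abc123' // hypothetical Bannerbear template ID
));
```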
by HoangSP
Who’s it for
Teams that want to turn a chat prompt into a researched, ready-to-post social update, optionally published to Facebook.

What it does / How it works
1. Chat Trigger receives the user prompt.
2. Topic Agent optionally calls a research sub-workflow for fresh sources.
3. Outputs are validated into a structured JSON (see the sketch below).
4. Post Writing Agent crafts a concise Vietnamese post.
5. (Optional) Facebook Graph API publishes to your Page.

How to set up
1. Connect OpenAI & Facebook in Credentials (no API key inside nodes).
2. In Tool: Call Perplexity Researcher, set your research workflowId.
3. In Publish: Facebook Graph API, set your Page ID and edge.
4. Adjust prompts/tone and the LANGUAGE in CONFIG.
5. Test the flow with sample prompts in the chat.

Requirements
- n8n (Cloud or self-hosted)
- OpenAI API key (stored in Credentials)
- Facebook Page publish permissions
- (Optional) a research workflow for Perplexity

How to customize the workflow
- Add moderation/review gates before publishing.
- Duplicate the publish path for other platforms.
- Store outputs in Sheets/Notion/DB for auditing.
- Tune model choice & temperature for your brand voice.

Security
- Avoid hardcoding secrets in HTTP or Code nodes.
- Keep identifiers (Page IDs, workflowIds) configurable in CONFIG.
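Because the research output is validated into structured JSON before the writing agent runs, a small defensive check like the sketch below can sit between the two agents. The expected fields (topic, summary, sources) are illustrative, not the workflow's actual schema:

```javascript
// Validate the research agent's output before it reaches the writing agent.
function validateResearch(raw) {
  let data;
  try {
    data = typeof raw === 'string' ? JSON.parse(raw) : raw;
  } catch {
    return { ok: false, error: 'Output is not valid JSON' };
  }
  // Field names below are examples only.
  const missing = ['topic', 'summary', 'sources'].filter((f) => !data?.[f]);
  if (missing.length) return { ok: false, error: `Missing fields: ${missing.join(', ')}` };
  return { ok: true, data };
}

console.log(validateResearch('{"topic":"AI news","summary":"...","sources":["https://example.com"]}'));
```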
by Asuka
Who is this for
This template is designed for e-commerce businesses, customer support teams, and marketing professionals who need to monitor and analyze customer reviews at scale. It's especially useful for teams dealing with multilingual reviews (Japanese to English) and those who want instant alerts for critical feedback.

What it does
This workflow automatically processes customer reviews stored in Google Sheets using OpenAI GPT. For each review, it performs:
- **Translation** from Japanese to English
- **Sentiment analysis** with a score from -1.0 to +1.0
- **Importance classification** (High/Medium/Low) based on urgency
- **Category tagging** (Quality, Price, Shipping, Support, Features, Usability, Other)
- **Key phrase extraction** for a quick summary

Results are written back to the spreadsheet, and Telegram notifications are sent based on priority level (see the sketch below).

How to set up
1. Connect your Google Sheets account and select your review spreadsheet.
2. Configure OpenAI API credentials.
3. Set up a Telegram Bot and enter your Chat ID in both notification nodes.
4. Adjust the schedule trigger interval as needed.

Requirements
- Google Sheets with columns: ReviewID, Keyword (review text), ProcessStatus
- OpenAI API key
- Telegram Bot Token and Chat ID

How to customize
- Modify the AI prompt in "AI Agent - Review Analysis" to change analysis criteria or add new fields.
- Adjust the sentiment threshold (-0.5) in the "Check Importance & Sentiment" node.
- Customize notification messages in the Telegram nodes.
- Change the source/target language by editing the prompt.
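A sketch of the kind of branch the Check Importance & Sentiment node performs, using the -0.5 threshold mentioned in the customization notes; the exact routing labels are assumptions, not the template's own:

```javascript
// Decide which Telegram notification (if any) a processed review should trigger.
// Thresholds and labels are illustrative; adjust to match your own rules.
function notificationLevel(review) {
  const negative = review.sentimentScore <= -0.5;
  if (review.importance === 'High' || (negative && review.importance === 'Medium')) {
    return 'urgent';   // immediate Telegram alert
  }
  return negative ? 'digest' : 'none';
}

console.log(notificationLevel({ sentimentScore: -0.8, importance: 'Medium' })); // -> 'urgent'
console.log(notificationLevel({ sentimentScore: 0.6, importance: 'Low' }));     // -> 'none'
```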
by Blake Wise
Web Page to Brand Identity Markers
This n8n template retrieves verbal brand identity markers from any web site.

Customer Perception Review
How does a customer coming to your web site perceive your content? Use this to check against the key messages you want the customer to understand after visiting your web site.

Brand Identity Extraction
Marketing agencies need to understand each customer's specific tone and communication style in order to replicate it for updates and new content.

How it works
1. Execute the workflow and enter any web site address.
2. The HTTP node retrieves the web page.
3. The core logic is the Gemini AI prompt, which reviews the web page content and responds with a pre-defined JSON data structure containing the verbal brand identity markers.
4. The JSON structure returned from the AI node is parsed and displayed on a web page (see the sketch below).

How to customize
- Inject your own processing logic after the "Extract AI Response" node.
- You can directly access the JSON fields returned. For an example, see how the HTML node creates the web page with the JSON data.

Requirements
- Gemini account for the LLM

Need Help?
Join the Discord or ask in the Forum! For support, reach out to the creator at blakewise.com. Happy Hacking!
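A defensive way to parse the pre-defined JSON structure out of the AI response, in case the model wraps it in extra prose; the marker fields shown (tone, voice, keyMessages) are examples, not the template's exact schema:

```javascript
// Pull a JSON object out of an LLM response that may include surrounding prose.
// The expected keys are illustrative examples only.
function extractBrandMarkers(aiText) {
  const match = aiText.match(/\{[\s\S]*\}/); // grab the outermost {...} block
  if (!match) throw new Error('No JSON object found in AI response');
  const markers = JSON.parse(match[0]);
  return {
    tone: markers.tone ?? 'unknown',
    voice: markers.voice ?? 'unknown',
    keyMessages: markers.keyMessages ?? [],
  };
}

console.log(extractBrandMarkers(
  'Here is the analysis: {"tone":"confident","voice":"plain-spoken","keyMessages":["Ship faster"]}'
));
```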
by Quinten Alexander
Your Personal RSS Feed of YouTube Videos!
This workflow creates an RSS feed containing the most recent videos published by your favorite channels. Use it in combination with your favorite RSS reader and don't miss out on any of your favorite creators' content without all the distractions of YouTube. You can even play the video right from your RSS reader without ever having to visit YouTube itself!

Who's it for
This workflow is for everyone who likes to stay up to date on videos from their favorite creators through their preferred RSS app.

How it works
1. The RSS client triggers the webhook of this workflow.
2. The RSS feeds from your selected channels are pulled from YouTube.
3. The resulting feeds are filtered so only normal videos (no Shorts) posted in the last week remain (see the sketch below).
4. For each video, the video player and the full video description are pulled from the YouTube API.
5. For each video, an RSS item is created containing this video player and the video description as the content.
6. The RSS items are cached in a Redis database to prevent pulling the same information from the YouTube API on each webhook call.
7. A full RSS feed is built and returned to the calling webhook.

How to set up
Follow the steps in the red notes (from 1 to 4) to configure the workflow:
1. Set the IDs of the channels you want to watch.
2. Configure your Redis credentials.
3. Configure your Google/YouTube API credentials.
4. Copy the webhook URL and paste it into your RSS reader.

Don't forget to activate the workflow! Only the nodes inside a red note need configuration; all other nodes are good to go. You are, however, free to change those nodes to your liking!

Requirements
This workflow has 2 requirements:
- A Redis database used to cache the RSS items (see the blue note on how to set up a Redis database yourself)
- Google API credentials to access the YouTube API

Customizing this workflow
Add any YouTube channel you want by adding its channel ID in the "Set Channels" node at the start of this workflow. If you aren't afraid of some XML RSS code, you can dive into the code blocks and change the resulting RSS feed. You can change the feed's title, description, or image. Or go all in on text processing and process the video description before it is added to the RSS items (such as removing sponsors or links to social media). You can also extend this workflow by adding RSS items from other feeds or sources.
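A sketch of the filtering step on entries from a channel's RSS feed; the Shorts check shown here is a simple title heuristic and only an assumption, since the actual workflow may detect Shorts differently (for example via the YouTube API):

```javascript
// Keep feed entries published within the last 7 days and drop Shorts.
// The Shorts check is a title heuristic and an assumption, not the workflow's method.
function filterRecentVideos(entries, days = 7) {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return entries.filter((e) => {
    const recent = new Date(e.published).getTime() >= cutoff;
    const isShort = /#shorts/i.test(e.title);
    return recent && !isShort;
  });
}

console.log(filterRecentVideos([
  { title: 'Deep dive: n8n webhooks', published: new Date().toISOString() },
  { title: 'Quick tip #shorts', published: new Date().toISOString() },
]));
// -> only the "Deep dive" entry remains
```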
by Elvis Sarvia
The full end-to-end workflow that chains all patterns together. This template processes customer feedback from intake to team routing, with normalization, validation, native guardrails, AI classification, and confidence-based branching at every step.

What you'll do
- Send customer feedback through a webhook and watch it flow through every stage.
- See the data get normalized, validated, and scanned by n8n's native Guardrails node for jailbreak attempts and PII.
- Watch the AI classify feedback (bug report, feature request, praise, complaint, question) with a confidence score and generate a personalized response draft.
- See AI-generated responses pass through output guardrails that check for NSFW content and secret keys before reaching users.
- Watch high-confidence results route automatically: bug reports and feature requests to the product team, complaints to customer success, and praise to marketing as testimonial candidates.

What you'll learn
- How to chain normalization, validation, native guardrails, AI, and routing into a single pipeline
- How to use n8n's Guardrails node for both input screening (jailbreak, PII, secret keys) and output screening (NSFW, secret keys)
- How confidence-based branching separates high-confidence results from items that need human review (see the sketch below)
- How Switch nodes route classified feedback to the right destination (product team, customer success, or marketing)
- How every step between AI nodes is deterministic and inspectable
- How all these patterns work together in a production-ready workflow

Why it matters
This is the complete picture. Individual patterns are useful on their own, but the real power comes from combining them into a pipeline where AI handles the judgment calls and everything else follows explicit, testable rules. Import this template as your starting point and connect your own integrations.

This template is a learning companion to the Production AI Playbook, a series that explores strategies, shares best practices, and provides practical examples for building reliable AI systems in n8n. https://go.n8n.io/PAP-D&A-Blog
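A sketch of what confidence-based branching plus Switch-style routing looks like as plain logic; the 0.8 cutoff and the handling of questions are assumptions, not values prescribed by the template:

```javascript
// Route classified feedback based on the model's confidence score.
// The 0.8 cutoff is an example value; questions default to human review here,
// which is an assumption rather than the template's documented behavior.
function routeFeedback(item, minConfidence = 0.8) {
  if (item.confidence < minConfidence) return 'human-review';
  switch (item.category) {
    case 'bug_report':
    case 'feature_request': return 'product-team';
    case 'complaint':        return 'customer-success';
    case 'praise':           return 'marketing';
    default:                 return 'human-review';
  }
}

console.log(routeFeedback({ category: 'complaint', confidence: 0.93 })); // -> 'customer-success'
console.log(routeFeedback({ category: 'praise', confidence: 0.42 }));    // -> 'human-review'
```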
by InfyOm Technologies
✅ What problem does this workflow solve?
Manual checking of OMR (Optical Mark Recognition) answer sheets is time-consuming, error-prone, and difficult to scale, especially for schools, coaching institutes, and exam centers. This workflow automates OMR evaluation end-to-end using AI, from reading a scanned answer sheet image to calculating scores and storing structured results in Google Sheets.

⚙️ What does this workflow do?
- Accepts a scanned OMR answer sheet image via webhook.
- Uses AI vision to extract only the marked answers from the sheet.
- Extracts basic student details (Name, Roll Number, Class).
- Compares extracted answers with a predefined answer key.
- Calculates: total questions, correct answers, incorrect answers, and score percentage.
- Generates question-wise binary results (1 = correct, 0 = incorrect).
- Stores the complete result in Google Sheets.
- Returns a structured JSON response to the calling system.

🧠 How It Works – Step by Step

1. 📥 Webhook Trigger (Student OMR Upload)
- A client uploads the OMR image via a POST request.
- The image is received as form-data (key: file).

2. 👁️ AI-Based OMR Image Analysis
- An AI vision model analyzes the image.
- Strict rules ensure: only answer bubbles are considered; with multiple markings, the darkest option is selected; unmarked questions are skipped; no guessing or hallucination.
- Output includes student details and question–answer pairs.

3. 🔄 Answer Formatting
- Raw AI output is converted into a clean, structured format: 1:A, 2:B, 3:C, ...
- Student metadata is preserved separately.

4. 🧮 Answer Key Setup
- Correct answers are defined inside the workflow (editable anytime).
- Supports any number of questions.

5. 📊 Result Calculation
- User answers are compared with the answer key (see the sketch at the end of this description).
- Generates correct/incorrect counts, the percentage score, a detailed per-question result, and a binary output (Q.1 = 1 / 0) for analytics.

6. 📄 Google Sheets Logging
- Results are appended to a Google Sheet with columns such as: Student Name, Roll No, Class, Correct, Incorrect, Score Percentage, Q.1 → Q.n (binary values).

7. 📤 API Response
- The workflow responds with a JSON payload containing student details, a full evaluation summary, and per-question analysis.

📂 Sample Google Sheet Output

| Student Name | Roll No | Class | Correct | Incorrect | Score % | Q.1 | Q.2 | Q.3 | ... |
|--------------|---------|-------|---------|-----------|---------|-----|-----|-----|-----|
| Rahul Shah   | 1023    | 10-A  | 16      | 4         | 80%     | 1   | 0   | 1   | ... |

🛠 Integrations Used
- 🤖 AI Vision Model – for accurate OMR detection
- ⚙️ n8n Webhook – to accept image uploads
- 🧠 Custom Code Nodes – for parsing and evaluation logic
- 📊 Google Sheets – for persistent result storage

👤 Who can use this?
This workflow is ideal for:
- 🏫 Schools & Colleges
- 📚 Coaching Institutes
- 🧪 Online Exam Platforms
- 🧑💻 EdTech Developers
- 📝 Mock Test Providers

If you need fast, reliable, and scalable OMR checking without expensive hardware, this workflow delivers.

🚀 Benefits
- ⏱ Saves hours of manual checking
- 🎯 Eliminates human error
- 📊 Produces analytics-ready data
- 🔄 Easy to update answer keys
- 🌐 API-ready for integration with any system

📦 Ready to Deploy?
Just configure:
- ✅ AI model credentials
- ✅ Google Sheets access
- ✅ Your correct answer key

…and start evaluating OMR sheets automatically at scale.
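A minimal JavaScript sketch of the result-calculation step: comparing extracted answers (in the 1:A, 2:B format shown above) against the answer key and producing the binary per-question columns. Field names are illustrative:

```javascript
// Compare extracted answers against the answer key and build the result row.
function gradeSheet(userAnswers, answerKey) {
  const perQuestion = {};
  let correct = 0;
  for (const [q, correctOption] of Object.entries(answerKey)) {
    const isCorrect = userAnswers[q] === correctOption;
    perQuestion[`Q.${q}`] = isCorrect ? 1 : 0;   // binary, analytics-ready
    if (isCorrect) correct += 1;
  }
  const total = Object.keys(answerKey).length;
  return {
    correct,
    incorrect: total - correct,
    scorePercent: Math.round((correct / total) * 100) + '%',
    ...perQuestion,
  };
}

// Example: unmarked questions are simply absent and count as incorrect.
console.log(gradeSheet({ 1: 'A', 2: 'C' }, { 1: 'A', 2: 'B', 3: 'D' }));
// -> { correct: 1, incorrect: 2, scorePercent: '33%', 'Q.1': 1, 'Q.2': 0, 'Q.3': 0 }
```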
by DataForSEO
This weekly workflow automatically identifies newly ranked keywords for your domain within Google's top 10 results, without manual SERP monitoring.

On each run, the workflow fetches the latest ranking and search volume data using the DataForSEO Labs API and stores a fresh results snapshot in Airtable. It then compares this data with the previous set to identify any new keywords your domain started ranking for, focusing on queries that rank in the top 10 (see the sketch below). All newly ranked top-10 keywords are logged in Airtable, along with their ranking position and search volume. Once new terms and rankings are identified, the workflow sends you a Slack summary highlighting the latest changes.

Who's it for
SEO experts and marketers who want an automated way to track new top-ranking keywords on Google for their domain(s).

What it does
This workflow automatically detects when your domain enters the top-10 results for new keywords on Google, records them in Airtable, and sends a weekly summary via Slack.

How it works
1. Runs on a predefined schedule (default: weekly).
2. Reads your keywords and target domains from Airtable.
3. Fetches the latest Google rankings and keyword metrics via the DataForSEO API.
4. Compares the latest data with the previous run.
5. Logs newly ranked top-10 keywords to Airtable.
6. Sends a Slack summary with key changes.

Requirements
- DataForSEO account and API credentials
- Airtable table with your keywords, following the required column structure
- Airtable table with your target domains, following the required column structure
- Slack account

Customization
You can easily customize this workflow by adjusting the run schedule, changing the minimum ranking threshold (e.g., top 5 or top 20), exporting results to other tools, and tailoring the Slack message content to your team's workflow.
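The comparison step amounts to diffing the latest snapshot against the previous one and keeping keywords that are now in the top 10 but were not before. A simplified JavaScript sketch, with illustrative field names:

```javascript
// Find keywords that entered the top 10 since the previous snapshot.
// Each row is { keyword, position, searchVolume }; field names are illustrative.
function findNewTopKeywords(current, previous, maxPosition = 10) {
  const previouslyRanked = new Set(
    previous.filter((r) => r.position <= maxPosition).map((r) => r.keyword)
  );
  return current.filter(
    (r) => r.position <= maxPosition && !previouslyRanked.has(r.keyword)
  );
}

console.log(findNewTopKeywords(
  [{ keyword: 'workflow automation', position: 7, searchVolume: 2400 }],
  [{ keyword: 'workflow automation', position: 14, searchVolume: 2400 }]
));
// -> the keyword is reported because it moved from position 14 into the top 10
```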