by 寳田 武
Turn your n8n instance into a personal "Planetary Defense System." This workflow monitors NASA's data daily for hazardous asteroids, generates sci-fi style warnings using OpenAI, translates them via DeepL, and notifies you through LINE.

## Who is it for

This template is perfect for space enthusiasts, sci-fi fans, or anyone interested in learning how to combine data analysis with AI text generation and translation services in n8n.

## What it does

1. **Fetches Data**: Retrieves the daily "Near Earth Objects" list from the NASA NeoWs API.
2. **Analyzes Threats**: A Code node filters for "potentially hazardous" asteroids and calculates their distance relative to the Moon.
3. **Smart Branching**:
   - If a threat exists: OpenAI generates a dramatic, sci-fi style warning based on the asteroid's size and distance, and DeepL translates the alert into your preferred language (default: Japanese).
   - If no threat exists: a pre-set "Peace Report" is prepared.
4. **Notifies**: Sends the final message to your LINE account via LINE Notify.

## How to set up

1. **NASA API**: Sign up for a free API key at api.nasa.gov and configure the **Get Asteroid Data** node credential.
2. **OpenAI & DeepL**: Add your API keys to the respective nodes.
3. **LINE Notify**: Generate an access token from the LINE Notify website and add it to the **Send Danger Alert** and **Send Peace Report** nodes.
4. **Configure Language**: In the **Translate Alert** node, set the "Translate To" field to your desired language code (e.g., JA, EN, DE).

## Requirements

- n8n version 1.0 or later
- NASA API Key (free)
- OpenAI API Key (paid)
- DeepL API Key (Free or Pro)
- LINE account and Notify token

## How to customize

- **Change the Vibe**: Edit the System Prompt in the **Generate SF Alert** node to change the persona (e.g., "Scientific Analyst" instead of "Sci-Fi System").
- **Switch Messenger**: Replace the LINE nodes with Slack, Discord, or Email nodes to receive alerts on your preferred platform.
- **Adjust Thresholds**: Modify the JavaScript in the **Filter & Calculate Distance** node to change the definition of a "threat" (e.g., closer than 10 lunar distances).
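The threat-filtering logic described above can be sketched as a small Code-node snippet. This is a minimal illustration, not the template's exact code: the field names follow NASA's NeoWs response shape, and the 10-lunar-distance cutoff is the example threshold mentioned in the customization notes.

```javascript
// Sketch of the "Filter & Calculate Distance" Code node logic.
// Field names follow NASA's NeoWs API; the threshold is illustrative.
const LUNAR_DISTANCE_KM = 384400;

function assessThreats(neos, maxLunarDistances = 10) {
  return neos
    .filter((neo) => neo.is_potentially_hazardous_asteroid)
    .map((neo) => {
      // NeoWs returns miss distance as a string of kilometers
      const missKm = parseFloat(neo.close_approach_data[0].miss_distance.kilometers);
      return {
        name: neo.name,
        lunarDistances: +(missKm / LUNAR_DISTANCE_KM).toFixed(2),
      };
    })
    .filter((t) => t.lunarDistances <= maxLunarDistances);
}

// Example with mocked API data:
const sample = [
  {
    name: "(2024 AB1)",
    is_potentially_hazardous_asteroid: true,
    close_approach_data: [{ miss_distance: { kilometers: "1922000" } }],
  },
  {
    name: "(2024 CD2)",
    is_potentially_hazardous_asteroid: false,
    close_approach_data: [{ miss_distance: { kilometers: "500000" } }],
  },
];
const threats = assessThreats(sample);
```

Raising or lowering `maxLunarDistances` changes how aggressive the "Danger Alert" branch is.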
by Xavier Tai
🎨 TikTok Carousel Replicator & Translator

An end-to-end automation system that monitors TikTok accounts for new 3-image carousel posts, extracts text overlays and visual layouts using AI vision analysis, translates content into English, and automatically regenerates brand-new carousel images ready for review and posting.

## What It Does

This workflow eliminates the manual process of:

- Daily monitoring for new carousel content
- Screenshot capture and image extraction
- Text transcription and translation
- Layout recreation in design tools
- Manual formatting and brand consistency checks

Instead, it delivers 3 production-ready images to your inbox every morning—complete with translated text, matched composition, and side-by-side comparisons for quick approval.

## Key Features

- **Automated Daily Monitoring** - Checks target TikTok accounts on schedule
- **AI-Powered Vision Analysis** - Extracts text, layout, and composition with Gemini Vision
- **Smart Translation** - Converts text to natural English while preserving intent
- **Intelligent Image Generation** - Recreates carousels with Midjourney/DALL-E based on analyzed layouts
- **Review Dashboard** - Organized Google Sheets with original vs. new comparisons
- **Email Notifications** - Morning digest with clickable previews

## Who It's For

Content creators, social media managers, and marketing teams who need to adapt high-performing carousel content from other languages into English—without spending hours in Canva every day.

## Time Saved

From 3+ hours of manual work → 2 minutes of review per carousel set.

## Workflow Breakdown

Monitor → Extract & Analyze → Translate → Generate → Review & Deliver

Each section runs automatically, processing images sequentially and delivering organized results to your review dashboard with email notifications.

## 🚀 Setup Instructions

**Required Credentials:**

- Google Gemini API - for vision analysis
- Midjourney API (or alternatives: DALL-E, Stable Diffusion)
- OpenAI API - for prompt generation and translation enhancement
- Google Sheets - for the review dashboard
- Gmail - for notifications

**Configuration Steps:**

1. Replace @USERNAME in the TikTok RSS node with the target account
2. Set your Google Sheet ID in the "Save to Review Sheet" node
3. Update email addresses in the notification node
4. Test with a single post before enabling the daily schedule

**Alternative Approaches:**

- Use the TikTok API instead of RSS (if available)
- Use the Canva API instead of Midjourney for generation
- Integrate with Airtable for more advanced review workflows
- Add an approval workflow with interactive buttons
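The "monitor for new 3-image carousel posts" step boils down to a filter over feed items. The sketch below is purely illustrative: the item shape (an `images` array and a `publishedAt` timestamp) is a hypothetical normalization of whatever the TikTok RSS source returns, not a documented field layout.

```javascript
// Illustrative filter for new 3-image carousel posts.
// The item shape (images, publishedAt) is hypothetical — adapt to your feed.
function findNewCarousels(items, lastCheckedISO) {
  const cutoff = new Date(lastCheckedISO);
  return items.filter(
    (item) =>
      Array.isArray(item.images) &&
      item.images.length === 3 && // carousel must have exactly 3 images
      new Date(item.publishedAt) > cutoff // only posts since the last run
  );
}

const sample = [
  { id: "a", images: ["1.jpg", "2.jpg", "3.jpg"], publishedAt: "2024-05-02T08:00:00Z" },
  { id: "b", images: ["1.jpg"], publishedAt: "2024-05-02T09:00:00Z" },
  { id: "c", images: ["1.jpg", "2.jpg", "3.jpg"], publishedAt: "2024-04-30T08:00:00Z" },
];
const fresh = findNewCarousels(sample, "2024-05-01T00:00:00Z");
```

Storing the last-checked timestamp (e.g., in the review sheet) keeps the daily schedule from reprocessing old posts.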
by Asuka
## Who is this for

This template is designed for e-commerce businesses, customer support teams, and marketing professionals who need to monitor and analyze customer reviews at scale. It's especially useful for teams dealing with multilingual reviews (Japanese to English) and those who want instant alerts for critical feedback.

## What it does

This workflow automatically processes customer reviews stored in Google Sheets using OpenAI GPT. For each review, it performs:

- **Translation** from Japanese to English
- **Sentiment analysis** with a score from -1.0 to +1.0
- **Importance classification** (High/Medium/Low) based on urgency
- **Category tagging** (Quality, Price, Shipping, Support, Features, Usability, Other)
- **Key phrase extraction** for a quick summary

Results are written back to the spreadsheet, and Telegram notifications are sent based on priority level.

## How to set up

1. Connect your Google Sheets account and select your review spreadsheet
2. Configure OpenAI API credentials
3. Set up a Telegram Bot and enter your Chat ID in both notification nodes
4. Adjust the schedule trigger interval as needed

## Requirements

- Google Sheets with columns: ReviewID, Keyword (review text), ProcessStatus
- OpenAI API key
- Telegram Bot Token and Chat ID

## How to customize

- Modify the AI prompt in "AI Agent - Review Analysis" to change analysis criteria or add new fields
- Adjust the sentiment threshold (-0.5) in the "Check Importance & Sentiment" node
- Customize notification messages in the Telegram nodes
- Change the source/target language by editing the prompt
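The priority routing in "Check Importance & Sentiment" can be sketched in a few lines. This is an assumed implementation of the rule the description states (High importance or sentiment below -0.5 triggers an alert); the branch names are illustrative.

```javascript
// Sketch of the "Check Importance & Sentiment" branching.
// Alert immediately when importance is High or sentiment falls below -0.5.
function routeReview(analysis, sentimentThreshold = -0.5) {
  const urgent =
    analysis.importance === "High" || analysis.sentiment < sentimentThreshold;
  return urgent ? "priority-alert" : "daily-digest";
}

// Example analyses as the AI Agent might return them:
const routes = [
  routeReview({ importance: "Low", sentiment: -0.8 }),  // very negative review
  routeReview({ importance: "High", sentiment: 0.4 }),  // urgent but positive
  routeReview({ importance: "Medium", sentiment: 0.1 }), // routine feedback
];
```

Tightening the threshold (e.g., -0.3) sends more reviews to the alert branch; loosening it reduces Telegram noise.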
by 寳田 武
⚠️ **Disclaimer: Community Node Required.** This workflow uses the Apify community node. Please ensure it is installed in your n8n instance before importing this workflow.

This workflow automates the product development lifecycle for Print-on-Demand (POD) businesses by turning NASA's public domain images into actionable merchandise candidates. It combines AI-driven design, competitive market research, and profit calculation into a single automated pipeline.

## Who is it for

- **Print-on-Demand Entrepreneurs**: Automate the sourcing and validation of new designs.
- **E-commerce Managers**: Streamline the "Idea to Product" workflow.
- **Content Creators**: Generate space-themed merchandise assets automatically.

## What it does

1. **Sourcing**: Fetches the daily image from NASA's Astronomy Picture of the Day (APOD) API.
2. **Design & Visualization**: Uses OpenAI to generate SEO keywords and Cloudinary to create a visual T-shirt mockup.
3. **Market Intelligence**: Scrapes Etsy (via Apify) to analyze current market prices for similar products.
4. **Profit Analysis**: Calculates potential profit margins based on your production costs and competitor data.
5. **AI Consultation**: OpenAI analyzes the data to provide a "Go/No-Go" marketing recommendation.
6. **Human Approval**: Sends a formatted proposal to Slack. You can approve or reject the product directly from the chat.
7. **Asset Management**: If approved, the high-resolution image is saved to Google Drive, and product details are logged in Notion.

## How to set up

1. **Install Community Node**: Go to Settings > Community Nodes in n8n and install n8n-nodes-apify.
2. **Configure Credentials**: Ensure you have connected credentials for NASA, OpenAI, Cloudinary, Apify, Slack, Google Drive, and Notion.
3. **Set Variables**: Open the node named "Workflow Configuration" and define:
   - cloudinaryCloudName & cloudinaryApiKey
   - productionCost (your cost to print a shirt)
   - minProfitThreshold (minimum profit required to consider a product "viable")
4. **Update IDs**:
   - Slack node: select your target channel.
   - Google Drive node: select the destination folder.
   - Notion node: select your product database.

## Requirements

- **NASA API Key** (free)
- **Cloudinary account**: for image transformation and mockups.
- **Apify account**: required to run the dtrungtin/etsy-scraper actor.
- **OpenAI API**: for keyword generation and marketing advice.
- **Slack, Google Drive, Notion**: for the approval loop and storage.
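The Profit Analysis step combines scraped competitor prices with the `productionCost` and `minProfitThreshold` variables. A minimal sketch of that calculation, assuming a simple average-price comparison (the template's actual formula may differ):

```javascript
// Sketch of the profit-analysis step: compare scraped competitor prices
// against your production cost and a minimum-profit threshold.
function evaluateProduct(competitorPrices, productionCost, minProfitThreshold) {
  const avgPrice =
    competitorPrices.reduce((sum, p) => sum + p, 0) / competitorPrices.length;
  const profit = +(avgPrice - productionCost).toFixed(2);
  return {
    avgPrice: +avgPrice.toFixed(2),
    profit,
    viable: profit >= minProfitThreshold, // feeds the Go/No-Go recommendation
  };
}

// Example: three Etsy listings scraped via Apify, $12 production cost,
// $8 minimum acceptable profit.
const verdict = evaluateProduct([24.99, 21.5, 27.0], 12.0, 8.0);
```

The `viable` flag is what a downstream IF node (or the AI consultation prompt) would key on before the Slack approval step.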
by Elvis Sarvia
The full end-to-end workflow that chains all patterns together. This template processes customer feedback from intake to team routing, with normalization, validation, native guardrails, AI classification, and confidence-based branching at every step.

## What you'll do

- Send customer feedback through a webhook and watch it flow through every stage.
- See the data get normalized, validated, and scanned by n8n's native Guardrails node for jailbreak attempts and PII.
- Watch the AI classify feedback (bug report, feature request, praise, complaint, question) with a confidence score and generate a personalized response draft.
- See AI-generated responses pass through output guardrails that check for NSFW content and secret keys before reaching users.
- Watch high-confidence results route automatically: bug reports and feature requests to the product team, complaints to customer success, and praise to marketing as testimonial candidates.

## What you'll learn

- How to chain normalization, validation, native guardrails, AI, and routing into a single pipeline
- How to use n8n's Guardrails node for both input screening (jailbreak, PII, secret keys) and output screening (NSFW, secret keys)
- How confidence-based branching separates high-confidence results from items that need human review
- How Switch nodes route classified feedback to the right destination (product team, customer success, or marketing)
- How every step between AI nodes is deterministic and inspectable
- How all these patterns work together in a production-ready workflow

## Why it matters

This is the complete picture. Individual patterns are useful on their own, but the real power comes from combining them into a pipeline where AI handles the judgment calls and everything else follows explicit, testable rules. Import this template as your starting point and connect your own integrations.

This template is a learning companion to the Production AI Playbook, a series that explores strategies, shares best practices, and provides practical examples for building reliable AI systems in n8n. https://go.n8n.io/PAP-D&A-Blog
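The confidence gate plus Switch-node routing described above is deterministic and easy to sketch. The category names and the 0.8 cutoff below are illustrative assumptions, not the template's exact values:

```javascript
// Sketch of confidence-based branching followed by Switch-style routing.
// The 0.8 cutoff and category labels are illustrative.
function routeFeedback(result, minConfidence = 0.8) {
  // Low-confidence classifications always go to a human.
  if (result.confidence < minConfidence) return "human-review";
  const destinations = {
    bug_report: "product-team",
    feature_request: "product-team",
    complaint: "customer-success",
    praise: "marketing", // testimonial candidates
  };
  return destinations[result.category] || "human-review";
}

const routed = [
  routeFeedback({ category: "bug_report", confidence: 0.95 }),
  routeFeedback({ category: "praise", confidence: 0.91 }),
  routeFeedback({ category: "complaint", confidence: 0.55 }),
];
```

Because the gate is explicit code rather than a prompt, the routing rule is testable and inspectable between the AI nodes.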
by Sergei Byvshev
## Overview

This workflow helps automatically analyze alerts occurring in the infrastructure and suggest solutions even before the on-duty engineer sees the alert.

## How It Works

1. The workflow receives an alert from Alertmanager via Webhook.
2. The variables required for operation are set.
3. A prompt is prepared for the agent containing only the data necessary for analysis.
4. The agent performs diagnostics as described in the system prompt. During operation, it can access various systems via MCP to obtain additional information.
5. A message in a Slack channel corresponding to the processed alert is found.
6. A report is sent to the Slack thread.

## How to Use

1. Generate webhook credentials and use them in Alertmanager.
2. Add the alert fingerprint to the Slack message template.
3. Set variables in the SetVars node.
4. Add your own rules and recommendations to the system prompt.
5. Run MCP servers.
6. Choose the Slack channel with alerts.
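The prompt-preparation step ("only the data necessary for analysis") and alert deduplication can be sketched as a Code node over the Alertmanager webhook payload. The payload fields used here (`status`, `fingerprint`, `labels`, `annotations`) match Alertmanager's webhook format; the exact fields the template forwards to the agent are an assumption.

```javascript
// Sketch: strip an Alertmanager webhook payload down to the fields the
// agent needs, deduplicating firing alerts by fingerprint.
function prepareAlerts(payload) {
  const seen = new Set();
  return payload.alerts
    .filter((a) => a.status === "firing")
    .filter((a) => !seen.has(a.fingerprint) && seen.add(a.fingerprint))
    .map((a) => ({
      fingerprint: a.fingerprint, // also used to find the Slack message
      name: a.labels.alertname,
      severity: a.labels.severity,
      summary: a.annotations.summary,
    }));
}

// Example payload with a duplicate firing alert and a resolved one:
const payload = {
  alerts: [
    { status: "firing", fingerprint: "f1", labels: { alertname: "HighCPU", severity: "critical" }, annotations: { summary: "CPU > 95%" } },
    { status: "firing", fingerprint: "f1", labels: { alertname: "HighCPU", severity: "critical" }, annotations: { summary: "CPU > 95%" } },
    { status: "resolved", fingerprint: "f2", labels: { alertname: "DiskFull", severity: "warning" }, annotations: { summary: "Disk 80%" } },
  ],
};
const prepared = prepareAlerts(payload);
```

The surviving `fingerprint` is the same value you add to the Slack message template, which is what lets the workflow locate the matching channel message and reply in its thread.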
by Madame AI
Translate and dub YouTube videos using BrowserAct, Telegram & Gemini

This workflow transforms any YouTube video into a localized audio experience. It scrapes the video content, translates the transcript into your target language using AI, generates high-quality dubbed audio using ElevenLabs, and delivers the audio files and a summary directly to your Telegram chat.

## Target Audience

Content creators, language learners, and educators looking to make video content accessible in multiple languages.

## How it works

1. **Receive Link**: You send a YouTube video link to your Telegram bot.
2. **Extract URL**: An AI Agent extracts the clean YouTube URL from your message.
3. **Scrape Content**: BrowserAct executes a background task to fetch the video's transcript, description, and metadata.
4. **Translate & Script**: A specialized AI Agent (using Google Gemini) translates the transcript into your chosen target language (e.g., Spanish). It also segments the text into logical parts for dubbing.
5. **Generate Audio**: ElevenLabs synthesizes the translated text segments into natural-sounding speech.
6. **Deliver**: The workflow sends the dubbed audio files and a translated summary post to your Telegram chat.

## How to set up

1. **Configure Credentials**: Connect your Telegram, BrowserAct, ElevenLabs, and Google Gemini accounts in n8n.
2. **Prepare BrowserAct**: Ensure the YouTube Translator & Auto Dubber template is saved in your BrowserAct account.
3. **Configure Telegram**: Ensure your bot is created via BotFather and the API token is added to the Telegram credentials.
4. **Set Language**: Open the Define Language node to set your desired target language (default is "Spanish").
5. **Activate**: Turn on the workflow.
6. **Test**: Send a YouTube link to your bot to start the dubbing process.

## Requirements

- **BrowserAct** account with the **YouTube Translator & Auto Dubber** template
- **ElevenLabs** account
- **Telegram** account (Bot Token)
- **Google Gemini** account

## How to customize the workflow

- **Change Voice**: Open the Convert text to speech node and select a different ElevenLabs voice model.
- **Add More Languages**: Add logic to the Define Language node to let the user select a language via a Telegram menu.
- **Change Output**: Replace the Telegram output with a Google Drive node to save the audio files for later use.

## Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates

## Workflow Guidance and Showcase Video

One-Click YouTube Translator: Auto-Dub Your YouTube Videos with n8n & ElevenLabs 🌍
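The "segments the text into logical parts for dubbing" step in Translate & Script can be approximated with plain code if you prefer deterministic chunking over prompting the agent for it. This sketch splits on sentence boundaries up to a character budget; the 200-character default is an arbitrary example, not an ElevenLabs requirement.

```javascript
// Illustrative segmentation of a translated transcript into chunks sized
// for text-to-speech requests. The character limit is an arbitrary example.
function segmentTranscript(text, maxChars = 200) {
  // Split into sentences; fall back to the whole text if no terminator found.
  const sentences = text.match(/[^.!?]+[.!?]+/g) || [text];
  const segments = [];
  let current = "";
  for (const sentence of sentences) {
    if (current && (current + sentence).length > maxChars) {
      segments.push(current.trim());
      current = "";
    }
    current += sentence;
  }
  if (current.trim()) segments.push(current.trim());
  return segments;
}

// Example: a short Spanish transcript chunked at 40 characters.
const parts = segmentTranscript(
  "Hola a todos. Bienvenidos al canal. Hoy hablaremos de automatización.",
  40
);
```

Each resulting segment becomes one text-to-speech request, keeping individual audio files short and the synthesis pacing natural.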
by InfyOm Technologies
## ✅ What problem does this workflow solve?

Manual checking of OMR (Optical Mark Recognition) answer sheets is time-consuming, error-prone, and difficult to scale—especially for schools, coaching institutes, and exam centers. This workflow automates OMR evaluation end-to-end using AI, from reading a scanned answer sheet image to calculating scores and storing structured results in Google Sheets.

## ⚙️ What does this workflow do?

- Accepts a scanned OMR answer sheet image via webhook.
- Uses AI vision to extract only the marked answers from the sheet.
- Extracts basic student details (Name, Roll Number, Class).
- Compares extracted answers with a predefined answer key.
- Calculates: total questions, correct answers, incorrect answers, and score percentage.
- Generates question-wise binary results (1 = correct, 0 = incorrect).
- Stores the complete result in Google Sheets.
- Returns a structured JSON response to the calling system.

## 🧠 How It Works – Step by Step

### 1. 📥 Webhook Trigger (Student OMR Upload)

- A client uploads the OMR image via a POST request.
- The image is received as form-data (key: file).

### 2. 👁️ AI-Based OMR Image Analysis

- An AI vision model analyzes the image.
- Strict rules ensure:
  - Only answer bubbles are considered
  - On multiple markings, the darkest option is selected
  - Unmarked questions are skipped
  - No guessing or hallucination
- Output includes student details and question–answer pairs.

### 3. 🔄 Answer Formatting

- Raw AI output is converted into a clean, structured format: 1:A, 2:B, 3:C, ...
- Student metadata is preserved separately.

### 4. 🧮 Answer Key Setup

- Correct answers are defined inside the workflow (editable anytime).
- Supports any number of questions.

### 5. 📊 Result Calculation

- User answers are compared with the answer key.
- Generates correct/incorrect counts, the percentage score, a detailed per-question result, and binary output (Q.1 = 1 / 0) for analytics.

### 6. 📄 Google Sheets Logging

Results are appended to a Google Sheet with columns such as: Student Name, Roll No, Class, Correct, Incorrect, Score Percentage, and Q.1 → Q.n (binary values).

### 7. 📤 API Response

The workflow responds with a JSON payload containing student details, a full evaluation summary, and per-question analysis.

## 📂 Sample Google Sheet Output

| Student Name | Roll No | Class | Correct | Incorrect | Score % | Q.1 | Q.2 | Q.3 | ... |
|-------------|--------|-------|---------|-----------|---------|-----|-----|-----|-----|
| Rahul Shah | 1023 | 10-A | 16 | 4 | 80% | 1 | 0 | 1 | ... |

## 🛠 Integrations Used

- 🤖 AI Vision Model – for accurate OMR detection
- ⚙️ n8n Webhook – to accept image uploads
- 🧠 Custom Code Nodes – for parsing and evaluation logic
- 📊 Google Sheets – for persistent result storage

## 👤 Who can use this?

This workflow is ideal for:

- 🏫 Schools & Colleges
- 📚 Coaching Institutes
- 🧪 Online Exam Platforms
- 🧑‍💻 EdTech Developers
- 📝 Mock Test Providers

If you need fast, reliable, and scalable OMR checking without expensive hardware—this workflow delivers.

## 🚀 Benefits

- ⏱ Saves hours of manual checking
- 🎯 Eliminates human error
- 📊 Produces analytics-ready data
- 🔄 Easy to update answer keys
- 🌐 API-ready for integration with any system

## 📦 Ready to Deploy?

Just configure:

- ✅ AI model credentials
- ✅ Google Sheets access
- ✅ Your correct answer key

…and start evaluating OMR sheets automatically at scale.
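The Result Calculation step (step 5) is deterministic and can be sketched as a Code-node function. This is a minimal illustration of the scoring logic the description outlines, not the template's exact code:

```javascript
// Sketch of the result-calculation step: compare extracted answers with
// the answer key and produce the score plus per-question binary values.
function scoreSheet(userAnswers, answerKey) {
  const perQuestion = {};
  let correct = 0;
  for (const [q, expected] of Object.entries(answerKey)) {
    const ok = userAnswers[q] === expected; // unmarked questions count as incorrect
    perQuestion[`Q.${q}`] = ok ? 1 : 0;
    if (ok) correct += 1;
  }
  const total = Object.keys(answerKey).length;
  return {
    total,
    correct,
    incorrect: total - correct,
    percentage: Math.round((correct / total) * 100),
    perQuestion, // binary columns for the Google Sheet
  };
}

// Example: student marked D on question 3 where the key says C.
const result = scoreSheet(
  { 1: "A", 2: "B", 3: "D", 4: "C" },
  { 1: "A", 2: "B", 3: "C", 4: "C" }
);
```

The `perQuestion` map flattens directly into the Q.1 → Q.n sheet columns, and the summary fields fill the Correct/Incorrect/Score % columns.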
by Jimleuk
There's a clear need for an easier way to manage attendee photos from live events, as current processes for collecting, sharing, and categorizing them are inefficient. n8n can help solve this challenge by providing the data input interface via its forms and orchestrating AI-powered classification of images using AI nodes. However, in some cases - say you run regular events or ones with high attendee counts - the volume of photos may result in unsustainably high inference fees (token-usage-based billing), which could make the project unviable. To work around this, Featherless.ai is an AI/LLM inference service that is subscription-based and provides unlimited tokens instead. This means costs are essentially capped for AI usage, offering greater control and confidence in AI project budgets.

Check out the final result here: https://docs.google.com/spreadsheets/d/1TpXQyhUq6tB8MLJ3maeWwswjut9wERZ8pSk_3kKhc58/edit?usp=sharing

## How it works

1. A form trigger is used to share a form interface with guests so they can upload photos from their device.
2. In one branch, the photos are optimised in size before being sent to a vision-capable LLM to classify and categorise them against a set list of tags. The model inference service is provided by Featherless and takes advantage of their unlimited-token-usage subscription plan.
3. In another branch, the photos are copied into Google Drive for later reference.
4. Once both branches are complete, the classification results and Google Drive link are appended to a Google Sheets table, allowing for quick sorting and filtering of all photos.

## How to use

Use this workflow to gain an incredible productivity boost for social media work. When all photos are organised and filter-ready, editors spend a fraction of the time getting community posts ready and delivered. Sharing the completed Google Sheet with attendees helps them better share memories within their own social circles.

## Requirements

- Featherless.ai account for open-source multimodal LLMs and unlimited token usage
- Google Drive for file storage
- Google Sheets for organising photos into categories

## Customising this workflow

- Feel free to refine the form with custom styles to match your branding.
- Swap out the Google services for equivalents to match your own environment, e.g. SharePoint and Excel.
by Davide
This workflow automates the process of creating cloned voices in ElevenLabs using audio extracted from YouTube videos. It processes a list of video URLs from Google Sheets, converts them to audio, submits them to ElevenLabs for voice cloning (**only available on the Starter, Creator, and Pro plans**), and records the generated voice IDs back to the spreadsheet.

## Important Considerations for Best Results

For optimal voice cloning quality with ElevenLabs, carefully select your source YouTube videos:

- **Duration**: Choose videos that are sufficiently long (preferably 1-5 minutes of clear speech) to provide enough audio data for accurate voice modeling.
- **Audio Quality**: Select videos with high-quality audio, minimal background noise, and clear vocal recording.
- **Single Speaker**: Use videos featuring only **one** primary speaker. Multiple voices in the same audio will confuse the cloning algorithm and produce poor results.
- **Consistent Voice**: Ensure the speaker maintains a consistent tone and speaking style throughout the clip for the most faithful reproduction.

## Key Features

1. ✅ **Fully Automated Voice Creation Workflow**: No manual downloading, converting, or uploading is required. Just paste the YouTube link and voice name into the sheet—everything else happens automatically.
2. ✅ **Seamless Audio Extraction**: Using RapidAPI ensures a high success rate in extracting audio, support for virtually any YouTube video, and a consistent output format required by ElevenLabs.
3. ✅ **Hands-Off ElevenLabs Voice Creation**: The workflow handles all the steps required by the ElevenLabs API, including uploading binary audio, naming voices, and capturing and storing the resulting voice ID. This is much faster than the manual method inside the ElevenLabs dashboard.
4. ✅ **Centralized, Reusable Setup**: Once the API keys are added, the same workflow can be reused indefinitely, users don't need technical skills, and updating only requires editing the sheet.

## How it works

1. **Data Retrieval**: The workflow starts by fetching data from a Google Sheets spreadsheet that contains YouTube video URLs in the "YOUTUBE VIDEO" column and desired voice names in the "VOICE NAME" column. It specifically targets rows where the "ELEVENLABS VOICE ID" field is empty, ensuring only unprocessed videos are handled.
2. **Video Processing Pipeline**:
   - **Video ID Extraction**: Each YouTube URL is parsed to extract the unique video identifier using a regular expression.
   - **Audio Conversion**: The video ID is sent to the RapidAPI "YouTube MP3 2025" service, which converts the YouTube video to an audio file (M4A format).
   - **Audio Download**: The resulting audio file is downloaded locally for processing.
3. **Voice Creation**: The downloaded audio file is submitted to the ElevenLabs API via a POST request to the /v1/voices/add endpoint. This creates a new voice clone based on the audio sample. The voice name is currently hardcoded as "Teresa Mannino" in the workflow but should be dynamically configured to use the value from the "VOICE NAME" spreadsheet column.
4. **Data Update**: The workflow captures the voice_id returned by ElevenLabs and writes it back to the corresponding row in the Google Sheets spreadsheet in the "ELEVENLABS VOICE ID" column, completing the processing cycle for that video.

## Set up steps

1. **Prepare the Data Sheet**: Duplicate the provided Google Sheets template. Fill in the "YOUTUBE VIDEO" column with YouTube URLs and the "VOICE NAME" column with your desired names for the cloned voices. Ensure your videos meet the quality criteria mentioned above.
2. **Configure APIs**:
   - **RapidAPI**: Sign up for a free trial API key from the "YouTube MP3 2025" service on RapidAPI. Enter this key into the x-rapidapi-key header field in the "From video to audio" node.
   - **ElevenLabs**: Generate an API key from your ElevenLabs account. Configure the "Create voice" node's HTTP Header Authentication with the name xi-api-key and your ElevenLabs API key as the value.
3. **Optional Customization**: Modify the "Create voice" node to use the dynamic voice name from your spreadsheet instead of the hardcoded "Teresa Mannino" value for more flexible operation.
4. **Execute**: Run the workflow. It will automatically process each qualifying row, create voices in ElevenLabs, and populate the spreadsheet with the new Voice IDs. Monitor the workflow execution to ensure successful processing of each video.

👉 Subscribe to my new YouTube channel. Here I'll share videos and Shorts with practical tutorials and FREE templates for n8n.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
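The Video ID Extraction step mentioned above relies on a regular expression. A minimal sketch of that parsing (the template's actual regex may differ; this one covers the common watch, short-link, and Shorts URL forms):

```javascript
// Sketch of the video-ID extraction step. YouTube video IDs are
// 11 characters of letters, digits, hyphens, and underscores.
function extractVideoId(url) {
  const match = url.match(
    /(?:youtube\.com\/(?:watch\?v=|shorts\/)|youtu\.be\/)([A-Za-z0-9_-]{11})/
  );
  return match ? match[1] : null; // null lets downstream nodes skip bad rows
}

const ids = [
  extractVideoId("https://www.youtube.com/watch?v=dQw4w9WgXcQ"),
  extractVideoId("https://youtu.be/dQw4w9WgXcQ"),
  extractVideoId("https://example.com/not-a-video"),
];
```

Returning `null` for unrecognized URLs gives the workflow a clean way to flag rows whose "YOUTUBE VIDEO" value isn't a valid link before calling the conversion API.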
by PDF Vector
## Overview

Businesses and freelancers often struggle with the tedious task of manually processing receipts for expense tracking and tax purposes. This workflow automates the entire receipt processing pipeline, extracting detailed information from receipts (including scanned images, photos, PDFs, JPGs, and PNGs) and intelligently categorizing them for tax deductions.

## What You Can Do

- Automatically process receipts from various formats (PDFs, JPGs, PNGs, scanned images)
- Extract detailed expense information with OCR technology
- Intelligently categorize expenses for tax deductions
- Maintain compliance with accounting standards and tax regulations
- Track expenses efficiently throughout the year

## Who It's For

Accountants, small business owners, freelancers, finance teams, and individual professionals who need to process large volumes of receipts efficiently for expense tracking and tax preparation.

## The Problem It Solves

Manual receipt processing is time-consuming and error-prone, especially during tax season. People struggle to organize receipts, extract accurate data from various formats, and categorize expenses properly for tax deductions. This template automates the entire process while ensuring compliance with accounting standards and tax regulations.

## Setup Instructions

1. Configure Google Drive credentials for receipt storage access
2. Install the PDF Vector community node from the n8n marketplace
3. Configure PDF Vector API credentials
4. Set up tax category definitions based on your jurisdiction
5. Configure accounting software integration (QuickBooks, Xero, etc.)
6. Set up validation rules for expense categories
7. Configure reporting and export formats

## Key Features

- Automatic retrieval of receipts from Google Drive folders
- OCR support for photos and scanned receipts
- Intelligent tax category assignment based on merchant and expense type
- Multi-currency support for international transactions
- Automatic detection of meal expenses with deduction percentages
- Financial validation to catch calculation errors
- Audit trail maintenance for compliance
- Integration with popular accounting software

## Customization Options

- Define custom tax categories specific to your business type
- Set up automated rules for recurring merchants
- Configure expense approval workflows for team members
- Add mileage tracking integration for travel expenses
- Set up automated notifications for high-value expenses
- Customize export formats for different accounting systems
- Add multi-language support for international receipts

## Implementation Details

The workflow uses advanced OCR technology to extract information from various receipt formats, including handwritten receipts and low-quality scans. It applies intelligent categorization rules based on merchant type, expense amount, and business context. The system includes built-in validation to ensure data accuracy and tax compliance.

**Note**: This workflow uses the PDF Vector community node. Make sure to install it from the n8n community nodes collection before using this template.
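The "tax category assignment based on merchant and expense type" can be sketched as a simple rule table. The categories, keywords, and deduction percentages below are illustrative examples only; actual rules depend on your jurisdiction and should mirror the tax category definitions you configure in setup.

```javascript
// Illustrative categorization rules: map merchant keywords to tax
// categories, with an example meal-deduction percentage applied.
// Categories and percentages are examples — adapt to your jurisdiction.
const RULES = [
  { pattern: /restaurant|cafe|coffee/i, category: "Meals", deductiblePct: 50 },
  { pattern: /uber|taxi|airline|hotel/i, category: "Travel", deductiblePct: 100 },
  { pattern: /staples|office/i, category: "Office Supplies", deductiblePct: 100 },
];

function categorize(receipt) {
  const rule = RULES.find((r) => r.pattern.test(receipt.merchant));
  const { category, deductiblePct } = rule || { category: "Other", deductiblePct: 0 };
  return {
    ...receipt,
    category,
    deductibleAmount: +((receipt.total * deductiblePct) / 100).toFixed(2),
  };
}

// Example receipt as the OCR step might emit it:
const tagged = categorize({ merchant: "Blue Bottle Coffee", total: 18.4 });
```

Keeping the rules in a plain table like this makes the "automated rules for recurring merchants" customization a one-line addition per merchant.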