by Joseph
YouTube Transcript Extraction Workflow

This n8n workflow extracts and processes transcripts from YouTube videos using the YouTube Transcript API on RapidAPI. It allows users to retrieve subtitles from YouTube videos, clean them up, and return structured transcript data for further processing.

Table of Contents
- Problem Statement & Target Audience
- Pre-conditions & API Requirements
- Step-by-Step Workflow Explanation
- Customization Guide
- How to Set Up This Workflow

Problem Statement & Target Audience
Who is this for? This workflow is ideal for content creators, researchers, and developers who need to:
- **Extract** subtitles from YouTube videos automatically.
- **Format and clean** transcript data for readability.
- Use transcripts for summarization, content repurposing, or language analysis.

Pre-conditions & API Requirements
API Required:
- **YouTube Transcript API** (RapidAPI)
n8n Setup Prerequisites:
- A running n8n instance (Installation Guide)
- A RapidAPI account to access the YouTube Transcript API
- An API key from RapidAPI to authenticate requests

Step-by-Step Workflow Explanation
1. Input YouTube Video URL (Trigger): provides a simple input form where users enter a YouTube video URL.
2. HTTP Request Node (Retrieve Transcript Data): makes a POST request to the YouTube Transcript API via RapidAPI, passes the video URL received from the input form, and uses an environment variable to store the API key securely.
3. Function Node (Process Transcript): receives the API response containing the raw transcript, removes unwanted characters, formats the text for readability, handles errors when no transcript is available, and outputs both the raw and cleaned transcript for further use (see the sketch below).
4. Set Field Node (Response Formatting): structures the processed transcript data into a user-friendly format and returns the final transcript data to the client.

Customization Guide
1. Modify Transcript Cleaning Rules: update the Function Node to apply custom text processing, such as removing timestamps or changing the output format (e.g., JSON, plain text).
2. Store Transcripts in a Database: add a Database Node (e.g., MySQL, PostgreSQL, or Firebase) to save transcripts.
3. Generate Summaries from Transcripts: integrate AI services (e.g., OpenAI, Google Gemini) to summarize transcripts.
4. Convert Transcripts into Speech: use the ElevenLabs API to generate an AI-powered voiceover from transcripts.

How to Set Up This Workflow
Step 1: Import the Workflow into n8n
- Download or copy the workflow JSON file.
- Import it into your n8n instance.
Step 2: Set Up the API Key
- Sign up for the YouTube Transcript API on RapidAPI and subscribe to it.
- Replace the "your_api_key" placeholder in the workflow with your API key.
Step 3: Activate the Workflow
- Start the workflow in n8n.
- Enter a YouTube video URL in the input form.
- The workflow will return a cleaned transcript.

This workflow ensures seamless YouTube transcript extraction and processing with minimal manual effort. 🚀
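A minimal sketch of what the "Process Transcript" Function/Code node can look like, written as n8n Code node JavaScript. The response field names (`transcript`, `text`) are assumptions, since the exact RapidAPI payload shape is not shown here; adapt them to the actual response.

```javascript
// Hypothetical cleaning logic for the "Process Transcript" node.
const response = $input.first().json;
const segments = response.transcript || []; // assumed field name; check the RapidAPI response

if (!Array.isArray(segments) || segments.length === 0) {
  // Handle the "no transcript available" case gracefully.
  return [{ json: { error: 'No transcript available for this video.' } }];
}

// Join the segment texts into one raw transcript string.
const rawTranscript = segments.map(s => s.text).join(' ');

// Basic clean-up: strip bracketed cues like [Music], decode a few common
// HTML entities, and collapse whitespace for readability.
const cleanedTranscript = rawTranscript
  .replace(/\[[^\]]*\]/g, '')
  .replace(/&amp;/g, '&')
  .replace(/&#39;/g, "'")
  .replace(/\s+/g, ' ')
  .trim();

return [{ json: { rawTranscript, cleanedTranscript } }];
```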
by Ferenc Erb
Overview
An automation workflow that creates a complete REST API for digitally signing PDF documents using n8n webhooks. This service demonstrates how to implement secure document signing functionality through standardized API endpoints with file upload and download capabilities.

Use Case
This workflow is designed for developers and automation specialists who need to implement digital document signing. It's particularly useful for:
- Integrating PDF signing capabilities into existing document workflows
- API-based automation of signature processes
- Creating proof-of-concept implementations for document verification systems
- Learning n8n's webhook capabilities and file handling techniques
- Testing PDF signing in development environments before production implementation

What This Workflow Does

API-Based Document Management
- Exposes RESTful webhook endpoints for all document operations
- Handles multipart/form-data uploads for PDF documents
- Processes JSON payloads for signing configuration
- Provides download functionality for completed documents

Digital Certificate Handling
- Uploads existing PFX/PKCS#12 digital certificates
- Generates new certificates with customizable attributes
- Securely manages certificate storage and access
- Associates certificates with signing operations

Cryptographic PDF Signing
- Applies digital signatures using industry-standard cryptographic methods (see the signing sketch below)
- Embeds signature information within the PDF document structure
- Validates document integrity through cryptographic verification
- Preserves the original document while adding signature elements

Webhook Integration System
- Routes different API methods to appropriate handlers
- Validates request payloads and file content
- Manages authentication through webhook paths
- Returns structured responses for integration with other systems

Technical Architecture

Components
- API Gateway: n8n webhook nodes that receive external requests
- Request Router: Switch nodes that direct operations based on method parameters
- Document Processor: Function nodes for PDF manipulation and verification
- Certificate Manager: Specialized nodes for cryptographic key operations
- Storage Interface: File operation nodes for document persistence
- Response Formatter: Nodes that structure API responses

Integration Flow
Client Request → Webhook Endpoint → Method Router → Processing Engine → Digital Signing → Storage → Response Generation → Client Response

Setup Instructions

Prerequisites
- n8n installation (minimum version 0.214.0)
- Node.js 14 or higher
- Required environment variable: NODE_FUNCTION_ALLOW_EXTERNAL: "node-forge,@signpdf/signpdf,@signpdf/signer-p12,@signpdf/placeholder-plain"

Configuration Steps
1. Import Workflow: import the workflow JSON into your n8n instance and activate it to enable the webhooks.
2. Configure Storage: set the storage path variables in the workflow and ensure proper permissions on the storage directories.
3. Test API Endpoints: use the included test scripts to verify functionality; test PDF upload, certificate generation, and signing.
4. Integration: document the webhook URLs for integration with other systems and configure error handling according to your requirements.

Testing Methods
Test the workflow functionality using various HTTP requests and JSON data:
- Upload PDF documents to the document processing endpoint
- Upload or generate digital certificates
- Execute PDF signing operations
- Download signed documents from the download endpoint

Webhook Endpoints
The workflow exposes two primary webhook endpoints that form a complete API for PDF digital signing operations.

1. Document Processing Endpoint (/webhook/docu-digi-sign)
This endpoint handles all document and certificate operations:
- Upload PDF: HTTP POST, Content-Type: multipart/form-data; parameters: method, uploadType, fileName, fileData
- Upload Certificate: HTTP POST, Content-Type: multipart/form-data; parameters: method, uploadType, fileName, fileData
- Generate Certificate: HTTP POST, Content-Type: application/json; parameters: method, subjectCN, issuerCN, serialNumber, validFrom, validTo, password
- Sign PDF: HTTP POST, Content-Type: application/json; parameters: method, inputPdf, pfxFile, pfxPassword

2. Document Download Endpoint (/webhook/docu-download)
This endpoint handles the retrieval of processed documents:
- Download Signed PDF: HTTP GET, Content-Type: application/json; parameters: method, fileType, fileName

Key Workflow Sections
The workflow is organized into logical sections with clear responsibilities:
- **Request Processing**: Parses incoming webhook data
- **Method Routing**: Directs requests to appropriate handlers
- **Document Management**: Handles file operations and storage
- **Cryptographic Operations**: Manages signing and certificate functions
- **Response Formatting**: Structures and returns results
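For reference, here is a minimal sketch of the signing step as it could be written in an n8n Code node, using the external packages allowed via NODE_FUNCTION_ALLOW_EXTERNAL. How the PDF and PFX buffers reach the node (binary properties, disk reads) depends on the actual workflow wiring, so the base64 fields below are illustrative assumptions rather than the template's exact implementation.

```javascript
// Sketch of the cryptographic signing step (assumed input fields: base64 PDF,
// base64 PFX, and the pfxPassword parameter from the Sign PDF request).
const { plainAddPlaceholder } = require('@signpdf/placeholder-plain');
const { P12Signer } = require('@signpdf/signer-p12');
const signpdf = require('@signpdf/signpdf').default;

const item = $input.first().json;
let pdfBuffer = Buffer.from(item.inputPdf, 'base64');   // PDF to sign (assumed base64)
const pfxBuffer = Buffer.from(item.pfxFile, 'base64');  // PKCS#12 certificate (assumed base64)

// Reserve space for the signature inside the PDF structure, then sign it
// with the P12/PFX certificate.
pdfBuffer = plainAddPlaceholder({ pdfBuffer, reason: 'Digitally signed via n8n' });
const signer = new P12Signer(pfxBuffer, { passphrase: item.pfxPassword });
const signedPdf = await signpdf.sign(pdfBuffer, signer);

// Hand the signed document back as base64 for the download endpoint.
return [{ json: { fileName: 'signed.pdf', fileData: signedPdf.toString('base64') } }];
```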
by Akash Kankariya
🚀 Discover trending and viral YouTube videos easily with this powerful n8n automation! This workflow helps you perform bulk research on YouTube videos related to any search term, analyzing engagement data like views, likes, comments, and channel statistics — all in one streamlined process.

✨ Perfect for:
- Content creators wanting to find viral video ideas
- Marketers analyzing competitor content
- YouTubers optimizing their content strategy

How It Works 🎯
1️⃣ Input Your Search Term — Simply enter any keyword or topic you want to research.
2️⃣ Select Video Format — Choose between short, medium, or long videos.
3️⃣ Choose Number of Videos — Define how many videos to analyze in bulk.
4️⃣ Automatic Data Fetch — The workflow grabs video IDs, then fetches detailed video data and channel statistics from the YouTube API.
5️⃣ Performance Scoring — Videos are scored based on engagement rates with easy-to-understand labels like 🚀 HOLY HELL (viral) or 💀 Dead (see the scoring sketch below).
6️⃣ Export to Google Sheets — All data, including thumbnails and video URLs, is appended to your Google Sheet for comprehensive review and easy sharing.

Setup Instructions 🛠️
Google API Key
- Get your YouTube Data API key from the Google Developers Console.
- Add it securely in the n8n credentials manager (do not hardcode it).
Google Sheets Setup
- Create a Google Sheet to store your results (a template link is provided).
- Share the sheet with the Google account used in n8n.
- Update the workflow with your sheet's Document ID and Sheet Name if needed.
Run the Workflow
- Trigger the form webhook via browser or POST call.
- Enter the search term, format, and number of videos.
- Let it process and check your Google Sheet for insights!

Features ✨
- Bulk fetches the latest and top-viewed YouTube videos.
- Intelligent video performance scoring with emojis for quick insights 🔥🎬.
- Organizes data into Google Sheets with thumbnail previews 🖼️.
- Easy to customize search parameters via an intuitive form.
- Fully automated, no manual API calls needed.

Get Started Today! 🌟
Boost your YouTube content strategy and stay ahead with this powerful viral video research automation! Try it now on your n8n instance and tap into the world of viral content like a pro 🎥💡
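To make the scoring step concrete, here is a sketch of what a performance-scoring Code node could look like. The formula, thresholds, and incoming field names are assumptions for illustration (only the labels come from the description above); tune them to your own definition of "viral".

```javascript
// Hypothetical engagement scoring: views relative to channel size plus an
// engagement rate on the video itself. Input field names are assumptions
// about the upstream YouTube API data.
return $input.all().map(item => {
  const v = item.json;
  const views = Number(v.viewCount) || 0;
  const likes = Number(v.likeCount) || 0;
  const comments = Number(v.commentCount) || 0;
  const subs = Number(v.subscriberCount) || 1;

  const viewsToSubs = views / subs;
  const engagementRate = views > 0 ? (likes + comments) / views : 0;
  const score = viewsToSubs * 0.7 + engagementRate * 100 * 0.3;

  // Illustrative thresholds only.
  let label = '💀 Dead';
  if (score >= 5) label = '🚀 HOLY HELL';
  else if (score >= 2) label = '🔥 Viral';
  else if (score >= 0.5) label = '🎬 Solid';

  item.json.performanceScore = Number(score.toFixed(2));
  item.json.performanceLabel = label;
  return item;
});
```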
by Yang
📄 What this workflow does
This workflow captures a full-page screenshot of any website added to a Google Sheet and automatically uploads the screenshot to a designated Google Drive folder. It uses Dumpling AI's screenshot API to generate the image and manages file storage through Google Drive.

👤 Who is this for
This is ideal for:
- Marketers and outreach teams capturing snapshots of client or lead websites
- Lead generation specialists tracking landing page visuals
- Researchers or analysts who need to archive website visuals from URLs
- Anyone looking to automate website screenshot collection at scale

✅ Requirements
- A Google Sheet with a column labeled Website where URLs will be added
- **Dumpling AI** API access for screenshot capture
- A connected Google Drive account with an accessible folder to store screenshots

⚙️ How to set up
1. Replace the Google Sheet and folder IDs in the workflow with your own.
2. Connect your Dumpling AI and Google credentials in n8n.
3. Make sure your sheet contains a Website column with valid URLs.
4. Activate the workflow to begin watching for new entries.

🔁 How it works (Workflow Steps)
1. Watch New Row in Google Sheets: Triggers when a new row is added to the sheet.
2. Request Screenshot from Dumpling AI: Sends the website URL to Dumpling AI and gets a screenshot URL.
3. Download Screenshot: Fetches the image file from the returned URL.
4. Upload Screenshot to Google Drive: Uploads the file to a selected folder in Google Drive.

🛠️ Customization Ideas
- Add timestamped filenames using the current date or domain name (see the sketch below).
- Append the Google Drive URL back to the same row in the sheet for easy access.
- Extend the workflow to send Slack or email notifications when screenshots are saved.
- Add filters to validate URLs before sending them to Dumpling AI.
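As a starting point for the timestamped-filenames idea, here is a small Code node sketch that builds a file name from the site's domain and the current date before the Google Drive upload. The `Website` field matches the sheet column described above; the `.png` extension is an assumption about the screenshot format.

```javascript
// Build a filename like "example.com-2024-05-01.png" for the Drive upload.
const row = $input.first().json;
const domain = new URL(row.Website).hostname.replace(/^www\./, '');
const stamp = new Date().toISOString().slice(0, 10); // YYYY-MM-DD

return [{ json: { ...row, fileName: `${domain}-${stamp}.png` } }];
```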
by Davide
This workflow automates the generation of AI-enhanced, contextualized images using FLUX Kontext, based on prompts stored in a Google Sheet. The generated images are then saved to Google Drive, and their URLs are written back to the spreadsheet for easy access.

Example prompt: "The girl is lying on the bed and sleeping" (result image omitted here).

Perfect for E-commerce and Social Media
This workflow is especially useful for e-commerce businesses:
- Generate product images with dynamic backgrounds based on the use case or season.
- Create contextual marketing visuals for ads, newsletters, or product pages.
- Scale visual content creation without the need for manual design work.

How It Works
- **Trigger**: The workflow can be started manually (via "Test workflow") or scheduled at regular intervals (e.g., every 5 minutes) using the "Schedule Trigger" node.
- **Data Fetch**: The "Get new image" node retrieves a row from a Google Sheet where the "RESULT" column is empty. It extracts the prompt, image URL, output format, and aspect ratio for processing.
- **Image Generation**: The "Create Image" node sends a request to the FLUX Kontext API (fal.run) with the provided parameters to generate a new AI-contextualized image (see the request sketch below).
- **Status Check**: The workflow waits 60 seconds ("Wait 60 sec." node) before checking the status of the image generation request via the "Get status" node. If the status is "COMPLETED," it proceeds; otherwise, it loops back to wait.
- **Result Handling**: Once completed, the "Get Image Url" node fetches the generated image URL, which is then downloaded ("Get Image File"), uploaded to Google Drive ("Upload Image"), and the Google Sheet is updated with the result ("Update result").

Set Up Steps
To configure this workflow, follow these steps:
1. Google Sheet Setup: Create a Google Sheet with columns for PROMPT, IMAGE URL, ASPECT RATIO, OUTPUT FORMAT, and RESULT (leave this last one empty). Link the sheet in the "Get new image" and "Update result" nodes.
2. API Key Configuration: Sign up at fal.ai to obtain an API key. In the "Create Image" node, set the Header Auth with Name: Authorization and Value: Key YOURAPIKEY.
3. Google Drive Setup: Specify the target folder ID in the "Upload Image" node where generated images will be saved.
4. Schedule Trigger (Optional): Adjust the "Schedule Trigger" node to run the workflow at desired intervals (e.g., every 5 minutes).
5. Test Execution: Run the workflow manually via the "Test workflow" node to verify that all steps function correctly.

Once configured, the workflow will automatically process pending prompts, generate images, and update the Google Sheet with results.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
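As a reference for the "Create Image" step, here is a sketch of how the request could be assembled from the Google Sheet row in a Code node. The endpoint URL and payload field names are placeholders to verify against the FLUX Kontext documentation on fal.ai; only the sheet columns and the Authorization header format come from the description above.

```javascript
// Map the pending sheet row onto the fields the HTTP Request node will send.
const row = $input.first().json;

return [{
  json: {
    endpoint: 'https://fal.run/...', // placeholder: use the FLUX Kontext endpoint from the fal.ai docs
    headers: { Authorization: 'Key YOURAPIKEY' }, // as configured in the Header Auth credential
    body: {
      prompt: row.PROMPT,
      image_url: row['IMAGE URL'],
      aspect_ratio: row['ASPECT RATIO'],
      output_format: row['OUTPUT FORMAT'],
    },
  },
}];
```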
by David Levesque
Dropbox Folder Monitoring Workflow

As we don't have (yet?) a Dropbox node for "watching new files" or "watching a folder", I created this central workflow to do it.

How it works
- Triggered by a Dropbox webhook.
- I respond immediately to Dropbox to avoid the webhook being disabled.
- Then I add/duplicate one branch per monitored folder, according to my needs.

In my case, I need to monitor several folders, like "vocal notes to process", "transcriptions to LinkedIn posts" or "quotes to add". This workflow shows 2 types of folder monitoring:
- Way #1: Each file in the monitored folder calls a sub-workflow.
- Way #2: We get all files from the monitored folder and compare them to a database. If a file is not listed in the DB, I assume it's a new one.

Way #1 - We get all files from the monitored folder
- I set a variable folder_to_watch to indicate which folder to monitor. This step is here just to be consistent and allow setting the folder path only once in this branch.
- I list the folder files.
- We keep only files (exclude folders).
- Then I call the specialized sub-workflow.

Way #2 - We want only new files from the monitored folder
- I set a variable folder_to_watch to indicate which folder to monitor.
- I list the folder files and keep only files.
- Meanwhile, I query my DB to get the known files for this folder (I send the query to NocoDB: (folder_to_watch,eq,{{ $json.folder_to_watch }})).
- Now I can exclude old files and keep only new ones by merging; I compare on the Dropbox file id, as the file could be renamed by the user (see the comparison sketch below).
- I add each new file to the DB so it is recognized next time. I save the JSON Dropbox data:

  {
    "id": "{{ $json.id }}",
    "name": "{{ $json.name }}",
    "lastModifiedClient": "{{ $json.lastModifiedClient }}",
    "lastModifiedServer": "{{ $json.lastModifiedServer }}",
    "rev": "{{ $json.rev }}",
    "contentSize": {{ $json.contentSize }},
    "type": "{{ $json.type }}",
    "contentHash": "{{ $json.contentHash }}",
    "pathLower": "{{ $json.pathLower }}",
    "pathDisplay": "{{ $json.pathDisplay }}",
    "isDownloadable": {{ $json.isDownloadable }}
  }

- And now I can call my sub-workflow :)

My DB
Columns details:
- folder_to_watch
- data (json/text)
- timestamp
- file_id (Dropbox file ID, to ease future searches)

My vision:
- I have only one workflow in my n8n that monitors Dropbox folders/files.
- This workflow calls the required sub-workflow specialized for the task.
- I will have as many branches as I have folders to monitor (if I have 5 different folders to watch, I will get 5 branches and 5 sub-workflows).
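Here is a sketch of the Way #2 comparison written as a Code node. The node names inside `$('...')` are placeholders for the actual "list folder files" and NocoDB query nodes; the `file_id` column matches the DB schema above.

```javascript
// Keep only Dropbox files whose id is not yet stored in the DB.
// Comparing on the Dropbox file id survives renames, since the id is stable.
const dropboxFiles = $('Dropbox - List folder files').all().map(i => i.json); // placeholder node name
const knownIds = new Set($('NocoDB - Get known files').all().map(i => i.json.file_id)); // placeholder node name

const newFiles = dropboxFiles.filter(f => !knownIds.has(f.id));
return newFiles.map(f => ({ json: f }));
```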
by John Pranay Kumar Reddy
✨ Summary
Efficiently monitor Kubernetes environments by sending only unique error logs from Grafana Loki to Slack. Reduces alert fatigue while keeping your team informed about critical log events.

🧑💻 Who's it for
- DevOps or SRE engineers running EKS/GKE/AKS
- Anyone using Grafana Loki and Promtail for centralized logging
- Teams that want Slack alerts but hate alert spam

🔍 What it does
This n8n workflow queries your Loki logs every 5 minutes, filters only the critical ones (error, timeout, exception, etc.), removes duplicate alerts within the batch, and sends clean alerts to a Slack channel with full metadata (pod, namespace, node, container, log, timestamp).

🧠 How it works
- 🕒 Schedule Trigger: runs every 5 minutes (customizable).
- 🌐 Loki HTTP Query: pulls logs from the last 10 minutes, matching keywords such as error, failed, oom, etc.
- 🧹 Log Parsing: extracts log fields (pod, container, etc.) and skips empty/malformed results.
- 🧠 Deduplication: removes repeated error messages within the query window (see the sketch below).
- 📤 Slack Notification: sends a nicely formatted message to Slack.

⚙️ Requirements
| Tool | Notes |
| --- | --- |
| Loki | Exposed internally or externally |
| Slack App | With chat:write OAuth scope |
| n8n | Cloud or self-hosted |

🔧 How to Set It Up
1. Import the JSON file into n8n.
2. Update:
   - Loki API URL (e.g., http://loki-gateway.monitoring.svc.cluster.local)
   - Slack Bearer Token (via credentials)
   - Target Slack channel (e.g., #k8s-alerts)
   - (Optional) Change keywords in the query regex
3. Activate the workflow.
4. Ensure the n8n pod/container has access to your Kubernetes cluster/pods/namespaces.

🛠 How to Customize
- Want more or fewer keywords? Adjust the regex in the Query Loki for Error Logs node.
- Need stronger deduplication logic? Enhance the Remove Duplicate Alerts node.
- Want 5-log summaries every 5 min? Fork this and add a Batch + Slack group sender.

(Screenshot: Grafana Loki logs to Slack output.)
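A minimal sketch of the "Remove Duplicate Alerts" step as an n8n Code node: keep one alert per unique (namespace, pod, message) combination within the current query window. The field names follow the metadata listed above; adjust them to your parsed Loki output.

```javascript
// Deduplicate parsed Loki alerts within this batch.
const seen = new Set();
const unique = [];

for (const item of $input.all()) {
  const { namespace, pod, log } = item.json;
  // Strip ISO timestamps inside the log line so they don't defeat deduplication.
  const normalized = (log || '').replace(/\d{4}-\d{2}-\d{2}T[\d:.]+Z?/g, '').trim();
  const key = `${namespace}|${pod}|${normalized}`;
  if (!seen.has(key)) {
    seen.add(key);
    unique.push(item);
  }
}

return unique;
```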
by David Ashby
🛠️ Demio Tool MCP Server
Complete MCP server exposing all Demio Tool operations to AI agents. Zero configuration needed; all 4 operations are pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Demio Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Demio Tool node with full error handling

📋 Available Operations (4 total)
Every possible Demio Tool operation is included:
📅 Event (3 operations)
• Get an event
• Get many events
• Register an event
🔧 Report (1 operation)
• Get a report

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Demio Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Demio Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Lucía Maio Brioso
🧑💼 Who is this for?
This workflow is for anyone with two YouTube channels who wants to copy playlists from one to the other — no technical skills required. Whether you're a content creator, hobbyist, educator, or just someone managing multiple channels, this workflow helps you save time and avoid the manual work of recreating playlists video by video.

🧠 What problem is this workflow solving?
YouTube doesn't provide an option to transfer or duplicate playlists between accounts or channels. That means if you want the same playlists in two places, you're stuck:
- Creating new playlists manually
- Searching for each video again
- Copy-pasting links one by one
This workflow automates the entire process for you — accurately, quickly, and with no manual work.

⚙️ What this workflow does
- Retrieves all playlists from a source YouTube channel (excluding private ones)
- For each playlist:
  - Gets all its videos
  - Filters out private or unavailable videos
  - Creates a new playlist in the target channel with the same title
  - Adds the videos to the new playlist
- Continues smoothly even if some videos fail to copy (e.g., if they're restricted or deleted)

🛠️ Setup
1. Create two YouTube OAuth2 credentials in n8n:
   - One for your source channel
   - One for your target channel
2. Assign the credentials to the correct nodes as indicated in the sticky notes:
   - Source nodes → source credentials
   - Target nodes → target credentials
3. Click "Test workflow" to run it.

> ⚠️ Note: If you have many playlists or videos, you may hit YouTube's API quota. You can request a quota increase in your Google Cloud Console if needed.

🧩 How to customize this workflow to your needs
- ✂️ Copy only specific playlists: use a Filter node after the playlist fetch to include only certain titles or IDs (see the sketch below).
- 📝 Change the title of the copied playlists: modify the title in the Create playlist node (e.g., add "(Copy)" or a prefix).
- 🔄 Automate it regularly: replace the Manual Trigger with a Cron node if you want to run this periodically.
- 🧪 Test safely: if you're unsure, use a secondary channel as your test target before applying changes to your main account.
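For the "copy only specific playlists" customization, here is a small Code node sketch that can stand in for the Filter node after the playlist fetch. The allowed titles are examples, and the `snippet.title` path assumes the node returns standard YouTube Data API playlist resources; adjust it if your node output is flattened.

```javascript
// Keep only the playlists whose titles are on the allow-list.
const allowed = ['Tutorials', 'Live Streams']; // example titles to copy

return $input.all().filter(item => allowed.includes(item.json.snippet?.title));
```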
by Airtop
Monitoring Job Changes on LinkedIn

Use Case
This automation tracks job changes among your LinkedIn connections and extracts relevant details. It's ideal for triggering timely outreach, updating CRM records, or feeding lead scoring workflows based on new roles.

What This Automation Does
It scrapes your LinkedIn "Job Changes" feed and returns:
- Name of the person
- Their new position
- LinkedIn profile URL
- Functional category (e.g., marketing, sales, HR, executive)
Each run processes 5 job changes at a time.

How It Works
1. Manual Trigger: starts the workflow when the user clicks "Test workflow."
2. Airtop Enrichment: navigates to the LinkedIn job changes page and extracts name, new_position, linkedin_profile_url, and position_function (a classification such as marketing, sales, HR, etc.).
3. Formatting: output is structured into clean JSON for use in further workflows (see the sketch below).

Setup Requirements
- Airtop Profile connected to LinkedIn
- Airtop API key configured in n8n
- A LinkedIn account with a populated "Job Changes" feed

Next Steps
- **Automate Alerts**: Add Slack, email, or CRM integrations to notify your team.
- **Enrich and Score Leads**: Chain this with your ICP scoring workflow to evaluate new roles.
- **Customize Scope**: Expand extraction to more than 5 job changes or add filters based on job titles or functions.

Read more about Monitoring Job Changes on LinkedIn.
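A sketch of the formatting step: map the Airtop extraction result into the clean JSON fields listed above. The `modelResponse` property and its inner `job_changes` array are assumptions about the Airtop node output; adapt the parsing to what the node actually returns.

```javascript
// Turn the raw extraction payload into one item per job change.
const extracted = JSON.parse($input.first().json.modelResponse); // assumed response field

return (extracted.job_changes || []).slice(0, 5).map(p => ({
  json: {
    name: p.name,
    new_position: p.new_position,
    linkedin_profile_url: p.linkedin_profile_url,
    position_function: p.position_function,
  },
}));
```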
by MattF
This workflow generates a weekly performance summary from Google Search Console, focused on brand-level SEO metrics and week-over-week trends. It provides a structured view of how each brand segment is performing, with clean formatting for quick insights.

Key Features
- Sends a weekly email with a table showing clicks, impressions, CTR, and position — along with % change vs. the previous week.
- Highlights both brand and non-brand clicks separately.
- Color-coded % changes make it easy to spot wins (green) and losses (red) at a glance.
It's designed to give SEO teams a consistent overview of performance by brand, helping to track directional shifts and support deeper analysis when needed.

How it works
- Runs weekly (e.g., every Monday) to compare "Last Week" vs. "2 Weeks Ago" from GSC data.
- Includes both a brand and non-brand click breakdown.
- Calculates raw values and week-over-week % change for clicks, impressions, CTR, and position (see the sketch below).
- Outputs a clean, formatted table with labeled rows and color-coded changes.
- Sends the table as part of a scheduled email (can also be adapted for Slack or other channels).

Setup steps
- Requires connected Google Search Console data (per brand segment).
- Email delivery is included by default (customizable to other platforms).
- Update the brand segmentation logic to match your tracking needs (e.g., domain, label, or custom filters).
- Typical setup time: ~5-10 minutes with structured input data.
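To illustrate the comparison logic, here is a Code node sketch that computes the week-over-week % change per metric and picks the green/red color for the email table. The `lastWeek` and `twoWeeksAgo` objects are assumed to be pre-aggregated GSC rows for one brand segment, not the template's exact data shape.

```javascript
// Compare "Last Week" vs. "2 Weeks Ago" for one brand segment.
const lastWeek = $input.first().json.lastWeek;       // assumed: { clicks, impressions, ctr, position }
const twoWeeksAgo = $input.first().json.twoWeeksAgo; // assumed: same shape

function change(current, previous) {
  if (!previous) return null;
  return ((current - previous) / previous) * 100;
}

const metrics = ['clicks', 'impressions', 'ctr', 'position'].map(metric => {
  const pct = change(lastWeek[metric], twoWeeksAgo[metric]);
  // For position, lower is better, so invert the win/loss colouring.
  const improved = metric === 'position' ? pct < 0 : pct > 0;
  return {
    metric,
    current: lastWeek[metric],
    previous: twoWeeksAgo[metric],
    changePct: pct === null ? 'n/a' : pct.toFixed(1) + '%',
    color: pct === null ? 'grey' : improved ? 'green' : 'red',
  };
});

return [{ json: { metrics } }];
```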
by Darien Kindlund
If you have multiple users managing workflows, there may come a time when a user "accidentally" turns off a workflow. Or, if you have workflows that automatically turn off other workflows, that code might "accidentally" turn off the wrong one. In either case, here's a workflow that can attempt to auto-start accidentally disabled workflows.

How it works:
- Once activated, the workflow runs every 4 hours and searches all other workflows for the auto_resume:true tag.
- If any other workflow has the auto_resume:true tag but is currently turned off, this workflow turns it back on (see the sketch below).

Of course, this watchdog won't work if the watchdog workflow itself is turned off. That said, we've found it useful for recovering from accidental actions that cause production workflows to be turned off.
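For reference, here is a sketch of the watchdog logic written against the n8n public API from a Code node, assuming the API is enabled and an API key is available (the actual template may use the built-in n8n node instead). The base URL and key are placeholders.

```javascript
// Find inactive workflows tagged auto_resume:true and turn them back on.
const baseUrl = 'http://localhost:5678/api/v1';      // placeholder: your n8n instance URL
const headers = { 'X-N8N-API-KEY': 'YOUR_N8N_API_KEY' }; // placeholder credential

const { data } = await this.helpers.httpRequest({
  method: 'GET',
  url: `${baseUrl}/workflows`,
  qs: { tags: 'auto_resume:true', active: 'false' },
  headers,
  json: true,
});

const resumed = [];
for (const wf of data || []) {
  // Re-activate the workflow.
  await this.helpers.httpRequest({
    method: 'POST',
    url: `${baseUrl}/workflows/${wf.id}/activate`,
    headers,
    json: true,
  });
  resumed.push({ id: wf.id, name: wf.name });
}

return [{ json: { resumedCount: resumed.length, resumed } }];
```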