by Arjan ter Heegde
## n8n Placeholdarr for Plex (BETA)

This flow creates dummy files for every item added in your *Arrs (Radarr/Sonarr) with the tag `unprocessed-dummy`. It's useful for maintaining a large Plex library without needing the actual movies or shows to be present on your Debrid provider.

### How It Works

When a dummy file is played, the corresponding item is automatically monitored in *Arr and added to the download queue. This ensures that the content becomes available for playback within ~3 minutes. If the content finishes downloading while the dummy is still being played, Tautulli triggers a webhook that stops the stream and notifies the user.

### Requirements

- Each n8n node must have the correct URL and authorization headers configured.
- The SSH host (used to create dummy files) must have FFmpeg installed.
- A Trakt.TV API key is required if you're using Trakt collections.

### Warning

> ⚠️ This flow is currently in BETA and under active development.
> It is not recommended for users without technical experience.
> Keep an eye on the GitHub repository for updates.

https://github.com/arjanterheegde/n8n-workflows-for-plex
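The workflow does not document the exact FFmpeg invocation it runs over SSH, but a typical way to produce a small stand-in video is to render a short black clip with a silent audio track. The sketch below builds such a command as a string; the resolution, duration, and output path are illustrative assumptions, not the workflow's actual settings.

```python
import shlex


def dummy_file_command(path: str, seconds: int = 10) -> str:
    """Build a hypothetical FFmpeg command that renders a short black clip
    with silent audio, to stand in for a not-yet-downloaded movie or episode."""
    args = [
        "ffmpeg", "-y",
        # black video source (lavfi test pattern), 720p, fixed duration
        "-f", "lavfi", "-i", f"color=c=black:s=1280x720:d={seconds}",
        # silent stereo audio track so players treat it as a normal file
        "-f", "lavfi", "-i", f"anullsrc=r=44100:cl=stereo:d={seconds}",
        "-shortest", path,
    ]
    return " ".join(shlex.quote(a) for a in args)


cmd = dummy_file_command("/media/movies/Dummy.mkv")
```

The resulting string can be executed on the SSH host that has FFmpeg installed.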
by Cooper
## DMARC Reporter

Gmail and Yahoo send DMARC reports as .zip or .gz XML attachments that can be hard to read. This workflow unpacks them on a schedule, turns the data into a simple table, and emails you an easy-to-read report.

DMARC insights at a glance:

- Confirm that your published policy is correct and consistent.
- Quickly spot unknown or suspicious IPs trying to send as you.
- Distinguish between legitimate high-volume senders (e.g. your ESP) and one-off or small-scale abuse.
- Easily confirm that your legitimate servers are authenticating correctly, and detect spoofed mail that fails DKIM/SPF.

### Who is this for?

- Email marketing teams (Mailchimp, Sensorpro, Omnisend users)
- Compliance teams

### Customize

- Adjust the Gmail node to include other DMARC reporters by changing the search params.
- If you're not using Gmail, you can use any of the n8n email nodes.
- To keep a record, add an Airtable node after the Set node.
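The core of the workflow is unpacking the gzipped aggregate report and flattening each `<record>` into a table row. A minimal sketch of that step, using a trimmed sample report (the real schema defined in RFC 7489 has more fields, such as `header_from` and the policy block):

```python
import gzip
import xml.etree.ElementTree as ET

# Trimmed-down DMARC aggregate report for illustration.
SAMPLE = b"""<?xml version="1.0"?>
<feedback>
  <record>
    <row>
      <source_ip>203.0.113.7</source_ip>
      <count>42</count>
      <policy_evaluated><dkim>fail</dkim><spf>pass</spf></policy_evaluated>
    </row>
  </record>
</feedback>"""


def parse_dmarc_report(gz_bytes: bytes) -> list[dict]:
    """Unpack a gzipped DMARC aggregate report and flatten each <record>
    into a simple row dict, as the workflow's table step does."""
    root = ET.fromstring(gzip.decompress(gz_bytes))
    rows = []
    for rec in root.iter("record"):
        row = rec.find("row")
        rows.append({
            "source_ip": row.findtext("source_ip"),
            "count": int(row.findtext("count")),
            "dkim": row.findtext("policy_evaluated/dkim"),
            "spf": row.findtext("policy_evaluated/spf"),
        })
    return rows


report = parse_dmarc_report(gzip.compress(SAMPLE))
```

Rows where `dkim` and `spf` both fail for an unfamiliar `source_ip` are the spoofing candidates the report highlights.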
by Oneclick AI Squad
### Description

Automates weekly checks for broken links on a website. Scans the site using HTTP requests, filters broken links, sends Slack alerts for detected broken URLs, and creates a list for tracking.

### Essential Information

- Runs weekly to monitor website link integrity.
- Identifies broken links and notifies the team via Slack.
- Generates a list of broken links for further action.

### System Architecture

**Link Checking Pipeline**:
- Weekly Cron Trigger: Schedules the workflow to run weekly.
- Scan Blog with HTTP: Performs HTTP GET requests to check website links.

**Alert and Tracking Flow**:
- Filter Broken Links: Identifies and separates broken links.
- Send Slack Alert: Notifies the team via Slack about broken URLs.
- Create Broken Links List: Compiles a list of broken links.

**Non-Critical Handling**:
- No Action for Valid Links: Skips valid links with no further action.

### Implementation Guide

1. Import the workflow JSON into n8n.
2. Configure the HTTP node with the target website URL (e.g., https://yourblog.com).
3. Set up Slack credentials for alerts.
4. Test the workflow with a sample website scan.
5. Monitor link checking accuracy and adjust HTTP settings if needed.

### Technical Dependencies

- HTTP request capability for link scanning.
- Slack API for team notifications.
- n8n for workflow automation and scheduling.

### Database & Sheet Structure

No specific database or sheet is required; the workflow relies on HTTP response data. Example payload: `{"url": "https://yourblog.com/broken", "status": 404, "time": "2025-07-29T20:21:00Z"}`

### Customization Possibilities

- Adjust the Cron trigger to run at a different frequency (e.g., daily).
- Customize the HTTP node to scan specific pages or domains.
- Modify Slack alert messages in the Send Slack Alert node.
- Enhance the Create Broken Links List node to save results to a Google Sheet or Notion.
- Add email notifications for additional alert channels.
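The Filter Broken Links step reduces to splitting HTTP check results by status code. A minimal sketch of that logic, using the example payload shape above (treating any status of 400 or higher as broken is an assumption; redirects and timeouts may need their own handling):

```python
def filter_broken(results: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split HTTP check results into broken and healthy links,
    mirroring the workflow's Filter Broken Links node."""
    broken = [r for r in results if r["status"] >= 400]
    ok = [r for r in results if r["status"] < 400]
    return broken, ok


checks = [
    {"url": "https://yourblog.com/", "status": 200},
    {"url": "https://yourblog.com/broken", "status": 404},
]
broken, ok = filter_broken(checks)
```

The `broken` list feeds the Slack alert and the tracking list; the `ok` list takes the no-action branch.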
by curseduca.com
## 📘 Curseduca – User Creation & Access Group Assignment

### How it works

This workflow automates the process of creating a new user in Curseduca and granting them access to a specific access group. It works in two main steps:

1. **Webhook** – Captures user details (name, email, and group information).
2. **HTTP Request** – Sends the data to the Curseduca API, creating the user, assigning them to the correct access group, and sending an email notification.

### Setup steps

1. **Deploy the workflow**: Copy the webhook URL generated by n8n, then send a POST request with the required fields: name, email, groupId.
2. **Configure API access**: Add your API Key and Bearer token in the HTTP Request node headers (replace the placeholders). Replace `<GroupId>` in the body with the correct group ID.
3. **Notifications**: By default, the workflow triggers an email notification to the user once their account is created.

### Example use cases

- **Landing pages**: Automatically register leads who sign up on a product landing page and grant them immediate access to a course, training, or bundle.
- **Product bundles**: Offer multiple products or services together and instantly give access to the correct group after purchase.
- **Chatbot integration**: Connect tools like **Manychat** to capture name and email via chatbot conversations and create the user directly in Curseduca.
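The POST body the webhook expects, and the headers the HTTP Request node forwards, can be sketched as below. The header names (`api_key` alongside the Bearer token) are assumptions based on the setup notes; consult the Curseduca API documentation for the exact schema.

```python
def build_user_request(name: str, email: str, group_id: int,
                       api_key: str, token: str) -> dict:
    """Assemble the request the workflow forwards to the Curseduca API.
    Header names and body fields beyond name/email/groupId are hypothetical."""
    return {
        "headers": {
            "api_key": api_key,                    # hypothetical header name
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": {"name": name, "email": email, "groupId": group_id},
    }


req = build_user_request("Ada Lovelace", "ada@example.com", 7, "my-key", "my-token")
```

A landing page or chatbot only needs to POST the three body fields to the webhook URL; the workflow supplies the credentials.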
by Stéphane Heckel
## Copy n8n workflows to a slave n8n repository

Inspired by Alex Kim's workflow, this version adds the ability to keep multiple versions of the same workflow on the destination instance. Each copied workflow's name is prefixed with the date (YYYY_MM_DD_), enabling simple version tracking. Process details and workflow counts are recorded centrally in Notion.

### How it works

- Workflows from the source n8n instance are copied to the destination using the n8n API node.
- On the destination, each workflow name is prefixed with the current date (e.g., 2025_08_03_PDF Summarizer), so you can keep multiple daily versions.
- The workflow tracks and saves the date of execution and the number of workflows processed; both are recorded in Notion.

Rolling retention policy example:

- **Day 1:** Workflows are saved with the 2025_08_03_ prefix.
- **Day 2:** A new set is saved with 2025_08_04_.
- **Day 3:** Day 1's set is deleted; a new set is saved as 2025_08_05_.

To keep more days, adjust the "Subtract From Date" node.

### How to use

1. Create a Notion database with one page and three fields:
   - sequence: should contain "prefix".
   - Value: today's date as YYYY_MM_DD_.
   - Comment: number of saved workflows.
2. Configure the Notion node: enter your Notion credentials and link to the created database/page.
3. Update the "Subtract From Date" node: set how many days' versions you want to keep (default: 2 days).
4. Set the limit to 1 in the "Limit" node for testing.
5. Enter credentials for both source and destination n8n instances.

### Requirements

- **Notion** for tracking execution date and workflow count.
- **n8n API keys** for both source and destination instances. Ensure you have the necessary **API permissions** (read, create, delete workflows).
- **n8n version**: this workflow was tested on 1.103.2 (Ubuntu).

### Need help?

Comment on this post, contact me on LinkedIn, or ask in the Forum!
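The naming and retention scheme above boils down to two small operations: prepending the run date to each workflow name, and deciding, from the name's prefix, whether a copy has aged out of the retention window. A sketch under those assumptions:

```python
from datetime import date, timedelta


def prefixed_name(name: str, on: date) -> str:
    """Prefix a copied workflow's name with the run date (YYYY_MM_DD_)."""
    return on.strftime("%Y_%m_%d_") + name


def is_expired(workflow_name: str, today: date, keep_days: int = 2) -> bool:
    """True if the name's date prefix falls outside the retention window,
    i.e. what the 'Subtract From Date' node decides (default: keep 2 days)."""
    cutoff = (today - timedelta(days=keep_days)).strftime("%Y_%m_%d_")
    return workflow_name[:11] <= cutoff  # the prefix is always 11 characters


copy_name = prefixed_name("PDF Summarizer", date(2025, 8, 3))
```

On day 3 (2025-08-05) with the default of 2 days, the 2025_08_03_ set is expired while the 2025_08_04_ set is kept, matching the rolling example above.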
by Fahmi Fahreza
## Auto-clip long videos into viral short clips using Vizard AI

This workflow turns long-form YouTube or video URLs into short, high-viral-potential clips, then automatically publishes them to social platforms and logs results in Google Sheets.

### Who's it for?

Content creators, social media managers, and marketers who want to scale short-form video production automatically.

### How it works

1. Collect a video URL and viral score via form or schedule.
2. Create a clipping project in Vizard AI.
3. Poll project status until processing is complete.
4. Filter clips by viral score and limit quantity.
5. Publish selected clips and log results to Google Sheets.

### How to set up

1. Connect Vizard AI, Google Sheets, and social credentials.
2. Configure thresholds and limits in the Set Configuration node, then activate the workflow.
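Step 4 (filter by viral score and limit quantity) can be sketched as a small selection function. The field name `viralScore` and the default threshold and limit are assumptions standing in for the Set Configuration node's values:

```python
def select_clips(clips: list[dict], min_score: int = 80, limit: int = 3) -> list[dict]:
    """Keep clips at or above the viral-score threshold, best first,
    capped at `limit` clips, mirroring the workflow's filter + limit step."""
    kept = sorted(
        (c for c in clips if c["viralScore"] >= min_score),
        key=lambda c: c["viralScore"],
        reverse=True,
    )
    return kept[:limit]


clips = [
    {"id": 1, "viralScore": 91},
    {"id": 2, "viralScore": 65},
    {"id": 3, "viralScore": 84},
]
picked = select_clips(clips, min_score=80, limit=2)
```

Only the `picked` clips proceed to publishing and the Google Sheets log.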
by Davide
This automated workflow generates a video featuring a talking AI avatar from a single image and automatically publishes it to TikTok with Postiz. The process chains two main AI services: ElevenLabs v3 and Infinitalk.

### Key Benefits

- ✅ **Full Automation** – From text input to TikTok publication, the process is completely automated.
- ✅ **Time-Saving** – Eliminates manual video editing, voice-over recording, and social media posting.
- ✅ **Scalable** – Can generate multiple avatar videos daily with minimal effort.
- ✅ **Customizable** – Flexible inputs (image, voice, text, prompts) allow adaptation for different content types (weather forecasts, product promos, tutorials, etc.).
- ✅ **Engagement-Oriented** – Uses AI to optimize video titles for TikTok, increasing chances of visibility and audience interaction.
- ✅ **Consistent Branding** – Ensures uniform style and messaging across multiple video posts.

### How It Works

1. **Text-to-Speech (TTS) Generation**: The workflow starts by sending a predefined text script and a selected voice (e.g., "Alice") to the Fal.ai service, which uses ElevenLabs' AI to generate a high-quality audio file. The workflow then polls the API until the audio generation is COMPLETED and retrieves the URL of the generated audio file.
2. **Talking Avatar Video Generation**: The workflow takes a static image URL and the newly created audio URL and sends them to another Fal.ai service (Infinitalk). This AI model animates the avatar in the image to lip-sync the provided audio. A prompt guides the avatar's expression (e.g., "You are a girl giving a weather forecast and you must be expressive"). The workflow again polls for status until the video is COMPLETED.
3. **Title Generation & Publishing**: Once the video is ready, its URL is fetched. Simultaneously, an OpenAI (GPT-4o-mini) node generates an optimized, engaging title (under 60 characters) for the TikTok post based on the original script and avatar prompt. The final video file is downloaded and uploaded to Postiz (a social media scheduling service), which posts it to a pre-configured TikTok account.

### Set Up Steps

Before executing this workflow, configure the following third-party service credentials and node parameters within n8n:

1. **Fal.ai API Credentials**: Create an account on Fal.ai and obtain an API key. Create a new credential of type "HTTP Header Auth" in n8n named "Fal.run API". Set the Header Name to Authorization and the Value to `Key <YOUR_FAL_AI_API_KEY>`.
2. **OpenAI API Credentials**: You need an OpenAI API key. Create a credential in n8n of type "OpenAI API", name it (e.g., "OpenAi account"), and enter your API key.
3. **Postiz API Credentials**: Create an account on Postiz, connect your TikTok account, and get your API key from the Postiz dashboard. In n8n, create an "HTTP Header Auth" credential named "Postiz". Set the Header Name to X-API-Key and the Value to your Postiz API key. Also create a "Postiz API" credential in n8n and enter the same API key.
4. **Configure the Postiz Node**: In the "TikTok" (Postiz) node, replace "XXX" in the integrationId field with the actual ID of your connected TikTok account from your Postiz dashboard.
5. **(Optional) Customize Inputs**: You can modify the default values in the "Set text input" node (the script and voice) and the "Set Video Params" node (the image_url and the prompt for the avatar's expression) to create different videos without changing the workflow's structure.

Need help customizing? Contact me for consulting and support, or add me on LinkedIn.
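The two HTTP Header Auth credentials described above have different shapes, which is easy to mix up. A small sketch of the headers each service expects, per the setup steps (the API key values are placeholders):

```python
def fal_headers(api_key: str) -> dict:
    """Headers for Fal.ai requests, as configured in the 'Fal.run API'
    credential: the Authorization header carries 'Key <api key>' (not Bearer)."""
    return {"Authorization": f"Key {api_key}", "Content-Type": "application/json"}


def postiz_headers(api_key: str) -> dict:
    """Headers for Postiz requests: the key goes in X-API-Key."""
    return {"X-API-Key": api_key, "Content-Type": "application/json"}


fal = fal_headers("FAL_KEY_PLACEHOLDER")
postiz = postiz_headers("POSTIZ_KEY_PLACEHOLDER")
```

Getting the `Key ` prefix or the header name wrong is a common cause of 401 responses when wiring these nodes up.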
by Gegenfeld
This workflow automatically removes backgrounds from images using the APImage API. Simply provide an image URL, and the workflow will process it through AI-powered background removal, then download the processed image for use in your projects.

### Who's it for

This template is perfect for:

- E-commerce businesses needing clean product images
- Content creators who need transparent background images
- Marketing teams processing large batches of images
- Developers building image processing applications
- Anyone who regularly needs background-free images

### How it works

The workflow uses APImage's AI-powered background removal service to automatically detect and remove backgrounds from images. You provide an image URL through a form interface, the API processes the image using advanced AI algorithms, and returns a clean image with the background removed. The processed image is then downloaded and ready for use.

### How to set up

1. **Get your APImage API key**: Sign in to the APImage Dashboard 🡥 (or create a new APImage account) and copy your API key from the dashboard.
2. **Configure the API connection**: Double-click the APImage Integration node and replace YOUR_API_KEY with your actual API key (keep the Bearer prefix).
3. **Test the workflow**: Click the Remove Background form trigger, enter an image URL in the form, and submit to process the image.
4. **Set up an output destination (optional)**: Add nodes after the Download node to save images to your preferred storage. Options include Google Drive, Dropbox, databases, or cloud storage.

### Requirements

- n8n instance (cloud or self-hosted)
- APImage 🡥 account and valid API key
- Images accessible via public URLs for processing

### How to customize the workflow

- **Replace the input source**: Swap the Form Trigger with data from other sources, such as database queries (MySQL, PostgreSQL, SQLite), cloud storage (Google Drive, Dropbox, S3), other APIs or webhooks, or Airtable, Notion, and other productivity tools.
- **Add output destinations**: Connect additional nodes after the Download step to save processed images to cloud storage services (Google Drive, Dropbox, S3), databases for organized storage, content management systems, social media platforms, or email attachments.
- **Batch processing**: Modify the workflow to process multiple images by connecting it to data sources that provide arrays of image URLs.
- **Add image validation**: Include nodes to validate image URLs or file formats before processing to avoid API errors.

### Workflow Structure

Form Trigger → APImage Integration → Download → [Your Output Destination]

The Form Trigger collects image URLs, APImage Integration processes the background removal via API, Download retrieves the processed image, and you can add any output destination for the final images.

### API Details

The workflow sends a POST request to https://apimage.org/api/ai-remove-background with:

- **Authorization header:** Your API key
- **image_url:** The URL of the image to process
- **async:** Set to false for immediate processing

The processed image is returned with a transparent background and downloaded automatically.
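The API details above can be sketched as the request the APImage Integration node assembles. This builds the request dict only (no network call), using the endpoint, Bearer prefix, and body fields stated in the description:

```python
def remove_background_request(image_url: str, api_key: str) -> dict:
    """Assemble the POST the workflow sends to APImage's
    background-removal endpoint, per the API Details section."""
    return {
        "url": "https://apimage.org/api/ai-remove-background",
        "headers": {"Authorization": f"Bearer {api_key}"},  # keep the Bearer prefix
        "json": {
            "image_url": image_url,
            "async": False,  # false = wait for the processed image in the response
        },
    }


req = remove_background_request("https://example.com/product.jpg", "YOUR_API_KEY")
```

The same dict maps one-to-one onto the HTTP Request node's URL, header, and body fields.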
by Yaron Been
## Generate Images with Realistic Inpainting using Simbrams Ri AI

This n8n workflow integrates with Replicate's simbrams/ri model to generate images. It takes an input image and mask, applies transformations based on your parameters, and returns the final generated output automatically.

### 📌 Section 1: Trigger & Authentication

**⚡ On Clicking 'Execute' (Manual Trigger)**
- **Purpose**: Starts the workflow manually.
- **Benefit**: Useful for testing and running on demand.

**🔑 Set API Key (Set Node)**
- **Purpose**: Stores your **Replicate API key** inside the workflow.
- **Benefit**: Keeps credentials secure and ensures other nodes can reuse them.

### 📌 Section 2: Sending the Image Generation Request

**🖼️ Create Prediction (HTTP Request Node)**
- **Purpose**: Sends a POST request to Replicate's API to start generating an image.
- **Input parameters**:
  - image: input image URL
  - mask: mask image URL
  - seed: randomness control (for reproducibility)
  - steps: number of refinement steps
  - strength: intensity of modification (0–1)
  - blur_mask: whether to blur the mask edges
  - merge_m_s: whether to merge the mask with the source
- **Benefit**: Gives full control over how the model modifies your image.

**🆔 Extract Prediction ID (Code Node)**
- **Purpose**: Extracts the **Prediction ID**, status, and URL from Replicate's response.
- **Benefit**: Required to check the status of the generation later.

### 📌 Section 3: Polling & Waiting

**⏳ Wait (Wait Node)**
- **Purpose**: Pauses the workflow for 2 seconds before rechecking.
- **Benefit**: Prevents hitting Replicate's API too quickly.

**🔄 Check Prediction Status (HTTP Request Node)**
- **Purpose**: Checks whether the prediction is complete using the stored Prediction ID.
- **Benefit**: Automates monitoring of job progress.

**✅ Check If Complete (If Node)**
- **Purpose**: Decides if the prediction has finished.
- **Paths**: True → sends the result to processing. False → loops back to Wait and keeps checking.
- **Benefit**: Ensures the workflow only ends when a valid image is ready.

### 📌 Section 4: Processing the Result

**📦 Process Result (Code Node)**
- **Purpose**: Cleans up the completed API response and extracts the status, output (final generated image), metrics, created & completed timestamps, model name (simbrams/ri), and final image URL.
- **Benefit**: Delivers a structured and ready-to-use result for display, storage, or further automation.

### 📊 Workflow Overview Table

| Section | Node Name | Purpose |
| --- | --- | --- |
| 1. Trigger & Auth | On Clicking 'Execute' | Starts the workflow manually |
| | Set API Key | Stores API credentials |
| 2. AI Request | Create Prediction | Sends image generation request |
| | Extract Prediction ID | Extracts ID + status for tracking |
| 3. Polling | Wait | Adds delay between checks |
| | Check Prediction Status | Monitors job progress |
| | Check If Complete | Routes based on job completion |
| 4. Result | Process Result | Extracts and cleans the final output |

### 🎯 Key Benefits

- 🔐 Secure authentication with API key management.
- 🖼️ Custom image generation with parameters like mask, strength, and steps.
- 🔄 Automatic polling ensures results are fetched only when ready.
- 📦 Clean structured output with the final image URL for easy use.
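The Wait → Check Status → If loop of Sections 3 and 4 can be sketched as a poll-until-done function. The status fetch is injected as a callable so the sketch runs offline; Replicate predictions report terminal statuses such as `succeeded`, `failed`, and `canceled`:

```python
def poll_prediction(get_status, max_tries: int = 30) -> dict:
    """Repeatedly fetch a prediction until it reaches a terminal status,
    mirroring the Wait / Check Prediction Status / Check If Complete loop.
    `get_status` stands in for the GET on the prediction's status URL."""
    for _ in range(max_tries):
        pred = get_status()
        if pred["status"] in ("succeeded", "failed", "canceled"):
            return pred
        # the real workflow pauses ~2 s here (Wait node) before re-checking
    raise TimeoutError("prediction did not finish in time")


# Simulated sequence of API responses for the offline sketch.
responses = iter([
    {"status": "processing"},
    {"status": "processing"},
    {"status": "succeeded", "output": ["https://example.com/out.png"]},
])
result = poll_prediction(lambda: next(responses))
```

Only the terminal `result` reaches the Process Result step, which then extracts the final image URL from `output`.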
by Țugui Dragoș
### How it works

This workflow checks the health of your web services or APIs on a schedule, prevents false alerts with a second verification, and sends confirmed failure alerts directly to Slack.

- Performs scheduled HTTP health checks
- Waits and retries before confirming failure
- Sends alerts only if the service fails twice in a row
- Reduces false positives and avoids alert fatigue

### Setup steps

1. Add your service URL(s) in the HTTP Request nodes
2. Configure your Slack Bot Token in n8n
3. Deploy the workflow
4. Get real-time Slack alerts when services go down 🚨

### Use case

Perfect for IT teams, DevOps engineers, and developers who need reliable uptime monitoring without noise.
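The false-positive guard above reduces to: alert only when the check fails and a re-check also fails. A sketch of that decision, with the health check injected as a callable so it runs offline (the real workflow waits between the two probes):

```python
def should_alert(check, retries: int = 1) -> bool:
    """Confirm a failure before alerting: re-run the health check and
    alert only if every attempt fails, as in the wait-and-retry step."""
    if check():          # first probe passed: service is healthy
        return False
    for _ in range(retries):
        if check():      # recovered on re-check: treat as a transient blip
            return False
    return True          # failed twice in a row: send the Slack alert


flaky = iter([False, True])    # fails once, then recovers -> no alert
down = iter([False, False])    # fails both probes -> alert
flaky_result = should_alert(lambda: next(flaky))
down_result = should_alert(lambda: next(down))
```

The transient blip takes the silent path; only the confirmed outage triggers the Slack notification.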
by David Soden
## Extract and Upload Files from Zip to Google Drive

### How it works

This workflow automatically extracts all files from an uploaded zip archive and uploads each file individually to Google Drive.

Flow:

1. User submits a zip file via form
2. Zip file is temporarily saved to disk (workaround for a compression node limitation)
3. Zip file is read back and decompressed
4. Split Out node separates each file into individual items
5. Each file is uploaded to Google Drive with its original filename

Key features:

- Handles zip files with any number of files dynamically
- Preserves original filenames from inside the zip
- No hardcoded file counts - works with 1 or 100 files

### Set up steps

1. **Connect Google Drive**: Add your Google Drive OAuth2 credentials to the "Upload to Google Drive" node
2. **Select destination folder**: In the Google Drive node, choose which folder to upload files to (default is root)
3. **Update temp path (optional)**: Change the temporary file path in the "Read/Write Files from Disk" node if needed (default: c:/temp_n8n.zip)

### Requirements

- Google Drive account and OAuth2 credentials
- Write access to the local filesystem for temporary zip storage

### Tags

automation, file processing, google drive, zip extraction, file upload
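Steps 3 and 4 of the flow (decompress, then split into one item per file) can be sketched as follows; the archive is built in memory here so the sketch is self-contained, whereas the workflow round-trips it through disk:

```python
import io
import zipfile


def extract_items(zip_bytes: bytes) -> list[dict]:
    """Expand a zip archive into one item per file, keeping original
    filenames, as the decompress + Split Out steps do before the
    per-file Google Drive upload."""
    items = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if not name.endswith("/"):  # skip directory entries
                items.append({"fileName": name, "data": zf.read(name)})
    return items


# Build a small sample archive in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("docs/b.txt", "world")
items = extract_items(buf.getvalue())
```

Each resulting item carries its own filename and bytes, so the upload node needs no hardcoded file count.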