Translate and dub spokesperson videos using Anthropic and deAPI

Who is this for?

- Marketing teams localizing video content for international markets
- E-commerce brands creating product videos for multiple regions
- Agencies producing multilingual ad campaigns for global clients
- Educators and trainers adapting video courses for different language audiences
- Anyone who wants to localize a spokesperson video without re-filming

What problem does this solve?

Localizing a video for a new market usually means hiring a local presenter, re-filming the entire video, or settling for subtitles that nobody reads. This workflow takes an existing spokesperson video, transcribes it, translates the speech into a target language, generates dubbed audio, and produces a lip-synced talking-head video with a locally-relevant face — all without a camera or a casting call.

What this workflow does

1. Reads the original spokesperson video and a reference image of the local presenter in parallel
2. Transcribes the video's audio to text using deAPI (Whisper Large V3)
3. Extracts the raw transcript text from the transcription result
4. AI Agent translates the transcript into the target language, preserving tone and pacing
5. Generates dubbed speech in the target language using deAPI text-to-speech (Qwen3 TTS Custom Voice)
6. Generates a lip-synced talking-head video from the dubbed audio using deAPI audio-to-video generation (LTX-2.3 22B), with the local presenter image as the first frame
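The node sequence above can be sketched as a simple pipeline. This is an illustrative outline of the data flow only: `localize_video` and its callable parameters (`transcribe`, `translate`, `synthesize`, `lip_sync`) are hypothetical stand-ins for the deAPI and Anthropic nodes, not real API calls.

```python
# Hypothetical sketch of the workflow's data flow. The callable parameters
# stand in for the deAPI and Anthropic nodes; none of these are real APIs.
def localize_video(video_path, presenter_image, target_language,
                   transcribe, translate, synthesize, lip_sync):
    """Mirror the node order: transcribe -> translate -> TTS -> lip-sync."""
    transcript = transcribe(video_path)                     # Whisper Large V3 step
    translated = translate(transcript, target_language)     # AI Agent step
    dubbed_audio = synthesize(translated, target_language)  # Qwen3 TTS step
    return lip_sync(dubbed_audio, presenter_image)          # LTX-2.3 22B step
```

Each stage consumes only the previous stage's output plus one static input, which is why the video read and the presenter-image read can run in parallel before the pipeline starts.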

Setup

Requirements

- deAPI account for transcription, TTS, and video generation
- Anthropic account for the AI Agent (translation)
- A spokesperson video
- A reference image of the local presenter (JPG, JPEG, PNG, GIF, BMP, WebP; max 10 MB)
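A quick pre-flight check on the reference image can save a failed run. This is a minimal sketch based only on the formats and 10 MB cap listed above; the function name is ours, not part of deAPI or n8n.

```python
from pathlib import Path

# Illustrative pre-flight check for the presenter reference image,
# using the allowed formats and 10 MB limit stated in the requirements.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".bmp", ".webp"}
MAX_BYTES = 10 * 1024 * 1024  # 10 MB

def presenter_image_ok(path: str, size_bytes: int) -> bool:
    """Return True if the file extension and size fit the stated limits."""
    return Path(path).suffix.lower() in ALLOWED_EXTENSIONS and size_bytes <= MAX_BYTES
```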

Installing the deAPI Node

**n8n Cloud**: Go to **Settings → Community Nodes** and install n8n-nodes-deapi
**Self-hosted**: Go to **Settings → Community Nodes** and install n8n-nodes-deapi

Configuration

1. Add your deAPI credentials (API key + webhook secret)
2. Add your Anthropic credentials (API key)
3. Update the File Path in the "Read Source Video" node to point to your spokesperson video
4. Update the File Path in the "Read Local Presenter Image" node to point to the reference image
5. Edit the Set Fields node to set the target language (e.g., "Spanish", "Japanese", "French")
6. Ensure your n8n instance is served over HTTPS
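The target language set in the Set Fields node feeds the AI Agent's translation step. As a hedged illustration of what that agent prompt might look like (this is not the template's actual prompt, and `build_translation_prompt` is a name we made up):

```python
# Illustrative prompt builder for the translation step. The wording is an
# assumption; the workflow's real AI Agent prompt may differ.
def build_translation_prompt(transcript: str, target_language: str) -> str:
    return (
        f"Translate the following spokesperson script into {target_language}. "
        "Preserve the original tone and pacing so the dubbed audio matches "
        "the video's timing.\n\n"
        f"Script:\n{transcript}"
    )
```

Keeping pacing instructions in the prompt matters because the dubbed audio must fit roughly the same duration as the original speech for the lip-sync step to look natural.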

How to customize this workflow

- **Change the AI model**: Swap Anthropic for OpenAI, Google Gemini, or any other LLM provider for translation
- **Change the TTS model**: Switch Qwen3 TTS Custom Voice for Kokoro or Chatterbox for different voice characteristics
- **Use voice cloning**: Replace the Generate Speech node with Clone a Voice to preserve the original speaker's voice in the target language
- **Batch processing**: Replace the Manual Trigger with a Google Sheets or Airtable trigger containing rows for each target language and local presenter image
- **Add delivery**: Append a Gmail, Slack, or Google Drive node to automatically deliver the localized video
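The batch-processing customization amounts to looping one localization run per spreadsheet row. A minimal sketch, assuming each row carries a target language and a presenter image (`localize_batch` and `localize_one` are hypothetical names, not n8n nodes):

```python
# Hedged sketch of the batch-processing customization: one localization run
# per row of (language, presenter image). localize_one stands in for a single
# execution of the workflow and is not a real API.
def localize_batch(video_path, rows, localize_one):
    """Produce one localized video per row, in row order."""
    outputs = []
    for row in rows:
        outputs.append(localize_one(video_path, row["presenter_image"], row["language"]))
    return outputs
```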

Author: deAPI Team
Created: 4/5/2026
Updated: 4/14/2026
