by Gilbert Onyebuchi
Complete YouTube video automation workflow that creates ready-to-upload videos from start to finish. No manual editing required.

How it works:
This n8n automation fetches stock videos from Pixabay, generates AI-powered voiceover scripts with OpenAI, creates professional narration using ElevenLabs text-to-speech, merges all clips with beautiful transitions using Shotstack rendering (a render-payload sketch follows below), and automatically uploads your finished video to Google Drive.

What you'll achieve:
- Create 5-10 minute videos automatically
- Generate unlimited faceless YouTube content
- Save hours of manual video editing
- Build a consistent content pipeline
- Scale your YouTube channel effortlessly

Requirements:
- Pixabay API (free tier available)
- ElevenLabs API (text-to-speech)
- Shotstack API (video rendering)
- OpenAI API (script generation)
- Google Drive API credentials

Perfect for content creators, YouTube automation, educational channels, social media marketers, and faceless channel owners.

📧 Questions? Need customization? Connect with me on LinkedIn: Click here
👀 Check out my other automation workflows on my n8n creator profile for more productivity tools!
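For reference, here is a minimal sketch of how an n8n Code node could assemble the Shotstack render payload for the merging step. The field names videoUrl and voiceoverUrl are illustrative assumptions, and the timeline/output shape follows Shotstack's edit API; verify against the current Shotstack docs before relying on it.

```javascript
// n8n Code node sketch ("Run Once for All Items"): build a Shotstack render payload.
// videoUrl and voiceoverUrl are assumed field names from earlier nodes.
const items = $input.all();
const voiceoverUrl = items[0].json.voiceoverUrl; // ElevenLabs narration file

const clips = items.map((item, i) => ({
  asset: { type: 'video', src: item.json.videoUrl }, // Pixabay clip URL
  start: i * 5, // play clips back to back, 5 s each
  length: 5,
  transition: { in: 'fade', out: 'fade' },
}));

return [{
  json: {
    timeline: {
      soundtrack: { src: voiceoverUrl, effect: 'fadeOut' },
      tracks: [{ clips }],
    },
    output: { format: 'mp4', resolution: 'hd' },
  },
}];
```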
by Budi SJ
Automated Brand DNA Generator Using JotForm, Google Search, AI Extraction & Notion

The Brand DNA Generator workflow automatically scans and analyzes online content to build a company's Brand DNA profile. It starts with input from a form, then crawls the company's website and Google search results to gather relevant information. Using AI-powered extraction, the system identifies insights such as value propositions, ideal customer profiles (ICP), pain points, proof points, brand tone, and more. All results are neatly formatted and automatically saved to a Notion database as a structured Brand DNA report, eliminating the need for manual research.

🛠️ Key Features
- Automated data capture: collects company data directly from form submissions and Google search results.
- AI-powered insight extraction: uses LLMs to extract and summarize brand-related information from website content.
- Fetches clean text from multiple web pages using HTTP requests and a content extractor.
- Merges extracted data from multiple sources into a single Brand DNA JSON structure (see the sketch after this section).
- Automatically creates a new page in Notion with formatted sections (headings, paragraphs, and bullet points).
- Handles parsing failures and processes multiple pages efficiently in batches.

🔧 Requirements
- JotForm API Key, to capture company data from form submissions.
- SerpAPI Key, to perform automated Google searches.
- OpenRouter / LLM API, for AI-based language understanding and information extraction.
- Notion Integration Token & Database ID, to save the final Brand DNA report to Notion.

🧩 Setup Instructions
1. Connect your JotForm account and select the form containing the fields Company Name and Company Website.
2. Add your SerpAPI Key.
3. Configure the AI model using OpenRouter or another LLM provider.
4. Enter your Notion credentials and specify the databaseId in the Create a Database Page node.
5. Customize the prompt in the Information Extractor node to modify the tone or structure of the AI analysis (optional).
6. Activate the workflow, then submit data through the JotForm to test automatic generation and Notion integration.

💡 Final Output
A complete Brand DNA Report containing:
- Company Description
- Ideal Customer Profile
- Pain Points
- Value Proposition
- Proof Points
- Brand Tone
- Suggested Keywords

All generated automatically from the company's online presence and stored in Notion with no manual input required.
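As a reference point, here is a minimal Code node sketch of the merge step that combines per-page extractions into one Brand DNA object. The per-page field names (companyDescription, icp, painPoints, and so on) are illustrative assumptions; match them to whatever keys your Information Extractor prompt produces.

```javascript
// n8n Code node sketch ("Run Once for All Items"): merge per-page extractions
// into a single Brand DNA object. Key names mirror the report sections above.
const pages = $input.all().map(i => i.json);

const merged = {
  companyDescription: pages.map(p => p.companyDescription).filter(Boolean).join(' '),
  idealCustomerProfile: [...new Set(pages.flatMap(p => p.icp || []))],
  painPoints: [...new Set(pages.flatMap(p => p.painPoints || []))],
  valueProposition: pages.find(p => p.valueProposition)?.valueProposition ?? '',
  proofPoints: [...new Set(pages.flatMap(p => p.proofPoints || []))],
  brandTone: pages.find(p => p.brandTone)?.brandTone ?? '',
  suggestedKeywords: [...new Set(pages.flatMap(p => p.keywords || []))],
};

return [{ json: merged }];
```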
by Jimleuk
Generating contextual summaries is a token-intensive approach to RAG embeddings which can quickly rack up costs if your inference provider charges by token usage. Featherless.ai is an inference provider with a different pricing model - they charge a flat subscription fee (starting from $10) and allow unlimited token usage instead. If you're typically spending over $10 - $25 a month, you may find Featherless to be a cheaper and more manageable option for your projects or team. For this template, Featherless's unlimited token usage is well suited to generating contextual summaries at high volumes for a majority of RAG workloads.

LLM: moonshotai/Kimi-K2-Instruct
Embeddings: models/gemini-embedding-001

How it works
- A large document is imported into the workflow using the HTTP node and its text extracted via the Extract from file node. For this demonstration, the UK highway code is used as an example.
- Each page is processed individually and a contextual summary is generated for it. The contextual summary generation involves taking the current page together with the preceding and following pages, then summarising the contents of the current page.
- This summary is then converted to embeddings using the gemini-embedding-001 model. Note, we're using an HTTP request to call the Gemini embedding API because, at the time of writing, n8n does not support the new API's schema (see the sketch after this section).
- These embeddings are then stored in a Qdrant collection which can then be retrieved via an agent/MCP server or another workflow.

How to use
- Replace the large document import with your own source of documents such as Google Drive or an internal repo.
- Replace the manual trigger if you want the workflow to run as soon as documents become available. If you're using Google Drive, check out my Push notifications for Google Drive template.
- Expand and/or tune embedding strategies to suit your data. You may want to additionally embed the content itself and perform multi-stage queries using both.

Requirements
- Featherless.ai account and API key
- Gemini account and API key for embeddings
- Qdrant vector store

Customising this workflow
- Sparse vectors were not included in this template due to scope but should be the next step to getting the most out of contextual retrieval.
- Be sure to explore other models on the Featherless.ai platform or host your own custom/finetuned models.
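For the raw embedding call mentioned above, here is a plain Node.js sketch of the HTTP request the workflow performs. The endpoint and body shape follow the Generative Language API's embedContent method as documented at the time of writing; treat it as a sketch and verify against the current Gemini docs.

```javascript
// Plain Node.js sketch of the Gemini embedding call made via the HTTP Request node.
// Requires Node 18+ for built-in fetch; keep the API key in credentials/env.
const apiKey = process.env.GEMINI_API_KEY;

async function embedSummary(text) {
  const res = await fetch(
    'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', 'x-goog-api-key': apiKey },
      body: JSON.stringify({
        model: 'models/gemini-embedding-001',
        content: { parts: [{ text }] },
      }),
    },
  );
  const data = await res.json();
  return data.embedding.values; // the vector to store in Qdrant
}
```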
by Guy
🎯 General Principles
This workflow automates the import of leads into the Company table of a CRM built with Airtable. Its originality lies in leveraging the new "Data Table" node (an internal table within n8n) to generate an execution report.

📚 Why Data Tables: This approach eliminates the need for reading/writing operations on a Google Sheet file or an external database.

🧩 It is structured around 3 main key steps:
1. Reading leads for which email address validity has been verified.
2. Creating or updating company information.
3. Generating the execution report.

This workflow enables precise tracking of marketing actions while facilitating the historical record of interactions with prospects and clients.

Prerequisites
- Leads file: a prior validation check on email address accuracy is required.
- Airtable: must contain at least a Company table with the following fields:
  - Company: company name
  - Business Leader: name of the executive
  - Activity: business sector (notary, accountant, plumber, electrician, etc.)
  - Address: main company address
  - Zip Code: postal code
  - City: city
  - Phone Number: phone number
  - Email: email address of a manager
  - URL Site: company website URL
  - Opt-in: company's consent for commercial prospecting
  - Campaign: reserved for future marketing campaigns
  - Valid Email: indicator confirming email verification

⚙️ Step-by-Step Description

1️⃣ Initialization and Lead Selection
- Data Table initialization: an internal n8n table is created to build the execution report.
- Lead selection: the workflow selects leads from the Google Sheet file (Sheet1 tab) where "Valid Email" is equal to OK.

2️⃣ Iterative Loop
- Company existence check: the Search Company node is configured with Always Output Data enabled.
- A JavaScript Code node distinguishes three possibilities (see the sketch after this section):
  - Company does not exist: create a new record and increment the created-records counter.
  - Company exists once: update the record and increment the updated-records counter.
  - Company appears multiple times: log the issue in the Leads file under the Logs tab, requiring a data quality procedure.

3️⃣ Execution Report Generation
An execution report is generated and emailed, for example:

Leads Import Report:
- Number of records read: 2392
- Number of records created: 2345
- Number of records updated: 42

If the sum of records created and updated differs from the total records read, it indicates the presence of duplicates. A counter for duplicated companies could be added.

✅ Benefits of this template
- Exception management and logging: identification and traceability of inconsistencies during import, with dedicated logs for issues.
- Data quality and structuring: built-in checks for duplicate detection, validation, and mapping to ensure accurate analysis and compliance.
- Automated reporting: systematic production and delivery of a detailed execution report covering records read, created, and updated.

📬 Contact
Need help customizing this (e.g., expanding Data Tables, connecting multiple surveys, or automating follow-ups)?
📧 smarthome.smartelec@gmail.com
🔗 guy.salvatore
🌐 smarthome-smartelec.fr
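A minimal sketch of what that Code node's three-way branching could look like; the matches field name is an illustrative assumption for however your Search Company node returns its results.

```javascript
// n8n Code node sketch ("Run Once for Each Item") of the three-way branch above.
// Assumes Search Company results arrive as an array in $json.matches.
const matches = $json.matches ?? [];
let route;

if (matches.length === 0) {
  route = 'create'; // new company: create a record, increment the created counter
} else if (matches.length === 1) {
  route = 'update'; // known company: update the record, increment the updated counter
} else {
  route = 'log_duplicate'; // several hits: log to the Logs tab for data cleaning
}

return { json: { ...$json, route } };
```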
by Madame AI
Scrape & Import Products to Shopify from Any Site (with Variants & Images), Optimized for Shoes

This advanced n8n template automates e-commerce operations by scraping product data (including variants and images) from any URL and creating fully detailed products in your Shopify store. This workflow is essential for dropshippers, e-commerce store owners, and anyone looking to quickly import product catalogs from specific websites into their Shopify store.

Self-Hosted Only
This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works
- The workflow reads a list of product page URLs from a Google Sheet. Your sheet, with its columns for Product Name and Product Link, acts as a database for your workflow.
- The Loop Over Items node processes products one URL at a time.
- Two BrowserAct nodes run sequentially to scrape all product details, including the name, price, description, sizes, and image links.
- A custom Code node transforms the raw scraped data (where fields like sizes might be a single string) into a structured JSON format with clean lists for sizes and images (see the sketch after this section).
- The Shopify node creates the base product entry using the main details.
- The workflow then uses a series of nodes (Set Option and Add Option via HTTP Request) to dynamically add product options (e.g., "Shoe Size") to the new product.
- The workflow intelligently uses HTTP Request nodes to perform two crucial bulk tasks: create a unique variant for each available size, including a custom SKU, and upload all associated product images from their external URLs to the product.
- A final Slack notification confirms the batch has been processed.

Requirements
- **BrowserAct** API account for web scraping
- **BrowserAct** "Bulk Product Scraping From (URLs) and uploading to Shopify (Optimized for shoe - NIKE -> Shopify)" template
- **BrowserAct** n8n community node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for the input list
- **Shopify** credentials (API access token) to create and update products, variants, and images
- **Slack** credentials (optional) for notifications

Need Help?
- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase
Automate Shoe Scraping to Shopify Using n8n, BrowserAct & Google Sheets
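A minimal sketch of that transformation step; the incoming field names (name, price, sizes, image_links) are illustrative assumptions to adapt to your BrowserAct output.

```javascript
// n8n Code node sketch ("Run Once for Each Item") of the transformation step:
// split the scraped comma-separated strings into clean lists.
const raw = $json;

return {
  json: {
    title: raw.name,
    price: parseFloat(String(raw.price ?? '').replace(/[^\d.]/g, '')),
    description: raw.description,
    sizes: String(raw.sizes ?? '')
      .split(',')
      .map(s => s.trim())
      .filter(Boolean),
    images: String(raw.image_links ?? '')
      .split(',')
      .map(u => u.trim())
      .filter(u => u.startsWith('http')),
  },
};
```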
by Naitik Joshi
🚀 AI-Powered LinkedIn Post Generator with Automated Image Creation

📋 Overview
Transform any topic into professional LinkedIn posts with AI-generated content and custom images! This workflow automates the entire process from topic input to published LinkedIn post, including professional image generation using Google's Imagen 4 API.

✨ Key Features
- 🤖 AI Content Generation: uses Google Gemini to create engaging LinkedIn posts
- 🎨 Professional Image Creation: automatically generates images using Google Imagen 4
- 📱 Direct LinkedIn Publishing: posts content and images directly to your LinkedIn feed
- 🔄 Form-Based Input: simple web form to submit topics
- 📝 Content Formatting: converts markdown to LinkedIn-friendly format

🔧 What This Workflow Does
1. 📝 Form Submission: user submits a topic through a web form
2. 🗺️ Data Mapping: maps the topic for AI processing
3. 🧠 AI Content Generation: Google Gemini creates post content and an image prompt
4. 🎯 Content Normalization: cleans and formats the AI output
5. 🖼️ Image Generation: creates professional images using Google Imagen 4
6. 📤 LinkedIn Registration: registers the image upload with the LinkedIn API
7. 🔄 Binary Conversion: converts the base64 image to a binary buffer
8. ⬆️ Image Upload: uploads the image to LinkedIn
9. 📋 Content Curation: converts markdown to LinkedIn format
10. ⏳ Processing Wait: ensures the image is fully processed
11. 🚀 Post Publishing: publishes the complete post to LinkedIn

🛠️ Prerequisites & Setup

🔑 Required Credentials

1. LinkedIn OAuth 2.0 Setup 🔗
You'll need to create a LinkedIn app with the following OAuth 2.0 scopes:
- ✅ openid - Use your name and photo
- ✅ profile - Use your name and photo
- ✅ w_member_social - Create, modify, and delete posts, comments, and reactions on your behalf
- ✅ email - Use the primary email address associated with your LinkedIn account

Steps to get LinkedIn credentials:
1. Go to the LinkedIn Developer Portal
2. Create a new app or use an existing one
3. Configure OAuth 2.0 settings with the scopes above
4. Get your access token from the authentication flow

2. Google Cloud Platform Setup ☁️
Required GCP services to enable:
- 🎯 Vertex AI API - for Imagen 4 image generation
- 🔐 Cloud Resource Manager API - for project management
- 🛡️ IAM Service Account Credentials API - for authentication

Steps to get a GCP token:
1. Install the Google Cloud SDK
2. Authenticate: gcloud auth login
3. Set project: gcloud config set project YOUR_PROJECT_ID
4. Get access token: gcloud auth print-access-token

> 💡 Note: The access token expires after 1 hour. For production use, consider using service account credentials.
🔧 n8n Node Credentials Setup
- LinkedIn OAuth2 API: configure with your LinkedIn app credentials
- HTTP Bearer Auth (LinkedIn): use your LinkedIn access token
- HTTP Bearer Auth (Google Cloud): use your GCP access token
- Google Gemini API: configure with your Google AI API key

📊 Workflow Structure

```
graph LR
  A[📝 Form Trigger] --> B[🗺️ Mapper]
  B --> C[🤖 AI Agent]
  C --> D[🎯 Normalizer]
  D --> E[🖼️ Text to Image]
  E --> F[📤 Register Upload]
  F --> G[🔄 Binary Converter]
  G --> H[⬆️ Upload Image]
  H --> I[📋 Content Curator]
  I --> J[⏳ Wait]
  J --> K[🚀 Publish to LinkedIn]
```

🎨 Image Generation Details
The workflow uses Google Imagen 4 with these parameters:
- 📐 Aspect Ratio: 1:1 (perfect for LinkedIn)
- 🎯 Sample Count: 1 image generated
- 🛡️ Safety Setting: block few (content filtering)
- 💧 Watermark: enabled
- 🌍 Language: auto-detect

📝 Content Processing
The AI generates content in this JSON structure:

```
{
  "post_content": {
    "text": "Your engaging LinkedIn post content with hashtags"
  },
  "image_prompt": {
    "description": "Professional image generation prompt"
  }
}
```

🔄 LinkedIn API Integration

Image upload process:
1. Register Upload: creates an upload session with LinkedIn
2. Binary Upload: uploads the image as binary data
3. Post Creation: creates the post with text and image reference

API endpoints used (see the sketch after this section):
- 📤 POST /v2/assets?action=registerUpload - register image upload
- 📝 POST /v2/ugcPosts - create LinkedIn post

⚠️ Important Notes
- 🕐 Rate limits: LinkedIn has API rate limits - monitor your usage
- ⏱️ Processing time: image generation can take 10-30 seconds
- 🔄 Token refresh: GCP tokens expire hourly in development
- 📏 Content length: LinkedIn posts have character limits
- 🖼️ Image size: generated images are optimized for LinkedIn

🚀 Getting Started
1. Import the workflow into your n8n instance
2. Configure all credentials as described above
3. Enable required GCP services in your project
4. Test the form trigger with a sample topic
5. Monitor the execution for any errors
6. Adjust the AI prompt if needed for your content style

🛠️ Customization Options
- 🎨 Modify image style in the system prompt
- 📝 Adjust content tone in the AI agent configuration
- 🔄 Change wait time between upload and publish
- 🎯 Add content filters for brand compliance
- 📊 Include analytics tracking for post performance

💡 Tips for Best Results
- 🎯 Be specific with your topic inputs
- 🏢 Use professional language for business content
- 🔍 Review generated content before publishing
- 📈 Monitor engagement to refine your prompts
- 🔄 Test thoroughly before production use

🐛 Troubleshooting
Common issues:
- ❌ "Invalid credentials": check token expiration
- ❌ "Image upload failed": verify LinkedIn API permissions
- ❌ "Content generation error": check Gemini API quota
- ❌ "Post creation failed": ensure proper wait time after image upload

📚 Additional Resources
- 📖 LinkedIn Marketing API Documentation
- 🤖 Google Vertex AI Imagen Documentation
- 🔧 n8n Documentation
- 🚀 Google Gemini API Guide

💬 Need Help? Join the n8n community forum or check the troubleshooting section above!
🌟 Found this useful? Give it a star and share your improvements with the community!
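Returning to the LinkedIn API integration above, here is a hedged Node.js sketch of the registerUpload call. The payload shape follows LinkedIn's v2 Assets API documentation; PERSON_URN is a placeholder for your member ID, and you should verify the schema against LinkedIn's current docs.

```javascript
// Node.js sketch of POST /v2/assets?action=registerUpload (payload per LinkedIn's
// v2 Assets API). Replace PERSON_URN and the token with your own values.
const token = process.env.LINKEDIN_ACCESS_TOKEN;

const reg = await fetch('https://api.linkedin.com/v2/assets?action=registerUpload', {
  method: 'POST',
  headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({
    registerUploadRequest: {
      recipes: ['urn:li:digitalmediaRecipe:feedshare-image'],
      owner: 'urn:li:person:PERSON_URN',
      serviceRelationships: [
        { relationshipType: 'OWNER', identifier: 'urn:li:userGeneratedContent' },
      ],
    },
  }),
}).then(r => r.json());

// The response carries the upload URL for the binary PUT and the asset URN
// that the later /v2/ugcPosts body references as the image media.
const uploadUrl =
  reg.value.uploadMechanism['com.linkedin.digitalmedia.uploading.MediaUploadHttpRequest']
    .uploadUrl;
const assetUrn = reg.value.asset;
```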
by Dahiana
Description
Who's it for: Content creators, marketers, and businesses who publish on both YouTube and blog platforms.

What it does: Monitors your YouTube channel for new videos and automatically creates SEO-optimized blog posts using AI, then publishes to WordPress or Webflow.

How it works:
1. RSS Feed Trigger polls YouTube videos (every X amount of time; see the feed URL sketch after this section)
2. Extracts video metadata (title, description, thumbnail)
3. YouTube node extracts the full description for extra context
4. Uses OpenAI (you can choose any model) to generate a 600-800 word blog post
5. Publishes to WordPress and/or Webflow with error handling
6. Sends notifications to Telegram if publishing fails

Requirements:
- YouTube channel ID (avoid tutorial channels for better results)
- OpenAI API key (or similar)
- WordPress or Webflow credentials
- Telegram bot (optional, for error notifications)

Setup steps:
1. Replace YOUR_CHANNEL_ID in the RSS Feed Trigger
2. Add OpenAI credentials in the AI generation node
3. Configure WordPress and/or Webflow credentials
4. Add a Telegram bot for error notifications (optional). If you choose to set up Telegram, you need to input your channel ID.
5. Test with a manual execution first

Customization:
- Modify the AI prompt for different content styles
- Adjust polling frequency (30-60 minutes recommended)
- Add more CMS platforms
- Add content verification (is the content longer than 600 characters? If not, improve it)
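For reference, this is the per-channel Atom feed the RSS Feed Trigger polls; YOUR_CHANNEL_ID is the same placeholder mentioned in the setup steps, and the exact metadata field names depend on the feed parser.

```javascript
// YouTube exposes an Atom feed per channel; the RSS Feed Trigger polls this URL.
// Replace YOUR_CHANNEL_ID with the "UC..." ID from your channel's advanced settings.
const feedUrl = 'https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID';

// Each feed entry carries the title, link, and publish date; the description
// and thumbnail live in the entry's media:group block.
```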
by noda
Price Anomaly Detection & News Alert (Marketstack + HN + DeepL + Slack)

Overview
This workflow monitors a stock's closing price via Marketstack. It computes a 20-day moving average and standard deviation (±2σ). If the latest close is outside ±2σ, it flags an anomaly, fetches related headlines from Hacker News, translates them to Japanese with DeepL, and posts both original and translated text to Slack. When no anomaly is detected, it sends a concise "normal" report.

How it works
1. Daily trigger at 09:00 JST
2. Marketstack: fetch EOD data
3. Code: compute mean/σ and classify (normal/high/low; a sketch follows this section)
4. IF: anomaly? → yes = news path / no = normal report
5. Hacker News: search related items
6. DeepL: translate EN → JA
7. Slack: send bilingual notification

Requirements
- Marketstack API key
- DeepL API key
- Slack OAuth2 (bot token / channel permission)

Notes
- Edit the ticker in Get Stock Data.
- Adjust N (days) and k (sigma multiplier) in Calculate Deviation.
- Keep credentials out of HTTP nodes (use n8n Credentials).
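A minimal sketch of the Calculate Deviation Code node, assuming the Marketstack EOD response arrives in $json.data sorted newest first (adjust the slicing if your request sorts ascending).

```javascript
// n8n Code node sketch ("Run Once for Each Item") of the ±kσ check.
// N and k match the defaults described above; tune them in one place here.
const closes = $json.data.map(d => d.close);
const N = 20;
const k = 2;

const window = closes.slice(1, N + 1); // the 20 closes preceding the latest one
const mean = window.reduce((a, b) => a + b, 0) / window.length;
const sd = Math.sqrt(
  window.reduce((a, b) => a + (b - mean) ** 2, 0) / window.length,
);

const latest = closes[0];
let status = 'normal';
if (latest > mean + k * sd) status = 'high';
else if (latest < mean - k * sd) status = 'low';

return { json: { latest, mean, sd, status } };
```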
by Rahul Joshi
Description
Automatically detect customer churn risks from Zendesk tickets, log them into Google Sheets for tracking, and send instant Slack alerts to your customer success team. This workflow helps you spot unhappy customers early and take proactive action to reduce churn. 🚨📊💬

What This Template Does
- Fetches Zendesk tickets daily on schedule (8:00 PM). ⏰
- Processes and formats ticket data into clean JSON (priority, age, urgency; a sketch follows this section). 🧠
- Identifies churn risks based on negative satisfaction ratings. ⚠️
- Logs churn risk tickets into Google Sheets for analysis and reporting. 📈
- Sends formatted Slack alerts with ticket details to the CS team channel. 📢

Key Benefits
- Detects unhappy customers before they churn. 🚨
- Centralized churn tracking for reporting and team reviews. 🧾
- Proactive alerts to reduce response delays. ⏱️
- Clean, structured ticket data for analytics and filtering. 🔄
- Strengthens customer success strategy with real-time visibility. 🌐

Features
- Schedule Trigger – runs every weekday at 8:00 PM. 🗓️
- Zendesk Integration – fetches all tickets automatically. 🎫
- Smart Data Processing – adds ticket age, urgency, and priority mapping. 🧮
- Churn Risk Filter – flags tickets with negative satisfaction scores. 🚩
- Google Sheets Logging – saves churn risk details with metadata. 📊
- Slack Alerts – sends formatted messages with ID, subject, rating, and action steps. 💬

Requirements
- n8n instance (cloud or self-hosted)
- Zendesk API credentials with ticket read access
- Google Sheets OAuth2 credentials with write permissions
- Slack Bot API credentials with channel posting permissions
- Pre-configured Google Sheet for churn risk logging

Target Audience
- Customer Success teams monitoring churn risk. 👩‍💻
- SaaS companies tracking customer health. 🚀
- Support managers who want proactive churn alerts. 🛠️
- SMBs improving retention through automation. 🏢
- Remote CS teams needing instant notifications. 🌐

Step-by-Step Setup Instructions
1. Connect your Zendesk, Google Sheets, and Slack credentials in n8n. 🔑
2. Update the Schedule Trigger (default: daily at 8:00 PM) if needed. ⏰
3. Replace the Google Sheet ID with your churn risk tracking sheet. 📊
4. Confirm the Slack channel ID for alerts (default: zendesk-churn-alerts). 💬
5. Adjust the churn filter logic (default: satisfaction_score = "bad"). 🎯
6. Run a test to fetch Zendesk tickets and validate the Sheets + Slack outputs. ✅
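A minimal sketch of the processing step; the ticket field names (created_at, priority, satisfaction_rating) follow Zendesk's Tickets API, while the urgency thresholds are illustrative assumptions you can tune.

```javascript
// n8n Code node sketch ("Run Once for All Items"): derive age, urgency, and the
// churn-risk flag per ticket. Thresholds here are illustrative defaults.
return $input.all().map(({ json: t }) => {
  const ageDays = Math.floor((Date.now() - new Date(t.created_at)) / 86_400_000);
  const urgency =
    t.priority === 'urgent' || ageDays > 7 ? 'high'
    : t.priority === 'high' ? 'medium'
    : 'low';
  return {
    json: {
      id: t.id,
      subject: t.subject,
      ageDays,
      urgency,
      churnRisk: t.satisfaction_rating?.score === 'bad', // the filter condition above
    },
  };
});
```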
by Sk developer
🚀 All-In-One Video Downloader to Google Drive (via RapidAPI Best All-In-One Video Downloader)

Description:
This n8n workflow automates the process of downloading videos from any supported platform (like LinkedIn, Facebook, or Instagram) using the RapidAPI Best All-In-One Video Downloader. It then uploads the video to your Google Drive and shares it publicly, while logging any failures in Google Sheets for tracking.

📦 Node-by-Node Breakdown

| 🧩 Node Name | 📝 One-Line Explanation |
|---|---|
| On form submission | Triggers the workflow when a user submits a video URL through a web form. |
| All in one video downloader | Sends a POST request to the RapidAPI Best All-In-One Video Downloader to fetch downloadable video links. |
| If | Checks whether the API response includes an error and routes accordingly. |
| Download mp4 | Downloads the video using the direct media URL received from the API. |
| Upload To Google Drive | Uploads the MP4 file to a designated folder in your Google Drive. |
| Google Drive Set Permission | Makes the uploaded file publicly shareable with a viewable link. |
| Wait | Adds a short delay before logging errors to prevent duplicate entries. |
| Google Sheets Append Row | Logs failed download attempts with the original URL and status as N/A. |

✅ Benefits of This Flow
- 🔁 End-to-End Automation: from user input to shareable video link—no manual steps required.
- 🌐 Supports Multiple Platforms: the RapidAPI Best All-In-One Video Downloader supports sites like Instagram, Facebook, Twitter, LinkedIn, and more.
- ⚠️ Smart Error Handling: automatically logs failed download attempts into Google Sheets for retry or audit.
- ☁️ Cloud Ready: videos are stored in Google Drive with instant public access.
- 📊 Trackability: logs failures, timestamps, and source URLs for easy debugging or analytics.
- 🧩 Modular Setup: easily expand this in n8n to include Slack notifications, email alerts, or tagging.

🔁 Use Cases
- 🎬 Social Media Video Archiving: download and store content (Reels, posts, stories) into Drive for future use.
- 🧑‍🏫 Educational Sharing: teachers can collect useful videos and share links with students.
- 📚 Content Curation: bloggers or content managers can create a media archive from multiple platforms.
- 🤝 Team Automation: teams submit links, and the workflow handles the download and Drive share-link generation.
- 📉 Error Tracking for Ops: failed URLs are tracked in Google Sheets for retry, monitoring, or debugging.

🧠 Final Thoughts
This workflow leverages the power of n8n and the RapidAPI Best All-In-One Video Downloader to create a fully automated pipeline for capturing video content from across the web. It's ideal for educators, marketers, content curators, or developers who want to streamline video storage and access using Google Drive.

🔑 How to Get an API Key from RapidAPI Best All-In-One Video Downloader
Follow these steps to get your API key and start using it in your workflow:
1. Visit the API page: 👉 Click here to open Best All-In-One Video Downloader on RapidAPI
2. Log in or sign up: use your Google, GitHub, or email account to sign in. If you're new, complete a quick sign-up.
3. Subscribe to a pricing plan: go to the Pricing tab on the API page, select a plan (free or paid, depending on your needs), and click Subscribe.
4. Access your API key: navigate to the Endpoints tab, look for the X-RapidAPI-Key under Request Headers, and copy the value shown — this is your API key.
5. Use the key in your workflow: in your n8n workflow (HTTP Request node), replace
   "x-rapidapi-key": "your key"
   with
   "x-rapidapi-key": "YOUR_ACTUAL_API_KEY"

✅ You're now ready to use the Best All-In-One Video Downloader with your automated workflows!
by Sabrina Ramonov 🍄
Description
This AI Agent Carousel Maker uses ChatGPT and Blotato to write, generate, and auto-post social media carousels to 5 social platforms: Instagram, TikTok, Facebook, Twitter, and Pinterest. Simply chat with the AI agent, confirm which prebuilt viral carousel template you want to use, then the AI agent populates the template with your personalized information and quotes and posts to social media on autopilot.

Who Is This For?
This is perfect for entrepreneurs, small businesses, content creators, digital marketing agencies, social media marketing agencies, and influencers.

How It Works
1. Chat: AI Agent Carousel Maker
- Chat with the AI agent about your desired carousel
- Confirm the quotes and carousel template to use

2. Carousel Generation
- The AI agent calls the corresponding Blotato tool to generate the carousel
- Wait and fetch the completed carousel

3. Publish to Social Media via Blotato
- Choose your social accounts
- Either post immediately or schedule for later

Setup
1. Sign up for OpenAI API access and create a credential
2. Sign up for Blotato.com
3. Generate a Blotato API key by going to Settings > API > Generate API Key (paid feature only)
4. Create a Blotato credential
5. If you're using n8n, ensure you have "Verified Community Nodes" enabled in your n8n Admin Panel. Then, install the "Blotato" verified community node.
6. Click "Open chat" to test the workflow
7. Complete the SETUP sticky notes in BROWN in this template

Tips & Tricks
- AFTER your first successful run, open each carousel template tool call (i.e., the pink nodes attached to AI Agent Carousel Maker) and tweak the parameters, but DO NOT change the "quotes" parameter unless you're an n8n expert.
- When adding a new template, DO NOT duplicate an existing node. Instead, click '+ Tool' > Blotato Tool > Video > Create > select new template. This ensures template parameters are correctly loaded.
- While testing: enable only 1 social platform, and deactivate the rest. Add the optional parameter 'scheduledTime' so that you don't accidentally post to social media. Check your content calendar here: https://my.blotato.com/queue/schedules

📄 Documentation
Full Tutorial

Troubleshooting
- Check the Blotato API Dashboard and logs to review requests, responses, and errors.
- Verify template parameters and the n8n node configuration if runs fail.
- You can also: view all video/carousel templates available, and check how your carousels look.

Need Help?
In the Blotato web app, click the orange button on the bottom right corner. This opens the Support messenger where I help answer technical questions.

Connect with me: LinkedIn | YouTube
by Tomohiro Goto
🧠 How it works
This workflow automatically translates messages between Japanese and English inside Slack — perfect for mixed-language teams.

In our real-world use case, our 8-person team includes Arif, an English-speaking teammate from Indonesia, while the rest mainly speak Japanese. Before using this workflow, our daily chat often included:
- "Can someone translate this for Arif?"
- "I don't understand what Arif wrote — can someone summarize it in Japanese?"
- "I need to post this announcement in both languages, but I don't know the English phrasing."

This workflow fixes that communication gap without forcing anyone to change how they talk. Built with n8n and Google Gemini 2.5 Flash, it automatically detects the input language, translates to the opposite one, and posts the result in the same thread, keeping every channel clear and contextual.

⚙️ Features
- Unified translation system with three Slack triggers:
  1️⃣ Slash Command /trans – bilingual posts for announcements.
  2️⃣ Mention Trigger @trans – real-time thread translation for team discussions.
  3️⃣ Reaction 🇯🇵 / 🇺🇸 – personal translation view for readers.
- Automatic JA ↔ EN detection and translation via Gemini 2.5 Flash
- 3-second instant ACK to satisfy Slack's response timeout
- Shared Gemini translation core across all three modes
- Clean thread replies using chat.postMessage

💼 Use Cases
- **Global teams** – Keep Japanese and English speakers in sync without switching tools.
- **Project coordination** – Use mentions for mixed-language stand-ups and updates.
- **Announcements** – Auto-generate bilingual company posts with /trans.
- **Cross-cultural communication** – Help one-language teammates follow along instantly.

💡 Perfect for
- **Global companies** with bilingual or multilingual teams
- **Startups** collaborating across Japan and Southeast Asia
- **Developers** exploring Slack + Gemini + n8n automation patterns

🧩 Notes
- You can force a specific translation direction (JA→EN or EN→JA) inside the Code node (see the sketch after this section).
- Adjust the system prompt to match tone ("business-polite", "casual", etc.).
- Add glossary replacements for consistent terminology.
- If the bot doesn't respond, ensure your app includes the following scopes: app_mentions:read, chat:write, reactions:read, channels:history, and groups:history.
- Always export your workflow with credentials OFF before sharing or publishing.

✨ Powered by Google Gemini 2.5 Flash × n8n × Slack API
A complete multilingual layer for your workspace — all in one workflow. 🌍
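A minimal sketch of that Code node's direction logic, assuming the Slack message text arrives in $json.text: detect Japanese characters and translate to the opposite language, with an optional override constant.

```javascript
// n8n Code node sketch ("Run Once for Each Item") of the JA <-> EN direction logic.
const text = $json.text ?? '';
const FORCE = null; // set to 'JA_TO_EN' or 'EN_TO_JA' to force a direction

// Hiragana, katakana, and common CJK ideograph ranges
const hasJapanese = /[\u3040-\u30ff\u4e00-\u9fff]/.test(text);
const direction = FORCE ?? (hasJapanese ? 'JA_TO_EN' : 'EN_TO_JA');

return {
  json: {
    text,
    targetLanguage: direction === 'JA_TO_EN' ? 'English' : 'Japanese',
  },
};
```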