by Alex
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

**How It Works**
This template orchestrates a multi-step workflow that builds a four-zone automation matrix—Green, Yellow, Red, and White—grounded in the Human Agency Scale (HAS). When a user sends a job title via Telegram, the workflow routes text and voice messages appropriately: voice messages are transcribed via OpenAI's Whisper, while text inputs bypass transcription, and both streams merge into a single data flow. The AI Agent node, powered by OpenAI's GPT-4.1 mini, analyzes the profession and its core tasks, and calls the Tavily search tool for live context so the analysis incorporates up-to-date information. After the evaluation, the workflow formats the completed matrix, with detailed task examples and rationales for each zone, and returns it to the user via Telegram.

**Setup Instructions**
- Create an OpenAI credential in n8n (model: GPT-4.1 mini).
- Add a Tavily credential with your API key (a free plan is available).
- Configure a Telegram Bot credential with your bot's API token.
- Import this JSON as a new workflow in n8n and map the credentials in each node.
- Activate the workflow, test it by sending sample job titles, and adjust node timeouts and webhook settings as needed.

**Requirements**
- n8n v1.0.0 or higher
- Active OpenAI API key (GPT-4.1 mini access)
- Tavily API key for web context search
- Telegram Bot token with a correctly configured webhook
- Stable internet connectivity

**Audience & Problem**
This template is designed for consultants, HR professionals, and analysts who need a scalable, standardized way to evaluate which routine tasks in a given profession can be automated, which require human oversight, and which should remain manual to preserve strategic judgment, creativity, and expertise.
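For orientation, here is a minimal Code-node sketch of the point where the text and voice branches merge into one data flow. The field names (`message.text`, a `text` property produced by the Whisper transcription node) are assumptions for illustration and may differ from the actual template.

```javascript
// Normalize Telegram input so text messages and transcribed voice
// messages reach the AI Agent in the same shape.
const out = [];

for (const item of $input.all()) {
  const msg = item.json.message ?? {};
  // Text messages carry the job title directly; voice messages have
  // already been transcribed upstream into a plain `text` property.
  const jobTitle = (msg.text ?? item.json.text ?? '').trim();
  out.push({
    json: {
      jobTitle,
      chatId: msg.chat?.id ?? item.json.chatId,
    },
  });
}

return out;
```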
by Ria
This is a demo workflow to showcase how to use Supabase to embed a document, retrieve information from the vector store via chat, and update the database.

**Setup steps:**
- Set your credentials for Supabase.
- Set your credentials for an AI model of your choice.
- Set credentials for any service you want to use to upload documents.
- Please follow the guidelines in the workflow itself (Sticky Notes).

**Feedback & Questions**
If you have any questions or feedback about this workflow, feel free to get in touch at ria@n8n.io
by Viktor
**Nightly Discord Channel Cleanup**

This workflow runs every day at 9:00 p.m. and:
- Retrieves all Discord channels using your provided credentials.
- Pauses briefly to respect Discord API rate limits.
- Loops through each channel and fetches messages.
- Filters out messages older than seven days (a sketch of this filter follows below).
- Deletes those older messages, again pausing to stay within deletion rate limits.

By setting up this workflow on a schedule, you can automatically keep Discord channels tidy and compliant with retention policies.

**👨‍🎤 Setup**
- Add your Discord credentials.
- Change the server in each Discord node to the correct one.
- Click the Test Workflow button.
- Activate the workflow to run on a schedule.
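A minimal Code-node sketch of the seven-day age filter, assuming each incoming item is one Discord message with an ISO 8601 `timestamp` field as returned by the Discord API; adjust the field name if your message payload differs.

```javascript
// Keep only messages older than seven days so downstream nodes delete them.
const cutoff = Date.now() - 7 * 24 * 60 * 60 * 1000;

return $input.all().filter((item) => {
  const sent = new Date(item.json.timestamp).getTime();
  return Number.isFinite(sent) && sent < cutoff;
});
```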
by DanielV
This workflow translates SRT subtitle files from one language to another using Google Translate. It follows these main steps:
- Accept an SRT file upload and target language selection
- Extract and parse the SRT file content (a parsing sketch is included below)
- Split the content into translatable segments
- Translate each segment using Google Translate
- Reassemble the translated content into a proper SRT format
- Return the translated file to the user

You'll need a Google Cloud Console account to access the Translate API.

**Who is this for?**
This workflow is designed for content creators, video editors, translators, and anyone who needs to translate subtitle files (.srt) from one language to another. It's particularly useful for those working with international content, educational materials, or preparing videos for global audiences.

**What problem does this workflow solve?**
Translating subtitle files manually is time-consuming and error-prone, and professional translation services can be expensive, especially for multiple videos or long content. This workflow automates the translation process while maintaining the proper SRT format, including timestamps and subtitle numbering.

**Setup**
- Set up Google Translate credentials:
  - Create a Google Cloud project and enable the Google Translate API
  - Create OAuth credentials and configure them in the Google Translate node
- Customize language options:
  - The default workflow includes English (EN) and Japanese (JP) options
  - Add more language options by editing the dropdown field in the "Receive SRT File to Translate" node
  - Use standard language codes that Google Translate supports
- Add more languages:
  - Edit the form trigger node to include additional language options in the dropdown
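The sketch below shows one way the parse and reassemble steps can work, assuming the SRT text has already been extracted into a string field (named `data` here purely as an example). Blocks are split on blank lines; only the text lines are sent for translation while index and timestamp lines stay untouched.

```javascript
// Parse an SRT string into segments and rebuild it in proper SRT format.
function parseSrt(srtText) {
  return srtText
    .replace(/\r\n/g, '\n')
    .trim()
    .split(/\n\s*\n/)
    .map((block) => {
      const lines = block.split('\n');
      return {
        index: lines[0],                  // subtitle number
        timing: lines[1],                 // "00:00:01,000 --> 00:00:04,000"
        text: lines.slice(2).join('\n'),  // the translatable segment
      };
    });
}

function buildSrt(segments) {
  return segments
    .map((s) => `${s.index}\n${s.timing}\n${s.text}`)
    .join('\n\n') + '\n';
}

// Round-trips the file unchanged when no translation is applied yet.
const segments = parseSrt($input.first().json.data ?? '');
return [{ json: { segments, rebuilt: buildSrt(segments) } }];
```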
by Avkash Kakdiya
**🔁 What This Workflow Does**
This automation fetches daily AI-related articles from trusted RSS feeds, summarizes them using OpenAI (GPT), and generates a ready-to-post LinkedIn update in your writing style. It then emails the post to you every morning for review and publishing.

High-level steps:
- Triggers every morning via Cron.
- Fetches the latest AI news from multiple RSS sources.
- Filters recent articles (last 24 hrs).
- Summarizes each article using OpenAI (ChatGPT).
- Generates a LinkedIn-style post using your tone.
- Sends the post to your Gmail for review.

**⚙️ Setup Steps**
Estimated setup time: 15–30 minutes

You'll need:
- OpenAI API key
- Gmail account connected in n8n
- RSS feed URLs (defaults are provided)

Then:
- Add your email in the Gmail node to receive daily posts.
- Add your tone/style prompt in the ChatGPT nodes (instructions inside the workflow).
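A minimal Code-node sketch of the "last 24 hrs" filter applied to the fetched RSS items. It assumes each item carries a publication date in `isoDate` or `pubDate`, which is what the RSS Read node typically returns; adjust the field names if your feeds differ.

```javascript
// Keep only articles published within the last 24 hours.
const dayAgo = Date.now() - 24 * 60 * 60 * 1000;

return $input.all().filter((item) => {
  const raw = item.json.isoDate ?? item.json.pubDate;
  const published = new Date(raw).getTime();
  return Number.isFinite(published) && published >= dayAgo;
});
```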
by Xiaoyuan Zhang
**Description**
This workflow creates a sophisticated bilingual dictionary that provides literary-style definitions and examples for English and German words. The system automatically detects the input language, generates comprehensive definitions in Chinese, creates three literary-style example sentences with translations, and stores everything in a Supabase database for future reference.

**Who Is This For?**
- **Language Learners & Students:** Perfect for those studying English or German who want to understand words in literary contexts with Chinese translations.
- **Writers & Content Creators:** Ideal for bilingual writers working with English, German, and Chinese who need rich, literary examples for their work.
- **Educators & Translators:** Excellent for language teachers and professional translators who need comprehensive word definitions with contextual examples.
- **Literary Enthusiasts:** Great for readers of literature who encounter unfamiliar words and want to understand their poetic or literary usage.

**What Problem Does This Workflow Solve?**
Traditional dictionaries often provide basic definitions without literary context or cross-language examples. This workflow addresses several key challenges:
- **Limited Literary Context:** Most dictionaries lack poetic, expressive, or literary-style examples that show how words are used in sophisticated writing.
- **Cross-Language Learning:** Provides seamless translation between English/German and Chinese with culturally appropriate examples.
- **Data Persistence:** Automatically saves all lookups to a database, creating a personalized vocabulary collection over time.
- **API Accessibility:** Provides a clean webhook interface that can be integrated into apps, websites, or other tools.

**How It Works**

Main dictionary lookup flow:
- **Input Processing:** Receives a word via webhook POST request and automatically detects whether it is English or German.
- **AI Analysis:** Uses OpenAI GPT-4o-mini to generate comprehensive definitions with literary context.
- **Response Formatting:** Processes the AI response to extract structured data (word, meaning, examples).
- **Quality Control:** Validates the response and handles unclear or invalid inputs gracefully.
- **Database Storage:** Saves the word, Chinese meaning, and examples to Supabase for future reference.
- **API Response:** Returns formatted JSON with the complete dictionary entry.

Data storage flow:
- **Parallel Processing:** Simultaneously returns the dictionary data to the user and saves it to the database.
- **Structured Storage:** Organizes data in Supabase with fields for words, Chinese meanings, and example arrays.
- **Success Confirmation:** Provides confirmation when data is successfully stored.

**Setup Instructions**
Prerequisites and accounts — you'll need accounts and API access for:
- n8n (Cloud or self-hosted)
- OpenAI (API key required)
- Supabase (database and API credentials)

Webhook configuration:
- The workflow uses two webhook endpoints with the same path for different operations.
- Note the webhook URL provided by n8n for API integration.
- Test the webhook endpoints to ensure they're accessible.

**Customization Options**
- Extend to support additional input languages by modifying the AI prompt.
- Add support for other target languages beyond Chinese.
- Customize the literary style for different cultural contexts.

This workflow transforms simple word lookups into rich, contextual learning experiences while building a personalized vocabulary database over time.
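Since the workflow exposes a webhook API, here is a hypothetical client-side sketch of calling it from another app. The webhook path and the request/response field names (`word`, `meaning`, `examples`) are illustrative assumptions; check the Webhook node and the response-formatting node for the real shapes.

```javascript
// Call the dictionary webhook and log the returned entry.
const WEBHOOK_URL = 'https://your-n8n-instance/webhook/dictionary';

async function lookup(word) {
  const res = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ word }),
  });
  if (!res.ok) throw new Error(`Lookup failed: ${res.status}`);
  // Assumed shape: { word, meaning, examples: [{ sentence, translation }, ...] }
  return res.json();
}

lookup('Fernweh').then((entry) => console.log(entry));
```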
by Robert Breen
This no-code n8n workflow finds recent Instagram posts by hashtag, scrapes profile data, and uses an AI agent to evaluate whether each account is a good collaboration lead. The workflow filters on follower count and bio content, and outputs structured reasoning for outreach decisions. Perfect for creators, marketers, or business developers looking to automate influencer or community partnership prospecting—especially in niche ecosystems like n8n.

**✅ Key Features**
- **🔍 Hashtag Discovery**: Finds recent Instagram posts from a specified hashtag (e.g., #n8n)
- **👤 Account Scraping**: Retrieves profile details such as follower count and biography
- **🧠 AI Evaluation**: Uses OpenAI and LangChain to determine if the profile is a good fit for outreach
- **📦 Structured Output**: Returns a JSON object with "Yes/No" lead status and reasoning
- **🛠️ Manual Execution**: Run on demand using the manual trigger

**🧰 What You'll Need**
| Tool / API | Purpose | Setup Steps |
|-------------------------|------------------------------------------|-------------|
| Apify Account | To access Instagram scraping actors | Create account → Generate API Token → Use in httpQueryAuth credential in n8n |
| OpenAI API Key | To power the AI decision-making agent | Sign up at OpenAI → Create API key → Paste into OpenAI credential in n8n |
| LangChain Plugin for n8n | AI orchestration with system message | Install LangChain nodes from Community Nodes (already installed in this workflow) |

**🔧 Step-by-Step Setup**

1️⃣ Manual Trigger
- **Node**: When clicking 'Execute workflow'
- **Use**: Allows you to run the workflow manually while testing.

2️⃣ Define Hashtag
- **Node**: Create Search Term
- **Value**: Sets "n8n" as the default Instagram hashtag to scan. You can edit this to any other hashtag you'd like.

3️⃣ Find Recent Posts
- **Node**: Find Recent Posts
- **API**: Apify Instagram Hashtag Scraper
- **Auth Setup**: Go to your Apify Console, click "Create new token", create a new HTTP Query Auth credential in n8n, set the token in the token query param (e.g., ?token=yourTokenHere), and choose that credential in this node.

4️⃣ Scrape Each Profile
- **Node**: Scrape Accounts
- **API**: Apify Instagram Profile Scraper
- **Body**: JSON with usernames from the hashtag search
- **Note**: Uses the same httpQueryAuth credential as the previous node.

5️⃣ Extract Fields
- **Node**: Set bio and follower count
- **What it does**: Extracts biography and followersCount from the profile JSON and stores them in clean variables for AI input (a minimal sketch of this step follows below).

6️⃣ AI Lead Scoring
- **Node**: AI Agent
- **Purpose**: Uses GPT-4o-mini to analyze the bio and follower count.
- **Prompt Details**: See the system message configured inside the AI Agent node.

7️⃣ AI Model
- **Node**: OpenAI Chat Model
- **Model**: gpt-4o-mini
- **Credential**: Connect your OpenAI account via API key. Go to OpenAI API Keys, copy your key, and create a new OpenAI API credential in n8n.

8️⃣ Output Parser
- **Node**: Structured Output Parser
- **What it does**: Parses the response from the AI into structured JSON for further use (e.g., storing leads, sending to Airtable, etc.)

**🧪 Sample Output**
{ "lead status": "Yes", "Reasoning": "The user has 3.5k followers and their bio shows they build automations with n8n." }

**📬 Need More Help?**
If you'd like assistance setting this up, customizing it to your niche, or expanding it to score and store leads automatically — I can help!

👤 Robert Breen
Automation Consultant | AI Workflow Designer | n8n Expert
📧 robert@ynteractive.com
🌐 ynteractive.com
🔗 LinkedIn
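A minimal Code-node sketch of the "Extract Fields" step: it pulls the biography and follower count out of each scraped profile so the AI Agent receives only the fields it needs. The `username`, `biography`, and `followersCount` field names are assumptions based on typical Apify profile scraper output; inspect the actual results to confirm.

```javascript
// Reduce each Apify profile item to the fields used for lead scoring.
return $input.all().map((item) => ({
  json: {
    username: item.json.username,
    bio: item.json.biography ?? '',
    followersCount: item.json.followersCount ?? 0,
  },
}));
```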
by Yang
**🧾 What this workflow does**
This workflow takes a reference ad image and brand website, then uses GPT-4, LangChain, and Dumpling AI to generate 10 high-quality image variations for ad testing. These image variations are visually consistent but subtly different in background, mood, lighting, and tone — perfect for performance testing on platforms like Meta Ads or TikTok.

**👤 Who is this for**
- DTC marketers and brand designers testing ad creatives
- Creative teams automating visual experimentation
- Content agencies using AI for fast ad mockups
- Performance marketers running multivariate testing

**⚙️ How to set up**

✅ Requirements. You'll need the following tools set up in n8n:
- Google Drive (OAuth2 credential)
- Google Sheets (OAuth2 credential)
- OpenAI API (for GPT-4 or GPT-4o)
- Dumpling AI API (via HTTP header authentication)

🛠️ Steps to configure:
- **Google Sheet setup**: Create a sheet with one column, Image URL, and update the Sheet ID and tab name in the final Google Sheets node.
- **Drive setup**: Create a folder in Google Drive for storing the reference image and replace the folderId in the "Upload Ad Image to Google Drive" node.
- **Dumpling AI API key**: Use n8n's credential manager (HTTP Header Auth) — do not hardcode the key.
- **OpenAI API key**: Required for both image description and LangChain agent prompt generation.
- **Form inputs required**: Brand Name, Brand Website, Ad Image (upload field).

**🧠 How it works**
1. A user submits the brand name, website, and a reference ad image through a form.
2. The image is uploaded to Google Drive.
3. GPT-4o describes the image's visual style (e.g., mood, lighting, composition).
4. GPT-4 analyzes the brand's website to define its visual aesthetic.
5. A LangChain agent uses both analyses to create 10 tightly scoped variation prompts.
6. Dumpling AI generates a new image for each prompt using its "FLUX.1-pro" model.
7. Each new image's link is logged into Google Sheets.

**🛠️ How to customize**
- 🧪 Change the prompt logic to experiment with different variations (e.g., theme, season).
- 🎨 Switch the image model in Dumpling AI to one that supports your desired style.
- 🔗 Log additional metadata (prompt, timestamp) to Google Sheets.
- 📤 Connect output images to Airtable, Notion, or a review tool like Figma.
- 🎯 Modify the GPT system message to reflect a different tone or brand strategy.

This workflow gives creative teams and marketers an instant, AI-powered ad image testing system — built on real brand visuals, not generic stock content.
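A hypothetical Code-node sketch of the hand-off between steps 5 and 6: splitting the LangChain agent's ten variation prompts into one item each, so the Dumpling AI HTTP request runs once per prompt. The `output` field and its array-of-strings shape are assumptions; adjust them to match the agent node's actual output.

```javascript
// Fan the agent's variation prompts out into separate items.
const raw = $input.first().json.output;
const prompts = Array.isArray(raw) ? raw : JSON.parse(raw);

return prompts.map((prompt, i) => ({
  json: { variation: i + 1, prompt },
}));
```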
by Onur
**Effortless Task Management: Create Todoist Tasks Directly from Telegram with AI**

This n8n workflow lets you seamlessly manage your tasks by creating Todoist entries directly from Telegram, using the power of AI. Simply send a voice or text message to your Telegram bot, and the workflow will transform it into actionable tasks in your Todoist account.

**Who is this for?**
- **Busy professionals** who need a quick and easy way to capture tasks on the go.
- **Students** looking to streamline their assignments and project management.
- **Anyone** who wants to leverage AI for effortless task management.

**What Problem Does it Solve?**
This workflow eliminates the need to manually enter tasks into Todoist. It automates capturing, organizing, and prioritizing tasks, saving you time and effort.

**What are the Benefits?**
- **Seamless Integration:** Connect your Telegram and Todoist accounts for a frictionless workflow.
- **AI-Powered Task Breakdown:** An LLM intelligently analyzes your messages and breaks them down into manageable sub-tasks.
- **Voice-to-Task:** Create tasks with voice messages for hands-free convenience.
- **Increased Productivity:** Capture and organize tasks quickly, keeping you focused and productive.
- **Accessibility:** Access your tasks from anywhere with Todoist's mobile app and browser extension.

**How it Works**
1. Send a message: Send a voice or text message describing your task to your Telegram bot.
2. AI analysis: The workflow uses an LLM (OpenAI Chat Model) to analyze your message and break it down into sub-tasks.
3. Task creation: The workflow creates tasks in your Todoist account based on the AI's analysis (a sketch of this hand-off follows below).
4. Notification: You receive a Telegram notification with a link to your newly created tasks in Todoist.

**Nodes in the Workflow**
- **Telegram Trigger:** Listens for incoming messages on Telegram.
- **Switch:** Routes messages based on their type (voice or text).
- **Telegram:** Fetches voice messages from Telegram.
- **OpenAI:** Transcribes voice messages to text using OpenAI's Whisper API.
- **Edit Fields:** Prepares the text for the LLM.
- **Basic LLM Chain:** Analyzes messages and generates sub-tasks using OpenAI's GPT model.
- **Structured Output Parser:** Extracts sub-tasks from the LLM's response.
- **Todoist:** Creates tasks in your Todoist account.
- **Telegram:** Sends a notification with a link to your Todoist tasks.

**Requirements**
- Active n8n instance
- Telegram account with a bot
- Todoist account
- OpenAI API key

**Setup Information**
1. Import the workflow JSON into your n8n instance.
2. Configure the Telegram Trigger node with your bot token.
3. Set up the OpenAI credentials with your API key.
4. Connect your Todoist account in the Todoist node.
5. Customize the LLM prompt (optional) to fine-tune task creation.

**Additional Tips**
- Explore Todoist's features to further organize and manage your tasks.
- Experiment with different LLM prompts to optimize task breakdown.
- Use n8n's features to automate other aspects of your workflow.

This workflow combines the convenience of Telegram with the power of AI and Todoist to provide a seamless task management experience. Start managing your tasks effortlessly today!
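A hypothetical Code-node sketch of the hand-off between the Structured Output Parser and the Todoist node: it fans the parsed sub-tasks out into one item per task. The `output.subtasks` shape and the `title`/`note` fields are assumed examples of what the LLM chain is prompted to return; match them to your parser's schema.

```javascript
// Turn the LLM's sub-task list into one item per Todoist task.
const parsed = $input.first().json.output ?? {};
const subtasks = parsed.subtasks ?? [];

return subtasks.map((task) => ({
  json: {
    content: task.title ?? String(task), // Todoist task content
    description: task.note ?? '',
  },
}));
```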
by Joseph
This n8n workflow automates SEO keyword research by querying the Ahrefs API for keyword data and related keyword insights. The enriched data is then processed by an AI agent that formats a response and provides valuable SEO recommendations. Perfect for SEO specialists, content marketers, digital agencies, and anyone looking to uncover keyword opportunities and boost their rankings.

**⚙️ How This Workflow Works**
This workflow guides you through the entire SEO keyword research process, from entering the initial keyword to receiving detailed insights and related keyword suggestions.

1. 🗣️ User Input (Keyword Query)
The user enters a keyword they want to research. This input is captured by the Chat Input node, ready for analysis.

2. 🤖 AI Agent (Input Verification)
The AI agent reviews the keyword input for grammatical errors or extra commentary and, if necessary, cleans it to ensure a seamless query to the API.

3. 🔑 Ahrefs API (Keyword Data Retrieval)
The cleaned keyword is sent to the Ahrefs Keyword Tool API, which returns a detailed report including metrics like search volume, keyword difficulty, and CPC.

4. 💡 Related Keywords Extraction (JavaScript Function)
The workflow uses a JavaScript function to extract the main keyword's data and the data for 10 related keywords from the Ahrefs response. You can tweak the script to adjust the number of related keywords or the level of detail (a sketch of this step follows below).

5. 🧠 AI Agent (Text Formatting)
The aggregated data, including both the main keyword and related keywords, is sent to an AI agent, which formats it into a concise, readable summary for the user.

6. 📨 Final Response
The formatted text is delivered to the user with keyword insights, recommendations, and related keyword suggestions.

**✅ Smart Retry & Error Handling**
Each subworkflow includes a fail-safe mechanism to ensure:
- ✅ Proper error handling for any issues with the API request.
- 🕒 Failed API requests are retried after a customizable period (e.g., 2 hours or 1 day).
- 💬 User input validation prevents incorrect or malformed queries from being processed.

**📋 Ahrefs API Setup**
To use this workflow, you'll need to set up your Ahrefs API credentials:
- 🔑 Sign up and get your key from the Ahrefs Keyword Tool API.
- Once signed up, you'll receive an API key, which you'll use in the x-rapidapi-key header in n8n.
- Check the Ahrefs Keyword Tool API documentation for more details on the available parameters.

**📥 How to Import This Workflow**
1. Copy the JSON code.
2. Open your n8n instance.
3. Open a new workflow.
4. Paste anywhere inside the workflow canvas.
5. Voila.

**🛠️ Customization Options**
- Adjust the number of related keywords extracted (the default is 10).
- Customize the AI agent's response formatting or add specific recommendations for users.
- Modify the JavaScript function to extract different metrics from the Ahrefs API.

**🧪 Use Case Example**
Trying to optimize your blog post around a specific keyword?
- Query a broad keyword, like "SEO tips".
- Get related keyword data and search volume insights.
- Use the AI agent to provide keyword recommendations and additional topics to target.

💥 Boost your content strategy with fresh keywords and relevant search data!
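Here is a minimal sketch of the related-keywords extraction step described above. The response fields used (`keyword`, `volume`, `difficulty`, `cpc`, `related`) are illustrative assumptions, not the documented Ahrefs Keyword Tool API schema; map them to the actual response you receive.

```javascript
// Extract the main keyword metrics and the first 10 related keywords.
const data = $input.first().json;

const pick = (k) => ({
  keyword: k.keyword,
  volume: k.volume,
  difficulty: k.difficulty,
  cpc: k.cpc,
});

const main = pick(data);
// Change the slice length to adjust how many related keywords are kept.
const related = (data.related ?? []).slice(0, 10).map(pick);

return [{ json: { main, related } }];
```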
by Leonard
**Unlock AI-Driven Research with Jina AI (No API Key Needed!)**

Following the success of Open Deep Research 1.0, we are excited to introduce an improved and fully free version: AI-Powered Research with Jina AI Deep Search. This workflow leverages Jina AI's Deep Search API, a free and powerful AI research tool that requires no API key. It automates querying, analyzing, and formatting research reports, making AI-driven research accessible to everyone.

**Key Features**
- **No API Keys Required** - Start researching instantly without setup hassle.
- **Automated Deep Search** - Uses Jina AI to fetch relevant and high-quality information.
- **Structured AI Reports** - Generates clear, well-formatted research documents in markdown.
- **Flexible and Customizable** - Modify the workflow to fit your specific research needs.
- **Ideal for Researchers, Writers & Students** - Speed up your research workflow.

**Use Cases**
This workflow is particularly useful for:
- **Researchers** - Quickly gather and summarize academic papers, online sources, and deep web content.
- **Writers & Journalists** - Automate background research for articles, essays, and investigative reports.
- **Students & Educators** - Generate structured reports for assignments, literature reviews, or presentations.
- **Content Creators** - Find reliable sources for blog posts, videos, or social media content.
- **Data Analysts** - Retrieve contextual insights from various online sources for reports and analysis.

**How It Works**
1. The user submits a research query via chat.
2. The workflow sends the query to Jina AI's Deep Search API.
3. The AI processes the query and generates a well-structured research report.
4. A code node formats the response into clean markdown.
5. The final output is a structured, easy-to-read AI-generated report.

**Pre-Conditions & Requirements**
- An n8n instance (self-hosted or cloud).
- **No API keys needed** - Jina AI Deep Search is completely free.
- Basic knowledge of n8n workflow automation is recommended for customization.

**Customization Options**
This workflow is fully modular, allowing users to:
- Modify the query prompt to refine the research focus.
- Adjust the report formatting to match personal or professional needs.
- Expand the workflow by adding additional AI tools or data sources.
- Integrate it with other workflows in n8n to enhance automation.
Users are free to connect it with other workflows, add custom nodes, or tweak existing configurations.

**Getting Started**
Setup time: less than 5 minutes.
1. Import the workflow into n8n.
2. Run the workflow and input a research topic.
3. Receive a fully formatted AI-generated research report.

**Try It Now!**
Start your AI-powered research with Jina AI Deep Search today! Get the workflow on n8n.io
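A hypothetical sketch of the markdown-formatting code node mentioned in step 4. It assumes the Deep Search HTTP request returns an OpenAI-style chat completion (`choices[0].message.content`); if the actual response shape differs, adjust the field path accordingly.

```javascript
// Pull the report text out of the API response and wrap it in markdown.
const body = $input.first().json;
const raw = body.choices?.[0]?.message?.content ?? String(body);

const report = [
  '# Research Report',
  '',
  raw.trim(),
].join('\n');

return [{ json: { report } }];
```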
by David Roberts
**Overview**
This workflow takes some French text and translates it into spoken audio. It then transcribes that audio back into text, translates it into English, and generates an audio file of the English text. To do so, it uses ElevenLabs (which has a free tier) and OpenAI.

**Setup**
These steps should only take a few minutes:
1. In ElevenLabs, add a voice to your voice lab and copy its ID. Add it to the 'Set voice ID' node.
2. Get your ElevenLabs API key (click your name in the bottom-left of ElevenLabs and choose 'Profile').
3. In the 'Generate French audio' node, create a new header auth credential. Set the name to xi-api-key and the value to your API key.
4. In the 'credential' field of the 'Transcribe audio' node, create a new OpenAI credential with your OpenAI API key.
5. Run the workflow by clicking the orange button at the bottom of the canvas.
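For reference when configuring the 'Generate French audio' node, here is a rough standalone sketch of the equivalent text-to-speech request. The endpoint path and body fields follow the ElevenLabs text-to-speech API as commonly documented, but treat them as assumptions and double-check against the current ElevenLabs docs; the xi-api-key header matches the header auth credential described above.

```javascript
// Node.js (18+) sketch: request French speech from ElevenLabs and save it.
import { writeFile } from 'node:fs/promises';

const VOICE_ID = 'your-voice-id';                 // from your ElevenLabs voice lab
const API_KEY = process.env.ELEVENLABS_API_KEY;   // same key as the header auth credential

const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`, {
  method: 'POST',
  headers: {
    'xi-api-key': API_KEY,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ text: 'Bonjour, comment allez-vous ?' }),
});

if (!res.ok) throw new Error(`TTS request failed: ${res.status}`);
await writeFile('french-audio.mp3', Buffer.from(await res.arrayBuffer()));
```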