by Rully Saputra
## Who’s it for

This workflow is ideal for marketing teams, growth analysts, and business owners who need regular Google Analytics insights without manually digging through data. It’s also perfect for organizations that want positive performance updates to reach stakeholders quickly while negative trends get immediate attention from the internal team.

## How it works / What it does

The workflow runs weekly on a set schedule, pulls key performance metrics from Google Analytics, and aggregates the data into a clean summary. An AI Agent (powered by Google Gemini and connected to Simple Memory for historical context) analyzes the data, generates actionable insights, and classifies the sentiment as Positive, Negative, or Neutral (a routing sketch follows this section).

- Positive sentiment → automatically emailed to stakeholders via Gmail.
- Negative sentiment → sent instantly to a designated Telegram group for faster response.

This ensures wins are celebrated and issues are addressed promptly.

## How to set up

1. Configure the Schedule Trigger for your preferred reporting day/time.
2. Connect the Google Analytics node with your property ID and metrics/dimensions.
3. Set up the AI Agent with Google Gemini (or another model provider's) API credentials.
4. Connect Gmail and Telegram accounts to their respective nodes.
5. Adjust the sentiment routing rules.

## Requirements

- Google Analytics account with API access
- Google Gemini API key
- Gmail account with OAuth connection
- Telegram bot token and group chat ID

## How to customize the workflow

- Modify the AI prompt to include custom KPIs or industry-specific recommendations.
- Change the schedule frequency (daily, monthly, or on-demand).
- Add Neutral sentiment handling (e.g., log to Google Sheets).
- Extend with Slack, Discord, or other notification channels.
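For clarity, here is a minimal sketch of the sentiment-routing step in TypeScript, written the way it might look inside an n8n Code node. The field names (`sentiment`, `summary`) and channel labels are assumptions to illustrate the Switch logic, not the template's exact configuration:

```typescript
// Minimal routing sketch. Field names ("sentiment", "summary") are
// assumptions; match them to your AI Agent's actual output schema.
type Channel = 'gmail' | 'telegram' | 'log';

function routeBySentiment(sentiment: string): Channel {
  switch (sentiment.trim().toLowerCase()) {
    case 'positive':
      return 'gmail';     // celebrate wins with stakeholders
    case 'negative':
      return 'telegram';  // alert the internal team immediately
    default:
      return 'log';       // Neutral, e.g. append to Google Sheets
  }
}

// Example: an AI Agent output item
const item = { sentiment: 'Negative', summary: 'Sessions down 18% week-over-week.' };
console.log(routeBySentiment(item.sentiment)); // "telegram"
```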
by Joseph LePage
🔍 This n8n workflow integrates Tavily's Search and Extract APIs with AI summarization capabilities to process web content efficiently.

## Quick Setup

1. Get your Tavily API key from https://app.tavily.com/home
2. Replace tvly-YOUR_API_KEY in the "Tavily API Key" node
3. Connect your OpenAI credentials to the "OpenAI Chat Model" node
4. Deploy the workflow and start the chat trigger

## Core Features

Search & Extract 🎯

- Intelligent web searching with relevance filtering (a search-call sketch follows this section)
- Automated content extraction from top results
- AI-powered content summarization in markdown format

User Interaction 💬

- Chat-based search topic input
- Real-time processing pipeline
- Structured markdown output

The workflow demonstrates a practical implementation of Tavily's API endpoints, handling the complete process from search to summarization in a single automated pipeline.
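As an illustration of what the search step does under the hood, here is a hedged TypeScript sketch of a Tavily `/search` call. The request body shape follows Tavily's documented search API; verify field names and the current auth scheme against Tavily's docs before relying on it:

```typescript
// Sketch of the search call the workflow's HTTP node makes. Verify
// body fields against current Tavily docs; never hard-code keys in
// production workflows.
const TAVILY_API_KEY = 'tvly-YOUR_API_KEY'; // placeholder, as in the node

async function tavilySearch(query: string) {
  const res = await fetch('https://api.tavily.com/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      api_key: TAVILY_API_KEY,
      query,
      max_results: 3,     // only the top results get extracted
      include_answer: false,
    }),
  });
  if (!res.ok) throw new Error(`Tavily search failed: ${res.status}`);
  return res.json(); // { results: [{ title, url, content, score }, ...] }
}
```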
by Viktor
# Nightly Discord Channel Cleanup

This workflow runs every day at 9:00 p.m. and:

1. Retrieves all Discord channels using your provided credentials.
2. Pauses briefly to respect Discord API rate limits.
3. Loops through each channel and fetches messages.
4. Filters out messages older than seven days (see the sketch after this section).
5. Deletes those older messages, again pausing to stay within deletion rate limits.

By running this workflow on a schedule, you can automatically keep Discord channels tidy and compliant with retention policies.

## 👨‍🎤 Setup

1. Add your Discord credentials
2. Change the server in each Discord node to the correct one
3. Click the Test Workflow button
4. Activate the workflow to run on a schedule
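For reference, the seven-day filter could look like this TypeScript sketch of an n8n Code node. It assumes Discord message objects expose an ISO `timestamp` field, which is what the Discord API returns:

```typescript
// Keep only messages older than the seven-day cutoff; these are the
// ones the workflow goes on to delete.
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
const cutoff = Date.now() - SEVEN_DAYS_MS;

interface DiscordMessage { id: string; timestamp: string; }

function messagesToDelete(messages: DiscordMessage[]): DiscordMessage[] {
  return messages.filter((m) => new Date(m.timestamp).getTime() < cutoff);
}

// Example
const sample = [{ id: '1', timestamp: '2024-01-01T12:00:00.000Z' }];
console.log(messagesToDelete(sample).length); // 1, if run after 2024-01-08
```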
by DanielV
This workflow translates SRT subtitle files from one language to another using Google Translate. It follows these main steps:

1. Accept an SRT file upload and target language selection
2. Extract and parse the SRT file content (a parsing sketch follows this section)
3. Split the content into translatable segments
4. Translate each segment using Google Translate
5. Reassemble the translated content into a proper SRT format
6. Return the translated file to the user

You'll need a Google Cloud Console account to access the Translate API.

## Who is this for?

This workflow is designed for content creators, video editors, translators, and anyone who needs to translate subtitle files (.srt) from one language to another. It's particularly useful for those working with international content, educational materials, or preparing videos for global audiences.

## What problem does this workflow solve?

Translating subtitle files manually is time-consuming and error-prone, and professional translation services can be expensive, especially for multiple videos or long content. This workflow automates the translation process while preserving the proper SRT format, including timestamps and subtitle numbering.

## Setup

Set up Google Translate credentials:
- Create a Google Cloud project and enable the Google Translate API
- Create OAuth credentials and configure them in the Google Translate node

Customize language options:
- The default workflow includes English (EN) and Japanese (JP) options
- Add more language options by editing the dropdown field in the "Receive SRT File to Translate" node
- Use standard language codes that Google Translate supports

Add more languages:
- Edit the form trigger node to include additional language options in the dropdown
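To make the parse and reassemble steps concrete, here is a minimal TypeScript sketch (not the workflow's exact Code node) that splits an SRT file into segments and rebuilds it, leaving index and timestamp lines untouched:

```typescript
// Parse an SRT file into segments whose text can be translated
// independently, then rebuild the file with the same numbering
// and timing lines.
interface SrtSegment { index: number; timing: string; text: string; }

function parseSrt(srt: string): SrtSegment[] {
  return srt
    .replace(/\r\n/g, '\n')
    .trim()
    .split(/\n\n+/)          // blank lines separate subtitle blocks
    .map((block) => {
      const [index, timing, ...text] = block.split('\n');
      return { index: Number(index), timing, text: text.join('\n') };
    });
}

function buildSrt(segments: SrtSegment[]): string {
  return segments
    .map((s) => `${s.index}\n${s.timing}\n${s.text}`)
    .join('\n\n') + '\n';
}
```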
by Avkash Kakdiya
## 🔁 What This Workflow Does

This automation fetches daily AI-related articles from trusted RSS feeds, summarizes them using OpenAI (GPT), and generates a ready-to-post LinkedIn update in your writing style. It then emails the post to you every morning for review and publishing.

High-level steps:

1. Triggers every morning via Cron.
2. Fetches the latest AI news from multiple RSS sources.
3. Filters recent articles (last 24 hours; see the filter sketch after this section).
4. Summarizes each article using OpenAI (ChatGPT).
5. Generates a LinkedIn-style post using your tone.
6. Sends the post to your Gmail for review.

## ⚙️ Setup Steps

Estimated setup time: 15–30 minutes

You'll need:
- OpenAI API key
- Gmail account connected in n8n
- RSS feed URLs (defaults are provided)

Then:
- Add your email in the Gmail node to receive daily posts.
- Add your tone/style prompt in the ChatGPT nodes (instructions inside the workflow).
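The recency filter in step 3 could look like this TypeScript sketch for an n8n Code node. The `pubDate` field name is an assumption; it varies by feed and RSS node version:

```typescript
// Keep only RSS items published within the last 24 hours.
const DAY_MS = 24 * 60 * 60 * 1000;

interface RssItem { title: string; link: string; pubDate: string; }

function lastTwentyFourHours(items: RssItem[]): RssItem[] {
  const cutoff = Date.now() - DAY_MS;
  return items.filter((i) => new Date(i.pubDate).getTime() >= cutoff);
}
```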
by Xiaoyuan Zhang
## Description

This workflow creates a sophisticated bilingual dictionary that provides literary-style definitions and examples for English and German words. The system automatically detects the input language, generates comprehensive definitions in Chinese, creates three literary-style example sentences with translations, and stores everything in a Supabase database for future reference.

## Who Is This For?

- **Language Learners & Students:** Perfect for those studying English or German who want to understand words in literary contexts with Chinese translations.
- **Writers & Content Creators:** Ideal for bilingual writers working with English, German, and Chinese who need rich, literary examples for their work.
- **Educators & Translators:** Excellent tool for language teachers and professional translators who need comprehensive word definitions with contextual examples.
- **Literary Enthusiasts:** Great for readers of literature who encounter unfamiliar words and want to understand their poetic or literary usage.

## What Problem Does This Workflow Solve?

Traditional dictionaries often provide basic definitions without literary context or cross-language examples. This workflow addresses several key challenges:

- **Limited Literary Context:** Most dictionaries lack poetic, expressive, or literary-style examples that show how words are used in sophisticated writing.
- **Cross-Language Learning:** Provides seamless translation between English/German and Chinese with culturally appropriate examples.
- **Data Persistence:** Automatically saves all lookups to a database, building a personalized vocabulary collection over time.
- **API Accessibility:** Provides a clean webhook interface that can be integrated into apps, websites, or other tools.

## How It Works

### Main Dictionary Lookup Flow

1. **Input Processing:** Receives a word via webhook POST request and automatically detects whether it's English or German (see the example request at the end of this section)
2. **AI Analysis:** Uses OpenAI GPT-4o-mini to generate comprehensive definitions with literary context
3. **Response Formatting:** Processes the AI response to extract structured data (word, meaning, examples)
4. **Quality Control:** Validates the response and handles unclear or invalid inputs gracefully
5. **Database Storage:** Saves the word, Chinese meaning, and examples to Supabase for future reference
6. **API Response:** Returns formatted JSON with the complete dictionary entry

### Data Storage Flow

1. **Parallel Processing:** Simultaneously returns the dictionary data to the user and saves it to the database
2. **Structured Storage:** Organizes data in Supabase with fields for words, Chinese meanings, and example arrays
3. **Success Confirmation:** Provides confirmation when data is successfully stored

## Setup Instructions

### Prerequisites & Accounts

You'll need accounts and API access for:
- n8n (Cloud or self-hosted)
- OpenAI (API key required)
- Supabase (database and API credentials)

### Webhook Configuration

- The workflow uses two webhook endpoints with the same path for different operations
- Note the webhook URL provided by n8n for API integration
- Test the webhook endpoints to ensure they're accessible

## Customization Options

- Extend to support additional input languages by modifying the AI prompt
- Add support for other target languages beyond Chinese
- Customize the literary style for different cultural contexts

This workflow transforms simple word lookups into rich, contextual learning experiences while building a personalized vocabulary database over time.
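To show what an API integration might look like, here is a hedged TypeScript example of calling the lookup webhook. The URL path and the `word` field name are placeholders; use the webhook URL n8n displays on the trigger node and the field your flow actually reads:

```typescript
// Hypothetical client call to the dictionary webhook. Path and field
// names are illustrative assumptions, not the template's exact values.
async function lookup(word: string) {
  const res = await fetch('https://your-n8n-host/webhook/dictionary', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ word }),
  });
  return res.json();
  // Expected shape per the description:
  // { word: "Fernweh", meaning: "对远方的渴望……", examples: [ ... ] }
}

lookup('Fernweh').then(console.log);
```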
by John Alejandro Silva
# 🤖📨 Telegram AI Assistant with Multi-File Media Group Handling, Smart File Processing & PostgreSQL Integration

> AI-powered Telegram bot for text, voice, video, documents & media — with database-driven grouping and Telegram-safe formatting.

## 📋 Description

This n8n template creates a next-generation Telegram AI assistant 🧠💬 capable of handling text messages, media files, and documents with advanced processing, PostgreSQL integration, and AI-powered responses. It is designed to solve Telegram’s media group challenge 📦 — when multiple files are sent together, they are stored, processed, and combined into one coherent AI-generated reply.

## ✨ Key Features

- 📂 Multi-file media group management with PostgreSQL tables: media_group, media_queue, chat_histories
- 📑 Document parsing for CSV, HTML, ICS, JSON, ODS, PDF (with AI fallback), RTF, TXT, XML, and spreadsheets
- 🎤 Voice & video transcription for AI analysis
- 🖼️ Image, audio, and video description for richer AI context
- 🛡️ Telegram-safe MarkdownV2 formatting with auto-splitting for messages over 4096 characters (see the sketch at the end of this section)
- ⚠️ Error fallback for unsupported file types

## 💡 Acknowledgment

A huge thank you to Ezema Gingsley Chibuzo 🙌 for the inspiration of the first version of this workflow: Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG. Your pioneering work laid the foundation for this improved, database-powered multi-modal assistant 🚀

## 🏷 Tags

telegram ai-assistant postgresql multi-file media-group file-processing voice-transcription document-parser pdf-extraction markdown-formatting n8n-template

## 💼 Use Case

Use this template if you need an AI-powered Telegram bot that can:

- 📦 Handle multiple files sent in a single message (albums, multiple PDFs, etc.)
- 🧾 Extract & analyze content from many file formats
- 🎙️ Transcribe voice and video messages
- 🗂️ Maintain chat memory for contextual AI answers
- 🛡️ Avoid Telegram formatting errors and length-limit issues

This workflow automates the full chain: Receive → Process → AI Analysis → Telegram-safe Reply.

## 💬 Example User Interactions

- 📄 **Multiple PDFs with a caption** → AI extracts and summarizes all PDFs in one combined reply.
- 🎤 **Voice message** → AI transcribes and replies with a contextual answer.
- 📊 **CSV or spreadsheet file** → AI parses and summarizes the data.
- 🖼️ **Multiple images** → AI describes each image and replies in a single message.

## 🔑 Required Credentials

- **Telegram Bot API** (bot token)
- **PostgreSQL** (connection credentials)
- **AI Provider API** (OpenAI, Google Gemini, or compatible LLM)

## ⚙️ Setup Instructions

1. 🗄️ Create the PostgreSQL tables (gray section SQL): media_group, media_queue, chat_histories
2. 🔌 Configure the Telegram Trigger with your bot token.
3. 🤖 Connect your AI provider credentials.
4. 🗂️ Set up PostgreSQL credentials in the database nodes.
5. ▶️ Deploy the workflow in n8n.
6. 🎯 Start sending messages and files to your bot.

## 📌 Extra Notes

- ✅ The green section ensures only one trigger per media group.
- 📌 The yellow section guarantees captions and files are stored in the correct sequence.
- ✨ The purple section formats AI output to be Telegram-safe and splits it if needed.
- 🧠 The AI prompt is not fixed, allowing full customization.

## 💡 Need Assistance?

If you’d like help customizing or extending this workflow, feel free to reach out:

- 📧 Email: johnsilva11031@gmail.com
- 🔗 LinkedIn: John Alejandro Silva Rodríguez
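To make the 🛡️ formatting feature concrete, here is a simplified TypeScript sketch of the purple section's job: escape MarkdownV2's reserved characters and split replies over Telegram's 4096-character limit. It escapes everything indiscriminately, so deliberate formatting would need to be re-applied afterwards:

```typescript
// Escape all characters Telegram's MarkdownV2 parse mode reserves,
// then split long replies into <= 4096-character chunks.
const MDV2_SPECIALS = /[_*[\]()~`>#+\-=|{}.!]/g;

function escapeMarkdownV2(text: string): string {
  return text.replace(MDV2_SPECIALS, (c) => `\\${c}`);
}

function splitForTelegram(text: string, limit = 4096): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += limit) {
    chunks.push(text.slice(i, i + limit));
  }
  return chunks;
}

// Usage: each chunk becomes one sendMessage call with parse_mode=MarkdownV2
const reply = escapeMarkdownV2('AI answer with *special* chars...');
for (const chunk of splitForTelegram(reply)) {
  console.log(chunk.length); // sendMessage(chatId, chunk, { parse_mode: 'MarkdownV2' })
}
```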
by Niklas Hatje
## Use Case

When trying to maximize your outreach, website visitors are often an overlooked source of qualified new leads. This workflow allows you to track and enrich new website visitors and saves them to a Google Sheet once they meet pre-defined criteria.

## What this workflow does

This workflow fires once a day and gets all your leads saved in Leadfeeder. It then takes the leads that meet a pre-defined engagement criterion, e.g. that they visited your site 3 times, and enriches them additionally with Clearbit. From there it filters the leads again by a criterion on the company, e.g. a minimum employee count, and saves matching leads into a Google Sheet document.

## Setup

1. Add your Leadfeeder credentials. The name should be Authorization and the value Token token=yourapitoken. You can find your token via Settings -> Personal -> API-Token (see the request sketch after this section).
2. Add your Google Sheets credentials
3. Save the Leadfeeder account names you want to use in the Setup node
4. Copy the Google Sheets template and add its URL to the Setup node

## How to adjust this to your needs

- Adjust and/or remove the engagement and company criteria
- Add more ways to enrich a company

## Potential ideas to enhance the use of this workflow

- Automatically reach out to users that meet the criteria / that get added to the sheet
- Create a workflow that finds the right employee in companies that are identified by this workflow
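As a concrete illustration of the credential from setup step 1, here is a hedged TypeScript sketch of an authenticated Leadfeeder request. The header matches the setup instructions; the endpoint path is illustrative, so check Leadfeeder's API docs for the exact route:

```typescript
// Illustrative Leadfeeder call. The Authorization header format comes
// from the setup step above; the /accounts/{id}/leads path is an
// assumption to be verified against Leadfeeder's API documentation.
const LEADFEEDER_TOKEN = 'yourapitoken'; // from Settings -> Personal -> API-Token

async function getLeads(accountId: string) {
  const res = await fetch(
    `https://api.leadfeeder.com/accounts/${accountId}/leads`,
    { headers: { Authorization: `Token token=${LEADFEEDER_TOKEN}` } },
  );
  if (!res.ok) throw new Error(`Leadfeeder request failed: ${res.status}`);
  return res.json();
}
```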
by Robert Breen
This n8n workflow finds experts on any topic, scrapes their websites, and pulls out contact emails automatically.

Core services used: SerpAPI (Google search) · Apify (website crawler) · OpenAI (GPT-4o email extraction).

## 🛠️ Step-by-Step Setup & Execution

### 1️⃣ Run Workflow (Manual Trigger)

| Node | Type | Purpose |
|------|------|---------|
| Run Workflow | Manual Trigger | Start the workflow on demand while you test. |

### 2️⃣ Set Your Topic

| Node | Type | How to configure |
|------|------|------------------|
| Set Topic | Set | Add a string field Topic – e.g. "n8n". This keyword drives every subsequent step. |

### 3️⃣ Search Google (Results 1-10)

| Node | Type | API Credential |
|------|------|----------------|
| Search Google (top 10) | SerpAPI | Create a SerpAPI credential: 1. Sign up → copy API key → n8n → Credentials → New → SerpAPI → paste. 2. Select the credential in this node. |

Key params:

| Param | Value |
|-------|-------|
| q | ={{ $json.Topic }} Expert |
| location | Region code (e.g. 585069efee19ad271e9c9b36) |
| additionalFields.start | "10" (Google positions 1-10) |

### 4️⃣ Search Google (Results 11-20)

| Node | Type | Notes |
|------|------|-------|
| Search Google (11-20) | SerpAPI (same credential) | Remove start or set to 20+ to fetch the next page. |

### 5️⃣ Extract URL Lists

| Node | Type | Script Purpose |
|------|------|----------------|
| Extract Url & Extract Url 2 | Code | Loop data.organic_results → output { title, link, displayed_link } for each result (see the sketch at the end of this section). |

### 6️⃣ Combine Both Result Sets

| Node | Type | Details |
|------|------|---------|
| Append Results | Merge (combineAll) | Merges arrays from steps 3 & 4 into a single list for processing. |

### 7️⃣ Loop Over Every URL

| Node | Type | Configuration |
|------|------|---------------|
| Loop Over Items1 | Split In Batches | Default batch = 1 (process one page at a time). onError = continueRegularOutput keeps the loop alive on failures. |

### 8️⃣ Scrape Webpage Content (Apify)

| Node | Type | API Credential |
|------|------|----------------|
| Scrape URL with apify | HTTP Request | Create an Apify credential: 1. Sign up at https://console.apify.com 2. Account → API tokens → copy. 3. n8n → Credentials → New → HTTP Query Auth → set query param token=YOUR_TOKEN. |

Request details:

| Field | Value |
|-------|-------|
| Method | POST |
| URL | https://api.apify.com/v2/acts/6sigmag~fast-website-content-crawler/run-sync-get-dataset-items |
| JSON Body | |

### 9️⃣ Extract Email with OpenAI

| Node | Type | API Credential |
|------|------|----------------|
| Extract Email from webpage | LangChain Agent | Create an OpenAI credential: 1. Generate a key at https://platform.openai.com/account/api-keys 2. n8n → Credentials → New → OpenAI API → paste key. |

| Field | Value |
|-------|-------|
| Prompt (system) | |
| Output Parser | Structured Output Parser2 expects → { "email": "address OR null" } |

### 🔟 Loop Continues & Final Data

The extracted result returns to Loop Over Items1 until every URL is processed.

Typical final item JSON:

```json
{ "title": "How to Build n8n Workflows", "link": "https://example.com", "email": "info@example.com" }
```

## 💡 Optional Enhancements

| Idea | How |
|------|-----|
| Save Leads | Add a Google Sheets or Airtable node after the loop. |
| Validate Emails | Chain a ZeroBounce / Hunter.io verification API before saving. |
| Parallel Crawling | Increase SplitInBatches size (watch Apify rate limits). |

## 🙋‍♂️ Need More Help?

Robert Breen – Automation Consultant & n8n Expert

- 📧 robert.j.breen@gmail.com
- 🔗 https://www.linkedin.com/in/robert-breen-29429625/
- 🌐 https://ynteractive.com
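Referring back to step 5, here is a sketch of what the Extract Url Code node's script could look like. The `data.organic_results` path follows the step description; treat it as illustrative rather than the template's exact code:

```typescript
// Sketch of the "Extract Url" Code node: loop over SerpAPI's
// organic_results and emit one n8n item per search result.
const results = $input.first().json.data?.organic_results ?? [];

return results.map((r) => ({
  json: {
    title: r.title,
    link: r.link,
    displayed_link: r.displayed_link,
  },
}));
```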
by Davide
This workflow streamlines the process of publishing posts (image or video) to multiple social media platforms using a unified form and a third-party API service called Upload-Post. The automation starts with a form trigger, allowing users to submit content (text and media) through a simple frontend interface. Users select the platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), choose the profile name, write a caption, and upload a photo or video.

## How It Works

Automates cross-platform social media posting via Upload-Post, handling both images (JPEG) and videos (MP4). Here’s the process:

- **Trigger:** A form submission captures user inputs: platform (Instagram, LinkedIn, Facebook, X, TikTok, Threads), account (pre-configured profile name), caption and file (image/video), and an optional Facebook Page ID for targeted posting.
- **Routing:** The "Video or Photo?" Switch node checks the file’s MIME type (a routing sketch follows this section). Image: routes to the "Post photo" HTTP node (uploads via the upload_photos API). Video: routes to the "Post video" HTTP node (uploads via the upload API).
- **API Integration:** Both nodes send data to Upload-Post.com’s API, including caption, account name, platform, file binary, and the Facebook ID (if provided).
- **Success/Failure Handling:** The "Result Photo/Video" nodes parse the API response.

## Setup Steps

Prerequisites:

- **Upload-Post.com API Key:** Get it from the API Keys dashboard. The free tier allows 10 uploads/month.

Configuration:

1. API Authentication: In the HTTP Request nodes (Post photo/Post video), set the Authorization header. Name: Authorization. Value: Apikey YOUR_API_KEY_HERE.
2. Form Customization: Adjust the "On form submission" node to add/remove platforms (e.g., YouTube when approved) and modify file type restrictions (default: .jpg, .mp4).
3. Account Mapping: Ensure the "Account" field matches profiles configured in Upload-Post.com (e.g., test1, test2).
4. Facebook Page Integration (optional): Add a Facebook Page ID field for page-specific posts.

Testing:

- Submit test forms with images/videos.
- Verify API responses and success/failure messages.

Optional enhancements:

- Add error logging (e.g., save failed attempts to a database).
- Extend to YouTube once API support is confirmed.

## Key Features

- **Multi-Platform:** Post to 6+ social networks simultaneously.
- **User-Friendly:** Simple form interface for non-technical users.
- **Error Handling:** Clear feedback for success/failure cases.

Need help customizing? Contact me for consulting and support or add me on LinkedIn.
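The MIME-type routing described above can be sketched in a few lines of TypeScript. The endpoint names mirror the API names in the description (`upload_photos`, `upload`); confirm them against Upload-Post's documentation:

```typescript
// Pick the Upload-Post endpoint based on the uploaded file's MIME type,
// mirroring the "Video or Photo?" Switch node.
function pickEndpoint(mimeType: string): string {
  if (mimeType.startsWith('image/')) {
    return 'upload_photos'; // JPEG → "Post photo" HTTP node
  }
  if (mimeType.startsWith('video/')) {
    return 'upload';        // MP4 → "Post video" HTTP node
  }
  throw new Error(`Unsupported file type: ${mimeType}`);
}

console.log(pickEndpoint('image/jpeg')); // "upload_photos"
console.log(pickEndpoint('video/mp4'));  // "upload"
```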
by Avkash Kakdiya
## How it works

This workflow enhances contact intelligence by retrieving new or updated contact data, enriching it using AI and external APIs, and then updating your CRM or contact management system with intelligent insights. It automates the process of gathering, enriching, and organizing contact information to improve targeting, personalization, and engagement.

## Step-by-step

### 1. Trigger & Input

- The workflow is triggered by a scheduler or webhook event.
- It reads a new contact entry (or an updated one) from your source, such as a spreadsheet or form.
- Basic fields like name, email, and company are used as the starting point for enrichment.

### 2. Contact Lookup & Parsing

- The contact's domain or company is extracted and used to perform a lookup via an external data source (see the sketch at the end of this section).
- Data such as company details, job title, or LinkedIn profile is retrieved.
- Results are parsed and cleaned to remove duplicates, missing values, and invalid entries.

### 3. AI Enrichment

- The enriched contact is passed through an AI model (such as GPT or another NLP service).
- The model analyzes job role, seniority, and inferred interests based on available data.
- Insights like intent, persona category, or engagement score are generated.

### 4. Validation & Tagging

- The AI-enriched data is validated to ensure consistency and accuracy.
- Tags and segments (e.g., "Decision Maker", "Technical Buyer") are assigned based on rules or AI inference.
- This enables smart filtering, targeting, and routing later in your CRM or campaigns.

### 5. Output & Integration

The final enriched and validated contact is written back to your CRM, sheet, or marketing platform. The system also:

- Sends a Slack/Email alert with a summary.
- Updates the original contact entry with a "Processed" or "Enriched" status.
- Triggers next steps, such as personalized outreach or nurture sequences.

## Benefits

- **Enhances contact profiles** with AI-generated insights and third-party data.
- **Improves segmentation & targeting** through smart tags and persona classification.
- **Automates manual research**, saving time and improving accuracy.
- **Easily extendable** by adding more AI models, data sources, or CRM integrations.
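As an example of the extraction in step 2, here is a small TypeScript sketch that derives a company domain from a contact's email, skipping free mail providers (the provider list is illustrative, not exhaustive):

```typescript
// Derive the company domain from an email address so it can drive
// the external lookup; free mail providers carry no company signal.
const FREE_PROVIDERS = new Set(['gmail.com', 'yahoo.com', 'outlook.com']);

function companyDomain(email: string): string | null {
  const domain = email.split('@')[1]?.toLowerCase();
  if (!domain || FREE_PROVIDERS.has(domain)) return null;
  return domain;
}

console.log(companyDomain('jane.doe@acme.io')); // "acme.io"
console.log(companyDomain('jane@gmail.com'));   // null, no company signal
```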
by Emmanuel Bernard
This workflow illustrates how to use Perplexity AI in your n8n workflow. Perplexity is an AI-powered answer engine that provides accurate, trusted, and real-time answers to any question.

## Credentials Setup

1. Go to the Perplexity dashboard, purchase some credits, and create an API key: https://www.perplexity.ai/settings/api
2. In the Perplexity Request node, use Generic Credentials with Header Auth. For the name, use the value "Authorization", and for the value, "Bearer pplx-e4...59ea" (your Perplexity API key). A request sketch follows below.

## AI Model

Sonar Pro is the current top model offered by Perplexity. If you want to use a different one, check this page: https://docs.perplexity.ai/guides/model-cards
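For reference, here is a minimal TypeScript sketch of the request the Perplexity node sends. The chat-completions endpoint is OpenAI-compatible and `sonar-pro` matches the model named above; replace the placeholder key with your own:

```typescript
// Minimal Perplexity chat-completions call, mirroring the Header Auth
// credential configured above. The key below is a placeholder.
async function askPerplexity(question: string) {
  const res = await fetch('https://api.perplexity.ai/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer pplx-YOUR_API_KEY', // Header Auth value
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'sonar-pro',
      messages: [{ role: 'user', content: question }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

askPerplexity('What happened in AI news today?').then(console.log);
```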