by Luís Philipe Trindade
What's up Guys. I'm Luís 🙋🏻♂️ Let me make one thing clear up front: this isn't just another WhatsApp summary workflow. It's a fully structured automation built for people who actually need to stay informed without wasting time and with total control over what gets summarized. What this workflow does: Receives messages via webhook from Evolution API Checks if the message is from a group or an individual Routes messages by type: text or audio (with automatic transcription using OpenAI) Stores everything in a Google Sheet organized by group, sender, timestamp, and message content Creates a Control Panel with a checkbox for each group, so you decide which groups should receive summaries (this is the main differentiator of this workflow) Collects all messages from yesterday, groups them by chat, and sends them to GPT to generate a summary Sends a clean, formatted summary to the WhatsApp group every morning (fully automated). 🧩 How the flow is structured This workflow is strategically divided into two independent parts to ensure clarity, organization, and easy scalability: Part 1 – Message Capture and Storage Triggered via webhook, this part: Receives messages from Evolution API Checks if the message is from a group Distinguishes between text and audio (with automatic transcription) Stores the message in Google Sheets Checks if the group exists in the control tab If it doesn't, it creates a new row with a checkbox so you can enable/disable summaries for that group Part 2 – Summary Generation and Delivery Scheduled to run daily at 08:00 AM (or your preferred trigger time) Pulls all messages from the previous day Groups them by chat and checks if that group is enabled for summaries Sends the messages to OpenAI to generate a digest Delivers the summary directly into the WhatsApp group using Evolution API This structure makes the flow easier to manage, customize, and scale — plug in other tools without breaking the logic. 
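A minimal sketch of the Part 1 routing decision, as it might look inside an n8n Code node. The field names (`key.remoteJid`, `messageType`) mirror typical Evolution API webhook payloads but are assumptions — verify against your own webhook output:

```javascript
// Hypothetical sketch of the group/type routing from Part 1.
// Field names are assumptions based on Evolution API-style payloads.
function routeMessage(payload) {
  const jid = payload.key?.remoteJid ?? "";
  // WhatsApp group JIDs end with "@g.us"; individual chats use "@s.whatsapp.net"
  const isGroup = jid.endsWith("@g.us");
  // Audio messages are sent to transcription; everything else is treated as text
  const type = payload.messageType === "audioMessage" ? "audio" : "text";
  return { isGroup, type };
}
```

In the actual workflow this branching is done with IF/Switch nodes rather than a single function, but the logic is the same.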
Tools used: ✅ Evolution API (unofficial WhatsApp connection API) ✅ Google Sheets (template provided) ✅ OpenAI (for transcription and summarization) How to set it up: Set up the webhook on Evolution and connect it to n8n Use the included Google Sheets template. Click here to make your copy 👉🏻 [[Template] Log - Group Summary](https://docs.google.com/spreadsheets/d/1ymkWd0thcFRTtWdNrenUg1k8lAmn19ebznSHtvKHaoE/edit?usp=sharing) Connect your Google Sheets credentials Add your OpenAI API key (Optional) Customize the prompt and choose your preferred trigger time Why this workflow stands out: 📊 *Real control panel: enable or disable summaries per group with a single click* 🔍 Fully traceable and modular logic with clear branching and error handling ⚙️ Built for scale. Ideal for teams, communities, or educational groups 📬 Automatically delivers structured daily insights straight to your WhatsApp groups ✅ Works on both n8n Cloud and Self-hosted 🔐 100% secure. No hacks. No shortcuts. Want to adapt this flow for your business, team, or community? 📩 Custom requests: WhatsApp me at +5534992569346
by Joseph LePage
Transform simple queries into comprehensive, well-structured content with this n8n workflow that leverages Perplexity AI for research and GPT-4 for content transformation. Create professional blog posts and HTML content automatically while maintaining accuracy and depth. Intelligent Research & Analysis 🚀 Automated Research Pipeline Harnesses Perplexity AI's advanced research capabilities Processes complex topics into structured insights Delivers comprehensive analysis in minutes instead of hours 🧠 Smart Content Organization Automatically structures content with clear hierarchies Identifies and highlights key concepts Maintains technical accuracy while improving readability Creates SEO-friendly content structure Content Transformation Features 📝 Dynamic Content Generation Converts research into professional blog articles Generates clean, responsive HTML output Implements proper semantic structure Includes metadata and categorization 🎨 Professional Formatting Responsive Tailwind CSS styling Clean, modern HTML structure Proper heading hierarchy Mobile-friendly layouts Blockquote highlighting for key insights Perfect For 📚 Content Researchers Save hours of manual research by automating the information gathering and structuring process. ✍️ Content Writers Focus on creativity while the workflow handles research and technical formatting. 🌐 Web Publishers Generate publication-ready HTML content with modern styling and proper structure. Technical Implementation ⚡ Workflow Components Webhook endpoint for query submission Perplexity AI integration for research GPT-4 powered content structuring HTML transformation engine Telegram notification system (optional) Transform your content creation process with an intelligent system that handles research, writing, and formatting while you focus on strategy and creativity.
by moosa
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. 🚀 Overview This workflow enables a powerful AI-driven virtual assistant that dynamically responds to website queries using webhook input, Pinecone vector search, and OpenAI agents — all smartly routed based on the source website. 🔧 How It Works Webhook Trigger The workflow starts with a Webhook node that receives query parameters: query: The user's question userId: Unique user identifier site: Website identifier (e.g., test_site) page: Page identifier (e.g., homepage, pricing) Smart Routing A Switch node directs the request to the correct AI agent based on the site value. Each AI agent uses: OpenAI GPT-4/3.5 model Pinecone vector store for context-aware answers SQL-based memory for consistent multi-turn conversation Contextual AI Agent Each agent is customized per website using: Site-specific Pinecone namespaces Predefined system prompts to stay in scope Webhook context including page, site, and userId Final Response The response is sent back to the originating website using the Respond to Webhook node. 🧠 Use Case Ideal for multi-site platforms that want to serve tailored AI chat experiences per domain or page — whether it’s support, content discovery, or interactive agents. ✅ Highlights 🧠 Vector search using Pinecone for contextual responses 🔀 Website-aware logic with Switch node routing 🔐 No hardcoded API keys 🧩 Modular agents for scalable multi-site support
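The site-aware routing described above can be sketched as a plain lookup from the webhook's `site` parameter to a per-website configuration. The site names, namespaces, and prompts below are illustrative only — the template does this with a Switch node and separate agent branches:

```javascript
// Illustrative sketch of the Switch-node routing: map the incoming
// `site` query parameter to a Pinecone namespace and system prompt.
// All names here are hypothetical examples.
const SITE_CONFIG = {
  test_site: { namespace: "test_site-docs", prompt: "Answer only questions about Test Site." },
  shop_site: { namespace: "shop_site-docs", prompt: "Answer only questions about Shop Site." },
};

function routeQuery({ site, query, userId, page }) {
  const config = SITE_CONFIG[site];
  if (!config) throw new Error(`Unknown site: ${site}`);
  // Carry the webhook context through so the agent can use it
  return { ...config, query, userId, page };
}
```

This keeps each agent scoped to its own vector namespace, which is what makes the multi-site setup safe to scale.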
by Konstantin
How it works This workflow creates an intelligent Telegram bot with a knowledge base powered by Qdrant vector database. The bot automatically processes documents uploaded to Google Drive, stores them as embeddings, and uses this knowledge to answer questions in Telegram. It consists of two independent flows: document processing (Google Drive → Qdrant) and chat interaction (Telegram → AI Agent → Telegram). Step-by-step Document Processing Flow: **New File Trigger:** The workflow starts when the **New File Trigger** node detects a new file created in the specified Google Drive folder (polling every 15 minutes). **Download File:** The **Download File** (Google Drive) node downloads the detected file from Google Drive. **Text Splitting:** The **Split Text into Chunks** node splits the document text into chunks of 3000 characters with 300-character overlap for optimal embedding. **Load Document Data:** The **Load Document Data** node processes the binary file data and prepares it for vectorization. **OpenAI Embeddings:** The **OpenAI Embeddings** node generates vector embeddings for each text chunk. **Insert into Qdrant:** The **Insert into Qdrant** node stores the embeddings in the Qdrant vector database collection. **Move to Processed Folder:** After successful processing, the **Move to Processed Folder** (Google Drive) node moves the file to a "Qdrant Ready" folder to keep files organized. Telegram Chat Flow: **Telegram Message Trigger:** The **Telegram Message Trigger** node receives new messages from the Telegram bot. **Filter Authorized User:** The **Filter Authorized User** node checks if the message is from an authorized chat ID (26899549) to restrict bot access. **AI Agent Processing:** The **AI Agent** receives the user's message text and processes it using the fine-tuned GPT-4.1 model with access to the Qdrant knowledge base tool. **Qdrant Knowledge Base:** The **Qdrant Knowledge Base** node retrieves relevant information from the vector database to provide context for the AI agent's responses. 
**Conversation Memory:** The **Conversation Memory** node maintains conversation history per chat ID, allowing the bot to remember context. **Send Response to Telegram:** The **Send Response to Telegram** node sends the AI-generated response back to the user in Telegram. Set up steps Estimated set up time: 15 minutes Google Drive Setup: Add your Google Drive OAuth2 credentials to the New File Trigger, Download File, and Move to Processed Folder nodes. Create two folders in your Google Drive: one for incoming files and one for processed files. Copy the folder IDs from the URLs and update them in the New File Trigger (folderToWatch) and Move to Processed Folder (folderId) nodes. Qdrant Setup: Add your Qdrant API credentials to the Insert into Qdrant and Qdrant Knowledge Base nodes. Create a collection in your Qdrant instance (e.g., "Test-youtube-adept-ecom"). Update the collection name in both Qdrant nodes. OpenAI Setup: Add your OpenAI API credentials to the OpenAI Chat Model and OpenAI Embeddings nodes. (Optional) Replace the fine-tuned model ID in OpenAI Chat Model with your own model or use a standard model like gpt-4-turbo. Telegram Setup: Create a Telegram bot via @BotFather and obtain the bot token. Add your Telegram bot credentials to the Telegram Message Trigger and Send Response to Telegram nodes. Update the authorized chat ID in the Filter Authorized User node (replace 26899549 with your Telegram user ID). Customize System Prompt (Optional): Modify the system message in the AI Agent node to customize your bot's personality and behavior. The current prompt is configured for an n8n automation expert creating social media content. Activate the Workflow: Toggle "Active" in the top-right to enable both the Google Drive trigger and Telegram trigger. Upload a document to your Google Drive folder to test the document processing flow. Send a message to your Telegram bot to test the chat interaction flow.
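The 3000-character chunks with 300-character overlap described in the Text Splitting step can be sketched like this (the actual node uses LangChain's text splitter, which additionally tries to break on sensible boundaries):

```javascript
// Simplified character-based chunking: each chunk is `size` characters,
// and consecutive chunks share `overlap` characters of context.
function splitIntoChunks(text, size = 3000, overlap = 300) {
  const chunks = [];
  const step = size - overlap; // advance 2700 chars per chunk
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

The overlap matters for retrieval quality: a sentence that straddles a chunk boundary still appears whole in at least one chunk, so its embedding stays meaningful.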
by Sona Labs
Generate Sora AI videos, save to Google Drive, and update Google Sheets This workflow automates video generation using OpenAI's Sora model: How it works: Reads video prompts from a Google Sheet Submits each prompt to Sora API for video generation Monitors video creation status with automatic retry logic Generates SEO-optimized titles using GPT-4 Downloads completed videos and uploads to Google Drive Updates the sheet with video URLs and status Setup Steps: Create a Google Sheet with columns: PROMPT, DURATION (In Seconds), VIDEO RESOLUTION, VIDEO TITLE, VIDEO URL, STATUS Add your OpenAI API key (with Sora access) to credentials Connect your Google Sheets and Google Drive accounts Update the sheet ID in all Google Sheets nodes Run the workflow manually to process all unprocessed rows Requirements: OpenAI API key with Sora access Google Sheets OAuth2 credentials Google Drive OAuth2 credentials
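The "monitors status with automatic retry logic" step amounts to a poll-until-done loop. A hedged sketch (in n8n this is built from Wait and IF nodes rather than a loop, and the status values are assumptions — check the actual API response):

```javascript
// Generic poll-with-retry sketch. `checkStatus` stands in for the
// HTTP Request node that queries the video job; status names are
// illustrative assumptions.
async function waitForVideo(checkStatus, { maxTries = 20, delayMs = 30000 } = {}) {
  for (let i = 0; i < maxTries; i++) {
    const job = await checkStatus();
    if (job.status === "completed") return job;       // done -> download step
    if (job.status === "failed") throw new Error("generation failed");
    await new Promise((r) => setTimeout(r, delayMs)); // wait before retrying
  }
  throw new Error("timed out waiting for video");
}
```

Capping `maxTries` is the important design choice: without it, a stuck job would hold the workflow execution open indefinitely.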
by Naveen Choudhary
Automatically gather hundreds of real customer reviews from five major platforms in one run using Thordata API and Proxy — Trustpilot, Capterra, Chrome Web Store, TrustRadius, and Product Hunt — then let GPT-4.1 perform deep collective sentiment analysis, uncover common praises & complaints, flag critical issues, assess churn risk, and deliver actionable recommendations straight to your inbox as a stunning executive HTML report. Who’s it for Product managers & founders Growth and marketing teams Customer success & support leads Agencies delivering competitor or product review reports How it works Submit product URLs via form, webhook, or use defaults Smart, Cloudflare-safe scraping with automatic pagination Universal parser standardizes every review format Global deduplication using deterministic unique IDs GPT-4.1 analyzes all reviews collectively (not one-by-one) Beautiful responsive HTML email with sentiment badges, stats, and recommendations Requirements Thordata API key (free tier works) → set as HTTP Header Auth credential OpenAI API key Gmail account (or replace with any email node) How to set up Add your Thordata and OpenAI credentials Connect Gmail Click “Execute Workflow” – instantly tests with Thordata’s own reviews How to customize Edit default product in “Prepare Review Sources” node Modify the AI prompt or email design anytime Add more sources or change the output format easily Zero browser automation · Rate-limit safe · Fully deduplicated · Plug-and-play in minutes.
by Jinash Rouniyar
PROBLEM Evaluating and comparing responses from multiple LLMs (OpenAI, Claude, Gemini) can be challenging when done manually. Each model produces outputs that differ in clarity, tone, and reasoning structure. Traditional evaluation metrics like ROUGE or BLEU fail to capture nuanced quality differences. Human evaluations are inconsistent, slow, and difficult to scale. This workflow automates LLM response quality evaluation using Contextual AI's LMUnit, a natural language unit testing framework that provides systematic, fine-grained feedback on response clarity and conciseness. > Note: LMUnit offers natural language-based evaluation with a 1–5 scoring scale, enabling consistent and interpretable results across different model outputs. How it works A chat trigger node collects responses from multiple LLMs such as OpenAI GPT-4.1, Claude 4.5 Sonnet, and Gemini 2.5 Flash. Each model receives the same input prompt to ensure fair comparison; the responses are then aggregated and associated with each test case We use Contextual AI's LMUnit node to evaluate each response using predefined quality criteria: "Is the response clear and easy to understand?" - Clarity "Is the response concise and free from redundancy?" - Conciseness LMUnit then produces evaluation scores (1–5) for each test Results are aggregated and formatted into a structured summary showing model-wise performance and overall averages. How to set up Create a free Contextual AI account and obtain your CONTEXTUALAI_API_KEY. In your n8n instance, add this key as a credential under "Contextual AI." Obtain and add credentials for each model provider you wish to test: OpenAI API Key: platform.openai.com/account/api-keys Anthropic API Key: console.anthropic.com/settings/keys Gemini API Key: ai.google.dev/gemini-api/docs/api-key Start sending prompts using the chat interface to automatically generate model outputs and evaluations. 
How to customize the workflow Add more evaluation criteria (e.g., factual accuracy, tone, completeness) in the LMUnit test configuration. Include additional LLM providers by duplicating the response generation nodes. Adjust thresholds and aggregation logic to suit your evaluation goals. Enhance the final summary formatting for dashboards, tables, or JSON exports. For detailed API parameters, refer to the LMUnit API reference. If you have feedback or need support, please email feedback@contextual.ai.
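The aggregation step — collapsing per-test LMUnit scores (1–5 scale) into model-wise and overall averages — can be sketched like this. The result shape is illustrative, not the template's exact output format:

```javascript
// results: [{ model, criterion, score }, ...] where score is 1-5.
function summarizeScores(results) {
  const byModel = {};
  for (const { model, score } of results) {
    (byModel[model] ??= []).push(score); // group scores per model
  }
  const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const perModel = Object.fromEntries(
    Object.entries(byModel).map(([m, xs]) => [m, avg(xs)])
  );
  const overall = avg(results.map((r) => r.score));
  return { perModel, overall };
}
```

Keeping the raw per-criterion scores around (rather than only the averages) makes it easy to add thresholds later, e.g. flag any model whose clarity score drops below 3.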
by David Olusola
📝 Auto-Generate Meeting Notes & Summaries (Zoom → Google Docs + Slack) This workflow automatically captures Zoom meeting data when a meeting ends, generates AI-powered notes, saves them to Google Docs, and instantly posts a summary with a link in Slack. ⚙️ How It Works Zoom Webhook → Triggers on meeting.ended or recording.completed. Normalize Data → Extracts meeting details (topic, host, duration, transcript). AI Notes (GPT-4) → Summarizes transcript into key decisions, action items, and next steps. Google Docs → Saves formatted meeting notes + transcript archive. Slack Post → Shares summary + link to notes in #team-meetings. 🛠️ Setup Steps 1. Zoom App Go to Zoom Developer Console → create App. Enable event meeting.ended. Paste workflow webhook URL. 2. Google Docs Connect Google OAuth in n8n. Docs auto-saved in your Google Drive. 3. Slack Connect Slack OAuth. Replace channel #team-meetings. 4. OpenAI Add your OpenAI API key. Uses GPT-4 for accurate summaries. 
📊 Example Output Slack Message: 📝 New Meeting Notes Available Topic: Marketing Sync Host: david@company.com Duration: 45 mins 👉 Read full notes here: https://docs.google.com/document/d/xxxx Google Doc: Executive Summary Key Decisions Action Items w/ Owners Next Steps Full Transcript ⚡ With this workflow, your team never scrambles for meeting notes again.
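The "Normalize Data" step above can be sketched as plucking the fields the later nodes need out of the Zoom webhook body. Zoom nests meeting details under `payload.object`; the exact field names below follow common Zoom webhook payloads but are assumptions — verify against your app's event samples:

```javascript
// Hedged sketch of extracting meeting details from a Zoom webhook.
// Field names (topic, host_email, duration) are assumptions.
function normalizeZoomEvent(body) {
  const o = body.payload?.object ?? {};
  return {
    event: body.event,                       // e.g. "meeting.ended"
    topic: o.topic ?? "Untitled meeting",
    host: o.host_email ?? "unknown",
    durationMins: o.duration ?? 0,
  };
}
```

Normalizing early means the GPT-4, Google Docs, and Slack nodes all consume one flat, predictable shape instead of each digging into the raw webhook.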
by Avkash Kakdiya
How it works This workflow automates the classification and routing of incoming Intercom conversations. When a new customer message arrives, it is analyzed by AI to determine category, sentiment, urgency, and tags. Based on this classification, the workflow creates tasks in ClickUp for Support or Product requests, or sends real-time alerts to Slack for Sales inquiries. Step-by-step Webhook Intake Triggered when Intercom sends a new conversation payload. Captures customer details, message content, and metadata. AI Classification Sends the conversation JSON to OpenAI (gpt-4o-mini) with a structured prompt. AI returns a JSON object with category (Support, Product, Sales, Other), sentiment, urgency, reasoning, and tags. Processing & Structuring A Code node parses the AI output and merges it with conversation details. Prepares formatted task fields such as title, description, customer info, and priority. Conditional Routing Support requests → Task created in ClickUp with urgency and tags. Product requests → Task created in ClickUp with structured details. Sales inquiries → Slack alert sent to the Sales channel with context and AI reasoning. Other → No task/action triggered. Benefits Automates Intercom ticket triage and routing in real time. Ensures consistent, AI-driven classification of all customer conversations. Reduces manual review time for Support, Product, and Sales teams. Creates structured tasks with enriched metadata for faster resolution. Keeps Sales teams instantly informed with Slack alerts for urgent leads.
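The "Processing & Structuring" Code node step — parse the AI's JSON classification, then pick a route — can be sketched as follows. The category names come from the description above; the fallback guards against malformed model output, which is a common failure mode worth handling explicitly:

```javascript
// Parse the model's JSON classification and decide where to route.
function classifyAndRoute(aiOutput) {
  let parsed;
  try {
    parsed = JSON.parse(aiOutput);
  } catch {
    // Malformed AI output: fall back to a safe "Other" classification
    parsed = { category: "Other", sentiment: "unknown", urgency: "low", tags: [] };
  }
  const routes = { Support: "clickup", Product: "clickup", Sales: "slack" };
  return { ...parsed, route: routes[parsed.category] ?? "none" };
}
```

In the workflow itself the routing is an IF/Switch node, but centralizing the category-to-destination mapping like this makes it obvious where to add a new category later.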
by kiran adhikari
📝 Description This workflow automates the collection, filtering, and scoring of trending AskReddit posts for viral potential. It pulls posts from Reddit, removes duplicates, calculates a custom virality score, and writes the final candidates into Google Sheets for later use in content creation. This is Phase 1 of the AskReddit → YouTube Shorts automation pipeline. It prepares clean, high-quality data that can be used in the next phases (script generation, AI video creation, and publishing). ⚙️ Setup Steps Import Workflow into your n8n instance. Reddit API: Add your Reddit API credentials in the "Get AskReddit Posts" node. Google Sheets: Connect your Google account. Point the "Write Candidates" node to your target Google Sheet. Virality Scoring: The "Add Virality Score" node assigns weights (e.g., upvotes, comments). Adjust the scoring logic as needed for your niche. Run Workflow: Execute manually or schedule with Cron. Verify that trending AskReddit posts appear in your sheet, scored and cleaned.
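An illustrative virality score in the spirit of the "Add Virality Score" node: weight engagement against post age so a fast-rising post beats an old high-scoring one. The exact weights in the template may differ — tune them for your niche, as the setup notes suggest:

```javascript
// Engagement-per-hour score: comments are weighted higher than upvotes
// because they signal discussion potential. Weights are placeholders.
function viralityScore(post) {
  const ageHours = Math.max(post.ageHours, 1);             // avoid division by zero
  const engagement = post.upvotes + 2 * post.numComments;  // comments count double
  return Math.round((engagement / ageHours) * 100) / 100;  // per-hour rate, 2 decimals
}
```

Normalizing by age is the key trick: raw upvote counts always favor older posts, while a rate surfaces the posts that are trending right now.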
by Robert Breen
This beginner-friendly n8n workflow teaches essential data manipulation techniques using Google Sheets and AI. You'll learn how to: ✅ Merge two datasets by a shared column (Channel) 🔍 Filter rows based on performance metrics (Clicks, Spend) 🔀 Branch logic into "Great" vs. "Poor" outcomes 📊 Summarize results by team leader 🤖 Use an OpenAI-powered agent to generate a written analysis highlighting the best and worst performers Perfect for marketers, analysts, or anyone learning how to clean, transform, and interpret data inside n8n. Includes: 📁 Sample Google Sheet to copy 🛠 Setup instructions for Google Sheets & OpenAI ✨ AI summary powered by GPT-4o-mini 👋 Questions or Feedback? Feel free to reach out — I’m happy to help! Robert Breen Founder, Ynteractive 🌐 ynteractive.com 📧 robert@ynteractive.com 📺 YouTube: YnteractiveTraining 🔗 LinkedIn: linkedin.com/in/robertbreen
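The merge-filter-branch steps taught above can be sketched in plain JavaScript: join two sheets on the shared Channel column, then label each row "Great" or "Poor". The thresholds and column names below are placeholders — the sample sheet's actual columns and cutoffs may differ:

```javascript
// Join performance rows to team leaders by Channel, then branch on metrics.
function mergeAndLabel(performance, owners, { minClicks = 100, maxSpend = 500 } = {}) {
  const ownerByChannel = new Map(owners.map((o) => [o.Channel, o.Leader]));
  return performance.map((row) => ({
    ...row,
    Leader: ownerByChannel.get(row.Channel) ?? "Unassigned", // left join semantics
    Outcome: row.Clicks >= minClicks && row.Spend <= maxSpend ? "Great" : "Poor",
  }));
}
```

In n8n the same result comes from a Merge node (keyed on Channel) followed by an IF node; seeing it as one function makes the left-join behavior — unmatched channels survive with a default leader — easier to reason about.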
by Yar Malik (Asfandyar)
Who’s it for This template is for users who want to combine the power of AI with Google Sheets for managing and calculating data quickly. It’s ideal for small businesses, data entry teams, and anyone who tracks lists, orders, or tasks in Google Sheets and needs AI-driven insights or calculations. How it works The workflow connects an AI agent with Google Sheets and a calculator tool. When a user sends a chat message, the AI interprets the request, retrieves or updates rows in the connected sheet, and performs calculations when needed. For example, it can read a list of orders from a sheet and calculate totals or averages instantly. It also supports creating, updating, and deleting rows from the sheet through natural language instructions. How to set up Copy the provided Google Sheet into your Google Drive. Connect your Google Sheets credentials in n8n. Add your OpenAI credentials for the AI agent. Deploy the workflow and start interacting with it by sending chat prompts. Requirements OpenAI account (for AI responses) Google Sheets account with a spreadsheet n8n instance with LangChain nodes enabled How to customize the workflow Change the spreadsheet fields (ID, Name, etc.) to match your own data structure. Modify the AI prompt to guide the agent’s tone or behavior. Extend the workflow by adding more Google Sheets operations or AI tools for advanced tasks.