by Nghia Nguyen
This AI Agent helps you create short links from your original URLs. Each generated short link is automatically stored in a database table for easy management and tracking. How It Works Provide a long URL to the Agent. The Agent saves your original link in the database. It generates a short link in the following format: Short link: https://{webhook_url}/webhook/shortLink?q={shortLinkId} When users open the short link, they are automatically redirected to your original link. How to Use Send your link to the Agent. The Agent will respond with a generated short link. Requirements Add your your_webhook_url to the Config Node. OpenAI account Create a database table named ShortLink with the following columns: | Column Name | Description | |----------------|------------------------------| | originalLink | Stores the full original URL. | | shortLinkId | Stores the unique short link ID. | Customization Options Add traffic tracking or analytics for each short link. Customize the redirect page to display your logo, message, or branding.
by Rahul Joshi
📊 Description Automate your eCommerce content workflow by generating AI-optimized product descriptions directly from Airtable. 🛍️🤖 This automation checks for pending products every 15 minutes, processes them in batches, and uses GPT-4o-mini to create structured long descriptions, short answer blocks, bullet features, and feature tables. All AI-generated fields are then written back into the same Airtable record, ensuring clean, consistent, and SEO-friendly product copy. Perfect for stores looking to scale product listings without manual writing. ✨📄 🔁 What This Template Does 1️⃣ Triggers every 15 minutes to look for products marked as “pending”. ⏰ 2️⃣ Fetches product data from Airtable for processing. 📦 3️⃣ Splits results into batches to avoid API rate limits. 🔁 4️⃣ Sends product attributes to GPT-4o-mini for AI-generated content. 🤖 5️⃣ Uses a structured JSON parser to ensure clean, valid AI output. 📐 6️⃣ Converts the AI JSON into Airtable-friendly fields using a Code node. 🧩 7️⃣ Updates the original Airtable product record with generated descriptions. ✍️ 8️⃣ Marks each item as “done” with a timestamp. ✔️ ⭐ Key Benefits ✅ Creates consistent, high-quality product descriptions automatically ✅ Produces AI-friendly content for search engines & answer engines ✅ Eliminates manual copywriting for large product catalogs ✅ Ensures structured, valid, and predictable output every time ✅ Runs reliably on schedule with zero human oversight ✅ Ideal for eCommerce teams scaling product listings 🧩 Features Scheduled automation (every 15 minutes) Airtable integration for fetching & updating records Batch processing to prevent rate-limit issues GPT-4o-mini AI content generation Structured output parser for clean JSON Code node formatting for Airtable Automatic status + timestamp updating 🔐 Requirements Airtable Personal Access Token OpenAI API key (GPT-4o-mini) Airtable base + table with required fields n8n with LangChain nodes enabled 🎯 Target Audience eCommerce teams managing product catalogs Marketplace sellers needing scalable content Operations teams automating product listings Agencies generating optimized product copy for clients
by Sergei Byvshev
Overview This workflow helps automatically analyze the causes of build failures in Gitlab CI and propose solutions without involving DevOps engineers. See detailed setup guide How it work Checks whether a job crashed. Gets logs of the crashed job and a description of the pipeline. The agent analyzes this data according to its system prompt. During the process, the agent can retrieve additional data from Gitlab and check the availability of endpoints. A report on the causes and solutions is sent to the selected Slack channel Works good with Openai, Anthropic, Google last models How to use Generate webhook credentials and use it in Gitlab Add your own Rules and recommendations to system prompt Run MCP servers Choose Slack channel
by Robert Breen
Run an AI-powered degree audit for each senior student. This template reads student rows from Google Sheets, evaluates completed courses against hard-coded program requirements, and writes back an AI Degree Summary of what's still missing (major core, Gen Eds, major electives, and upper-division credits). It's designed for quick advisor/registrar review and SIS prototypes. Trigger: Manual — When clicking "Execute workflow" Core nodes: Google Sheets, OpenAI Chat Model, (optional) Structured Output Parser Programs included: Computer Science BS, Business Administration BBA, Psychology BA, Mechanical Engineering BS, Biology BS (Pre-Med), English Literature BA, Data Science BS, Nursing BSN, Economics BA, Graphic Design BFA Who's it for Registrars & advisors** who need fast, consistent degree checks Student success teams** building prototype dashboards SIS/EdTech builders** exploring AI-assisted auditing How it works Read seniors from Google Sheets (Senior_data) with: StudentID, Name, Program, Year, CompletedCourses. AI Agent compares CompletedCourses to built-in requirements (per program) and computes Missing items + a short Summary. Write back to the same sheet using "Append or update" by StudentID (updates AI Degree Summary; you can also map the raw Missing array to a column if desired). Example JSON (for one student): { "StudentID": "S001", "Program": "Computer Science BS", "Missing": [ "GEN-REMAIN | General Education credits remaining | 6", "CS-EL-REM | CS Major Electives (200+ level) | 6", "UPPER-DIV | Additional Upper-Division (200+ level) credits needed | 18", "FREE-EL | Free Electives to reach 120 total credits | 54" ], "Summary": "All core CS courses are complete. Still need 6 Gen Ed credits, 6 CS electives, and 66 total credits overall, including 18 upper-division credits — prioritize 200/300-level CS electives." } Setup (2 steps) 1) Connect Google Sheets (OAuth2) In n8n → Credentials → New → Google Sheets (OAuth2) and sign in. In the Google Sheets nodes, select your spreadsheet and the Senior_data tab. Ensure your input sheet has at least: StudentID, Name, Program, Year, CompletedCourses. 2) Connect OpenAI (API Key) In n8n → Credentials → New → OpenAI API, paste your key. In the OpenAI Chat Model node, select that credential and a model (e.g., gpt-4o or gpt-5). Requirements Sheet columns:** StudentID, Name, Program, Year, CompletedCourses CompletedCourses format:** pipe-separated IDs (e.g., GEN-101|GEN-103|CS-101). Program labels:** should match the built-in list (e.g., Computer Science BS). Credits/levels:** Template assumes upper-division ≥ 200-level (adjust the prompt if your policy differs). Customization Change requirements:** Edit the Agent's system message to update totals, core lists, elective credit rules, or level thresholds. Store more output:** Map Missing to a new column (e.g., AI Missing List) or write rows to a separate sheet for dashboards. Distribute results:** Email summaries to advisors/students (Gmail/Outlook), or generate PDFs for advising folders. Add guardrails:** Extend the prompt to enforce residency, capstone, minor/cognate constraints, or per-college Gen Ed variations. Best practices (per n8n guidelines) Sticky notes are mandatory:** Include a yellow sticky note that contains this description and quick setup steps; add neutral sticky notes for per-step tips. Rename nodes clearly:** e.g., "Get Seniors," "Degree Audit Agent," "Update Summary." No hardcoded secrets:** Use credentials—not inline keys in HTTP or Code nodes. Sanitize identifiers:** Don't ship personal spreadsheet IDs or private links in the published version. Use a Set node for config:** Centralize user-tunable values (e.g., column names, tab names). Troubleshooting OpenAI 401/429:** Verify API key/billing; slow concurrency if rate-limited. Empty summaries:** Check column names and that CompletedCourses uses |. Program mismatch:** Align Program labels to those in the prompt (exact naming recommended). Sheets auth errors:** Reconnect Google Sheets OAuth2 and re-select spreadsheet/tab. Limitations Not an official audit:** It infers gaps from the listed completions; registrar rules can be more nuanced. Catalog drift:** Requirements are hard-coded in the prompt—update them each term/year. Upper-division heuristic:** Adjust the level threshold if your institution defines it differently. Tags & category Category: Education / Student Information Systems Tags: degree-audit, registrar, google-sheets, openai, electives, upper-division, graduation-readiness Changelog v1.0.0 — Initial release: Senior_data in/out, 10 programs, AI Degree Summary output, append/update by StudentID. Contact Need help tailoring this to your catalog (e.g., per-college Gen Eds, capstones, minors, PDFs/email)? 📧 rbreen@ynteractive.com 📧 robert@ynteractive.com 🔗 Robert Breen — https://www.linkedin.com/in/robert-breen-29429625/ 🌐 ynteractive.com — https://ynteractive.com
by Luís Philipe Trindade
What’s up guys, I’m Luís 🙋🏻♂️ If you manage learning programs, communities, or customer groups on WhatsApp, this workflow will save your life. It’s your AI-powered FAQ engine. This workflow captures group conversations (via Google Sheets), identifies the most common doubts and recurring questions, and automatically builds a structured FAQ document with suggested answers. ⚠️ Important note To use this workflow, you must already have all WhatsApp conversations saved into a Google Sheet. If you don’t have this yet, check out my other workflow that does exactly that: Workflow to Summary Group WhatsApp. ✅ What this workflow does Runs weekly (every Monday 6am) Pulls all conversations from your Google Sheet Groups messages by week into structured blocks Sends blocks to an AI Agent to detect FAQs AI extracts recurring questions, explains context, and suggests answers Creates a new FAQ document in Google Docs based on a template Keeps everything organized and accessible for the team 🧩 Flow Structure Part 1 – Data Capture & Weekly Blocks Retrieves group messages from Google Sheets Organizes them by ISO week number Prepares clean message blocks for AI analysis Part 2 – AI FAQ Builder AI Agent analyzes the messages Extracts FAQs with suggested responses Generates a new Google Doc FAQ every week 🔧 Tools used ✅ Google Sheets (message log database) ✅ OpenAI (AI analysis & FAQ generation) ✅ Google Docs (automatic FAQ output) ✅ Schedule Trigger (weekly automation) 🌟 Why this workflow stands out 📊 Turns raw WhatsApp conversations into weekly FAQ reports 🤖 AI not only detects questions, but also suggests answers 🚀 Automated, scalable and perfect for communities and teams 📝 Delivers a ready-to-use FAQ doc every week ✅ Works on both n8n Cloud and Self-hosted 🔐 100% secure. No hacks. No shortcuts. Want to adapt this flow for your business, education program, or internal team? 📩 Custom requests: WhatsApp me at +55 34 99256-9346 🇧🇷 Português (PT-BR) Fala, galera! Eu sou o Luís 🙋🏻♂️ Se você gerencia cursos, comunidades ou grupos de clientes no WhatsApp, esse fluxo vai salvar sua vida. É o seu FAQ automático com IA. Ele pega as conversas que já estão salvas em uma planilha do Google Sheets, identifica as dúvidas mais recorrentes e gera um documento organizado com respostas sugeridas. ⚠️ Atenção Para usar esse fluxo, você precisa já ter todas as conversas do WhatsApp salvas em uma planilha no Google Sheets. Se você ainda não tem isso configurado, utilize meu outro workflow que faz exatamente esse processo: Workflow to Summary Group WhatsApp. ✅ O que esse fluxo faz Roda toda segunda-feira às 6h Busca as mensagens salvas em sua planilha no Google Sheets Agrupa por semana em blocos organizados Envia para um Agente de IA que identifica dúvidas recorrentes Gera explicações e respostas sugeridas Cria automaticamente um novo documento FAQ no Google Docs Mantém um histórico semanal claro e acessível 🧩 Como o fluxo está estruturado Parte 1 – Captura & Blocos Semanais Puxa mensagens da planilha Organiza por semana ISO Prepara blocos para análise pela IA Parte 2 – FAQ Builder com IA Agente de IA analisa blocos Extrai dúvidas recorrentes e sugere respostas Cria um documento FAQ atualizado no Google Docs 🔧 Ferramentas utilizadas ✅ Google Sheets (base de mensagens) ✅ OpenAI (análise & geração de FAQ) ✅ Google Docs (documento automático) ✅ Agendamento semanal (gatilho) 🌟 Por que esse fluxo se destaca 📊 Transforma mensagens de grupo em relatórios semanais de FAQ 🤖 IA identifica dúvidas e já entrega respostas prontas 🚀 Escalável para qualquer comunidade ou programa educacional 📝 Documento novo toda semana, sem esforço manual ✅ Compatível com n8n Cloud e Self-hosted 🔐 100% seguro. Sem gambiarras. Quer adaptar esse fluxo pro seu negócio, curso ou comunidade? 📩 Solicitação personalizada: me chama no WhatsApp +55 34 99256-9346
by Khaisa Studio
Promo Seeker finds fresh, working promo codes and vouchers on the web so your team never misses a deal. This n8n workflow uses SerpAPI and Decodo Scrapper for real-time search, an agent powered by GPT-5 Mini for filtering and validation, and Chat Memory to keep context—saving time, reducing manual checks, and helping marketing or customer support teams deliver discounts faster to customers (and yes, it's better at hunting promos than your inbox). 💡 Why Use Promo Seeker? Speed: Saves hours per week by automatically finding and validating current promo codes, so you can publish deals faster. Simplicity: Eliminates manual searching across sites, no more copy-paste scavenger hunts. Accuracy: Reduces false positives by cross-checking results and keeping only working vouchers—fewer embarrassed "expired code" moments. Edge: Combine search APIs with an AI agent to surface hard-to-find, recently-live offers—win over competitors who still rely on manual scraping. ⚡ Perfect For Marketing teams: Quickly populate newsletters, landing pages, or ads with valid promos. Customer support: Give verified discount codes to users without ping-ponging between tabs. Deal aggregators & affiliates: Discover fresh vouchers faster and boost conversion rates. 🔧 How It Works ⏱ Trigger: A user message via the chat webhook starts the search (Message node). 📎 Process: The agent queries SerpAPI and Decodo Scrapper to collect potential promo codes and voucher pages. 🤖 Smart Logic: The Promo Seeker Agent uses GPT-5 Mini with Chat Memory to filter for fresh, working promos and to verify validity and relevance. 💌 Output: Results are returned to the chat with clear, copy-ready promo codes and source links. 🗂 Storage: Chat Memory stores context and recent searches so the agent avoids repeating old results and can follow up with improved queries. 🔐 Quick Setup Import JSON file to your n8n instances Add credentials: SerpAPI, Azure OpenAI (Gpt 5 Mini), Decodo API Customize: Search parameters (brands, regions, validity window), agent system message, and result formatting Update: Azure OpenAI endpoint and API key in the Gpt 5 Mini credentials; add your SerpAPI key and Decodo key Test: Run a few queries like "latest Amazon promo" or "food delivery voucher" and confirm returned codes are valid 🧩 You'll Need Active n8n instances SerpAPI account and API key Azure OpenAI (for GPT-5 Mini) with key and endpoint Decodo account/API key 🛠️ Level Up Ideas Push verified promos to a Slack channel or email digest for the team. Add scheduled scans to detect newly expired codes and remove them from lists. Integrate with a CMS to auto-post verified deals to landing pages. Made by: khaisa Studio Tags: promo, vouchers, discounts Category: Marketing Automation Need custom work? Contact Us
by Yanagi Chinatsu
Who is it for? This workflow is perfect for content creators, marketers, researchers, or anyone who wants to efficiently keep up with a YouTube channel without watching every video. It saves you hours of manual work by automatically transcribing, translating, and summarizing new video content. What it does This workflow automates the entire process of digesting YouTube video content. It watches a specified YouTube channel for new uploads. When a new video is published, it uses Google's Gemini AI to create a full English transcription. The text is then translated into Japanese using DeepL and summarized by OpenAI's GPT into a clean title and a few key bullet points. Finally, it saves both the concise summary and the full translated text as separate documents in your Google Drive. How to set up Connect Your Accounts: Authenticate your credentials for the YouTube, Google Gemini, DeepL, OpenAI, and Google Drive nodes. Set the YouTube Channel: In the YouTube: Get Channel by ID node, replace the placeholder channel ID with the one you want to monitor. Choose Google Drive Folders: In the two Google Drive nodes (Save Summary and Save Full Translation), specify the Folder ID where you'd like to store the output files. Activate: Enable the workflow to start monitoring for new videos. Requirements An n8n instance A Google account with access to the YouTube API A Google AI (Gemini) API key A DeepL API account An OpenAI API key A Google account with access to Google Drive How to customize the workflow Change the Trigger: Adjust the Run once a day schedule trigger to run more or less frequently. Adjust the Language: Modify the DeepL: Translate to Japanese node to translate to a different language, or remove it entirely to summarize the original English text. Swap AI Models: Select different AI models in the Google Gemini Model or OpenAI Chat Model nodes based on your preference for speed or quality. Modify the Summary Prompt: Edit the prompt in the Agent: Summarize Japanese Text node to change the tone, length, or format of the summary. Change the Destination: Replace the Google Drive nodes with other services like Slack, Notion, or a database to send your summaries wherever you need them.
by Jitesh Dugar
Transform patient intake from paperwork chaos into intelligent, automated triage that detects emergencies, prepares providers with comprehensive briefs, and streamlines scheduling—improving patient safety while saving 15-20 hours per week. 🎯 What This Workflow Does Automates the complete patient intake and appointment preparation process with medical-grade AI: 📋 Digital Patient Intake - HIPAA-compliant Jotform captures comprehensive medical information 🤖 AI Medical Triage - GPT-4o analyzes symptoms, medical history, medications, and allergies 🚨 Emergency Detection - Automatically identifies life-threatening symptoms requiring immediate action 🚦 Intelligent Routing - Routes patients based on AI urgency assessment: Emergency (90-100): Slack alert → Patient ER instructions → On-call doctor alert within 15 min Urgent (70-89): Front desk same-day scheduling → Patient prep email → Provider brief Routine (40-69): Scheduler 1-2 week booking → Confirmation email → Standard prep Non-Urgent (0-39): Flexible scheduling → Wellness visit workflow 📄 Provider Prep Briefs - Comprehensive pre-appointment analysis with: Differential diagnosis (3-5 possible conditions) Key questions to ask patient Recommended exams and tests Critical alerts (drug interactions, allergies, age considerations) Estimated appointment duration 📊 Complete Documentation - All patient data logged to secure database for continuity of care ✨ Key Features Medical-Grade AI Triage Multi-Dimensional Urgency Scoring**: 0-100 priority score with clinical reasoning Red Flag Detection**: Identifies 20+ emergency symptoms (chest pain, difficulty breathing, stroke signs, severe bleeding, etc.) Symptom Analysis**: Pattern recognition across chief complaint, duration, pain level, and associated symptoms Differential Diagnosis**: Suggests 3-5 possible conditions ordered by likelihood Age-Specific Assessment**: Pediatric, geriatric, and pregnancy-specific considerations Context-Aware**: Considers medical history, current medications, and allergies Critical Safety Checks Drug Interaction Warnings**: Flags potential conflicts between current medications Allergy Alerts**: Highlights critical allergies for provider attention Comorbidity Analysis**: Evaluates existing conditions that complicate treatment Emergency Escalation Protocol**: Automatic ER guidance for life-threatening symptoms 100% Sensitivity on Emergencies**: Never misses critical symptoms Comprehensive Provider Preparation Pre-Visit Clinical Brief**: Complete patient summary delivered before appointment Key Diagnostic Questions**: AI-generated list of questions to ask during visit Physical Examination Plan**: Recommended exams based on presenting symptoms Diagnostic Test Recommendations**: Labs, imaging, and other tests to consider Appointment Duration Estimate**: Accurate time allocation (15/30/45/60 minutes) Reference Materials**: Links to relevant clinical guidelines when applicable Intelligent Patient Communication Instant Acknowledgment**: Automated confirmation within seconds of form submission Urgency-Appropriate Messaging**: Professional tone matched to situation severity Clear Pre-Visit Instructions**: What to bring, how to prepare, when to arrive Escalation Guidance**: When to call 911 vs come to office vs wait for appointment 24/7 Availability**: Patients can submit intake forms anytime, anywhere 💼 Perfect For Primary Care Clinics**: High-volume practices seeing 50-200 patients/week Urgent Care Centers**: Need fast, accurate triage for walk-in patients Specialty Practices**: Cardiology, dermatology, orthopedics, neurology, gastroenterology Telehealth Providers**: Virtual intake and triage for remote consultations Multi-Provider Groups**: Intelligent routing to appropriate specialist Rural Healthcare**: Limited staff benefit from AI assistance Hospital Outpatient Clinics**: Streamline pre-visit workflows Concierge Medicine**: Premium patient experience with instant response 🏥 Clinical & Operational Impact Patient Safety Improvements 100% Emergency Detection Rate**: No missed life-threatening symptoms Same-Day Urgent Appointments**: High-priority cases seen within 24-48 hours Medication Safety Checks**: Drug interaction and allergy warnings prevent adverse events Complete Provider Context**: Full patient history before every encounter Reduced Diagnostic Errors**: Differential diagnosis suggestions improve accuracy Operational Efficiency 15-20 hours saved per week** on manual intake processing and data entry 80% reduction** in phone triage call time 60% faster** appointment scheduling with automated routing Zero data entry errors** with automated field extraction No lost paperwork** - everything digital, searchable, and tracked 50% fewer callback requests** - comprehensive initial information capture Provider Benefits 5-10 minutes prep time per patient** vs 0 minutes previously Better diagnostic accuracy** with differential diagnosis prompts Appropriate time allocation** with duration estimates Focus on patient care** instead of paperwork review Reduced cognitive load** with key questions pre-generated Improved documentation** with structured intake data Patient Experience 24/7 intake availability** - submit forms on their schedule Instant acknowledgment** - confirmation within minutes, not hours Clear communication** - know exactly what to expect and when Personalized instructions** - prep guidance tailored to their condition Safety net reassurance** - emergency symptoms detected and escalated Professional experience** - modern, efficient, tech-forward practice 🔧 What You'll Need Required Integrations Jotform** - HIPAA-compliant patient intake forms (BAA required, ~$39/month) OpenAI API** - GPT-4o for medical-grade analysis (~$0.05-0.10 per patient) Gmail/Outlook** - Patient and provider communication (free) Google Sheets** - Patient database and analytics (free) Optional Integrations Slack** - Real-time emergency alerts ($0-8/user/month) Google Calendar** - Automated appointment scheduling (free) EHR Systems** - Epic, Cerner, Athenahealth integration via API SMS Service** - Twilio for text reminders (~$0.01/message) Telehealth Platforms** - Zoom, Doxy.me auto-scheduling Insurance Verification** - Eligibility API for real-time checks
by riandra
Description This n8n template gives ecommerce brands a fully automated review intelligence system — running every morning to scrape, analyze, and report on what customers are actually saying across every platform. It uses MrScraper to collect reviews from Tokopedia, Shopee, Lazada, Bukalapak, Amazon, and more, then GPT-4o-mini to extract 15 brand intelligence signals per review including sentiment, emotion, viral risk, competitor mentions, and CS response suggestions. The result is a daily Brand Awareness Score (BAS) delivered to Slack, every review archived in Notion, and urgent alerts fired the moment a critical review is detected — before it goes viral. How It Works Phase 1 – Trigger & Config:** A Schedule Trigger fires daily at 6AM. The workflow reads your product list from Google Sheets — each row is one product SKU across any platform — and loops through them one by one. Phase 2 – Review URL Discovery:** For each product, the Map Agent crawls the product page and discovers review section URLs or paginated review pages. A smart fallback ensures reviews embedded directly on the product page (common on Tokopedia and Shopee) are still captured even when no separate review URL exists. Phase 3 – Review Extraction & Filtering:** Each review URL is processed by the General Agent, which extracts the full review text, star rating, reviewer name, date, photo count, helpful votes, verified purchase status, and any existing seller reply. Short reviews under 10 words are skipped — unless the rating is 1 or 2 stars, where even brief negative feedback is treated as a valuable signal. A deduplication hash is generated per review to prevent double-processing on re-runs. Phase 4 – AI Brand Sentiment Analysis:** Every valid review is sent to GPT-4o-mini with a structured prompt that returns 15 brand intelligence fields: sentiment label and score, emotion tags (frustration, delight, anger, loyalty, etc.), the most impacted product dimension (quality, delivery, packaging, pricing, authenticity), a CX score out of 10, competitor brand mentions, viral risk assessment, urgency level, and a ready-to-use customer service response suggestion. Phase 5 – Storage, Alerts & Daily Digest:** Reviews flagged as action-required trigger an immediate Slack alert to your #brand-alerts channel. Every review is saved to Notion with all 27 metadata fields. At the end of each run, a Daily Brand Health Digest is compiled — including the Brand Awareness Score (BAS) out of 100, sentiment breakdown, top praises and complaints, emotion trends, competitor mentions, viral risk list, and action items — then posted to your #brand-monitoring Slack channel. How to Set Up Prepare your Google Sheet with these columns: Platform, Product_URL, Brand_Name, SKU_Code, Category, Active (Y/N). Add one row per product SKU you want to monitor. Create 2 scrapers in your MrScraper account: Review List Scraper (Map Agent) — crawls product pages and discovers review URLs Review Detail Scraper (General Agent) — extracts review_text, rating, reviewer_name, review_date, photo_count, helpful_votes, verified_purchase, seller_reply Copy the scraperId for each and paste into the corresponding n8n nodes. Enable AI Scraper API access in your MrScraper account settings. Add your credentials in n8n: MrScraper API key OpenAI API key Slack OAuth Notion OAuth Google Sheets OAuth2 Configure your Notion database with all 27 properties listed in the setup note inside the workflow (Title, SKU Code, Platform, Sentiment, Sentiment Score, Emotion Tags, CX Score, Viral Risk, Action Required, and more). Update config values inside the Filter & Enrich Review Data node: slackChannel (e.g. #brand-monitoring) slackAlertChannel (e.g. #brand-alerts) notionDatabaseId (from your Notion database URL) competitorKeywords (comma-separated competitor brand names to detect) alertThreshold (default: 2.5 stars) Adjust the Map Agent node include/exclude patterns for your target platform (e.g. /review, /ulasan for Indonesian platforms, paginated review paths for Amazon). Set your trigger time in the Schedule node to match your timezone (default: every 24 hours). Requirements MrScraper** account with API access enabled OpenAI** account (GPT-4o-mini used by default) Slack** workspace with OAuth connected (two channels recommended: one for digest, one for urgent alerts) Notion** workspace with a database configured and OAuth connected Google Sheets** (OAuth2 connected) for your product SKU list Good to Know The Brand Awareness Score (BAS) is calculated using a weighted formula combining average star rating (30%), positive review rate (25%), sentiment score (20%), brand loyalty signals (10%), and a viral risk penalty (up to -10 points). It gives you a single number to track brand health over time. GPT-4o-mini processes each review for approximately $0.0001 — making it extremely cost-effective even for catalogs with hundreds of SKUs. Short 1–2 star reviews are never skipped, even if they contain only a few words — a single-word complaint like "rusak" (broken) or "fake" carries strong signal and is always analyzed. The workflow has two independent Slack outputs: an instant urgent alert fires the moment a critical review is detected, while the Daily Digest posts a full brand health report after all reviews for the day have been processed. Reviews from platforms where feedback is embedded on the product page (no separate review URL) are handled automatically via the smart URL fallback in the Extract Review URLs node. Customising This Workflow Multi-brand monitoring:** Duplicate the workflow and point it at a different Google Sheet to run separate brand intelligence pipelines for different product lines or client brands. Auto CS ticket creation:** Add a Jira, Asana, or Trello node after the Notion save step to automatically create customer service tickets for every critical or high-urgency review. Email reporting:** Insert a Gmail node after the Slack Daily Digest to also deliver the Brand Health Report as a formatted email to your marketing or brand team each morning. Visual dashboards:** Connect your Notion database to Google Looker Studio or Metabase to build BAS trend charts, sentiment heatmaps by platform, and weekly brand health dashboards. Competitor switch alerts:** Extend the competitor keyword list and add a dedicated Slack message branch specifically for reviews where a competitor switch is mentioned — a high-priority signal for retention teams.
by Rahul Joshi
📘 Description: This workflow automates developer Q&A handling by connecting GitHub, GPT-4o (Azure OpenAI), Notion, Google Sheets, and Slack. Whenever a developer comments on a pull request with a “how do I…” or “how to…” question, the workflow automatically detects the query, uses GPT-4o to generate a concise technical response, stores it in Notion for documentation, and instantly shares it on Slack for visibility. It reduces repetitive manual answering, boosts engineering knowledge sharing, and keeps teams informed with AI-powered insights. ⚙️ What This Workflow Does (Step-by-Step) 🟢 GitHub PR Comment Trigger — Starts the automation when a pull request comment is posted in a specified repository. Action: Listens for pull_request_review_comment events. Description: Captures comment text, author, PR number, and repository name as the trigger payload. 🔍 Validate GitHub Webhook Payload (IF Node) — Ensures the webhook data includes a valid comment URL. ✅ True Path: Continues to question detection. ❌ False Path: Sends invalid or missing data to Google Sheets for error logging. ❓ Detect Developer Question in PR Comment — Checks whether the comment includes question patterns such as “how do I…” or “how to…”. If a valid question is found, the workflow proceeds to the AI assistant; otherwise, it ends silently. 🧠 Configure GPT-4o Model (Azure OpenAI) — Connects to the GPT-4o model for intelligent language generation. Acts as the central AI engine to craft short, precise technical answers. 🤖 Generate AI Response for Developer Question — Sends the developer’s comment and PR context to GPT-4o. GPT analyzes the question and produces a short (2–3 line) helpful answer, maintaining professional and technical tone. 🧩 Extract GitHub Comment Metadata — Uses a JavaScript code node to structure key details (repo, user, comment, file path, PR number) into a clean JSON format. Prepares standardized data for storage and further use. 🧾 Save Comment Insight to Notion Database — Appends the GitHub comment, AI response, and metadata into a Notion database (“test db”). Acts as a centralized knowledge base for tracking and reusing AI-generated technical answers. 💬 Post AI Answer & PR Link to Slack — Sends the generated response and GitHub PR comment link to a Slack channel or user. Helps reviewers or teammates instantly view AI-generated suggestions and maintain active discussion threads. 🚨 Log Errors in Google Sheets (Error Handling) — Logs webhook validation or AI-processing errors into a shared Google Sheet (“error log sheet”). Ensures full visibility into workflow issues for future debugging. 🧩 Prerequisites GitHub OAuth credentials with webhook access Azure OpenAI (GPT-4o) account Notion API integration for the documentation database Slack API connection for notifications Google Sheets API access for error tracking 💡 Key Benefits ✅ Automated detection of developer questions in GitHub comments ✅ AI-generated instant answers with context awareness ✅ Centralized documentation in Notion for knowledge reuse ✅ Real-time Slack notifications for visibility and collaboration ✅ Continuous error logging for transparent troubleshooting 👥 Perfect For Developer teams using GitHub for code reviews Engineering leads wanting AI-assisted PR support Companies aiming to build self-learning documentation Teams using Notion and Slack for workflow visibility
by moosa
This n8n workflow automates the process of fetching, processing, and storing tech news articles from RSS feeds into a Notion database. It retrieves articles from The Verge and TechCrunch, processes them to avoid duplicates, extracts full article content, generates summaries using an LLM, and stores the data in Notion. The workflow is designed to run on a schedule or manually for testing, with sticky notes providing clear documentation for each step. Data in notion Workflow Overview Triggers**: Manual Trigger: Used for testing the workflow (When clicking ‘Execute workflow’). Schedule Trigger: Runs daily at 11 AM to fetch new articles (Schedule Trigger, disabled). Fetch Feeds**: Pulls RSS feeds from The Verge (The Verge) and TechCrunch (TechCrunch). Hash Creation**: Generates a SHA256 hash of each article’s URL (Crypto, Crypto1) to identify unique articles efficiently. Loop Over Articles**: Processes articles in batches (Loop Over Items, Loop Over Items1) to handle multiple articles from each feed. Duplicate Check**: Queries the Notion database (Get many database pages, Get many database pages1) to check if an article’s hash exists. If it does, the article is skipped (If, If1). Fetch Full Article**: If the article is new, retrieves the full article content via HTTP request (HTTP Request, HTTP Request1). Extract Content**: Extracts paragraph content from the article HTML (HTML, HTML1) using specific CSS selectors (.duet--article--article-body-component p for The Verge, .entry-content p for TechCrunch). Clean Data**: JavaScript code (Code in JavaScript, Code in JavaScript1) processes the extracted content by removing empty paragraphs, links, and excessive whitespace, then joins paragraphs into a single string. Summarize Article**: Uses an OpenAI model (OpenAI Chat Model, OpenAI Chat Model1) with a LangChain node (Basic LLM Chain, Basic LLM Chain1) to generate a concise summary (max 1500 characters) in plain text, focusing on main arguments or updates. Store in Notion**: Creates a new page in a Notion database (Create a database page, Create a database page1) with fields for title, summary, date, hash, URL, source, digest status, and full article text. Credentials**: Uses Notion API and OpenAI API credentials, ensuring no hardcoded API keys in HTTP nodes. Notes This workflow is for learning purpose only.
by Cheng Siong Chin
HOW IT WORKS : Automates SaaS operations by consolidating user management, AI-driven support triage, analytics, and billing into one unified system. User signups flow through registration, support requests route via OpenAI prioritization, billing events trigger confirmations, and daily analytics feed dashboards. The AI Business Logic layer orchestrates real-time decisions, enriches data, and triggers Gmail notifications. Four data streams converge into centralized routing for customer onboarding, ticket triage, metrics aggregation, and revenue automation. SETUP STEPS: Add OpenAI API credentials for chat model routing Authenticate Gmail with app password for notifications Connect Data Table Tool for user/support/billing storage Configure workflow settings: priority thresholds and routing rules Test each branch with sample data to verify all integrations PREREQUISITES: OpenAI API key, Gmail account with app password, MCP Server access, Data Table Tool credentials USE CASES: Manage SaaS customer lifecycle end-to-end; Route critical support instantly CUSTOMIZATION: Extend AI prompts for different support categories; add Slack/Teams notifications BENEFITS: Reduces manual overhead 70%, routes tickets 10x faster, centralizes customer data