by DIGITAL BIZ TECH
# AI Product Catalog Chatbot with Google Drive Ingestion & Supabase RAG

## Overview

This workflow builds a dual system that connects automated document ingestion with a live product catalog chatbot powered by Mistral AI and Supabase. It includes:

- **Ingestion Pipeline:** Automatically fetches JSON files from Google Drive, processes their content, and stores vector embeddings in Supabase.
- **Chatbot:** An AI agent that queries the Supabase vector store (RAG) to answer user questions about the product catalog.

It uses Mistral AI for chat intelligence and embeddings, and Supabase for vector storage and semantic product search.

## Chatbot Flow

- **Trigger:** When chat message received, or Webhook (from a live website)
- **Model:** Mistral Cloud Chat Model (mistral-medium-latest)
- **Memory:** Simple Memory (Buffer Window) — keeps the last 15 messages for conversational context
- **Vector Search Tool:** Supabase Vector Store
- **Embeddings:** Mistral Cloud
- **Agent:** product catalog agent
  - Responds to user queries using the products table in Supabase.
  - Searches vectors for relevant items and returns structured product details (name, specs, images, and links).
  - Maintains chat session history for natural follow-up questions.

## Document → Knowledge Base Pipeline

Triggered manually (Execute workflow) to populate or refresh the Supabase vector store.

### Steps

1. **Google Drive (List Files)** → Fetch all files from the configured Google Drive folder.
2. **Loop Over Items** → For each file:
   - **Google Drive (Get File)** → Download the JSON document.
   - **Extract from File** → Parse and read the raw JSON content.
   - **Map Data into Fields (Set node)** → Clean and normalize JSON keys (e.g., page_title, comprehensive_summary, key_topics).
   - **Convert Data into Chunks (Code node)** → Merge text fields such as summary and markdown, split the content into overlapping 2,000-character chunks, and add metadata such as title, URL, and chunk index.
   - **Embeddings (Mistral Cloud)** → Generate vector embeddings for each text chunk.
   - **Insert into Supabase Vectorstore** → Save chunks + embeddings into the website_mark table.
   - **Wait** → Pause for 30 seconds before the next file to respect rate limits.

## Integrations Used

| Service | Purpose | Credential |
|----------|----------|------------|
| Google Drive | File source for catalog JSON documents | Google Drive account dbt |
| Mistral AI | Chat model & embeddings | Mistral Cloud account dbt |
| Supabase | Vector storage & RAG search | Supabase DB account dbt |
| Webhook / Chat | User-facing interface for chatbot | Website or Webhook |

## Sample JSON Data Format (for Ingestion)

The ingestion pipeline expects structured JSON product files, which can cover different categories such as Apparel or Tools.

**Apparel Example (T-Shirts)**

```json
[
  {
    "Name": "Classic Crewneck T-Shirt",
    "Item Number": "A-TSH-NVY-M",
    "Image URL": "https://www.example.com/images/tshirt-navy.jpg",
    "Image Markdown": "",
    "Size Chart URL": "https://www.example.com/charts/tshirt-sizing",
    "Materials": "100% Pima Cotton",
    "Color": "Navy Blue",
    "Size": "M",
    "Fit": "Regular Fit",
    "Collection": "Core Essentials"
  }
]
```

**Tools Example (Drill Bits)**

```json
[
  {
    "Name": "Titanium Drill Bit, 1/4\"",
    "Item Number": "T-DB-TIN-250",
    "Image URL": "https://www.example.com/images/drill-bit-1-4.jpg",
    "Image Markdown": "",
    "Spec Sheet URL": "https://www.example.com/specs/T-DB-TIN-250",
    "Materials": "HSS with Titanium Coating",
    "Type": "Twist Drill Bit",
    "Size (in)": "1/4",
    "Shank Type": "Hex",
    "Application": "Metal, Wood, Plastic"
  }
]
```

## Agent System Prompt Summary

> "You are an AI product catalog assistant. Use only the Supabase vector database as your knowledge base. Provide accurate, structured responses with clear formatting — including product names, attributes, and URLs. If data is unavailable, reply politely: 'I couldn't find that product in the catalog.'"

## Key Features

- Automated JSON ingestion from Google Drive → Supabase
- Intelligent text chunking and metadata mapping
- Dual-workflow architecture (Ingestion + Chatbot)
- Live conversational product search via RAG
- Supports both embedded chat and webhook channels

## Summary

> A powerful end-to-end workflow that transforms your product data into a searchable, AI-ready knowledge base, enabling real-time product Q&A through a Mistral-powered chatbot. Perfect for eCommerce teams, distributors, or B2B companies managing large product catalogs.

## Need Help or More Workflows?

Want to customize this workflow for your business or integrate it with your tools? Our team at Digital Biz Tech can tailor it precisely to your use case — from automation pipelines to AI-powered product discovery.

💡 We can help you set it up for free — from connecting credentials to deploying it live.

- Contact: shilpa.raju@digitalbiz.tech
- Website: https://www.digitalbiz.tech
- LinkedIn: https://www.linkedin.com/company/digital-biz-tech/

You can also DM us on LinkedIn for any help.
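For readers adapting the ingestion pipeline, here is a minimal sketch of what the "Convert Data into Chunks" Code node does. The field names beyond page_title/comprehensive_summary and the 200-character overlap are assumptions; the template only fixes the 2,000-character chunk size.

```javascript
// Sketch of the "Convert Data into Chunks" Code node.
// Assumptions: the markdown/url field names and the 200-character
// overlap are illustrative only.
const CHUNK_SIZE = 2000;
const OVERLAP = 200; // assumed; tune to your embedding model

function chunkDocument(doc) {
  // Merge the text fields (summary + markdown) into one string
  const text = [doc.comprehensive_summary, doc.markdown]
    .filter(Boolean)
    .join('\n\n');

  // Split into overlapping chunks, attaching metadata to each
  const chunks = [];
  for (let start = 0, i = 0; start < text.length; start += CHUNK_SIZE - OVERLAP, i++) {
    chunks.push({
      pageContent: text.slice(start, start + CHUNK_SIZE),
      metadata: { title: doc.page_title, url: doc.url, chunk_index: i },
    });
  }
  return chunks;
}

// In an n8n Code node you would return one item per chunk, e.g.:
// return items.flatMap(item =>
//   chunkDocument(item.json).map(c => ({ json: c })));
```

Each chunk then flows into the Mistral embeddings node and the Supabase insert.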
by Parag Javale
Turn a simple email workflow into a LinkedIn content machine. Generate post ideas, draft full posts, and auto-publish to LinkedIn, all controlled by replying to emails.

## 📌 Purpose

Automate your LinkedIn posting pipeline using AI + email approvals:

1. Generate 10 scroll-stopping post ideas tailored to your niche & audience.
2. Approve your favorite by replying to the email with a number.
3. Receive 3 AI-written drafts for the chosen idea.
4. Pick your favorite draft via email reply.
5. The selected post gets auto-published to LinkedIn ✅.
6. All steps are logged in Google Sheets.

## 🔗 Apps Used

- **Google Gemini** → generates ideas & drafts
- **Gmail** → email-based approval workflow
- **Google Sheets** → tracks ideas, drafts, and published posts
- **LinkedIn API** → posts directly to your company or personal account

## ✨ Highlights

- 📬 Email-based approval → no dashboards, just reply with a number
- 📝 10 AI-generated content ideas + 3 full drafts per topic
- 🔄 End-to-end tracking in Google Sheets (ideas → drafts → published)
- ⚡ Auto-posting directly to LinkedIn
- ✅ Final confirmation email with preview

## 👤 Best For

- Startup founders
- Agencies managing multiple clients' LinkedIn
- Solopreneurs & creators who want consistent posting

## 🛠️ Workflow Overview

```mermaid
flowchart TB
    A["Manual Trigger"] --> B["AI Agent - Generate 10 Ideas"]
    B --> C["Code - Parse JSON + Correlation ID"]
    C --> D["Google Sheets - Append Ideas"]
    D --> E["Gmail - Send Ideas Email"]
    E --> F["Gmail Trigger - Await Reply"]
    F --> G["Code1 - Extract Reply Number"]
    G --> H["Google Sheets - Fetch Row"]
    H --> I{"Switch Stage"}
    I -- Ideas --> J["AI Agent - Generate 3 Drafts"]
    J --> K["Code3 - Parse Drafts"]
    K --> L["Google Sheets - Update Drafts"]
    L --> M["Gmail - Send Drafts Email"]
    I -- Drafts --> N["Code4 - Select Final Draft"]
    N --> O["LinkedIn - Publish Post"]
    O --> P["Google Sheets - Update Posted"]
    P --> Q["Gmail - Send Confirmation"]
```
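As a rough sketch (not the template's actual code), the "Code1 - Extract Reply Number" step could pull the chosen number out of a Gmail reply like this; the quoting heuristics and the 1-10 range are assumptions.

```javascript
// Hypothetical sketch of "Code1 - Extract Reply Number".
// Assumes ideas are numbered 1-10 and quoted lines start with ">".
function extractReplyNumber(body) {
  // Keep only the user's own text: drop quoted lines and anything
  // after the "On ... wrote:" separator that Gmail inserts.
  const ownText = body
    .split(/\r?\n/)
    .filter(line => !line.trim().startsWith('>'))
    .join('\n')
    .split(/On .+ wrote:/)[0];

  const match = ownText.match(/\b(10|[1-9])\b/);
  return match ? Number(match[1]) : null;
}
```

Returning `null` when no number is found lets a downstream IF node ask the user to reply again instead of silently picking an idea.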
by Intuz
This n8n template from Intuz provides a complete solution to automate on-demand lead generation. It acts as a powerful scraping agent that takes a simple chat query, scours both Google Search and Google Maps for relevant businesses, scrapes their websites for contact details, and compiles an enriched lead list directly in Google Sheets.

## Who's this workflow for?

- Sales Development Representatives (SDRs)
- Local Marketing Agencies
- Business Development Teams
- Freelancers & Consultants
- Market Researchers

## How it works

1. **Start with a Chat Query:** The user initiates the workflow by typing a search query (e.g., "dentists in New York") into a chat interface.
2. **Multi-Source Search:** The workflow queries both the Google Custom Search API (for web results across multiple pages) and scrapes Google Maps (for local businesses) to gather a broad list of potential leads.
3. **Deep Dive Website Scraping:** For each unique business website found, the workflow visits the URL to scrape the raw HTML content of the page.
4. **Intelligent Contact Extraction:** Using custom code, it then parses the scraped website content to find and extract valuable contact information like email addresses, phone numbers, and social media links.
5. **Deduplicate and Log to Sheets:** Before saving, the workflow checks your Google Sheet to ensure the lead doesn't already exist. All unique, newly enriched leads are then appended as clean rows to your sheet, along with the original search query for tracking.

## Key Requirements to Use This Template

1. **n8n Instance & Required Nodes:** An active n8n account (Cloud or self-hosted). This workflow uses the official n8n LangChain integration (@n8n/n8n-nodes-langchain) for the chat trigger. If you are using a self-hosted version of n8n, please ensure this package is installed.
2. **Google Custom Search API:** A Google Cloud Project with the "Custom Search API" enabled. You will need an API Key for this service. You must also create a Programmable Search Engine and get its Search engine ID (cx). This tells Google what to search (e.g., the whole web).
3. **Google Sheets Account:** A Google account and a pre-made Google Sheet with columns for Business Name, Primary Email, Contact Number, URL, Description, Socials, and Search Query.

## Setup Instructions

1. **Configure the Chat Trigger:** In the "When chat message received" node, you can find the Direct URL or Embed code to use the chat interface.
2. **Set Up Google Custom Search API (Crucial Step):** Go to the "Custom Google Search API" (HTTP Request) node. Under "Query Parameters", you must replace the placeholder values for key (with your API Key) and cx (with your Search Engine ID).
3. **Configure Google Sheets:** In all Google Sheets nodes (Append row in sheet, Get row(s) in sheet, etc.), connect your Google Sheets credentials. Select your target spreadsheet (Document ID) and the specific sheet (Sheet Name) where you want to store the leads.
4. **Activate the Workflow:** Save the workflow and toggle the "Active" switch to ON. Open the chat URL and enter a search query to start generating leads.

## Connect with us

- Website: https://www.intuz.com/services
- Email: getstarted@intuz.com
- LinkedIn: https://www.linkedin.com/company/intuz
- Get Started: https://n8n.partnerlinks.io/intuz

For custom workflow automation, click here: Get Started
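To illustrate step 4 (Intelligent Contact Extraction), here is a minimal sketch of the kind of custom code involved. The template's actual patterns are not published; these regexes are assumptions and will miss obfuscated or JavaScript-rendered contact details.

```javascript
// Illustrative sketch of the contact-extraction step over raw HTML.
// The regexes below are assumptions, not the template's exact code.
function extractContacts(html) {
  const emails = [...new Set(
    html.match(/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g) || []
  )].filter(e => !/\.(png|jpe?g|gif|svg|webp)$/i.test(e)); // drop names like logo@2x.png

  const phones = [...new Set(html.match(/\+?\d[\d\s().-]{7,}\d/g) || [])];

  const socials = [...new Set(
    html.match(/https?:\/\/(?:www\.)?(?:linkedin|facebook|twitter|instagram)\.com\/[^\s"'<>]+/g) || []
  )];

  return { emails, phones, socials };
}
```

Deduplicating with `Set` keeps each sheet row clean before the workflow's own Google Sheets duplicate check runs.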
by dongou
Fetch user-specific research papers from arXiv on a daily schedule, process and structure the data, and create or update entries in a Notion database, with support for data delivery.

- **Paper Topic:** single query keyword
- **Update Frequency:** daily updates, with fewer than 20 entries expected per day
- **Tools:**
  - Platform: n8n, for end-to-end workflow configuration
  - AI Model: Gemini-2.5-Flash, for daily paper summarization and data processing
  - Database: Notion, with two tables — Daily Paper Summary and Paper Details
  - Message: Feishu (IM bot notifications), Gmail (email notifications)

## 1. Data Retrieval

### arXiv API

arXiv provides a public API that allows users to query research papers by topic or by predefined categories (see the arXiv API User Manual).

Key notes:

- **Response Format:** The API returns data as a typical Atom response.
- **Timezone & Update Frequency:** The arXiv submission process operates on a 24-hour cycle. Newly submitted articles become available in the API only at midnight after they have been processed. Feeds are updated daily at midnight Eastern Standard Time (EST), so a single request per day is sufficient.
- **Request Limits:** The maximum number of results per call (max_results) is 30,000. Results must be retrieved in slices of at most 2,000 at a time, using the max_results and start query parameters.
- **Time Format:** The expected format is [YYYYMMDDTTTT+TO+YYYYMMDDTTTT], where TTTT is 24-hour time to the minute, in GMT.

### Scheduled Task

- **Execution Frequency:** daily
- **Execution Time:** 6:00 AM
- **Time Parameter Handling (JS):** According to arXiv's update rules, the scheduled task should query the previous day's (T-1) submittedDate data.

## 2. Data Extraction

### Data Cleaning Rules (Convert to Standard JSON)

- **Remove Header:** keep only the 【entry】【/entry】 blocks representing paper items.
- **Single Item:** each 【entry】【/entry】 represents a single item.

### Field Processing Rules

- 【id】【/id】 ➡️ id: extract content. Example: 【id】http://arxiv.org/abs/2409.06062v1【/id】 → http://arxiv.org/abs/2409.06062v1
- 【updated】【/updated】 ➡️ updated: convert timestamp to yyyy-mm-dd hh:mm:ss
- 【published】【/published】 ➡️ published: convert timestamp to yyyy-mm-dd hh:mm:ss
- 【title】【/title】 ➡️ title: extract text content
- 【summary】【/summary】 ➡️ summary: keep text, remove line breaks
- 【author】【/author】 ➡️ author: combine all authors into an array, e.g. ["Ernest Pusateri", "Anmol Walia"] (for a Notion multi-select field)
- 【arxiv:comment】【/arxiv:comment】 ➡️ ignore / discard
- 【link type="text/html"】 ➡️ html_url: extract URL
- 【link type="application/pdf"】 ➡️ pdf_url: extract URL
- 【arxiv:primary_category term="cs.CL"】 ➡️ primary_category: extract the term value
- 【category】 ➡️ category: merge all 【category】 values into an array, e.g. ["eess.AS", "cs.SD"] (for a Notion multi-select field)
- Add empty fields: github, huggingface

## 3. Data Processing

Analyze and summarize paper data using AI, then standardize the output as JSON:

- Single-paper basic information analysis and enhancement
- Daily paper summary and multilingual translation

## 4. Data Storage: Notion Database

1. Create a corresponding database in Notion with the same predefined field names.
2. In Notion, create an integration under Integrations and grant it access to the database. Obtain the corresponding Secret Key.
3. Use the Notion "Create a database page" node to configure the field mapping and store the data.

Notes:

- **"Create a database page"** only adds new entries; existing data will not be updated.
- The updated and published timestamps of arXiv papers are in UTC.
- Notion single-select and multi-select fields only accept arrays; they do not automatically parse comma-separated strings, so you need to format them as proper arrays.
- Notion does not accept null values, which causes a 400 error.

## 5. Data Delivery

Set up two channels for message delivery, EMAIL and IM, and define the message format and content.

### Email: Gmail

See the Gmail OAuth 2.0 official documentation ("Configure your OAuth consent screen"). Steps:

1. Enable the Gmail API
2. Create the OAuth consent screen
3. Create OAuth client credentials
4. Audience: add Test users while in Testing status

Message format: HTML (Model: OpenAI GPT, used to design an HTML email template)

### IM: Feishu (Lark)

See "Bots in groups" / "Use bots in groups" in the Feishu documentation.
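The time-parameter handling described in section 1 can be sketched as a small helper that builds the previous-day (T-1) submittedDate query in GMT. The search field (all:) and the paging values are assumptions; the range format follows the arXiv API manual.

```javascript
// Sketch of the scheduled task's T-1 query construction.
// Assumptions: searching the "all:" field; start/max_results paging
// values. The submittedDate format follows the arXiv API manual.
function buildArxivUrl(keyword, now = new Date()) {
  const tMinus1 = new Date(now.getTime() - 24 * 60 * 60 * 1000); // previous day, GMT
  const ymd = tMinus1.toISOString().slice(0, 10).replace(/-/g, ''); // YYYYMMDD
  const range = `[${ymd}0000+TO+${ymd}2359]`; // YYYYMMDDTTTT window, 24-hour GMT

  const query = `all:${encodeURIComponent(keyword)}+AND+submittedDate:${range}`;
  // Results must be paged in slices of at most 2,000 via start/max_results
  return `http://export.arxiv.org/api/query?search_query=${query}` +
         `&start=0&max_results=2000`;
}
```

With fewer than 20 expected entries per day, a single page of results is enough; otherwise increment `start` by 2,000 per request.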
by Athanasios
# AI Interior Design Assistant: Your Digital Design Partner

## What This System Does

This n8n workflow transforms your Telegram into a professional interior design studio powered by artificial intelligence. Send a photo of furniture or a room space, and watch as the system intelligently catalogs items, documents spaces, and generates stunning custom interior designs tailored to your vision.

## The Magic Behind the Scenes

### Smart Image Recognition

When you upload a photo, the system immediately springs into action with sophisticated image analysis:

- **Furniture Detection:** Spots individual pieces like sofas, chairs, tables, and lamps with catalog-precision accuracy
- **Room Analysis:** Identifies complete spaces, architectural features, lighting conditions, and existing design elements
- **Style Classification:** Determines design styles from modern minimalist to traditional classic
- **Material Recognition:** Identifies wood types, fabric textures, metal finishes, and color palettes

### Intelligent Database Management

The workflow maintains three interconnected databases that work like a professional design firm's catalog system:

**Furniture Catalog (catalog_products)**
- Comprehensive product details including style, materials, dimensions, and compatibility
- Professional descriptions written for interior designers
- Searchable tags for quick design matching
- High-quality image storage for visual reference

**Room Documentation (rooms)**
- Detailed space analysis including size, style, and architectural features
- Color palette documentation and lighting assessment
- Existing furniture inventory for design planning
- Room-specific design recommendations

**Design Portfolio (ai_generated_images)**
- Archive of all AI-generated interior designs
- Original prompts and design descriptions
- Searchable by style, room type, or specific elements
- Ready for client presentations or further modifications

### AI-Powered Design Generation

The system's crown jewel is its ability to create stunning interior visualizations:

- **Contextual Understanding:** Combines room characteristics with catalog products to create realistic design scenarios
- **Professional Prompting:** Generates detailed, interior-design-specific prompts that result in high-quality, commercially viable designs
- **Style Consistency:** Maintains design coherence across different elements while respecting user preferences
- **Modification Capabilities:** Can reference and modify previous designs, allowing for iterative improvements

## The User Experience Journey

### Scenario 1: Building Your Furniture Catalog
1. **Upload:** Send photos of furniture pieces via Telegram
2. **Analysis:** AI examines each piece, identifying style, materials, dimensions, and design era
3. **Cataloging:** Items are professionally documented with searchable metadata
4. **Confirmation:** Receive detailed catalog entries for each piece

### Scenario 2: Documenting Your Spaces
1. **Room Photos:** Share images of your living spaces
2. **Space Analysis:** AI assesses room size, style, lighting, and architectural features
3. **Documentation:** Complete room profiles are created for design planning
4. **Inventory:** Existing furniture and design elements are noted

### Scenario 3: Creating Custom Designs
1. **Design Request:** Ask for specific interior modifications or new layouts
2. **Smart Matching:** System pulls relevant items from your catalog and room data
3. **AI Generation:** Gemini 2.5 Flash creates photorealistic interior designs
4. **Instant Delivery:** Receive professional-quality visualizations via Telegram

### Scenario 4: Design Evolution
1. **Reference Previous Work:** Mention earlier designs you want to modify
2. **Contextual Modification:** AI understands your reference and applies new changes
3. **Enhanced Generation:** Creates updated designs building on previous concepts
4. **Continuous Improvement:** Iterate until the design matches your vision

## Technical Sophistication

### Multi-AI Coordination
- **OpenAI GPT-4:** Handles complex reasoning, database operations, and user interaction
- **Google Gemini 2.5 Flash:** Specializes in high-quality image generation with interior design expertise
- **Intelligent Routing:** Automatically determines whether to catalog, document, or generate based on context

### Professional Data Structure
The database schema reflects real interior design workflows:
- Industry-standard categorization systems
- Professional terminology and measurements
- Design compatibility matrices
- Style and era classifications used by actual designers

### Seamless Integration
- **Telegram Interface:** No app downloads or complex interfaces, just send photos and text
- **Cloud Storage:** All images stored in Supabase with public URLs for easy access
- **Real-time Processing:** Immediate feedback and rapid design generation
- **Persistent Memory:** Everything is saved and searchable for future reference

## Why This Matters

This workflow bridges the gap between professional interior design tools and accessible consumer technology. It provides:

- **For Design Professionals:** A powerful cataloging and visualization tool that streamlines client presentations and design iteration
- **For Homeowners:** Professional-level design capability without the cost or complexity of traditional design software
- **For Businesses:** A scalable solution for furniture visualization, space planning, and customer engagement

## The Innovation Factor

Unlike simple design apps that work with generic templates, this system:

- Learns your specific furniture and spaces
- Maintains design continuity across projects
- Provides professional-quality outputs
- Scales from single rooms to complete home designs
- Integrates seamlessly into your daily communication workflow

The result is a design assistant that feels less like software and more like having a professional interior designer available 24/7 through your phone.
## Future Possibilities

This foundation supports expansion into:

- Room dimension calculations and space optimization
- Integration with furniture retailers for purchase links
- 3D room modeling and virtual reality previews
- Style preference learning and automated suggestions
- Multi-user collaboration for design teams

The workflow represents a new paradigm where AI doesn't replace human creativity but amplifies it, making professional design capabilities accessible to anyone with a smartphone and an imagination.
by Automate With Marc
# Viral Marketing Reel & Autopost with Sora2 + Blotato

Create funny, ultra-realistic marketing reels on autopilot using n8n, Sora2, Blotato, and OpenAI. This beginner-friendly template generates a comedic video prompt, creates a 12-second Sora2 video, writes a caption, and auto-posts to Instagram/TikTok, all on a schedule.

🎥 Watch the full step-by-step tutorial: https://www.youtube.com/watch?v=lKZknEzhivo

## What this template does

This workflow automates an entire short-form content production pipeline:

1. **Scheduled Trigger:** Runs automatically at your chosen time (e.g., every evening at 7PM).
2. **AI "Video Prompt Agent":** Creates a cinematic, funny, 12-second Sora2 text-to-video prompt designed to promote a product (default: Sally's Coffee).
3. **Insert Row (Data Table):** Logs each generated video prompt for tracking, reuse, or inspiration.
4. **Sora2 (via Wavespeed):** Sends a POST request to generate a video, waits 30 seconds, then polls the prediction endpoint until the video is completed.
5. **Blotato Integration:** Uploads the finished video to your connected social account(s) and automatically publishes or schedules the post.
6. **Caption Generator:** Uses an AI agent to create an Instagram/TikTok-ready caption with relevant hashtags.

This turns n8n into a hands-free comedic marketing engine that writes, creates, and posts content for you.

## Why it's useful

- Create daily or weekly marketing reels without filming, editing, or writing scripts.
- Experiment with new comedic formats, hooks, and product placements in seconds.
- Perfect for small businesses, agencies, creators, and social media managers.
- Demonstrates how to combine AI agents + Sora2 + polling + external posting services inside one workflow.

## Requirements

Before running this template, configure:

- OpenAI API Key (for the prompt agent & caption model)
- Wavespeed / Sora2 API credentials
- Blotato account connected to Instagram/TikTok (for posting)
- n8n Data Table (optional, or replace with your own)

⚠️ All credentials must be added manually after import. No real credentials are included in the template.

## How it works

1. **Schedule Trigger:** Runs at a fixed time or interval.
2. **Video Prompt Agent (LangChain Agent):** Generates a cinematic, realistic comedic video idea. Built with a detailed system prompt that ensures brand integration (e.g., Sally's Coffee) happens naturally.
3. **Insert Row (Data Table):** Logs each generated prompt so future videos can be referenced or reused.
4. **Sora2 POST Request:** Sends the generated prompt to Sora2 via Wavespeed's /text-to-video endpoint.
5. **Wait 30s + GET Sora2 Result:** Polls the result until data.status === "completed"; continues looping while still "processing".
6. **Upload Media (Blotato):** Uploads the finished video file.
7. **Caption Generator:** Creates a funny, platform-ready Instagram/TikTok caption with hashtags.
8. **Create Post (Blotato):** Publishes (or schedules) the video + caption.

## Setup Instructions (Step-by-Step)

1. Import the template into n8n.
2. Open the Video Prompt Agent → review or customize the brand name, style, and humor tone.
3. Add your OpenAI API credentials (for prompt generation and caption generation).
4. Add your Wavespeed/Sora2 credentials to the POST and GET nodes.
5. Connect your Blotato credential for uploading and posting.
6. (Optional) Replace the Data Table ID with your own table.
7. Adjust the Schedule Trigger time to your desired posting schedule.
8. Run once manually to confirm: the prompt is generated, the video is created, the caption is written, and the video uploads successfully.
9. Enable the workflow → your daily/weekly comedic autoposter is live.

## Customization Ideas

- Change the brand from Sally's Coffee to any business, product, or influencer brand.
- Modify the prompt agent to enforce specific camera styles, settings, or comedic tones.
- Swap posting destinations: Blotato supports multiple networks — configure IG/TikTok/Facebook/YouTube Shorts.
- Add approval steps: insert a Slack/Telegram "Approve before posting" step.
- Add analytics logging: store video URLs, captions, and AI cost estimates.
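The Wait-30s-plus-GET polling pattern can be sketched as plain code. The `data.status` values follow the description above; the "failed" status and the 20-try cap are assumptions (the template itself implements this loop with Wait and IF nodes).

```javascript
// Sketch of the polling loop around the Wavespeed/Sora2 prediction
// endpoint. The "failed" status and maxTries cap are assumptions.
async function pollUntilComplete(predictionUrl, apiKey,
                                 { intervalMs = 30000, maxTries = 20 } = {}) {
  for (let attempt = 0; attempt < maxTries; attempt++) {
    await new Promise(resolve => setTimeout(resolve, intervalMs)); // Wait 30s
    const res = await fetch(predictionUrl, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const body = await res.json();
    if (body.data.status === 'completed') return body.data; // video ready
    if (body.data.status === 'failed') throw new Error('Sora2 generation failed');
    // still "processing" -> loop again
  }
  throw new Error('Timed out waiting for the Sora2 video');
}
```

Capping the retries matters: without it, a stuck prediction would leave the workflow polling forever instead of failing loudly.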
## Troubleshooting

- **Sora video stuck in processing:** Increase the wait time or add another polling loop.
- **Upload fails:** Ensure the media URL exists and the Blotato account has posting permissions.
- **Caption empty:** Reconnect the OpenAI credential or check model availability.
- **Posting fails:** Confirm your Blotato API key is valid and linked to a connected account.

---

- **Category:** Marketing, AI Video, Social Media Automation
- **Difficulty:** Beginner–Intermediate
- **Core Nodes:** LangChain Agent, HTTP Request, Wait, Data Table, Blotato, OpenAI
- **Includes:** System prompts, polling logic, caption generator, posting workflow
by Mychel Garzon
# AI-Powered CV Feedback & Fit Score

This workflow uses AI to automatically analyze a candidate's CV against any job posting. It extracts key skills, requirements, and gaps, then generates a clear fit summary, recommendations, and optimization tips. Candidates also receive a structured email report, helping them improve their CV and focus on the right roles. No more guesswork: the workflow delivers objective, AI-powered career insights in minutes.

## Benefits

- **Automated CV analysis:** Instantly compare your CV with any job description.
- **Clear recommendations:** Get a fit score (1–10) plus "Apply," "Consider," or "Not a fit."
- **Actionable feedback:** See missing skills and concrete optimization tips.
- **Email reports:** Candidates receive a professional summary directly in their inbox.

## Target Audience

- Job seekers
- Career coaches and recruiters
- HR teams evaluating candidate job alignment
- Tech bootcamps and training programs

## Required APIs

- Google Gemini API (AI analysis)
- Email credentials (send candidate reports)

## Easy Customization

- **Fit score logic:** Adjust thresholds for "Apply," "Consider," and "Not a fit."
- **Email templates:** Personalize branding, tone, or add follow-up resources.
- **Delivery channels:** Add Slack, Teams, or WhatsApp nodes for real-time feedback.
- **Language detection:** Extend to more languages by adding translation nodes.
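The fit-score logic mentioned under customization can be sketched as a single routing function. The default cutoffs of 8 and 5 are assumptions, since the template deliberately leaves the thresholds up to you.

```javascript
// Sketch of the fit-score routing on the 1-10 scale.
// The 8/5 cutoffs are assumed defaults; tune them to your needs.
function recommend(fitScore, { applyAt = 8, considerAt = 5 } = {}) {
  if (fitScore >= applyAt) return 'Apply';
  if (fitScore >= considerAt) return 'Consider';
  return 'Not a fit';
}
```

Passing the thresholds as options keeps the cutoffs in one place when you adjust them for stricter or more lenient screening.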
by Rahul Joshi
## Description

Keep your API documentation accurate and reliable with this n8n automation template. The workflow automatically tests your FAQ content related to authentication and rate limits, evaluating each answer using Azure OpenAI GPT-4o-mini for completeness, edge-case coverage, and technical clarity. It logs all results to Google Sheets, scores FAQs from 0–10, and sends Slack alerts when low-quality answers are detected.

Ideal for API teams, developer relations managers, and technical writers who want to maintain high-quality documentation with zero manual review effort.

## ✅ What This Template Does (Step-by-Step)

- ▶️ **Manual Trigger or On-Demand Run:** Start the evaluation anytime you update your FAQs, perfect for regression testing before documentation releases.
- 📖 **Fetch FAQ Q&A from Google Sheets:** Reads FAQ questions and answers from your designated test sheet (columns A:B). Each Q&A pair becomes a test case for AI evaluation.
- 🤖 **AI Evaluation via GPT-4o-mini:** Uses Azure OpenAI GPT-4o-mini to evaluate how well each FAQ covers critical aspects of API authentication and rate limiting. The AI provides a numeric score (0–10) and a short explanation.
- 🔍 **Parse & Format AI Results:** Extracts structured JSON data (Question, Score, Explanation, Timestamp) and prepares it for reporting and filtering.
- 💾 **Save Evaluation to Google Sheets:** Appends all results to a Results Sheet (A:D), creating a running history of FAQ quality audits.
- ⚠️ **Filter for Low-Scoring FAQs:** Identifies any FAQ with a score below 7, flagging it as needing review or a rewrite.
- 🔔 **Send Slack Alerts for Weak Entries:** Posts an alert message in your chosen Slack channel, including the question text, the score received, the AI's explanation, and a link to the full results sheet. This ensures your documentation team can quickly address weak or incomplete FAQ answers.
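The parse-and-filter steps above can be sketched as follows. The JSON field names returned by the model (score, explanation) are assumptions based on the result columns described here.

```javascript
// Hypothetical sketch of "Parse & Format AI Results" plus the
// low-score filter. Assumes the model returns JSON with "score"
// and "explanation" fields, possibly wrapped in extra prose.
function parseEvaluation(question, aiText) {
  const json = aiText.match(/\{[\s\S]*\}/); // tolerate prose around the JSON
  const parsed = JSON.parse(json[0]);
  return {
    Question: question,
    Score: Number(parsed.score),
    Explanation: parsed.explanation,
    Timestamp: new Date().toISOString(),
  };
}

// The filter condition: anything under 7 goes to the Slack alert branch
const needsReview = row => row.Score < 7;
```

Matching the outermost braces before `JSON.parse` guards against models that prepend a sentence to their JSON output.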
## 🧠 Key Features

- 🤖 AI-powered FAQ quality scoring (0–10)
- 📊 Automated tracking of doc health over time
- 📥 Seamless Google Sheets integration for results storage
- ⚙️ Slack notifications for underperforming FAQs
- 🧩 Ideal for continuous documentation improvement

## 💼 Use Cases

- 📘 Validate FAQ accuracy before API documentation updates
- ⚡ Auto-test new FAQ sets during content refresh cycles
- 🧠 Ensure API rate limit and auth topics cover all edge cases
- 📢 Alert documentation owners about weak answers instantly

## 📦 Required Integrations

- **Google Sheets API** – for reading and storing FAQs and test results
- **Azure OpenAI (GPT-4o-mini)** – for evaluating FAQ coverage and clarity
- **Slack API** – for sending quality alerts and notifications

## 🎯 Why Use This Template?

- ✅ Ensures API FAQ accuracy and completeness automatically
- ✅ Replaces tedious manual content reviews with AI scoring
- ✅ Builds an ongoing record of documentation improvements
- ✅ Keeps technical FAQs consistent, relevant, and developer-friendly
by Abdullah Alshiekh
## 🧩 What Problem Does It Solve?

In real estate, inquiries come from many sources and often require immediate, personalized attention. Brokers waste significant time manually on:

- **Qualifying leads:** Determining if a prospect's budget, neighborhood, and needs match available inventory.
- **Searching listings:** Cross-referencing customer criteria against a large, static database.
- **Data entry:** Moving contact details and search summaries into a CRM like Zoho.
- **Initial follow-up:** Sending an email to confirm the submission and schedule the next step.

## 🛠️ How to Configure It

### Jotform & CRM Setup

- **Jotform Trigger:** Replace the placeholder with your specific Jotform ID.
- **Zoho CRM:** Replace the placeholder TEMPLATED_COMPANY_NAME with your actual company name.
- **Gmail:** Replace the placeholder Calendly link YOUR_CALENDLY_LINK in the Send a message node with your real estate consultant's booking link.

### Database & AI Setup

- **Google Sheets:** Replace YOUR_GOOGLE_SHEET_DOCUMENT_ID and YOUR_SHEET_GID_OR_NAME in both Google Sheets nodes. Your listings must be structured with columns matching the AI prompt (e.g., bedrooms, rent, neighborhoods).
- **AI Models:** Ensure your Google Gemini API key is linked to the Google Gemini Chat Model node.
- **AI Agent Prompt:** The included prompt contains the exact matching and scoring rules for the AI. You can edit this prompt to refine how the AI prioritizes factors like supplier_rating or neighborhood proximity.

## 🧠 Use Case Examples

- **Small Startups** – Collect high-quality leads: new inquiries must be quickly logged for sales follow-up, but manual entry is slow.
- **B2B Sales** – High-value lead enrichment: prioritize leads that match specific product requirements and budget tiers.
- **Travel/Hospitality** – Personalized itinerary matching: quickly match customer preferences (e.g., dates, group size, activity level) to available packages.
- **E-commerce** – Manual product recommendation: sales teams manually recommend expensive, configurable items (e.g., furniture, specialized equipment).

If you need any help: Get in Touch
by Jitesh Dugar
Revolutionize university admissions with intelligent AI-driven application evaluation that analyzes student profiles, calculates eligibility scores, and automatically routes decisions - saving 2.5 hours per application and reducing decision time from weeks to hours.

**🎯 What This Workflow Does**

Transforms your admissions process from manual application review to intelligent automation:

- **📝 Captures Applications** - Jotform intake with student info, GPA, test scores, essay, extracurriculars
- **🤖 AI Holistic Evaluation** - OpenAI analyzes academic strength, essay quality, extracurriculars, and fit
- **🎯 Intelligent Scoring** - Evaluates students using 40% academics, 25% extracurriculars, 20% essay, 15% fit (0-100 scale)
- **🚦 Smart Routing** - Automatically routes based on AI evaluation:
  - **Auto-Accept (95-100)**: Acceptance letter with scholarship details → Admin alert → Database
  - **Interview Required (70-94)**: Interview invitation with scheduling link → Admin alert → Database
  - **Reject (<70)**: Respectful rejection with improvement suggestions → Database
- **💰 Scholarship Automation** - Calculates merit scholarships ($5k-$20k+) based on eligibility score
- **📊 Analytics Tracking** - All applications logged to Google Sheets for admissions insights

**✨ Key Features**

- **AI Holistic Evaluation:** Comprehensive analysis weighing academics, extracurriculars, essays, and institutional fit
- **Intelligent Scoring System:** 0-100 eligibility score with automated categorization and scholarship determination
- **Structured Output:** Consistent JSON schema with academic strength, admission likelihood, and decision reasoning
- **Automated Communication:** Personalized acceptance, interview, and rejection letters for every applicant
- **Fallback Scoring:** Manual GPA/SAT scoring if AI fails - ensures zero downtime
- **Admin Alerts:** Instant email notifications for exceptional high-scoring applicants (95+)
- **Comprehensive Analytics:** Track acceptance rates, average scores, scholarship distribution, and applicant demographics
- **Customizable Criteria:** Easy prompt editing to match your institution's values and requirements

**💼 Perfect For**

- **Universities & Colleges:** Processing 500+ undergraduate applications per semester
- **Graduate Programs:** Screening master's and PhD applications with consistent evaluation
- **Private Institutions:** Scaling admissions without expanding admissions staff
- **Community Colleges:** Handling high-volume transfer and new student applications
- **International Offices:** Evaluating global applicants 24/7 across all timezones
- **Scholarship Committees:** Identifying merit scholarship candidates automatically

**🔧 What You'll Need**

Required Integrations:

- **Jotform** - Application form with student data collection (free tier works). Create your form for free on Jotform using this link, with fields: Name, Email, Phone, GPA, SAT Score, Major, Essay, Extracurriculars
- **OpenAI API** - GPT-4o-mini for cost-effective AI evaluation (~$0.01-0.05 per application)
- **Gmail** - Automated applicant communication (acceptance, interview, rejection letters)
- **Google Sheets** - Application database and admissions analytics

Optional Integrations:

- **Slack** - Real-time alerts for exceptional applicants
- **Calendar APIs** - Automated interview scheduling
- **Student Information System (SIS)** - Push accepted students to enrollment system
- **Document Analysis Tools** - OCR for transcript verification

**🚀 Quick Start**

- **Import Template** - Copy JSON and import into n8n (requires LangChain support)
- **Create Jotform** - Use provided field structure (Name, Email, GPA, SAT, Major, Essay, etc.)
- **Add API Keys** - OpenAI, Jotform, Gmail OAuth2, Google Sheets
- **Customize AI Prompt** - Edit admissions criteria with your university's specific requirements and values
- **Set Score Thresholds** - Adjust auto-accept (95+), interview (70-94), reject (<70) cutoffs if needed
- **Personalize Emails** - Update templates with your university branding, dates, and contact info
- **Create Google Sheet** - Set up columns: id, Name, Email, GPA, SAT Score, Major, Essay, Extracurriculars
- **Test & Deploy** - Submit a test application with pinned data and verify all nodes execute correctly

**🎨 Customization Options**

- **Adjust Evaluation Weights:** Change academics (40%), extracurriculars (25%), essay (20%), fit (15%) percentages
- **Multiple Programs:** Clone the workflow for different majors with unique evaluation criteria
- **Add Document Analysis:** Integrate OCR for transcript and recommendation letter verification
- **Interview Scheduling:** Connect Google Calendar or Calendly for automated booking
- **SIS Integration:** Push accepted students directly to Banner, Ellucian, or PeopleSoft
- **Waitlist Management:** Add conditional routing for borderline scores (65-69)
- **Diversity Tracking:** Include demographic fields and bias detection in AI evaluation
- **Financial Aid Integration:** Automatically calculate need-based aid eligibility alongside merit scholarships

**📈 Expected Results**

- **90% reduction** in manual application review time (from 2.5 hours to 15 minutes per application)
- **24-48 hour** decision turnaround time vs the 4-6 week traditional process
- **40% higher yield rate** - faster responses increase enrollment commitment
- **100% consistency** - every applicant evaluated with identical criteria
- **Zero missed applications** - automated tracking ensures no application falls through the cracks
- **Data-driven admissions** - comprehensive analytics on applicant pools and acceptance patterns
- **Better applicant experience** - professional, timely communication regardless of decision
- **Defensible decisions** - documented scoring criteria for accreditation and compliance

**🏆 Use Cases**

- **Large Public Universities:** Screen 5,000+ applications per semester, identify the top 20% for auto-admit, route borderline cases to committee review.
- **Selective Private Colleges:** Evaluate 500+ highly competitive applications, calculate merit scholarships automatically, schedule interviews with top candidates.
- **Graduate Programs:** Process master's and PhD applications with research experience weighting, flag candidates for faculty review, automate fellowship awards.
- **Community Colleges:** Handle high-volume open enrollment while identifying honors program candidates and scholarship recipients instantly.
- **International Admissions:** Evaluate global applicants 24/7, account for different GPA scales and testing systems, respond same-day regardless of timezone.
- **Rolling Admissions:** Provide instant decisions for early applicants, fill classes strategically, optimize scholarship budget allocation.

**💡 Pro Tips**

- **Calibrate Your AI:** After 100+ applications, refine evaluation criteria based on enrolled student success
- **A/B Test Thresholds:** Experiment with score cutoffs (e.g., 93 vs 95 for auto-admit) to optimize yield
- **Build a Waitlist Pipeline:** Keep 70-84 score candidates engaged for spring enrollment or next year
- **Track Source Effectiveness:** Add UTM parameters to measure which recruiting channels deliver the best students
- **Committee Review:** Route 85-94 scores to a human admissions committee for final review
- **Bias Audits:** Quarterly review of AI decisions by demographic group to ensure fairness
- **Parent Communication:** Add parent/guardian emails for admitted students under 18
- **Financial Aid Coordination:** Sync scholarship awards with the financial aid office for packaging

**🎓 Learning Resources**

This workflow demonstrates:

- **AI Agents with structured output** - LangChain integration for consistent JSON responses
- **Multi-stage conditional routing** - IF nodes for three-tier decision logic
- **Holistic evaluation** - Weighted scoring across multiple dimensions
- **Automated communication** - HTML email templates with dynamic content
- **Real-time notifications** - Admin alerts for high-value applicants
- **Analytics and data logging** - Google Sheets integration for reporting
- **Fallback mechanisms** - Manual scoring when AI is unavailable

Perfect for learning advanced n8n automation patterns in educational technology!

**🔐 Compliance & Ethics**

- **FERPA Compliance:** Protects student data with secure credential handling
- **Fair Admissions:** Documented criteria eliminate unconscious bias
- **Human Oversight:** Committee review option for borderline cases
- **Transparency:** Applicants can request evaluation criteria
- **Appeals Process:** Structured workflow for decision reconsideration
- **Data Retention:** Configurable Google Sheets retention policies

**📊 What Gets Tracked**

- Application submission date and time
- Complete student profile (GPA, test scores, major, essay, activities)
- AI eligibility score (0-100) and decision category
- Academic strength rating (excellent/strong/average)
- Scholarship eligibility and amount ($0-$20,000+)
- Admission likelihood (high/medium/low)
- Decision outcome (accepted/interview/rejected)
- Email delivery status and open rates
- Time from application to decision

Ready to transform your admissions process? Import this template and start evaluating applications intelligently in under 1 hour.

Questions or customization needs? The workflow includes detailed sticky notes explaining each section and comprehensive fallback logic for reliability.
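The weighted scoring and three-tier routing described above reduce to one weighted sum and two threshold checks. A minimal sketch in JavaScript (the language of n8n Code nodes); the function names, example component scores, and input shape are illustrative assumptions, not the template's actual node code:

```javascript
// Illustrative sketch of the eligibility score and routing logic.
// Component scores (0-100) would come from the AI evaluation; the
// names below are assumptions for illustration.
function eligibilityScore({ academics, extracurriculars, essay, fit }) {
  // Weights from the template: 40% academics, 25% extracurriculars,
  // 20% essay, 15% institutional fit
  return 0.40 * academics + 0.25 * extracurriculars + 0.20 * essay + 0.15 * fit;
}

function routeDecision(score) {
  if (score >= 95) return "auto-accept"; // 95-100: acceptance letter + scholarship
  if (score >= 70) return "interview";   // 70-94: interview invitation
  return "reject";                       // <70: rejection with suggestions
}

const score = eligibilityScore({ academics: 98, extracurriculars: 95, essay: 92, fit: 90 });
// 0.40*98 + 0.25*95 + 0.20*92 + 0.15*90 = 94.85 → falls in the interview band
console.log(score.toFixed(2), routeDecision(score));
```

Adjusting the cutoffs in `routeDecision` corresponds to the "Set Score Thresholds" step; the waitlist customization would add one more band (65-69) before the final `return`.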
by Billy Christi
**Who is this for?**

This workflow is perfect for:

- Support teams and customer service departments managing Jira tickets
- Team leads and managers who need daily visibility into ticket resolution progress
- Organizations wanting to automate ticket reporting and communication
- IT departments seeking to streamline support ticket summarization and tracking

**What problem is this workflow solving?**

Manual ticket review and reporting is time-consuming and often lacks comprehensive analysis. This workflow solves those issues by:

- **Automating daily ticket analysis** by fetching, analyzing, and summarizing all tickets created each day
- **Providing intelligent summaries** using AI to extract key insights from ticket descriptions, comments, and resolutions
- **Streamlining communication** by automatically sending formatted daily reports to stakeholders
- **Saving time** by eliminating manual ticket review and report generation

**What this workflow does**

This workflow automatically fetches daily Jira tickets, analyzes them with AI, and sends comprehensive summaries via email to keep your team informed about support activities.
**Step by step:**

- **Schedule Trigger** runs the workflow automatically at your chosen interval (or use the manual trigger for testing)
- **Set Project Key** defines the Jira project to monitor (default: SUP project)
- **Get All Tickets** fetches tickets from the specified project created today
- **Split Out** extracts individual ticket data including key, summary, and description
- **Loop Tickets** processes each ticket individually through batch processing
- **Get Comments from Ticket** retrieves all comments and conversations for complete context
- **Merge** combines ticket data with associated comments for comprehensive analysis
- **Ticket Summarizer (AI Agent)** uses OpenAI GPT-5 to generate professional summaries and proposed solutions
- **Set Output** structures the AI analysis into a standardized JSON format
- **Aggregate** collects all processed ticket summaries into a single dataset
- **Format Body** creates a readable email format with direct Jira ticket links
- **Send Ticket Summaries** delivers the daily report via Gmail

**How to set up**

- Connect your Jira account by adding your Jira Software Cloud API credentials to the Jira nodes
- Add your OpenAI API key to the OpenAI Chat Model node for AI-powered ticket analysis
- Configure Gmail credentials for the Send Ticket Summaries node to deliver reports
- Update the recipient email in the "Send Ticket Summaries" node to your desired recipient
- Adjust the project key in the "Set Project Key" node to match your Jira project identifier
- Configure the schedule trigger to run daily at your preferred time for automatic reporting
- Customize the JQL query in the Jira nodes to filter tickets based on your specific requirements
- Test the workflow using the manual trigger to ensure proper ticket fetching and AI analysis
- Review the email formatting in the "Format Body" node and adjust as needed for your reporting style

**How to customize this workflow to your needs**

- **Modify AI prompts:** customize the ticket analysis prompt in the "Ticket Summarizer" node to focus on specific aspects like priority, resolution time, or customer impact
- **Adjust ticket filters:** change the JQL queries to filter by status, priority, assignee, or custom date ranges beyond "today"
- **Add more data points:** include additional ticket fields like priority, status, assignee, or custom fields in the analysis
- **Customize email format:** modify the "Format Body" node to change the report structure, add charts, or include additional formatting
- **Set up different schedules:** create multiple versions for different reporting frequencies (hourly, weekly, monthly)

Need help customizing? Contact me for consulting and support: 📧 billychartanto@gmail.com
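The "Adjust ticket filters" customization comes down to editing the JQL string the Jira nodes run. A small JavaScript sketch of assembling that string (e.g., in an n8n Code node or expression); the `buildJql` helper and its defaults are assumptions for illustration, not the template's exact query:

```javascript
// Hypothetical helper that builds the JQL used by the Jira "Get All Tickets"
// node. "SUP" is the template's default project key; the optional filters
// show how to go beyond "created today" (status, priority, date ranges).
function buildJql({ project = "SUP", status, priority, createdSince = "startOfDay()" } = {}) {
  const clauses = [`project = ${project}`, `created >= ${createdSince}`];
  if (status) clauses.push(`status = "${status}"`);
  if (priority) clauses.push(`priority = ${priority}`);
  return clauses.join(" AND ") + " ORDER BY created DESC";
}

console.log(buildJql());
// → project = SUP AND created >= startOfDay() ORDER BY created DESC
console.log(buildJql({ status: "In Progress", priority: "High", createdSince: "-7d" }));
```

Swapping `createdSince` to `-7d` turns the daily report into a weekly one, which pairs naturally with the "Set up different schedules" customization.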
by Rahul Joshi
**📘 Description:**

This end-to-end automation transforms developer support emails into actionable FAQs and sentiment insights using Azure OpenAI GPT-4o, Gmail, Notion, Slack, and Google Sheets. It not only classifies and summarizes each email into a Notion knowledge base but also detects sentiment and urgency, alerts the team on Slack for critical messages, and automatically replies to users with acknowledgment emails. Every failed or malformed payload is transparently logged in Google Sheets, ensuring zero message loss and full visibility into the AI pipeline. The result is a complete AI-driven customer support loop, from inbox to Notion and back to the sender.

**⚙️ What This Workflow Does (Step-by-Step)**

- **🟢 Gmail Polling Trigger – Developer Support Inbox:** Continuously monitors the developer support Gmail inbox every minute for new messages. Extracts the subject, sender, and snippet to initiate AI analysis.
- **🔍 Validate Email Payload (IF Node):** Checks whether each incoming email contains valid message data (such as a message ID and subject). ✅ True path: continues to AI analysis. ❌ False path: logs error details in Google Sheets for debugging.
- **🧠 Configure GPT-4o Model (Azure OpenAI):** Initializes GPT-4o as the reasoning model for semantic classification of developer support content.
- **🤖 Analyze & Classify Developer Email (AI Agent):** Interprets each email and produces structured JSON with a problem summary, an FAQ category (e.g., API, Billing, UI), a 2-3 line solution, and an "is recurring" flag for common issues.
- **🧹 Parse & Clean AI JSON Output (Code Node):** Removes Markdown code-fence formatting and safely parses GPT-4o's output into clean JSON. If parsing fails, the raw text and error message are sent to Google Sheets for review.
- **📘 Save FAQ Entry to Notion Database:** Creates a new FAQ record inside Notion's "Release Notes" database. Stores the problem, category, and solution as searchable structured fields.
- **💬 Announce New FAQ in Slack:** Posts a summary of the new FAQ in Slack with the title, category, and an answer preview. Includes a link to view the Notion record instantly for team visibility.
- **🧠 Configure GPT-4o Model (Sentiment Analysis):** Sets up another GPT-4o instance focused on understanding the tone, emotion, and urgency of each email.
- **❤️ Analyze Email Sentiment & Urgency (AI Agent):** Analyzes the email content to determine urgency (Low, Medium, High, Critical), sentiment (Positive, Neutral, Frustrated, Angry), and whether an immediate response is required (Yes/No), plus a short "reason" explaining the classification.
- **🧹 Parse AI JSON Output – Sentiment Analysis:** Cleans and validates the JSON from the sentiment AI for consistent field names (urgency, sentiment, reason).
- **⚖️ Filter Critical or High-Urgency Emails (IF Node):** Checks whether urgency == High or Critical. ✅ True path: triggers escalation to Slack. ❌ False path: ends quietly to avoid unnecessary noise.
- **🚨 Alert Team in Slack – Critical Issue:** Sends an immediate Slack alert with the email snippet, detected urgency and sentiment, a short justification (reason), and a CTA for urgent action. Ensures fast team response to high-priority issues.
- **📨 Send Acknowledgment Email to Sender (Gmail Node):** Automatically replies to the customer confirming receipt and providing a short AI-generated solution summary. Thanks the user and links the response back to the knowledge base, creating a closed-loop support experience.
- **🪶 Log Workflow Errors to Google Sheets:** Appends all failed validations, missing fields, and JSON parsing issues to the error log sheet. Provides a live audit trail for monitoring workflow health.
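The two Parse & Clean code nodes described above follow a common n8n pattern: strip any Markdown fence the model wraps around its JSON, parse it, and fall back to an error record instead of throwing. A minimal JavaScript sketch under those assumptions (the function name and field names are illustrative, not the template's actual node code):

```javascript
// Strips a Markdown ```json fence (if present) and parses model output.
// On failure it returns an error record suitable for the Google Sheets
// error log instead of throwing, so no message is lost.
function parseAiJson(raw) {
  const cleaned = raw
    .replace(/^\s*```(?:json)?\s*/i, "") // leading ```json fence
    .replace(/\s*```\s*$/, "")           // trailing fence
    .trim();
  try {
    return { ok: true, data: JSON.parse(cleaned) };
  } catch (err) {
    return { ok: false, raw, error: err.message };
  }
}

const out = parseAiJson('```json\n{"category": "API", "isRecurring": true}\n```');
console.log(out.ok, out.data.category); // true "API"
```

The `ok: false` branch is what routes malformed payloads to the Google Sheets error log; the same shape works for both the FAQ classification and the sentiment analysis outputs.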
**🧩 Prerequisites**

- Gmail account with API access
- Azure OpenAI (GPT-4o) credentials
- Notion API integration (for the FAQ database)
- Slack API access (for team alerts)
- Google Sheets (for logging errors)

**💡 Key Benefits**

- ✅ Converts support emails into structured FAQs automatically
- ✅ Detects sentiment & urgency for faster triage
- ✅ Keeps the Notion knowledge base continuously updated
- ✅ Sends Slack alerts for critical issues instantly
- ✅ Maintains transparent error logs in Google Sheets

**👥 Perfect For**

- Developer Relations or Product Support Teams
- SaaS companies managing large support volumes
- Teams using Gmail, Notion, and Slack for internal comms
- Startups automating customer response and knowledge creation
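The urgency gate in the Filter node reduces to a single membership check on the sentiment agent's normalized output. A tiny sketch, assuming the field names produced by the parse step (urgency, sentiment, reason):

```javascript
// Mirrors the IF node: escalate to Slack only for High/Critical urgency.
// Field names match the normalized sentiment output described above.
function needsEscalation(analysis) {
  return ["High", "Critical"].includes(analysis.urgency);
}

// Hypothetical examples of the normalized sentiment records:
console.log(needsEscalation({ urgency: "Critical", sentiment: "Angry", reason: "Outage reported" }));   // true
console.log(needsEscalation({ urgency: "Low", sentiment: "Neutral", reason: "Feature question" }));     // false
```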