by Budi SJ
Automated Brand DNA Generator Using JotForm, Google Search, AI Extraction & Notion

The Brand DNA Generator workflow automatically scans and analyzes online content to build a company’s Brand DNA profile. It starts with input from a form, then crawls the company’s website and Google search results to gather relevant information. Using AI-powered extraction, the system identifies insights such as value propositions, ideal customer profiles (ICP), pain points, proof points, brand tone, and more. All results are neatly formatted and automatically saved to a Notion database as a structured Brand DNA report, eliminating the need for manual research.

🛠️ Key Features

- Automated data capture: collects company data directly from form submissions and Google search results.
- AI-powered insight extraction: uses LLMs to extract and summarize brand-related information from website content.
- Fetches clean text from multiple web pages using HTTP requests and a content extractor.
- Merges extracted data from multiple sources into a single Brand DNA JSON structure.
- Automatically creates a new page in Notion with formatted sections (headings, paragraphs, and bullet points).
- Handles parsing failures and processes multiple pages efficiently in batches.

🔧 Requirements

- JotForm API Key, to capture company data from form submissions.
- SerpAPI Key, to perform automated Google searches.
- OpenRouter / LLM API, for AI-based language understanding and information extraction.
- Notion Integration Token & Database ID, to save the final Brand DNA report to Notion.

🧩 Setup Instructions

1. Connect your JotForm account and select the form containing the Company Name and Company Website fields.
2. Add your SerpAPI Key.
3. Configure the AI model using OpenRouter or your preferred LLM provider.
4. Enter your Notion credentials and specify the databaseId in the Create a Database Page node.
5. (Optional) Customize the prompt in the Information Extractor node to modify the tone or structure of the AI analysis.
6. Activate the workflow, then submit data through the JotForm to test automatic generation and Notion integration.

💡 Final Output

A complete Brand DNA Report containing:

- Company Description
- Ideal Customer Profile
- Pain Points
- Value Proposition
- Proof Points
- Brand Tone
- Suggested Keywords

All generated automatically from the company’s online presence and stored in Notion with no manual input required.
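The "merges extracted data from multiple sources into a single Brand DNA JSON structure" step could be sketched in an n8n Code node roughly like this. The field names below are illustrative assumptions, not the template's actual schema: the sketch keeps the first non-empty value for scalar fields and concatenates plus deduplicates list fields.

```javascript
// Hypothetical merge step for per-page AI extractions (field names are assumptions).
function mergeBrandDna(extractions) {
  const merged = {
    companyDescription: '',
    valueProposition: '',
    brandTone: '',
    idealCustomerProfile: [],
    painPoints: [],
    proofPoints: [],
    suggestedKeywords: [],
  };
  for (const e of extractions) {
    // Scalar fields: first non-empty value wins.
    for (const key of ['companyDescription', 'valueProposition', 'brandTone']) {
      if (e[key] && !merged[key]) merged[key] = e[key];
    }
    // List fields: append new entries, skipping duplicates.
    for (const key of ['idealCustomerProfile', 'painPoints', 'proofPoints', 'suggestedKeywords']) {
      for (const item of e[key] || []) {
        if (!merged[key].includes(item)) merged[key].push(item);
      }
    }
  }
  return merged;
}
```

In an actual Code node you would feed it the AI-extraction items (e.g. `mergeBrandDna($input.all().map(i => i.json))`) and return the merged object for the Notion node.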
by riandra
Description

This n8n template turns any website or documentation portal into a fully functional AI-powered support chatbot — no manual copy-pasting, no static FAQs. It uses MrScraper to crawl and extract your site's content, OpenAI to generate embeddings, and Pinecone to store and retrieve that knowledge at chat time. The result is a retrieval-augmented chatbot that answers questions using only your actual website content, always cites its sources, and never hallucinates policies or pricing.

How It Works

- **Phase 1 – URL Discovery:** The Map Agent crawls your target domain using include/exclude patterns to discover all relevant documentation or help center pages. It returns a clean, deduplicated list of URLs ready for content extraction.
- **Phase 2 – Page Content Extraction:** Each discovered URL is processed in controlled batches by the General Agent, which extracts the readable content (title + main text) from every page. Low-quality or near-empty pages are automatically filtered out.
- **Phase 3 – Chunking & Embedding:** Page text is split into overlapping chunks (default: ~1,100 chars with 180-char overlap) to preserve context at boundaries. Each chunk is sent to OpenAI Embeddings to generate a vector, then stored in Pinecone with metadata including the source URL, page title, and chunk index.
- **Phase 4 – Chat Endpoint:** A Chat Trigger exposes a webhook endpoint your website or widget can connect to. When a user asks a question, the Support Chat Agent queries Pinecone for the most relevant chunks and generates a grounded answer using GPT-4.1-mini — always with source URLs included and strict anti-hallucination rules enforced.

How to Set Up

1. Create 2 scrapers in your MrScraper account: a Map Agent scraper (for crawling and discovering page URLs) and a General Agent scraper (for extracting title + content from each page). Copy the scraperId for each — you'll need these in n8n.
2. Set up your Pinecone index: create an index with dimensions that match your chosen OpenAI embedding model (e.g. 1536 for text-embedding-ada-002), and choose a namespace (recommended format: docs-yourdomain).
3. Add your credentials in n8n: MrScraper API token, OpenAI API key (used for both embeddings and the chat model), and Pinecone API key.
4. Configure the Map Agent node: set your target domain or docs root URL (e.g. https://docs.yoursite.com), set includePatterns to focus on relevant sections (e.g. /docs/, /help/, /support/), and optionally set excludePatterns to skip noise (e.g. /assets/, /tag/, /static/).
5. Configure the General Agent node: enter your General Agent scraperId and adjust the batch size in the SplitInBatches node (start with 1–5 to stay within rate limits).
6. Configure the Pinecone nodes: select your Pinecone index in both the Upsert and Retriever nodes, and set the correct namespace in both nodes so indexing and retrieval use the same data.
7. Customise the chatbot system prompt: edit the Support Chat Agent's system message to set the chatbot's name, tone, and rules, and adjust topK in the Pinecone Retriever (default: 8) based on how much context you want per answer.
8. Connect your chat widget or frontend to the Chat Trigger webhook URL generated by n8n.

Requirements

- **MrScraper** account with API access enabled
- **OpenAI** account (for embeddings and GPT-4.1-mini chat)
- **Pinecone** account with an index created and ready

Good to Know

- The overlap between chunks (default 180 chars) is intentional — it prevents answers from being cut off at chunk boundaries and significantly improves retrieval quality.
- The chatbot is configured to cite 1–3 source URLs per answer, so users can always verify the information themselves.
- The anti-hallucination rules in the system prompt instruct the agent to say it can't find an answer rather than guess — making it safe to use for support, pricing, or policy questions.
- Re-indexing is as simple as re-running the workflow. Use a consistent Pinecone namespace and upsert mode to update existing vectors without duplicating them.

Customising This Workflow

- **Swap the chat model:** Replace GPT-4.1-mini with GPT-4o or another OpenAI model for higher-quality answers on complex queries.
- **Scheduled re-indexing:** Add a Schedule Trigger to automatically re-crawl and re-index your docs whenever content changes.
- **Multiple knowledge bases:** Use different Pinecone namespaces (e.g. docs-product, docs-api) and route questions to the right namespace based on user intent.
- **Embed on your website:** Connect the Chat Trigger webhook to any chat widget library to give your users a live support experience powered entirely by your own documentation.
- **Multilingual support:** Add a translation node before chunking to index content in multiple languages and serve a global audience.
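The Phase 3 chunking described above can be sketched in a few lines of JavaScript. The sizes come from the template's stated defaults (~1,100 chars with 180-char overlap); the return shape is an illustrative assumption:

```javascript
// Split text into overlapping chunks so context survives chunk boundaries.
// Defaults match the template: ~1,100-char chunks, 180-char overlap.
function chunkText(text, size = 1100, overlap = 180) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    const end = Math.min(start + size, text.length);
    chunks.push({ text: text.slice(start, end), index: chunks.length, start });
    if (end === text.length) break;
    start = end - overlap; // step back so adjacent chunks share `overlap` chars
  }
  return chunks;
}
```

Each resulting chunk would then be embedded and upserted to Pinecone together with its metadata (source URL, page title, and the `index` field above as the chunk index).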
by Liveblocks
Analyzing uploaded Liveblocks comments attachments with AI

This example uses Liveblocks Comments, collaborative commenting components for React. When an AI assistant is mentioned in a thread (e.g. "@AI Assistant"), it will automatically leave a response. Additionally, it will analyze any PDF or image attachments in the comments and use them to help it respond.

Using webhooks, this workflow is triggered when a comment is created in a thread. If the comment mentions the agent's ID ("__AI_AGENT"), the workflow creates a response. If a PDF or image file is uploaded, it is analyzed by Anthropic and used as context. The response is then added to the thread, and users see it appear in their apps in real time.

Set up

This workflow requires a Comments app installed and webhooks set up in the Liveblocks dashboard. You can try it with a demo application:

1. Download the Next.js comments example, and run it with a secret key.
2. Find database.ts inside the example and uncomment the AI assistant user.
3. Insert the secret key from the project into the n8n nodes: "Get a comment", "Get a thread", "Create a comment".
4. Go to the Liveblocks dashboard, open your project, and go to "Webhooks".
5. Create a new webhook in your project using a placeholder URL, selecting "commentCreated" events.
6. Copy your webhook secret from this page and paste it into the "Liveblocks Trigger" node.
7. Expose the webhook URL from the trigger, for example with localtunnel or ngrok.
8. Copy the production URL from the "Liveblocks Trigger" and replace localhost:5678 with the new URL.

Your workflow is now set up! Tag @AI Assistant in the application and add attachments to trigger it.

Localtunnel

The easiest way to expose your webhook URL:

npx localtunnel --port 5678 --subdomain your-name-here

This creates a URL like: https://honest-months-fix.loca.lt

The URL you need for the dashboard looks like this: https://honest-months-fix.loca.lt/webhook/9cc66974-aaaf-4720-b557-1267105ca78b/webhook
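The "should the agent respond?" branch could be sketched like this. The comment shape here (a `userId` plus a `mentionedIds` list) is a simplified assumption, not the exact Liveblocks webhook payload; the agent ID "__AI_AGENT" comes from the example's database.ts.

```javascript
// Respond only when the AI agent is mentioned, and never to its own comments
// (comment shape is a simplified assumption for illustration).
function shouldRespond(comment, agentId = '__AI_AGENT') {
  // Avoid an infinite loop: the agent's own comments also fire "commentCreated".
  if (comment.userId === agentId) return false;
  return (comment.mentionedIds || []).includes(agentId);
}
```

Guarding against the agent's own comments matters because the response the workflow creates will itself trigger another "commentCreated" webhook.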
by deAPI Team
Who is this for?

- Teams who upload meeting recordings to YouTube (unlisted or private) and want automated notes
- Project managers who need to track action items across recurring meetings
- Remote teams who want searchable, structured meeting notes in Notion
- Content teams repurposing recorded calls into documentation

What problem does this solve?

Meeting notes are either rushed, incomplete, or never written at all. This workflow removes that bottleneck — upload a recording to YouTube and get a structured Notion page with summary, action items, decisions, and key topics, plus a Slack notification, all within minutes.

What this workflow does

1. Monitors a YouTube channel via RSS for new video uploads
2. Transcribes the video using deAPI (Whisper Large V3) directly from the YouTube URL — no file download or size limits
3. An AI Agent analyzes the transcript and extracts a title, summary, action items, decisions, and key topics
4. Creates a structured meeting notes page in a Notion database
5. Posts the summary and action items to a Slack channel

Setup Requirements

- **n8n instance** (self-hosted or n8n Cloud)
- deAPI account for video transcription
- Anthropic account for the AI Agent
- Notion workspace with a meeting notes database
- Slack workspace

Installing the deAPI Node

- **n8n Cloud:** Go to Settings → Community Nodes and toggle the "Verified Community Nodes" option
- **Self-hosted:** Go to Settings → Community Nodes and install n8n-nodes-deapi

Configuration

1. Add your deAPI credentials (API key + webhook secret)
2. Add your Anthropic credentials (API key)
3. Set the Feed URL in the RSS trigger to your YouTube channel's RSS feed: https://www.youtube.com/feeds/videos.xml?channel_id=YOUR_CHANNEL_ID
4. Add your Notion credentials and set the Database ID in the Notion node
5. Add your Slack credentials and set the Channel in the Slack node
6. Ensure your n8n instance is on HTTPS

How to customize this workflow

- **Change the AI model:** Swap Anthropic for OpenAI, Google Gemini, or any other LLM provider
- **Adjust the note structure:** Modify the AI Agent system message to extract different fields (attendees, follow-up date, sentiment, etc.)
- **Change the trigger:** Replace the RSS trigger with a Google Drive trigger or form upload for non-YouTube recordings
- **Change the output destination:** Replace Notion with Google Docs, Confluence, or Airtable
- **Change the notification:** Replace Slack with Microsoft Teams, email, or Discord
- **Monitor multiple channels:** Duplicate the RSS trigger or use multiple feed URLs to track several YouTube channels
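The RSS trigger's Feed URL follows the pattern shown in the Configuration step and can be built with a one-line helper (the channel ID below is a placeholder):

```javascript
// Build the YouTube channel RSS feed URL used by the RSS trigger.
function youtubeFeedUrl(channelId) {
  return `https://www.youtube.com/feeds/videos.xml?channel_id=${encodeURIComponent(channelId)}`;
}
```

To monitor several channels, you could map a list of channel IDs through this helper and use one feed URL per duplicated RSS trigger.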
by Stephan Koning
Recruiter Mirror is a proof‑of‑concept ATS analysis tool for SDRs/BDRs. Compare your LinkedIn or CV to job descriptions and get recruiter‑ready insights. By comparing candidate profiles against job descriptions, it highlights strengths, flags missing keywords, and generates actionable optimization tips. Designed as a practical proof of concept for breaking into tech sales, it shows how automation and AI prompts can turn LinkedIn into a recruiter‑ready magnet.

The workflow runs Webhook → LinkedIn CV/JD fetch → GhostGenius API → n8n parsing/transform → Groq LLM → Output to Webhook. Here is the list of tools & APIs required to set up the Recruiter Mirror (Proof of Concept) project:

🔧 Tools & APIs Required

1. n8n (Automation Platform)
- Either n8n Cloud or a self‑hosted n8n instance.
- Used to orchestrate the workflow, manage nodes, and handle credentials securely.

2. Webhook Node (Form Intake)
- Captures LinkedIn profile (LinkedIn_CV) and job posting (LinkedIn_JD) links submitted by the user.
- Acts as the starting point for the workflow.

3. GhostGenius API
- Endpoints used: /v2/profile → scrapes and returns structured CV/LinkedIn data; /v2/job → scrapes and returns structured job description data.
- **Auth:** Requires valid credentials (e.g., API key / header auth).

4. Groq LLM API (via n8n node)
- Model used: moonshotai/kimi-k2-instruct (via the Groq Chat Model node).
- Purpose: Runs the ATS Recruiter Check, comparing CV JSON vs JD JSON, then outputs structured JSON per the ATS schema.
- **Auth:** Groq account + saved API credentials in n8n.

5. Code Node (JavaScript Transformation)
- Parses Groq's JSON output safely (JSON.parse).
- Generates clean, recruiter‑ready HTML summaries with structured sections: Status, Reasoning, Recommendation, Matched keywords / Missing keywords, Optimization tips.

6. n8n Native Nodes
- **Set & Aggregate Nodes** → Rebuild structured CV & JD objects.
- **Merge Node** → Combine CV data with the job description for comparison.
- **If Node** → Validates the LinkedIn URL before processing (falls back to error messaging).
- **Respond to Webhook Node** → Sends back the final recruiter‑ready insights as JSON (or HTML).

⚠️ Important Notes

- **Credentials:** Store API keys & auth headers securely inside the n8n Credentials Manager (never hardcode them inside nodes).
- **Proof of Concept:** This workflow demonstrates feasibility but is **not production‑ready** (scraping stability, LinkedIn terms of use, and API limits should be considered before real deployments).
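The "parses Groq's JSON output safely" step in the Code node could look roughly like this. The fence-stripping is an assumption about how LLM output often arrives (wrapped in markdown code fences), not a documented Groq behavior:

```javascript
// Safely parse LLM output that may be wrapped in ```json ... ``` fences.
function parseLlmJson(raw, fallback = null) {
  const cleaned = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, '') // strip a leading markdown fence, if any
    .replace(/```\s*$/, '');          // strip a trailing fence, if any
  try {
    return JSON.parse(cleaned);
  } catch (err) {
    return fallback; // downstream nodes get a predictable value instead of a crash
  }
}
```

Returning a fallback rather than throwing keeps the workflow alive when the model occasionally emits malformed JSON, which is exactly the failure mode the template's error-messaging branch guards against.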
by Fabian Maume
AI chatbots are only as good as the data they learn from. Most large language models (LLMs) rely only on their training datasets. If you want a chatbot to know more about your business, the best approach is to implement a retrieval-augmented generation (RAG) pipeline to train Gemini with your website data. That is what this workflow helps you do.

This workflow uses a scheduler to scrape a website on a regular basis using Apify; web pages are then indexed or updated in a Pinecone vector database. This allows the chatbot to provide accurate and up-to-date information. The workflow uses Google's Gemini AI for both embeddings and response generation.

How does it work?

This workflow is split into 2 sub-logics highlighted with green sticky notes:

- RAG training logic
- Chatbot logic

RAG training logic

1. Use the Apify Website Content Crawler to retrieve all content from your website.
2. The Pinecone Vector Store node indexes the text chunks in a Pinecone index.
3. The Embeddings Google Gemini node generates embeddings for each text chunk.

Chatbot logic

1. The Chat Trigger node receives user questions through a chat interface.
2. An AI Agent node handles those requests.
3. The AI Agent node uses a Vector Store Tool node, linked to a Pinecone Vector Store node in query mode, to retrieve relevant text chunks from Pinecone based on the user's question.
4. The AI Agent sends the retrieved information and the user's question to the Google Gemini Chat Model (gemini-pro).

How to set up this template?

All nodes with an orange sticky note require setup.

Get your tools set up:

1. Google Cloud project and Vertex AI API: create a Google Cloud project, enable the Vertex AI API for your project, and obtain a Google AI API key from Google AI Studio.
2. Apify: create an Apify account.
3. Pinecone: create a free account on the Pinecone website, obtain your API key from your Pinecone dashboard, and create an index named company-website in your Pinecone project.

Configure credentials in your n8n environment for:

- Google Gemini (PaLM) API (using your Google AI API key)
- Pinecone API (using your Pinecone API key)

Set up the trigger frequency: edit the Schedule Trigger to match the frequency at which you wish to update your RAG. If you want to train your chatbot only once, you can replace it with a click trigger.

Set up the Apify node: authenticate (via OAuth or API) and set your website URL in the JSON input.

FAQ

What is RAG?

RAG stands for retrieval-augmented generation. It is a technique that provides an AI model (such as a large language model) with additional data, which allows the LLM to give more up-to-date and topic-specific information.

What is the difference between RAG and LLM?

RAG is a way to complement an LLM by giving it more up-to-date information. You can think of the LLM as the CPU processing your question, and RAG as the hard drive providing information.

Do I have to use my website as training data?

No. The Website Content Crawler can scrape any website, so you can, in theory, use this template to build a RAG pipeline for someone else. You can even combine data from multiple websites.

Can I use a model other than Gemini?

In theory, yes. You could replace the Gemini node with another LLM model. If you are looking for inspiration on a RAG implementation with the Ollama model, check out this template.
by Zain Khan
AI-Powered Quiz Generator for Instructors 📝🤖

Instantly turn any document into a shareable online quiz! This n8n workflow automates the entire quiz creation process: a new Jotform submission triggers the flow, the Google Gemini AI extracts key concepts and generates multiple-choice questions with correct answers, the questions are saved to a Google Sheet for record-keeping, and finally a fully built, ready-to-share Jotform quiz is created via an HTTP request.

How it Works

This workflow acts as a complete "document-to-quiz" automation tool, simplifying the process of creating educational or testing materials:

1. Trigger & Input: The process starts when a user fills out the main Jotform submission form, providing a document (PDF/file upload), the desired Quiz Title, and the Number of Questions to generate. Create a Jotform like this one (https://form.jotform.com/252856893250062) with fields for Quiz Name, File Upload, and Number of Questions.
2. Document Processing: The workflow retrieves the uploaded document via an HTTP request and uses the Extract from File node to parse and extract the raw text content from the file.
3. AI Question Generation: The extracted text, quiz title, and desired question count are passed to the Google Gemini AI Agent. Following strict instructions, the AI analyzes the content and generates the specified number of multiple-choice questions (with four options and the correct answer indicated) in a precise JSON format.
4. Data Structuring: The generated JSON is validated and formatted using a Structured Output Parser and split into individual items for each question.
5. Record Keeping (Google Sheets): Each generated question, along with all its options and the confirmed correct answer, is appended as a new row in a designated Google Sheet for centralized record-keeping and review.
6. Jotform Quiz Creation (HTTP Request): The workflow dynamically constructs the required API body, converting the AI-generated questions and options into the necessary fields for a new Jotform. It then uses an HTTP Request node to call the Jotform API, creating a brand-new, ready-to-use quiz form.
7. Final Output: The workflow returns the link to the newly created quiz, which can be shared immediately for submissions.

Requirements

To deploy this automated quiz generator, ensure you have the following accounts and credentials configured in your n8n instance:

- **Jotform Credentials:** An **API Key** is required for both the **Jotform Trigger** (to start the workflow) and the final **HTTP Request** (to create the new quiz form via the API). Sign up for Jotform here: https://www.jotform.com/?partner=zainurrehman
- **Google Gemini API Key:** An API key for the **Google Gemini Chat Model** to power the **AI Agent** for question generation.
- **Google Sheets Credentials:** An **OAuth2** or **API Key** credential for the **Google Sheets** node to save the generated questions.
- **Initial Jotform:** A source Jotform that accepts the user input: a **File Upload** field, a **Text** field for the Quiz Title, and a **Number** field for the Number of Questions.

Pro Tip: After the final HTTP Request, add an additional step (like an Email or Slack node) to automatically send the generated quiz link back to the user who submitted the initial request!
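The "split into individual items" step after the Structured Output Parser could be sketched like this in a Code node. The quiz JSON shape below (a `questions` array with `question`, `options`, `correctAnswer`) is an illustrative assumption, not the template's exact schema:

```javascript
// Turn one parsed quiz object into one n8n item per question
// (quiz schema is an assumption for illustration).
function toQuestionItems(quiz) {
  return quiz.questions.map((q, i) => ({
    json: {
      number: i + 1,
      question: q.question,
      options: q.options,           // expected: array of four choices
      correctAnswer: q.correctAnswer,
    },
  }));
}
```

Each returned item then maps naturally onto one Google Sheets row and one question block in the Jotform API body.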
by Yasser Sami
Skool Community Scraper Using Olostep API

This n8n template automates scraping content from Skool communities using the Olostep API. It collects structured data from Skool pages and stores it in a clean format, making it easy to analyze communities, extract insights, or build datasets for research and outreach.

Who’s it for

- Community builders researching Skool groups
- Marketers analyzing competitor or niche communities
- SaaS founders validating ideas through community data
- Automation builders collecting structured social data
- Anyone who wants Skool data without manual scraping

How it works / What it does

1. Trigger: The workflow starts with a manual trigger or form input containing a Skool URL or query.
2. Skool Page Scraping: The workflow uses the Olostep API to scrape Skool community pages and extracts structured data using LLM-based parsing.
3. Data Extraction: Depending on configuration, the workflow can extract the community name, post titles and content, author names, engagement metrics (likes, comments), and URLs to posts or discussions.
4. Parse & Normalize: The raw response is cleaned and split into individual items, ensuring consistent fields across all scraped entries.
5. Deduplication: Duplicate posts or entries are automatically removed.
6. Data Storage: The final structured data is stored in a table (Google Sheets or n8n Data Table), ready for filtering, exporting, or further automation.

This workflow allows you to turn Skool communities into structured datasets without browser automation or manual copy/paste.

How to set up

1. Import the template into your n8n workspace.
2. Add your Olostep API key.
3. Define the Skool page or community URL you want to scrape.
4. Connect your storage destination (Google Sheets or Data Table).
5. Run the workflow and collect structured Skool data automatically.

Requirements

- n8n account (cloud or self-hosted)
- Olostep API key
- Google Sheets account or n8n Data Table

How to customize the workflow

- Change the extraction schema to capture more fields (timestamps, tags, replies).
- Add pagination to scrape older posts.
- Store data in Airtable, Notion, or a database.
- Trigger scraping on a schedule instead of manually.
- Combine with AI agents to summarize or analyze community discussions.

👉 This template makes it easy to extract, analyze, and reuse Skool community data at scale.
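The deduplication step described above can be sketched as a small Code-node helper. Keying on the post URL is an assumption; any stable unique field from the scraped entries would work:

```javascript
// Remove duplicate scraped entries, keeping the first occurrence of each URL.
function dedupeByUrl(posts) {
  const seen = new Set();
  return posts.filter((p) => {
    if (seen.has(p.url)) return false;
    seen.add(p.url);
    return true;
  });
}
```

Running this before the storage node keeps the Google Sheet or Data Table free of repeated rows when the same post appears on multiple scraped pages.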
by Daniel Iliesh
This n8n workflow lets you effortlessly tailor your resume for any job using Telegram and LinkedIn. Simply send a LinkedIn job URL or paste a job description to the Telegram bot, and the workflow will:

- Extract the job information (using an optional proxy if needed)
- Fetch your resume in JSON Resume format (hosted on GitHub Gist or elsewhere)
- Use an OpenRouter-powered LLM agent to automatically adapt your resume to match the job requirements
- Generate both HTML and PDF versions of your tailored resume
- Return the PDF file and shareable download links directly in Telegram

The workflow is open-source and designed with privacy in mind. You can host the backend yourself to keep your data entirely under your control. It requires a Telegram bot, a public JSON Resume, and an OpenRouter account. Proxy support is available for LinkedIn scraping. Perfect for anyone looking to quickly customize their resume for multiple roles with minimal manual effort!
by Madame AI
Scrape & Import Products to Shopify from Any Site (with Variants & Images) (Optimized for Shoes)

This advanced n8n template automates e-commerce operations by scraping product data (including variants and images) from any URL and creating fully detailed products in your Shopify store. This workflow is essential for dropshippers, e-commerce store owners, and anyone looking to quickly import product catalogs from specific websites into their Shopify store.

Self-Hosted Only

This workflow uses a community contribution and is designed and tested for self-hosted n8n instances only.

How it works

1. The workflow reads a list of product page URLs from a Google Sheet. Your sheet, with its columns for Product Name and Product Link, acts as a database for your workflow.
2. The Loop Over Items node processes products one URL at a time.
3. Two BrowserAct nodes run sequentially to scrape all product details, including the name, price, description, sizes, and image links.
4. A custom Code node transforms the raw scraped data (where fields like sizes might be a single string) into a structured JSON format with clean lists for sizes and images.
5. The Shopify node creates the base product entry using the main details.
6. The workflow then uses a series of nodes (Set Option and Add Option via HTTP Request) to dynamically add product options (e.g., "Shoe Size") to the new product.
7. The workflow uses HTTP Request nodes to perform two crucial bulk tasks: create a unique variant for each available size (including a custom SKU), and upload all associated product images from their external URLs to the product.
8. A final Slack notification confirms the batch has been processed.

Requirements

- **BrowserAct** API account for web scraping
- **BrowserAct** "Bulk Product Scraping From (URLs) and uploading to Shopify (Optimized for shoe - NIKE -> Shopify)" template
- **BrowserAct** n8n Community Node (n8n Nodes BrowserAct)
- **Google Sheets** credentials for the input list
- **Shopify** credentials (API Access Token) to create and update products, variants, and images
- **Slack** credentials (optional) for notifications

Need Help?

- How to Find Your BrowserAct API Key & Workflow ID
- How to Connect n8n to BrowserAct
- How to Use & Customize BrowserAct Templates
- How to Use the BrowserAct n8n Community Node

Workflow Guidance and Showcase

- Automate Shoe Scraping to Shopify Using n8n, BrowserAct & Google Sheets
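The Code node's transform (step 4 above) could look roughly like this. The input field names and the comma delimiter are assumptions about the scraper's raw output; adjust both to the actual BrowserAct response:

```javascript
// Normalize raw scraped product data: delimited strings become clean arrays
// (field names and delimiter are assumptions for illustration).
function normalizeProduct(raw) {
  const split = (s) => (s || '').split(',').map((v) => v.trim()).filter(Boolean);
  return {
    title: (raw.name || '').trim(),
    price: parseFloat(raw.price) || 0,
    description: raw.description || '',
    sizes: split(raw.sizes),   // e.g. "40, 41, 42" -> ['40', '41', '42']
    images: split(raw.images), // one image URL per array entry
  };
}
```

The resulting `sizes` array drives the per-size variant creation, and `images` feeds the bulk image upload.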
by txampa_n8n
Insert Notion Database Fields from a Public URL via WhatsApp

How it works

1. The WhatsApp Trigger receives a message containing a public URL.
2. The workflow extracts the URL and retrieves the page content (via Apify).
3. The content is parsed and transformed into structured fields.
4. A new record is created in Notion, mapping the extracted fields to your database properties.

Setup steps

1. Configure your WhatsApp credentials in the WhatsApp Trigger node.
2. In the Search / URL Extraction step, adjust the input logic if your message format differs.
3. Configure your Apify credentials (and actor/task) to scrape the target page.
4. Connect your Notion database and map the extracted values in Properties.

Customization

- Default example: Amazon/Goodreads/Casa del Libro book pages.
- Update the scraping/parsing logic to match your target sources (e.g., books, products, articles, recipes, news, or LinkedIn profiles).
- If you change the data model in Notion, update the Properties mapping accordingly in the final node.
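The URL-extraction step could be as simple as a regex in a Code node. This is a minimal sketch; the template's actual input logic may differ, which is why the setup steps mention adjusting it to your message format:

```javascript
// Pull the first http(s) URL out of an incoming WhatsApp message.
function extractUrl(message) {
  const match = (message || '').match(/https?:\/\/[^\s]+/);
  return match ? match[0] : null;
}
```

Returning `null` when no URL is found gives a clean condition for skipping or error-messaging messages that don't contain a link.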
by Oneclick AI Squad
This automated n8n workflow enables AI-powered responses across multiple social media platforms, including Instagram DMs, Facebook messages, and WhatsApp chats, using Meta's APIs. The system provides intelligent customer support, lead generation, and smart engagement at scale through AI-driven conversation management and automated response routing.

Good to Know

- Supports multi-platform messaging across Instagram, Facebook, and WhatsApp
- Uses an AI Travel Agent and the Ollama Chat Model for intelligent response generation
- Includes platform memory for maintaining conversation context and history
- Automatic message processing and routing based on platform and content type
- Real-time webhook integration for instant message detection and response

How It Works

- **WhatsApp Trigger** - Monitors incoming WhatsApp messages and initiates the automated response workflow
- **Instagram Webhook** - Captures Instagram DM notifications and processes them for AI analysis
- **Facebook Webhook** - Detects Facebook Messenger interactions and routes them through the system
- **Message Processor** - Analyzes incoming messages from all platforms and prepares them for AI processing
- **AI Travel Agent** - Processes messages using an intelligent AI model to generate contextually appropriate responses
- **Ollama Chat Model** - Provides advanced language processing for complex conversation scenarios
- **Platform Memory** - Maintains conversation history and context across multiple interactions for personalized responses
- **Response Router** - Determines the optimal response strategy and routes messages to the appropriate sending mechanism
- **Instagram Sender** - Delivers AI-generated responses back to Instagram DM conversations
- **Facebook Sender** - Sends automated replies through the Facebook Messenger API
- **Send Message (WhatsApp)** - Delivers personalized responses to WhatsApp chat conversations

How to Use

1. Import the workflow into n8n
2. Configure Meta's Instagram Graph API, Facebook Messenger API, and WhatsApp Business Cloud API
3. Set up an approved Meta Developer App with the required permissions
4. Configure webhook endpoints for real-time message detection
5. Set up the Ollama Chat Model for AI response generation
6. Test with sample messages across all three platforms
7. Monitor response accuracy and adjust AI parameters as needed

Requirements

- Access to Meta's Instagram Graph API, Facebook Messenger API, and WhatsApp Business Cloud API
- Approved Meta Developer App
- Webhook setup and persistent token management for real-time messaging
- Ollama Chat Model integration
- AI Travel Agent configuration

Customizing This Workflow

- Modify AI prompts for different business contexts (customer service, sales, support)
- Adjust response routing logic based on message content or user behavior
- Configure platform-specific message templates and formatting
- Set up custom memory storage for enhanced conversation tracking
- Integrate additional AI models for specialized response scenarios
- Add message filtering and content moderation capabilities
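The Response Router's platform-to-sender mapping could be sketched as a simple lookup. The node names match the ones listed above; treating the platform as a lowercase string key is an assumption for illustration:

```javascript
// Map an incoming message's platform to the sender node that should reply.
function routeResponse(platform) {
  const senders = {
    instagram: 'Instagram Sender',
    facebook: 'Facebook Sender',
    whatsapp: 'Send Message (WhatsApp)',
  };
  return senders[platform] || null; // null -> unsupported platform, skip reply
}
```

In the actual workflow this branching would typically live in a Switch node, with one output per platform wired to the matching sender.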