by Shelly-Ann Davy
**Automate Bug Reports: GitHub Issues → AI Analysis → Jira Tickets with Slack & Discord Alerts**

Automatically convert GitHub issues into analyzed Jira tickets with AI-powered severity detection, developer assignment, and instant team alerts.

**Overview**
This workflow captures GitHub issues in real time, analyzes them with GPT-4o for severity and categorization, creates enriched Jira tickets, assigns the right developers, and notifies your team across Slack and Discord—all automatically.

**Features**
- **AI-Powered Triage**: GPT-4o analyzes bug severity, category, and root cause, and generates reproduction steps
- **Smart Assignment**: Automatically assigns developers based on mentioned files and issue context
- **Two-Way Sync**: Posts Jira ticket links back to GitHub issues
- **Multi-Channel Alerts**: Rich notifications in Slack and Discord with action buttons
- **Time Savings**: Eliminates 15-30 minutes of manual triage per bug
- **Customizable Routing**: Easy developer mapping and priority rules

**What Gets Created**

Jira Ticket:
- Original GitHub issue details with reporter info
- AI severity assessment and categorization
- Reproduction steps and root cause analysis
- Estimated completion time
- Automatic labeling and priority assignment

GitHub Comment:
- Jira ticket link
- AI analysis summary
- Assigned developer and estimated time

Team Notifications:
- Severity badges and quick-access buttons
- Developer assignment and root cause summary
- Color-coded priority indicators

**Use Cases**
- Development teams managing 10+ bugs per week
- Open source projects handling community reports
- DevOps teams tracking infrastructure issues
- QA teams coordinating with developers
- Product teams monitoring user-reported bugs

**Setup Requirements**

Required:
- GitHub repository with admin access
- Jira Software workspace
- OpenAI API key (GPT-4o access)
- Slack workspace OR Discord server

Customization needed:
- Update developer email mappings in the "Parse GPT Response & Map Data" node
- Replace YOUR_JIRA_PROJECT_KEY with your project key
- Update the Slack channel name (default: dev-alerts)
- Replace YOUR_DISCORD_WEBHOOK_URL with your webhook
- Change your-company.atlassian.net to your Jira URL

**Setup Time:** 15-20 minutes

**Configuration Steps**
1. Import the workflow JSON into n8n
2. Add credentials: GitHub OAuth2, Jira API, OpenAI API, Slack, Discord
3. Configure the GitHub webhook in repository settings
4. Customize developer mappings and project settings
5. Test with a sample GitHub issue
6. Activate the workflow

**Expected Results**
- 90% faster bug triage (20 min → 2 min per issue)
- 100% consistency in bug analysis
- Zero missed notifications
- Better developer allocation
- Improved bug documentation

**Tags:** GitHub, Jira, AI, GPT-4, Bug Tracking, DevOps, Automation, Slack, Discord, Issue Management, Development, Project Management, OpenAI, Webhook, Team Collaboration
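As a sketch of the kind of mapping the "Parse GPT Response & Map Data" node performs, here is the severity-to-priority translation in standalone Python. The label set and Jira priority names are illustrative assumptions, not the template's exact values:

```python
SEVERITY_TO_PRIORITY = {
    # Assumed label set; the template's actual mapping lives in its
    # "Parse GPT Response & Map Data" node and may differ.
    "critical": "Highest",
    "high": "High",
    "medium": "Medium",
    "low": "Low",
}

def map_severity(ai_severity: str) -> str:
    """Translate the GPT-4o severity label into a Jira priority name,
    falling back to Medium for anything unexpected."""
    return SEVERITY_TO_PRIORITY.get(ai_severity.strip().lower(), "Medium")
```

A defensive default matters here because LLM output occasionally drifts from the requested label set.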
by Don Jayamaha Jr
Instantly access real-time Binance Spot Market data in Telegram! This workflow connects the Binance REST API with Telegram and optional GPT-4.1-mini formatting, delivering structured insights such as latest prices, 24h stats, order book depth, trades, and candlesticks directly into chat.

🔎 How It Works
1. A Telegram Trigger listens for incoming user requests.
2. User Authentication validates the Telegram ID to restrict access.
3. A Session ID is generated from chat.id to manage session memory.
4. The Binance AI Agent executes HTTP calls to the Binance public API:
   - Latest Price (Ticker) → /api/v3/ticker/price?symbol=BTCUSDT
   - 24h Statistics → /api/v3/ticker/24hr?symbol=BTCUSDT
   - Order Book Depth → /api/v3/depth?symbol=BTCUSDT&limit=50
   - Best Bid/Ask Snapshot → /api/v3/ticker/bookTicker?symbol=BTCUSDT
   - Candlestick Data (Klines) → /api/v3/klines?symbol=BTCUSDT&interval=15m&limit=200
   - Recent Trades → /api/v3/trades?symbol=BTCUSDT&limit=100
5. Utility tools refine the outputs:
   - Calculator → computes spreads, midpoints, averages, and % changes.
   - Think → extracts and reformats JSON into human-readable fields.
   - Simple Memory → saves symbol, sessionId, and user context.
6. A Message Splitter chunks outputs longer than 4,000 characters for Telegram.
7. Final structured reports are sent back to Telegram.

✅ What You Can Do with This Agent
- Get real-time Binance Spot prices with 24h stats.
- Fetch order book depth and liquidity snapshots.
- View best bid/ask quotes.
- Retrieve candlestick OHLCV data across timeframes.
- Check recent trades (up to 100).
- Calculate spreads, mid-prices, and % changes automatically.
- Receive clean, structured messages instead of raw JSON.

🛠️ Setup Steps
1. Create a Telegram bot via @BotFather and save the bot token.
2. Configure in n8n: import Binance AI Agent v1.02.json, update the User Authentication node with your Telegram ID, add your Telegram credentials (bot token), and add your OpenAI API key.
3. (Optional) Add a Binance API key.
4. Deploy & test: activate the workflow in n8n, send BTCUSDT to your bot, and instantly receive Binance Spot Market insights inside Telegram.

📤 Output Rules
- Group outputs by Price, 24h Stats, Order Book, Candles, Trades.
- Respect Telegram's 4,000-character message limit (auto-split enabled).
- Only structured summaries — no raw JSON.

📺 Setup Video Tutorial
Watch the full setup guide on YouTube: ⚡ Unlock Binance Spot Market insights instantly in Telegram — clean, fast, and API-key free.

🧾 Licensing & Attribution
© 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade report structure are IP-protected. No unauthorized rebranding permitted.
🔗 For support: Don Jayamaha – LinkedIn
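The Calculator step above works on numbers pulled from the bookTicker endpoint. A minimal standalone sketch of the spread/midpoint math it describes, in Python rather than an n8n node, with illustrative (not live) quote values:

```python
def spread_metrics(bid: float, ask: float) -> dict:
    """Compute spread, midpoint, and spread % from a best bid/ask quote,
    such as one returned by Binance's /api/v3/ticker/bookTicker endpoint."""
    spread = ask - bid
    mid = (bid + ask) / 2
    return {
        "spread": spread,
        "midpoint": mid,
        "spread_pct": 100 * spread / mid,  # spread as a % of the mid-price
    }

# Illustrative BTCUSDT quote, not live data:
m = spread_metrics(bid=60000.00, ask=60010.00)
```

The same arithmetic applies to any symbol; only the parsing of the API response differs.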
by Greypillar
**How it works**
- Webhook receives lead form submissions from your website
- AI Agent (GPT-4o) analyzes lead quality using an intelligent scoring framework
- Clearbit enriches company data automatically (employee count, industry, revenue)
- Qualification score (0-100) determines routing: high-quality leads → HubSpot CRM + Slack alert; low-quality leads → Airtable for manual review
- Structured output parser ensures reliable JSON formatting every time

**Set up steps**
- Time to set up: 15-20 minutes
- Import the Clearbit sub-workflow first (separate workflow file included)
- Create 7 custom properties in HubSpot (qualification_score, buying_intent, urgency_level, budget_indicator, ai_summary, pain_points, recommended_action)
- Create an Airtable base with 14 columns for low-quality lead tracking
- Get Slack channel IDs for sales alerts and review requests
- Add credentials: OpenAI (GPT-4o), Clearbit API, HubSpot OAuth2, Slack OAuth2, Airtable Token
- Replace placeholder IDs in the Slack and Airtable nodes
- Configure the Clearbit Enrichment Tool node with your sub-workflow ID

**What you'll need**
- OpenAI API - GPT-4o model access for AI qualification
- Clearbit API - free tier available for company enrichment
- HubSpot - free CRM account works
- Slack - standard workspace
- Airtable - free plan works
- Website form - to send webhook data

**Who this is for**
Sales teams and agencies that want to automatically qualify inbound leads before they hit the CRM. Perfect for B2B companies with high lead volume that need intelligent routing.
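The score-based routing above can be sketched as a single threshold check. The cutoff of 70 is an assumption for illustration; the template's actual cutoff lives in its routing (IF) node:

```python
def route_lead(score: int, threshold: int = 70) -> str:
    """Route a lead by its 0-100 qualification score.
    threshold=70 is an illustrative assumption, not the template's value."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    # High-quality leads go to the CRM + alert branch; the rest queue for review.
    return "hubspot+slack" if score >= threshold else "airtable_review"
```

In n8n this decision is a branch in the canvas rather than a function, but the logic is identical.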
by Don Jayamaha Jr
📊 WEEX Spot Market Quant AI Agent (All-in-One Multi-Agent Trading System)

⚡ Overview
This multi-agent n8n workflow delivers an automated, intelligent trading analysis system for the WEEX Spot Market. It uses GPT-4o to interpret user prompts, route them to the correct sub-agent tools, analyze technical indicators, price data, and sentiment insights, and return concise trading signals via Telegram or downstream automations. No need to download additional workflows — everything is embedded in this single orchestrated agent.

🧠 Core Features
🔹 Single-entry architecture → built-in orchestration logic with no external subworkflow dependencies
🔹 Multi-timeframe indicator analysis → 15m, 1h, 4h, and 1d
🔹 Sentiment + news insights from crypto sources
🔹 Live price, volume, kline, and order book analysis
🔹 LLM-powered signal evaluation using GPT-4o
🔹 Telegram integration for fast human queries or autonomous alerts

🤖 Built-In Agent Modules

| Module | Description |
| --- | --- |
| ✅ Financial Analyst Tool | Routes prompts, interprets tokens, and triggers sub-agents |
| ✅ News & Sentiment Analyst Tool | Gathers real-time sentiment from crypto news sources |
| ✅ Technical Indicator Tools | 15m, 1h, 4h, 1d indicators using WEEX spot market data |
| ✅ Price & Order Book Agent | Fetches real-time stats, price, and structure |
| ✅ Trading Signal Evaluator | GPT-4o merges all data and generates a trading decision |

🖥️ Prompt Flow Example
User input: "Should I long or short ETH on WEEX today?"
→ Financial Analyst Agent interprets the query
→ Fetches multi-timeframe indicators, live price, sentiment
→ GPT-4o evaluates conditions and creates a recommendation
→ Output delivered via Telegram:

📈 ETH/USDT Overview
• Price: $3,710
• 4h RSI: 64.5 – Slightly Overbought
• MACD: Bullish Crossover
• Market Sentiment: 🔼 Positive
Recommendation: Consider a long entry with a stop at $3,640.

🔧 Setup Instructions
Follow these steps to fully deploy and operate the WEEX Quant AI Agent in your n8n environment:
1. 🟢 Get a Telegram Bot API key: create your bot via @BotFather on Telegram and save the token it gives you (format: 123456789:ABCdefGHIjkLMNopQRStuvWXyz).
2. 🔑 Add an OpenAI / DeepSeek Chat API key: compatible with GPT-4o (OpenAI) or DeepSeek Chat.
3. 📈 (Optional) Add WEEX API keys: if expanding to live trading or authenticated data, get a WEEX Spot API key from your account dashboard. This is not required for the analysis agent to function.
4. 🔗 Connect Telegram to n8n: use the Telegram Trigger and Telegram nodes with your API key, and ensure the webhook is set correctly (or use polling mode).

✅ Example Use Cases

| Scenario | Outcome |
| --- | --- |
| "Is BTC bullish or bearish?" | Merged indicator + sentiment + price analysis summary |
| "Get 15m and 4h trends for SOL" | Multi-timeframe volatility vs macro trend report |
| "Latest crypto news on XRP" | Real-time filtered news + DeepSeek sentiment summary |
| "What's the order book structure?" | Level-by-level spread analysis with buy/sell volumes |

🎥 Watch the Live Demo

👨‍💼 Licensing & Support
🧾 © 2025 Treasurium Capital Limited Company. Architecture, prompts, and trade signal framework are IP-protected. No unauthorized rebranding or replication permitted.
📩 Connect with the Creator: Don Jayamaha – LinkedIn Profile
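The sample report above quotes a 4h RSI of 64.5. As an illustration of the indicator math the technical tools rely on, here is a minimal Wilder-smoothed RSI in standalone Python. This is a generic textbook implementation, not the workflow's own code:

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Wilder-smoothed Relative Strength Index over closing prices."""
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closing prices")
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages over the first window, then apply
    # Wilder's smoothing across the remaining deltas.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: RSI pins at 100
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

rising = rsi([float(i) for i in range(20)])  # strictly rising series
```

Values above ~70 are conventionally read as overbought, which is why the 64.5 reading is labeled "Slightly Overbought".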
by Stéphane Bordas
**How it works**
This workflow lets you build a Messenger AI Agent capable of understanding text, images, and voice notes, and replying intelligently in real time. It starts by receiving messages from a Facebook Page via a webhook, detects the message type (text, image, or audio), and routes it through the right branch. Each input is then prepared as a prompt and sent to an AI Agent that can respond using text generation, perform quick calculations, or fetch information from Wikipedia. Finally, the answer is formatted and sent back to Messenger via the Graph API, creating a smooth, fully automated chat experience.

**Set up steps**
1. Connect credentials: add your OpenAI API key and Facebook Page Access Token in n8n credentials.
2. Plug in the webhook: copy the Messenger webhook URL from your workflow and paste it into your Facebook Page developer settings (Webhook → Messages → Subscribe).
3. Customize the agent: edit the System Message of the AI Agent to define tone, temperature, and purpose (e.g. "customer support", "math assistant").
4. Enable memory & tools: turn on Simple Memory to keep conversation context and activate tools like Calculator or Wikipedia.
5. Test & deploy: switch to production mode, then test text, image, and voice messages directly from Messenger.

**Benefits**
💬 Multi-modal understanding — handles text, images, and audio messages seamlessly.
⚙️ Full automation — end-to-end workflow from Messenger to AI and back.
🧠 Smart replies — uses OpenAI + Wikipedia + Calculator for context-aware answers.
🚀 No-code setup — build your first Messenger AI in less than 30 minutes.
🔗 Extensible — easily connect more tools or APIs like Airtable, Google Sheets, or Notion.
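The type-detection branch described above can be sketched as a small classifier over the webhook's `message` object. The field names follow the Messenger Platform payload shape (`message.text`, `message.attachments[].type`), but treat them as assumptions and verify against the payloads your webhook actually receives:

```python
def classify_message(message: dict) -> str:
    """Classify a Messenger webhook `message` object as text, image, or audio.
    Field names assume the standard Messenger Platform payload shape."""
    if message.get("text"):
        return "text"
    for att in message.get("attachments", []):
        if att.get("type") in ("image", "audio"):
            return att["type"]
    return "unsupported"  # stickers, files, etc. would fall through here
```

In the workflow this decision drives a Switch node; each branch then builds the prompt appropriate to its media type.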
by Jay Emp0
🤖 Reddit Auto-Comment Assistant (AI-Driven Marketing Workflow)

Automate how you reply to Reddit posts using AI-generated, first-person comments that sound human, follow subreddit rules, and (optionally) promote your own links or products.

🧩 Overview
This workflow monitors Reddit mentions (via F5Bot Gmail alerts) and automatically:
1. Fetches the relevant Reddit post.
2. Checks the subreddit's rules for self-promotion.
3. Generates a comment using GPT-5-style prompting (human-like tone, <255 chars).
4. Optionally promotes your chosen product from Google Sheets.
5. Posts the comment automatically.

It's ideal for creators, marketers, or founders who want to grow awareness organically and authentically on Reddit — without sounding like a bot.

🧠 Workflow Diagram

🚀 Key Features

| Feature | Description |
|----------|--------------|
| AI-Generated Reddit Replies | Uses GPT-powered reasoning and a prompt structure that mimics a senior marketing pro typing casually. |
| Rule-Aware Posting | Reads subreddit rules and adapts tone — no promo where it's not allowed. |
| Product Integration | Pulls product name + URL from your Google Sheet automatically. |
| Full Automation Loop | From Gmail → Google Sheets → Reddit. |
| Evaluation Metrics | Logs tool usage, link presence, and formatting to ensure output quality. |

🧰 Setup Guide

1️⃣ Prerequisites

| Tool | Purpose |
|------|----------|
| n8n Cloud or self-host | Workflow automation environment |
| OpenAI API key | For comment generation |
| Reddit OAuth2 credentials | To post comments |
| Google Sheets API | To fetch and evaluate products |
| Gmail API | To read F5Bot alerts |

2️⃣ Import the Workflow
- Download Reddit Assistant.json
- In n8n, click Import Workflow → From File
- Paste your credentials in the corresponding nodes: Reddit account, Gmail account, Google Sheets account, OpenAI API

3️⃣ Connect Your Google Sheets
You'll need two Google Sheets:

| Sheet | Purpose | Example Tab |
|--------|----------|-------------|
| Product List | Contains all your product names, URLs, goals, and CTAs | promo |
| Reddit Evaluations | Logs AI performance metrics and tool usage | reddit evaluations |

4️⃣ Set Up the Gmail Trigger (F5Bot)
- Subscribe to F5Bot alerts for keywords like "blog automation" or your brand name.
- Configure the Gmail Trigger to only pull from sender: admin@f5bot.com.

5️⃣ Configure the AI Agent Prompt
The built-in prompt follows a GPT-5-style structured reasoning chain:
1. Reads the Reddit post + rules.
2. Determines if promotion is allowed.
3. Fetches product data from Google Sheets.
4. Writes a short, human comment (<255 chars).
5. Avoids buzzwords and fake enthusiasm.

📊 Workflow Evaluations
The workflow includes automatic evaluation nodes to track:

| Metric | Description |
|--------|--------------|
| contains link | Checks if the comment includes a URL |
| contains dash | Detects format breaks |
| Tools Used | Logs which AI tools were used in reasoning |
| executionTime | Monitors average latency |

💡 Why This Workflow Has Value

| Value | Explanation |
|--------|--------------|
| Saves time | Automates Reddit marketing without manual engagement. |
| Feels human | AI comments use a fast-typing, casual tone (e.g., "u," "ur," "idk"). |
| Follows rules | Respects subreddits where promo is banned. |
| Data-driven | Logs performance across 10 test cases for validation. |
| Monetizable | Can promote Gumroad, YouTube, or SaaS products safely. |

⚙️ Example Use Case
> "I used this automation to pull $1.4k by replying to Reddit posts about blog automation.
> Each comment felt natural and directed users to my n8n workflow."
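The evaluation metrics above (link presence, format breaks, the <255-char limit) can be sketched as simple checks over the generated comment. The exact heuristics are assumptions; the workflow's own evaluation nodes may define them differently:

```python
import re

def evaluate_comment(comment: str) -> dict:
    """Basic quality checks mirroring the listing's evaluation metrics.
    Regexes and the dash heuristic are illustrative assumptions."""
    return {
        "within_limit": len(comment) < 255,                    # <255 chars
        "contains_link": bool(re.search(r"https?://\S+", comment)),
        "contains_dash": "\u2014" in comment or " - " in comment,  # format-break heuristic
    }

r = evaluate_comment(
    "tbh i automated this with n8n, wrote it up here: https://example.com"
)
```

Logging these per comment into the "reddit evaluations" sheet is what makes the 10-test-case validation above possible.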
by Connor Provines
[Meta] Multi-Format Documentation Generator for N8N Creators (+More)

**One-Line Description**
Transform n8n workflow JSON into five ready-to-publish documentation formats, including technical guides, social posts, and marketplace submissions.

**Detailed Description**

What it does: This workflow takes an exported n8n workflow JSON file and automatically generates a complete documentation package with five distinct formats: technical implementation guide, LinkedIn post, Discord community snippet, detailed use case narrative, and n8n Creator Commons submission documentation. All outputs are compiled into a single Google Doc for easy access and distribution.

Who it's for:
- **n8n creators** preparing workflows for the template library or community sharing
- **Automation consultants** documenting client solutions across multiple channels
- **Developer advocates** creating content about automation workflows for different audiences
- **Teams** standardizing workflow documentation for internal knowledge bases

Key Features:
- **Parallel AI generation** - creates all five documentation formats simultaneously using Claude, saving 2+ hours of manual writing
- **Automatic format optimization** - each output follows platform-specific best practices (LinkedIn character limits, Discord casual tone, n8n marketplace guidelines)
- **Single Google Doc compilation** - all documentation consolidated with clear section separators and automatic workflow name detection
- **JSON upload interface** - simple form-based trigger accepts workflow exports without technical setup
- **Smart content adaptation** - the same workflow data is transformed into technical depth for developers, engaging narratives for social media, and searchable descriptions for marketplaces
- **Ready-to-publish outputs** - no editing required; each format follows platform submission guidelines and style requirements

How it works:
1. User uploads an exported n8n workflow JSON through a web form interface
2. Five AI agents process the workflow data in parallel, each generating format-specific documentation (technical guide, LinkedIn post, Discord snippet, use case story, marketplace listing)
3. All outputs merge into a formatted document with section headers and separators
4. Google Docs creates a new document with an auto-generated title from the workflow name and a timestamp
5. The final document populates with all five documentation formats, ready for copying to the respective platforms

**Setup Requirements**

Prerequisites:
- **Anthropic API** (Claude AI) - powers all documentation generation; requires paid API access or credits
- **Google Docs API** - creates and updates documentation; free with a Google Workspace account
- **n8n instance** - cloud or self-hosted with AI agent node support (v1.0+)

Estimated Setup Time: 20-25 minutes (15 minutes for API credentials, 5-10 minutes for testing with a sample workflow)

**Installation Notes**
- **API costs**: each documentation run uses ~15,000-20,000 tokens across five parallel AI calls (approximately $0.30-0.50 per generation at current Claude pricing)
- **Google Docs folder**: update the folderId parameter in the "Create a document" node to your target folder—the default points to a specific folder that won't exist in your Drive
- **Testing tip**: use a simple 3-5 node workflow for your first test to verify all AI agents complete successfully before processing complex workflows
- **Wait node purpose**: the 5-second wait between document creation and content update prevents Google Docs API race conditions—don't remove this step
- **Form URL**: after activation, save the form trigger URL for easy access—bookmark it or share it with team members who need to generate documentation

**Customization Options**

Swappable integrations:
- Replace Google Docs with Notion, Confluence, or file-system storage by swapping the final nodes
- Switch from Claude to GPT-4, Gemini, or other LLMs by changing the language model node (may require prompt adjustments)
- Add Slack/email notification nodes after completion to alert when documentation is ready

Adjustable parameters:
- Modify the AI prompts in each agent node to match your documentation style preferences or add company-specific guidelines
- Add or remove documentation formats by duplicating or deleting agent nodes and updating the merge configuration
- Change document formatting in the JavaScript code node (section separators, headers, metadata)

Extension possibilities:
- Add automatic posting to LinkedIn/Discord by connecting their APIs after doc generation
- Create version history tracking by appending to existing docs instead of creating new ones
- Build an approval workflow by adding human-in-the-loop steps before final document creation
- Generate visual diagrams by adding Mermaid chart generation from the workflow structure
- Create multi-language versions by adding translation nodes after English generation

**Category** Development

**Tags** documentation, n8n, content-generation, ai, claude, google-docs, workflow, automation-publishing

**Use Case Examples**
- **Marketplace contributors**: generate complete n8n template submission packages in minutes instead of hours of manual documentation writing across multiple format requirements
- **Agency documentation**: automation consultancies can deliver client workflows with a professional documentation suite—technical guides for client IT teams, social posts for client marketing, and narrative case studies for the portfolio
- **Internal knowledge base**: development teams standardize workflow documentation across projects, ensuring every automation has consistent technical details, use case examples, and setup instructions for team onboarding
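The merge step (section headers plus separators compiled into one document body) can be sketched as a small join function. It is written here in standalone Python rather than the workflow's JavaScript Code node, and the separator and header styles are assumptions about that node's output:

```python
def compile_sections(workflow_name: str, sections: dict[str, str]) -> str:
    """Join the five generated formats into one document body with
    headers and separators, approximating the workflow's Code node.
    The separator string and header style are illustrative assumptions."""
    sep = "\n" + "=" * 40 + "\n"
    parts = [f"Documentation: {workflow_name}"]
    for title, body in sections.items():
        parts.append(f"## {title}\n{body.strip()}")
    return sep.join(parts)

doc = compile_sections("Lead Router", {
    "Technical Guide": "Step-by-step implementation...",
    "LinkedIn Post": "Just shipped an automation...",
})
```

Because the formats arrive from five parallel branches, the real node also has to tolerate a missing section if one AI call fails; this sketch simply joins whatever it is given.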
by noda
AI Recommender: From Food Photo to Restaurant and Book (Google Books Integrated)

**What it does**
- Analyzes a food photo with an AI vision model to extract dish name + category
- Searches nearby restaurants with Google Places and selects the single best (rating → reviews tie-break)
- Finds a matching book via Google Books and posts a tidy summary to Slack

**Who it's for**
Foodies, bloggers, and teams who want a plug-and-play flow that turns a single food photo into a dining pick + themed reading.

**How it works**
1. Google Drive Trigger detects a new photo
2. Dish Classifier (Vision LLM) → JSON (dish_name, category, basic macros)
3. Search Google Places near your origin; Select Best Place (AI)
4. Recommend Book (AI) → Search Google Books → format details
5. Post to Slack (JP/EN both possible)

**Requirements**
Google Drive / Google Places / Google Books credentials, LLM access (OpenRouter/OpenAI), Slack OAuth.

**Customize**
Edit origin/radius in Set Origin & Radius, tweak the category→keyword mapping in Normalize Classification, and adjust the Slack channel & message in Post to Slack.
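The "rating → reviews tie-break" selection above amounts to a lexicographic max over two fields. A standalone Python sketch (the key names mirror Google Places response fields such as `rating` and `user_ratings_total`, but treat the exact names as assumptions about this template's data):

```python
def select_best_place(places: list[dict]) -> dict:
    """Pick the top restaurant: highest rating first,
    then most reviews as the tie-break."""
    return max(
        places,
        key=lambda p: (p.get("rating", 0), p.get("user_ratings_total", 0)),
    )

best = select_best_place([
    {"name": "A", "rating": 4.5, "user_ratings_total": 120},
    {"name": "B", "rating": 4.7, "user_ratings_total": 80},
    {"name": "C", "rating": 4.7, "user_ratings_total": 300},  # wins the tie-break
])
```

Using a tuple key keeps the two-level comparison in one expression: Python compares ratings first and only consults review counts when ratings are equal.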
by plemeo
**Who's it for**
Influencers and social-media teams who want to comment automatically on posts from selected Instagram profiles—scaling engagement while staying within platform limits.

**How it works / What it does**
1. A Schedule Trigger runs every 2 h.
2. Profile Post Extractor fetches up to 20 posts per profile from your CSV.
3. Select Cookie rotates Instagram session cookies.
4. Get Random Post selects one.
5. Create Comment (GPT-4o) writes a ≤150-character reply in your chosen language.
6. The workflow builds instagram_post_to_comment.csv and uploads it to SharePoint.
7. The Phantombuster Autocomment Agent posts it.
8. The post is logged in instagram_posts_already_commented.csv to prevent repeats.

**How to set up**
- Same as the auto-liker, but also add instagram_posts_already_commented.csv (header postUrl).
- Define tracked profiles in profiles_instagram.csv.
- Configure the comment prompt & language in Set ENV Variables.

**Profile CSV format**
Your profiles_instagram.csv must contain a header profileUrl and valid Instagram profile URLs. Example:

profileUrl
https://www.instagram.com/brand_account/
https://www.instagram.com/influencer123/
by Jimleuk
On my never-ending quest to find the best embeddings model, I was intrigued to come across Voyage-Context-3 by MongoDB and was excited to give it a try. This template runs the embedding model over an arXiv research paper and stores the results in a vector store. It was only fitting to use MongoDB Atlas from the same parent company. The template also includes a RAG-based Q&A agent which taps into the vector store, to help qualify whether the embeddings are any good and whether the difference is even noticeable.

**How it works**
This template is split into 2 parts. The first part imports a research document, which is then chunked and embedded into our vector store. The second part builds a RAG-based Q&A agent to test vector store retrieval on the research paper. Read the steps for more details.

**How to use**
- First ensure you create a Voyage account at voyageai.com and have a MongoDB database ready.
- Start with Step 1: fill in the "Set Variables" node and click the Manual Execute Trigger. This will take care of populating the vector store with the research paper.
- To use the Q&A agent, you must publish the workflow to access the public chat interface. This is because "Respond to Chat" works best in this mode and not in editor mode.
- To use your own document, edit the "Set Variables" node to define the URL to it. This embeddings approach should work best on larger documents.

**Requirements**
- Voyageai.com account for embeddings. You may need to add credit to get a reasonable RPM for this workflow.
- MongoDB database, either self-hosted or online at https://www.mongodb.com.
- OpenAI account for the RAG Q&A agent.

**Customising this workflow**
- The Voyage embeddings work with any vector store, so feel free to swap in others such as Qdrant or Pinecone if you're not a fan of MongoDB Atlas.
- If you're feeling brave, instead of the 3 sequential pages setup I have, why not try the whole document! Fair warning that you may hit memory problems if your instance isn't sufficiently sized — but if it is, go ahead and share the results!
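The "chunked and embedded" step above depends on how the document is split before each chunk is sent to the embeddings API. A minimal standalone sketch of overlapping character chunking (the sizes here are illustrative assumptions, not the template's splitter settings):

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split a document into overlapping character chunks before embedding.
    Overlap preserves context across chunk boundaries, which matters for
    retrieval quality. size/overlap values are illustrative assumptions."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

parts = chunk_text("x" * 2500, size=1000, overlap=200)
```

A contextualized model like Voyage-Context-3 is designed to see neighboring chunks together, which is exactly why the author suggests trying the whole document at once if your instance has the memory for it.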
by John
**How it works**
1. User signup & verification: the workflow starts when a user signs up. It generates a verification code and sends it via SMS using Twilio.
2. Code validation: the user replies with the code. The workflow checks the code and, if valid, creates a session for the user.
3. Conversational AI: incoming SMS messages are analyzed by ChatGPT for sentiment, intent, and urgency. The workflow stores the conversation context and generates smart, AI-powered replies.
4. Escalation handling: if the AI detects urgency or frustration, the workflow escalates the session—alerting your team and sending a supportive SMS to the user.

**Set up steps**
- **Estimated setup time:** 10-20 minutes for most users.
- **What you'll need:**
  - A free n8n account (self-hosted or cloud)
  - A free Twilio account (for SMS)
  - An OpenAI API key (for AI)
  - A PostgreSQL database (Supabase, Neon, or local)
- **Setup process:**
  1. Import this workflow into n8n.
  2. Add your Twilio and OpenAI credentials as environment variables or n8n credentials.
  3. Update the webhook URLs in your Twilio console (for incoming SMS).
  4. (Optional) Adjust the sticky notes in the workflow for detailed, step-by-step guidance.
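The generate-then-validate pattern in steps 1-2 can be sketched in a few lines. A six-digit code is an assumption for illustration; the workflow may use a different length or format:

```python
import secrets

def generate_code(length: int = 6) -> str:
    """Generate a numeric SMS verification code using a
    cryptographically strong RNG. Six digits is an assumption."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

def validate_code(submitted: str, expected: str) -> bool:
    """Compare the user's SMS reply with the stored code in constant
    time, tolerating surrounding whitespace in the reply."""
    return secrets.compare_digest(submitted.strip(), expected)

code = generate_code()
```

Using `secrets` rather than `random` matters here: verification codes are a security boundary, and `compare_digest` avoids timing side channels in the comparison.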
by Rodrigo
**How it works**
This workflow creates a complete AI-powered restaurant ordering system through WhatsApp. It receives customer messages, processes multimedia content (text, voice, images, PDFs, location), uses GPT-4 to understand customer intent and manage conversations, handles the complete ordering flow from menu selection to payment verification, and sends formatted orders to restaurant staff. The system maintains conversation memory, verifies payment receipts using OCR, and provides automated responses in multiple languages.

**Who's it for**
Restaurant owners, food delivery services, and hospitality businesses looking to automate customer service and order management through WhatsApp without hiring additional staff.

**Requirements**
- WhatsApp Business API account
- OpenAI API key (GPT-4/GPT-4o access recommended)
- Supabase account (for conversation memory and vector storage)
- Google Drive account (for menu images and QR codes)
- Google Maps API key (for location services)
- Gemini API key (for PDF processing)

**How to set up**
1. Configure credentials - add your WhatsApp Business API, OpenAI, Supabase, Google Drive, and Gemini API credentials to n8n
2. Update phone numbers - replace [PHONE_NUMBER] placeholders with your actual restaurant and staff phone numbers
3. Customize restaurant details - replace [RESTAURANT_NAME], [RESTAURANT_OWNER_NAME], and [BANK_ACCOUNT_NUMBER] with your information
4. Upload menu images - add your menu images to Google Drive and update the file IDs
5. Set up Supabase - create tables for chat memory and upload your menu/restaurant information to the vector database
6. Configure AI prompts - update the restaurant information in the AI agent system messages
7. Test the workflow - send test messages to verify all integrations work

**How to customize the workflow**
- **Menu management**: update the Google Drive file IDs to display your current menu images
- **Payment verification**: modify the receipt analysis logic to match your bank's receipt format
- **Order formatting**: customize the order confirmation template sent to kitchen staff
- **AI personality**: adjust the restaurant agent's tone and responses in the system prompts
- **Languages**: the AI supports multiple languages - customize welcome messages for your target market
- **Business hours**: add time-based logic to handle orders outside operating hours
- **Delivery zones**: integrate with your delivery-area logic using the location processing features
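The "Business hours" customization above is a simple time gate. A standalone Python sketch (the 11:00-22:00 window is an illustrative assumption; no such logic ships with the template by default):

```python
from datetime import time

def within_business_hours(now: time,
                          open_at: time = time(11, 0),
                          close_at: time = time(22, 0)) -> bool:
    """Gate incoming orders by operating hours.
    The 11:00-22:00 window is an illustrative assumption."""
    return open_at <= now < close_at

ok = within_business_hours(time(12, 30))    # lunchtime order
late = within_business_hours(time(23, 15))  # after closing
```

An IF node with this check placed before the ordering agent could route after-hours messages to an automated "we're closed, here are our hours" reply instead.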