by Jimleuk
This n8n workflow builds an appointment-scheduling AI agent that can take enquiries from prospective customers and help them book an appointment by checking appointment availability. Where no appointment is booked, the agent can send follow-up messages to re-engage leads. After an appointment is booked, the agent can reschedule or even cancel the booking for the user without human intervention. For small outfits, this workflow could contribute the necessary "man-power" required to increase business sales.

The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox

2024-10-22: Updated to Cal.com API v2.

## How it works
- The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook.
- The prospective or existing customer's number is logged in an Airtable base, which we'll use to track all our enquiries.
- Next, the message is sent to our AI agent, which can reply to the user and decide whether an appointment booking can be made. The reply is sent via SMS using Twilio.
- A scheduled trigger that runs every day checks our chat logs for prospective customers who have yet to book an appointment but still show interest.
- This list is sent to our AI agent to formulate a personalised follow-up message for each lead, asking whether they want to continue with the booking.
- The follow-up interaction is logged so as not to send too many messages to the customer.

## Requirements
- A Twilio account to receive customer messages.
- An Airtable account and base to use as our datastore for enquiries.
- A Cal.com account to use as our scheduling service (a hedged example of an availability request is sketched below).
- An OpenAI account for our AI model.

## Customising this workflow
- Not using Airtable? Swap it out for your CRM of choice, such as HubSpot, or your own service.
- Not using Cal.com? Swap it out for an API-enabled service such as Acuity Scheduling, or your own service.
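To give a feel for the scheduling tool calls the agent makes, here is a minimal n8n Code-node-style sketch of an availability check against the Cal.com v2 API. The exact endpoint path, query parameters, and the `cal-api-version` header value are assumptions to verify against Cal.com's current v2 reference; the `CAL_API_KEY` credential name and the event type ID are placeholders.

```javascript
// Illustrative only: ask Cal.com (API v2) for open slots before the agent
// offers an appointment time. Endpoint path, params, and the
// cal-api-version header are assumptions - check Cal.com's v2 docs.
const response = await fetch(
  'https://api.cal.com/v2/slots/available' +
    '?eventTypeId=123456' +                       // hypothetical event type
    '&startTime=2024-10-22T09:00:00Z' +
    '&endTime=2024-10-22T17:00:00Z',
  {
    headers: {
      Authorization: `Bearer ${$env.CAL_API_KEY}`, // hypothetical credential name
      'cal-api-version': '2024-08-13',
    },
  }
);
const { data } = await response.json();
return [{ json: { availableSlots: data } }];
```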
by DUBCOM
## Workflow: Snapshot Contabo

### How it Works
This workflow automates daily backups (snapshots) of VPS instances hosted on Contabo. Each day at midnight, it checks for existing snapshots and ensures that only the latest backups are retained by removing older ones. It provides a seamless, hands-off backup process to keep your data secure.

### Setup Steps
Setting up this workflow is quick, typically taking about 10-15 minutes. The essential part of the setup is providing the necessary credentials, which you can easily retrieve from your Contabo control panel.
1. **Import the Workflow:** Download and upload the workflow JSON into n8n.
2. **Configure Credentials:** Add CLIENT_ID, CLIENT_SECRET, API_USER, and API_PASSWORD in the credential node.
3. **Activate the Workflow:** Enable it to run automatically at midnight every day.

### Flow Overview
- **Schedule Trigger (00:00 daily):** Automatically initiates the workflow.
- **Formatted Date:** Prepares a timestamp for naming the snapshot.
- **List Snapshots:** Verifies whether an existing snapshot is available for each VPS (see the API sketch below).
- **Conditional Logic:**
  - No snapshot? Proceeds to create a new one.
  - Snapshot found? Deletes the old snapshot before creating a new one.

### Key Points
- **Snapshot Retention:** Old snapshots are deleted to ensure only the latest backups are stored.
- **Unique Identifiers:** UUIDs are used to track and guarantee unique operations.
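For orientation, here is a rough JavaScript sketch of the two Contabo API calls the workflow wraps: fetching an OAuth2 token with the four credentials above, then listing snapshots for one instance. The URLs and field names follow Contabo's public API documentation but should be verified; the instance ID is a placeholder, and the `x-request-id` header must be a fresh UUID on every call.

```javascript
// Sketch only: authenticate against Contabo, then list snapshots for a VPS.
// Credential names mirror the ones configured in the credential node.
const { randomUUID } = require('crypto'); // built-in modules may need to be allowed in n8n

const tokenRes = await fetch(
  'https://auth.contabo.com/auth/realms/contabo/protocol/openid-connect/token',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({
      client_id: $env.CLIENT_ID,
      client_secret: $env.CLIENT_SECRET,
      username: $env.API_USER,
      password: $env.API_PASSWORD,
      grant_type: 'password',
    }),
  }
);
const { access_token } = await tokenRes.json();

const instanceId = 12345; // hypothetical VPS instance id
const snapRes = await fetch(
  `https://api.contabo.com/v1/compute/instances/${instanceId}/snapshots`,
  {
    headers: {
      Authorization: `Bearer ${access_token}`,
      'x-request-id': randomUUID(), // unique id required per request
    },
  }
);
return [{ json: await snapRes.json() }];
```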
by Mind-Front
## Description
The closest definition of this workflow is a cheaper, modular version of the Perplexity online API, powered by LLM models that outperform Perplexity's Llama model. This flow provides a seamless way to conduct detailed web searches, extract data, and generate insightful reports based on real-time information. It provides a webhook-based flow that accepts any search question and reports back the results via a multi-level web search analysis and domain-specific emulation of an agent to deliver an unbiased expert report.

This flow is ideal for market research, competitive analysis, or any scenario where actionable, structured insights are needed. A more complete, step-by-step guide is provided within the workflow, ensuring you have all the details to set up and customize each component.

This tool is designed to function similarly to Perplexity by performing semantic search, reranking, and follow-up queries. However, it offers a unique advantage: complete customization at every stage. Modify any part of the process, from query refinement to data extraction, allowing you to tailor the workflow to your specific needs.

## Key Features
- **AI-Powered Query Generation and Expert Emulation:** Uses Google Gemini to transform user queries into expert-level searches, providing accurate and context-aware results.
- **Dual-Stage Semantic Search with Intelligent Reranking:** Performs an initial search, reranks results, and refines the query based on findings to conduct a second, more targeted search.
- **Top-Result Data Extraction:** Extracts content from the top three results of each search, capturing relevant insights from six total sources.
- **Customizable API Options:** Pre-configured with free APIs (Google Gemini, DuckDuckGo, and Article Extraction APIs) but easily adaptable to other APIs if preferred.
- **Automated, Insightful Reporting:** Synthesizes data into a cohesive report, providing expert-level insights tailored to the user's query.

## Instructions for API Setup
This workflow is designed to work with free-tier APIs, offering a cost-effective way to retrieve high-quality data. Here's how to set up each API, with detailed instructions included in the workflow:

**Google Gemini API (for Query Generation and Analysis):**
- Visit Google AI Studio and log in.
- Create a free API key under "Get API Key" → "Create API Key in New Project."
- The free tier includes up to 15 requests per minute, 1 million tokens per minute, and up to 1,500 requests per day.

**Brave Search API (for Web Search):**
To obtain the free web search API tier from Brave, follow these steps:
- Visit api.search.brave.com
- Create an account
- Subscribe to the free plan (no charge)
- Navigate to the API Keys section
- Generate an API key. For the subscription type, choose "Free". (A hedged example request follows after these setup instructions.)

**Article Extraction API (for Content Extraction):**
- Register on RapidAPI.com and subscribe to the Article Extraction API. The free plan allows up to 300 extractions per month.
- Enter your API key in each of the 6 extraction nodes for content retrieval.
- Alternative: the workflow includes full instructions on how to replace the current flow with alternative API keys, along with suggestions such as the Scraper Tech API.

**Additional Tip:** To use other APIs, you can generate a cURL request in RapidAPI's playground and then paste it into the HTTP Request node in n8n. This approach streamlines integration by automatically filling in headers and request details.
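As a reference point, here is a minimal sketch of the Brave Search call used for the two search passes, written in the style of an n8n Code node. The `/res/v1/web/search` endpoint and the `X-Subscription-Token` header come from Brave's API documentation; the query string and the `BRAVE_API_KEY` variable name are placeholders.

```javascript
// Minimal sketch of one Brave Search pass: query, then keep the top three
// organic results for the Article Extraction step.
const query = 'best CRM for small law firms'; // hypothetical refined query
const res = await fetch(
  `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(query)}&count=10`,
  {
    headers: {
      Accept: 'application/json',
      'X-Subscription-Token': $env.BRAVE_API_KEY, // hypothetical credential name
    },
  }
);
const results = await res.json();

return (results.web?.results ?? [])
  .slice(0, 3)
  .map(r => ({ json: { title: r.title, url: r.url } }));
```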
## Why Choose This Workflow?
The Intelligent Online Web Researcher offers an all-in-one solution for complex, customizable online research. Unlike other tools that provide automated semantic search, this workflow is fully modifiable, allowing you to tailor each step, from the initial query and reranking to data extraction and reporting. With built-in instructions and a structure that's easy to adapt, it's ideal for commercial applications that require real-time, high-quality insights.

Tags: Online Research, Web Search, Market Analysis, Web Search Automation, Data Extraction, Semantic Search, API Integration, Competitive Intelligence, Business Intelligence, Real-Time Reporting, Web Scrape, Data Crawler, Perplexity
by Sam Nesler
Syncs assignments and completion states back and forth between Canvas LMS and a Notion database. By default it triggers automatically every 2 hours during the school day (meaning 7 times a day), but it also supports manual refreshing via webhooks.

## Setup
You'll need a few things to get started:
- **A Canvas API key.** You can generate one by going to your Canvas account settings and clicking the "New Access Token" button. The URL looks like https://canvas.wisc.edu/profile/settings. You'll also need to replace the URLs in the Canvas nodes with your institution's domain, unless you're a student at UW-Madison. The Canvas nodes are all the HTTP Request nodes except the one labelled "OpenAI Categorization", which is an OpenAI node and will require a key in a later step. (A sketch of a typical Canvas request is shown below.)
- **A Notion integration token.** You can find this by going to your Notion integrations page and clicking "Create new integration". You can make it an "Internal Integration".
- **A Notion database to sync to.** I made a template for use with the workflow, but you can use any database that has the following fields:
  - Status (status): Status with at least the options "Not Started" and "Completed"; assignments start out "Not Started" and are marked "Completed" when they are submitted on Canvas.
  - Estimate (select): Select with at least the options "XS", "S", "M", "L", "XL"; this is where the estimated time to complete the assignment is stored. Even if you don't use AI, assignments start out as "M".
  - Priority (select): Select with at least the options "Could Do", "Should Do", "Must Do"; assignments start out as "Should Do".
  - ID (text): this is where the ID of the assignment is stored. We use this to sync without keeping a database on the server.
  - Due Date (date): this is where the due date of the assignment is stored.
  - Class (text): this is where the name of the class is stored.
  - Link (URL): this is where the link to the assignment is stored.
- **The ID of the Notion database you want to sync to.** You can find this by clicking "Share" in the top right of your database and copying the link. The ID is the part of the link that comes after https://www.notion.so/ and before ?v=. So for https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4, the ID would be 1976e99d91128076b034e7379464560f.
- **An OpenAI key** for assignment length estimation, or disable the node.

## Manual Refreshing
Embed the production URL from the Webhook Trigger inside a "toggle list" or "toggle heading" inside Notion, then expand the heading to refresh, like so:
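For reference, the Canvas side of the sync boils down to HTTP requests like the one sketched below in n8n Code-node style. The assignments endpoint is part of the standard Canvas REST API; the course ID and the `CANVAS_API_KEY` name are placeholders, and `canvas.wisc.edu` should be swapped for your institution's domain.

```javascript
// Sketch of the kind of Canvas call the HTTP Request nodes make:
// list assignments for one course using a personal access token.
const courseId = 123456; // hypothetical course id
const res = await fetch(
  `https://canvas.wisc.edu/api/v1/courses/${courseId}/assignments?per_page=100`,
  { headers: { Authorization: `Bearer ${$env.CANVAS_API_KEY}` } }
);
const assignments = await res.json();

// The workflow stores assignment.id in the Notion "ID" text property so later
// runs can match Canvas assignments to existing Notion pages.
return assignments.map(a => ({
  json: { id: a.id, name: a.name, due_at: a.due_at, html_url: a.html_url },
}));
```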
by Angel Menendez
## Upload Public-Facing Images to an S3 Cloudflare Bucket via Slack Modal

### 🛠 Who is this for?
This workflow is for teams that use Slack for internal communication and need a streamlined way to upload public-facing images to an S3 Cloudflare bucket. It's especially beneficial for DevOps, marketing, or content management teams who frequently share assets and require efficient cloud storage integration.

### 💡 What problem does this workflow solve?
Manually uploading images to cloud storage can be time-consuming and disruptive, especially if you're already working in Slack. This workflow automates the process, allowing you to upload images directly from Slack via a modal popup. It reduces friction and keeps your workflow within a single platform.

### 🔍 What does this workflow do?
This workflow connects Slack with an S3 Cloudflare bucket to simplify the image-uploading process:
- **Slack Modal Interaction:** Users trigger a Slack modal to select images for upload.
- **Dynamic Folder Management:** Choose to create a new folder or use an existing one for uploads.
- **S3 Integration:** Automatically uploads the images to a specified S3 Cloudflare bucket (see the sketch below for what this step amounts to).
- **Slack Confirmation:** After upload, Slack sends a confirmation with the uploaded file URLs.

### 🚀 Setup Instructions
Prerequisites:
- Slack bot with the following permissions: commands, files:write, files:read, chat:write
- Cloudflare S3 credentials: create an API token with write access to your S3 bucket.
- n8n instance: ensure n8n is properly set up with webhook capabilities.

Steps:
1. **Configure Slack Bot:** Set up a Slack app and enable the Events API. Add your n8n webhook URL to the Events Subscription section.
2. **Add Credentials:** Add your Slack API and S3 Cloudflare credentials to n8n.
3. **Customize the Workflow:** Open the Idea Selector Modal node and update the folder options to suit your needs. Update the Post Image to Channel node with your Slack channel ID.
4. **Deploy the Workflow:** Activate the workflow and test by triggering the Slack modal.

### 🛠 How to Customize This Workflow
- **Adjust the Slack Modal:** You can modify the modal layout in the Idea Selector Modal node to add additional fields or adjust the styling.
- **Change the Bucket Structure:** Update the Upload to S3 Bucket node to customize the folder paths or change naming conventions.

### 🔗 References and Helpful Links
- Slack API Documentation
- Cloudflare S3 Setup
- n8n Documentation

### 📓 Workflow Notes
Key features:
- **Slack Integration:** Uses Slack modal interactions to streamline the upload process.
- **Cloud Storage:** Automatically uploads to a Cloudflare S3 bucket.
- **User Feedback:** Sends a Slack message with file URLs upon successful upload.

Setup dependencies:
- Slack API token
- Cloudflare S3 credentials
- n8n webhook configuration

Sticky notes included: sticky notes are embedded within the workflow to guide you through configuration and explain node functionality.

### 🌟 Why Use This Workflow?
This workflow keeps your image-uploading process intuitive, efficient, and fully integrated with tools you already use. By leveraging n8n's flexibility, you can ensure smooth collaboration and quick sharing of public-facing assets without switching contexts.
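Since Cloudflare R2 exposes an S3-compatible API, the final upload step can be pictured as an ordinary S3 PutObject call pointed at your account's R2 endpoint. The sketch below uses the AWS SDK for JavaScript purely as an illustration; the account ID, bucket, object key, and credential names are placeholders, and the workflow itself performs this step through its Upload to S3 Bucket node.

```javascript
// Illustrative only: what the "Upload to S3 Bucket" step amounts to when
// expressed with a generic S3 client. Endpoint, bucket, key, and credential
// variable names are placeholders.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({
  region: 'auto',
  endpoint: 'https://<account-id>.r2.cloudflarestorage.com', // your R2 endpoint
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  },
});

const imageBuffer = Buffer.from('...'); // replace with the image bytes downloaded from Slack

await s3.send(new PutObjectCommand({
  Bucket: 'public-assets',             // your public-facing bucket
  Key: 'marketing/launch-banner.png',  // "<folder>/<filename>" chosen in the modal
  Body: imageBuffer,
  ContentType: 'image/png',
}));
```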
by Akram Kadri
## Who is this template for?
This workflow template is designed for sales, marketing, and business development professionals who want a cost-effective and efficient way to generate leads. By leveraging n8n core nodes, it scrapes business emails from Google Maps without relying on third-party APIs or paid services, ensuring there are no additional costs involved. Ideal for small business owners, freelancers, and agencies, this template automates the process of collecting contact information for targeted outreach, making it a powerful tool for anyone looking to scale their lead generation efforts without incurring extra expenses.

You can watch the video tutorial here: https://youtu.be/HaiO-UeiKBA

## How it works
This template streamlines email scraping from Google Maps using only n8n core nodes, ensuring a completely free and self-contained solution. Here's how it operates:
1. **Input Queries:** You provide a list of queries, each consisting of keywords related to the type of business you want to target and the specific region or subregion you're interested in.
2. **Iterates through Queries:** The workflow processes each query one at a time. For each query, it triggers a sub-workflow dedicated to handling the scraping tasks.
3. **Scrapes Google Maps for URLs:** Using these queries, the workflow scrapes Google Maps to collect URLs of business listings matching the provided criteria.
4. **Fetches HTML Content:** The workflow then fetches the HTML pages of the collected URLs for further processing.
5. **Extracts Emails:** Using a Code node with custom JavaScript, the workflow runs regular expressions on the HTML content to extract business email addresses (a simplified version of this extraction logic is sketched below).

## Setup
1. **Add Queries:** Open the first node, "Run Workflow", and input a list of queries, each containing the business keywords and the target region.
2. **Configure the Google Sheets Node:** Open the Google Sheets node and select a document and the specific sheet where the scraped results will be saved.
3. **Run the workflow:** Click "Test workflow" and watch your Google Sheets document gradually receive business email addresses.
4. **Customize as Needed:** You can adjust the regular expressions in the Code node to refine the email extraction logic, or add logic to extract other kinds of information.
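The extraction step in the Code node is essentially a regular expression run over the fetched HTML. The snippet below is a simplified stand-in for the template's actual logic: the input field name and the filtering of image-filename false positives are illustrative and may differ from what the template ships with.

```javascript
// Simplified email extraction: run a regex over the page HTML, de-duplicate,
// and drop common false positives such as image filenames.
const html = $input.first().json.data; // raw HTML from the HTTP Request node (field name may vary)

const emailRegex = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g;
const matches = html.match(emailRegex) || [];

const emails = [...new Set(matches)].filter(
  e => !/\.(png|jpg|jpeg|gif|svg|webp)$/i.test(e)
);

return emails.map(email => ({ json: { email } }));
```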
by Simon
## Address Validation Workflow

### About
This workflow automates the process of validating and correcting client shipping addresses in Billbee, ensuring accurate delivery information. It's ideal for e-commerce businesses looking to save time and reduce errors in their order fulfillment process. The workflow uses Billbee, an order management platform for small to medium-sized online retailers, and the Endereco API for address validation.

### Who Is This For?
- **E-Commerce Businesses:** Streamline order fulfillment by automatically correcting common shipping address errors.
- **Warehouse Teams:** Reduce manual work and ensure packages are shipped to the correct address.
- **Small to Medium-Sized Retailers:** Businesses using Billbee to manage orders and requiring efficient, automated solutions for address validation.

### How it Works
1. **Trigger:** The workflow starts via a Billbee webhook when an order is imported.
2. **Fetch Data:** Retrieve the client's shipping address using the Order ID (a loose sketch of this step follows below).
3. **Validate Address:** Send the address to the Endereco API for validation and correction (e.g., house number errors).
4. **Conditional Actions:**
   - Valid address: update the address in Billbee.
   - Invalid address: tag the order with "Validation Error".
5. **Track Status:** Add tags in Billbee for processed orders.

### Setup Steps
1. **API Keys:** Obtain Billbee Developer/User API keys and an Endereco API key.
2. **Billbee Rule:** Create an automation rule:
   - Trigger: Order imported.
   - Action: Call External URL with the OrderId to trigger the n8n workflow.
3. **Optional:** Use a secondary trigger (e.g., order state changes to "gepackt") for manual corrections.

### Customization Options
- **Filter Delivery Addresses:** Customize filters to exclude specific delivery types, such as pickup shops ("Postfiliale", "Paketshop", or "Packstation"). Filters can be adjusted within Billbee or in the workflow.
- **Error Handling:** Configure additional actions for orders that fail validation, such as notifying your team or flagging orders for manual review.
- **Order Tags:** Define custom tags in Billbee to better track order statuses (e.g., "Address Corrected", "Validation Error").
- **Trigger Types:** Use additional triggers such as changes to order states (e.g., "gepackt" or "In Fulfillment") for manual corrections or validations.
- **Address Fields:** Modify the workflow to focus on specific address components, such as postal codes, city names, or country codes.
- **Validation Rules:** Adjust Endereco API settings or add custom logic to refine validation criteria based on your business needs.

### API Documentation
- **Endereco:** Endereco API Docs
- **Billbee:** Billbee API Docs
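As a loose illustration of the "Fetch Data" step, the sketch below shows how an n8n Code node might look up the order that triggered the webhook and pull its shipping address, assuming the OrderId arrives as a query parameter. The Billbee base URL, header name, auth scheme, and response field names are assumptions and must be checked against the Billbee API docs; the credential variable names are placeholders.

```javascript
// Loose sketch only: fetch the order behind the webhook and extract its
// shipping address. Verify URL, headers, and field names against Billbee docs.
const orderId = $json.query.OrderId; // passed by the Billbee automation rule

const res = await fetch(`https://app.billbee.io/api/v1/orders/${orderId}`, {
  headers: {
    'X-Billbee-Api-Key': $env.BILLBEE_API_KEY, // hypothetical credential names
    Authorization:
      'Basic ' +
      Buffer.from(`${$env.BILLBEE_USER}:${$env.BILLBEE_PASSWORD}`).toString('base64'),
  },
});
const order = await res.json();

// The address is then sent to the Endereco API; a corrected result is written
// back to Billbee, otherwise the order gets a "Validation Error" tag.
const address = order.Data?.ShippingAddress;
return [{ json: { orderId, address } }];
```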
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide explaining how to set up and implement this scenario, enabling you to chat with your documents stored in Supabase using n8n.

Youtube Link

## Who is this for?
This workflow is ideal for researchers, analysts, business owners, or anyone managing a large collection of documents. It's particularly beneficial for those who need quick contextual information retrieval from text-heavy files stored in Supabase, without needing additional services like Google Drive.

## What problem does this workflow solve?
Manually retrieving and analyzing specific information from large document repositories is time-consuming and inefficient. This workflow automates the process by vectorizing documents and enabling AI-powered interactions, making it easy to query and retrieve context-based information from uploaded files.

## What this workflow does
The workflow integrates Supabase with an AI-powered chatbot to process, store, and query text and PDF files. The steps include:
- Fetching and comparing files to avoid duplicate processing.
- Handling file downloads and extracting content based on the file type.
- Converting documents into vectorized data for contextual information retrieval.
- Storing and querying vectorized data from a Supabase vector store.

Key capabilities:
- **File Extraction and Processing:** Automates handling of multiple file formats (e.g., PDFs, text files) and extracts document content.
- **Vectorized Embeddings Creation:** Generates embeddings for processed data to enable AI-driven interactions.
- **Dynamic Data Querying:** Allows users to query their document repository conversationally using a chatbot.

## Setup: N8N Workflow
1. **Fetch File List from Supabase:** Use Supabase to retrieve the stored file list from a specified bucket. Add logic to manage the empty-folder placeholders returned by Supabase, avoiding incorrect processing.
2. **Compare and Filter Files:** Aggregate the files retrieved from storage and compare them to the existing list in the Supabase files table. Exclude duplicates and skip placeholder files to ensure only unprocessed files are handled.
3. **Handle File Downloads:** Download new files using detailed storage configurations for public/private access. Adjust the storage settings and GET requests to match your Supabase setup.
4. **File Type Processing:** Use a Switch node to target specific file types (e.g., PDFs or text files) and employ the relevant tools to process the content: for PDFs, extract the embedded content; for text files, process the text directly.
5. **Content Chunking:** Break large text data into smaller chunks using the Text Splitter node. Define the chunk size (default: 500 tokens) and overlap to retain the necessary context across chunks (a simplified chunking sketch follows below).
6. **Vector Embedding Creation:** Generate vectorized embeddings for the processed content using OpenAI's embedding tools. Ensure metadata, such as the file ID, is included for easy data retrieval.
7. **Store Vectorized Data:** Save the vectorized information into a dedicated Supabase vector store. Use the default schema and table provided by Supabase for a seamless setup.
8. **AI Chatbot Integration:** Add a chatbot node to handle user input and retrieve relevant document chunks. Use metadata like the file ID for targeted queries, especially when multiple documents are involved.

## Testing
- Upload sample files to your Supabase bucket.
- Verify that files are processed and stored successfully in the vector store.
- Ask simple conversational questions about your documents using the chatbot (e.g., "What does Chapter 1 say about the Roman Empire?").
- Test for accuracy and contextual relevance of the retrieved results.
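To make the chunking step concrete, here is a simplified JavaScript equivalent of what the Text Splitter node does before embedding. It measures chunks in characters rather than tokens for simplicity; the `fileId` and `text` field names are placeholders for whatever the previous nodes output.

```javascript
// Simplified chunking: cut the extracted document text into overlapping
// pieces so each chunk still carries enough surrounding context.
function chunkText(text, chunkSize = 500, overlap = 50) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

const fileId = $json.fileId;          // kept as metadata for targeted retrieval later
const chunks = chunkText($json.text); // extracted document text from the previous step

return chunks.map(chunk => ({ json: { fileId, chunk } }));
```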
by Mark Shcherbakov
## Video Guide
I prepared a detailed guide to help you set up your workflow effectively, enabling you to extract insights from YouTube for content generation using an AI agent.

Youtube Link

## Who is this for?
This workflow is ideal for content creators, marketers, and analysts looking to enhance their YouTube strategies through data-driven insights. It's particularly beneficial for individuals wanting to understand audience preferences and improve their video content.

## What problem does this workflow solve?
Navigating the content generation and optimization process can be complex, especially without significant audience insight. This workflow automates insight extraction from YouTube videos and comments, empowering users to create more engaging and relevant content effectively.

## What this workflow does
The workflow integrates various APIs to gather insights from YouTube videos, enabling automated comment analysis, video transcription, and thumbnail evaluation. The main functionalities include:
- Extracting user preferences from comments.
- Transcribing video content for enhanced understanding.
- Analyzing thumbnails via AI for maximum viewer engagement insights.

Key capabilities:
- **AI Insights Extraction:** Automatically pulls comments and metrics from selected YouTube creators to evaluate trends and gaps.
- **Dynamic Video Planning:** Uses transcriptions to help creators outline video scripts and topics based on audience interest.
- **Thumbnail Assessment:** Provides analysis of thumbnail designs to improve click-through rates and viewer attraction.

## Setup: N8N Workflow
1. **API Setup:** Create a Google Cloud project and enable the YouTube Data API. Generate an API key to be included in your workflow requests.
2. **YouTube Creator and Video Selection:** Start by defining a request to identify top creators based on their video views. Capture the YouTube video IDs for further analysis of comments and other video metrics.
3. **Comment Analysis:** Gather comments associated with the selected videos and analyze them for user insights (a sketch of this request follows below).
4. **Video Transcription:** Utilize the insights from transcriptions to formulate content plans.
5. **Thumbnail Analysis:** Evaluate your video thumbnails by submitting the URL through the OpenAI API to gain insights into their effectiveness.
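For illustration, the comment-collection step corresponds to a YouTube Data API v3 request like the one below. The commentThreads endpoint and its parameters follow Google's documentation; the video ID and the `YOUTUBE_API_KEY` variable name are placeholders.

```javascript
// Sketch of the comment-collection request against the YouTube Data API v3.
const videoId = 'dQw4w9WgXcQ'; // hypothetical video selected in the previous step
const url =
  'https://www.googleapis.com/youtube/v3/commentThreads' +
  `?part=snippet&videoId=${videoId}&maxResults=100&order=relevance&key=${$env.YOUTUBE_API_KEY}`;

const res = await fetch(url);
const data = await res.json();

// Flatten to plain comment text for the AI analysis step.
return (data.items || []).map(item => ({
  json: { comment: item.snippet.topLevelComment.snippet.textDisplay },
}));
```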
by Daniel Nolde
## What it is
- An automation to plan → draft → finalize and publish your textual blog post ideas to your WordPress blog.
- It works in stages and hands control back to you between those stages.
- You can use a Google Spreadsheet for planning topics and configuring LLM models and prompts.

## What it does
- Plans → drafts → finalizes the blog post topics you specify in a Google Spreadsheet, using an LLM with prompts that are also configured in that spreadsheet (even which model to use).
- Saves the results in the corresponding columns of the "Schedule" sheet in the spreadsheet.
- Hands control back to the user for inspecting or changing the results and for setting the next "Action" for the workflow.
- Finally publishes the blog post to your WordPress instance (a hedged sketch of this publish call follows below).

## Limitations
- Probably slightly over-engineered ;-)
- No media generation yet.
- Some LLM models don't work because of their output format.

## How it works
- The workflow is triggered manually or scheduled every hour.
- It ingests a Google Spreadsheet to get:
  - Config for prompts/context etc.
  - Blog topics with their status and next action.
- Depending on each blog topic's "Status" and "Action", it then either uses an LLM for the next action ("plan" → "draft" → "final" actions) or publishes the written content to your WordPress instance ("publish" action).

## Set up steps
- Import the workflow.
- Make your own copy of the Google Spreadsheet.
- Update the credentials with your individual credentials for:
  - Google Spreadsheets
  - OpenRouter
- Edit the "Settings" node and enter your individual values for:
  - Your spreadsheet copy URL
  - Your WordPress blog URL
  - Your WordPress blog username
  - Your WordPress blog app password (a 4x4 alphanumeric sequence), which you probably have to create first, and for which your WordPress user has to have 2-factor authentication enabled.
- In your own copy of the spreadsheet:
  - Individualize the "Config" sheet's "Value" column for the prompts/context/etc.
  - Populate the "Schedule" sheet with at least one line in which you specify a "Topic", a "Scheduled" date (YYYY-MM-DD HH:mm:ss), a "Status" of "idea", and an "Action" of "plan" (to kick off that action).
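To illustrate the final "publish" action, here is a hedged sketch of the WordPress REST API call it corresponds to, authenticated with the application password configured in the Settings node. The blog URL, credential variable names, and the "Final" column name are placeholders; the workflow itself makes the real call with the values from your Settings node and spreadsheet.

```javascript
// Sketch only: publish a finished draft via the WordPress REST API using
// Basic auth with an application password.
const auth = Buffer.from(`${$env.WP_USER}:${$env.WP_APP_PASSWORD}`).toString('base64');

const res = await fetch('https://your-blog.example.com/wp-json/wp/v2/posts', {
  method: 'POST',
  headers: {
    Authorization: `Basic ${auth}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    title: $json.Topic,   // from the "Schedule" sheet row
    content: $json.Final, // hypothetical column holding the finalized draft
    status: 'publish',
  }),
});
return [{ json: await res.json() }];
```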
by Ferenc Erb
## Overview
Transform your Bitrix24 Open Line channels with an intelligent chatbot that leverages Retrieval-Augmented Generation (RAG) technology to provide accurate, document-based responses to customer inquiries in real time.

## Use Case
This workflow is designed for organizations that want to enhance their customer support capabilities in Bitrix24 by providing automated, knowledge-based responses to customer inquiries. It's particularly useful for:
- Customer service teams handling repetitive questions
- Support departments with extensive documentation
- Sales teams needing quick access to product information
- Organizations looking to provide 24/7 customer support

## What This Workflow Does

### Smart Document Processing
- Automatically processes uploaded PDF documents
- Splits documents into manageable chunks
- Generates vector embeddings for semantic understanding
- Indexes content for efficient retrieval

### AI-Powered Responses
- Utilizes Google Gemini AI to generate natural language responses
- Constructs answers based on relevant document content
- Maintains conversation context for coherent interactions
- Provides fallback responses when information is not available

### Vector Database Integration
- Stores document embeddings in a Qdrant vector database
- Enables semantic search beyond simple keyword matching
- Retrieves the most relevant information for each query
- Maintains a persistent knowledge base that grows over time

### Webhook Handler
- Processes incoming messages from Bitrix24 Open Line channels
- Handles authentication and security validation
- Routes different types of events to appropriate handlers
- Manages session and conversation state

### Event Routing
Intelligently routes different event types (a routing sketch follows at the end of this description):
- ONIMBOTMESSAGEADD: Processes new user messages
- ONIMBOTJOINCHAT: Handles the bot joining a conversation
- ONAPPINSTALL: Manages application installation
- ONIMBOTDELETE: Handles bot deletion

### Document Management
- Organizes processed documents in designated folders
- Tracks document processing status
- Moves indexed documents to appropriate locations
- Maintains document metadata for reference

### Interactive Menu
- Provides menu-based options for common user requests
- Customizable menu items and responses
- Easy navigation for users seeking specific information
- Fallback to an operator option when needed

## Technical Architecture

### Components
- **Webhook Handler:** Receives and validates incoming requests from Bitrix24
- **Credential Manager:** Securely manages authentication tokens and API keys
- **Event Router:** Directs events to appropriate processing functions
- **Document Processor:** Handles document loading, chunking, and embedding
- **Vector Store:** Qdrant database for storing and retrieving document embeddings
- **Retrieval System:** Searches for relevant document chunks based on user queries
- **LLM Integration:** Google Gemini model for generating natural language responses
- **Response Manager:** Formats and sends responses back to Bitrix24

### Integration Points
- **Bitrix24 API:** For bot registration, message handling, and user interaction
- **Ollama API:** For generating document embeddings
- **Qdrant API:** For vector storage and retrieval
- **Google Gemini API:** For AI-powered response generation

## Setup Instructions

### Prerequisites
- Active Bitrix24 account with Open Line channels enabled
- Access to an n8n workflow system
- Ollama API credentials
- Qdrant vector database access
- Google Gemini API key

### Configuration Steps
1. **Initial Setup:** Import the workflow into your n8n instance, configure credentials for all services, and set up the webhook endpoints.
2. **Bitrix24 Configuration:** Create a new Bitrix24 application, configure the webhook URLs, set appropriate permissions, and install the application to your Bitrix24 account.
3. **Document Storage:** Create a designated folder in Bitrix24 for knowledge base documents, configure the folder paths in the workflow settings, and upload the initial documents to be processed.
4. **Bot Configuration:** Customize the bot name, avatar, and description; configure welcome messages and menu options; set up fallback responses.
5. **Testing:** Verify successful installation, test the document processing pipeline, and send test queries to evaluate response quality.
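As a sketch of the event-routing step, the snippet below shows how a Code node could branch on the Bitrix24 event name before handing the payload to the matching part of the workflow. The payload field names (`body.event`, `PARAMS.MESSAGE`) are assumptions about Bitrix24's imbot event format and should be verified; the route labels are illustrative.

```javascript
// Hedged sketch of event routing: inspect the Bitrix24 event name and tag the
// item with the branch the rest of the workflow should take.
const event = $json.body?.event;

switch (event) {
  case 'ONIMBOTMESSAGEADD': // a user sent a message -> run the RAG answer pipeline
    return [{ json: { route: 'answer', message: $json.body?.data?.PARAMS?.MESSAGE } }];
  case 'ONIMBOTJOINCHAT':   // bot added to a chat -> send the welcome menu
    return [{ json: { route: 'welcome' } }];
  case 'ONAPPINSTALL':      // app installed -> register the bot and store tokens
    return [{ json: { route: 'install' } }];
  case 'ONIMBOTDELETE':     // bot removed -> clean up stored credentials
    return [{ json: { route: 'cleanup' } }];
  default:
    return [{ json: { route: 'ignore', event } }];
}
```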
by Joseph LePage
## Multi-AI Agent Chatbot for Postgres/Supabase Databases and QuickChart Generation

### Who is this for?
This workflow is ideal for data analysts, developers, and business intelligence teams who need an AI-powered chatbot to query Postgres/Supabase databases and generate dynamic charts for data visualization.

### What problem does this solve?
It simplifies data exploration by combining conversational AI with database querying and chart generation. Users can interact with their database using natural language, retrieve insights, and visualize data without manual SQL queries or chart configuration.

### What this workflow does
- **AI-Powered Chat Interface:** Accepts natural language prompts to query databases or generate charts, and routes user requests through a tool-agent system to determine the appropriate action (query or chart).
- **Database Querying:** Executes SQL queries on Postgres/Supabase databases based on user input, and retrieves schema information, table definitions, and specific data records.
- **Dynamic Chart Generation:** Uses QuickChart to create bar charts, line charts, or other visualizations from database records, and outputs a shareable chart URL or a JSON configuration for further customization (see the sketch below).
- **Memory Integration:** Maintains chat history using Postgres memory nodes, enabling context-aware interactions.

Workflow diagram showcasing AI agents, database querying, and chart generation paths.

### Setup
Prerequisites:
- A Postgres-compatible database (e.g., Supabase).
- API credentials for OpenAI.

Configuration steps:
1. Add your database connection credentials in the Postgres nodes.
2. Set up OpenAI credentials for GPT-4o-mini in the language model nodes.
3. Adjust the QuickChart schema in the "QuickChart Object Schema" node to fit your use case.

Testing:
- Trigger the chat workflow via the "When chat message received" node.
- Test with prompts like "Generate a bar chart of sales data" or "Show me all users in the database."

### How to customize this workflow
- **Modify AI Prompts**
- **Add Chart Types**
- **Integrate Other Tools**
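To show what the chart path produces, here is a minimal sketch of turning a query result into a shareable QuickChart URL. The chart configuration is a standard Chart.js object; the `labels` and `values` field names are placeholders for whatever the SQL step returns.

```javascript
// Minimal sketch: wrap the query result in a Chart.js config and let
// QuickChart render it as a shareable image URL.
const chartConfig = {
  type: 'bar',
  data: {
    labels: $json.labels,                                 // e.g. month names from the SQL result
    datasets: [{ label: 'Sales', data: $json.values }],   // e.g. totals per month
  },
};

const chartUrl =
  'https://quickchart.io/chart?c=' + encodeURIComponent(JSON.stringify(chartConfig));

return [{ json: { chartUrl } }];
```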