by David Ashby
Complete MCP server exposing all Mandrill Tool operations to AI agents. Zero configuration needed: all 2 operations are pre-built.

⚡ **Quick Setup**

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 **How it Works**

- **MCP Trigger**: Serves as your server endpoint for AI agent requests
- **Tool Nodes**: Pre-configured for every Mandrill operation
- **AI Expressions**: Automatically populate parameters via `$fromAI()` placeholders (see the example below)
- **Native Integration**: Uses the official n8n Mandrill node with full error handling

📋 **Available Operations (2 total)**

Every possible Mandrill operation is included:

💬 **Message (2 operations)**
- Send a message based on a template
- Send a message based on HTML

🤖 **AI Integration**

- **Parameter Handling**: AI agents automatically provide values for resource IDs and identifiers, search queries and filters, content and data payloads, and configuration options
- **Response Format**: Native Mandrill API responses with full data structure
- **Error Handling**: Built-in n8n error management and retry logic

💡 **Usage Examples**

Connect this MCP server to any AI agent or workflow:

- **Claude Desktop**: Add the MCP server URL to its configuration
- **Custom AI Apps**: Use the MCP URL as a tool endpoint
- **Other n8n Workflows**: Call MCP tools from any workflow
- **API Integration**: Make direct HTTP calls to the MCP endpoints

✨ **Benefits**

- **Complete Coverage**: Every Mandrill operation is available
- **Zero Setup**: No parameter mapping or configuration needed
- **AI-Ready**: Built-in `$fromAI()` expressions for all parameters
- **Production Ready**: Native n8n error handling and logging
- **Extensible**: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
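As a concrete illustration, each tool node parameter is populated with an n8n `$fromAI()` expression, which lets the connected agent supply the value at call time. The parameter names below are illustrative, not necessarily the exact ones used in this template:

```
{{ $fromAI('templateName', 'Name of the Mandrill template to send', 'string') }}
{{ $fromAI('toEmail', 'Recipient email address', 'string') }}
{{ $fromAI('subject', 'Email subject line', 'string') }}
```

When an AI agent calls the tool over MCP, it fills these placeholders from the conversation context, which is why no manual parameter mapping is needed.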
by Lucas Peyrin
**How it works**

This workflow automates the process of checking for and applying updates to a self-hosted n8n instance running on Docker. It runs on a schedule, checks for new versions, summarizes the release notes with AI, and asks for your approval via Telegram before updating.

1. **Scheduled Check**: The workflow runs hourly, triggered by a Schedule node.
2. **Version Discovery**: It first confirms it's running in a Docker environment. It uses SSH to connect to the host machine and inspects the running n8n container to find its current version tag (e.g., `latest` or `next`). It then queries the Docker Hub API to compare the image digest (a unique ID for an image version) of the running version against the latest available version for that tag.
3. **Update Detection**: If the digests do not match, it means a new image has been pushed for your version tag (e.g., a new `latest` image is available), and an update is needed (see the sketch after the setup steps).
4. **AI-Powered Release Notes**: It fetches the official release notes for the new version from the GitHub API. An AI model (LLM) summarizes these technical notes into a concise, human-readable overview of the key features and fixes.
5. **Manual Approval**: It sends a message to a Telegram chat with the AI-generated summary and two buttons: "✅ Update" and "❌ Ignore". The workflow then pauses and waits for your response.
6. **Execute Update**: If you approve the update, the workflow uses SSH to run a `docker compose` command on your server, which pulls the new image, stops the old containers, and starts the new ones.

**Set up steps**

Setup time: ~5-10 minutes

1. **SSH Credentials**: Go to Credentials and create a new SSH credential with the username, host, and password/private key for the server where your n8n Docker instance is running. Select this credential in the Get n8n Current Version and Update Docker nodes.
2. **Telegram Bot Credentials**: Create a Telegram Bot and get its API token. Go to Credentials and create a new Telegram credential with your bot's token. Select this credential in the Send a text message node.
3. **AI Model Credentials**: Ensure you have credentials for an AI provider (like Google AI, OpenAI, etc.) set up. Select your desired credential in the Google Gemini Chat Model node (or replace it with your preferred LLM node).
4. **Configure Paths and Commands**: Open the Docker Path node. Set the `docker_path` to the absolute path of your `docker-compose.yml` file on the server (e.g., `/root/n8n`). If you use workers, adjust the `worker_command` to include the correct `--scale` argument for your setup. If not, you can leave it blank.
5. **Set Your Chat ID**: Open the Approve Update Telegram node and enter your personal Telegram Chat ID in the Chat ID field. This ensures the approval message is sent to you.
6. **Activate the workflow**: It will now check for updates every hour.

To enable fully automatic updates (without manual approval): delete the nodes from Get n8n Releases to Approved ? and connect the Needs Update ? node directly to the Update Docker node.
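For reference, the digest comparison at the heart of the Version Discovery and Update Detection steps boils down to logic like the following Code-node sketch. The image name, field names, and use of the global `fetch` (available on Node 18+ runtimes; otherwise use `this.helpers.httpRequest`) are my assumptions, not necessarily the template's exact implementation:

```javascript
// Compare the running image's digest with the newest digest on Docker Hub.
const tag = $json.versionTag ?? 'latest'; // e.g. 'latest' or 'next', read from the container

// Docker Hub v2 API: tag metadata includes the current image digest
const res = await fetch(`https://hub.docker.com/v2/repositories/n8nio/n8n/tags/${tag}`);
const hub = await res.json();
const remoteDigest = hub.digest ?? hub.images?.[0]?.digest;

// Digest of the running container, e.g. parsed earlier from `docker inspect` RepoDigests
const currentDigest = $json.currentDigest;

return [{ json: { needsUpdate: remoteDigest !== currentDigest, remoteDigest, currentDigest } }];
```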
by Falk
**How it works**

- Collects articles from your preferred RSS feeds.
- Rates and tags each article using an AI model (e.g., QWEN 14B-s4), filtering for relevance and quality.
- Summarizes high-rated articles with a language model (e.g., Gemma3 4B) for quick, digestible reading.
- Checks for duplicates to avoid sending the same article twice (see the sketch below).
- Formats and sends the top articles as an HTML newsletter via Gmail, using OAuth2 authentication.
- Stores records in a Postgres database, tracking which articles have been sent and their ratings.

**Requirements**

- Postgres account
- AI models (use Ollama if you work locally; in the cloud, replace the Ollama node with your preferred model node)
- RSS feeds of your choice
- Google OAuth2, if you want to use Gmail

**Recommendations**

Use the local version of n8n for this workflow. More information is available here: https://github.com/falks-ai-workbench/n8n_newsletter
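The duplicate check can be pictured as a small Code-node filter like this sketch; the node name and `link` field are illustrative assumptions, not the workflow's exact names:

```javascript
// Drop articles whose links were already recorded as sent in Postgres.
// Assumes a previous node ('Get Sent Articles') returned one item per sent article.
const sentLinks = new Set($('Get Sent Articles').all().map(i => i.json.link));

// Keep only RSS items that have not been emailed before
return $input.all().filter(item => !sentLinks.has(item.json.link));
```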
by Jimleuk
This n8n template shows how anyone can build a simple newsletter-like subscription service where users can enrol themselves to receive messages/content on a regular basis. It uses n8n forms for data capture, Airtable as the database, AI for content generation, and Gmail for email sending.

**How it works**

- An n8n form is set up to allow users to subscribe with a desired topic and the interval at which to receive messages; each submission is added to Airtable.
- A scheduled trigger executes every morning and searches for subscribers who are due a message based on their desired intervals (see the sketch below).
- Once found, subscribers are sent to a subworkflow which performs the text content generation via an AI agent and also uses a vision model to generate an image. Both are attached to an email which is sent to the subscriber. This email also includes an unsubscribe link.
- The unsubscribe flow works similarly via an n8n form which, when submitted, disables further scheduled emails to the user.

**How to use**

- Make a copy of the sample Airtable here: https://airtable.com/appL3dptT6ZTSzY9v/shrLukHafy5bwDRfD
- Make sure the workflow is "activated" and the forms are available and reachable by your audience.

**Requirements**

- Airtable for the database
- OpenAI for the LLM (but compatible with others)
- Gmail for email (but can be replaced with others)

**Customising this workflow**

This simple use case can be extended to deliver any type of content, such as your company newsletter, promotions, or social media posts. It doesn't have to be limited to email: try social messaging such as WhatsApp, Telegram, and others.
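The interval matching in the scheduled trigger path can be sketched as a Code-node filter like this; the `lastSentAt` and `intervalDays` field names are illustrative, not the sample Airtable's exact columns:

```javascript
// Keep only subscribers who are due a message today.
const now = new Date();

return $input.all().filter(({ json }) => {
  if (!json.lastSentAt) return true; // never emailed yet, so they are due
  const daysSince = (now - new Date(json.lastSentAt)) / (1000 * 60 * 60 * 24);
  return daysSince >= json.intervalDays; // e.g. 1 = daily, 7 = weekly
});
```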
by Hardikkumar
This workflow automates the entire process of creating SEO-optimized meta titles and descriptions. It analyzes your webpage, spies on top-ranking competitors for the same keywords, and then uses a multi-step AI process to generate compelling, length-constrained meta tags.

🤖 **How It Works**

This workflow operates in a three-phase process for each URL you provide:

1. **Phase 1: Self-Analysis.** When you add a URL to a Google Sheet with the status "New", the workflow scrapes your page's content. The first AI then performs a deep analysis to identify the page's primary keyword, semantic keyword cluster, search intent, and target audience.
2. **Phase 2: Competitor Intelligence.** The workflow takes your primary keyword and performs a live Google search. A custom code block intelligently filters the search results to identify true competitors (see the sketch at the end of this section). A second AI analyzes their meta titles and descriptions to find common patterns and successful strategies.
3. **Phase 3: Master Generation & Update.** The final AI synthesizes all gathered intelligence (your page's data and the competitors' winning patterns) to generate a new, optimized meta title and description. It then writes this new data back to your Google Sheet and updates the status to "Generated".

⚙️ **Setup Instructions**

You should be able to set up this workflow in about 10-15 minutes ⏱️.

🔑 **Prerequisites**

You will need the following accounts and API keys:

- A Google Account with access to Google Sheets.
- A Google AI / Gemini API key.
- A SerpApi key for Google search data.
- A ScrapingDog API key for reliable website scraping.

🛠️ **Configuration**

1. **Google Sheet Setup**: Create a new Google Sheet. The workflow requires the following columns: URL, Status, Current Meta Title, Current Meta Description, Generated Meta Title, Generated Meta Description, and Ranking Factor.
2. **Add Credentials**:
   - Google Sheets nodes: Connect your Google account credentials to the Google Sheets Trigger and Google Sheets nodes.
   - Google Gemini nodes: Add your Google Gemini API key to the credentials for all three Google Gemini Chat Model nodes.
   - Scrape Website node: In this HTTP Request node, go to Query Parameters and replace `<your-api-key>` with your ScrapingDog API key.
   - Google SERP node: In this HTTP Request node, go to Query Parameters and replace `<your-api-key>` with your SerpApi API key.
3. **Configure Google Sheets Nodes**:
   - Copy the Document ID from your Google Sheet's URL.
   - Paste this ID into the "Document ID" field in the following nodes: Google Sheets Trigger, Get row(s) in sheet1, and Update row in sheet.
   - In each of those nodes, select the correct sheet name from the "Sheet Name" dropdown.

✅ **Activate Workflow**

Save and activate the workflow. To run it, simply add a new row to your Google Sheet containing the URL you want to process and set the "Status" column to New.
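The competitor-filtering code block in Phase 2 can be pictured like this sketch. The `organic_results` shape matches SerpApi's response, but the exclusion list and item count are my assumptions, not the template's exact logic:

```javascript
// Filter Google results down to true competitors (no encyclopedias,
// video platforms, forums, or your own domain).
const ownDomain = 'example.com'; // placeholder: your site's domain
const excluded = ['wikipedia.org', 'youtube.com', 'reddit.com', ownDomain];

const competitors = ($json.organic_results ?? [])
  .filter(r => !excluded.some(d => (r.link ?? '').includes(d)))
  .slice(0, 5) // keep the top handful of ranking pages
  .map(r => ({ title: r.title, snippet: r.snippet, link: r.link }));

return [{ json: { competitors } }];
```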
by Jitesh Dugar
**Overview**

Advanced AI-powered stock analysis workflow that combines multi-timeframe technical analysis with real-time news sentiment to generate actionable BUY/SELL/HOLD recommendations. Uses sophisticated algorithms to process price data, news sentiment, and market context for informed trading decisions.

**Core Features**

*Multi-Timeframe Technical Analysis*
- **4-Hour Charts** - Intraday trend analysis and entry timing
- **Daily Charts** - Primary trend identification and key levels
- **Weekly Charts** - Long-term context and major trend direction
- **Moving Average Analysis** - 5, 10, and 20-period trend indicators
- **Support/Resistance Levels** - Dynamic price level identification
- **Volume Analysis** - Trading activity and momentum confirmation

*AI-Powered News Sentiment Analysis*
- **Real-Time News Processing** - Latest market-moving headlines
- **Sentiment Scoring** - Numerical sentiment rating (-1 to +1 scale)
- **Impact Assessment** - News relevance to stock performance
- **Multi-Source Analysis** - Comprehensive news coverage evaluation
- **Context-Aware Processing** - Financial market-specific sentiment analysis

*Intelligent Recommendation Engine*
- **Professional Trading Logic** - Multi-timeframe alignment analysis
- **Risk/Reward Calculations** - Minimum 1:2 ratio requirements (see the sketch at the end of this section)
- **Entry/Exit Price Targets** - Specific actionable price levels
- **Stop-Loss Recommendations** - Risk management guidelines
- **Confidence Scoring** - Recommendation strength assessment

**Technical Capabilities**

*Data Sources & APIs*
- **TwelveData API** - Professional-grade price and volume data
- **NewsAPI Integration** - Comprehensive news coverage
- **Perplexity AI** - Additional sentiment context and analysis
- **Chart-Img API** - Visual chart generation for analysis
- **Real-Time Processing** - Live market data integration

*AI Models & Analysis*
- **GPT-4 Integration** - Advanced natural language processing
- **Custom Sentiment Engine** - Financial market-tuned sentiment analysis
- **Multi-Model Approach** - Cross-validation of recommendations
- **Algorithmic Trading Logic** - Professional-grade decision frameworks

*Visual Analysis Tools*
- **Interactive Charts** - TradingView-style chart generation
- **Technical Indicators** - Visual representation of analysis
- **Dark Theme Support** - Professional trading interface
- **Multiple Timeframes** - Comprehensive visual analysis

**Use Cases & Applications**

*Individual Traders*
- **Day Trading Signals** - Short-term entry/exit recommendations
- **Swing Trading Analysis** - Multi-day position guidance
- **Risk Management** - Stop-loss and position sizing advice
- **Market Timing** - Optimal entry point identification

*Investment Research*
- **Due Diligence** - Comprehensive stock analysis
- **Sentiment Monitoring** - News impact assessment
- **Technical Screening** - Multi-criteria stock evaluation
- **Portfolio Optimization** - Individual stock recommendations

*Automated Trading Systems*
- **Signal Generation** - Systematic buy/sell/hold alerts
- **Risk Controls** - Automated stop-loss calculations
- **Multi-Asset Analysis** - Scalable across a stock universe
- **Backtesting Support** - Historical recommendation validation

*Financial Advisors & Analysts*
- **Client Reporting** - Professional analysis documentation
- **Research Automation** - Streamlined analysis workflow
- **Decision Support** - Data-driven recommendation framework
- **Market Commentary** - AI-generated insights and rationale

**Key Benefits**

*Professional-Grade Analysis*
- **Institutional Quality** - Bank-level analytical frameworks
- **Multi-Dimensional** - Technical + fundamental + sentiment analysis
- **Real-Time Processing** - Live market data integration
- **Objective Decision Making** - Removes emotional bias from analysis

*Time Efficiency*
- **Instant Analysis** - Seconds vs hours of manual research
- **Automated Processing** - Continuous market monitoring
- **Scalable Operations** - Analyze multiple stocks simultaneously
- **24/7 Availability** - Round-the-clock market analysis

*Risk Management*
- **Built-in Stop Losses** - Automatic risk level calculation
- **Position Sizing** - Risk-appropriate recommendation sizing
- **Multi-Timeframe Validation** - Reduces false signals
- **Conservative Approach** - Defaults to HOLD when uncertain

**Setup Requirements**

*API Keys Needed*
1. TwelveData API - Free tier available at twelvedata.com
2. NewsAPI Key - Free tier available at newsapi.org
3. OpenAI API - For GPT-4 analysis capabilities
4. Perplexity API - Additional sentiment analysis
5. Chart-Img API - Optional chart visualization (chart-img.com)

*Configuration Steps*
1. **API Integration** - Add your API keys to the respective nodes
2. **Symbol Format** - Supports company names or stock symbols
3. **Risk Parameters** - Customize stop-loss and target calculations
4. **Notification Setup** - Configure alert delivery methods
5. **Testing & Validation** - Verify API connections and data flow

**Advanced Features**

*Natural Language Processing*
- **Company Name Recognition** - Automatic symbol conversion
- **Context Understanding** - Market-aware news interpretation
- **Multi-Language Support** - Global news source analysis
- **Entity Extraction** - Key information identification

*Error Handling & Reliability*
- **API Failure Recovery** - Graceful degradation strategies
- **Data Validation** - Input/output quality checks
- **Rate Limit Management** - Automatic throttling controls
- **Backup Data Sources** - Redundant information feeds

*Customization Options*
- **Timeframe Selection** - Adjustable analysis periods
- **Risk Tolerance** - Configurable risk/reward ratios
- **Sentiment Weighting** - Balance technical vs fundamental analysis
- **Alert Thresholds** - Custom trigger conditions

**Important Disclaimers**

This tool provides educational and informational analysis only. All trading decisions should:
- Consider your personal risk tolerance and financial situation
- Be validated with additional research and professional advice
- Account for market volatility and potential losses
- Follow proper risk management principles

**Performance Optimization**

*Speed Enhancements*
- **Parallel Processing** - Simultaneous data retrieval
- **Caching Strategies** - Reduced API call frequency
- **Efficient Algorithms** - Optimized calculation methods
- **Memory Management** - Scalable resource usage

*Accuracy Improvements*
- **Multi-Source Validation** - Cross-reference data points
- **Historical Backtesting** - Performance validation
- **Continuous Learning** - Algorithm refinement
- **Market Adaptation** - Evolving analysis criteria

Transform your investment research with AI-powered analysis that combines the speed of automation with the depth of professional-grade financial analysis.
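As a worked illustration of the minimum 1:2 risk/reward rule mentioned above (a sketch of the idea, not the workflow's exact code):

```javascript
// Given an entry price and a stop-loss derived from support levels,
// the price target must offer at least 2x the risked amount (long setup).
function riskRewardTargets(entry, stopLoss, minRatio = 2) {
  const risk = entry - stopLoss; // per-share risk
  if (risk <= 0) throw new Error('Stop-loss must be below entry for a long setup');
  return {
    entry,
    stopLoss,
    target: entry + risk * minRatio, // minimum 1:2 reward requirement
    riskRewardRatio: `1:${minRatio}`,
  };
}

// riskRewardTargets(100, 95) -> { entry: 100, stopLoss: 95, target: 110, riskRewardRatio: '1:2' }
```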
by Fabrizio Terzi
AI-Driven Handbook Generator with Multi-Agent Orchestration (Pyragogy AI Village)

This n8n workflow is a modular, multi-agent AI orchestration system designed for the collaborative generation of Markdown-based handbooks. Inspired by peer learning and open publishing workflows, it simulates a content pipeline where specialized AI agents act in defined roles, enabling true AI–human co-creation and iterative refinement.

This project is a core component of Pyragogy, an open framework dedicated to ethical cognitive co-creation, peer AI–human learning, and human-in-the-loop automation for open knowledge systems. It implements the master orchestration architecture for the Pyragogy AI Village, managing a complex sequence of AI agents to process input, perform review, synthesis, and archiving, with a crucial human oversight step for final approval.

**How It Works: A Deep Dive into the Workflow's Architecture**

The workflow orchestrates a sophisticated content generation and review process, ideal for creating AI-driven knowledge bases or handbooks with human oversight.

- **Webhook Trigger & Input**: The process begins when the workflow receives a JSON input via a **Webhook** (specifically at `/webhook/pyragogy/process`). This input typically includes details like the handbook's title, initial text, and relevant tags.
- **Database Verification**: It first verifies the connection to a **PostgreSQL database** to ensure data persistence.
- **Meta-Orchestrator**: A powerful **Meta-Orchestrator** (powered by gpt-4o from OpenAI) analyzes the initial request. Its role is to dynamically determine and activate the optimal sequence of specialized AI agents required to fulfill the input, ensuring tasks are dynamically routed and assigned based on each agent's responsibility.
- **Agent Execution & Iteration**: Each activated agent executes its step using OpenAI or custom endpoints. This involves:
  - *Content Generation*: Agents like the Summarizer and the Synthesizer generate new content or refine existing text.
  - *Peer Review Board*: A crucial aspect is the Peer Review Board, comprised of AI agents like the Peer Reviewer, the Sensemaking Agent, and the Prompt Engineer. This board evaluates the output for quality, coherence, and accuracy.
  - *Reprocessing & Redrafting*: If the review agents flag a `major_issue`, they trigger redrafting loops by generating specific feedback for the Synthesizer. This mechanism ensures iterative refinement until the content meets the required standards (a minimal sketch of this routing appears at the end of this section).
- **Human-in-the-Loop (HITL) Review**: For final approval, particularly for the Archivist agent's output, a **human review process** is initiated. An email is sent to a human reviewer, prompting them to approve, reject, or comment via a "Wait for Webhook" node. This ensures **human oversight** and quality control.
- **Content Persistence & Versioning**: If the content is approved by the human reviewer:
  - It's saved to a PostgreSQL database (specifically to the `handbook_entries` and `agent_contributions` tables).
  - Optionally, the content can be committed to a GitHub repository for version control, provided the necessary environment variables are configured.
- **Notifications**: The final output and the sequence of executed agents can be sent as a notification to **Slack**, if configured.

Observe the dynamic loop: orchestrate → assign → generate → review (AI/human) → store

**Included AI Agents**

This workflow leverages a suite of specialized AI agents, each with a distinct role in the content pipeline:

- **Meta-Orchestrator**: Determines the optimal sequence of agents to execute based on the input.
- **Summarizer Agent**: Summarizes text into key points (e.g., 3 key points).
- **Synthesizer Agent**: Synthesizes new text and effectively incorporates reprocessing feedback from review agents.
- **Peer Reviewer Agent**: Reviews generated text, highlighting strengths, weaknesses, and suggestions, and indicates `major_issue` flags.
- **Sensemaking Agent**: Analyzes input within existing context, identifying patterns, gaps, and areas for improvement.
- **Prompt Engineer Agent**: Refines or generates prompts for subsequent agents, optimizing their output.
- **Onboarding/Explainer Agent**: Provides explanations of the process or offers guidance to users.
- **Archivist Agent**: Prepares content for the handbook, manages the human review process, and handles archiving to the database and GitHub.

**Setup Steps & Prerequisites**

To get this powerful workflow up and running, follow these steps:

1. **Import the Workflow**: Import `pyragogy_master_workflow.json` (or `generate-collaborative-handbooks-with-gpt4o-multi-agent-orchestration-human-review.json`) into your n8n instance.
2. **Connect Credentials**:
   - Postgres: Set up a Postgres Pyragogy DB credential (ID: `pyragogy-postgres`).
   - OpenAI: Configure an OpenAI Pyragogy credential (ID: `pyragogy-openai`) for all OpenAI agents. GPT-4o is highly suggested for optimal performance.
   - Email Send: Set up a configured email credential (e.g., for sending human review requests).
3. **Define Environment Variables**: An `.env.template` is included in the repository. These include:
   - The API base for OpenAI.
   - Database connection details.
   - (Optional) GitHub: For content persistence and versioning, configure `GITHUB_ACCESS_TOKEN`, `GITHUB_REPOSITORY_OWNER`, and `GITHUB_REPOSITORY_NAME`.
   - (Optional) Slack: For notifications, configure `SLACK_WEBHOOK_URL`.
4. **Send a sample payload** to your webhook URL (`/webhook/pyragogy/process`):

```json
{
  "title": "History of Peer Learning",
  "text": "Peer learning is an educational approach where students learn from and with each other...",
  "tags": ["education", "pedagogy"],
  "requireHitl": true
}
```

**Ideal For**

This workflow is perfectly suited for:

- Educators and researchers exploring AI-assisted publishing and co-authoring with AI.
- Knowledge teams looking to automate content pipelines for internal or external documentation.
- Anyone building collaborative Markdown-driven tools or AI-powered knowledge bases.

**Documentation & Contributions: An Open Source and Collaborative Project**

This workflow is an open-source, community-driven project. Its development is transparent and open to everyone. We warmly invite you to:

- **Review it**: Contribute your analysis, identify potential improvements, or report issues.
- **Remix it**: Adapt it to your specific needs, integrate new features, or modify it for a different use case.
- **Improve it**: Propose and implement changes that enhance its efficiency, robustness, or capabilities.
- **Share it back**: Return your contributions to the community, either through pull requests or by sharing your implementations.

Every contribution is welcome and valued! All relevant information for verification, improvement, and collaboration can be found in the official repository:

🔗 GitHub – pyragogy-handbook-n8n-workflow
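The redrafting loop flagged by `major_issue` (mentioned under Agent Execution & Iteration above) amounts to routing logic like this sketch; the field names and route labels are assumptions, not the workflow's exact implementation:

```javascript
// Decide whether the draft goes back to the Synthesizer or on to the Archivist.
const review = $json.review; // e.g. { major_issue: true, feedback: '...' } from the Peer Reviewer

if (review.major_issue) {
  // Redraft: send the reviewer's feedback back to the Synthesizer
  return [{ json: { route: 'synthesizer', redraftFeedback: review.feedback } }];
}

// Approved by the AI board: continue to the Archivist and human review
return [{ json: { route: 'archivist' } }];
```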
by Alfred Nutile
**How it works**

This workflow provides a streamlined process for uploading files to Digital Ocean Spaces, making them publicly accessible. The process happens in three main steps:

1. A user submits the form with a file (in my case, images I use in my SEO tags).
2. The file is automatically uploaded to Digital Ocean Spaces using S3-compatible storage.
3. Form completion confirmation is provided.

**Setup steps**

Initial setup typically takes 5-10 minutes:

1. Configure your Digital Ocean Spaces credentials and bucket settings.
2. Test the upload functionality with a small sample file.
3. Verify public access permissions are working as expected.

**Important notes**

- Credentials are tricky: check the screenshot above for how I set the URL, bucket, etc. I am just using the S3 node.
- Set the ACL as seen below (and in the sketch at the end of this section).

**Troubleshooting**

- The bucket name might be incorrect.
- The region might be wrong.
- Check Space permissions if uploads fail.
- Verify API credentials are correctly configured.

You can see a video here (live in 24 hours): https://youtu.be/pYOpy3Ntt1o
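To make the S3 node settings concrete, here is the equivalent upload in plain code (AWS SDK v3). The region endpoint, bucket name, and keys are placeholders; the point is that Spaces takes a regional S3-compatible endpoint and a `public-read` ACL:

```javascript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'node:fs';

const s3 = new S3Client({
  region: 'us-east-1', // Spaces ignores this, but the SDK requires a value
  endpoint: 'https://nyc3.digitaloceanspaces.com', // your Space's region endpoint
  credentials: {
    accessKeyId: process.env.DO_SPACES_KEY,
    secretAccessKey: process.env.DO_SPACES_SECRET,
  },
});

await s3.send(new PutObjectCommand({
  Bucket: 'my-space-name',      // the Space (bucket) name
  Key: 'images/og-image.png',   // path inside the Space
  Body: readFileSync('./og-image.png'),
  ACL: 'public-read',           // makes the uploaded file publicly accessible
}));
```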
by Romain Jouhannet
This workflow imports Productboard data into Snowflake, automating data extraction, mapping, and updates for features, companies, and notes. It supports scheduled weekly updates, data cleansing, and Slack notifications summarizing the latest insights.

**Features**

- Fetches data from Productboard (features, companies, notes).
- Maps and processes data for Snowflake tables (see the sketch below).
- Automates table creation, truncation, and updates.
- Summarizes new and unprocessed notes.
- Sends weekly Slack notifications with key insights.

**Setup**

1. Configure Productboard and Snowflake credentials in n8n.
2. Update Snowflake table schemas to match your setup.
3. Replace the Slack channel ID and dashboard URL in the notification node.
4. Activate the workflow and set the desired schedule.
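The mapping step can be pictured as a Code-node transform like this sketch; the Productboard field names and Snowflake column names are assumptions and should be adapted to your actual schema:

```javascript
// Flatten Productboard feature objects into rows shaped for a Snowflake table.
return $input.all().map(({ json: f }) => ({
  json: {
    FEATURE_ID: f.id,
    NAME: f.name,
    STATUS: f.status?.name,
    OWNER_EMAIL: f.owner?.email,
    UPDATED_AT: f.updatedAt,
  },
}));
```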
by Ludwig
**How it works**

This workflow automates tagging for WordPress posts using AI:

1. Fetch blog post content and metadata.
2. Generate contextually relevant tags using AI.
3. Verify existing tags in WordPress and create new ones if necessary (see the sketch below).
4. Automatically update posts with accurate and optimized tags.

**Set up steps**

Estimated time: ~15 minutes.

1. Configure the workflow with your WordPress API credentials.
2. Connect your content source (e.g., RSS feed or manual input).
3. Adjust tag formatting preferences in the workflow settings.
4. Run the workflow to ensure proper tag creation and assignment.

This workflow is perfect for marketers and content managers looking to streamline their content categorization and improve SEO efficiency.
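The verify-or-create step maps onto the standard WordPress REST API endpoints (which the n8n WordPress node wraps). Here is a standalone Node.js sketch; the base URL and credentials are placeholders:

```javascript
const base = 'https://example.com/wp-json/wp/v2';
const headers = {
  Authorization: 'Basic ' + Buffer.from('user:application-password').toString('base64'),
  'Content-Type': 'application/json',
};

async function getOrCreateTag(name) {
  // Look for an existing tag with this exact name
  const found = await fetch(`${base}/tags?search=${encodeURIComponent(name)}`, { headers })
    .then(r => r.json());
  const exact = found.find(t => t.name.toLowerCase() === name.toLowerCase());
  if (exact) return exact.id;

  // Create the tag if it does not exist yet
  const created = await fetch(`${base}/tags`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ name }),
  }).then(r => r.json());
  return created.id;
}

// The post is then updated with the resolved IDs:
// POST {base}/posts/<postId> with body { tags: [12, 34, 56] }
```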
by Francis Njenga
Workflow Documentation: HR Job Posting and Evaluation with AI

**Detailed Description**

The HR Job Posting and Evaluation with AI workflow is designed to streamline and enhance recruitment for technical roles, such as Automation Specialists. By automating key stages in the hiring process, this workflow ensures a seamless experience for both candidates and HR teams. From collecting applications to evaluating candidates using AI and scheduling interviews, this workflow provides an end-to-end solution for recruitment challenges.

**Who is this for?**

This workflow is ideal for:

- **HR Professionals**: Managing multiple job postings and candidates efficiently.
- **Recruitment Teams**: Handling large volumes of applications for technical positions.
- **Hiring Managers**: Ensuring structured and objective candidate evaluations.

**What problem does this workflow solve?**

- **Time-Consuming Processes**: Automates repetitive tasks like data entry, CV management, and scheduling.
- **Fair Candidate Evaluation**: Leverages AI to provide objective insights based on resumes and job descriptions.
- **Streamlined Communication**: Ensures timely and personalized candidate interactions, improving their experience.

**What this workflow does**

This workflow automates the following steps:

1. **Form Submission**: Collects candidate information via a structured application form.
2. **Data Storage**: Stores applicant details in Airtable for centralized tracking.
3. **CV Management**: Automatically uploads resumes to Google Drive for easy access and organization.
4. **AI-Powered Candidate Evaluation**: Scores candidates based on their resumes and job descriptions using OpenAI, providing actionable insights (see the sketch at the end of this section).
5. **Interview Scheduling**: Automates scheduling based on candidate and interviewer availability.
6. **Communication**: Sends customized emails to candidates for interview invitations and feedback.

**Setup**

*Prerequisites*

To use this workflow, you'll need:

- **n8n Account**: To create and run the workflow.
- **Airtable Account**: For managing applicant data.
- **Google Drive Account**: For storing candidate CVs.
- **OpenAI API Key**: For AI-powered candidate scoring.
- **SMTP Email Account**: For sending candidate communications.

*Setup Process*

1. **Airtable Configuration**: Create a base in Airtable with tables for Applicants and Job Positions.
2. **Google Drive Setup**: Create a folder for CV storage and ensure you have write permissions.
3. **Integrate Airtable in n8n**: Use the Airtable API key to connect Airtable to n8n.
4. **Integrate Google Drive in n8n**: Authorize Google Drive to enable CV storage automation.
5. **OpenAI Integration**: Add your OpenAI API key to n8n for candidate scoring.
6. **Email Configuration**: Set up your SMTP email account in n8n for sending notifications and invitations.

**How to customize this workflow**

Tailor the workflow to fit your unique recruitment needs:

- **Edit Job Descriptions**: Adjust the form parameters to match the specific role and qualifications.
- **Refine AI Evaluation Criteria**: Modify the OpenAI prompts to reflect the skills and competencies for the desired position.
- **Personalize Email Templates**: Update email content to match your organization's tone and branding.
- **Add New Features**: Incorporate additional steps like feedback collection or integration with other HR tools.

**Conclusion**

The HR Job Posting and Evaluation with AI workflow simplifies and automates the recruitment process, enabling HR teams to focus on engaging with candidates rather than handling administrative tasks. With its powerful integrations and customization options, this workflow helps organizations hire efficiently while improving the candidate experience.
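The AI evaluation step works best when the model is asked for structured output that downstream nodes can filter on. The prompt and JSON schema below are an illustrative sketch, not the template's exact wording:

```javascript
// Build a scoring prompt for the OpenAI node (n8n Code node sketch).
const prompt = `You are an HR screening assistant.
Compare the candidate's resume to the job description and reply with JSON only:
{ "score": <0-100>, "strengths": ["..."], "gaps": ["..."], "summary": "<two sentences>" }

Job description:
${$json.jobDescription}

Resume text:
${$json.resumeText}`;

return [{ json: { prompt } }];
```

A later IF node can then route candidates with, say, `score >= 70` to interview scheduling.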
by Joseph LePage
Multi-AI Agent Chatbot for Postgres/Supabase Databases and QuickChart Generation

**Who is this for?**

This workflow is ideal for data analysts, developers, and business intelligence teams who need an AI-powered chatbot to query Postgres/Supabase databases and generate dynamic charts for data visualization.

**What problem does this solve?**

It simplifies data exploration by combining conversational AI with database querying and chart generation. Users can interact with their database using natural language, retrieve insights, and visualize data without manual SQL queries or chart configuration.

**What this workflow does**

1. **AI-Powered Chat Interface**:
   - Accepts natural language prompts to query databases or generate charts.
   - Routes user requests through a tool agent system to determine the appropriate action (query or chart).
2. **Database Querying**:
   - Executes SQL queries on Postgres/Supabase databases based on user input.
   - Retrieves schema information, table definitions, and specific data records.
3. **Dynamic Chart Generation**:
   - Uses QuickChart to create bar charts, line charts, or other visualizations from database records (see the sketch at the end of this section).
   - Outputs a shareable chart URL or a JSON configuration for further customization.
4. **Memory Integration**:
   - Maintains chat history using Postgres memory nodes, enabling context-aware interactions.

The workflow diagram showcases the AI agents, database querying, and chart generation paths.

**Setup**

*Prerequisites*
- A Postgres-compatible database (e.g., Supabase).
- API credentials for OpenAI.

*Configuration Steps*
1. Add your database connection credentials in the Postgres nodes.
2. Set up OpenAI credentials for GPT-4o-mini in the language model nodes.
3. Adjust the QuickChart schema in the "QuickChart Object Schema" node to fit your use case.

*Testing*
- Trigger the chat workflow via the "When chat message received" node.
- Test with prompts like "Generate a bar chart of sales data" or "Show me all users in the database."

**How to customize this workflow**

- **Modify AI Prompts**
- **Add Chart Types**
- **Integrate Other Tools**
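The chart path ultimately produces a QuickChart URL, which renders a Chart.js configuration passed in the `c` query parameter. A minimal Code-node sketch (the row field names are illustrative):

```javascript
// Turn query results into a shareable QuickChart URL.
const rows = $input.all().map(i => i.json); // e.g. [{ month: 'Jan', sales: 120 }, ...]

const chartConfig = {
  type: 'bar',
  data: {
    labels: rows.map(r => r.month),
    datasets: [{ label: 'Sales', data: rows.map(r => r.sales) }],
  },
};

const chartUrl = `https://quickchart.io/chart?c=${encodeURIComponent(JSON.stringify(chartConfig))}`;
return [{ json: { chartUrl } }];
```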