by Akash Kankariya
## All-in-One Portfolio Tracker & Telegram Finance Updates Workflow for n8n: Multi-Broker, Real-Time, Global 🚀

### Overview
Take control of all your investments—across multiple brokers and platforms—in one place, with live updates sent directly to your Telegram! 🌍💸 This n8n template brings together Google Sheets and Telegram so you can track your complete finance portfolio with ease, whether you're in the US market, India, or anywhere in the world.

🔧 Built by akash@codescale.tech

### How This Workflow Works
- **Tracks your investments** across multiple brokers, platforms, or asset types.
- **Automatically sends updates to your Telegram account**—see daily Profit & Loss (P&L), changes, and total returns in a rich, emoji-filled report.
- **Works globally**, with a sample provided for the US market, but can be configured for any country and broker.
- **Schedules automated updates** (e.g., market close/open) or provides real-time insights on demand via Telegram commands.

### Highlights & Features
- 📊 **Unified Dashboard**: Integrate all your broker data in one Google Sheet for effortless monitoring ([sample Google Sheet](https://docs.google.com/spreadsheets/d/1dakq9EhU8GrDgBsk82KvAen0N1P3FySAwNHFtG2lsLI/edit?usp=sharing))
- 🤖 **Interactive Telegram Bot**: Send /total or a specific broker's name in the Telegram chat to get instant, formatted portfolio summaries.
- ⏰ **Automatic Notifications**: Receive scheduled P&L summaries at market open and close.
- 🗂️ **Customizable for Any Region or Broker**: Just update your Google Sheet with the platforms or brokers you use—including those in the US, Europe, Asia, etc.
- 🔐 **Secure and Private**: Only your pre-set Telegram user or chat receives the sensitive financial update.

### Example (For the US Market)
Let's imagine you have portfolios with Robinhood, E*TRADE, and Charles Schwab.
Every day at 10 AM and 4 PM Eastern Time, or whenever you send the /total command, you get this on Telegram:

    📊 Daily P&L Report

    🔹 Robinhood
    Invested: $5,000.00
    P&L: $250.00 (5.00%)
    Change: $30.00 (0.60%)
    Current Value: $5,250.00

    🔹 E*TRADE
    Invested: $8,000.00
    P&L: $400.00 (5.00%)
    Change: $45.00 (0.56%)
    Current Value: $8,400.00

    📈 Total Portfolio
    Total Invested: $13,000.00
    Total P&L: $650.00 (5.00%)
    Today's Change: $75.00 (0.58%)

    💰 Overall Value: $13,650.00
    📈 Overall Return: 5.00%
    💸 Overall P&L: $650.00

### Easy Setup Steps
1. **Copy the template to your n8n instance**: Just import the provided workflow JSON.
2. **Configure your Google Sheet**: List all your brokers/platforms as rows (US, EU, or any other market).
3. **Update your credentials** in n8n for Google Sheets and Telegram.
4. **Set your Telegram chat ID**: Secure, so only you or your group receive updates.
5. **Customize schedules**: Change times for your local market hours or as you prefer.
6. **Send commands in Telegram**: /total for an overall summary; /Robinhood, /ETRADE, etc., for individual broker updates.

### Who Is This For?
- Investors managing accounts across several brokers.
- Traders seeking real-time daily summaries.
- Portfolio managers wanting one consolidated, secure view.
- Users in any country, for any major market.

### Make It Yours! 🌏
Customize the sheet and workflow for your unique blend of accounts, currencies, and platforms—track mutual funds, stocks, ETFs, cryptos, or more. Get peace of mind with every notification, organized and delivered just for you! Start tracking smarter, not harder. Transform your finance workflow with n8n + Telegram today! 🚀
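The totals in the sample report reduce to simple arithmetic over the sheet rows. A minimal sketch of how an n8n Code node might compute them; all field names here are illustrative assumptions, not the template's actual schema:

```javascript
// Minimal sketch: derive per-broker and portfolio-wide P&L figures from
// sheet rows. Field names (broker, invested, currentValue, todayChange)
// are illustrative assumptions.
function summarize(rows) {
  const brokers = rows.map((r) => {
    const pnl = r.currentValue - r.invested;
    return {
      broker: r.broker,
      pnl,
      pnlPct: (pnl / r.invested) * 100,
      // Daily change measured against the previous close (an assumption).
      changePct: (r.todayChange / (r.currentValue - r.todayChange)) * 100,
    };
  });
  const invested = rows.reduce((s, r) => s + r.invested, 0);
  const value = rows.reduce((s, r) => s + r.currentValue, 0);
  const pnl = value - invested;
  return { brokers, invested, value, pnl, pnlPct: (pnl / invested) * 100 };
}

const totals = summarize([
  { broker: 'Robinhood', invested: 5000, currentValue: 5250, todayChange: 30 },
  { broker: 'E*TRADE', invested: 8000, currentValue: 8400, todayChange: 45 },
]);
console.log(totals.pnl, totals.pnlPct); // → 650 5
```

The change-percentage baseline (previous close) is a guess; adjust it to match how your sheet records daily changes.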
by Rosh Ragel
## What It Does
Automatically checks your Google Calendar to determine if you're officially off work for the rest of today. If so, it auto-sends a personalized out-of-office reply via Gmail, telling senders when you'll be back, based on your next calendar entry within the next 2 weeks.

## Prerequisites
To use this template, you'll need:
- Gmail credentials (for the trigger and reply nodes)
- Google Calendar credentials (for both calendar checks)
- A dedicated work calendar selected in the Calendar nodes

## Workflow Logic
1. **Gmail Trigger**: Monitors incoming emails every minute; can be filtered (e.g., labels or VIP senders).
2. **Calendar Check #1**: Inspects whether any events remain today.
3. **Calendar Check #2**: If no events remain, scans the next 14 days for the next event.
4. **Function Node**: Formats the return date as Weekday, Month D, YYYY (e.g., "Thursday, July 24, 2025").
5. **Gmail Send**: Sends a customized out-of-office email using the formatted date; optionally includes n8n attribution (editable).

## User Setup Instructions
- **Gmail Trigger**: Connect your Gmail account and add any desired filters (labels, senders).
- **Google Calendar Nodes**: Connect your calendar account and select your "work" calendar in both nodes.
- **Function Node**: No changes needed unless you prefer a different date format.
- **Gmail Send Node**: Edit the message template and toggle attribution as desired.

## Customization Options
- Edit the final email content and tone in the Send node.
- Adjust the calendar lookahead in Calendar Check #2 (default is 14 days).
- Add Gmail filters to restrict auto-replies (e.g., only specific senders or labels).

## Why It's Useful
Ideal for freelancers, consultants, or remote workers who don't follow a strict 9–5 yet want automated responses aligned with their actual availability, not a static setting. It's dynamic, real-time, and easy to tweak.

## Classification
- Use case: Calendar-driven out-of-office automation
- Recommended audience: Business professionals, freelancers, remote employees
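The Function node's date formatting can be done with `toLocaleDateString`. A minimal sketch, assuming the Calendar node supplies an ISO date for the next event (the input value is illustrative):

```javascript
// Format an ISO date (e.g. from the Google Calendar node) as
// "Thursday, July 24, 2025". The input value is an illustrative assumption.
function formatReturnDate(isoDate) {
  return new Date(isoDate).toLocaleDateString('en-US', {
    weekday: 'long',
    year: 'numeric',
    month: 'long',
    day: 'numeric',
    timeZone: 'UTC', // avoid off-by-one-day shifts for date-only values
  });
}

console.log(formatReturnDate('2025-07-24')); // → "Thursday, July 24, 2025"
```

Swap the locale string or the options object if you prefer a different date format.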
by Aitor | 1Node
Elevate your Stripe workflows with an AI agent that intelligently, securely, and interactively handles essential Stripe data operations. Leveraging the Kimi K2 model via OpenRouter, this n8n template enables safe data retrieval, from fetching summarized financial insights to managing customer discounts, while strictly enforcing privacy, concise outputs, and operational boundaries.

## 🧾 Requirements
- **Stripe**: Active Stripe account; API key with read and write access.
- **n8n**: Deployed n8n instance (cloud or self-hosted).
- **OpenRouter**: Active OpenRouter account with credit; API key from OpenRouter.

## 🔗 Useful Links
- Stripe
- n8n Stripe Credentials Setup
- OpenRouter

## 🚦 Workflow Breakdown
1. **Trigger: User Request**: The workflow initiates when an authenticated user sends a message in the chat trigger.
2. **AI Agent (Kimi K2 via OpenRouter): Intent Analysis**: Determines whether the user wants to:
   - List customers, charges, or coupons
   - Retrieve the account's balance
   - Create a new coupon in Stripe

   It filters unsupported or unclear requests, explaining permissions or terminology as needed.
3. **Stripe Data Retrieval**: For data queries:
   - Only returns summarized, masked lists (e.g., the last 10 transactions/customers)
   - Sensitive details, such as card numbers, are automatically masked or truncated
   - Never exposes or logs confidential information
4. **Coupon Creation**: When a coupon creation is requested:
   - The AI agent collects coupon parameters (discount, expiration, restrictions)
   - It clearly summarizes the action and requires explicit user confirmation before proceeding
   - It creates the coupon upon confirmation and replies with only the public-safe coupon details

## 🛡️ Privacy & Security
- **No data storage**: All responses are ephemeral; sensitive Stripe data is never retained.
- **Strict minimization**: Outputs are tightly scoped; only partial identifiers are shown, and only when necessary.
- **Retention rules enforced**: No logs, exports, or secondary storage of Stripe data.
- **Confirmation required**: Actions modifying Stripe (like coupon creation) always require the user to approve before execution.
- **Compliance-ready**: Aligned with Stripe and general data protection standards.

## ⏱️ Setup Steps
Setup time: 10–15 minutes
1. Add Stripe API credentials in n8n.
2. Add the OpenRouter API credentials in n8n and select your desired AI model to run the agent. In our template we selected Kimi K2 from Moonshot AI.

## ✅ Summary
This workflow template connects a privacy-prioritized AI agent (Kimi K2 via OpenRouter) with your Stripe account to enable:
- Fast, summarized access to customer, transaction, coupon, and balance data
- Secure, confirmed creation of discounts/coupons
- Complete adherence to authorization, privacy, and operational best practices

## 🙋‍♂️ Need Help?
Feel free to contact us at 1 Node. Get instant access to a library of free resources we created.
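The masking behaviour described above can be illustrated with a small helper. This is a sketch, not the template's actual code; the field shapes (Stripe-style IDs, last-4 card digits) are assumptions:

```javascript
// Illustrative masking helper: keep only a prefix of a Stripe-style ID,
// truncate the email, and show only the last four digits of a card.
// Not the template's actual implementation.
function maskCustomer(c) {
  return {
    id: c.id.slice(0, 8) + '...',
    email: c.email.replace(/^(.).*(@.*)$/, '$1***$2'),
    card: c.card ? '**** **** **** ' + c.card.slice(-4) : null,
  };
}

const masked = maskCustomer({
  id: 'cus_NffrFeUfNV2Hib',
  email: 'jenny@example.com',
  card: '4242424242424242',
});
console.log(masked);
// → { id: 'cus_Nffr...', email: 'j***@example.com', card: '**** **** **** 4242' }
```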
by Nikhil Kuriakose
## How It Works
- Triggers on submitting an n8n form
- Uses the form details to prepare a message
- Sends the message to Slack

## Setup Steps
1. Add in your team name
2. Add in the message tone
3. Set up OpenAI
4. Set up Slack
by Hendriekus
## Find OAuth URIs with AI (Llama)

### Overview
The AI agent identifies:
- Authorization URI
- Token URI
- Audience

### Methodology
Confidence scoring is used to assess the trustworthiness of extracted data:
- Score range: 0 < x ≤ 1
- Score granularity: 0.01 increments

### Model Details
Leverages the Wayfarer Large 70b Llama 3.3 model.

### How It Works
This template is designed to assist users in obtaining OAuth2 settings using AI-powered insights. It is ideal for developers, IT professionals, or anyone working with APIs that require OAuth2 authentication. By leveraging the AI agent, users can simplify the process of extracting and validating key details such as the authorization_url, token_url, and audience.

### Setup Instructions
1. **Configuration nodes**
   - **Structured Output node**: Parses the AI model's output using a predefined JSON schema. This ensures the data is structured for downstream processing.
   - **Code node**: If the AI model's output does not match the required format, use the Code node to re-arrange and transform the data.
2. **AI model prompt**: The prompt for the AI model includes a detailed structure and the objectives of the query, plus flexibility for the model to improvise when accurate results cannot be determined.
3. **Confidence scoring**: The AI model assigns a confidence score (0 < x ≤ 1) to indicate the reliability of the extracted data. Scores are provided in increments of 0.01 for granularity.

### Adaptability
Customize this template:
- Update the AI model prompt with details specific to your API or OAuth2 setup.
- Adjust the JSON schema in the Structured Output node to match the data format.
- Modify the Code node logic to suit your application's requirements.
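As an example of the Code node transformation described above, the snippet below normalizes a model response whose key names don't match the expected schema. The input shape and field names are assumptions for illustration:

```javascript
// Sketch of a Code node that normalizes the model's output into the
// fields the rest of the workflow expects, and clamps the confidence
// score into (0, 1]. The raw input shape is an illustrative assumption.
function normalize(raw) {
  return {
    authorization_url: raw.authorizationUri ?? raw.authorization_url ?? null,
    token_url: raw.tokenUri ?? raw.token_url ?? null,
    audience: raw.audience ?? null,
    confidence: Math.min(Math.max(Number(raw.confidence ?? 0), 0), 1),
  };
}

const result = normalize({
  authorizationUri: 'https://login.example.com/oauth2/authorize',
  tokenUri: 'https://login.example.com/oauth2/token',
  audience: 'https://api.example.com',
  confidence: '0.97', // models often return numbers as strings
});
console.log(result.token_url); // → "https://login.example.com/oauth2/token"
```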
by Adam Bertram
An AI-powered chat assistant that analyzes Azure virtual machine activity and generates detailed timeline reports showing VM state changes, performance metrics, and operational events over time.

## How It Works
The workflow starts with a chat trigger that accepts user queries about Azure VM analysis. A Google Gemini AI agent processes these requests and uses six specialized tools to gather comprehensive VM data from Azure APIs. The agent queries resource groups, retrieves VM configurations and instance views, pulls performance metrics (CPU, network, disk I/O), and collects activity log events. It then analyzes this data to create timeline reports showing what happened to VMs during specified periods, defaulting to the last 90 days unless the user specifies otherwise.

## Prerequisites
To use this template, you'll need:
- n8n instance (cloud or self-hosted)
- Azure subscription with virtual machines
- Microsoft Azure Monitor OAuth2 API credentials
- Google Gemini API credentials
- Proper Azure permissions to read VM data and activity logs

## Setup Instructions
1. Import the template into n8n.
2. Configure credentials:
   - Add Microsoft Azure Monitor OAuth2 API credentials with read permissions for VMs and activity logs.
   - Add Google Gemini API credentials.
3. Update workflow parameters:
   - Open the "Set Common Variables" node.
   - Replace `<your azure subscription id here>` with your actual Azure subscription ID.
4. Configure triggers: the chat trigger automatically generates a webhook URL for receiving chat messages; no additional trigger configuration is needed.
5. Test the setup to ensure it works.

## Security Considerations
- Use the minimum required Azure permissions (Reader role on the subscription or resource groups).
- Store API credentials securely in the n8n credential store.
- The Azure Monitor API has rate limits, so avoid excessive concurrent requests.
- Chat sessions use session-based memory that persists during a conversation but doesn't retain data between separate chat sessions.

## Extending the Template
You can add more Azure monitoring tools such as disk metrics, network security group logs, or Application Insights data. The AI agent can be enhanced with additional tools for Azure cost analysis, security recommendations, or automated remediation actions. You could also integrate with alerting systems or export reports to external storage or reporting platforms.
by CustomJS
## n8n Workflow: Invoice PDF Generator
This n8n workflow captures invoice data and generates a PDF invoice, ready to be sent or saved. It uses a webhook to trigger the process, preprocesses the invoice data, and converts it to a PDF using HTML and custom styling, via the @custom-js/n8n-nodes-pdf-toolkit community node.

### Features
- **Webhook Trigger**: Receives incoming data, including invoice details.
- **Preprocessing**: Transforms the invoice data into HTML format.
- **HTML to PDF Conversion**: Converts the preprocessed HTML into a styled PDF document.
- **Response**: Sends the generated PDF back as the webhook response.

### Notice
Community nodes can only be installed on self-hosted instances of n8n.

### Requirements
- **Self-hosted** n8n instance
- A CustomJS API key
- **Invoice data** for PDF generation

### Workflow Steps
1. **Webhook Trigger**: Accepts incoming data (e.g., invoice number, recipient details, itemized list). This data is passed to the next node for processing.
2. **Set Data Node**: Configures initial values for the invoice, including the recipient, sender, invoice number, and the items on the invoice. The invoice details include information like description, unit price, and quantity.
3. **Preprocess Node**: Processes the raw data to format it correctly for HTML. This includes splitting addresses and converting the items into an HTML table format.
4. **HTML to PDF Conversion**: Converts the generated HTML into a PDF document. The HTML includes a header, a detailed invoice table, and a footer with contact information.
5. **Respond to Webhook**: Returns the generated PDF as a response to the initial webhook request.

### Setup Guide
1. **Configure the CustomJS API**
   - Sign up at CustomJS.
   - Retrieve your API key from the profile page.
   - Add your API key as n8n credentials.
2. **Design the workflow**
   - Create a webhook: Set up a webhook to trigger the workflow when invoice data is received.
   - Prepare data: Ensure the incoming request contains fields like "Invoice No", "Bill To", "From", and "Details" (a list of items with price and quantity).
   - Customize the HTML: The HTML template for the invoice includes custom styling to give the invoice a professional look.
   - Convert to PDF: The HTML to PDF node is configured with the data generated in the preprocessing step to convert the invoice HTML to PDF format.

### Example Invoice Data

    {
      "Invoice No": "1",
      "Bill To": "John Doe\n1234 Elm St, Apt 567\nCity, Country, 12345",
      "From": "ABC Corporation\n789 Business Ave\nCity, Country, 67890",
      "Details": [
        { "description": "Web Hosting", "price": 150, "qty": 2 },
        { "description": "Domain", "price": 15, "qty": 5 }
      ],
      "Email": "support@mycompany.com"
    }

### Result
A PDF file is returned as the webhook response.
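As a sketch of the Preprocess step's item handling, the snippet below converts the `Details` array into HTML table rows with line totals. The markup is illustrative, not the template's actual HTML:

```javascript
// Sketch of the Preprocess step: turn the "Details" array into HTML
// table rows with per-line totals. The markup here is illustrative.
function detailsToRows(details) {
  return details
    .map((d) => {
      const total = d.price * d.qty;
      return `<tr><td>${d.description}</td><td>$${d.price.toFixed(2)}</td>` +
             `<td>${d.qty}</td><td>$${total.toFixed(2)}</td></tr>`;
    })
    .join('\n');
}

const rows = detailsToRows([
  { description: 'Web Hosting', price: 150, qty: 2 },
  { description: 'Domain', price: 15, qty: 5 },
]);
console.log(rows);
```

The resulting string can be interpolated into the invoice's HTML template before the HTML to PDF node runs.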
by Oneclick AI Squad
This automated n8n workflow monitors ingredient price changes from external APIs or manual sources, analyzes historical trends, and provides smart buying recommendations. The system tracks price fluctuations in a PostgreSQL database, generates actionable insights, and sends alerts via email and Slack to help restaurants optimize their purchasing decisions.

## What Is Price Trend Analysis?
Price trend analysis uses historical price data to identify patterns and predict optimal buying opportunities. The system analyzes price movements over time and generates recommendations on when to buy ingredients based on current trends and historical patterns.

## Good to Know
- Price data accuracy depends on the reliability of external API sources.
- Historical data improves recommendation accuracy over time (a minimum of 30 days is recommended).
- The PostgreSQL database provides robust data storage and complex trend-analysis capabilities.
- Real-time alerts help capture optimal buying opportunities.
- The dashboard provides visual insights into price trends and recommendations.

## How It Works
1. **Daily Price Check**: Triggers the workflow daily to monitor price changes.
2. **Fetch API Prices**: Retrieves the latest prices from an external ingredient pricing API.
3. **Setup Database**: Ensures database tables are ready before inserting new data.
4. **Store Price Data**: Saves current prices to the PostgreSQL database for tracking.
5. **Calculate Trends**: Analyzes historical prices to detect patterns and price movements.
6. **Generate Recommendations**: Suggests actions based on price trends (buy/wait/stock up).
7. **Store Recommendations**: Saves recommendations for future reporting.
8. **Get Dashboard Data**: Gathers the data needed for dashboard generation.
9. **Generate Dashboard HTML**: Builds an HTML dashboard to visualize insights.
10. **Send Email Report**: Emails the dashboard report to stakeholders.
11. **Send Slack Alert**: Sends key alerts or recommendations to Slack channels.

## Database Structure
The workflow uses PostgreSQL with two main tables.

**price_history**: historical price tracking
- id (primary key)
- ingredient (VARCHAR 100): name of the ingredient
- price (DECIMAL 10,2): current price value
- unit (VARCHAR 50): unit of measurement (kg, lbs, etc.)
- supplier (VARCHAR 100): source supplier name
- timestamp (TIMESTAMP): when the price was recorded
- created_at (TIMESTAMP): record creation time

**buying_recommendations**: AI-generated buying suggestions
- id (primary key)
- ingredient (VARCHAR 100): ingredient name
- current_price (DECIMAL 10,2): latest price
- price_change_percent (DECIMAL 5,2): percentage change from the previous price
- trend (VARCHAR 20): price trend direction (INCREASING/DECREASING/STABLE)
- recommendation (VARCHAR 50): buying action (BUY_NOW/WAIT/STOCK_UP)
- urgency (VARCHAR 20): urgency level (HIGH/MEDIUM/LOW)
- reason (TEXT): explanation for the recommendation
- generated_at (TIMESTAMP): when the recommendation was created

## Price Trend Analysis
The system analyzes historical price data over the last 30 days to calculate percentage changes, identify trends (INCREASING/DECREASING/STABLE), and generate actionable buying recommendations based on price patterns and movement history.

## How to Use
1. Import the workflow into n8n.
2. Configure PostgreSQL database connection credentials.
3. Set up external ingredient pricing API access.
4. Configure email credentials for dashboard reports.
5. Set up Slack webhook or bot credentials for alerts.
6. Run the Setup Database node to create the required tables and indexes.
7. Test with sample ingredient data to verify price tracking and recommendations.
8. Adjust trend analysis parameters based on your purchasing patterns.
9. Monitor recommendations and refine thresholds based on actual buying decisions.

## Requirements
- PostgreSQL database access
- External ingredient pricing API credentials
- Email service credentials (Gmail, SMTP, etc.)
- Slack webhook URL or bot credentials
- Historical price data for initial trend analysis

## Customizing This Workflow
Modify the Calculate Trends node to adjust the analysis period (currently 30 days) or add seasonal adjustments. Customize the recommendation logic to match your restaurant's buying patterns, budget constraints, or supplier agreements. Add additional data sources, such as weather forecasts or market reports, for more sophisticated predictions.
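A minimal sketch of the Calculate Trends / Generate Recommendations logic as it might appear in an n8n Code node. The ±2% thresholds and the trend-to-recommendation mapping are illustrative assumptions, not the template's exact rules:

```javascript
// Sketch of trend classification: compare the latest price with a
// 30-day-ago baseline. The ±2% thresholds and the recommendation
// mapping below are illustrative assumptions.
function classifyTrend(previousPrice, currentPrice) {
  const changePct = ((currentPrice - previousPrice) / previousPrice) * 100;
  let trend = 'STABLE';
  if (changePct > 2) trend = 'INCREASING';
  else if (changePct < -2) trend = 'DECREASING';
  const recommendation =
    trend === 'INCREASING' ? 'STOCK_UP'   // lock in before further rises
    : trend === 'DECREASING' ? 'WAIT'     // price may fall further
    : 'BUY_NOW';                          // stable: buy as needed
  return { changePct, trend, recommendation };
}

const rec = classifyTrend(2.0, 1.8); // tomato price dropped from $2.00 to $1.80
console.log(rec.trend, rec.recommendation);
```

Tune the thresholds and mapping to your own purchasing patterns, as suggested in the customization notes above.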
by ivn
## About
This workflow automates the transcription of YouTube videos by processing a video URL provided via a chat message. Designed for users who need quick access to video content in text form, it ensures a seamless experience for transcribing videos on demand, regardless of the topic.

## Who Is This For?
This workflow is designed for individuals who need quick and accurate transcriptions of YouTube videos without watching them in full. It is particularly useful for:
- Students who need text-based notes from educational videos.
- Researchers looking to extract information from lectures or discussions.
- Professionals who prefer reading over watching videos.
- Casual users who want an efficient way to summarize video content.

## What Problem Is This Workflow Solving?
Manually transcribing YouTube videos is time-consuming and prone to errors. Watching long videos just to extract key information is inefficient. This workflow automates transcription, allowing users to quickly convert video content into text. Use cases include:
- Summarizing lectures or webinars.
- Extracting insights from interviews and discussions.
- Creating searchable text from video content.
- Generating reference material without watching entire videos.

## What This Workflow Does
This workflow automates the transcription of YouTube videos by:
1. **Accepting input**: The user provides a YouTube video URL through a chat message.
2. **Processing the video**: It uses an external transcription service to retrieve the full transcript of the YouTube video from the provided URL.
3. **Enhancing output**: An AI model (OpenAI) refines the transcription for accuracy and readability.
4. **Delivering results**: The final text transcript is returned to the user via the chat interface.

## Setup
1. **Install n8n**: Ensure you have n8n installed and running.
2. **Import the workflow**: Copy the JSON workflow file into your n8n instance.
3. **Configure API keys**: Set up your Supadata API key for transcription, and configure the OpenAI API key for additional processing.
4. **Run the workflow**: Provide a YouTube video URL and receive a transcription in response.

## How to Customize This Workflow to Your Needs
The workflow is flexible and can be tailored to suit specific requirements. Here are some customization ideas:
- **Language support**: Adjust the transcription language in both the HTTP Request and OpenAI nodes to support transcriptions in different languages (e.g., French, German).
- **Integrate with other services**: Store transcriptions in a database, send them via email, or connect with a document management system.
- **Notification**: Add a notification node (e.g., email or Slack) to alert you when the transcription is complete, especially for long videos.
- **Quality check**: Integrate an additional AI step to summarize or highlight key points in the transcript for quicker insights.

This workflow is designed to be scalable, efficient, and adaptable to various transcription needs.

## Limitations
- **Video length**: Very long videos may not have a complete transcription due to constraints in processing capacity or service limitations.
- **Transcription dependency**: The accuracy of the transcription relies entirely on the presence of video captions or subtitles. If a video lacks these, no transcription will be generated.
- **Access restrictions**: Private or restricted YouTube videos may not be accessible for transcription due to permission limitations.
- **Processing time**: The time required to process a video can vary significantly, especially for longer videos, depending on the transcription service and server resources.
- **Regional restrictions**: Some YouTube videos may have geographic or regional access limitations, which could prevent the workflow from retrieving the content for transcription.
by Oneclick AI Squad
In this guide, we'll walk you through setting up an AI-driven workflow that automatically processes highly-rated food photos from a Google Sheet, generates AI-powered captions, shares them to Pinterest, and updates the sheet to reflect the posts. Ready to automate your food photo sharing? Let's dive in!

## What's the Goal?
- Automatically detect and process highly-rated food photos (4 stars or above) from a Google Sheet.
- Use AI to generate engaging and relevant captions.
- Share the photos with captions to Pinterest via the Pinterest API.
- Update the Google Sheet to mark photos as posted.
- Enable scheduled automation for consistent posting.

By the end, you'll have a self-running system that shares your best food photos effortlessly.

## Why Does It Matter?
Manual photo sharing is time-consuming and inconsistent. Here's why this workflow is a game changer:
- **Zero human error**: AI ensures consistent captions and posting accuracy.
- **Time-saving automation**: Automatically handle photo sharing, boosting efficiency.
- **Scheduled posting**: Maintain a regular presence on Pinterest without manual effort.
- **Focus on creativity**: Free your team from repetitive posting tasks.

Think of it as your tireless social media assistant that keeps your Pinterest feed vibrant.

## How It Works
Here's the step-by-step magic behind the automation:
1. **Trigger the workflow**: The Daily Post Scheduler node initiates the workflow at a scheduled time (e.g., once daily) to check for new food photos.
2. **Fetch food photos from the sheet**: Retrieve rows from the Google Sheet that contain food photo metadata such as image URLs, ratings, and status.
3. **Filter 4+ star dishes**: Keep only those food entries with high ratings (4 stars or above) and unposted status.
4. **AI caption generator**: Use AI (e.g., GPT/OpenAI) to create engaging and relevant captions for the selected food photos.
5. **Upload to Pinterest**: Automatically post the food photo with the generated caption to Pinterest via the Pinterest API.
6. **Mark as posted in the sheet**: Update the Google Sheet to reflect that the photo has been successfully shared.

## How to Use the Workflow
Importing a workflow in n8n is a straightforward process that lets you use pre-built workflows to save time. Below is a step-by-step guide to importing the Automated Food Photo Sharing workflow in n8n.

1. **Obtain the workflow JSON**: Workflows are shared as JSON files or code snippets, e.g., from the n8n community, a colleague, or exported from another n8n instance. Ensure you have the workflow in JSON format, either as a file (e.g., workflow.json) or as copied text.
2. **Access the n8n workflow editor**: Log in to n8n (via n8n Cloud or a self-hosted instance), navigate to the Workflows tab in the n8n dashboard, and click Add Workflow to create a blank workflow.
3. **Import the workflow**:
   - Option 1, import from clipboard: Click the three dots (⋯) in the top-right corner, select Import from Clipboard, paste the JSON code into the text box, and click Import.
   - Option 2, import from file: Click the three dots (⋯) in the top-right corner, select Import from File, choose the .json file from your computer, and click Open.

## Setup Notes
- **Google Sheet columns**: Ensure your Google Sheet includes the following columns: Image URL, Rating (numeric, e.g., 1-5), Feedback (text), Pin Title, Pin Description, Destination URL, Board ID, and Status (e.g., "Pending" or "Posted").
- **Google Sheets credentials**: Configure OAuth2 settings in the Fetch Food Photos node with your Google Sheet ID and credentials.
- **AI model**: Set up the AI Caption Generator node with OpenAI credentials (e.g., an API key).
- **Pinterest API**: Authorize the Upload to Pinterest node with Pinterest API credentials (e.g., a Bearer token) and obtain the Board ID.
- **Scheduling**: Adjust the Daily Post Scheduler node to your preferred posting time (e.g., daily at 9 AM).
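The "Filter 4+ Star Dishes" step is a simple predicate over the sheet rows. A sketch using the column names from the Setup Notes (the exact row shape delivered by the Google Sheets node may differ):

```javascript
// Sketch of the rating filter: keep rows with a rating of 4 or more
// that have not yet been posted. Column names follow the Setup Notes.
function selectPostable(rows) {
  return rows.filter((r) => Number(r.Rating) >= 4 && r.Status !== 'Posted');
}

const picked = selectPostable([
  { 'Image URL': 'https://example.com/a.jpg', Rating: 5, Status: 'Pending' },
  { 'Image URL': 'https://example.com/b.jpg', Rating: 3, Status: 'Pending' },
  { 'Image URL': 'https://example.com/c.jpg', Rating: 4, Status: 'Posted' },
]);
console.log(picked.length); // → 1
```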
by Dajeel Dulal
Turn any LinkedIn post into a personalized cold email opener that sounds like a human wrote it, in seconds. Whether you're in sales, partnerships, or outreach, this tool reads LinkedIn posts like a human, distills the core message, and gives you a smart, conversational opener to kick off the relationship the right way.

## How It Works
1. Paste the post and author info into a short form.
2. AI reads the post like a B2B sales expert would.
3. The output is a personalized opener, the company name, the prospect's name, and next steps.
4. Copy-paste into your cold email and hit send.

The opener isn't generic fluff — it references real details, sounds natural, and shows you actually paid attention.

## Perfect For
- SDRs and BDRs
- Agency outreach
- Partnership prospecting
- Any cold outreach that starts with a real conversation

## Setup Steps
Setup time: ~2–3 minutes
1. Add your OpenAI credentials (or use n8n's built-in credits).
2. Open the form and test it with the sample post.
3. Tweak the AI prompt if you want to target a different niche or tone.
4. (Optional) Connect to Google Sheets, a CRM, or your email tool.

You're live.
by Jimleuk
This n8n template demonstrates an approach to image embeddings for the purpose of building a quick contextual image search. Use cases could include a personal photo library, product recommendations, or searching through video footage.

## How It Works
1. A photo is imported into the workflow via Google Drive.
2. The photo is processed by the Edit Image node to extract colour information. This information forms part of the semantic metadata used to identify the image.
3. The photo is also processed by a vision-capable model, which analyses the image and returns a short description with semantic keywords.
4. Both pieces of information about the image are combined with the image's metadata to form a document describing the image.
5. This document is then inserted into the vector store as a text embedding associated with the image.
6. From here, the user can query the vector store as they would any document, and the relevant image references and/or links should be returned.

## Requirements
- Google account to download image files from Google Drive.
- OpenAI account for the vision-capable AI and embedding models.

## Customise This Workflow
Text summarisation is just one of many techniques for generating image embeddings. If the results are unsatisfactory, there are dedicated image embedding models, such as Google's Vertex AI multimodal embeddings.
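The "document describing the image" can be as simple as joining the collected pieces into one text block before it is embedded. A sketch with assumed field names (the real workflow's metadata fields may differ):

```javascript
// Sketch: combine colour info, the vision model's description, and file
// metadata into one text document for embedding. Field names are
// illustrative assumptions.
function buildImageDocument(meta, colours, description) {
  return [
    `Filename: ${meta.name}`,
    `Drive link: ${meta.webViewLink}`,
    `Dominant colours: ${colours.join(', ')}`,
    `Description: ${description}`,
  ].join('\n');
}

const doc = buildImageDocument(
  { name: 'beach.jpg', webViewLink: 'https://drive.google.com/file/d/xyz' },
  ['azure', 'sand'],
  'Two people walking along a sunny beach at low tide.'
);
console.log(doc.split('\n').length); // → 4
```

Keeping the Drive link inside the embedded document is one way to make the search result point straight back to the original image.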