by AI/ML API | D1m7asis
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

n8n Workflow Template: AI-Powered Mental Health Support Bot

Overview: This template enables you to build a Telegram bot that delivers real-time, empathetic mental health support. Incoming messages tagged with #vent, #insight, or #cope are routed to GPT-4o via the AI/ML API, which returns tailored, compassionate responses.

How it works:
- Telegram Trigger listens for new chat messages or voice notes.
- Show Typing Indicator immediately signals "typing…" in the chat.
- Switch Node examines the text prefix and routes to one of four branches (Vent, Insight, Cope, or default), as sketched below.
- Set Prompt nodes build a JSON payload with a specific role-play prompt for each branch.
- AI/ML API node (model gpt-4o) generates the response.
- Telegram node sends the AI's answer back to the user.

Setup Steps:
1. Connect your Telegram bot token in the Telegram credentials.
2. Add your AI/ML API key (GPT-4o) in n8n's credential settings.
3. Activate the workflow and deploy your n8n instance webhook URL to BotFather.
4. Test by sending #vent I'm stressed, #insight Why do I feel…, or any tag in your Telegram chat.

This plug-and-play workflow brings AI-driven emotional support directly into Telegram.
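For readers curious what the Switch and Set Prompt branches boil down to, here is a minimal Code-node-style sketch of the routing and prompt assembly. The tag handling follows the description above, but the prompt wording and payload field names are assumptions, not the template's exact values.

```javascript
// Hypothetical sketch of the branch routing and prompt building performed by
// the Switch and Set Prompt nodes; tags follow the description, prompt text
// and payload field names are illustrative placeholders.
const text = ($json.message?.text || '').trim();

const prompts = {
  '#vent':    'You are a compassionate listener. Let the user vent and validate their feelings.',
  '#insight': 'You are a reflective counselor. Help the user explore why they might feel this way.',
  '#cope':    'You are a supportive coach. Suggest gentle, practical coping strategies.',
};

// Default branch when no known tag prefixes the message
let role = 'You are an empathetic mental health support companion.';
let userText = text;

for (const [tag, prompt] of Object.entries(prompts)) {
  if (text.toLowerCase().startsWith(tag)) {
    role = prompt;
    userText = text.slice(tag.length).trim();
    break;
  }
}

// Assumed payload shape for the AI/ML API chat completion request
return [{
  json: {
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: role },
      { role: 'user', content: userText },
    ],
  },
}];
```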
by Hunyao
What it does
Pulls up to 700 Amazon reviews per product (recent and top-rated) and writes them straight into a Google Sheet tab you choose.

Perfect for
• Brand and product managers tracking sentiment
• Marketplace sellers analysing competitor feedback
• Agencies building product-review dashboards

Apps used
RapidAPI Real-Time Amazon Data, Google Sheets, n8n Form Trigger

How it works
1. Form Trigger collects brand, product and sheet info.
2. Code node extracts the ASIN and builds 70 API requests (10 pages × star ratings), as sketched below.
3. Split-in-batches loops through the request list, throttled by two Wait nodes.
4. HTTP Request fetches reviews from RapidAPI.
5. IF node drops empty or error responses.
6. Split Out breaks arrays into single reviews.
7. Google Sheets appends every review to the target tab.
8. The loop continues until all pages finish.

Setup
1. Fill in Brand name, Product / Model Name, Amazon Product URL, and Tab URL to insert reviews in the form.
2. Grab your X-RapidAPI-Key from RapidAPI → add it as an httpHeaderAuth credential.
3. Connect Google Sheets OAuth2 and set the spreadsheet to "Anyone with the link can edit".
4. Open Workflow Settings → set the timezone if you plan to schedule runs.
5. Hit Execute workflow or share the form link.

Credentials
• Real-Time Amazon Data (RapidAPI HTTP Header Auth)
• Google Sheets OAuth2

Limits and notes
• ~100 RapidAPI calls on the free plan. Plan your quota accordingly.
• Assumes Amazon returns 10 pages per star rating; fewer pages are skipped silently.
• Large sheets may hit Google API write quotas.

If you have any questions about running the workflow, feel free to reach out on my YouTube channel: https://www.youtube.com/@lifeofhunyao
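For reference, a rough sketch of what the ASIN-extraction Code node might look like. Seven review filters × 10 pages = 70 requests matches the description; the RapidAPI host and query parameter names are assumptions to be checked against the Real-Time Amazon Data documentation.

```javascript
// Hypothetical Code-node sketch: extract the ASIN from the product URL and
// build the request list. Endpoint and parameter names are assumed, not
// copied from the template.
const url = $json['Amazon Product URL'];

// ASINs are 10-character alphanumeric IDs that follow /dp/ or /gp/product/
const match = url.match(/(?:\/dp\/|\/gp\/product\/)([A-Z0-9]{10})/i);
if (!match) {
  throw new Error('Could not find an ASIN in the product URL');
}
const asin = match[1].toUpperCase();

// One request per page per review filter (recent, top-rated, 1-5 stars)
const filters = ['RECENT', 'TOP_REVIEWS', '1_STARS', '2_STARS', '3_STARS', '4_STARS', '5_STARS'];
const requests = [];
for (const filter of filters) {
  for (let page = 1; page <= 10; page++) {
    requests.push({
      json: {
        asin,
        url: `https://real-time-amazon-data.p.rapidapi.com/product-reviews?asin=${asin}&page=${page}&sort_by=${filter}`,
      },
    });
  }
}
return requests; // 70 items for the split-in-batches loop
```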
by Avkash Kakdiya
How it works
This workflow enhances contact intelligence by retrieving new or updated contact data, enriching it with AI and external APIs, and then updating your CRM or contact management system with intelligent insights. It automates the process of gathering, enriching, and organizing contact information to improve targeting, personalization, and engagement.

Step-by-step

1. Trigger & Input
The workflow is triggered by a scheduler or webhook event. It reads a new (or updated) contact entry from your source, such as a spreadsheet or form. Basic fields like name, email, and company are used as the starting point for enrichment.

2. Contact Lookup & Parsing
The contact's domain or company is extracted and used to perform a lookup via an external data source. Data such as company details, job title, or LinkedIn profile is retrieved, then parsed and cleaned to remove duplicates, missing values, and invalid results.

3. AI Enrichment
The enriched contact is passed through an AI model (such as GPT or another NLP service). The model analyzes job role, seniority, and inferred interests based on the available data, and generates insights like intent, persona category, or an engagement score.

4. Validation & Tagging
The AI-enriched data is validated for consistency and accuracy. Tags and segments (e.g., "Decision Maker", "Technical Buyer") are assigned based on rules or AI inference, as sketched below. This enables smart filtering, targeting, and routing later in your CRM or campaigns.

5. Output & Integration
The final enriched and validated contact is written back to your CRM, sheet, or marketing platform. The system also:
- Sends a Slack/email alert with a summary.
- Updates the original contact entry with a "Processed" or "Enriched" status.
- Triggers next steps, such as personalized outreach or nurture sequences.

Benefits
- Enhances contact profiles with AI-generated insights and third-party data.
- Improves segmentation and targeting through smart tags and persona classification.
- Automates manual research, saving time and improving accuracy.
- Easily extendable by adding more AI models, data sources, or CRM integrations.
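As a rough illustration of step 4, here is a minimal rule-based tagging sketch. All field names, thresholds, and tag labels are hypothetical placeholders rather than the workflow's actual configuration.

```javascript
// Illustrative validation-and-tagging sketch; fields, thresholds, and tag
// labels are assumptions, not taken from the template.
const contact = $json;

const tags = [];
const title = (contact.jobTitle || '').toLowerCase();

// Simple rule-based persona classification on top of the AI-enriched fields
if (/chief|vp|head|director|founder/.test(title)) tags.push('Decision Maker');
if (/engineer|developer|architect|cto/.test(title)) tags.push('Technical Buyer');
if ((contact.engagementScore ?? 0) >= 70) tags.push('High Intent');

// Basic validation: require an email and a resolved company before marking as enriched
const isValid = Boolean(contact.email && contact.company);

return [{
  json: {
    ...contact,
    tags,
    status: isValid ? 'Enriched' : 'Needs Review',
  },
}];
```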
by Stephan Koning
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Alternatively, you can delete the community node and use the HTTP node instead.

Most email agent templates are fundamentally broken. They're stateless—they have no long-term memory. An agent that can't remember past conversations is just a glorified auto-responder, not an intelligent system. This workflow is Part 1 of building a truly agentic system: creating the brain. Before you can have an agent that replies intelligently, you need a knowledge base for it to draw from. This system uses a sophisticated parser to automatically read, analyze, and structure every incoming email. It then logs that intelligence into a persistent, long-term memory powered by mem0.

The Problem This Solves
Your inbox is a goldmine of client data, but it's unstructured, and manually monitoring it is a full-time job. This constant, reactive work prevents you from scaling. This workflow solves that "system problem" by creating an "always-on" engine that automatically processes, analyzes, and structures every incoming email, turning raw communication into a single source of truth for growth.

How It Works
This is an autonomous, multi-stage intelligence engine. It runs in the background, turning every new email into a valuable data asset.

1. Real-Time Ingest & Prep: The system is kicked off by the Gmail Trigger, which constantly watches your inbox. The moment a new email arrives, the workflow fires. That email is immediately passed to the Set Target Email node, which strips it down to the essentials: the sender's address, the subject, and the core text of the message (I prefer using the plain text or HTML-as-text for reliability). While this step is optional, it's good practice for keeping the data clean and orderly for the AI.

2. AI Analysis (The Brain): The prepared text is fed to the core of the system: the AI Agent. This agent, powered by the LLM of your choice (e.g., GPT-4), reads and understands the email's content. It's not just reading; it performs analysis to:
- Extract the core message.
- Determine the sentiment (Positive, Negative, Neutral).
- Identify potential red flags.
- Pull out key topics and keywords.
The agent uses Window Buffer Memory to recall the last 10 messages within the same conversation thread, giving it the context to provide a much smarter analysis.

3. Quality Control (The Parser): We don't trust the AI's first draft blindly. The analysis is sent to an Auto-fixing Output Parser. If the initial output isn't in a perfect JSON format, a second Parsing LLM (e.g., Mistral) automatically corrects it. This is our "twist" that guarantees your data is always perfectly structured and reliable. (A sketch of the expected JSON shape follows below.)

4. Create a Permanent Client Record: This is the most critical step. The clean, structured data is sent to mem0, where the analysis is logged against the sender's email address. This moves beyond just tracking conversations; it builds a complete, historical intelligence file on every person you communicate with, creating an invaluable, long-term asset.

Optional Use: For back-filling historical data, you can disable the Gmail Trigger and temporarily connect a Gmail "Get Many" node to the Set Target Email node to process your backlog in batches.

Setup Requirements
To deploy this system, you'll need the following:
- An active n8n instance.
- Gmail API credentials.
- An API key for your primary LLM (e.g., OpenAI).
- An API key for your parsing LLM (e.g., Mistral AI).
- An account with mem0.ai for the memory layer.
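To make the parser stage concrete, here is an illustrative shape for the structured analysis and a hedged sketch of logging it to mem0. The field names and the exact mem0 request format are assumptions; consult the mem0.ai API documentation for the real signature.

```javascript
// Illustrative shape of the structured analysis the Auto-fixing Output Parser
// enforces, plus a hypothetical mem0 call; field names and the request body
// are assumptions for the sketch, not the template's exact schema.
const analysis = {
  sender: 'client@example.com',
  subject: 'Project timeline update',
  summary: 'Client asks to move the launch date forward by two weeks.',
  sentiment: 'Neutral',            // Positive | Negative | Neutral
  red_flags: ['tight deadline'],
  topics: ['timeline', 'launch', 'scope'],
};

// Assumed mem0 request: log the analysis against the sender's address so it
// becomes part of their long-term record (verify against the mem0.ai docs).
await fetch('https://api.mem0.ai/v1/memories/', {
  method: 'POST',
  headers: {
    Authorization: `Token ${$env.MEM0_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    user_id: analysis.sender,
    messages: [{ role: 'user', content: JSON.stringify(analysis) }],
  }),
});

return [{ json: analysis }];
```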
by Vitorio Magalhães
Auto-publish NASA APOD to LinkedIn with AI translation and hashtags

Transform NASA's daily astronomical wonders into engaging LinkedIn content automatically. This workflow fetches NASA's Astronomy Picture of the Day, translates it to Brazilian Portuguese using AI, generates strategic hashtags, and publishes everything to your LinkedIn profile with the stunning space image attached.

Who's it for
Content creators, astronomy enthusiasts, science communicators, and anyone wanting to share high-quality educational content consistently on LinkedIn. Perfect for Portuguese-speaking professionals who want to engage their network with fascinating space discoveries while building their personal brand as a science advocate.

How it works
The workflow runs on a daily schedule and handles the complete content pipeline automatically. It fetches the latest NASA APOD through the official API, including both the image and detailed explanation (see the sketch below). The English description is professionally translated into the selected language using Google Gemini 2.5 Flash, while maintaining scientific accuracy and terminology. Smart hashtag generation combines fixed branding tags with content-specific ones, mixing Portuguese and English for maximum reach. The final post includes the NASA image, translated description, and strategic hashtags, and is then published to your LinkedIn profile automatically.

How to set up
You'll need accounts for Google AI Studio (free), LinkedIn Developer (free), and a Telegram bot for notifications. The setup takes about 15 minutes and uses only free services and APIs.

First, create your Google AI Studio account and get an API key for the AI translation services. Then set up a LinkedIn OAuth2 application to enable posting permissions. Create a Telegram bot through BotFather and get your chat ID for notifications. Configure the Settings node with your Telegram chat ID and preferred language. The workflow comes with all prompts and configurations ready to use. Test each component individually before activating the daily automation.

Requirements
- LinkedIn account with posting permissions
- Google AI Studio API key (free tier available)
- Telegram bot token and your chat ID
- Basic understanding of OAuth2 setup for LinkedIn
- NASA API key (optional - demo key included)

All services used have generous free tiers, making this workflow completely free to operate indefinitely.

How to customize the workflow
The centralized Settings node makes customization simple. Change the target language from Brazilian Portuguese to any other language by updating the translate_to_language variable. Modify the posting schedule in the CRON trigger to match your preferred timing. Customize the post template in the "Create Final Post Text" node to match your personal brand voice. Adjust the hashtag strategy by editing the AI prompt in the "Generate Hashtags" node. Add additional social platforms by duplicating the LinkedIn publisher with different credentials. The AI prompts can be fine-tuned for different writing styles or specific astronomical topics. You can also extend the workflow to include additional content processing, image enhancements, or cross-posting to multiple platforms while maintaining the core NASA APOD automation.
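For orientation, here is a minimal sketch of the APOD fetch that starts the pipeline, assuming the documented api.nasa.gov endpoint and the DEMO_KEY fallback mentioned above; the returned field mapping is illustrative.

```javascript
// Minimal sketch of the APOD fetch step; the post-assembly fields below are
// illustrative, not the template's exact node output.
const apiKey = $env.NASA_API_KEY || 'DEMO_KEY';

const res = await fetch(`https://api.nasa.gov/planetary/apod?api_key=${apiKey}`);
const apod = await res.json();

// apod.title, apod.explanation (English), apod.url / apod.hdurl, apod.media_type
return [{
  json: {
    title: apod.title,
    explanation: apod.explanation,     // sent to Gemini for translation
    imageUrl: apod.hdurl || apod.url,
    isImage: apod.media_type === 'image',
  },
}];
```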
by Mary Newhauser
RAG over a PDF with Weaviate

This workflow allows you to upload a PDF file and ask questions about it using the Question and Answer Chain and the Weaviate Vector Store nodes.

Who it's for
This workflow is the simplest possible implementation of RAG with Weaviate in n8n. It's intended to act as an extendable template for RAG over your own documents.

Prerequisites
- An existing Weaviate cluster. You can view instructions for setting up a local cluster with Docker here or a Weaviate Cloud cluster here.
- API keys to generate embeddings and power chat models. We use OpenAI, but feel free to switch out the models as you like.
- A self-hosted n8n instance. See this video for how to get set up in just three minutes.

How it works
Part 1: Manually upload data
In this example, we manually upload a 100+ page article from arXiv called "A Survey of Large Language Models". But you can replace this with your own more advanced data pipeline, if you wish.

Part 2: Embed and load data into a Weaviate collection
Here, we generate embeddings for the full text of the article and store them in Weaviate.

Part 3: Perform RAG over the PDF file with Weaviate
In this part of the workflow, you can enter your query by running the Chat Node and get a RAG response grounded in context via the Question and Answer Chain node.

How to run the workflow
1. Go through the prerequisites: create a Weaviate cluster (local or cloud), download self-hosted n8n, and add your API keys and other credentials.
2. Select the embedding and chat models you'd like to use.
3. Upload a PDF file you want to ask questions about.
4. Execute the rest of the workflow.
by Jay Emp0
MCP Tool — Replicate (Flux) Image Generator → WordPress/Twitter

Generates images via Replicate Flux models and uploads them to WordPress (and optionally Twitter/X). Built to act as an MCP module that other agents/workflows call for on-demand image creation.

Models configured in this workflow: black-forest-labs/flux-schnell, black-forest-labs/flux-dev, black-forest-labs/flux-1.1-pro

Switch rationale: lower cost 💰, broader model choice 🎯, full control of parameters ⚙️. Leonardo API credits cannot be used in the web UI 🙅‍♂️; separate spend for API vs UI.

Links:
📜 Prior Leonardo-based workflow: https://n8n.io/workflows/6363-generate-and-upload-images-with-leonardo-ai-wordpress-and-twitter/
📰 Blog automation consuming these images: https://n8n.io/workflows/6734-ai-blog-automation-publish-hourly-seo-articles-to-wordpress-and-twitter-v3/

📥 Inputs

| Field  | Type   | Description                       |
| ------ | ------ | --------------------------------- |
| prompt | string | Text description for the image    |
| slug   | string | Filename slug for WP media        |
| model  | string | One of the configured Flux models |

Example:
{ "prompt": "Joker watching a Batman movie on his laptop", "slug": "joker-watching-batman", "model": "black-forest-labs/flux-dev" }

📤 Output
{ "public_image_url": "https://your-wp.com/wp-content/uploads/2025/08/img-joker-watching-batman.webp", "wordpress": {...}, "twitter": {...} }

🔄 Flow
1. Trigger with prompt, slug, model
2. Build model payload (quality/steps/ratio/output format)
3. Call Replicate: POST /v1/models/{model}/predictions (Prefer: wait)
4. Download the generated image URL
5. Upload to WordPress (returns public URL)
6. Optional: upload to Twitter/X
7. Return URL + metadata

🤖 MCP Use at Scale (emp0.com)
Operational pattern: I currently use this setup for my blog, where I generate 300 posts/month, each with 3–4 images (a banner plus 2 to 3 inline images) → roughly 1,000 images/month produced by this MCP.

💡 Hybrid Cost-Optimized Setup:
High-priority images (banners, main visuals): generated with Flux Dev on Leonardo for slightly better prompt adherence.
Low-priority images (inline blog visuals): generated with Flux Schnell on Replicate for maximum cost efficiency.

💰 Pricing Comparison (per image)
Leonardo per-image cost uses API Basic math: $9 / 3,500 credits = $0.0025714 per credit.
Flux Schnell (Leonardo) = 7 credits
Flux Dev (Leonardo) = 7 credits
Flux 1.1 Pro equivalent in Leonardo = Leonardo Phoenix (based on my experience) = 10 credits

| Flux Model             | Replicate           | Leonardo API              |
| ---------------------- | ------------------- | ------------------------- |
| flux-schnell           | $0.0030 (=$3/1,000) | $0.0180 (7 × $0.0025714)  |
| flux-dev               | $0.0250             | $0.0180 (7 × $0.0025714)  |
| flux-1.1-pro / Phoenix | $0.0400             | $0.0257 (10 × $0.0025714) |

Replicate pricing: https://replicate.com/pricing
Leonardo pricing: https://leonardo.ai/pricing/
Leonardo API usage: https://docs.leonardo.ai/docs/commonly-used-api-values

📊 Monthly Cost Example (1,000 images/month)
Mix: 300 × flux-dev on Leonardo, 700 × flux-schnell on Replicate.

| Platform/Model         | Images | Price per Image | Total |
| ---------------------- | ------ | --------------- | ----- |
| Leonardo flux-dev      | 300    | $0.0180         | $5.40 |
| Replicate flux-schnell | 700    | $0.0030         | $2.10 |
| Total Monthly Spend    | 1000   | —               | $7.50 |

💵 If using Leonardo for both:
300 × $0.0180 = $5.40
700 × $0.0180 = $12.60
Total = $18.00
Savings: $10.50/month (≈58% lower) with the hybrid setup.

📌 Notes
More Replicate models can be added in the Code1 node. Parameters are tuned for aspect ratio, inference steps, quality, and guidance. The Leonardo credit model is API-only; credits are not spendable in Leonardo's web UI. A code-level sketch of the Replicate call (Flow step 3) follows below.
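For context, here is a condensed sketch of the Replicate call at the heart of the flow (step 3), using the Prefer: wait header. The input parameters (aspect_ratio, output_format) differ per Flux model, so treat them as assumptions and check each model's schema on Replicate.

```javascript
// Sketch of the core Replicate prediction call; input parameter names vary
// per Flux model and are assumptions, as is the output handling.
const model = $json.model;   // e.g. "black-forest-labs/flux-dev"

const prediction = await fetch(`https://api.replicate.com/v1/models/${model}/predictions`, {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${$env.REPLICATE_API_TOKEN}`,
    'Content-Type': 'application/json',
    Prefer: 'wait',          // block until the prediction finishes (or times out)
  },
  body: JSON.stringify({
    input: {
      prompt: $json.prompt,
      aspect_ratio: '16:9',
      output_format: 'webp',
    },
  }),
}).then((r) => r.json());

// Flux models typically return an array with one image URL
const imageUrl = Array.isArray(prediction.output) ? prediction.output[0] : prediction.output;
return [{ json: { imageUrl, slug: $json.slug } }];
```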
by Calistus Christian
What this workflow does
Automatically triages risky AWS misconfigurations and alerts your team.

Pipeline: Security Hub or AWS Config -> EventBridge rules -> SNS (HTTP) -> n8n Webhook -> Normalize -> AI Prioritizer -> Airtable (log) -> Gmail (email)

- Normalizes incoming findings (S3 / Security Groups / IAM / RDS) into a consistent JSON.
- Uses an LLM to assign a priority (P0–P3) with rationale and remediation steps.
- Upserts the finding into Airtable (avoids duplicates).
- Emails a compact incident summary to your inbox. This can be swapped for Microsoft Teams, Slack, etc.

Category: Security / Cloud / Alerting
Time to set up: ~10–15 minutes
Difficulty: Beginner–Intermediate
Cost: Mostly free (n8n CE + AWS SNS/EventBridge; OpenAI + Airtable/Gmail as used)

What you'll need
- An n8n instance reachable over HTTP.
- AWS account (one region) with permissions to create SNS topics and EventBridge rules.
- Security Hub enabled (or AWS Config rules that emit compliance events).
- n8n credentials: OpenAI, Airtable, Gmail.

Nodes used
- Webhook (POST /aws-misconfig)
- Code: SNS Handler (token check, confirm/unwrap; see the sketch below)
- IF: route mode === "confirm" vs notification
- HTTP Request: SNS SubscriptionConfirmation (GET)
- Code: Normalize Finding
- Message a model: AI Prioritizer (JSON out)
- Airtable: Create/Upsert
- Gmail: Send message
- Edit Fields: final JSON response

Setup steps
1. Import and activate the workflow in n8n.
2. Webhook Respond: When Last Node Finishes -> First Entry JSON.
3. Append a shared secret to the URL, e.g. ?token=MY_SUPER_TOKEN, and keep the check in the SNS Handler code node.
4. Create an SNS topic (e.g., misconfig-events) in the same region as your EventBridge rules.
5. Create EventBridge rules targeting the SNS topic:
   - Rule A (Security Hub): source = aws.securityhub, detail-type = Security Hub Findings - Imported
   - Rule B (AWS Config): source = aws.config, detail-type = Config Rules Compliance Change
6. Create an SNS subscription with Protocol = HTTP and Endpoint = your production webhook URL: http://YOUR_HOST:5678/webhook/aws-misconfig?token=MY_SUPER_TOKEN (the workflow auto-confirms the subscription on first POST).
7. Configure Airtable (upsert on Finding ID) and Gmail recipients.
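A rough sketch of the SNS Handler logic described above: shared-token check, subscription confirmation, and unwrapping the EventBridge event from the SNS message. Variable names are illustrative, not the template's exact code.

```javascript
// Illustrative SNS Handler: verify the shared token, then either surface the
// SubscribeURL (confirmation) or unwrap the EventBridge event from the
// notification. Names are placeholders for the sketch.
const query = $json.query || {};
if (query.token !== 'MY_SUPER_TOKEN') {
  throw new Error('Invalid or missing token');
}

// SNS delivers its payload as a JSON string in the request body
const sns = typeof $json.body === 'string' ? JSON.parse($json.body) : $json.body;

if (sns.Type === 'SubscriptionConfirmation') {
  // The IF node routes mode === "confirm" to an HTTP Request that GETs the
  // SubscribeURL, completing the subscription automatically.
  return [{ json: { mode: 'confirm', subscribeUrl: sns.SubscribeURL } }];
}

// Normal notification: the EventBridge event is embedded in the SNS Message
const event = JSON.parse(sns.Message);
return [{
  json: {
    mode: 'notification',
    source: event.source,            // aws.securityhub or aws.config
    detailType: event['detail-type'],
    detail: event.detail,
  },
}];
```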
by Aadarsh Jain
Who is this for?
This workflow is designed for DevOps engineers, platform engineers, and Kubernetes administrators who want to interact with their Kubernetes clusters through natural language queries in n8n. It's perfect for teams who need quick cluster insights without memorizing complex kubectl commands or switching between multiple cluster contexts manually.

How it works?
The workflow operates in three intelligent stages:
1. Cluster Discovery & Context Switching - Automatically lists available clusters from your kubeconfig and switches to the appropriate cluster based on your natural language query.
2. Command Generation - Uses GPT-4o to analyze your request and generate the correct kubectl command with proper flags, selectors, and output formatting.
3. Command Execution - Executes the generated kubectl command against your selected cluster and returns the results.

The workflow supports multi-cluster environments and can handle queries like:
- "Show me all pods in production cluster"
- "List failing deployments in production"
- "Get pod details in kube-system namespace"

Setup
1. Clone the MCP server:
   git clone https://github.com/aadarshjain/kubectl-mcp-server
   cd kubectl-mcp-server
2. Configure your kubeconfig - ensure your ~/.kube/config contains all the clusters you want to access.
3. Set up MCP STDIO credentials in n8n:
   Command: /full/path/to/python-package
   Arguments: /full/path/to/kubectl-mcp-server/server.py
4. Import the workflow into your n8n instance.
5. Configure OpenAI credentials for the GPT-4o models.
6. Test the workflow using the chat interface with queries like "show pods in [cluster-name]".
by clancy jack
This n8n workflow recommends Taiwan indie music based on a user's city, mood, birthday, today's weather, and star sign. Here's a concise overview:

1. Trigger: Starts manually with the "When clicking 'Test workflow'" node.
2. Input Setup: The "infomation" node sets user inputs (e.g., city: Taipei, mood: Happy, birthday: 1996/11/21).
3. Song Recommendation: The "get song recommendation" node uses OpenAI's GPT-4o-mini to:
   - Fetch today's weather for the specified city.
   - Determine the user's zodiac sign from their birthday.
   - Check the zodiac sign's daily fortune.
   - Recommend a Taiwan indie song considering weather and fortune.
   - Explain the song choice and highlight its features.
   - Return results in JSON format (a sample shape is sketched below).
4. Data Extraction: The "Information Extractor" node parses the JSON output, extracting fields like date, city, weather, zodiac sign, fortune, song, artist, and additional info.
5. Spotify Search: The "Spotify" node searches for the recommended song using the artist and song name, retrieving a Spotify URL.
6. Final Output: The "Final Output" node compiles all data, including the Spotify link, into a structured format.

Additional Note: A "Sticky Note" provides context about the workflow's purpose and credits the creator, n8nguide.

This workflow integrates AI, weather data, astrology, and Spotify to deliver personalized Taiwan indie music recommendations.
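To illustrate the hand-off between the LLM and the Information Extractor, here is a hypothetical example of the JSON the prompt asks for; the keys and sample values are assumptions based on the description, not the template's exact schema.

```javascript
// Hypothetical example of the JSON shape the GPT-4o-mini node returns and the
// Information Extractor parses; keys and values are illustrative only.
const recommendation = {
  date: '2024-11-21',
  city: 'Taipei',
  weather: 'Light rain, 19°C',
  zodiac_sign: 'Scorpio',
  fortune: 'A good day to slow down and reflect.',
  song: '披星戴月的想你',
  artist: '告五人 (Accusefive)',
  additional_info: 'A mellow track that matches the rainy mood and a reflective fortune.',
};

// The Spotify node then searches with a query built from artist and song name
const spotifyQuery = `${recommendation.artist} ${recommendation.song}`;
return [{ json: { ...recommendation, spotifyQuery } }];
```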
by Custom Workflows AI
Introduction
This workflow offers a streamlined solution for uploading multiple files to a GitHub repository simultaneously using GitHub's REST API. It addresses a significant limitation of n8n's native GitHub node, which only supports single-file uploads at a time. By leveraging GitHub's Git Data API, this workflow creates a new Git tree containing multiple files, commits this tree, and updates the target branch—all in a single automated process.

The workflow is particularly valuable for automation scenarios that require batch file operations, such as deploying website updates, publishing documentation, or maintaining configuration files across repositories. It eliminates the need for multiple separate API calls when working with multiple files, making your automation more efficient and less prone to partial update issues. By abstracting the complexities of GitHub's Git Data API into a reusable workflow, it provides a practical solution for developers, content managers, and DevOps professionals who need to programmatically manage repository content at scale.

Who is this for?
This workflow is designed for:
- Developers and DevOps engineers who need to automate file updates in GitHub repositories
- Content managers who regularly publish multiple files to GitHub-hosted websites or documentation
- Automation specialists looking to integrate GitHub operations into larger workflows
- Teams using n8n for CI/CD processes who need to push code or configuration changes

Users should have basic familiarity with GitHub concepts (repositories, branches, commits) and should be comfortable obtaining and using GitHub Personal Access Tokens. While the workflow handles the API complexity, users should understand the fundamentals of version control to effectively utilize and customize it.

What problem is this workflow solving?
This workflow addresses several key challenges:
- Limited batch operations: n8n's native GitHub node only supports uploading one file at a time, making multi-file operations cumbersome and inefficient.
- API complexity: GitHub's Git Data API requires multiple sequential calls with interdependent data to create commits with multiple files, which is complex to implement manually.
- Automation bottlenecks: Without this workflow, automating multi-file updates would require either multiple separate API calls (risking partial updates) or custom scripting outside of n8n.
- Consistency issues: When files need to be updated together (e.g., code and corresponding documentation), this workflow ensures they're committed in a single atomic operation.

By solving these issues, the workflow enables reliable, atomic updates of multiple files, maintaining repository consistency and simplifying automation processes.

What this workflow does

Overview
This workflow uses GitHub's REST API to push multiple files to a repository in a single operation. It follows Git's internal model by:
1. Retrieving the current state of the repository
2. Creating a new tree with the files to be added or updated
3. Creating a new commit with this tree
4. Updating the branch reference to point to the new commit
(A code-level sketch of these API calls appears at the end of this section.)

Process
1. Initialization: The workflow starts with a manual trigger and sets up GitHub credentials and repository information.
2. File Content Definition: Two "Set" nodes define the content for the files to be uploaded.
3. Repository State Retrieval:
   - The workflow fetches the latest commit SHA for the specified branch.
   - It then retrieves the base tree SHA from this commit.
4. Tree Creation: A new Git tree is created that includes both files (file1.txt and file2.txt), specifying their paths and content.
5. Commit Creation: A new commit is created with the specified commit message, referencing the new tree and the parent commit.
6. Branch Update: Finally, the branch reference is updated to point to the new commit, making the changes visible in the repository.

Setup
To use this workflow:
1. Import the workflow: Download the workflow JSON and import it into your n8n instance.
2. Create a GitHub Personal Access Token:
   - Go to GitHub Settings → Developer Settings → Personal Access Tokens → Fine-grained tokens
   - Create a new token with "Contents" permission (Read and write) for your target repository
3. Configure the workflow: Update the "Set Github Info" node with:
   - Your GitHub Personal Access Token
   - Your GitHub username
   - Your repository name
   - The target branch (default is "main")
   - A commit message
4. Define file content: Modify the "File 1" and "File 2" nodes with the content you want to upload.
5. Adjust file paths if needed: In the "Create new tree" node, update the file paths if you want to change where the files are stored in the repository.
6. Save and run the workflow: Click "Test workflow" to execute the process.

How to customize this workflow to your needs
This workflow can be adapted in several ways:
- Add more files:
  - Create additional "Set" nodes for more file content
  - In the "Create new tree" node, add more tree entries following the same pattern (path, mode, type, content)
- Change file locations: Modify the "path" parameters in the "Create new tree" node to place files in different directories.
- Dynamic file content:
  - Replace the static content in the "File" nodes with data from other sources
  - Use previous nodes or HTTP requests to generate file content dynamically
- Conditional file updates:
  - Add IF nodes to determine which files should be updated based on certain conditions
  - Create separate branches in your workflow for different update scenarios
- Scheduled updates:
  - Replace the manual trigger with a Schedule node to run the workflow at specific intervals
  - Combine with other triggers like Webhook or database events to push files when certain events occur
- Error handling:
  - Add Error Trigger nodes to handle potential API failures
  - Implement notification nodes to alert you of successful pushes or failures
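For readers who want to see the underlying API sequence in one place, here is a condensed sketch of the five Git Data API calls the workflow chains together, written as plain fetch calls. Owner, repo, branch, and file contents are placeholders; in the template each step is its own HTTP Request node.

```javascript
// Condensed sketch of the Git Data API sequence (values are placeholders).
const token = $env.GITHUB_TOKEN;
const owner = 'your-username', repo = 'your-repo', branch = 'main';
const gh = (path, options = {}) =>
  fetch(`https://api.github.com/repos/${owner}/${repo}${path}`, {
    ...options,
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
      'Content-Type': 'application/json',
    },
  }).then((r) => r.json());

// 1. Latest commit SHA on the branch, 2. its base tree SHA
const ref = await gh(`/git/ref/heads/${branch}`);
const latestCommitSha = ref.object.sha;
const latestCommit = await gh(`/git/commits/${latestCommitSha}`);

// 3. New tree containing both files
const tree = await gh('/git/trees', {
  method: 'POST',
  body: JSON.stringify({
    base_tree: latestCommit.tree.sha,
    tree: [
      { path: 'file1.txt', mode: '100644', type: 'blob', content: 'Contents of file 1' },
      { path: 'file2.txt', mode: '100644', type: 'blob', content: 'Contents of file 2' },
    ],
  }),
});

// 4. Commit pointing at the new tree, 5. move the branch ref to it
const commit = await gh('/git/commits', {
  method: 'POST',
  body: JSON.stringify({ message: 'Add two files', tree: tree.sha, parents: [latestCommitSha] }),
});
await gh(`/git/refs/heads/${branch}`, { method: 'PATCH', body: JSON.stringify({ sha: commit.sha }) });
```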
by Jimleuk
This n8n template demonstrates how to use OpenAI's Responses API with existing LLM and AI Agent nodes. Though I would recommend just waiting for official support, if you're impatient and would like a round-about way to integrate OpenAI's Responses API into your existing AI workflows, then this template is sure to satisfy!

This approach implements a simple API wrapper for the Responses API using n8n's built-in webhooks. When the base URL is pointed to these webhooks using a custom OpenAI credential, it's possible to intercept the request and remap it for compatibility.

How it works
- An OpenAI subnode is attached to our agent but has a special custom credential where the base_url is changed to point at this template's webhooks.
- When executing a query, the agent's request is forwarded to our mini chat-completion workflow.
- Here, we take the default request and remap the values for use with an HTTP node which is set to query the Responses API.
- Once a response is received, we remap the output for LangChain compatibility, so the LLM or Agent node can parse it and respond to the user.
- There are two response formats: one for streaming and one for non-streaming responses. (A rough sketch of the remapping is shown below.)

How to use
1. You must activate this workflow to be able to use the webhooks.
2. Create the custom OpenAI credential as instructed.
3. Go to your existing AI workflows and swap the LLM node's credential for the custom OpenAI credential. You do not need to copy anything else over to the existing template.

Requirements
- OpenAI account for the Responses API

Customising this workflow
- Feel free to experiment with other LLMs using this same technique!
- Keep up to date with Responses API announcements and make modifications as required.
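To show what the remapping amounts to, here is a rough non-streaming sketch: a chat-completions-style request is reshaped for the Responses API, and the reply is reshaped back into the format the LLM/Agent node expects. The shapes follow OpenAI's documentation at the time of writing and should be verified against the current docs; node references are hypothetical.

```javascript
// Rough non-streaming sketch of the two remapping steps the webhook workflow
// performs; request/response shapes are assumptions to verify against the
// current Responses API docs, and node names are illustrative.
const incoming = $json.body; // what the OpenAI subnode sent to our webhook

// 1. Remap the chat completion request into a Responses API request
const responsesRequest = {
  model: incoming.model,
  input: incoming.messages.map((m) => ({ role: m.role, content: m.content })),
};
// ...an HTTP Request node POSTs responsesRequest to https://api.openai.com/v1/responses...

// 2. Remap the Responses API reply back into a chat-completion-shaped reply
const apiReply = $('HTTP Request').first().json; // hypothetical node reference
const text = apiReply.output
  .filter((item) => item.type === 'message')
  .flatMap((item) => item.content)
  .filter((part) => part.type === 'output_text')
  .map((part) => part.text)
  .join('');

return [{
  json: {
    id: apiReply.id,
    object: 'chat.completion',
    model: apiReply.model,
    choices: [{ index: 0, message: { role: 'assistant', content: text }, finish_reason: 'stop' }],
    usage: apiReply.usage,
  },
}];
```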