by Dr. Firas
# Auto-Publish Social Videos to 9 Platforms via Google Sheets and Blotato

## Who is this workflow for?
This workflow is ideal for marketers, content creators, virtual assistants, and automation specialists managing multi-platform video content. It's especially useful for teams that want to centralize publishing in a spreadsheet and automate social distribution in one shot.

## What problem does this workflow solve?
Manually posting videos to multiple social platforms is tedious and time-consuming. This workflow streamlines video distribution using Blotato's API, with no more switching between platforms or re-uploading the same video multiple times.

## What this workflow does
This automation reads video metadata (URL, caption, title) from a Google Sheet, uploads the video to Blotato, and automatically publishes it to Instagram, YouTube, TikTok, Facebook, LinkedIn, Threads, Twitter (X), Pinterest, and Bluesky. It also updates the sheet to reflect the publishing status (STATUS = DONE), keeping your data clean and trackable.

## Setup
1. Set up your Google Sheet with the required columns: PROMPT, DESCRIPTION, URL VIDEO, Titre, row_number, and STATUS.
2. Add your Blotato API key in the headers of the Upload Video and Post to X nodes.
3. Replace the platform-specific IDs in the Assign Social Media IDs node (Instagram ID, Facebook Page ID, etc.).
4. Set the schedule in the Schedule Trigger node to define when publishing happens.

> ⚠️ Disclaimer: This workflow uses Community Nodes, which are only available on self-hosted n8n instances.

## How to customize this workflow
- Add logic to skip rows already marked as DONE (see the sketch below).
- Expand to more platforms supported by Blotato.
- Use a webhook or Telegram trigger instead of the scheduler for more interactivity.
- Modify content per platform if needed (caption formatting, hashtags, etc.).

📄 Documentation: Notion Guide
🎥 Demo Video: Watch the full tutorial here: YouTube Demo
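A minimal Code node sketch for the first customization, placed between the Google Sheets read and the upload. It assumes the sheet node outputs one item per row with a STATUS column, as described above:

```javascript
// Code node ("Run Once for All Items"): keep only rows not yet published.
// Assumes each incoming item is one sheet row carrying a STATUS field.
return $input.all().filter(item => {
  const status = (item.json.STATUS ?? '').toString().trim().toUpperCase();
  return status !== 'DONE';
});
```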
by Yang
## Who is this for?
This workflow is built for marketers, sales teams, agencies, virtual assistants, and anyone who regularly researches or contacts local businesses. It's ideal for building lead lists, tracking competitors, or creating location-specific outreach campaigns.

## What problem is this workflow solving?
Instead of manually searching Google Maps and copying business info into spreadsheets, this automation pulls structured business data (e.g. restaurants, gyms, service providers) and logs it directly into Google Sheets. It saves hours of work and ensures cleaner, more usable data.

## What this workflow does
The workflow takes a Google Maps search query (like "best restaurants in New York") and sends it to Dumpling AI. Dumpling AI returns a list of places including their name, address, website, phone number, rating, and more. Each result is split into its own row and automatically added to a Google Sheet.

## Setup
1. **Dumpling AI**
   - Sign up at Dumpling AI
   - Generate your API key
   - In the HTTP Request node, select Header Auth and paste your key in the Authorization field
2. **Google Sheets**
   - Create a sheet with a tab named Leads
   - Add the following column headers to row 1: Name, Address, Phone number, Website, Rating, Price Level, Type, Booking Link, Position
   - Connect your Google Sheets account and link this sheet in the node
3. **Customize the query**
   - In the HTTP node, replace the query string (e.g., "best+restaurants+in+New+York") with your own search term
4. **Run it**
   - Use the manual trigger to test
   - Optionally swap in a Schedule or Webhook node to run it automatically

## How to customize this workflow to your needs
- Change the search query to target different cities or business types
- Use filters to only save leads with a minimum rating or price level
- Add GPT to summarize listings or qualify leads
- Swap Google Sheets for Airtable or a CRM system for deeper integration
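A sketch of the row-splitting step as an n8n Code node. The `places` field name and its sub-fields are assumptions; check them against the actual Dumpling AI response in your execution data:

```javascript
// Code node sketch: turn one Dumpling AI response into one item per place,
// mapped onto the Google Sheet's column headers from the setup above.
const response = $input.first().json;
return (response.places ?? []).map(place => ({
  json: {
    Name: place.title,
    Address: place.address,
    'Phone number': place.phoneNumber,
    Website: place.website,
    Rating: place.rating,
    'Price Level': place.priceLevel,
    Type: place.type,
    'Booking Link': place.bookingLink,
    Position: place.position,
  },
}));
```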
by Ranjan Dailata
## Who is this for?
This workflow is designed for professionals and teams who need real-time, structured insights from Google Search results without manual effort.

## What problem is this workflow solving?
This n8n workflow automates Google Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, such as sending the results to a webhook or another system.

## What this workflow does
- **Automates Google Search via Bright Data.** Uses Bright Data's proxy-based SERP API to run a Google Search query programmatically, making the process repeatable and scriptable across different search terms and regions/zones.
- **Cleans and extracts useful content.** The Google Search Data Extractor uses LLM-based cleaning to strip HTML/CSS/JS from the response and extract pure text, converting messy, unstructured web content into a structured, machine-readable format.
- **Summarizes search results.** Through the Gemini Flash + Summarization Chain, it generates a concise summary of the search results, ideal for users who don't have time to read full pages of results.
- **Formats data using an AI Agent.** The AI Agent acts like a virtual assistant that understands the search results, formats them in a readable, JSON-compatible form, and prepares them for webhook delivery.
- **Delivers results to a webhook.** Sends the final summary plus the structured search results to a webhook (your app, a Slack bot, Google Sheets, or a CRM).

## Setup
1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to `Bearer XXXXXXXXXXXXXX`, replacing `XXXXXXXXXXXXXX` with your Web Unlocker token.
4. Add a Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Google Search query as you wish in the Set Google Search Query node.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

## How to customize this workflow to your needs
1. **Change the search input.** Default: a fixed query or dataset. Customize: accept input from a Google Sheet, Airtable, or a form, or auto-trigger searches based on keywords or schedules.
2. **Customize the summarization style (LLM output).** Default: a general summary using Google Gemini or OpenAI. Customize: add a tone (formal, casual, technical, executive summary, etc.), focus on specific sections (pricing, competitors, FAQs), translate the summaries into multiple languages, or add bullet points, pros/cons, or insight tags.
3. **Choose where the results go.** Options: email, Slack, Notion, Airtable, Google Docs, or a dashboard. Auto-create content drafts for WordPress or newsletters, or feed the results into CRM notes and attach them to Salesforce leads.
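For reference, the call the HTTP Request node makes looks roughly like this sketch. The endpoint and body fields follow Bright Data's Web Unlocker documentation at the time of writing (verify against your account), and the zone name is a placeholder:

```javascript
// Minimal sketch of the Bright Data Web Unlocker call behind the HTTP Request node.
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer YOUR_WEB_UNLOCKER_TOKEN',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'your_web_unlocker_zone',          // placeholder zone name
    url: 'https://www.google.com/search?q=n8n+automation&gl=us',
    format: 'raw',                            // raw SERP HTML, cleaned downstream by the extractor
  }),
});
const html = await response.text();
```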
by explorium
# Explorium Event-Triggered Outreach

This n8n, agent-based workflow automates outbound prospecting by monitoring Explorium event data (e.g. product launches, new office openings, new investments, and more), researching companies, identifying key contacts, and generating tailored sales emails using the Explorium MCP server.

## Template Workflow Overview

### Node 1: Webhook Trigger
**Purpose:** Listens for real-time product launch events pushed from Explorium's webhook system.
**How it works:**
- Explorium sends HTTP POST requests containing event data
- The webhook payload includes company name, business ID, domain, product name, and event type

> Pay attention: a product launch is just one example; you can easily enroll in many more meaningful events. To learn about events and how to enroll in them, visit the events documentation.

### Node 2: Company Research Agent
**Agent type:** Tools Agent
**Purpose:** Enrich company data after an event occurs.
**How it works:**
- Uses Explorium MCP via the MCP Client tool to gather additional company data
- Uses Anthropic Claude (Chat Model) to process and interpret company information for downstream personalization

### Node 3: Employee Data Retrieval
**Purpose:** Retrieve prospect-level data for targeting.
**How it works:**
- Uses an HTTP Request node to call Explorium's fetch_prospects endpoint
- Filters prospects by:
  - Company business_id
  - Departments: Product, R&D, etc.
  - Seniority levels: owner, cxo, vp, director, senior, manager, partner, etc.
- Limits results to the top 5 relevant employees
- Code nodes handle filtering logic, cleaning the API response, and formatting data for downstream agents (see the sketch at the end of this entry)

> Pay attention: follow the fetch prospects documentation for the full list of filters and best practices.

### Node 4: Conditional Branch - Prospect Data Check
**If node:** Checks whether prospect data was successfully retrieved.
**Logic:**
- If prospects are found → personalized emails per person
- If no prospects are found → fall back to a company-level general email

### Node 5A: Email Writer #1 (No Prospect Data)
**Agent type:** Tools Agent
**Purpose:** Write a generic outbound email using only company-level research and event info.
**Powered by:** Anthropic Chat Model

### Node 5B: Loop Over Prospects → Email Writer #2 (Personalized)
**Agent type:** Tools Agent
**Purpose:** Write a highly personalized email for each identified employee.
**How it works:**
- Loops through each individual prospect
- Passes company research + employee data to the LLM agent
- Generates customized emails referencing:
  - The prospect's title and department
  - The product launch
  - A role-relevant Explorium value proposition

### Node 6: Slack Notifications
**Purpose:** Posts completed emails to an internal Slack channel for review or testing before final deployment.
**Future state:** Can be swapped for an email sequencing platform in production.

## Setup Requirements
- **Explorium API access**
  - MCP Client credentials for company enrichment and prospect fetching
  - Registered webhook for event listening
  - Get an Explorium API key
- **n8n configuration**
  - Secure environment variables for API keys and the webhook secret
  - Code nodes configured for JSON transformation, filtering, and signature validation

## Customization Options
- **Personalization logic**
  - Update LLM prompt instructions to reflect ICP priorities
  - Modify email templates based on role, department, or tenure logic
  - Adjust fallback behavior when prospect data is unavailable
- **API request tuning**
  - Adjust page_size for the number of prospects retrieved
  - Fine-tune seniority and department filters to match evolving targeting
- **Future expansion**
  - Swap Slack notifications for outbound email automation
  - Integrate call task assignment directly into the CRM
  - Introduce an engagement-scoring feedback loop (opens, clicks, replies)

## Troubleshooting Tips
- Validate webhook signature matching to prevent unauthorized requests
- Ensure the correct business_id is passed to the prospect fetching endpoint
- Confirm business enrichment returns sufficient data for the company research agent
- Review agent LLM responses for correct output structure and parsing consistency
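A sketch of the Node 3 Code logic described above. The `prospects` field and its sub-fields are assumptions; check them against Explorium's fetch prospects documentation and your actual response:

```javascript
// Code node sketch: clean the fetch_prospects response and keep the top 5.
const ALLOWED_SENIORITY = ['owner', 'cxo', 'vp', 'director', 'senior', 'manager', 'partner'];

const prospects = $input.first().json.prospects ?? [];
return prospects
  .filter(p => ALLOWED_SENIORITY.includes((p.seniority ?? '').toLowerCase()))
  .slice(0, 5)
  .map(p => ({
    json: {
      name: p.full_name,        // field names are assumptions; verify against the API
      title: p.job_title,
      department: p.department,
      seniority: p.seniority,
      email: p.email,
    },
  }));
```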
by ist00dent
This n8n template allows you to automatically create shortened URLs using the TinyURL API by simply sending a webhook request. It's a quick and efficient way to integrate URL shortening into your automated workflows, ideal for sharing long links in social media, emails, or other applications.

## 🔧 How it works
- **Receive Link Webhook:** This node acts as the entry point for the workflow. It listens for incoming POST requests and expects a JSON body containing the url to be shortened and your api_key for TinyURL.
- **Create TinyURL:** This node sends a POST request to the TinyURL API, passing the long URL and your API key. It can also accept optional parameters like domain, alias, and description to customize the shortened link.
- **Respond with Shortened URL:** This node sends the response from the TinyURL API (which includes the new shortened URL) back to the service that initiated the webhook.

## 👤 Who is it for?
This workflow is ideal for:
- **Content managers & marketers:** Quickly shorten links for campaigns, social media posts, or tracking.
- **Developers:** Automate link shortening within applications or scripts.
- **Automation enthusiasts:** Integrate a URL shortener into various n8n workflows (e.g., after generating a report, before sending a notification).
- **Anyone needing on-demand short links:** A flexible solution for ad-hoc link shortening.

## 📑 Data Structure
When you trigger the webhook, send a POST request with a JSON body structured as follows:

```
{
  "api_key": "YOUR_TINYURL_API_KEY",
  "url": "https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2",
  "domain": "tinyurl.com",          // Optional: defaults to tinyurl.com
  "alias": "myCustomAlias",         // Optional: desired custom alias for the link
  "description": "My project link"  // Optional: description for the link
}
```

The workflow returns the JSON response directly from the TinyURL API, which includes the short_url and other details about the newly created link.

## ⚙️ Setup Instructions
1. **Obtain a TinyURL API key:** Before importing, make sure you have an API key from TinyURL. You can typically get this by signing up for an account on their website.
2. **Import the workflow:** In your n8n editor, click "Import from JSON" and paste the provided workflow JSON.
3. **Configure the webhook path:** Double-click the Receive Link Webhook node and set a unique, descriptive path in the 'Path' field (e.g., /shorten-link).
4. **Activate the workflow:** Save and activate the workflow.

## 📝 Tips
- **Dynamic inputs:** The workflow dynamically uses the url, api_key, alias, and description from the incoming webhook data, making it highly flexible.
- **Error handling:** Add an Error Trigger node to catch issues (e.g., invalid API key, malformed URL) during TinyURL creation. Configure it to send notifications or log errors for easy troubleshooting.
- **Post-shortening actions:** After generating the shortened URL, you can insert additional nodes before the Respond with Shortened URL node to perform other actions. For example:
  - **Save to a database:** Store the original and shortened URLs in a database like Airtable, Google Sheets, or PostgreSQL.
  - **Send a message:** Automatically send the shortened URL via Slack, Discord, email, or SMS.
  - **Update a record:** Update a CRM record or project management task with the new shortened link.
- **Custom domains:** If you have a custom domain configured with your TinyURL account, change the domain parameter in the Create TinyURL node to use it.
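A minimal sketch of a client calling this workflow, assuming the /shorten-link path from the setup above and a placeholder n8n URL:

```javascript
// Client-side call to the workflow's webhook. The base URL and path
// are placeholders; use the URL shown on your Receive Link Webhook node.
const res = await fetch('https://your-n8n-instance.com/webhook/shorten-link', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    api_key: 'YOUR_TINYURL_API_KEY',
    url: 'https://www.verylongwebsite.com/path/to/specific/page?param1=value1&param2=value2',
    alias: 'myCustomAlias',
  }),
});
const result = await res.json();
console.log(result); // the TinyURL API response, including the shortened URL
```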
by Adrian
## 📋 Description
This template creates an intelligent AI assistant for WhatsApp that can:
- **Respond naturally** to messages using Google Gemini AI
- **Remember previous conversations** for each user
- **Access a knowledge base** for answering frequently asked questions
- **Automatically save** all conversations for long-term memory

## 🛠️ Requirements
1. **WAMM.pro account** (free tier available)
   - What is WAMM.pro? A platform that enables WhatsApp automation using proprietary API technology
   - Free tier: 50 messages/month
   - PRO tier: unlimited messages + advanced features
   - Link: wamm.pro
2. **Pinecone account** (for AI memory)
   - For storing conversations and the knowledge base
   - Free tier available
3. **Google AI account** (for Gemini)
   - For the conversational AI model
4. **OpenAI account** (for embeddings)
   - For generating memory vectors

## 🚀 Step-by-step Setup

### Step 1: WAMM.pro configuration
1. Create an account at wamm.pro
2. Account Manager → Add WhatsApp profile
3. Scan the QR code with your WhatsApp
4. Note down the Instance ID and Access Token

### Step 2: Webhook configuration
In WAMM.pro: Integrations → Webhooks → Messages Webhooks → Add Webhook with the n8n URL.
Required configuration:
- From others: ✅ Relevant + ✅ Without media + ✅ Exclude no text
- To others: ✅ Relevant + ✅ Without media + ✅ Exclude no text
- To myself: ✅ None (to avoid responding to your own messages)

### Step 3: Pinecone configuration
Create 2 indexes:
- historywa - for conversation memory
- knowledge - for the knowledge base

Index settings:
- Dimensions: 3072
- Metric: cosine
- Embedding model: text-embedding-3-large

### Step 4: n8n configuration
Configure credentials:
- WAMM: Instance ID + Access Token
- Pinecone: API Key
- Google Gemini: API Key
- OpenAI: API Key for embeddings

## 🔧 How it Works

Workflow flow:
📱 WhatsApp Message
↓ (webhook)
🎯 AI Agent (Gemini)
↓ (uses tools)
📚 Memory Tool + Knowledge Tool
↓ (response generated)
📤 WAMM Send Message
↓ (saves)
💾 Pinecone Memory Storage

Available AI tools:
1. **Memory Tool** - searches previous conversations with the user
2. **Knowledge Tool** - searches the general knowledge base

Special features:
- **Natural conversations** - the AI doesn't mention "searching history"
- **Persistent context** - remembers names, preferences, and previous conversations
- **User language detection** - automatically responds in the user's language
- **Organized memory** - each user has their own memory space

## 📊 Benefits
- ✅ Zero maintenance - runs automatically
- ✅ Scalable - supports multiple users simultaneously
- ✅ Intelligent memory - uses similarity search for relevant context
- ✅ Extensible - easy to add new features
- ✅ Cost-effective - free tiers available for all services

## 🎯 Use Cases
- **Automated customer support** with memory
- **Personal assistant** for WhatsApp
- **Business chatbot** with specific knowledge
- **Conversation automation** with persistent context

## 🔒 Security
- **Data** stored in Pinecone as vector embeddings
- **No plain text** message storage
- **Each user** has a separate memory space
- **API keys** secured in n8n credentials

## 📈 Possible Extensions
- **CRM** integrations
- **Scheduling** and reminders
- Advanced **multi-language** support
- **Analytics** and conversation reports
- Custom **knowledge bases** per user

💡 Tip: For optimal results, populate the knowledge base with frequently asked questions specific to your business!
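For reference, this is roughly the embedding call n8n's OpenAI embeddings node makes under the hood, and it shows why the Pinecone indexes must be created with 3072 dimensions:

```javascript
// Sketch of the embedding step behind memory storage (the workflow itself
// uses n8n's built-in OpenAI and Pinecone nodes; this only shows the shape).
const res = await fetch('https://api.openai.com/v1/embeddings', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'text-embedding-3-large', // must match the index's embedding model
    input: 'User asked about opening hours', // example conversation snippet
  }),
});
const { data } = await res.json();
console.log(data[0].embedding.length); // 3072, matching the index dimensions
```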
by Robert Breen
This n8n workflow reads emails from your Outlook inbox, drafts AI-powered replies using OpenAI, and routes them through the gotoHuman node for human approval before replying automatically.

## ✅ Key Features
- **Reads Outlook emails** from today only (excluding those from your own address).
- **AI-generated replies** crafted by OpenAI from the subject and body of each email.
- **Community node integration:** uses the gotoHuman node for human review and approval of replies before sending.
- **Safe sending:** only approved responses are automatically sent back via Outlook.
- **Expandable:** easily modified to send drafts instead of full replies, include additional email filters, or trigger at intervals or via webhook.

## 🧠 Nodes Used
- Microsoft Outlook - fetch and reply to emails
- OpenAI - generates the smart reply text
- gotoHuman - human-in-the-loop approval system
- Loop Over Items, IF, Code, and Set nodes for processing logic
- Manual Trigger - for testing

## 🔧 Setup Instructions

### 1. Connect APIs
**Outlook OAuth2:**
- Go to the Azure Portal
- Register an app
- Add the Mail.Read and Mail.Send scopes
- Set the redirect URI: https://api.n8n.cloud/oauth2-credential/callback
- Paste the credentials into the n8n credential manager

**OpenAI API:**
- Create an account at OpenAI
- Create an API key
- Add it to n8n credentials

**gotoHuman API:**
- Go to https://gotoHuman.ai and sign in
- Create a review template (e.g., "Email Responses")
- Copy the Template ID and API key into n8n credentials

## 🪜 Workflow Steps Overview

1. **Trigger.** Use the Manual Trigger to test, or schedule execution with a cron node.

2. **Filter emails from today.** A Code node outputs today's date in the proper yyyy-mm-dd format:

```javascript
const today = new Date();
today.setHours(0, 0, 0, 0);
return [{ json: { searchQuery: `received:${today.toISOString().split('T')[0]}` } }];
```

3. **Search and filter Outlook messages.** Uses the Outlook node with a search query like `received:2025-08-06 -from:rbreen@ynteractive.com` (update to your own email).

4. **Generate the AI response.** Text prompt to OpenAI:

   subject: {{ $json.subject }}
   body: {{ $json.body.content }}

   System prompt:
   > You are a personal assistant helping respond to emails. I am an AI automation expert specializing in helping small and medium-size businesses automate processes. Create a short response to the email. Sign the email as Robert Breen.

5. **Review with gotoHuman.** Submit the AI output for human approval using the gotoHuman node. The output schema should match the review template fields (e.g., "email", "OriginalEmail").

6. **IF node decision.** If the status is approved, send the reply; if not, return to the loop for revision or skip.

## ✏️ Customization Ideas
- ✉️ Send only drafts by skipping the "reply" step and storing results.
- 🕒 Schedule the workflow with a Cron trigger for automation.
- 🔎 Add label filters or subject keywords for advanced targeting.

## 🔗 External Links
- gotoHuman Community Node
- OpenAI
- Microsoft Outlook API Setup

## 💬 Need More Help?
If you'd like help customizing this or building similar automations, reach out:
Robert Breen
AI & Automation Consultant
🌐 https://ynteractive.com
📧 robert.j.breen@gmail.com
🔗 LinkedIn
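If you prefer to keep the sender exclusion out of the Outlook node, a small variant of the same Code node can build the whole search query in one place (the address below is a placeholder):

```javascript
// Variant of the date Code node: build the full Outlook search query,
// excluding your own address so replies you send today are not picked up.
const MY_ADDRESS = 'you@example.com'; // placeholder: set to your mailbox address
const day = new Date().toISOString().split('T')[0];
return [{ json: { searchQuery: `received:${day} -from:${MY_ADDRESS}` } }];
```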
by Teddy
# Webhook | Paper Summarization

## Who is this for?
This workflow is designed for researchers, students, and professionals who frequently read academic papers and need concise summaries. It is useful for anyone who wants to quickly extract key information from research papers hosted on arXiv.

## What problem is this workflow solving?
Academic papers are often lengthy and complex, making it time-consuming to extract essential insights. This workflow automates retrieving, processing, and summarizing research papers, allowing users to focus on key findings without manually reading the entire paper.

## What this workflow does
It extracts the content of an arXiv research paper, processes its abstract and main sections, and generates a structured summary. The well-organized output contains the Abstract Overview, Introduction, Results, and Conclusion, delivering the critical information in a concise format.

## Setup
1. Ensure you have n8n installed and configured.
2. Import this workflow into your n8n instance.
3. Configure an external trigger using the Webhook node to accept paper IDs.
4. Test the workflow by providing an arXiv paper ID.
5. (Optional) Modify the summarization model or output format to your preferences.

## How to customize this workflow to your needs
- Adjust the HTTP Request node to fetch papers from sources beyond arXiv.
- Modify the Summarization Chain node to refine the summary output.
- Enhance the Reorganize Paper Summary step by integrating additional language models.
- Add an email or Slack notification step to receive summaries directly.

## Workflow Steps
1. The Webhook receives a request with an arXiv paper ID.
2. "Request to Paper Page" sends an HTTP request to fetch the HTML content of the paper.
3. "Extract Contents" extracts the abstract and sections.
4. "Split out All Sections" splits the sections to process individual paragraphs.
5. "Remove useless links" cleans up the text by removing unnecessary elements.
6. "Summarization Chain" summarizes the extracted content.
7. "Aggregate summarized content" aggregates the summaries.
8. "Reorganize Paper Summary" reorganizes the summary into structured sections.
9. "Content Extractor" classifies the data into Abstract Overview, Introduction, Results, and Conclusion.
10. The workflow responds to the webhook with the structured summary.

Note: This workflow is designed for use with arXiv research papers but can be adapted to process papers from other sources.
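A sketch of a client triggering this workflow once deployed. The webhook path and the paper-ID field name are assumptions; match them to how you configure the Webhook node:

```javascript
// Client call to the summarization webhook. URL path and `paperId`
// field name are placeholders, not fixed by the template.
const res = await fetch('https://your-n8n-instance.com/webhook/summarize-paper', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ paperId: '2301.00001' }), // an arXiv ID, e.g. from arxiv.org/abs/2301.00001
});
const summary = await res.json();
console.log(summary); // Abstract Overview, Introduction, Results, Conclusion
```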
by Vincent Belmehel
## Purpose
This workflow automatically creates a subscriber in a given Beehiiv publication when a new opt-in is registered in a given Systeme.io sales funnel.

Good to know: the integration with Systeme.io is done at the sales funnel level, not at the account level. If you have several sales funnels, you can use the same workflow several times.

## Quick Setup
1. Configure your sales funnel in Systeme.io to create and trigger a webhook after an opt-in. Open the "On New Systeme.io Optin" node to find the webhook URL needed to configure your sales funnel on Systeme.io.
2. Configure the "Configure Workflow" node:
   - Add your Beehiiv publication ID
   - If you know the subscriber's first and last name and want to send them to Beehiiv, configure the custom field names for first and last name
   - Add one or more email addresses to receive alert notifications in the event of a problem (separated by commas)
3. If you have not already done so:
   - Connect your Beehiiv account in the "Create New Beehiiv Subscriber" node
   - Connect your Gmail account in the "Send Email Alert (Beehiiv API error)" node

## How It Works
- As soon as a new opt-in is registered on your sales funnel, Systeme.io triggers the workflow (via a webhook).
- Only requests actually coming from Systeme.io are processed (their IP addresses are whitelisted for security; a sketch of this check follows below).
- A new subscriber is added to your Beehiiv publication (via an API call).
- If available in Systeme.io, UTM tags (utm_source, utm_medium, and utm_campaign) are transferred to Beehiiv to correctly track where your subscribers come from.
- If an error occurs during the Beehiiv API call, an alert notification is sent to you via email.

## Requirements
- A Systeme.io account
- A Beehiiv account with an active publication
- A Gmail account

## Benefits
- Automate and scale your email marketing efforts seamlessly
- No more manual tasks to keep your subscriber list up to date
- Focus on creating a newsletter that stands out, not on the technical side

Check out my other templates 👉 https://n8n.io/creators/belmehel/
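A minimal sketch of the whitelisting check as an n8n Code node. The IP addresses shown are placeholders, not Systeme.io's actual ranges; use the list from their documentation:

```javascript
// Code node sketch: reject webhook calls that do not come from Systeme.io.
// Assumes the Webhook node passes request headers through on $json.headers.
const ALLOWED_IPS = ['203.0.113.10', '203.0.113.11']; // placeholders only
const requestIp = ($json.headers?.['x-forwarded-for'] ?? '').split(',')[0].trim();

if (!ALLOWED_IPS.includes(requestIp)) {
  throw new Error(`Rejected webhook call from unlisted IP: ${requestIp}`);
}
return $input.all();
```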
by Mohan Gopal
# Personalized Tour Package Recommendations via n8n + Pinecone + Lovable UI

I've created an intelligent travel itinerary planner that connects a Lovable front-end UI with a smart backend powered by n8n, Pinecone, and OpenAI to deliver personalized tour packages from natural language queries.

## What It Does
Users type their travel destination and duration (e.g., "Paris 5 days trip" or "Bali trip for 7 days, would love water sports, adventures and trekking included, also some historical monuments") into a Lovable UI. This triggers a webhook in n8n, which processes the request, searches vectorized tour data in Pinecone, and generates a personalized itinerary using OpenAI's GPT. The results are then structured and sent back to the frontend for display in an interactive, reorderable format.

## Workflow Architecture
Lovable UI ➝ Webhook ➝ Tour Recommendation Agent ➝ Vector Search ➝ OpenAI Response ➝ Structured Output ➝ Response to Lovable

## Tools & Components Used
- **Webhook:** Acts as the entry point between the Lovable frontend and n8n. Captures the user query (destination, duration) and forwards it into the workflow.
- **OpenAI Chat Model:** Interprets the user query and generates a user-friendly, structured tour package from the matched results.
- **Simple Memory:** Keeps chat state and context for follow-up queries (extendable for future features like multi-step planning or saved itineraries).
- **Question Answering with Vector Store:** Searches vector embeddings of pre-loaded tour data and finds the most relevant tour packages by comparing query embeddings.
- **Pinecone Vector Store:** Stores tour package and activity data in vectorized form, enabling fast, scalable semantic search across destinations, themes (e.g., "adventure", "cultural"), and durations.
- **OpenAI Embeddings:** Embeds all tour and activity documents stored in Pinecone and converts user queries into embedding vectors for semantic search.
- **Structured Output Parser:** Parses the final OpenAI-generated response into a consistent, frontend-consumable JSON format.
- **Frontend (Lovable UI):** The user types a destination or travel package needs into the tour search; Lovable queries the n8n workflow and displays beautifully structured, editable itineraries.

## How to Set It Up
1. **Webhook setup in n8n.** Create a POST webhook node, set the webhook URL, and connect it to the Lovable frontend.
2. **Pinecone & embeddings.** Convert your static tour package documents (PDFs, JSON, CSV, etc.) into embeddings using OpenAI and store them in a Pinecone namespace (e.g., kuala-lumpur-3-days). A sketch of this pipeline follows below.
3. **Configure the "Answer with Vector Store" tool.** Connect the tool to your Pinecone instance and pass the query embedding for matching.
4. **Connect to OpenAI Chat.** Use the GPT model to combine the query with the Pinecone context into an engaging itinerary description. Optionally chain a second model to format it into UI-consumable output.
5. **Output parser & return.** Use the Structured Output Parser to parse the response and pass it to the Respond to Webhook node for display in the UI.

## Ideal Use Cases
- Smart itinerary planning for OTAs or DMCs
- Personalized travel recommendations in chatbots or apps
- Travel advisors and agents automating package generation

## Benefits
- Highly relevant, contextual travel suggestions
- Natural query understanding via OpenAI
- Seamless frontend-backend integration via the webhook

If you're building personalized experiences for travelers using AI, give this approach a try! Let me know if you'd like the JSON for this workflow or help setting up the Pinecone data pipeline.
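A minimal sketch of step 2's ingestion pipeline, assuming the current Pinecone and OpenAI Node.js SDKs; the index name, namespace, embedding model, and sample document are all placeholders:

```javascript
import { Pinecone } from '@pinecone-database/pinecone';
import OpenAI from 'openai';

// Embed tour documents and upsert them into a Pinecone namespace.
const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const tours = [
  { id: 'kl-3d-001', text: 'Kuala Lumpur 3-day highlights: Petronas Towers, Batu Caves, street food tour.' },
];

const { data } = await openai.embeddings.create({
  model: 'text-embedding-3-small', // pick a model matching your index dimensions
  input: tours.map(t => t.text),
});

await pc.index('tour-packages').namespace('kuala-lumpur-3-days').upsert(
  tours.map((t, i) => ({
    id: t.id,
    values: data[i].embedding,
    metadata: { text: t.text }, // kept so search results can be shown to the LLM
  }))
);
```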
by Roni Bandini
This workflow receives plain-English instructions from a retro console via a webhook. Using an AI agent, it can combine multiple tools to read general RSS news headlines and stock market updates, check emails and calendar events, search X, send Telegram messages, and run Linux commands.

The idea is to avoid using smartphones or regular laptops in the morning and instead use a retro console installed on an old notebook or netbook. You will need to copy a Python script onto the notebook, configure the webhook URL, and set up all the required credentials.

Steps:
1. Set up a Gemini API key and Google Gmail and Calendar credentials from the Google Cloud console (console.cloud.google.com)
2. Set up X credentials, the RSS URL, etc.
3. Obtain the webhook URL and paste it into the Python code to be executed on the Linux machine
4. Run the Python script with python3 console.py

Note: if you ask for a Linux command, the command is not only returned but also executed.
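The console script itself is Python, but the webhook contract it speaks is plain HTTP. A minimal sketch of the request it makes each time you type an instruction, with the URL and the message field name as placeholders:

```javascript
// Equivalent of what console.py sends to the n8n webhook. The base URL
// and `message` field name are assumptions; match them to your Webhook node.
const res = await fetch('https://your-n8n-instance.com/webhook/retro-console', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'read my latest emails and the top RSS headlines' }),
});
console.log(await res.text()); // the agent's plain-text answer, rendered on the console
```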
by Jimleuk
This n8n template demonstrates how to use OpenAI's Responses API with existing LLM and AI Agent nodes.

Though I would recommend just waiting for official support, if you're impatient and would like a roundabout way to integrate OpenAI's Responses API into your existing AI workflows, this template is sure to satisfy!

This approach implements a simple API wrapper for the Responses API using n8n's built-in webhooks. When the base URL is pointed at these webhooks using a custom OpenAI credential, it's possible to intercept the request and remap it for compatibility.

## How it works
- An OpenAI subnode is attached to our agent but has a special custom credential whose base_url is changed to point at this template's webhooks.
- When executing a query, the agent's request is forwarded to our mini chat completion workflow.
- Here, we take the default request and remap the values for an HTTP Request node that queries the Responses API.
- Once a response is received, we remap the output for Langchain compatibility, so the LLM or Agent node can parse it and respond to the user.
- There are two response formats: one for streaming and one for non-streaming responses.

## How to use
1. You must activate this workflow to be able to use the webhooks.
2. Create the custom OpenAI credential as instructed.
3. Go to your existing AI workflows and replace the LLM node's credential with the custom OpenAI credential. You do not need to copy anything else over from this template.

## Requirements
- OpenAI account for the Responses API

## Customising this workflow
Feel free to experiment with other LLMs using this same technique! Keep up to date with the Responses API announcements and make modifications as required.
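A sketch of the two remappings the mini workflow performs in its Code nodes, assuming the public Chat Completions and Responses API schemas at the time of writing (verify field names against OpenAI's current documentation):

```javascript
// Chat Completions-style request (from the agent) -> Responses API body.
function toResponsesRequest(chatRequest) {
  return {
    model: chatRequest.model,
    input: chatRequest.messages.map(m => ({ role: m.role, content: m.content })),
    stream: chatRequest.stream ?? false,
  };
}

// Responses API result -> Chat Completions-style reply the agent can parse.
function toChatCompletion(responsesResult) {
  const message = responsesResult.output.find(item => item.type === 'message');
  const text = message.content
    .filter(part => part.type === 'output_text')
    .map(part => part.text)
    .join('');
  return {
    id: responsesResult.id,
    object: 'chat.completion',
    model: responsesResult.model,
    choices: [{ index: 0, message: { role: 'assistant', content: text }, finish_reason: 'stop' }],
  };
}
```

The streaming branch needs an analogous mapping from Responses API server-sent events to chat-completion chunk events, which is why the template keeps two response formats.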