Build Custom AI Agent with LangChain & Gemini (Self-Hosted)

Overview
This workflow uses the LangChain Code node to implement a fully customizable conversational agent. It is ideal for users who need fine-grained control over their agent's prompts and want to avoid the extra token consumption that n8n's built-in Conversation Agent reserves for tool-calling functionality.

Setup Instructions
Configure Gemini Credentials: Set up your Google Gemini API key (create one first if you don't already have it). Alternatively, you can swap in other AI provider nodes. A quick way to verify the key outside n8n is sketched after this list.
Interaction Methods:
Test directly in the workflow editor using the "Chat" button
Activate the workflow and access the chat interface via the URL provided by the When Chat Message Received node
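
If you want to sanity-check your Gemini API key before wiring it into the workflow, a minimal standalone sketch using the LangChain JS Gemini integration looks roughly like this. It is an illustration only, not part of the template; the `@langchain/google-genai` package, the `gemini-1.5-flash` model name, and the `GOOGLE_API_KEY` environment variable are assumptions.

```typescript
// Assumes: npm install @langchain/google-genai @langchain/core
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

async function main() {
  // Reads the key from the GOOGLE_API_KEY environment variable (assumed name).
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-1.5-flash",          // any Gemini model your key can access
    apiKey: process.env.GOOGLE_API_KEY,
    temperature: 0,
  });

  // A single round-trip is enough to confirm the credentials work.
  const reply = await model.invoke("Reply with the single word OK.");
  console.log(reply.content);
}

main().catch(console.error);
```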

Customization Options
Interface Settings: Configure chat UI elements (e.g., title) in the When Chat Message Received node
Prompt Engineering:
Define agent personality and conversation structure in the Construct & Execute LLM Prompt node's template variable
⚠️ The template must keep the {chat_history} and {input} placeholders for LangChain to operate correctly (see the sketch after this list)
Model Selection: Swap language models through the language model input field of the Construct & Execute LLM Prompt node
Memory Control: Adjust conversation history length in the Store Conversation History node
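
To make these knobs concrete, here is a rough standalone sketch of the pattern the Construct & Execute LLM Prompt and Store Conversation History nodes implement: a prompt template that keeps the required {chat_history} and {input} placeholders, a swappable Gemini model, and a windowed memory whose size caps the conversation history. This is an illustration in LangChain JS, not the workflow's actual node code; the package names, model name, and k value are assumptions.

```typescript
// Assumes: npm install langchain @langchain/google-genai @langchain/core
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { PromptTemplate } from "@langchain/core/prompts";
import { BufferWindowMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

async function main() {
  // Personality and structure are freely editable, but the {chat_history} and
  // {input} placeholders must stay so LangChain can inject memory and the message.
  const prompt = PromptTemplate.fromTemplate(
    `You are a concise, friendly assistant.

Conversation so far:
{chat_history}

Human: {input}
AI:`
  );

  // Model selection: swap this instance for another chat model integration.
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-1.5-flash",          // assumed model name
    apiKey: process.env.GOOGLE_API_KEY, // assumed env var
    temperature: 0.7,
  });

  // Memory control: k caps how many past exchanges survive in {chat_history}.
  const memory = new BufferWindowMemory({ k: 5, memoryKey: "chat_history" });

  const chain = new ConversationChain({ llm: model, prompt, memory });

  const reply = await chain.invoke({ input: "Hi! What can you help me with?" });
  console.log(reply.response); // ConversationChain returns its answer under "response"
}

main().catch(console.error);
```

In the workflow itself, the equivalent knobs live in node parameters: the template variable for the prompt, the language model input for model selection, and the Store Conversation History node for the history length.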

Requirements:
⚠️ This workflow uses the LangChain Code node, which only works on self-hosted n8n.
(Refer to LangChain Code node docs)

Complexity: beginner
Author: shepard
Created: 8/14/2025
Updated: 8/25/2025
