Monitor AI Chat Interactions with Gemini 2.5 and Langfuse Tracing

This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

How it works

This workflow is a simple AI Agent that connects to Langfuse to send tracing data, helping you monitor LLM interactions.

The main idea is to create a custom LLM model that allows the configuration of callbacks, which LangChain uses to connect to applications such as Langfuse.

This is achieved by using the "LangChain Code" node, which:
- Connects an LLM model sub-node to obtain the model variables (model name, temperature, and provider)
- Creates a generic LangChain chat model via initChatModel with those parameters
- Returns the LLM to be used by the AI Agent node
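The steps above can be sketched as the body of a LangChain Code node. This is an illustrative sketch, not the template's exact code: it assumes the self-hosted n8n instance allows the external packages `langchain` and `langfuse-langchain`, and the property names read from the connected sub-node are placeholders.

```
// Sketch of a LangChain Code node body (assumptions noted above).
const { initChatModel } = require('langchain/chat_models/universal');
const { CallbackHandler } = require('langfuse-langchain');

// Read model settings from the connected LLM model sub-node
// (property names here are illustrative).
const connectedModel = await this.getInputConnectionData('ai_languageModel', 0);
const modelName = connectedModel.model;        // e.g. "gemini-2.5-flash"
const temperature = connectedModel.temperature ?? 0;

// The Langfuse callback handler reads LANGFUSE_SECRET_KEY,
// LANGFUSE_PUBLIC_KEY, and LANGFUSE_BASEURL from the environment.
const langfuseHandler = new CallbackHandler();

// Build a generic chat model and attach the tracing callback.
const model = await initChatModel(modelName, {
  modelProvider: 'google-genai', // assumption: Gemini via Google GenAI
  temperature,
  callbacks: [langfuseHandler],
});

// Return the model so the AI Agent node can use it.
return model;
```

Because initChatModel is provider-agnostic, swapping Gemini for OpenAI or Anthropic only changes the model name and provider, while the Langfuse callback stays the same.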

šŸ“‹ Prerequisites

- Langfuse instance (cloud or self-hosted) with API credentials
- LLM API key (Gemini, OpenAI, Anthropic, etc.)
- n8n >= 1.98.0 (required for LangChain Code node support in the AI Agent)

āš™ļø Setup

Add these environment variables to your n8n instance:

Langfuse configuration:
LANGFUSE_SECRET_KEY=your_secret_key
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_BASEURL=https://cloud.langfuse.com # or your self-hosted URL

LLM API key (example for Gemini):
GOOGLE_API_KEY=your_api_key

Alternative: configure these directly in the LangChain Code node if you prefer not to use environment variables.
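For a self-hosted instance started from a shell, the variables above can be exported before launching n8n. A minimal sketch (values are placeholders, not real keys):

```shell
# Langfuse credentials (placeholders) -- picked up by the callback handler
export LANGFUSE_SECRET_KEY="sk-lf-your_secret_key"
export LANGFUSE_PUBLIC_KEY="pk-lf-your_public_key"
export LANGFUSE_BASEURL="https://cloud.langfuse.com"  # or your self-hosted URL

# LLM API key (example for Gemini)
export GOOGLE_API_KEY="your_api_key"
```

If you run n8n via Docker instead, pass the same variables with `-e` flags or an env file.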

1. Import the workflow JSON
2. Connect your preferred LLM model node
3. Send a test message to verify tracing appears in Langfuse

Author: Eduardo Hales
Created: 8/13/2025
Updated: 8/25/2025
