šŸ¤– Create a Telegram Bot with Mistral Nemotron AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

šŸ”§ How it works

The workflow creates an intelligent Telegram bot that:

- šŸ’¬ Maintains conversation history for each user
- 🧠 Provides contextual AI responses using any AI API service
- šŸ“± Handles different message types and commands
- šŸ”„ Manages chat sessions, with a /clear command to reset history
- šŸ”Œ Is easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

āš™ļø Set up steps

šŸ“‹ Prerequisites

- šŸ¤– Telegram bot token (from @BotFather)
- šŸ”‘ AI API key (from any AI service provider)
- šŸš€ n8n instance with webhook capability

šŸ› ļø Configuration Steps

šŸ¤– Create a Telegram Bot

- Message @BotFather on Telegram
- Create a new bot with the /newbot command
- Save the bot token for credentials setup

🧠 Choose Your AI Provider

- OpenAI: Get an API key from the OpenAI platform
- Anthropic: Sign up for Claude API access
- Google AI: Get a Gemini API key
- NVIDIA: Access LLaMA and Mistral models
- Hugging Face: Use the Inference API
- Any other AI API service

šŸ” Set up Credentials in n8n Add Telegram API credentials with your bot token Add Bearer Auth/API Key credentials for your chosen AI service Test both connections

šŸš€ Deploy the Workflow

- Import the workflow JSON
- Customize the AI API call (see the customization section)
- Activate the workflow
- Set the webhook URL in the Telegram bot settings (see the sketch below)
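n8n's Telegram Trigger normally registers the webhook for you when the workflow is activated. If you ever need to set it manually, Telegram's Bot API provides a setWebhook method; a minimal sketch, assuming a placeholder token and n8n webhook URL:

```javascript
// Hypothetical example: manually point a Telegram bot at an n8n webhook URL.
// BOT_TOKEN and WEBHOOK_URL are placeholders - use your own values.
const BOT_TOKEN = '123456:ABC-your-bot-token';
const WEBHOOK_URL = 'https://your-n8n-instance.example.com/webhook/telegram-bot';

async function setTelegramWebhook() {
  const res = await fetch(`https://api.telegram.org/bot${BOT_TOKEN}/setWebhook`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ url: WEBHOOK_URL }),
  });
  console.log(await res.json()); // expect { ok: true, result: true, ... } on success
}

setTelegramWebhook();
```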

✨ Features

šŸš€ Core Functionality

- šŸ“Ø **Smart Message Routing**: Automatically categorizes incoming messages (commands, text, non-text)
- 🧠 **Conversation Memory**: Maintains chat history for each user (last 10 messages)
- šŸ¤– **AI-Powered Responses**: Integrates with any AI API service for intelligent replies
- ⚔ **Command Support**: Built-in /start and /clear commands

šŸ“± Message Types Handled

- šŸ’¬ **Text Messages**: Processed through the AI model with context
- šŸ”§ **Commands**: Special handling for bot commands
- āŒ **Non-text Messages**: Polite error message for unsupported content

šŸ’¾ Memory Management

- šŸ‘¤ User-specific chat history storage
- šŸ”„ Automatic history trimming (keeps the last 10 messages)
- 🌐 Global state management across workflow executions (see the sketch below)
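As a rough illustration, a Code node could keep per-user history in n8n's global workflow static data. This is a minimal sketch, not the template's exact implementation; the field paths and the `histories` key are assumptions:

```javascript
// Minimal sketch: store and trim per-user chat history in an n8n Code node.
const HISTORY_LIMIT = 10; // keep only the last 10 messages per user

const staticData = $getWorkflowStaticData('global'); // persists across executions
staticData.histories = staticData.histories || {};

const chatId = $json.message.chat.id;  // assumed Telegram Trigger output shape
const userText = $json.message.text;

const history = staticData.histories[chatId] || [];
history.push({ role: 'user', content: userText });

// Trim to the most recent HISTORY_LIMIT entries
staticData.histories[chatId] = history.slice(-HISTORY_LIMIT);

return [{ json: { chatId, history: staticData.histories[chatId] } }];
```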

šŸ¤– Bot Commands

- /start šŸŽÆ - Welcome message with bot introduction
- /clear šŸ—‘ļø - Clears conversation history for a fresh start
- Regular text šŸ’¬ - Processed by the AI with conversation context (see the routing sketch below)
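A rough sketch of how the command and message-type routing could look in a Code node; the welcome and error texts are placeholders, not the template's exact wording:

```javascript
// Hypothetical routing sketch: decide how to handle an incoming Telegram update.
const message = $json.message || {};
const text = message.text;

let route;
let reply = null;

if (typeof text !== 'string') {
  // Non-text content (photos, stickers, voice, ...) gets a polite error message
  route = 'unsupported';
  reply = 'Sorry, I can only handle text messages for now.';
} else if (text.startsWith('/start')) {
  route = 'command';
  reply = 'šŸ‘‹ Hi! I am an AI assistant. Send me a message and I will reply with context.';
} else if (text.startsWith('/clear')) {
  route = 'command';
  const staticData = $getWorkflowStaticData('global');
  if (staticData.histories) delete staticData.histories[message.chat.id];
  reply = 'šŸ—‘ļø Conversation history cleared. Starting fresh!';
} else {
  route = 'ai'; // forward to the AI branch with conversation context
}

return [{ json: { route, reply, chatId: message.chat.id, text } }];
```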

šŸ”§ Technical Details

šŸ—ļø Workflow Structure šŸ“” Telegram Trigger - Receives all incoming messages šŸ”€ Message Filtering - Routes messages based on type/content šŸ’¾ History Management - Maintains conversation context 🧠 AI Processing - Generates intelligent responses šŸ“¤ Response Delivery - Sends formatted replies back to user

šŸ¤– AI API Integration (Customizable)

Current Example (NVIDIA) - see the request sketch below:

- Model: mistralai/mistral-nemotron
- Temperature: 0.6 (balanced creativity)
- Max tokens: 4096
- Response limit: under 200 words
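For reference, the call behind this example might look roughly like the following. NVIDIA's endpoint is OpenAI-compatible (see the provider table below); the API key, system prompt wording, and `history` variable are placeholders:

```javascript
// Hypothetical sketch of the NVIDIA chat completions request used as the current example.
const NVIDIA_API_KEY = 'nvapi-...'; // placeholder

async function askModel(history) {
  const res = await fetch('https://integrate.api.nvidia.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${NVIDIA_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'mistralai/mistral-nemotron',
      temperature: 0.6,  // balanced creativity
      max_tokens: 4096,
      messages: [
        { role: 'system', content: 'You are a helpful assistant. Keep responses under 200 words.' },
        ...history,      // prior turns, newest user message last
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // OpenAI-compatible response shape
}
```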

šŸ”„ Easy to Replace with Any AI Service:

OpenAI Example:

```json
{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Anthropic Claude Example:

```json
{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}
```

Google Gemini Example:

```json
{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
```

šŸ›”ļø Error Handling āŒ Non-text message detection and appropriate responses šŸ”§ API failure handling āš ļø Invalid command processing

šŸŽØ Customization Options

šŸ¤– AI Provider Switching

To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:

šŸ“ Change the URL in HTTP Request node šŸ”§ Update the request body format in "Prepare API Request" node šŸ” Update authentication method if needed šŸ“Š Adjust response parsing in "Save AI Response to History" node

🧠 AI Behavior

- šŸ“ Modify the system prompt in the "Prepare API Request" node
- šŸŒ”ļø Adjust temperature and response parameters
- šŸ“ Change response length limits
- šŸŽÆ Customize model-specific parameters

šŸ’¾ Memory Settings

- šŸ“Š Adjust history length (currently 10 messages)
- šŸ‘¤ Modify user identification logic
- šŸ—„ļø Customize data persistence approach

šŸŽ­ Bot Personality

- šŸŽ‰ Update welcome message content
- āš ļø Customize error messages and responses
- āž• Add new command handlers

šŸ’” Use Cases

- šŸŽ§ **Customer Support**: Automated first-line support with context awareness
- šŸ“š **Educational Assistant**: Homework help and learning support
- šŸ‘„ **Personal AI Companion**: General conversation and assistance
- šŸ’¼ **Business Assistant**: FAQ handling and information retrieval
- šŸ”¬ **AI API Testing**: Perfect template for testing different AI services
- šŸš€ **Prototype Development**: Quick AI chatbot prototyping

šŸ“ Notes

- 🌐 Requires an active n8n instance for webhook handling
- šŸ’° AI API usage may have rate limits and costs (varies by provider)
- šŸ’¾ Bot memory persists across workflow restarts
- šŸ‘„ Supports multiple concurrent users with separate histories
- šŸ”„ The template is provider-agnostic - easily switch between AI services
- šŸ› ļø Perfect starting point for any AI-powered Telegram bot project

šŸ”§ Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
|----------|----------------|--------------------|
| 🟢 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| šŸ”µ Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| šŸ”“ Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟔 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!

Author: Ajith joseph
Created: 8/13/2025
Updated: 8/25/2025
