Build an MCP Server which answers questions with Retrieval Augmented Generation
Build an MCP Server which has access to a semantic database to perform Retrieval Augmented Generation (RAG)
Tutorial
A full video tutorial is available on YouTube.
How it works
This MCP Server has access to a local semantic database (Qdrant) and answers questions asked by the MCP Client.
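The template itself is built entirely from n8n nodes, but the same idea can be illustrated in code. The sketch below shows a minimal MCP tool that embeds an incoming question with mxbai-embed-large, retrieves the closest chunks from Qdrant, and lets Llama 3.2 answer from that context. The tool name answer_question, the collection name documents, and the default local ports are assumptions, not values taken from the workflow.

```python
# Minimal sketch (not the n8n workflow itself): an MCP tool that answers
# questions via RAG against a local Qdrant collection.
# Assumed names: tool "answer_question", collection "documents", default
# local ports for Qdrant (6333) and Ollama (11434).
import ollama
from qdrant_client import QdrantClient
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rag-server")
qdrant = QdrantClient(url="http://localhost:6333")

@mcp.tool()
def answer_question(question: str) -> str:
    """Answer a question using chunks retrieved from the semantic database."""
    # Embed the question with the same model used during ingestion.
    vector = ollama.embeddings(model="mxbai-embed-large", prompt=question)["embedding"]
    # Retrieve the most similar document chunks from Qdrant.
    hits = qdrant.search(collection_name="documents", query_vector=vector, limit=4)
    context = "\n\n".join((hit.payload or {}).get("text", "") for hit in hits)
    # Let Llama 3.2 answer strictly from the retrieved context.
    reply = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return reply["message"]["content"]

if __name__ == "__main__":
    mcp.run()
```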
AI Agent Template
An AI Agent n8n workflow that uses this MCP server is available as a separate template.
Warning
This flow only runs locally and cannot be executed on the n8n cloud platform because of the MCP Client Community Node.
Installation
1. Install n8n + Ollama + Qdrant using the Self-hosted AI starter kit.
2. Make sure to pull Llama 3.2 and the mxbai-embed-large embedding model in Ollama.
3. Activate the n8n flow.
4. Run the "RAG Ingestion Pipeline" and upload some PDF documents (a conceptual sketch of this step follows the list).
How to use it
Run the MCP Client workflow and ask a question. It will be answered either from the semantic database or via the search engine API.
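If you want to test the server outside n8n, a generic MCP client can call the same tool directly. The snippet below is a rough sketch using the MCP Python SDK's stdio client; it assumes the server sketch above is saved as server.py and exposes an answer_question tool (both are assumptions).

```python
# Rough sketch of calling the server's tool from a generic MCP client,
# assuming the server sketch above is saved as "server.py" and exposes
# an "answer_question" tool.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "answer_question",
                {"question": "What do the uploaded PDFs say about pricing?"},
            )
            print(result.content)

asyncio.run(main())
```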
More detailed instructions
Missed a step? Find more detailed instructions here: https://brightdata.com/blog/ai/news-feed-n8n-openai-bright-data