by Yuvraj Singh
**Purpose**

This solution enables you to manage all your Notion and Todoist tasks from different workspaces, as well as your calendar events, in a single place. It is a two-way sync with partial support for recurring tasks.

**How it works**

The realtime sync consists of two workflows, both triggered by a registered webhook from either Notion or Todoist. To avoid overwrites by late-arriving webhook calls, the current task is retrieved fresh from both sides on every run. Redis is used to prevent endless loops, since an update in one system would otherwise trigger another webhook call. Using the ID of the task, the trigger is locked for 80 seconds. Depending on the detected changes, the other side is updated accordingly. Generally, Notion is treated as the main source. An "Obsolete" status guarantees that tasks never get deleted entirely by accident. The Todoist ID is stored in the Notion task, so the two stay linked together.

An additional full sync workflow runs daily to fix any inconsistencies that occurred, since webhooks cannot be trusted entirely. Since Todoist requires a more complex setup, a tiny workflow helps with activating the webhook. Another tiny workflow helps generate a global config, which is used by all workflows for mapping purposes.

**Mapping (Notion >> Todoist)**

- Name: Task Name
- Priority: Priority (1: do first, 2: urgent, 3: important, 4: unset)
- Due: Date
- Status: Section (Done: completed, Obsolete: deleted)
- <page_link>: Description (read-only)
- Todoist ID: <task_id>

**Current limitations**

- Changes to the same task cannot be made simultaneously in both systems within a 15-20 second time frame.
- Subtasks are not linked automatically to their parent yet.
- Task names do not support URLs yet.

**Credentials**

Follow the video: set up credentials for Notion (access token), Todoist (access token) and Redis.

**Todoist** – Follow this video to obtain a Todoist API token. Todoist Credentials.mp4

**Notion** – Follow this video to get a Notion integration secret.
**Redis** – Follow this video to set up Redis.

**Setup**

The setup involves quite a lot of steps, yet many of them can be automated for internal business purposes. Just follow the video or do the following steps:

- Set up credentials for Notion (access token), Todoist (access token) and Redis – you can also create empty credentials and populate them later during further setup.
- Clone this workflow by clicking the "Use workflow" button and then choosing your n8n instance – otherwise you need to map the credentials of many nodes.
- Follow the instructions described within the bundle of sticky notes on the top left of the workflow.

**How to use**

You can apply changes (create, update, delete) to tasks both in Notion and Todoist, which then get synced over within a couple of seconds (handled by the differential realtime sync). The daily full sync resolves possible discrepancies in Todoist.

This workflow incorporates ideas and techniques inspired by Mario (https://n8n.io/creators/octionic/), whose expertise with specific nodes helped shape parts of this automation. Significant enhancements and customizations have been made to deliver a unique and improved solution.
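The Redis-based loop prevention described above boils down to a per-task lock with an 80-second TTL: when a webhook fires for a task we just updated ourselves, the lock is still held and the event is ignored. A minimal in-memory sketch of that logic (the real workflow uses Redis, so the lock is shared across workflow executions):

```javascript
// Minimal in-memory sketch of the loop-prevention lock described above.
// The key idea: set a per-task lock with an 80-second TTL, and skip any
// webhook event that arrives while the lock is still held — such an event
// is just the echo of our own update in the other system.
const LOCK_TTL_MS = 80 * 1000;
const locks = new Map(); // taskId -> lock expiry timestamp (ms)

function shouldProcess(taskId, now = Date.now()) {
  const expiry = locks.get(taskId);
  if (expiry !== undefined && expiry > now) {
    return false; // our own update echoed back — ignore it
  }
  locks.set(taskId, now + LOCK_TTL_MS); // acquire the lock for this task
  return true;
}
```

With Redis itself, the same effect is achieved atomically with a `SET key value NX EX 80` call: the set succeeds only if no lock exists yet.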
by Axiomlab.dev
**Tasks Briefing**

This template posts a clean, Slack-ready morning summary of your Google Tasks due today. It fetches tasks, filters only those due "today" in your timezone, asks a local LLM (via LangChain + Ollama) to produce a short summary (no steps, just a concise brief), strips any hidden <think> blocks, and delivers the message to your chosen Slack channel.

**How it works**

- Trigger at Morning (Cron) – runs at 7:00 AM (you can change the hour) to kick things off daily.
- Get many tasks (Google Tasks node) – pulls tasks from your selected Google Tasklist.
- Code (Filter Due Today) – normalizes dates to your timezone, keeps only tasks due today, and emits a fallback flag if none exist.
- If – routes: True (has tasks) → continues to the LLM summary path; False (no tasks) → sends a "No tasks due today" message to Slack.
- Code (Build LLM Prompt) – builds a compact, Markdown-only prompt for the model (no tool calls).
- Basic LLM Chain (LangChain) + Ollama Model – generates a short summary for Slack.
- Code (Cleanup) – removes any <think>…</think> content if the model includes it.
- Send a message (Slack) – posts the final brief to your Slack channel.

**Required credentials**

- **Google Tasks OAuth2 API** – to read tasks from your Google Tasklist.
- **Slack API** – to post the summary into a channel.
- **Ollama** – local model endpoint (e.g., qwen3:4b); used by the LangChain LLM nodes.

**Setup Instructions**

Google Tasks credential:
- In Google Cloud Console: enable the Google Tasks API, create an OAuth Client (Web), and set the redirect URI shown by n8n.
- In n8n Credentials, add Google Tasks OAuth2 API with scope https://www.googleapis.com/auth/tasks (read/write) or https://www.googleapis.com/auth/tasks.readonly (read-only).
- In the Get many tasks node, select your credential and your Tasklist.

Slack credential & channel:
- In n8n Credentials, add Slack API (bot/user token with chat:write).
- In the Send a message nodes, select your Slack credential and set the Channel (e.g., #new-leads).
Ollama model (LangChain):
- Ensure Ollama is running on your host (default http://localhost:11434).
- Pull a model (e.g., ollama pull qwen3:4b) or use another supported model (llama3:8b, etc.).
- In the Ollama Model node, select your Ollama credential and set the model name to match what you pulled.

Timezone & schedule:
- The Cron node is set to 7:00 AM. Adjust as needed.
- The Code (Filter Due Today) node is configured for Asia/Dhaka; change the TZ constant if you prefer a different timezone.

(Optional) Cleanup safety: the template includes a Code (Cleanup) node that strips <think>…</think> blocks from model output. Keep this connected before the Slack node.

**Test the flow**

Run the workflow once manually:
- If you have tasks due today, you should see a concise summary posted to your Slack channel.
- If none are due, you'll receive a friendly "No tasks due today" message.

**Activate**

When everything looks good, toggle the workflow Active to receive the daily summary automatically.
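The two Code nodes above can be sketched in plain JavaScript. This is an illustrative version, not the template's exact code: a due-today filter keyed to the Asia/Dhaka timezone (change `TZ` as described), and the cleanup step that strips `<think>` blocks:

```javascript
// Sketch of the "Filter Due Today" logic. Assumes each task carries a `due`
// field in RFC 3339 form, as the Google Tasks API returns.
const TZ = "Asia/Dhaka";

// en-CA formatting yields a stable YYYY-MM-DD key for a given timezone.
function dateKeyInTz(date, tz) {
  return new Intl.DateTimeFormat("en-CA", { timeZone: tz }).format(date);
}

function filterDueToday(tasks, now = new Date()) {
  const today = dateKeyInTz(now, TZ);
  const due = tasks.filter(
    (t) => t.due && dateKeyInTz(new Date(t.due), TZ) === today
  );
  // Emit a fallback flag when nothing is due, so the If node can branch.
  return due.length > 0
    ? { hasTasks: true, tasks: due }
    : { hasTasks: false, tasks: [] };
}

// Sketch of the Cleanup step: remove any <think>…</think> reasoning block
// the model may have emitted before the actual summary.
function stripThink(text) {
  return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
}
```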
by Dahiana
**YouTube Transcript Extractor**

This n8n template demonstrates how to extract transcripts from YouTube videos using two different approaches: automated Google Sheets monitoring and direct webhook API calls.

Use cases: content creation, research, accessibility, meeting notes, content repurposing, SEO analysis, or building transcript databases for analysis.

**How it works**

- **Google Sheets Integration:** Monitor a sheet for new YouTube URLs and automatically extract transcripts
- **Direct API Access:** Send YouTube URLs via webhook and get instant transcript responses
- **Smart Parsing:** Extracts the video ID from various YouTube URL formats (youtube.com, youtu.be, embed)
- **Rich Metadata:** Returns video title, channel, publish date, duration, and category alongside the transcript
- **Fallback Handling:** Gracefully handles videos without available transcripts

**Two Workflow Paths**

1. Automated Sheet Processing: Add URLs to a Google Sheet → auto-extract → save results to the sheet
2. Webhook API: Send a POST request with a video URL → get an instant transcript response

**How to set up**

- Replace the "Dummy YouTube Transcript API" credentials with your YouTube Transcript API key
- Create your own Google Sheet with columns: "url" (input sheet) and "video title", "transcript" (results sheet)
- Update the Google Sheets credentials to connect your sheets
- Test each workflow path separately
- Customize the webhook path and authentication as needed

**Requirements**

- YouTube Transcript API access (youtube-transcript.io or similar)
- Google Sheets API credentials (for the automated workflow)
- n8n instance (cloud or self-hosted)
- YouTube videos

**How to customize**

- Modify transcript processing in the Code nodes
- Add additional metadata extraction
- Connect to other storage solutions (databases, CMS)
- Add text analysis or summarization steps
- Set up notifications for new transcripts
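The "Smart Parsing" step above can be sketched as a single helper covering the listed URL shapes. This is an illustrative version, not the template's exact Code node:

```javascript
// Sketch of video-ID parsing for the common YouTube URL formats:
// youtube.com/watch?v=ID, youtu.be/ID, and youtube.com/embed/ID.
// YouTube video IDs are 11 characters of [A-Za-z0-9_-].
function extractVideoId(url) {
  const patterns = [
    /[?&]v=([A-Za-z0-9_-]{11})/,      // youtube.com/watch?v=ID
    /youtu\.be\/([A-Za-z0-9_-]{11})/, // youtu.be/ID
    /\/embed\/([A-Za-z0-9_-]{11})/,   // youtube.com/embed/ID
  ];
  for (const re of patterns) {
    const m = url.match(re);
    if (m) return m[1];
  }
  return null; // lets the workflow fall back gracefully
}
```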
by Abdulrahman Alhalabi
Arabic OCR Telegram Bot How it Works Receive PDF Files - Users send PDF documents via Telegram to the bot OCR Processing - Mistral AI's OCR service extracts Arabic text from document pages Text Organization - Processes and formats extracted content with page numbers Create Google Doc - Generates a formatted document with all extracted text Deliver Results - Sends users a clickable link to their processed document Set up Steps Setup Time: ~20 minutes Create Telegram Bot - Get bot token from @BotFather on Telegram Configure APIs - Set up Mistral AI OCR and Google Docs API credentials Set Folder Permissions - Create Google Drive folder for storing results Test Bot - Send a sample Arabic PDF to verify OCR accuracy Deploy Webhook - Activate the Telegram webhook for real-time processing Detailed API configuration and Arabic text handling notes are included as sticky notes within the workflow. What You'll Need: Telegram Bot Token (free from @BotFather) Mistral AI API key (OCR service) Google Docs/Drive API credentials Google Drive folder for document storage Sample Arabic PDF files for testing Key Features: Real-time progress updates (5-step process notifications) Automatic page numbering in Arabic Direct Google Docs integration Error handling for non-PDF files
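The "Text Organization" step above can be sketched in a few lines. The page-heading format here is an assumption, shown only to illustrate how per-page OCR output might be stitched together with page numbers before the Google Doc is created:

```javascript
// Illustrative sketch: join the OCR output of each page into one document
// body, with a page-number heading per page ("صفحة" = "page" in Arabic).
// Adjust the heading format to taste.
function assemblePages(pages) {
  return pages
    .map((text, i) => `--- صفحة ${i + 1} ---\n${text.trim()}`)
    .join("\n\n");
}
```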
by Cheng Siong Chin
Introduction Exams create significant stress for students. This workflow automates syllabus analysis and predicts exam trends using AI, helping educators and students better prepare for GCE 'O' Level Mathematics in Singapore. How It Works Trigger → Fetch Syllabus → Extract & Prepare Data → Load History → AI Analyze → Parse → Format → Convert → Publish → Notify Workflow Template Manual Trigger → Fetch O-Level Math Syllabus → Extract Syllabus Text → Prepare Analysis Data → Load Historical Context → AI Analysis Agent → Parse AI Output → Format Report → Convert to HTML → Publish to WordPress → Send Slack Summary Data Collection & AI Processing HTTP retrieves O-Level Math syllabus from SEAB and extracts text. Loads 3-5 years exam history. OpenRouter compares syllabus vs trends, predicts topics with confidence scores. Report Generation & Publishing Formats AI insights to Markdown (topics, trends, recommendations), converts to HTML. Auto-publishes to WordPress and sends Slack summary with report link. Workflow Steps Fetch & extract syllabus from SEAB site Load historical exam content AI analyzes syllabus + trends via OpenRouter model Parse and format AI output to Markdown/HTML Auto-publish report to WordPress and Slack Setup Instructions Connect HTTP node to SEAB syllabus URL Configure OpenRouter AI model with API key Set WordPress and Slack credentials for publishing Prerequisites OpenRouter account, WordPress API access, Slack webhook, SEAB syllabus link. Use Cases Predict 2025 GCE Math topics, generate AI insights, publish summaries for educators. Customization Adapt for other subjects or boards by changing syllabus source and analysis prompt. Benefits Enables fast, data-driven exam forecasting and automated report publication.
by Richard Besier
📤 Search Products from Facebook Ads on Amazon Once connected, this automation automatically scrapes Facebook ads from a specific Facebook Ad Library URL and searches for that same product on Amazon. Can be useful for Amazon FBA or dropshipping. 🔨 Setup This automation workflow is connected with two Apify scrapers. Make sure to connect the two scrapers mentioned in the blue and orange box, with their specific API endpoints. 👋 Need Help? If you need further help, or want a specific automation to be built for you, feel free to contact me via richard@advetica-systems.com.
by Avkash Kakdiya
**How it works**

This workflow automates the generation of ad-ready product images by combining product and influencer photos with AI styling. It runs on a scheduled trigger, fetches data from Google Sheets, and retrieves product and influencer images from Google Drive. The images are processed with OpenAI and OpenRouter to generate enhanced visuals, which are then saved back to Google Drive. Finally, the result is logged into Google Sheets with a ready-to-publish status.

**Step-by-step**

1. Trigger & data preparation
- **Schedule Trigger** – Runs the workflow automatically on a set schedule.
- **Google Sheets (Get the Raw)** – Retrieves today's product and model URLs.
- **Google Drive (Download Product Image)** – Downloads the product image.
- **Google Drive (Download Influencer Image)** – Downloads the influencer image.
- **Extract from File (Binary → Base64)** – Converts both product and model images for AI processing.

2. AI analysis & image generation
- **OpenAI (Analyze Image)** – Creates an ad-focused visual description (lighting, mood, styling).
- **HTTP Request (OpenRouter Gemini)** – Generates an AI-enhanced image combining product + influencer.
- **Code Node (Cleanup)** – Cleans the base64 output to remove extra prefixes.
- **Convert to File** – Transforms the AI output into a proper image file.

3. Save & update
- **Google Drive (Upload Image)** – Uploads the generated ad image to the target folder.
- **Google Sheets (Append/Update Row)** – Stores the Drive link and updates the publish status.

**Why use this?**

- Automates the entire ad image creation process without manual design work.
- Ensures product visuals are consistent, styled, and ad-ready.
- Saves final creatives in Google Drive for easy access and sharing.
- Keeps campaign tracking organized by updating Google Sheets automatically.
- Scales daily ad production efficiently for multiple products or campaigns.
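The Code Node (Cleanup) step above, which strips extra prefixes from the base64 output, can be sketched like this. It is an illustrative version, assuming the model returns a data-URI-style prefix:

```javascript
// Sketch of the Cleanup step: strip a data-URI prefix such as
// "data:image/png;base64," so that only the raw base64 payload remains
// for the Convert to File node.
function cleanBase64(output) {
  return output.replace(/^data:image\/[a-z+.-]+;base64,/i, "").trim();
}
```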
by DIGITAL BIZ TECH
**AI Website Scraper & Company Intelligence**

**Description**

This workflow automates the process of transforming any website URL into a structured, intelligent company profile. It's triggered by a form, allowing a user to submit a website and choose between a "basic" or "deep" scrape. The workflow extracts key information (mission, services, contacts, SEO keywords), stores it in a structured Supabase database, and archives a full JSON backup to Google Drive. It also features a secondary AI agent that automatically finds and saves competitors for each company, building a rich, interconnected database of company intelligence.

**Quick Implementation Steps**

1. Import the Workflow: Import the provided JSON file into your n8n instance.
2. Install Custom Community Nodes: You must install the community node from 👉 https://www.npmjs.com/package/n8n-nodes-crawl-and-scrape, as well as n8n-nodes-mcp (used by the Firecrawl MCP agent). Firecrawl's n8n documentation: 👉 https://docs.firecrawl.dev/developer-guides/workflow-automation/n8n
3. Set up Credentials: Create credentials in n8n for the Firecrawl API, Supabase, Mistral AI, and Google Drive.
4. Configure API Key (CRITICAL): Open the Web Search tool node. Go to Parameters → Headers and replace the hardcoded Tavily AI API key with your own.
5. Configure Supabase Nodes: Assign your Supabase credential to all Supabase nodes. Ensure table names (e.g., companies, competitors) match your schema.
6. Configure Google Drive Nodes: Assign your Google Drive credential to the Google Drive2 and save to Google Drive1 nodes. Select the correct Folder ID.
7. Activate Workflow: Turn on the workflow and open the Webhook URL in the "On form submission" node to access the form.

**What It Does**

Form Trigger: Captures user input: "Website URL" and "Scraping Type" (basic or deep).

Scraping Router: A Switch node routes the flow:
- **Deep Scraping →** AI-based MCP Firecrawler agent.
- **Basic Scraping →** Crawlee node.
**Deep Scraping (Firecrawl AI Agent)**: Uses Firecrawl and Tavily Web Search. Extracts a detailed JSON profile: mission, services, contacts, SEO keywords, etc.

**Basic Scraping (Crawlee)**: Uses the Crawl and Scrape node to collect raw text. A Mistral-based AI extractor structures the data into JSON.

**Data Storage**: Stores structured data in Supabase tables (companies, company_basicprofiles). Archives a full JSON backup to Google Drive.

**Automated Competitor Analysis**: Runs after a deep scrape. Uses Tavily web search to find competitors (e.g., from Crunchbase). Saves competitor data to Supabase, linked by company_id.

**Who's It For**

- **Sales & Marketing Teams:** Enrich leads with deep company info.
- **Market Researchers:** Build structured, searchable company databases.
- **B2B Data Providers:** Automate company intelligence collection.
- **Developers:** Use as a base for RAG or enrichment pipelines.

**Requirements**

- **n8n instance** (self-hosted or cloud)
- **Supabase Account:** with tables like companies, competitors, social_links, etc.
- **Mistral AI API Key**
- **Google Drive Credentials**
- **Tavily AI API Key**
- (Optional) Custom Nodes: n8n-nodes-crawl-and-scrape

**How It Works — Flow Summary**

1. Form Trigger: captures "Website URL" and "Scraping Type".
2. Switch Node: deep → MCP Firecrawler (AI Agent); basic → Crawl and Scrape node.
3. Scraping & Extraction: deep path: Firecrawler → JSON structure; basic path: Crawlee → Mistral extractor → JSON.
4. Storage: save JSON to Supabase; archive in Google Drive.
5. Competitor Analysis (deep only): finds competitors via Tavily; saves to the Supabase competitors table.
6. End: finishes with a No Operation node.

**How To Set Up**

1. Import the workflow JSON.
2. Install the community nodes (especially n8n-nodes-crawl-and-scrape from npm).
3. Configure credentials (Supabase, Mistral AI, Google Drive).
4. Add your Tavily API key.
5. Connect the Supabase and Drive nodes properly.
6. Fix the disconnected "basic" path if needed.
7. Activate the workflow.
8. Test via the webhook form URL.

**How To Customize**

- **Change LLMs:** Swap Mistral for OpenAI or Claude.
- **Edit Scraper Prompts:** Modify the system prompts in the AI agent nodes.
- **Change Extraction Schema:** Update the JSON Schema in the extractor nodes.
- **Fix Relational Tables:** Add an Items node before Supabase inserts for arrays (social links, keywords).
- **Enhance Automation:** Add email/Slack notifications, or replace the form trigger with a Google Sheets trigger.

**Add-ons**

- **Automated Trigger:** Run on new sheet rows.
- **Notifications:** Email or Slack alerts after completion.
- **RAG Integration:** Use the Supabase database as a chatbot knowledge source.

**Use Case Examples**

- **Sales Lead Enrichment:** Instantly get company + competitor data from a URL.
- **Market Research:** Collect and compare companies in a niche.
- **B2B Database Creation:** Build a proprietary company dataset.

**Troubleshooting Guide**

| Issue | Possible Cause | Solution |
|-------|----------------|----------|
| Form Trigger 404 | Workflow not active | Activate the workflow |
| Web Search Tool fails | Missing Tavily API key | Replace the placeholder key |
| FIRECRAWLER / find competitor fails | Missing MCP node | Install n8n-nodes-mcp |
| Basic scrape does nothing | Switch node path disconnected | Reconnect "basic" output |
| Supabase node error | Wrong table/column names | Match schema exactly |

**Need Help or More Workflows?**

Want to customize this workflow for your business or integrate it with your existing tools? Our team at Digital Biz Tech can tailor it precisely to your use case, from automation logic to AI-powered enhancements.

Contact: shilpa.raju@digitalbiz.tech

For more such offerings, visit us: https://www.digitalbiz.tech
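The "Fix Relational Tables" tip above — fanning array fields out into one row per entry before a Supabase insert — can be sketched as follows; the field names are illustrative:

```javascript
// Sketch of splitting an array field (e.g. social links) into one row per
// entry, carrying the parent company_id so the relation survives. In n8n
// this is what an Items/Split Out step does before the Supabase insert.
function toChildRows(company) {
  return (company.social_links || []).map((url) => ({
    company_id: company.id, // foreign key back to the companies table
    url,
  }));
}
```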
by Summer
**Website Leads to Voice Demo and Scheduling**

Creator: Summer Chang

**AI Booking Agent Setup Guide**

**Overview**

This automation turns your website into an active booking agent. When someone fills out your form, it automatically:

- Adds their information to Notion
- Researches their business from their website with AI
- Calls them immediately with a personalized pitch
- Updates Notion with the call results

Total setup time: 30-45 minutes

**What You Need**

Before starting, create accounts and gather these:

- n8n account (cloud or self-hosted)
- Notion account – the free plan works; duplicate my Notion template
- OpenRouter API key – get from openrouter.ai
- Vapi account – get from vapi.ai: create an AI assistant, set up a phone number, and copy your API key, Assistant ID, and Phone Number ID

**How It Works — The Complete Flow**

1. A visitor fills out the form on your website.
2. The form submission creates a new record in Notion with Status = "New".
3. The Notion Trigger detects the new record (checks every minute).
4. The Main Workflow executes: fetches the lead's website, AI analyzes their business, updates Notion with the analysis, and makes a Vapi call with a personalized intro.
5. The call happens between your AI agent and the lead.
6. When the call ends, Vapi sends a webhook to n8n.
7. The Webhook Workflow executes: fetches call details from Vapi, AI generates a call summary, and updates Notion with the results and recording.
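The Vapi call in step 4 can be sketched as an HTTP request builder. The endpoint and body shape below follow Vapi's public phone-call API as I understand it; verify the details against the current Vapi API reference before relying on them:

```javascript
// Hedged sketch of the outbound-call step: build the HTTP request the
// workflow sends to Vapi to start a personalized call. Endpoint and body
// shape are assumptions — check Vapi's API reference.
function buildVapiCallRequest({ apiKey, assistantId, phoneNumberId, lead, pitch }) {
  return {
    method: "POST",
    url: "https://api.vapi.ai/call",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: {
      assistantId,
      phoneNumberId,
      customer: { number: lead.phone, name: lead.name },
      // Pass the AI-generated business analysis to the assistant as a variable.
      assistantOverrides: { variableValues: { pitch } },
    },
  };
}
```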
by Rajeet Nair
**Overview**

This workflow implements a complete Retrieval-Augmented Generation (RAG) system for document ingestion and intelligent querying. It allows users to upload documents, convert them into vector embeddings, and query them using natural language. The system retrieves relevant document context and generates accurate AI responses while using caching to improve performance and reduce costs. This workflow is ideal for building AI knowledge bases, document assistants, and internal search systems.

**How It Works**

1. Input & Configuration
- Receives requests via webhook (rag-system)
- Supports two actions: upload → process documents; query → answer questions
- Defines: chunk size & overlap, TopK retrieval count, database table names

2. Document Upload Flow
- Text Extraction: extracts text from uploaded PDF documents
- Text Chunking: splits text into overlapping chunks for better retrieval accuracy
- Document Structuring: converts chunks into structured documents
- Embedding Generation: generates vector embeddings using OpenAI
- Vector Storage: stores embeddings in PGVector (Postgres)
- Upload Logging: logs document metadata (user, filename, timestamp)
- Response: returns a success message via webhook

3. Query Flow
- Cache Check: checks if the query result exists in the cache (last 1 hour)
- Cache Routing: if cached → return the cached response; if not → proceed to retrieval

4. Cache Hit Flow
- Format Cached Response: standardizes the cached output format
- Respond to User: returns the cached answer with cached: true

5. Cache Miss Flow
- Vector Retrieval: retrieves the top relevant document chunks from PGVector
- AI Answer Generation: uses an LLM with the retrieved context to generate an accurate, context-based answer
- Cache Storage: saves the query + response in the database for reuse
- Response: returns the generated answer with cached: false

**Setup Instructions**

1. Webhook Setup: configure the endpoint (rag-system). Send a payload with: action: upload / query, user_id, document or query.
2. OpenAI Setup: add API credentials for embeddings and the chat model.
3. Postgres + PGVector: enable the PGVector extension and create the tables: documents, query_cache, upload_log.
4. Configure Parameters: adjust chunk size (e.g., 1000), overlap (e.g., 200), and TopK (e.g., 5).
5. Optional Enhancements: add an authentication layer; add multi-tenant filtering (user_id).

**Use Cases**

- AI document search systems
- Internal knowledge base assistants
- Customer support knowledge retrieval
- Legal or compliance document analysis
- SaaS AI chat with custom data

**Requirements**

- OpenAI API key
- Postgres database with PGVector
- n8n instance (cloud or self-hosted)

**Key Features**

- Full RAG architecture (upload + query)
- PDF document ingestion pipeline
- Semantic search with vector embeddings
- Context-aware AI responses
- Query caching for performance optimization
- Multi-user support via metadata filtering
- Scalable and modular design

**Summary**

A complete RAG-based AI system that enables document ingestion, semantic search, and intelligent query answering. It combines vector databases, LLMs, and caching to deliver fast, accurate, and scalable AI-powered knowledge retrieval.
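The text-chunking step can be sketched with the example parameters from the setup (chunk size 1000, overlap 200). This is a character-based sketch; the actual node may split on token or sentence boundaries instead:

```javascript
// Sketch of overlapping text chunking: each chunk is `chunkSize` characters,
// and consecutive chunks share `overlap` characters so that context spanning
// a chunk boundary is still retrievable.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  const step = chunkSize - overlap; // advance by 800 chars per chunk
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```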
by Rajeet Nair
**Overview**

This workflow implements a policy-driven LLM orchestration system that dynamically routes AI tasks to different language models based on task complexity, policies, and performance constraints. Instead of sending every request to a single model, the workflow analyzes each task, applies policy rules, and selects the most appropriate model for execution. It also records telemetry data such as latency, token usage, and cost, enabling continuous optimization.

A built-in self-tuning mechanism runs weekly to analyze historical telemetry and automatically update routing policies. This allows the system to improve cost efficiency, performance, and reliability over time without manual intervention. This architecture is useful for teams building AI APIs, agent platforms, or multi-model LLM systems where intelligent routing is needed to balance cost, speed, and quality.

**How It Works**

1. Webhook Task Input: the workflow begins when a request is sent to the webhook endpoint. The request contains a task and optional priority metadata.
2. Task Classification: a classifier agent analyzes the task and categorizes it as extraction, classification, reasoning, or generation. The agent also returns a confidence score.
3. Policy Engine: policy rules are loaded from a database. These rules define execution constraints such as preferred model size, latency limits, token budgets, retry strategies, and cost ceilings.
4. Model Routing: a decision engine evaluates the classification results and policy rules. Tasks are routed to either a small model (fast and cost-efficient) or a large model (higher reasoning capability).
5. Task Execution: the selected LLM processes the task and generates the response.
6. Telemetry Collection: execution metrics are captured, including latency, tokens used, estimated cost, model used, and success status. These metrics are stored in a database.
7. Weekly Self-Optimization: a scheduled workflow analyzes telemetry from the past 7 days.
If performance trends change, routing policies are automatically updated.

**Setup Instructions**

1. Configure a Postgres database: create two tables, policy_rules and telemetry.
2. Add LLM credentials: configure Anthropic credentials for the language model nodes.
3. Configure policy rules: define preferred models, cost limits, and latency thresholds in the policy_rules table.
4. Configure workflow settings: adjust the parameters in the Workflow Configuration node — maximum latency, cost ceiling, token limits, retry behavior.
5. Deploy the API endpoint: send requests to the webhook endpoint.

**Use Cases**

- AI API Gateway: route requests to different models based on complexity and cost constraints.
- Multi-Model AI Platforms: automatically choose the best model for each task without manual configuration.
- Cost-Optimized AI Systems: prefer smaller models for simple tasks while reserving larger models for complex reasoning.
- LLM Observability: track token usage, latency, and cost for each AI request.
- Self-Optimizing AI Infrastructure: automatically improve routing policies using real execution telemetry.

**Requirements**

- n8n with LangChain nodes enabled
- Postgres database
- Anthropic API credentials
- Tables: policy_rules, telemetry
- Optional: monitoring dashboards connected to telemetry data; external policy management systems
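The model-routing decision (step 4 above) can be sketched as a small policy function. The task types come from the classifier; the field names and thresholds here are illustrative, not the workflow's exact logic:

```javascript
// Sketch of the routing decision: combine the classifier's output with a
// policy row to pick a small (fast, cheap) or large (stronger reasoning)
// model. Low-confidence classifications are escalated to the large model,
// but only while the policy's cost ceiling allows it.
function routeModel(classification, policy) {
  const heavy = ["reasoning", "generation"].includes(classification.type);
  const uncertain = classification.confidence < policy.minConfidence;
  if ((heavy || uncertain) && policy.costCeiling >= policy.largeModelCost) {
    return policy.largeModel;
  }
  return policy.smallModel;
}
```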
by Dietmar
**Build a PDF to Vector RAG System: Mistral OCR, Weaviate Database and MCP Server**

A comprehensive RAG (Retrieval-Augmented Generation) workflow that transforms PDF documents into searchable vector embeddings using advanced AI technologies.

🚀 Features

- **PDF Document Processing**: Upload and extract text from PDF files using Mistral's OCR capabilities
- **Vector Database Storage**: Store document embeddings in the Weaviate vector database for efficient retrieval
- **AI-Powered Search**: Search through documents using semantic similarity with Cohere embeddings
- **MCP Server Integration**: Expose the knowledge base as an AI tool through MCP (Model Context Protocol)
- **Document Metadata**: Basic document metadata including filename, content, source, and upload timestamp
- **Text Chunking**: Automatic text splitting for optimal vector storage and retrieval

🛠️ Technologies Used

- **Mistral AI**: OCR and text extraction from PDF documents
- **Weaviate**: Vector database for storing and retrieving document embeddings
- **Cohere**: Multilingual embeddings and reranking for improved search accuracy
- **MCP (Model Context Protocol)**: AI tool integration for external AI workflows
- **n8n**: Workflow automation and orchestration

📋 Prerequisites

Before using this template, you'll need to set up the following credentials:

- Mistral Cloud API: for PDF text extraction
- Weaviate API: for vector database operations
- Cohere API: for embeddings and reranking
- HTTP Header Auth: for MCP server authentication

🔧 Setup Instructions

1. Import the template into your n8n instance
2. Configure credentials for all required services
3. Set up a Weaviate collection named "KnowledgeDocuments"
4. Configure webhook paths for the MCP server and form trigger
5. Test the workflow by uploading a PDF document

📊 Workflow Overview

PDF Upload → Text Extraction → Document Processing → Vector Storage → AI Search, with each stage handled by a node: Form Trigger → Mistral OCR → Prepare Metadata → Weaviate DB → MCP Server.

🎯 Use Cases

- **Knowledge Base Management**: Create searchable repositories of company documents
- **Research Documentation**: Process and search through research papers and reports
- **Legal Document Search**: Index and search through legal documents and contracts
- **Technical Documentation**: Make technical manuals and guides searchable
- **Academic Literature**: Process and search through academic papers and publications

⚠️ Important Notes

- **Model Consistency**: Use the same embedding model for both storage and retrieval
- **Collection Management**: Ensure your Weaviate collection is properly configured
- **API Limits**: Be aware of rate limits for the Mistral, Cohere, and Weaviate APIs
- **Document Size**: Consider chunking large documents for optimal processing

🔗 Related Resources

- n8n Documentation
- Weaviate Documentation
- Mistral AI Documentation
- Cohere Documentation
- MCP Protocol Documentation

📝 License

This template is provided as-is for educational and commercial use.