by Jimleuk
This n8n workflow demonstrates how to manage your Qdrant vector store when you need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring your chatbot assistant is never outdated or misleading.

**Disclaimer**
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read-file nodes.

**How it works**
- A local directory where bank statements are downloaded is monitored via a local file trigger. The trigger watches for the file-created, file-changed and file-deleted events.
- When a file is created, its contents are uploaded to the vector store.
- When a file is updated, its previous records are replaced (see the sketch at the end of this entry).
- When a file is deleted, the corresponding records are also removed from the vector store.
- A simple question-and-answer chatbot is set up to answer any questions about the bank statements in the system.

**Requirements**
- A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
- A Qdrant instance to store the records.

**Customising the workflow**
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.

Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
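The replace-on-update step hinges on Qdrant's delete-by-filter API: records are stored with the source file's path in their payload, so stale points can be cleared before the new contents are re-embedded. A minimal sketch, assuming a local Qdrant instance, a collection named `bank_statements`, and a `metadata.file_path` payload field (all names illustrative, not the template's exact configuration):

```javascript
// Delete all points whose payload references the changed file, then re-insert fresh embeddings.
// Assumes Qdrant's REST API at localhost:6333; collection and payload field names are hypothetical.
await fetch("http://localhost:6333/collections/bank_statements/points/delete", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    filter: {
      must: [
        { key: "metadata.file_path", match: { value: "/data/statements/2024-01.pdf" } },
      ],
    },
  }),
});
```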
by Eric Francis
**How it works**
This workflow reads a list of URLs every 15 minutes and sends an HTTP request to every URL on the list.

**Set up steps**
- Schedule the workflow to run at your desired frequency (the default is every 15 minutes).
- Add your desired URLs to the list. The list should follow the format shown in the example at the end of this entry: don't forget to put single quotes around every URL and separate each one with a comma!
- Turn the workflow ON.

**Ideas to customize the workflow for your own use cases**
- Change the HTTP method
- Add headers
- Add a request body
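For reference, here is what the URL list might look like; the URLs themselves are hypothetical placeholders to substitute with your own:

```javascript
// Each URL wrapped in single quotes, separated by commas.
const urls = [
  'https://example.com/webhook/health-check',
  'https://api.example.org/v1/ping',
  'https://status.example.net/heartbeat',
];
```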
by Jonathan
**How it works**
This template uses a Slack app to connect with your Google Calendar, generate an instant Google Meet link, and post it as a message in a Slack channel.

**Setup steps**
- First, you'll need to create a Slack app.
- Authenticate and connect your Slack account.
- Connect and choose the Google Calendar you want to generate Google Meet links for.
- Customize your Slack message.

Then, using a /meet command in Slack, you can instantly generate and post your Google Meet links.
by Oskar
With this workflow you can extract data from resume documents uploaded via a Telegram bot. The workflow transforms the readable content of a PDF resume into structured data using AI nodes and returns a new PDF built from formatted, plain HTML. You can modify this workflow to perform other actions with the structured data (e.g. insert it into a database or create other well-formatted documents).

The functionality of this workflow was presented during the n8n community call on March 7, 2024 - a recording of the presentation is available here.

⚠️ This workflow was made for demo purposes. If you want to use it in real life, please make sure the necessary measures for personal data protection are in place.

**How it works**
- The user uploads a readable PDF resume document to the Telegram bot.
- After authentication based on the chat ID parameter, the workflow extracts text from the PDF and transfers it into an AI chain with connected sub-nodes: an OpenAI Chat Model and a Structured Output (JSON) Parser.
- Each extracted section (employment history, projects etc.) is then formatted into the desired HTML structure.
- Finally, the document is converted into a new, structured PDF using Gotenberg.

💡 This workflow requires an installed Gotenberg instance (see the sketch at the end of this entry). If you are not familiar with this software, please have a look at my YouTube tutorial. You can also replace the call to Gotenberg with another PDF generation service (such as PDFMonkey or ApiTemplate).

**Set up steps**
- Create a Telegram bot and add its credentials in n8n.
- Set your chat ID parameter in the Auth node.
- Adjust the JSON schema in the Structured Output Parser according to your needs.
- Optionally: replace the HTTP call to Gotenberg with a PDF generation service of your choice.

If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
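For orientation, the Gotenberg call boils down to a single multipart POST of the generated HTML to its Chromium conversion route. A minimal sketch, assuming Gotenberg v7+ running on localhost:3000 (the HTML content is illustrative):

```javascript
// Convert an HTML string to PDF via Gotenberg's Chromium route.
// Gotenberg expects the main document to be uploaded as a file named "index.html".
const htmlString = "<html><body><h1>Structured resume</h1></body></html>"; // illustrative

const form = new FormData();
form.append("files", new Blob([htmlString], { type: "text/html" }), "index.html");

const res = await fetch("http://localhost:3000/forms/chromium/convert/html", {
  method: "POST",
  body: form,
});
const pdfBytes = Buffer.from(await res.arrayBuffer()); // the finished PDF
```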
by Mihai Farcas
This workflow implements a Retrieval-Augmented Generation (RAG) chatbot that answers employee questions based on company documents stored in Google Drive. It automatically indexes new or updated documents in a Pinecone vector database, allowing the chatbot to provide accurate and up-to-date information. The workflow uses Google's Gemini AI for both embeddings and response generation.

**How it works**
The workflow uses two Google Drive Trigger nodes: one for detecting new files added to a specified Google Drive folder, and another for detecting file updates in that same folder.

Automated indexing, when a new or updated document is detected:
- The Google Drive node downloads the file.
- The Default Data Loader node loads the document content.
- The Recursive Character Text Splitter node breaks the document into smaller text chunks.
- The Embeddings Google Gemini node generates embeddings for each text chunk using the text-embedding-004 model (see the sketch at the end of this entry).
- The Pinecone Vector Store node indexes the text chunks and their embeddings in a specified Pinecone index.

Chat flow:
- The Chat Trigger node receives user questions through a chat interface.
- The user's question is passed to an AI Agent node.
- The AI Agent node uses a Vector Store Tool node, linked to a Pinecone Vector Store node in query mode, to retrieve relevant text chunks from Pinecone based on the user's question.
- The AI Agent sends the retrieved information and the user's question to the Google Gemini Chat Model (gemini-pro).
- The Google Gemini Chat Model generates a comprehensive and informative answer based on the retrieved documents.
- A Window Buffer Memory node connected to the AI Agent provides short-term memory, allowing for more natural and context-aware conversations.

**Set up steps**
- Google Cloud project and Vertex AI API: create a Google Cloud project and enable the Vertex AI API for it.
- Google AI API key: obtain a Google AI API key from Google AI Studio.
- Pinecone account: create a free account on the Pinecone website, obtain your API key from your Pinecone dashboard, and create an index named company-files in your Pinecone project.
- Google Drive: create a dedicated folder in your Google Drive where company documents will be stored.
- Credentials in n8n: configure credentials in your n8n environment for Google Drive OAuth2, Google Gemini (PaLM) API (using your Google AI API key), and Pinecone API (using your Pinecone API key).
- Import the workflow: import this workflow into your n8n instance.
- Configure the workflow: update both Google Drive Trigger nodes to watch the specific folder you created in your Google Drive, and configure the Pinecone Vector Store nodes to use your company-files index.
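Outside of n8n, the embedding step is roughly equivalent to this sketch using Google's Generative AI SDK (the model name comes from the workflow description; the chunk text and environment variable name are illustrative):

```javascript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_AI_API_KEY);
const model = genAI.getGenerativeModel({ model: "text-embedding-004" });

// Embed one text chunk; the resulting vector is what the Pinecone node indexes.
const { embedding } = await model.embedContent("First chunk of the company handbook...");
console.log(embedding.values.length); // 768 dimensions for text-embedding-004
```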
by Dhruv Dalsaniya
This workflow connects Telegram to Midjourney through GoAPI, enabling automated image generation and upscaling directly from chat.

**How it works**
- **Telegram Command Trigger**: The workflow activates upon receiving a message in Telegram.
- **Image Generation**: Your prompt is sent to GoAPI, which then generates an image using Midjourney.
- **Upscale Selection**: You receive the generated image and select an option for upscaling.
- **Image Upscaling**: The selected image is upscaled via GoAPI.
- **Notifications and Logs**: Progress updates are sent to Telegram, and all images are logged in Discord.

**Set up steps**
- Create a Telegram bot and update the credentials in the Telegram nodes.
- Create a GoAPI account, obtain an API key, and update the three HTTP nodes: "Get Generation Task", "Upscale", and "Get Upscale Task".
- (Optional) Configure the Discord node for logging if desired.

Setup takes approximately 10-15 minutes. Detailed descriptions are available in sticky notes within the workflow.
by Mind-Front
**Workflow Description**
This workflow is a powerful, fully automated web query and semantic reranking system that lets users perform precise, detailed searches, intelligently rank the results, and receive high-quality, structured output. Built with AI-powered components, the workflow leverages semantic query generation, result re-ranking, and real-time reporting to deliver actionable insights. It is particularly well suited to real-time data retrieval, market research, and any domain requiring automated yet customizable search result processing.

**How It Works**
- **Webhook Integration for Input**: The workflow begins with a Webhook node that captures the user's search query as input, enabling seamless integration with other systems.
- **Step 1 - Semantic Query Generation (powered by "Semantic Search - Query Maker")**: Using AI (Google Gemini), the initial query is refined and transformed into a context-aware, expert-level search query, ensuring the search engine retrieves the most relevant and precise results.
- **Step 2 - Web Search Execution**: The free Brave Search API processes the refined query to fetch search results, ensuring speed and cost efficiency.
- **Step 3 - Semantic Re-Ranking of Results (powered by "Semantic Search - Result Re-Ranker")**: The workflow reranks the search results based on relevance to the original question, dynamically prioritizing the most relevant URLs. Results pass through AI-powered intelligent reranking so the final output reflects optimal relevance and quality.
- **Step 4 - Structured Output Generation**: Results are converted into a well-structured, organized JSON format, ranking the top 10 search results with their titles, links, and descriptions. Missing ranks (if fewer than 10 results) are handled gracefully with placeholders, ensuring consistency.
- **Step 5 - Real-Time Reporting**: The reranked search results are sent back to the user or integrated system via the Webhook node in a JSON-formatted response (a hypothetical example follows this entry). Reports are highly structured and ready for downstream processing or consumption.

**Key Features**
- **AI-Powered Query Refinement**: Transforms basic queries into detailed, expert-level search terms for optimal results.
- **Dual-Stage Semantic Search**: Combines query generation and result reranking for precise, high-relevance outputs.
- **Top 10 Result Reranking**: Dynamically ranks and organizes the top 10 results based on semantic relevance to the query.
- **Customizable Integration**: Fully modifiable for alternative APIs or integrations, such as other search engines or custom ranking logic.
- **JSON-Formatted Structured Results**: Outputs reranked results in a standardized format, ideal for integration into systems requiring machine-readable data.
- **Webhook-Based Flexibility**: Works seamlessly with webhook inputs for easy deployment in diverse workflows.
- **Cost-Effective API Usage**: Pre-integrated with the free Brave Search API, minimizing operational costs while delivering accurate search results.

**Instructions for API Setup**
- **Brave Search API**: Visit api.search.brave.com to obtain a free-tier API key for web search.
- **AI Integration (Google Gemini)**: Visit Google AI Studio and generate an API key for semantic query generation and reranking.
- **Webhook Configuration**: Set up the input webhook to capture search queries and the output webhook to deliver reranked results.

**Why Choose This Workflow?**
- **Precision and Relevance**: Combines AI-based query generation with advanced reranking for accurate results.
- **Fully Customizable**: Easily adapt the workflow to alternative APIs, search engines, or ranking logic.
- **Real-Time Insights**: Provides structured, real-time output ready for immediate use.
- **Scalable and Modular**: Ideal for businesses, researchers, and data analysts needing a robust, repeatable solution.

**Tags**
AI Workflow, Semantic Search, Query Refinement, Search Result Reranking, Real-Time Search, Web Search Automation, Google Search, Brave Search, News Search, API Integration, Market Research, Competitive Intelligence, Business Intelligence, Google Gemini, Anthropic Claude, OpenAI, GPT, LLM
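The webhook's JSON response might look like the following hypothetical shape; the field names and content are illustrative, not the template's exact schema:

```json
{
  "query": "best open-source vector databases 2024",
  "results": [
    { "rank": 1, "title": "Qdrant - Vector Database", "link": "https://qdrant.tech", "description": "Open-source vector similarity search engine..." },
    { "rank": 2, "title": "Weaviate", "link": "https://weaviate.io", "description": "AI-native open-source vector database..." },
    { "rank": 10, "title": "N/A", "link": "N/A", "description": "Placeholder for a missing result" }
  ]
}
```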
by David Ashby
🛠️ CircleCI Tool MCP Server

Complete MCP server exposing all CircleCI Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every CircleCI Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example at the end of this entry)
• Native Integration: Uses the official n8n CircleCI Tool node with full error handling

📋 Available Operations (3 total)
Every possible CircleCI Tool operation is included:

🔧 Pipeline (3 operations)
• Get a pipeline
• Get many pipelines
• Trigger a pipeline

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native CircleCI Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every CircleCI Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
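Concretely, each tool node fills its parameters with n8n's `$fromAI()` expression, which the connected agent resolves at call time. A representative example for a pipeline-ID field (the key and description are hypothetical, not the template's exact values):

```
{{ $fromAI('pipeline_id', 'The ID of the CircleCI pipeline to retrieve', 'string') }}
```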
by Niklas Hatje
**Use Case**
This workflow is beneficial when you're automatically adding new leads to your Pipedrive CRM. Usually, you'd have to manually review each lead to determine if they're a good fit. This process is time-consuming and increases the chances of missing important leads. This workflow ensures every new lead is promptly evaluated upon addition.

**What this workflow does**
The workflow runs every 5 minutes. On every run, it checks your new Pipedrive leads and enriches them with Clearbit. It then marks the items as enriched, checks whether the new lead's company matches certain criteria (in this case, whether they are B2B and have more than 100 employees; see the sketch at the end of this entry), and sends a Slack alert to a channel for every match.

**Pre conditions**
You must have Pipedrive, Clearbit, and Slack accounts. You also need to set up the custom fields Domain and Enriched at in Pipedrive.

**Setup**
- Go to Company Settings -> Data fields -> Organization and add Domain as a custom field
- Go to Company Settings -> Data fields -> Leads and add Enriched at as a custom date field
- Add your Pipedrive, Clearbit and Slack credentials.
- Fill the setup node below. To get the IDs of your custom fields, simply run the Show only custom organization fields and Show only custom lead fields nodes below and copy the keys of your Domain and Enriched at fields.

**How to adjust this workflow to your needs**
- Modify the criteria to suit your definition of an interesting lead.
- If you only want to focus on interesting leads in Pipedrive, add a node that archives all others.

This workflow was built using n8n version 1.29.1
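A hypothetical sketch of the match criteria as a Code-node snippet: Clearbit's company payload exposes a `tags` array and `metrics.employees`, which map onto the "B2B with more than 100 employees" rule (the output field name is illustrative):

```javascript
// Flag leads whose enriched company is B2B and has over 100 employees.
const company = $json; // Clearbit enrichment result for one lead
const isB2B = (company.tags ?? []).includes("B2B");
const isLargeEnough = (company.metrics?.employees ?? 0) > 100;
return { json: { ...company, interestingLead: isB2B && isLargeEnough } };
```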
by Ranjan Dailata
**Who this is for**
The Real Estate Intelligence Tracker is a powerful automated workflow designed for real estate analysts, investors, proptech startups, and market researchers who need to collect and analyze structured data from real estate listings across the web at scale.

This workflow is tailored for:
- **Real Estate Analysts** - Tracking property prices, locations, and market trends
- **Investment Firms** - Sourcing high-opportunity listings for portfolio decisions
- **PropTech Developers** - Automating listing insights for SaaS platforms
- **Market Researchers** - Extracting insights from competitive housing data
- **Growth Teams** - Monitoring geographic property trends and pricing fluctuations

**What problem is this workflow solving?**
Collecting structured real estate listing data from property websites is difficult due to bot protections and unstructured HTML content. Manual data collection is slow and error-prone, and traditional scrapers often get blocked or miss context. This workflow solves:
- Automated bypass of anti-bot protection using Bright Data Web Unlocker
- Conversion of unstructured HTML content into clean text using a Markdown-to-text LLM pipeline
- Structured extraction of key listing data like price, location, property type, and features using OpenAI
- Aggregation and delivery of insights to Google Sheets, local storage, and webhook-based alerts

**What this workflow does**
- **Convert to Text**: Transforms scraped HTML/markdown into clean text using a Basic LLM Chain
- **Structured Data Extraction**: Uses OpenAI GPT-4o with the Information Extractor node to parse property attributes (price, address, area, type, etc.)
- **Aggregate & Merge**: Combines data from multiple pages or listings into a cohesive structure
- **Outbound Data Handling**:
  - **Google Sheets** - Appends the structured real estate data for further analysis
  - **Save to Disk** - Persists structured JSON/text data locally
  - **Webhook Notification** - Sends data alerts or summaries to any third-party platform

**Pre-conditions**
- You need a Bright Data account and the setup described in the "Setup" section below.
- You need an OpenAI account.

**Setup**
- Sign up at Bright Data.
- Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
- In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token (a request sketch follows this section).
- In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential
- In n8n, configure the OpenAI account credentials.
- Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
- Set the desired local path in the Write a file to disk node to save the responses.
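Under the hood, the Web Unlocker call is a single authenticated POST. A sketch assuming Bright Data's `https://api.brightdata.com/request` endpoint, a zone named `web_unlocker1`, and a placeholder target URL (substitute your own zone, token, and listing site):

```javascript
// Fetch a listing page through the Web Unlocker zone; "raw" returns the page body as-is.
const res = await fetch("https://api.brightdata.com/request", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.BRIGHT_DATA_TOKEN}`, // the Web Unlocker token from Setup
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    zone: "web_unlocker1",                         // your Web Unlocker zone name
    url: "https://www.example-listings.com/homes", // hypothetical listing URL
    format: "raw",
  }),
});
const html = await res.text();
```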
**How to customize this workflow to your needs**
- **Target multiple sites or locations**
  - Update the Bright Data URL node dynamically with a list of regional real estate websites
  - Loop through different city/state filter URLs
- **Customize extracted fields**
  - Modify the Information Extractor prompt to extract fields like: property size, number of bedrooms/bathrooms, days on market, nearby amenities or schools, agent contact details (a hypothetical schema sketch follows this list)
- **Integrate with more destinations**
  - Add nodes to export data to Notion, Airtable, HubSpot, or your custom database
  - Generate automated reports using PDF generators and email them
- **Data quality and logging**
  - Add validation checks (e.g., missing price or address)
  - Save intermediate files (markdown, raw HTML, JSON output) to disk for audit purposes
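As an illustration, an extended attribute schema for the Information Extractor might look like this hypothetical JSON; adjust field names and types to match your target sites:

```json
{
  "price": "string",
  "address": "string",
  "property_type": "string",
  "area_sqft": "number",
  "bedrooms": "number",
  "bathrooms": "number",
  "days_on_market": "number",
  "nearby_schools": "array",
  "agent_contact": "string"
}
```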
by David Ashby
Complete MCP server exposing 2 Catalog API operations to AI agents.

⚡ Quick Setup
> Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator? All 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Catalog API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Catalog API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example at the end of this entry)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Product (1 endpoint)
• GET /product/{epid}: Get {Epid}

🔧 Product_Summary (1 endpoint)
• GET /product_summary/search: Search Product Summaries

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Catalog API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
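For example, the product-lookup node's URL combines the fixed base with an agent-supplied `epid` via `$fromAI()`; the parameter description below is illustrative, and `{basePath}` stands in for the API's base path as in the node configuration:

```
https://api.ebay.com{basePath}/product/{{ $fromAI('epid', 'The eBay product identifier to look up', 'string') }}
```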
by Nazmy
Based on Jonathan & Solomon's work.

> The only addition I've made is a Set node. This node organizes workflows into subfolders within the GitHub repository based on their respective tags (see the sketch below).

**How it works**
This workflow backs up your workflows to GitHub. It uses the n8n API node to export all workflows, then loops over the data and checks GitHub for an existing file named after the workflow's ID. Once checked, it will:
- update the file on GitHub if it exists;
- create a new file if it doesn't exist;
- skip the file if it is unchanged.

**Who is this for?**
People wanting to back up their workflows outside the server for safety purposes, or to migrate to another server.
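A hypothetical sketch of what the added Set node computes as a Code-node equivalent: a repository path derived from the workflow's first tag (the n8n API returns each workflow with a `tags` array of `{ id, name }` objects; the field names in the output are illustrative):

```javascript
// Build "tag/workflow-name.json" so workflows land in per-tag subfolders.
const wf = $json;                              // one workflow from the n8n API node
const tag = wf.tags?.[0]?.name ?? "untagged";  // fall back when a workflow has no tags
const safeName = wf.name.replace(/[\\/:*?"<>|]/g, "-").trim();
return { json: { filePath: `${tag}/${safeName}.json` } };
```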