by Joseph
📄 Google Script Workflow: Upload File from URL to Google Drive (via n8n)

🔧 Purpose:
This lightweight Google Apps Script acts as a server endpoint that receives a file URL (from n8n), downloads the file, uploads it to your specified Google Drive folder, and responds with the file's metadata (like the Drive file ID and URL). This is useful for large video/audio files that n8n cannot handle directly via HTTP Download nodes.

🚀 Setup Steps:

1. Create a New Script Project
Go to https://script.google.com
Click "New Project"
Rename the project to something like: DriveUploader

2. Paste the Script Code
Replace the default Code.gs content with the following (your custom script):

```javascript
function doPost(e) {
  const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here
  try {
    const data = JSON.parse(e.postData.contents);

    // 🔒 Check for correct secret key
    if (!data.secret || data.secret !== SECRET_KEY) {
      return ContentService.createTextOutput("Unauthorized")
        .setMimeType(ContentService.MimeType.TEXT);
    }

    const videoUrl = data.videoUrl;
    const folderId = 'YOUR_FOLDER_ID_HERE'; // Replace with your target folder ID
    const folder = DriveApp.getFolderById(folderId);

    const response = UrlFetchApp.fetch(videoUrl);
    const blob = response.getBlob();
    const file = folder.createFile(blob);
    file.setName('uploaded_video.mp4'); // You can customize the name

    return ContentService.createTextOutput(file.getUrl())
      .setMimeType(ContentService.MimeType.TEXT);
  } catch (err) {
    return ContentService.createTextOutput("Error: " + err.message)
      .setMimeType(ContentService.MimeType.TEXT);
  }
}
```

3. Generate & Set Up Secret Key
To allow only authorized POST requests to your script, we need to generate a secret key from any reliable key generator. You can head over to acte, click Generate and copy the "Encryption key 256". Paste it into the 'your-strong-secret-here' placeholder in your script, then click Save:

const SECRET_KEY = 'your-strong-secret-here'; // Set your secret key here

4. Replace Folder ID in Code
Open the target Drive folder in your browser. The folder ID is the part of the URL after /folders/
Example: https://drive.google.com/drive/u/0/folders/1Xabc12345678defGHIJklmn
Paste that ID in the script:

const folderId = '1Xabc12345678defGHIJklmn';

5. Set Up Deployment as Web App
Click "Deploy" > "Manage Deployments" > "New Deployment"
Under Select type, choose Web app
Description: Upload from URL to Drive
Execute as: Me
Who has access: Anyone
Click Deploy
Authorize the script when prompted
Copy the Web App URL

📤 How to Use in n8n

1. HTTP Request Node
Method: POST
URL: (your web app URL)
Secret Key: (the secret key set in the script, sent in the JSON body below)
Body Content Type: JSON
Body (JSON):

{
  "videoUrl": "https://example.com/path/to/your.mp4",
  "secret": "your-strong-secret-here"
}

videoUrl: the file download URL
secret: the secret key you generated and set in the script

2. Rename Node
A simple Drive update node to rename the file, using the Drive file URL returned from the script.
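If you want to sanity-check the deployment before wiring up the n8n HTTP Request node, a plain HTTP call works too. A minimal sketch (Node.js 18+ with built-in fetch); the deployment URL and secret below are placeholders, and the plain-text response format follows the script above:

```javascript
// Minimal sketch for testing the deployed web app before adding the n8n HTTP Request node.
// The URL and secret below are placeholders — substitute your own deployment URL and
// the key you set in the script.
const WEB_APP_URL = 'https://script.google.com/macros/s/DEPLOYMENT_ID/exec'; // placeholder
const SECRET_KEY = 'your-strong-secret-here';                                // placeholder

async function uploadFromUrl(videoUrl) {
  const res = await fetch(WEB_APP_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ videoUrl, secret: SECRET_KEY }),
  });
  // The Apps Script above returns plain text: either the Drive file URL or an error message.
  const text = await res.text();
  if (text === 'Unauthorized' || text.startsWith('Error:')) throw new Error(text);
  return text; // Drive file URL
}

uploadFromUrl('https://example.com/path/to/your.mp4')
  .then((driveUrl) => console.log('Uploaded to:', driveUrl))
  .catch(console.error);
```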
by Jimleuk
Ever wanted to build your own RAG search over Youtube videos? Well, now you can! This n8n template shows how you can build a very capable Youtube search engine powered by Apify, Qdrant and your LLM of choice to quickly and efficiently browse over many videos for research. I originally started this template to ask questions on the "n8n @ scale office-hours" livestream videos but then extended it to include the latest videos on the official channel. Check out a demo here: https://jimleuk.app.n8n.cloud/webhook/n8n_videos

How it works
Stage 1 is to collect the Youtube video transcripts and push them into a vector database. For this, I've used Apify to scrape Youtube and Qdrant to store the embeddings. Transcripts are broken down into smaller chunks and carefully tagged with metadata to assist in later search and filtering.
Stage 2 is to build a web frontend for the user to query the vectorised transcripts. I'm using a webhook to serve a simple web app and API to dynamically fetch the results. When searching for a video, I've opted to use Qdrant's search groups API which, in this use-case, performs better as it returns a wider range of video results. In the web frontend, when the user clicks on a result, the matching Youtube video plays in an embedded video player.

How to use
Once credentials are all set, first run steps 1 - 3 to populate your vector store. Next, set the workflow to active to expose the web frontend. Visit the webhook URL in your browser to use it. If only for personal use, you may want to remove the rate limiting mechanism in step 4.

Requirements
Apify for Youtube Channel and Video Scraping
Qdrant for Vector store
OpenAI for LLM and Embeddings

Customising the template
Not interested in official n8n videos? Swap to a different channel - this template will work on many as long as videos are not private or set to prevent embeds. Technically any vector store should work but may not have the same grouping API; use the simple vector store node and revert back to basic searching instead.
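The search groups call mentioned above groups hits by a payload field (here, the source video) so results span more distinct videos instead of many chunks of one video. A rough sketch of the Qdrant REST call; the collection name, payload key and host are assumptions, not taken from the template:

```javascript
// Sketch of Qdrant's search-groups endpoint, grouping hits by video so the result list
// spans distinct videos rather than many chunks of the same one.
// Collection name, payload field and host are assumptions — adjust to your setup.
async function searchGroupedByVideo(queryVector) {
  const res = await fetch(
    'http://localhost:6333/collections/youtube_transcripts/points/search/groups',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        vector: queryVector,   // embedding of the user's search query
        group_by: 'video_id',  // payload key identifying the source video
        limit: 6,              // number of distinct videos to return
        group_size: 2,         // best-matching chunks kept per video
        with_payload: true,
      }),
    }
  );
  const { result } = await res.json();
  return result.groups; // one group per video, each with its top chunks
}
```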
by Agniva Mahata
How it Works:

Trigger: The workflow is triggered by a webhook, initiated by an Airtable automation. This automation sends the Book or Chapter record ID and the desired action (e.g., "Generate Book Details," "Generate Chapters," "Generate Chapter Research," "Generate Chapter Content").
Action Routing: A "Switch" node directs the workflow based on the action query parameter received from the webhook. This determines which part of the book creation process will be executed.
Data Retrieval: The workflow fetches the relevant book or chapter data from Airtable using the provided recordId.
AI Processing:
Book Details Generation: If the action is "Generate Book Details," an AI Agent (powered by a Large Language Model (LLM) like Google Gemini and the Perplexity search tool) researches the book idea. It focuses on crafting a compelling book description, identifying the target audience, and conducting general book research to maximize bestseller potential. The research brief is then saved back to Airtable.
Chapter Generation: If the action is "Generate Chapters," an LLM generates 7-10 chapter titles and descriptions based on the book idea and previous research. A structured output parser ensures the chapter data is in the correct format (see the schema sketch after the setup steps). The chapters are then split into individual items and saved as separate records in the "Chapter" table in Airtable, linked to the main book record.
Chapter Research Generation: If the action is "Generate Chapter Research," another AI Agent conducts in-depth research on a specific chapter, using the Perplexity search tool multiple times. It focuses on finding stories, case studies, historical events, and expert perspectives to make the chapter engaging and credible. The research is saved back to the "Chapter" record in Airtable.
Chapter Content Generation: If the action is "Generate Chapter Content," an LLM writes the full content of the chapter, using the research gathered in the previous step, the overall book research, and the chapter description. The generated content is saved back to the "Chapter" record in Airtable.
Airtable Updates: In each of the AI processing steps, the workflow updates the corresponding Airtable record (either "Book" or "Chapter") with the generated results (research, chapter details, or content) and sets the "Action" field back to "Idle."

Set Up Steps:

Airtable Setup (Estimated time: 10-15 minutes):
Copy the Airtable base blueprint: https://airtable.com/appfkz4KUlKvOjtbp/shra78TlDfqLRdSfT. This will create the "Book" and "Chapter" tables with the necessary fields.
In the "Book" table, create two Airtable Automations:
Trigger: When a record matches conditions -> Action is Generate Book Details
Action: Run a script. Use the following script:

let autoRoute = input.config();
await fetch(autoRoute.webhookUrl + "?recordId=" + autoRoute.recordId + "&action=" + autoRoute.action);

In the script action's configuration, add three "Input variables":
webhookUrl (map it to your n8n webhook URL, obtained in the next step)
recordId (map it to the Airtable record ID)
action (map it to Action)
Repeat this process to create a second automation in the "Book" table, identical except triggered when Action is Generate Chapters.
In the "Chapter" table, create two Airtable Automations:
Trigger: When a record matches conditions -> Action is Generate Chapter Research
Action: Run a script (use the same script as above, with the same input variables).
Create a second automation, identical except triggered when Action is Generate Chapter Content.
n8n Setup (Estimated time: 15-20 minutes):
Import the provided JSON workflow into n8n.
Webhook Node: Copy the "Test URL" from the Webhook node. This is the webhookUrl you'll use in the Airtable automations. Important: Once you've tested and are ready to go live, switch to the "Production URL."
Airtable Nodes: Configure all Airtable nodes (there are eight). You'll need to connect your Airtable account using OAuth 2. Select the correct Base ("Book Agency [v1] Cobuild" or whatever you named it) and Table ("Book" or "Chapter") for each node. The field mappings are already defined in the template, but double-check them.
LLM Nodes (Google Gemini & OpenAI): Connect your Google Gemini and OpenAI accounts to the respective LLM nodes. You'll need API keys for both. You may also configure different LLM models.
Perplexity Nodes: Connect your Perplexity AI API to the Perplexity nodes. You'll need an API key for that.
Activate the workflow.

Testing (Estimated Time: 5-10 minutes):
Go to your Airtable "Book" table.
Create a new record.
Fill in the "Idea" field with a book concept.
Change the "Action" field to "Generate Book Details".
The Airtable automation should trigger, sending a request to your n8n webhook.
Monitor the n8n execution log to see the workflow in action.
Check the Airtable record to see if the "Research" field is populated.
Repeat the testing for Generate Chapters, Generate Chapter Research and Generate Chapter Content.
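As referenced in the "Chapter Generation" step above, the structured output parser constrains the LLM's chapter list to a predictable shape. A hypothetical illustration of such a schema (field names are examples, not the template's exact definition):

```javascript
// Hypothetical illustration of the kind of schema a structured output parser might
// enforce for the "Generate Chapters" step — the exact fields in the template may differ.
const chapterListSchema = {
  type: 'object',
  properties: {
    chapters: {
      type: 'array',
      minItems: 7,
      maxItems: 10,
      items: {
        type: 'object',
        properties: {
          title: { type: 'string', description: 'Chapter title' },
          description: { type: 'string', description: 'Short summary of what the chapter covers' },
        },
        required: ['title', 'description'],
      },
    },
  },
  required: ['chapters'],
};

// Each item in `chapters` is then split into its own Airtable "Chapter" record,
// linked back to the parent "Book" record.
```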
by Johan Denoyer
How it works
1) Extracts all company entries in Agile CRM
2) Searches for the company name in the French INSEE OpenData database to extract the address and government ID (SIREN)
3) Updates entries with data extracted from the French INSEE OpenData database
The workflow also has a read-only feature to make sure an entry is not overwritten.

Setup steps
Add your AgileCRM credentials
Add your INSEE OpenData credentials
Add two company custom fields in your Agile CRM (for SIREN data and ReadOnly support)
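For reference, the INSEE lookup in step 2 roughly amounts to a full-text search of the Sirene register by company name. A hedged sketch of such a call; the endpoint, query syntax and response fields below are assumptions based on INSEE's public Sirene API, not copied from the workflow:

```javascript
// Rough sketch of a Sirene full-text search by company name. Endpoint, query syntax
// and auth are assumptions about INSEE's public Sirene API — check their documentation
// and the credentials configured in the workflow before relying on this.
async function lookupCompany(companyName, accessToken) {
  const url =
    'https://api.insee.fr/entreprises/sirene/V3/siren' +
    `?q=denominationUniteLegale:"${encodeURIComponent(companyName)}"&nombre=1`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  if (!res.ok) throw new Error(`INSEE lookup failed: ${res.status}`);
  const data = await res.json();
  const unit = data.unitesLegales?.[0];
  return unit ? { siren: unit.siren } : null; // SIREN is the 9-digit government ID
}
```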
by Mario
Purpose
Use a lightweight Voice Interface, for you and your entire organization, to interact with an AI Supervisor: a personal AI Assistant which has access to your custom workflows. You can also connect the supervisor to your already existing Agents.

Demo & Explanation

How it works
After recording a message in the Vagent App, it gets transcribed and sent in combination with a session ID to the registered webhook.
The Main Agent acts as a router. It interprets the message while using the stored chat history (bound to the session ID) and chooses which tool to use to perform the required action. Tools on this level are workflows which contain subordinated Agents.
Since the Main Agent interprets the original message, the raw input is passed to the Tools/Sub-Agents as a separate parameter.
Within the Sub-Agents the actual processing takes place. Each of those has its own chat memory (with a suffix to the main session ID), to achieve a clear separation of concerns.
Depending on the required action an HTTP Request Tool is called. The result is formatted in Markdown and returned to the Main Agent with an additional short prompt, so it does not get interpreted by the Main Agent. Drafts are separated from the short message by added indentation (angle brackets). If some information is missing, no tool is called just yet; instead a message is returned back to the user.
The Main Agent then outputs the result from the called Sub-Agent. If a draft is included, it gets separated from the spoken output.
Finally the formatted output is returned as the response to the webhook. The message is split into a spoken and a text version, so the App does not read out loud unnecessary information like drafts in this example.
See the full documentation of Vagent: https://vagent.io/docs

Setup
Import this workflow into your n8n instance
Follow the instructions given in the sticky notes on the canvas
Set up your credentials. OpenAI can be replaced by another LLM in the workflow, but is required for the App to work. Google Calendar and Notion are required for all scenarios to work
Copy the Webhook URL from the Webhook node of the main workflow
Download the Vagent App from https://vagent.io
In the settings paste your OpenAI API Token, the Webhook URL and the password defined for Header Auth
Now you can use the App to interact with the Multi-Agent using your voice by tapping the Mic symbol in the App to record your message. To use the chat trigger (for testing) properly, temporarily disable the nodes after the Tools Agent.
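As a hypothetical illustration of the spoken/text split described above (not the workflow's actual code, and the exact draft marker may differ), separating a spoken summary from an indented draft could look roughly like this:

```javascript
// Hypothetical illustration only: split an agent reply into a spoken part and a draft
// part, where draft lines are marked by angle-bracket indentation as described above.
function splitSpokenAndDraft(message) {
  const spoken = [];
  const draft = [];
  for (const line of message.split('\n')) {
    if (line.trimStart().startsWith('>')) {
      draft.push(line.replace(/^\s*>\s?/, ''));
    } else {
      spoken.push(line);
    }
  }
  return {
    spoken: spoken.join('\n').trim(), // read out loud by the app
    text: draft.join('\n').trim(),    // shown on screen only (e.g. an email draft)
  };
}
```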
by Jimleuk
This n8n workflow builds another example of a knowledgebase assistant, but demonstrates how a more deliberate and targeted approach to ingesting the data can produce much better results for your chatbot. In this example, a government tax code policy document is used. Whilst we could split the document into chunks by content length, we often lose the context of chapters and sections which may be required by the user. Our approach then is to first split the document into chapters and sections before importing into our vector store. Additionally, using metadata correctly is key to allow filtering and scoped queries.

Example
Human: "Tell me about what the tax code says about cargo for intentional commerce?"
AI: "Section 11.25 of the Texas Property Tax Code pertains to "MARINE CARGO CONTAINERS USED EXCLUSIVELY IN INTERNATIONAL COMMERCE." In this section, a person who is a citizen of a foreign country or an en..."

How it works
The tax code policy document is downloaded as a zip file from the government website and its pages are extracted as separate chapters.
Each chapter is then parsed and split into its sections using data manipulation expressions.
Each section is then inserted into our Qdrant vector store, tagged with its source, chapter and section numbers as metadata.
When our AI Agent needs to retrieve data from our vector store, we use a custom workflow tool to perform the query to Qdrant. Because we're relying on Qdrant's advanced filtering capabilities, we perform the search using the Qdrant API rather than the Qdrant node.
When the AI Agent needs to pull full wording or extracts, we can use Qdrant's scroll API and metadata filtering to do so. This makes Qdrant behave like a key-value store for our document.

Requirements
A Qdrant instance is required for the vector store, specifically for its filtering functionality.
A Mistral.ai account for Embeddings and AI models.

Customising this workflow
Depending on your use-case, consider returning the actual PDF pages (or links) to the user for extra confirmation and to build trust.
Not using Mistral? You can replace it, but note that the distance metric and dimension size of the Qdrant collection must match your chosen embedding model.
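To make the "key-value store" behaviour above concrete, here is a rough sketch of a scroll request filtered by chapter metadata; the collection name, payload keys and host are assumptions, not taken from the workflow:

```javascript
// Sketch of Qdrant's scroll API used as a key-value lookup: fetch every stored section
// of a given chapter via metadata filtering, with no vector search involved.
// Collection name and payload keys are assumptions — match them to your ingestion step.
async function getChapterSections(chapter) {
  const res = await fetch('http://localhost:6333/collections/tax_code/points/scroll', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      filter: {
        must: [{ key: 'metadata.chapter', match: { value: chapter } }],
      },
      limit: 100,          // sections returned per page of results
      with_payload: true,  // return the stored text and metadata
      with_vector: false,  // vectors are not needed for full-text retrieval
    }),
  });
  const { result } = await res.json();
  return result.points.map((p) => p.payload); // sort further by section number if needed
}
```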
by Jimleuk
This n8n workflow demonstrates how to manage your Qdrant vector store when there is a need to keep it in sync with local files. It covers creating, updating and deleting vector store records, ensuring our chatbot assistant is never outdated or misleading.

Disclaimer
This workflow depends on local files accessed through the local filesystem and so will only work on a self-hosted version of n8n at this time. It is possible to amend this workflow to work on n8n cloud by replacing the local file trigger and read file nodes.

How it works
A local directory where bank statements are downloaded to is monitored via a local file trigger. The trigger watches for the file created, file changed and file deleted events.
When a file is created, its contents are uploaded to the vector store.
When a file is updated, its previous records are replaced.
When a file is deleted, the corresponding records are also removed from the vector store.
A simple Question and Answer Chatbot is set up to answer any questions about the bank statements in the system.

Requirements
A self-hosted version of n8n. Some of the nodes used in this workflow only work with the local filesystem.
A Qdrant instance to store the records.

Customising the workflow
This workflow can also work with remote data. Try integrating accounting or CRM software to build a managed system for payroll, invoices and more.
Want to go fully local? A version of this workflow is available which uses Ollama instead. You can download this template here: https://drive.google.com/file/d/189F1fNOiw6naNSlSwnyLVEm_Ho_IFfdM/view?usp=sharing
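The update and delete cases both reduce to removing any points whose metadata references the changed file. A rough sketch of that Qdrant call; the collection name, payload key and host are assumptions rather than the workflow's exact values:

```javascript
// Sketch: remove every vector-store record that came from a given local file, using
// Qdrant's delete-by-filter. On file updates, the same call runs before re-inserting
// the new contents. Collection name and payload key are assumptions.
async function deleteRecordsForFile(filePath) {
  const res = await fetch(
    'http://localhost:6333/collections/bank_statements/points/delete',
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        filter: {
          must: [{ key: 'metadata.source', match: { value: filePath } }],
        },
      }),
    }
  );
  if (!res.ok) throw new Error(`Qdrant delete failed: ${res.status}`);
  return res.json(); // operation status from Qdrant
}
```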
by David Ashby
🛠️ CircleCI Tool MCP Server

Complete MCP server exposing all CircleCI Tool operations to AI agents. Zero configuration needed - all 3 operations pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every CircleCI Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n CircleCI Tool node with full error handling

📋 Available Operations (3 total)
Every possible CircleCI Tool operation is included:
🔧 Pipeline (3 operations)
• Get a pipeline
• Get many pipelines
• Trigger a pipeline

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native CircleCI Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every CircleCI Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Niklas Hatje
Use Case
This workflow is beneficial when you're automatically adding new leads to your Pipedrive CRM. Usually, you'd have to manually review each lead to determine if they're a good fit. This process is time-consuming and increases the chances of missing important leads. This workflow ensures every new lead is promptly evaluated upon addition.

What this workflow does
The workflow runs every 5 minutes. On every run, it checks your new Pipedrive leads and enriches them with Clearbit. It then marks items as enriched and checks if the company of the new lead matches certain criteria (in this case, if they are B2B and have more than 100 employees), and sends a Slack alert to a channel for every match.

Pre-conditions
You must have Pipedrive, Clearbit, and Slack accounts. You also need to set up the custom fields Domain and Enriched at in Pipedrive.

Setup
Go to Company Settings -> Data fields -> Organization and add Domain as a custom field
Go to Company Settings -> Data fields -> Leads and add Enriched at as a custom date field
Add your Pipedrive, Clearbit and Slack credentials
Fill the setup node below
To get the IDs of your custom fields, simply run the Show only custom organization fields and Show only custom lead fields nodes below and copy the keys of your Domain and Enriched at fields

How to adjust this workflow to your needs
Modify the criteria to suit your definition of an interesting lead (see the sketch below for an example). If you only want to focus on interesting leads in Pipedrive, add a node that archives all others.

This workflow was built using n8n version 1.29.1
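As a minimal sketch of such criteria, assuming Clearbit-style company fields (a tags array and metrics.employees; your enrichment payload's property names may differ), the check behind the Slack alert could look like this:

```javascript
// Minimal sketch of the "interesting lead" check, assuming Clearbit-style company
// fields (tags array and metrics.employees). Adjust property names to your payload.
function isInterestingLead(company) {
  const isB2B = (company.tags || []).includes('B2B');
  const employees = company.metrics?.employees ?? 0;
  return isB2B && employees > 100; // only matches trigger the Slack alert
}
```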
by Ranjan Dailata
Who this is for
The Real Estate Intelligence Tracker is a powerful automated workflow designed for real estate analysts, investors, proptech startups, and market researchers who need to collect and analyze structured data from real estate listings across the web at scale.

This workflow is tailored for:
**Real Estate Analysts** - Tracking property prices, locations, and market trends
**Investment Firms** - Sourcing high-opportunity listings for portfolio decisions
**PropTech Developers** - Automating listing insights for SaaS platforms
**Market Researchers** - Extracting insights from competitive housing data
**Growth Teams** - Monitoring geographic property trends and pricing fluctuations

What problem is this workflow solving?
Collecting structured real estate listing data from property websites is difficult due to bot protections and unstructured HTML content. Manual data collection is slow and error-prone, and traditional scrapers often get blocked or miss context.

This workflow solves:
Automated bypass of anti-bot protection using Bright Data Web Unlocker
Conversion of unstructured HTML content into clean text using a Markdown-to-text LLM pipeline
Structured extraction of key listing data like price, location, property type, and features using OpenAI
Aggregation and delivery of insights to Google Sheets, local storage, and webhook-based alerts

What this workflow does
Convert to Text: Transforms scraped HTML/markdown into clean text using a Basic LLM Chain
Structured Data Extraction: Uses OpenAI GPT-4o with the Information Extractor node to parse property attributes (price, address, area, type, etc.)
Aggregate & Merge: Combines data from multiple pages or listings into a cohesive structure
Outbound Data Handling:
**Google Sheets** - Appends the structured real estate data for further analysis
**Save to Disk** - Persists structured JSON/text data locally
**Webhook Notification** - Sends data alerts or summaries to any third-party platform

Pre-conditions
You need to have a Bright Data account and do the necessary setup as mentioned in the "Setup" section below.
You need to have an OpenAI account.

Setup
Sign up at Bright Data.
Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set with Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by your Web Unlocker token.
In n8n, configure the Google Sheets credentials with your own account. Follow this documentation - Set Google Sheet Credential
In n8n, configure the OpenAI account credentials.
Ensure the URL and Bright Data zone name are correctly set in the Set URL, Filename and Bright Data Zone node.
Set the desired local path in the Write a file to disk node to save the responses.
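For orientation, the Web Unlocker call configured above boils down to one authenticated HTTP request. A rough sketch; the endpoint shape and fields follow Bright Data's Web Unlocker API as commonly documented, and the zone name, token and target URL are placeholders:

```javascript
// Rough sketch of fetching a listing page through Bright Data's Web Unlocker API.
// Zone name, token and target URL are placeholders; verify the endpoint and fields
// against Bright Data's docs and the HTTP Request node in the workflow.
async function fetchListingHtml(listingUrl, token) {
  const res = await fetch('https://api.brightdata.com/request', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`, // same Bearer token as the Header Auth credential
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      zone: 'web_unlocker1', // your Web Unlocker zone name (placeholder)
      url: listingUrl,       // the real estate listing to scrape
      format: 'raw',         // return the raw page body
    }),
  });
  if (!res.ok) throw new Error(`Bright Data request failed: ${res.status}`);
  return res.text(); // HTML/markdown handed to the "Convert to Text" step
}
```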
How to customize this workflow to your needs

Target Multiple Sites or Locations
Update the Bright Data URL node dynamically with a list of regional real estate websites
Loop through different city/state filter URLs

Customize Extracted Fields
Modify the Information Extractor prompt to extract fields like the following (see the sketch after this list):
Property size, number of bedrooms/bathrooms
Days on market
Nearby amenities or schools
Agent contact details

Integrate with More Destinations
Add nodes to export data to Notion, Airtable, HubSpot, or your custom database
Generate automated reports using PDF generators and email them

Data Quality and Logging
Add validation checks (e.g., missing price or address)
Save intermediate files (markdown, raw HTML, JSON output) to disk for audit purposes
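As a hedged sketch of what the extended extraction target might look like, the field names below are illustrative only and should be aligned with your Information Extractor prompt and Google Sheets columns:

```javascript
// Illustrative extraction schema for the Information Extractor step — field names are
// examples, not the template's exact schema.
const listingSchema = {
  type: 'object',
  properties: {
    price: { type: 'number', description: 'Asking price in local currency' },
    address: { type: 'string' },
    propertyType: { type: 'string', description: 'e.g. apartment, house, condo' },
    areaSqFt: { type: 'number' },
    bedrooms: { type: 'integer' },
    bathrooms: { type: 'number' },
    daysOnMarket: { type: 'integer' },
    nearbyAmenities: { type: 'array', items: { type: 'string' } },
    agentContact: { type: 'string' },
  },
  required: ['price', 'address', 'propertyType'],
};
```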
by David Ashby
Complete MCP server exposing 2 Catalog API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
Import this workflow into your n8n instance
Credentials: Add Catalog API credentials
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Catalog API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Product (1 endpoint)
• GET /product/{epid}: Get {Epid}
🔧 Product_Summary (1 endpoint)
• GET /product_summary/search: Search Product Summaries

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Catalog API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
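To make the two operations concrete, here is a rough sketch of the product-summary search call the HTTP Request node would issue. The base path is kept as the same placeholder used above, and the OAuth token and marketplace header are assumptions about eBay's API requirements, not values from this workflow:

```javascript
// Rough sketch of the Product_Summary search call routed through this MCP server.
// `basePath` is left as the same placeholder used above; the token and marketplace header
// are assumptions about eBay's Catalog API — confirm against eBay's documentation.
async function searchProductSummaries(query, basePath, accessToken) {
  const url = `https://api.ebay.com${basePath}/product_summary/search?q=${encodeURIComponent(query)}`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'X-EBAY-C-MARKETPLACE-ID': 'EBAY_US', // assumed header; adjust to your marketplace
    },
  });
  if (!res.ok) throw new Error(`Catalog API search failed: ${res.status}`);
  return res.json(); // native response structure with the product summaries
}
```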
by Nazmy
Based on Jonathan & Solomon's work.
> The only addition I've made is a Set node. This node organizes workflows into subfolders within the GitHub repository based on their respective tags.

How it works
This workflow will back up your workflows to GitHub. It uses the n8n API node to export all workflows. It then loops over the data and checks in GitHub to see if a file already exists for that workflow's ID. Once checked it will:
update the file on GitHub if it exists;
create a new file if it doesn't exist;
ignore it if it's the same.

Who is this for?
People wanting to back up their workflows outside the server for safety purposes or to migrate to another server.
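The update-or-create decision above maps onto GitHub's contents API: fetch the existing file to get its SHA (and compare contents), then PUT with or without that SHA. A minimal sketch assuming a plain personal-access-token setup; the workflow itself uses the n8n GitHub node, and the repo layout and field names here are illustrative:

```javascript
// Minimal sketch of "update if exists, create otherwise, skip if unchanged" using
// GitHub's contents API directly. Repo, path layout and token handling are assumptions
// for illustration — the workflow itself uses the n8n GitHub node.
async function backupWorkflow(owner, repo, token, workflow) {
  const folder = workflow.tags?.[0]?.name || 'untagged';       // subfolder per tag (as the Set node does)
  const path = `${folder}/${workflow.id}.json`;
  const api = `https://api.github.com/repos/${owner}/${repo}/contents/${path}`;
  const headers = { Authorization: `Bearer ${token}`, Accept: 'application/vnd.github+json' };
  const newContent = Buffer.from(JSON.stringify(workflow, null, 2)).toString('base64');

  // Does a file for this workflow ID already exist?
  const existing = await fetch(api, { headers });
  let sha;
  if (existing.ok) {
    const file = await existing.json();
    if (file.content?.replace(/\n/g, '') === newContent) return 'unchanged'; // ignore if identical
    sha = file.sha; // required when updating an existing file
  }

  await fetch(api, {
    method: 'PUT',
    headers,
    body: JSON.stringify({
      message: `Backup workflow ${workflow.id}`,
      content: newContent,
      ...(sha ? { sha } : {}), // with sha -> update, without -> create
    }),
  });
  return sha ? 'updated' : 'created';
}
```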