by David Ashby
Complete MCP server exposing 2 NPR Station Finder Service API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add NPR Station Finder Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the NPR Station Finder Service API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://station.api.npr.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 V3 (2 endpoints)
• GET /v3/stations: Get Station 1
• GET /v3/stations/{stationId}: Retrieve metadata for the station with the given numeric ID

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native NPR Station Finder Service API responses with the full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to the MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
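To make the $fromAI() mechanism above concrete, here is a minimal sketch of how the HTTP Request tool nodes might fill their parameters. The parameter names and descriptions are illustrative assumptions, not the exact values used in this workflow:

```
// URL of a station-lookup HTTP Request node (hypothetical parameter name and description)
https://station.api.npr.org/v3/stations/{{ $fromAI('stationId', 'Numeric ID of the NPR station to look up', 'string') }}

// A query parameter on the station search request, filled in by the AI agent at call time
q = {{ $fromAI('searchQuery', 'City, call sign or zip code to search stations for', 'string') }}
```

At run time the connected AI agent supplies these values, so no manual parameter mapping is needed.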
by simonscrapes
Use Case

Research search engine rankings for SEO analysis:
- You need to track keyword rankings for your website
- You want to analyze competitor positions in search results
- You need data for SEO competition analysis
- You want to monitor SERP changes over time

What this Workflow Does

The workflow uses the ScrapingRobot API to fetch Google search results:
- Retrieves SERP data for your target keywords
- Captures URL rankings and page titles
- Processes up to 5000 searches with a free account
- Organizes results for SEO analysis

Setup

1. Create a ScrapingRobot account and get your API key
2. Add your ScrapingRobot API key to the HTTP Request node's GET SERP token parameter
3. Either connect your keyword database (column name "Keyword") or use the "Set Keywords" node
4. Configure your preferred output database connection

How to Adjust it to Your Needs

- Modify the keyword source to pull from different databases
- Adjust the number of SERP results to capture
- Customize the output format for your reporting needs

More templates and n8n workflows >>> @simonscrapes
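If you want to reshape the raw SERP data before writing it to your output database, a Code node along these lines can flatten it into one row per result. The response field names here are assumptions; adjust them to the structure ScrapingRobot actually returns for your request:

```javascript
// n8n Code node sketch: flatten SERP results into rows for a spreadsheet or database.
// Assumes each incoming item has a `keyword` plus an array of organic results with
// `position`, `url` and `title` fields (hypothetical names - check the real response).
const rows = [];

for (const item of $input.all()) {
  const keyword = item.json.keyword;
  const results = item.json.organicResults ?? [];

  for (const result of results) {
    rows.push({
      json: {
        keyword,
        position: result.position,
        url: result.url,
        title: result.title,
      },
    });
  }
}

return rows;
```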
by simonscrapes
Use Case

Transform web pages into AI-friendly markdown format:
- You need to process webpage content for LLM analysis
- You want to extract both content and links from web pages
- You need clean, formatted text without HTML markup
- You want to respect API rate limits while crawling pages

What this Workflow Does

The workflow uses the Firecrawl.dev API to process webpages:
- Converts HTML content to markdown format
- Extracts all links from each webpage
- Handles API rate limiting automatically
- Processes URLs in batches from your database

Setup

1. Create a Firecrawl.dev account and get your API key
2. Add your Firecrawl API key to the HTTP Request node's Authorization header
3. Connect your URL database to the input node (column name must be "Page") or edit the array in Example fields from data source
4. Configure your preferred output database connection

How to Adjust it to Your Needs

- Modify input source to pull URLs from different databases
- Adjust rate limiting parameters if needed
- Customize output format for your specific use case

More templates and n8n workflows >>> @simonscrapes
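For reference, the HTTP Request node's call to Firecrawl roughly corresponds to the request below. This is a hedged sketch based on Firecrawl's v1 scrape endpoint; confirm the exact path, body fields, and response shape against the current Firecrawl docs:

```javascript
// Sketch of a single Firecrawl scrape request (assumed v1 endpoint and field names).
const response = await fetch("https://api.firecrawl.dev/v1/scrape", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.FIRECRAWL_API_KEY}`,
  },
  body: JSON.stringify({
    url: "https://example.com/some-page", // taken from the "Page" column in this workflow
    formats: ["markdown", "links"],       // ask for markdown content plus extracted links
  }),
});

const result = await response.json();
// Assumed response shape: { success: true, data: { markdown: "...", links: [...] } }
console.log(result.data.markdown);
console.log(result.data.links);
```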
by Mohan Gopal
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🤖 AI-Powered Document QA System using Webhook, Pinecone + OpenAI + n8n

This project demonstrates how to build a Retrieval-Augmented Generation (RAG) system using n8n and a simple question-answer system that connects to a user interface (created with Lovable) via a Webhook:

🧾 Downloads PDF documents from Google Drive (contract documents, user manuals, HR policy documents, etc.)
📚 Converts them into vector embeddings using OpenAI
🔍 Stores and searches them in the Pinecone vector DB
💬 Allows natural-language querying of contracts using AI Agents

📂 Flow 1: Document Loading & RAG Setup

This flow automates:
- Reading documents from a Google Drive folder
- Vectorizing them using text-embedding-3-small
- Uploading the vectors into Pinecone for later semantic search

🧱 Workflow Structure
A[Manual Trigger] --> B[Google Drive Search]
B --> C[Google Drive Download]
C --> D[Pinecone Vector Store]
D --> E[Default Data Loader]
E --> F[Recursive Character Text Splitter]
E --> G[OpenAI Embedding]

🪜 Steps
1. Manual Trigger: Kickstarts the workflow on demand for loading new documents.
2. Google Drive Search & Download: Google Drive node (Search: file/folder); downloads the PDF documents.
3. Recursive Character Text Splitter: Breaks long documents into overlapping chunks. Settings: Chunk Size 1000, Chunk Overlap 100.
4. OpenAI Embedding: Model text-embedding-3-small, used for creating document vectors.
5. Pinecone Vector Store: Host: url, Index: index, Batch Size: 200. Pinecone settings: Type: Dense, Region: us-east-1, Mode: Insert Documents.

💬 Flow 2: Chat-Based Q&A Agent

This flow enables chat-style querying of the stored documents using OpenAI-powered agents with vector memory.

🧱 Workflow Diagram
A[Webhook (chat message)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]

🪜 Components
- Chat (Trigger): Receives incoming chat queries
- AI Agent node: Handles the query flow using Chat Model: OpenAI GPT, Memory: Simple Memory, Tool: Question Answer with Vector Store
- Pinecone Vector Store: Connected via the same embedding index as Flow 1
- Embeddings: Ensure document chunks are retrievable using vector similarity
- Response node: Returns the final AI response to the user via the webhook

🌐 Flow 3: UI-Based Query with Lovable

This flow uses a web UI built with Lovable to query contracts directly from a form interface.

📥 Webhook Setup for Lovable
Webhook node - Method: POST, URL: url, Response: using the 'Respond to Webhook' node

🧱 Workflow Logic
A[Webhook (Lovable Form)] --> B[AI Agent]
B --> C[OpenAI Chat Model]
B --> D[Simple Memory]
B --> E[Answer with Vector Store]
E --> F[Pinecone Vector Store]
F --> G[Embeddings OpenAI]
B --> H[Respond to Webhook]

💡 Lovable UI
Users can submit:
- Full Name
- Email
- Department
- Freeform query: the user can enter any free-text question

The data is sent to n8n via the webhook (an example payload is sketched below) and the answer, drawn from the contract content, is returned in the response.

🔍 Use Cases
- Contract querying for Legal/HR teams
- Procurement & vendor agreement QA
- Customer support automation (based on terms)
- RAG systems for private document knowledge

⚙️ Tools & Tech Stack

📌 Final Notes
- Pinecone Index: package1536, Dimension: 1536
- Chunk Size: 1000, Overlap: 100
- Embedding Model: text-embedding-3-small

Feel free to fork the workflow or request the full JSON export. Looking forward to your suggestions and improvements!
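A hedged example of what the Lovable form might POST to the Webhook node in Flow 3. The field names are assumptions for illustration; map them to whatever keys your Lovable form actually sends:

```javascript
// Hypothetical request body sent by the Lovable UI to the n8n Webhook (Flow 3).
const lovableRequestBody = {
  fullName: "Jane Doe",
  email: "jane.doe@example.com",
  department: "Legal",
  query: "What is the notice period for terminating the vendor agreement?",
};

// Inside n8n, the AI Agent would read the question with an expression such as
// {{ $json.body.query }}, and the 'Respond to Webhook' node returns the agent's answer.
```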
by Mario
Purpose

This workflow allows you to import any workflow from a file or another n8n instance and map the credentials easily.

How it works

- A multi-form setup guides you through the entire process.
- At the beginning you have two options:
  - Upload a workflow file (JSON)
  - Copy a workflow from a remote n8n instance
- If you choose the second option, you first pick one of your predefined remote instances (defined in the Settings node); the workflow then retrieves a list of all workflows via the n8n API, from which you can choose one.
- Now both initial options come together: the workflow file is processed.
- In parallel, all credentials of the current instance are retrieved using the Execute Command node.
- The next form page enables a mapping of all credentials used in the workflow. The matching happens between the names of the original credentials and the ones available on the current instance (because one workflow can contain different credentials of the same type). Every option then shows all available credentials of the same type. In addition, the user always has the choice to create a new credential on the fly.
- For every option set to create a new credential, an empty credential is created on the current instance using the n8n API. An emoji is appended to the name to indicate that it still needs to be populated.
- Finally the workflow is updated with the new credential IDs and created on the current instance using the n8n API. The user then gets a message indicating whether the process succeeded.

Setup

1. Select your credentials in the nodes which require them.
2. Configure your remote instance(s) in the Settings node. (You can skip this step if you only want to use the file upload feature.) Every instance is defined as an object with the keys name, apiKey and baseUrl; those instances are then wrapped inside an array, as sketched below. You can also find an example described in a note on the workflow canvas.

How to use

- Grab the (production) URL of the form from the first node.
- Open the URL and follow the instructions given in the multi-form.

Disclaimer

Security: Be aware that all credentials are decrypted and processed within the workflow. The API keys to other n8n instances are also stored within the workflow. This solution is primarily meant for transferring data between testing environments. For production use, consider the n8n Enterprise edition, which provides a reliable way to deploy workflows between different environments without the need for manual credential mapping.
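A hedged example of how the instances array in the Settings node might look. The key names come from the description above, while the values (and whether baseUrl should include an /api/v1 suffix) are placeholders you should check against the note on the workflow canvas:

```javascript
// Hypothetical value for the Settings node: one object per remote n8n instance.
const instances = [
  {
    name: "Staging",
    apiKey: "n8n_api_xxxxxxxxxxxx",
    baseUrl: "https://staging.example.com/api/v1",
  },
  {
    name: "Production",
    apiKey: "n8n_api_yyyyyyyyyyyy",
    baseUrl: "https://n8n.example.com/api/v1",
  },
];
```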
by Stefan
Overview

This comprehensive n8n workflow provides a sophisticated solution for dynamically selecting and using AI models while maintaining GDPR compliance. It leverages Requesty's European-based AI routing service to ensure data privacy and automatically updates available model options based on real-time API availability.

Choose Your Integration Approach

Before diving into the setup, it's crucial to understand that this workflow offers two completely independent AI integration approaches.

Approach 1: Dynamic HTTP Request Workflow (Advanced)
Complete infrastructure with dynamic model selection.

What it includes:
- Automatic model discovery from Requesty's API
- Dynamic dropdown updates in web forms
- Model selection persistence in Google Sheets
- Complex workflow orchestration with multiple phases
- Full control over API parameters and response handling

Best for:
- Teams needing multiple AI models for different tasks
- Organizations requiring model usage auditing
- Users who want maximum flexibility and control
- Advanced n8n users comfortable with complex workflows

Setup complexity: High (requires multiple components and configurations)

Approach 2: Standalone AI Agent (Simple)
Plug-and-play solution without complexity.

What it includes:
- Direct use of n8n's native OpenAI Chat Model node
- Simple configuration: just set the base URL to https://router.requesty.ai/v1
- Immediate GDPR compliance through European infrastructure
- No model discovery or selection infrastructure needed

Best for:
- Users wanting quick GDPR-compliant AI integration
- Single-model use cases
- Simple chat interfaces
- Users preferring minimal configuration

Setup complexity: Low (5-minute setup)

Quick Start: Approach 2 (Simple AI Agent)

If you want to get started quickly with GDPR-compliant AI, follow these steps:

Step 1: Register with Requesty
1. Visit https://www.requesty.ai
2. Complete the registration process
3. Choose "OpenAI-compatible" integration
4. Note your API endpoint: https://router.requesty.ai/v1
5. Create an API key (name it "n8n Integration")

Step 2: Configure n8n
1. Add a new OpenAI credential in n8n
2. Set the base URL to: https://router.requesty.ai/v1
3. Enter your Requesty API key
4. Add an OpenAI Chat Model node to your workflow
5. Select your Requesty credential

Step 3: Test
Your AI agent is now ready and GDPR-compliant! All requests will be routed through Requesty's European infrastructure.
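Under the hood, Approach 2 simply points standard OpenAI-style requests at Requesty's router. A minimal sketch, assuming the usual OpenAI-compatible chat-completions format and a placeholder model ID (check Requesty's catalog for the IDs available to your account):

```javascript
// Minimal chat-completions call through Requesty's OpenAI-compatible router.
const response = await fetch("https://router.requesty.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.REQUESTY_API_KEY}`,
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini", // placeholder model ID - use one listed in your Requesty account
    messages: [{ role: "user", content: "Hello from n8n!" }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```

In the workflow itself the OpenAI Chat Model node issues this request for you once its base URL is overridden; the snippet only shows what the node does behind the scenes.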
Advanced Setup: Approach 1 (Dynamic HTTP Workflow)

For users who need dynamic model selection and advanced features, follow this comprehensive setup.

Prerequisites
- n8n instance (self-hosted or cloud)
- Requesty API credentials
- Google Sheets integration
- Basic understanding of n8n workflows

Phase 1: Requesty Account Setup

1.1 Registration Process
1. Navigate to https://www.requesty.ai
2. Sign up with your email address
3. Complete the welcome process

1.2 Integration Configuration
1. Choose Integration Type: Select "OpenAI-compatible"
2. Note API Endpoint: https://router.requesty.ai/v1
3. Create API Key: Provide a descriptive name (e.g., "n8n Dynamic Workflow") and click "Create API Key". Important: Save this key securely - you'll need it for the n8n configuration.

Phase 2: Google Sheets Preparation

2.1 Create Storage Sheet
1. Create a new Google Sheet named "AI Model Selections"
2. Add the following column: A1: "Selected Model"
3. Note the Google Sheet ID from the URL

2.2 Configure Google Sheets API
1. Enable the Google Sheets API in the Google Cloud Console
2. Create service account credentials
3. Share your sheet with the service account email
4. Download the credentials JSON file

Phase 3: n8n Workflow Configuration

3.1 Import Workflow
1. Download the workflow JSON file
2. Import it into your n8n instance
3. Review all nodes and connections

3.2 Configure Credentials

Requesty API credentials:
- Go to the n8n Credentials section
- Create a new HTTP Request credential
- Set the authentication type to "Header Auth"
- Header name: "Authorization"
- Header value: "Bearer YOUR_REQUESTY_API_KEY"

Google Sheets credentials:
- Create a new Google Sheets credential
- Upload your service account JSON file
- Test the connection

Google Sheets nodes:
- Update the sheet ID in all Google Sheets nodes
- Verify that the column mappings match your sheet structure

Phase 4: Troubleshooting Guide

Common Issues and Solutions
- Models not loading: Verify the Requesty API credentials; check network connectivity and the API endpoint URL
- Selection not persisting: Verify the Google Sheets credentials and write permissions; check the sheet ID configuration
- Chat not responding: Verify the selected model's availability; check the API request formatting and response processing

Debug Procedures
1. Enable debug mode and detailed logging
2. Check node outputs and data flow
3. Validate API calls with external tools
4. Review the n8n execution logs

Conclusion

The choice between approaches depends on your specific requirements:
- **Simple AI Agent**: Perfect for straightforward AI integration with minimal setup
- **Dynamic HTTP Workflow**: Ideal for complex requirements with multiple models and advanced features
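For the "Models not loading" case, it can help to test model discovery outside n8n. A hedged sketch, assuming Requesty exposes the standard OpenAI-compatible /v1/models listing (verify the endpoint in Requesty's documentation before relying on it):

```javascript
// Quick standalone check that your key can list models through the router.
const response = await fetch("https://router.requesty.ai/v1/models", {
  headers: { Authorization: `Bearer ${process.env.REQUESTY_API_KEY}` },
});

const models = await response.json();
// OpenAI-compatible APIs usually return { data: [{ id: "...", ... }, ...] }
console.log(models.data?.map((m) => m.id));
```

If this call fails with the same error you see in n8n, the issue is with the credentials or endpoint rather than the workflow configuration.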
by Artem Boiko
How it works

This template automates the conversion of CAD and BIM files from Revit, AutoCAD, IFC and MicroStation (e.g. .rvt, .ifc, .dwg, .dgn) into structured Excel databases and lightweight 3D geometry (.dae) files using the DataDrivenConstruction open-source converter.

📦 High-level steps:
1. Set the file paths and converter path in the Set node
2. Trigger the conversion via Execute Command (runs the .exe converter offline)
3. The output includes .xlsx (data) and .dae (3D model) files
4. Sticky note instructions cover troubleshooting and the GitHub repo info

Set up steps

🕒 Setup time: ~10 minutes

You'll need:
- A Windows machine (offline or air-gapped is OK)
- The path to the converter .exe file
- The path to a sample .rvt (or .ifc, .dwg, .dgn) file

🧷 Set the paths in the Set node:
path_to_converter = "C:\\...\\RvtExporter.exe"
path_project_file = "C:\\...\\project.rvt"

Docs & Issues: Full Readme on GitHub
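The Execute Command node can then assemble its command from the two Set-node values with expressions along these lines. The exact arguments the converter expects are an assumption here, so check the DataDrivenConstruction readme on GitHub for the authoritative syntax:

```
"{{ $json.path_to_converter }}" "{{ $json.path_project_file }}"
```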
by Yaron Been
Automatically monitor and track funding rounds in the US Fintech and Healthtech sectors using the Crunchbase API, with daily updates pushed to Google Sheets for easy analysis and monitoring.

🚀 What It Does
- **Daily Monitoring**: Automatically checks for new funding rounds every day at 8 AM
- **Smart Filtering**: Focuses on US-based Fintech and Healthtech companies
- **Data Enrichment**: Extracts and formats key funding information
- **Automated Storage**: Pushes data to Google Sheets for easy access and analysis

🎯 Perfect For
- VC firms tracking investment opportunities
- Startup founders monitoring market activity
- Market researchers analyzing funding trends
- Business analysts tracking competitor funding

⚙️ Key Benefits
✅ Real-time funding round monitoring
✅ Focused industry tracking (Fintech & Healthtech)
✅ Automated data collection and organization
✅ Structured data output in Google Sheets
✅ Complete funding details including investors and amounts

🔧 What You Need
- Crunchbase API key
- Google Sheets account
- n8n instance
- Basic spreadsheet setup

📊 Data Collected
- Company Name
- Industry
- Funding Round Type
- Announced Date
- Money Raised (USD)
- Investors
- Crunchbase URL

🛠️ Setup & Support
- **Quick Setup**: Deploy in 30 minutes with our step-by-step configuration guide
- 📺 Watch Tutorial
- 💼 Get Expert Support
- 📧 Direct Help

Stay ahead of market movements with automated funding round tracking. Transform manual research into an efficient, automated process.
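Before writing to the sheet, the workflow needs to reshape each Crunchbase funding-round record into a flat row matching the columns listed above. A hedged Code node sketch; the input field names are assumptions, since the exact structure depends on the Crunchbase API response and the preceding nodes:

```javascript
// n8n Code node sketch: map one funding-round record (assumed field names)
// to the flat row structure used by the Google Sheets node.
return $input.all().map((item) => {
  const round = item.json;
  return {
    json: {
      "Company Name": round.companyName,
      "Industry": round.industry,
      "Funding Round Type": round.investmentType,
      "Announced Date": round.announcedOn,
      "Money Raised (USD)": round.moneyRaisedUsd,
      "Investors": (round.investors ?? []).join(", "),
      "Crunchbase URL": round.crunchbaseUrl,
    },
  };
});
```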
by Aditya Gaur
Who is this template for?

This template is designed for teams who need to automate data retrieval from SharePoint lists using n8n. It is ideal for users who want to authenticate via OAuth and then use the token to access SharePoint API endpoints, pulling list data directly into n8n.

How it works

The template first generates an OAuth token using the Microsoft OAuth API. This token is then used to authenticate requests to the SharePoint List API, allowing the workflow to fetch data from a specified SharePoint list. By following the n8n workflow, the user can configure the necessary credentials and endpoints to automate SharePoint data access securely.

Setup steps

Step 1: Replace {tenant_id}, {client_id}, and {client_secret} with your Azure AD details for OAuth authentication.
Step 2: Specify the SharePoint list API endpoint in the template (under the "SharePoint List Fetch" node).
Step 3: Configure the SharePoint list URL and make adjustments for specific data fields if necessary.
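For orientation, the two HTTP calls the workflow makes correspond roughly to the sketch below: a client-credentials token request against Microsoft's OAuth endpoint, then a SharePoint REST call with that token. Treat the site URL, list name, and scope as placeholders; the exact scope and permissions depend on how your Azure AD app is registered:

```javascript
// 1) Get an OAuth token (client credentials flow against Azure AD).
const tokenResponse = await fetch(
  "https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
  {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "{client_id}",
      client_secret: "{client_secret}",
      scope: "https://contoso.sharepoint.com/.default", // placeholder tenant
    }),
  }
);
const { access_token } = await tokenResponse.json();

// 2) Fetch items from a SharePoint list with the token (placeholder site and list names).
const listResponse = await fetch(
  "https://contoso.sharepoint.com/sites/MySite/_api/web/lists/getbytitle('MyList')/items",
  {
    headers: {
      Authorization: `Bearer ${access_token}`,
      Accept: "application/json;odata=nometadata",
    },
  }
);
const items = await listResponse.json();
console.log(items.value);
```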
by Aleksandr
This template processes webhooks received from amoCRM in URL-encoded format and transforms the data into a structured array that n8n can easily interpret. By default, n8n does not automatically parse URL-encoded webhook payloads into usable JSON. This template bridges that gap, enabling seamless data manipulation and integration with subsequent processing nodes.

Key Features:
- Input Handling: Processes URL-encoded data received from amoCRM webhooks.
- Data Transformation: Converts complex, nested keys into a structured JSON array.
- Ease of Use: Simplifies access to specific fields for further workflow automation.

Setup Guide:
1. Webhook Trigger Node: Configure the Webhook Trigger node to receive data from amoCRM.
2. URL-Encoding Parsing: Use the provided nodes to transform the URL-encoded input data into a structured array.
3. Access Transformed Data: Use the resulting JSON structure in subsequent nodes of your workflow, such as filtering, updating records, or triggering external systems.

Example Data Transformation:

Sample Input (URL-Encoded): amoCRM typically sends flat, bracketed keys that land on $json.body, for example keys such as leads[update][0][id] or leads[update][0][custom_fields][0][id].

Output (Structured JSON): After processing, the data is transformed into an easily accessible JSON structure, so the same value can be read as:
{{ $json.leads.update['0'].id }}

This output allows you to work with clean, structured JSON, simplifying field extraction and workflow continuation.

Code Explanation:
This workflow parses URL-encoded key-value pairs using n8n nodes to restructure the data into a nested JSON object. By doing so, the template improves transparency, ensures data integrity, and makes further automation tasks straightforward.
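To illustrate the transformation step, here is a minimal Code node sketch that turns flat bracketed keys into a nested object. It assumes the webhook body arrives as a flat key/value map on $json.body; the template's own nodes may implement the same idea differently:

```javascript
// n8n Code node sketch: turn flat keys such as "leads[update][0][id]"
// into a nested structure like { leads: { update: [ { id: ... } ] } }.
const body = $input.first().json.body ?? {};
const result = {};

for (const [flatKey, value] of Object.entries(body)) {
  // "leads[update][0][id]" -> ["leads", "update", "0", "id"]
  const path = flatKey.replace(/\]/g, "").split("[");
  let cursor = result;
  path.forEach((segment, index) => {
    if (index === path.length - 1) {
      cursor[segment] = value;
    } else {
      if (cursor[segment] === undefined) {
        // Create an array when the next segment is numeric, otherwise an object
        cursor[segment] = /^\d+$/.test(path[index + 1]) ? [] : {};
      }
      cursor = cursor[segment];
    }
  });
}

return [{ json: result }];
```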
by Geoffrey Saxena
👤 Who is this for?

This workflow is great for n8n users who want to prevent duplicate or overlapping workflow runs. If you're a developer, DevOps engineer, or automation enthusiast managing tasks like database updates, syncing tools, or hitting rate-limited APIs, this one's for you.

🧩 What problem does this solve?

In the real world, automations can get triggered at the same time, whether that's because of multiple webhook calls, overlapping schedules, or retries. And when two workflows try to do the same thing at once (like updating a record or syncing data), it can cause conflicts, data corruption, or wasted API calls.

This workflow helps avoid that problem by using Redis as a lock system, so only one instance runs at a time. Think of it like putting up a "🚧 Workflow in Progress" sign while your logic is running.

⚙️ What this workflow does

- When the workflow starts, it tries to set a Redis key as a lock with a short expiry.
- If the lock is free: your main business logic runs, and once it's done the lock is cleared.
- If the lock is already taken (i.e., another run is in progress): the workflow will wait and retry a few times.
- If a duplicate request shows up while one is already being processed: it skips that duplicate to avoid unnecessary work.

You can customize both the timeout and the retry logic to match your needs.

🛠️ Setup guide

To use this template:
1. You'll need access to a Redis instance (either self-hosted or managed, like Upstash or Redis Cloud).
2. Set up your Redis credentials in the n8n Redis node.
3. Swap out the webhook node with your actual trigger or logic.
4. Adjust the lock timeout to match how long your task typically takes.

> 💡 Bonus Tip: Use this pattern wherever you need idempotency or want to avoid duplicate processing.

🧪 Example use case

Let's say you have a workflow that syncs ClickUp tickets to Google Sheets. It runs daily at 9 AM and updates tickets, adds notes, and makes sure nothing is missed. But what if two runs start at the same time? Or someone triggers a manual sync while the scheduled one is still working?

By wrapping that whole sync inside this Redis locking template, you can make sure it only runs one at a time, saving your APIs (and your sanity).
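If you want to see the locking pattern outside of n8n nodes, here is a minimal sketch using the official Node.js redis client. The lock key name and TTL are placeholders, and in the template itself the same steps are performed with Redis, If, and Wait nodes:

```javascript
// Minimal sketch of the locking logic using the "redis" Node.js client (v4).
import { createClient } from "redis";

const LOCK_KEY = "lock:clickup-sheet-sync"; // hypothetical lock name
const LOCK_TTL_SECONDS = 300;               // should exceed the longest expected run

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

// SET ... NX EX only succeeds if the key does not exist yet, so it acts as the lock.
const acquired = await redis.set(LOCK_KEY, "locked", { NX: true, EX: LOCK_TTL_SECONDS });

if (acquired) {
  try {
    // ... run the main business logic here ...
  } finally {
    await redis.del(LOCK_KEY); // always release the lock when done
  }
} else {
  // Another run holds the lock: wait and retry, or skip this duplicate trigger.
}

await redis.disconnect();
```

The expiry (EX) is the safety net: if a run crashes before releasing the lock, Redis removes it automatically after the TTL, so later runs are not blocked forever.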
by Ranjan Dailata
Who this is for

The Async Structured Bulk Data Extract with Bright Data Web Scraper workflow is designed for data engineers, market researchers, competitive intelligence teams, and automation developers who need to programmatically collect and structure high-volume data from the web using Bright Data's dataset and snapshot capabilities.

This workflow is built for:
- Data Engineers - Building large-scale ETL pipelines from web sources
- Market Researchers - Collecting bulk data for analysis across competitors or products
- Growth Hackers & Analysts - Mining structured datasets for insights
- Automation Developers - Needing reliable snapshot-triggered scrapers
- Product Managers - Overseeing data-backed decision-making using live web information

What problem is this workflow solving?

Web scraping at scale often requires asynchronous operations, including waiting for data preparation and snapshots to complete. Manual handling of this process can lead to timeouts, errors, or inconsistent results. This workflow automates the entire process of submitting a scraping request, waiting for the snapshot, retrieving the data, and notifying downstream systems, all in a structured, repeatable fashion.

It solves:
- Asynchronous snapshot completion handling
- Reliable retrieval of large datasets using Bright Data
- Automated delivery of scraped results via webhook
- Disk persistence for traceability or historical analysis

What this workflow does

1. Set Bright Data Dataset ID & Request URL: Takes in the dataset ID and the Bright Data API endpoint used to trigger the scrape job
2. HTTP Request: Sends an authenticated request to the Bright Data API to start a scraping snapshot job
3. Wait Until Snapshot is Ready: Implements a loop or wait mechanism that checks the snapshot status (e.g., polling every 30 seconds) until it reaches the ready state
4. Download Snapshot: Downloads the structured dataset snapshot once it is ready
5. Persist Response to Disk: Saves the dataset to disk for archival, review, or local processing
6. Webhook Notification: Sends the final result, or a summary of it, to an external webhook

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure a Header Auth account under Credentials (Generic Auth Type: Header Authentication). The Value field should be set to Bearer XXXXXXXXXXXXXX, where XXXXXXXXXXXXXX is replaced by the Web Unlocker token.
4. Update the Set Dataset Id, Request URL node to set the brand content URL.
5. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

- Polling Strategy: Adjust the polling interval (e.g., every 15-60 seconds) based on snapshot complexity
- Input Flexibility: Accept the datasetId and request URL dynamically from a webhook trigger or input form
- Webhook Output: Send notifications to internal APIs (for use in dashboards) or to Zapier/Make (for multi-step automation)
- Persistence: Save the output to remote FTP or SFTP storage, Amazon S3, Google Cloud Storage, etc.
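The trigger-then-poll pattern at the heart of this workflow looks roughly like the sketch below. The endpoint paths and response fields are assumptions based on Bright Data's dataset API conventions; use the exact request URL and dataset ID configured in the Set node:

```javascript
// Hedged sketch of the async snapshot flow: trigger, poll until ready, download.
const headers = { Authorization: `Bearer ${process.env.BRIGHTDATA_TOKEN}` };
const datasetId = "gd_xxxxxxxxxxxx"; // placeholder dataset ID from the Set node

// 1) Trigger the scrape job (assumed endpoint and body shape).
const trigger = await fetch(
  `https://api.brightdata.com/datasets/v3/trigger?dataset_id=${datasetId}`,
  {
    method: "POST",
    headers: { ...headers, "Content-Type": "application/json" },
    body: JSON.stringify([{ url: "https://example.com/brand-page" }]),
  }
);
const { snapshot_id } = await trigger.json();

// 2) Poll until the snapshot reports a "ready" status (every 30 s, as in the workflow).
let status = "running";
while (status !== "ready") {
  await new Promise((resolve) => setTimeout(resolve, 30_000));
  const progress = await fetch(
    `https://api.brightdata.com/datasets/v3/progress/${snapshot_id}`,
    { headers }
  );
  status = (await progress.json()).status;
}

// 3) Download the finished snapshot and hand it to the next step (disk / webhook).
const snapshot = await fetch(
  `https://api.brightdata.com/datasets/v3/snapshot/${snapshot_id}?format=json`,
  { headers }
);
const records = await snapshot.json();
console.log(`Downloaded ${records.length} records`);
```

In the n8n template the polling loop is built from Wait and If nodes rather than code, but the sequence of API calls is the same.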