by David Ashby
🛠️ MailerLite Tool MCP Server

Complete MCP server exposing all MailerLite Tool operations to AI agents. Zero configuration needed: all 4 operations are pre-built.

⚡ Quick Setup

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every MailerLite Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Uses the official n8n MailerLite Tool node with full error handling

📋 Available Operations (4 total)

Every possible MailerLite Tool operation is included:

🔧 Subscriber (4 operations)
• Create a subscriber
• Get a subscriber
• Get many subscribers
• Update a subscriber

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native MailerLite Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every MailerLite Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
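For orientation, here is a minimal sketch of how a tool node parameter can be wired with a $fromAI() placeholder. The field names below are assumptions for illustration; check the actual MailerLite node parameters in your instance.

```
// Hypothetical field wiring inside the "Create a subscriber" tool node.
// The calling AI agent supplies these values at run time.
Email: {{ $fromAI('email', 'Email address of the subscriber to create', 'string') }}
Name:  {{ $fromAI('name', 'Full name of the subscriber', 'string') }}
```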
by NonoCode
Who is this template for?

This workflow template is designed for teams involved in training management and feedback analysis. It is particularly useful for:

**HR Departments**: Automating the collection of and response to training feedback.
**Training Managers**: Streamlining the process of handling feedback and ensuring timely follow-up.
**Corporate Trainers**: Receiving direct feedback and taking action to improve training sessions.

This workflow offers a comprehensive solution for automating feedback management, ensuring timely responses, and improving the quality of training programs.

How it works

This workflow operates with an Airtable trigger but can easily be adapted to other triggers, such as webhooks from external applications. Once feedback data is captured, the workflow evaluates the feedback and directs it to the appropriate channel for action. Tasks are created in Usertask based on the feedback rating, and notifications are sent to the relevant parties. Here's a brief overview of this n8n workflow template:

**Airtable Trigger**: Captures new or updated feedback entries from Airtable.
**Switch Node**: Evaluates the feedback rating and routes the workflow accordingly (a sketch of this routing appears at the end of this description).
**Webhook**: Retrieves the result of a Usertask task.
**Task Creation**: Creates tasks in Usertask for poor feedback, creates follow-up tasks for fair-to-good feedback, and documents positive feedback, posting recognition on LinkedIn for very good to excellent ratings.
**Notifications**: Sends email notifications to responsible parties for urgent actions, and sends congratulatory emails and LinkedIn posts for positive feedback.

To summarize

**Flexible Integration**: The workflow can be triggered by various methods, such as Airtable updates or webhooks from other applications.
**Automated Task Management**: It creates tasks in Usertask based on feedback ratings to ensure timely follow-up.
**Multichannel Notifications**: Sends notifications via email and LinkedIn to keep stakeholders informed and recognize successes.
**Comprehensive Feedback Handling**: Automates the evaluation of and response to training feedback, improving efficiency and response time.

Instructions:

Set up Airtable: Create a table in Airtable to capture training feedback.
Configure n8n: Set up the Airtable trigger in n8n to capture new or updated feedback entries.
Set up Usertask: Configure the Usertask nodes in n8n to create and manage tasks based on feedback ratings.
Configure the Email and LinkedIn nodes: Set them up to send notifications and post updates.
Test the workflow: Run tests to ensure the workflow captures feedback, creates tasks, and sends notifications correctly.

Video: https://youtu.be/U14MhTcpqeY

Note: this template was created in n8n v1.38.2.
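The Switch node's rating triage can be pictured as the following Code-node sketch. The `rating` field name and the 1-5 scale are assumptions; adapt them to your Airtable schema.

```js
// Minimal sketch of the rating-based routing, assuming a numeric 1-5 rating.
const routed = $input.all().map((item) => {
  const rating = item.json.rating;
  let route;
  if (rating <= 2) route = 'urgent-task';      // poor: Usertask + email alert
  else if (rating === 3) route = 'follow-up';  // fair/good: follow-up task
  else route = 'recognition';                  // very good/excellent: LinkedIn post
  return { json: { ...item.json, route } };
});
return routed;
```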
by Tom Cao
🔐 Advanced SSL Health Monitor

👤 Who is this for?

This workflow is designed for DevOps engineers, IT administrators, and security professionals who need comprehensive SSL certificate monitoring and health assessment across multiple domains, featuring dual verification and professional reporting without relying on expensive monitoring services.

🧩 What It Does

Daily Trigger runs the workflow every morning for proactive monitoring.
URL Collection fetches the list of website URLs to monitor from your data source.
Dual SSL Analysis:
• Free SSL assessment script, available from sysadmin-toolkit on GitHub
• SSL-Checker.io API for external cross-validation
Comprehensive Health Check:
• Certificate expiration monitoring with a customizable threshold (a standalone sketch of this check appears at the end of this section)
• SSL configuration security assessment
• Protocol support analysis (TLS 1.3, TLS 1.2, deprecated protocols)
• Cipher suite strength evaluation
• Vulnerability scanning (POODLE, BEAST, etc.)
• Compliance checking (PCI DSS, NIST, FIPS)
Smart Alert System sends Discord notifications when:
• Certificates expire within the threshold (default: 30 days)
• SSL configuration issues are detected (weak ciphers, deprecated protocols)
• Security vulnerabilities are found
• Compliance standards are not met
• The grade drops below an acceptable level (configurable)

🎯 Key Features

🔄 **Dual Verification**: Cross-checks results between the internal scanner and the external API
📊 **SSL Labs-Style Grading**: A+ to F rating system with detailed analysis
🛡️ **Security Assessment**: Vulnerability detection and compliance checking
📱 **Discord Integration**: Rich embed notifications with color-coded alerts

⚙️ Setup Instructions

Data Source: Configure your URL source from Notion and ensure it contains a URL column with the domains to monitor.
Credentials: Set up a Discord webhook for alert notifications and configure any API credentials required by your data sources.
Customize Thresholds: Expiration alert (days before expiry, default 30), grade threshold (minimum acceptable SSL grade, default B), and alert severity (choose which issues trigger notifications).
Advanced Configuration: Modify the vulnerability checks to match your security requirements, adjust the compliance standards for your industry, and customize the Discord message formatting and alert channels.

🧠 Technical Notes

**Dual-Check Reliability**: Combines the custom Bubobot scanner with ssl-checker.io for maximum accuracy
**No Vendor Lock-in**: Uses free public APIs and open-source tools
**Professional Reporting**: Generates SSL Labs-quality assessments
**Security-First Approach**: Comprehensive vulnerability and compliance checking
**Flexible Alerting**: Discord integration with rich formatting and conditional logic

This workflow provides a comprehensive SSL security monitoring solution that rivals enterprise-grade tools while remaining completely open-source and free.
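As a reference for the expiration check mentioned above, here is a minimal standalone sketch in Node.js, independent of the workflow's bundled scanner. The domain and the 30-day threshold are placeholders.

```js
// Connect over TLS, read the peer certificate, and compute days to expiry.
const tls = require('tls');

function daysUntilExpiry(host, port = 443) {
  return new Promise((resolve, reject) => {
    const socket = tls.connect({ host, port, servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      resolve(Math.floor((new Date(cert.valid_to) - Date.now()) / 86400000));
    });
    socket.on('error', reject);
  });
}

daysUntilExpiry('example.com').then((days) => {
  if (days < 30) console.log(`ALERT: certificate expires in ${days} days`);
  else console.log(`OK: ${days} days remaining`);
});
```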
by Ranjan Dailata
The Scrape and Analyze Amazon Product Info with Decodo + OpenAI workflow automates the extraction of product information from an Amazon product page and transforms it into meaningful insights. The workflow then uses OpenAI to generate descriptive summaries, competitive positioning insights, and structured analytical output based on the extracted information.

Disclaimer

Please note: this workflow is only available on n8n self-hosted, as it makes use of the community node for Decodo web scraping.

Who this is for?

This workflow is ideal for:
• E-commerce product researchers
• Marketplace sellers (Amazon, Flipkart, Shopify, etc.)
• Competitive intelligence teams
• Product comparison bloggers and reviewers
• Pricing and product analytics engineers
• Automation builders needing AI-powered product insights

What problem is this workflow solving?

Manually extracting Amazon product details, ads, pricing, reviews, and competitive signals is:
• Time-consuming
• Spread across multiple tools
• Difficult to analyze at scale
• Not structured for reporting
• Hard to compare across products objectively

This workflow automates:
• Web scraping of Amazon product pages
• Extraction of product features and ad listings
• AI-generated product summaries
• Competitive positioning analysis
• Generation of structured product insight output
• Export to Google Sheets for tracking and reporting

What this workflow does

This workflow performs an end-to-end product intelligence pipeline:

Data Collection: Scrapes an Amazon product page using Decodo and retrieves product details and advertisement placements.
Data Extraction: Extracts product specs, key feature descriptions, ads data, and supplemental metadata.
AI-Driven Analysis: Generates a descriptive product summary, competitive positioning insights, and a structured product insight schema.
Data Consolidation: Merges the descriptive, analytical, and structured outputs.
Export & Persistence: Aggregates the results and writes the final dataset to Google Sheets for tracking, comparison, reporting, and product research archives.

Setup

Prerequisites

If you are new to Decodo, please sign up at visit.decodo.com. You will need:
• An n8n instance
• Decodo API credentials
• OpenAI API credentials

Make sure to install the Decodo community node.

Required Credentials

Decodo API: Go to Credentials, add Decodo API, enter your API key, and save it as "Decodo Credentials account".
OpenAI API: Go to Credentials, select OpenAI, enter your API key, and save it as "OpenAi account".
Google Sheets: Add Google Sheets OAuth, authorize via Google, and save it under the desired account.

Inputs to configure

Modify in the Set the Input Fields node:
product_url = https://www.amazon.in/Sony-DualSense-Controller-Grey-PlayStation/dp/B0BQXZ11B8

How to customize this workflow to your needs

You can easily adapt this workflow for various use cases.

Change the product being analyzed: Modify product_url.
Change the AI model: In the OpenAI nodes, replace gpt-4.1-mini; use Gemini, Claude, Mistral, or Groq (if supported).
Customize the insight schema: Edit the Product Insights node to include sustainability markers, sentiment extraction, pricing bands, safety compliance, or brand comparisons.
Expand data extraction: You may also extract product reviews, FAQs and Q&A, seller information, or delivery and logistics signals.
Change the output destination: Replace Google Sheets with PostgreSQL, MySQL, Notion, Slack, Airtable, webhook delivery, or CSV export.
Turn it into a batch processor: Loop over multiple ASINs, category listings, or search results pages (a starter sketch appears after the summary below).

Summary

This workflow provides a complete automated product intelligence engine, combining Decodo's scraping capabilities with OpenAI's analytical reasoning to transform Amazon product pages into structured insights, competitive analysis, and summarized evaluations, automatically stored for reporting and comparison.
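As mentioned in the customization list, turning the workflow into a batch processor can start with a Code node like this sketch. The ASINs and the amazon.in domain are examples only.

```js
// Expand a list of ASINs into one item per product URL for the scrape step.
const asins = ['B0BQXZ11B8', 'B0C1EXAMPLE']; // replace with your own ASINs
return asins.map((asin) => ({
  json: { product_url: `https://www.amazon.in/dp/${asin}` },
}));
```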
by Davide
The "Voice RAG Chatbot with ElevenLabs and OpenAI" workflow in n8n is designed to create an interactive voice-based chatbot system that leverages both text and voice inputs for providing information. Ideal for shops, commercial activities and restaurants How it works: Here's how it operates: Webhook Activation: The process begins when a user interacts with the voice agent set up on ElevenLabs, triggering a webhook in n8n. This webhook sends a question from the user to the AI Agent node. AI Agent Processing: Upon receiving the query, the AI Agent node processes the input using predefined prompts and tools. It extracts relevant information from the knowledge base stored within the Qdrant vector database. Knowledge Base Retrieval: The Vector Store Tool node interfaces with the Qdrant Vector Store to retrieve pertinent documents or data segments matching the user’s query. Text Generation: Using the retrieved information, the OpenAI Chat Model generates a coherent response tailored to the user’s question. Response Delivery: The generated response is sent back through another webhook to ElevenLabs, where it is converted into speech and delivered audibly to the user. Continuous Interaction: For ongoing conversations, the Window Buffer Memory ensures context retention by maintaining a history of interactions, enhancing the conversational flow. Set up steps: To configure this workflow effectively, follow these detailed setup instructions: ElevenLabs Agent Creation: Begin by creating an agent on ElevenLabs (e.g., named 'test_n8n'). Customize the first message and define the system prompt specific to your use case, such as portraying a character like a waiter at "Pizzeria da Michele". Add a Webhook tool labeled 'test_chatbot_elevenlabs' configured to receive questions via POST requests. Qdrant Collection Initialization: Utilize the HTTP Request nodes ('Create collection' and 'Refresh collection') to initialize and clear existing collections in Qdrant. Ensure you update placeholders QDRANTURL and COLLECTION accordingly. Document Vectorization: Use Google Drive integration to fetch documents from a designated folder. These documents are then downloaded and processed for embedding. Employ the Embeddings OpenAI node to generate embeddings for the downloaded files before storing them into Qdrant via the Qdrant Vector Store node. AI Agent Configuration: Define the system prompt for the AI Agent node which guides its behavior and responses based on the nature of queries expected (e.g., product details, troubleshooting tips). Link necessary models and tools including OpenAI language models and memory buffers to enhance interaction quality. Testing Workflow: Execute test runs of the entire workflow by clicking 'Test workflow' in n8n alongside initiating tests on the ElevenLabs side to confirm all components interact seamlessly. Monitor logs and outputs closely during testing phases to ensure accurate data flow between systems. Integration with Website: Finally, integrate the chatbot widget onto your business website replacing placeholder AGENT_ID with the actual identifier created earlier on ElevenLabs. By adhering to these comprehensive guidelines, users can successfully deploy a sophisticated voice-driven chatbot capable of delivering precise answers utilizing advanced retrieval-augmented generation techniques powered by OpenAI and ElevenLabs technologies.
by HoangSP
SEO Blog Generator with GPT-4o, Perplexity, and Telegram Integration

This workflow automatically generates SEO-optimized blog posts using Perplexity.ai, OpenAI GPT-4o, and, optionally, Telegram for interaction.

🚀 Features

🧠 Topic research via a Perplexity sub-workflow
✍️ AI-written blog post generated with GPT-4o
📊 Structured output with metadata: title, slug, meta description
📩 Optional Telegram integration to trigger workflows or receive outputs

⚙️ Requirements

✅ OpenAI API key (GPT-4o or GPT-3.5)
✅ Perplexity API key (with access to /chat/completions)
✅ (Optional) Telegram bot token and webhook setup

🛠 Setup Instructions

Credentials: Add your OpenAI credentials (openAiApi), add your Perplexity credentials under httpHeaderAuth, and optionally set up Telegram credentials under telegramApi.
Inputs: Use the Form Trigger or Telegram input node to send a research query.
Subworkflow: Make sure to import and activate the Perplexity_Searcher subworkflow to fetch recent search results.
Customization: Edit the prompt texts inside the Blog Content Generator and Metadata Generator to change the writing style or target industry, and add or remove output nodes such as Google Sheets or Notion.

📦 Output Format

The final blog post includes:
✅ Blog content (1,500-2,000 words)
✅ Metadata: title, slug, and meta description (see the parsing sketch below)
✅ Extracted summary in JSON
✅ Delivery to Telegram (if connected)

Need help? Reach out on the n8n community forum.
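As a sketch of how the metadata can be pulled out of the model response in a Code node, assuming the Metadata Generator is prompted to reply with pure JSON (the response path and key names below are assumptions):

```js
// Parse the model's JSON reply and expose title/slug/meta description fields.
const raw = $input.first().json.message?.content ?? '{}';
const meta = JSON.parse(raw); // e.g. { "title": ..., "slug": ..., "meta_description": ... }
return [{
  json: {
    title: meta.title,
    slug: meta.slug,
    metaDescription: meta.meta_description,
  },
}];
```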
by Ranjan Dailata
Notice

Community nodes can only be installed on self-hosted instances of n8n.

Who this is for

This workflow automates the real-time extraction of job descriptions and salary information from job listing pages using Bright Data MCP, and analyzes the content using OpenAI GPT-4o mini. It is ideal for:

**Recruiters & HR Tech Startups**: Automate job data collection from public listings
**Market Intelligence Teams**: Analyze compensation trends across companies or geographies
**Job Boards & Aggregators**: Power search results with structured, enriched listings
**AI Workflow Builders**: Extend to other career platforms or automate resume-job match analysis
**Analysts & Researchers**: Track hiring signals and salary benchmarks in real time

What problem is this workflow solving?

Traditional scraping of job portals is challenging due to cluttered content, anti-scraping measures, and inconsistent formatting, and manually analyzing salary ranges and job descriptions is tedious and error-prone. This workflow solves the problem by:

• Simulating user behavior with the Bright Data MCP client to bypass anti-scraping systems
• Extracting structured, clean job data in Markdown format
• Using OpenAI GPT-4o mini to analyze and extract precise salary details and refined job descriptions
• Merging and formatting the result for easy consumption
• Delivering the final output via webhook, Google Sheets, or the file system

What this workflow does

Components & Flow

Input Nodes:
• job_search_url: The job listing or search result URL
• job_role: The title or role being searched for (used in logging/formatting)

MCP Client Operations:
• MCP Salary Data Extractor: Simulates browser behavior and scrapes salary-related content (if available)
• MCP Job Description Extractor: Extracts the full job description as structured Markdown content

OpenAI GPT-4o mini Nodes:
• Salary Information Extractor: Uses GPT-4o mini to detect, clean, and standardize salary range data (if any)
• Job Description Refiner: Extracts role responsibilities, qualifications, and benefits from unstructured text
• Company Information Extractor: Uses Bright Data MCP and GPT-4o mini to extract company information

Merge Node: Combines the refined job description and extracted salary information into a unified JSON response object.
Aggregate Node: Aggregates the job description and salary information into a single JSON response object.

Final Output Handling

The output is handled in three different formats depending on your downstream needs:
**Save to Disk**: Output is stored with a filename that includes the timestamp and job role
**Google Sheet Update**: Adds a new row with the job role, salary, summary, and link
**Webhook Notification**: Pushes the merged response to an external system

Pre-conditions

Knowledge of the Model Context Protocol (MCP) is essential; please read this blog post: model-context-protocol. You also need:
• A Bright Data account with the setup described in the Setup section below
• An OpenAI API key
• The Bright Data MCP server, @brightdata/mcp, installed
• The n8n-nodes-mcp community node installed

Setup

Set up n8n locally with MCP servers by following n8n-nodes-mcp, and install the Bright Data MCP server @brightdata/mcp on your local machine.
Sign up at Bright Data, then navigate to Proxies & Scraping and create a new Web Unlocker zone named mcp_unlocker by selecting Web Unlocker API under Scraping Solutions.
In n8n, configure the OpenAI account credentials.
In n8n, configure the credentials to connect the MCP Client (STDIO) account to the Bright Data MCP server (a sketch of these values appears at the end of this section). Make sure to copy the Bright Data API_TOKEN into the Environments textbox as API_TOKEN=<your-token>.

How to customize this workflow to your needs

Modify the input source: Change job_search_url to point to any job board or aggregator, and customize job_role to reflect the type of jobs being analyzed.
Tweak the LLM prompts (optional): Refine the GPT-4o mini prompts to extract additional fields such as benefits, tech stacks, or remote eligibility.
Change the output format: Customize the merged object to output JSON, CSV, or Markdown based on downstream needs, and add additional destinations (e.g., Slack, Airtable, Notion) via n8n nodes.
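As referenced in the setup steps, the MCP Client (STDIO) credential typically boils down to three values. The exact field labels depend on your n8n-nodes-mcp version, and the token is a placeholder.

```
Command:      npx
Arguments:    -y @brightdata/mcp
Environments: API_TOKEN=<your-bright-data-token>
```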
by Roman Rozenberger
How it works

• Extract AI Overviews from Google Search: Receives data from the browser extension via webhook
• Convert HTML to Markdown: Automatically processes and cleans AI Overview content
• Store in Google Sheets: Archives all extracted AI Overviews with metadata and sources
• Generate SEO Guidelines: AI analyzes your page content against the AI Overview to suggest improvements
• Automate Analysis: Batch-process multiple URLs and schedule regular checks

Set up steps

• Import workflow: Load the JSON template into your n8n instance (2 minutes)
• Configure Google Sheets: Set up the OAuth connection and create a spreadsheet with the required columns (5 minutes)
• Set up AI provider: Add OpenRouter API credentials for Gemini 2.5 Pro (3 minutes)
• Install browser extension: Deploy the companion Chrome/Firefox extension for data extraction (5 minutes)
• Test webhook endpoint: Verify the connection between the extension and the n8n workflow (2 minutes)

Total setup time: ~15 minutes

What you'll need:
• Google account for Sheets integration
• Google Sheet template with the required columns
• OpenRouter API key for Gemini 2.5 Pro model access
• Browser extension: Chrome Extension or Firefox Add-on
• n8n instance (local or cloud)

Use cases:
**SEO agencies**: Monitor AI Overview presence for client keywords
**Content marketers**: Analyze what content gets featured in AI Overviews
**E-commerce**: Track AI Overview coverage for product-related searches
**Research**: Build datasets of AI Overview content across different topics

The workflow comes with a free browser extension (Chrome | Firefox) that automatically extracts AI Overview content from Google Search and sends it via webhook to your n8n workflow for processing and analysis.

GitHub repository: https://github.com/romek-rozen/ai-overview-extractor/

Detailed Setup Instructions - AI Overview Extractor

Prerequisites
• n8n instance (local or cloud), version 1.95.3+
• Google account for Sheets integration
• OpenRouter API account for Gemini 2.5 Pro access
• Browser (Chrome/Firefox) for the extension

Step 1: Import the Workflow
Open n8n and navigate to Workflows. Click "Add workflow" → "Import from JSON", upload the AI_OVERVIES_EXTRACTOR_TEMPLATE.json file, and save the workflow.

Step 2: Configure Google Sheets
Create a new Google Sheet with these columns:
extractedAt | searchQuery | sources | markdown | myURL | task | guidelines | key
A public Google Sheets template is available here: https://docs.google.com/spreadsheets/d/15xqZ2dTiLMoyICYnnnRV-HPvXfdgVeXowr8a7kU4uHk/edit?gid=0#gid=0
Copy the Google Sheets URL (you'll need it for the workflow).
Set up Google Sheets credentials: in n8n, go to Settings → Credentials, click "Add credential" → "Google Sheets OAuth2 API", follow the OAuth setup to authorize n8n's access to Google Sheets, and name the credential (e.g., "Google Sheets AI Overview").
Configure the Google Sheets nodes: update Get URLs to Analyze, Save AI Overview to Sheets, and Save SEO Guidelines to Sheets. In each node, set documentId to your Google Sheets URL, set sheetName to your Google Sheets URL, and select your Google Sheets credential.

Step 3: Configure AI Provider (OpenRouter)
Get an OpenRouter API key: sign up at https://openrouter.ai/, generate an API key in your account settings, and add credits to your account.
Set up OpenRouter credentials: in n8n, go to Settings → Credentials, click "Add credential" → "OpenRouter API", enter your API key, and name the credential (e.g., "OpenRouter AI Overview").
Configure the OpenRouter node: select the Gemini 2.5 Pro Model node, choose your credential from the dropdown, and verify the model (default: google/gemini-2.5-pro-preview).

Step 4: Install Browser Extension
Chrome (official extension, recommended): visit https://chromewebstore.google.com/detail/ai-overview-extractor/cbkdfibgmhicgnmmdanlhnebbgonhjje and click "Add to Chrome".
Firefox (official add-on): visit https://addons.mozilla.org/en-US/firefox/addon/ai-overview-extractor/ and click "Add to Firefox".

Step 5: Configure Webhook Connection
Get the webhook URL: in the n8n workflow, click on the Webhook node and copy the webhook URL (it should look like http://localhost:5678/webhook/ai-overview-extractor-template-123456789).
Configure the extension: go to Google Search and perform any search that shows an AI Overview, click the AI Overview Extractor extension button, paste your webhook URL in the webhook configuration section, click "Test" (it should show ✅ Test successful), then click "Save" to store the configuration. A manual test call is sketched at the end of this guide.

Step 6: Activate and Test
Activate the workflow: in n8n, toggle the workflow to "Active" (top-right switch) and verify that all nodes are properly configured.
Test end to end: go to Google Search, search for something that shows an AI Overview, use the extension to extract it, send it via the webhook, check your Google Sheets for the data, and verify that the markdown conversion worked correctly.

Optional: Batch Analysis Setup
For the SEO analysis features: in your Google Sheets, add URLs in the myURL column, set the task column to "create guidelines", run the workflow manually or wait for the 15-minute scheduler, and check the guidelines column for AI-generated SEO recommendations.

Troubleshooting
Webhook issues: ensure n8n is running on port 5678, check that the workflow is activated, and verify the webhook URL format.
Google Sheets errors: confirm the OAuth credentials are working, check the sheet URL format, verify that the column names match exactly, and ensure the Get URLs to Analyze, Save AI Overview to Sheets, and Save SEO Guidelines to Sheets nodes are properly configured.
OpenRouter issues: check the API key validity, ensure sufficient account credits, try different models if Gemini 2.5 Pro fails, and verify that the Gemini 2.5 Pro Model node is properly connected.
Extension problems: check the browser console for errors, verify the extension is properly installed, ensure you're on google.com/search pages, and confirm the webhook URL is correctly configured in the extension.

Next Steps
**Customize AI prompts** in the Generate SEO Recommendations node for your specific needs
**Adjust scheduler frequency** (default: 15 minutes)
**Add more URL analysis** by populating Google Sheets
**Monitor usage** and API costs

Support
**GitHub Issues**: https://github.com/romek-rozen/ai-overview-extractor/issues
**n8n Community**: https://community.n8n.io/
**Template Documentation**: Check the included README files
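As mentioned in Step 5, you can also exercise the webhook by hand. This sketch mirrors the Google Sheets columns, but the extension's exact payload schema may differ; treat the keys as assumptions.

```js
// Send a hand-built test payload to the n8n webhook endpoint.
await fetch('http://localhost:5678/webhook/ai-overview-extractor-template-123456789', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    extractedAt: new Date().toISOString(),
    searchQuery: 'what is n8n',
    sources: ['https://n8n.io/'],
    markdown: '## AI Overview\nn8n is a workflow automation tool...',
  }),
});
```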
by David Ashby
Complete MCP server exposing all LoneScale Tool operations to AI agents. Zero configuration needed: both operations are pre-built.

⚡ Quick Setup

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every LoneScale Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n LoneScale Tool node with full error handling

📋 Available Operations (2 total)

Every possible LoneScale Tool operation is included:

📝 List (1 operation)
• Create a list

🔧 Item (1 operation)
• Create an item

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native LoneScale Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every LoneScale Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 2 NPR Station Finder Service API operations to AI agents.

⚡ Quick Setup

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add NPR Station Finder Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the NPR Station Finder Service API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://station.api.npr.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 V3 (2 endpoints)
• GET /v3/stations: Get stations
• GET /v3/stations/{stationId}: Retrieve metadata for the station with the given numeric ID

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native NPR Station Finder Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
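For illustration, the station-lookup request can be parameterized like this; the URL expression below is an assumption based on the endpoint path shown above.

```
// Hypothetical URL field of the GET /v3/stations/{stationId} HTTP Request node.
URL: https://station.api.npr.org/v3/stations/{{ $fromAI('stationId', 'Numeric ID of the station to look up', 'string') }}
```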
by David Ashby
Complete MCP server exposing all Jina AI Tool operations to AI agents. Zero configuration needed: all 3 operations are pre-built.

⚡ Quick Setup

Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works

• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Jina AI Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Jina AI Tool node with full error handling

📋 Available Operations (3 total)

Every possible Jina AI Tool operation is included:

🔧 Reader (2 operations)
• Read URL content
• Search web

🔧 Research (1 operation)
• Perform deep research

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Jina AI Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Make direct HTTP calls to MCP endpoints

✨ Benefits

• Complete Coverage: Every Jina AI Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Jean-Marie Rizkallah
🧩 Jamf Policies Export to Slack

Quickly export and review your entire Jamf policy configuration, including triggers, frequencies, and scope, directly in Slack. This lets IT and security teams audit policy setups without logging into Jamf or generating reports manually.

❗The Problem
Jamf Pro lacks a straightforward way to quickly review or share a list of all configured policies, including key attributes like frequency, scope, or triggers. Security teams often need this for audit or compliance reviews, but navigating Jamf's UI or exporting via the API is time-consuming.

🔧 This Fixes It
This workflow fetches all policies, extracts the most relevant fields, compiles them into a CSV file, and posts that readable file into a designated Slack channel, automatically or on demand.

✅ Prerequisites
• A Jamf Pro API key (OAuth2) with read access to policies
• A Slack app with permission to post files into your chosen channel

🔍 How it works
• Manually trigger or use the webhook to initiate the flow
• Retrieve all policies from Jamf via the XML API
• Convert the XML response into JSON
• Split and loop through each policy ID
• Retrieve detailed data for each policy
• Format the relevant fields (ID, name, trigger, scope, etc.); a sketch of this step appears below
• Convert the final data set into a .csv file
• Upload the file to your Slack channel

⚙️ Set up steps
• Takes ~10 minutes to configure
• Set the Jamf BaseURL in the "Jamf Server" node
• Configure Jamf OAuth2 credentials in the HTTP Request nodes
• Adjust the fields to export in the "Set-fields" node
• Set your Slack credentials and target channel in the "Post to Slack" node
• Optional: customize the exported fields or filename

🔄 Automation Ready
Schedule this flow daily or weekly, or tie it to change events to keep your team informed.
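As referenced in the "How it works" list, the field-formatting step can be pictured as this Code-node sketch. The property paths assume the shape of Jamf's classic API policy XML after JSON conversion; verify them against your own converted output.

```js
// Flatten each converted policy object into one CSV-ready row.
const rows = $input.all().map(({ json }) => ({
  json: {
    id: json.policy?.general?.id,
    name: json.policy?.general?.name,
    enabled: json.policy?.general?.enabled,
    trigger: json.policy?.general?.trigger,
    frequency: json.policy?.general?.frequency,
    scopeAllComputers: json.policy?.scope?.all_computers,
  },
}));
return rows; // feed into a Convert to File node to produce the .csv
```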