by David Ashby
Complete MCP server exposing 1 Listing API operation to AI agents.

⚡ Quick Setup
Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Listing API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Listing API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (sketched below)
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 Item_Draft (1 endpoint)
• POST /item_draft/: Create eBay Listing Draft

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Listing API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
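For reference, this is roughly how a $fromAI() placeholder sits inside an HTTP Request node's JSON body so the connected agent fills values in at call time. This is a hedged sketch: the field names are illustrative, not taken from the eBay Listing API spec or the actual workflow.

```javascript
// Sketch: an HTTP Request node "JSON Body" template using n8n's
// $fromAI(key, description, type) placeholders. Field names below
// are illustrative assumptions, not the workflow's real parameters.
const jsonBodyTemplate = `{
  "sku":   "{{ $fromAI('sku',   'Seller-defined SKU for the draft listing', 'string') }}",
  "title": "{{ $fromAI('title', 'Title for the draft listing',             'string') }}"
}`;
```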
by David Ashby
Complete MCP server exposing 1 IP2Proxy Proxy Detection API operation to AI agents.

⚡ Quick Setup
Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add IP2Proxy Proxy Detection credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the IP2Proxy Proxy Detection API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ip2proxy.com
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 General (1 endpoint)
• GET /: Check Proxy IP

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native IP2Proxy Proxy Detection API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 1 Image Moderation API operation to AI agents.

⚡ Quick Setup
Need help, or want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Image Moderation credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Image Moderation API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.moderatecontent.com/moderate/
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 General (1 endpoint)
• Detect Nudity in Images

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Image Moderation API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Jimleuk
This n8n template runs daily to track and report on any changes made to workflows on any n8n instance. Useful if a team is working within a single instance and you want to be notified of which workflows have changed since you last visited them. Another use case might be monitoring the instances you manage for clients, so you're alerted when changes are made without your knowledge.

See a sample Gsheet here: https://docs.google.com/spreadsheets/d/1dOHSfeE0W_qPyEWj5Zz0JBJm8Vrf_cWp-02OBrA_ZYc/edit?usp=sharing

How it works
- A scheduled trigger runs once a day to review all available workflows.
- An n8n node imports the workflows as JSON.
- The workflows are brought into a loop, where each is first checked to see if it exists in the designated Google Sheet. If not, a new entry is created and the workflow is skipped.
- If the workflow has been captured before, the comparison subworkflow is executed with the previous and current versions of the workflow JSON.
- The subworkflow uses the Compare Datasets node to calculate the changes to nodes and connections for the given workflow (see the sketch below).
- The results are then recorded back to the Google Sheet for review.

How to use
- Start with the n8n node and filter for the workflows you're interested in tracking.
- Set the scheduled trigger interval to match how often your workflows are being edited.

Customising the workflow
- Want to get fancy? Add an AI agent to help determine the changes between the previous and current versions of the workflow, and add contextual explanations to reveal the impact of the changes.
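To make the comparison step concrete, here is a minimal Code-node sketch of diffing two workflow versions. It is an illustration only: the template itself uses the Compare Datasets node rather than custom code, and the `previous`/`current` input fields are assumed names.

```javascript
// Sketch: diff the node lists of two workflow versions in an n8n Code node.
// Assumes the incoming item carries `previous` and `current` workflow JSON.
const { previous, current } = $input.first().json;

const prevNames = new Set(previous.nodes.map(n => n.name));
const currNames = new Set(current.nodes.map(n => n.name));

const added   = [...currNames].filter(n => !prevNames.has(n));
const removed = [...prevNames].filter(n => !currNames.has(n));

// Nodes present in both versions whose parameters changed
// (a rough string comparison; order-sensitive, but fine for a sketch).
const modified = current.nodes
  .filter(n => prevNames.has(n.name))
  .filter(n => {
    const prev = previous.nodes.find(p => p.name === n.name);
    return JSON.stringify(prev.parameters) !== JSON.stringify(n.parameters);
  })
  .map(n => n.name);

return [{ json: { added, removed, modified } }];
```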
by Nskha
Overview
This n8n workflow facilitates advanced URL parsing and shortening, incorporating metadata extraction, OpenGraph tag handling, and integration with the Switchy API for link management. It employs various nodes for URL processing, metadata extraction, and the creation or updating of shortened links with enriched metadata.

Features
- **URL Metadata Extraction:** Parses URLs to extract metadata such as titles, descriptions, images, and favicons.
- **OpenGraph API Integration:** Utilizes the OpenGraph API for detailed metadata retrieval.
- **Switchy API Integration:** Manages shortened links via the Switchy API.
- **GitHub API Integration:** Uses GitHub for hosting images related to the URLs.
- **Screenshot Capabilities:** Captures screenshots of web pages as part of the metadata.
- **API Authorization and Configuration:** Manages API keys and configurations for external service integration.

Workflow Structure
1. Split In Batches: Processes URLs in batches.
2. API Auth: Configures API authorization.
3. URL Processing Nodes: Extract metadata using nodes like Get Headers, OpenGraph API, and Meta tags Scraper.
4. Conditional Nodes: Include IF OpenGraph Invalid and If - Enable ScreenShots for logic handling (see the merge sketch below).
5. Data Aggregation: Uses nodes like Method 1 - META for final metadata aggregation.
6. Switchy API: Handles link creation and updating.
7. GitHub Integration: Hosts screenshots and images on a personal GitHub repository.
8. Final Output: Provides the shortened URL after processing.

API Stack

| API        | Description                                     |
|------------|-------------------------------------------------|
| switchy    | For creating and updating shortened links.      |
| opengraph  | To retrieve URL metadata using OpenGraph tags.  |
| dub.sh     | Used for scraping meta tags from web pages.     |
| microlink  | Captures screenshots of web pages.              |
| pxl.to     | Alternative service for capturing screenshots.  |
| favicone   | Retrieves favicons for given URLs.              |
| github     | Hosts images and screenshots on a GitHub repo.  |
| statically | Used for CDN services and image hosting.        |
| Other APIs | Additional APIs used for various purposes.      |

GitHub Repository Setup
To use this workflow, ensure your GitHub API is linked for hosting images. Set up a repository where the workflow can upload screenshots and other related images. This repository will be referenced in the workflow nodes where images are handled.

Configuration
Before running the workflow, set up the necessary API keys and configurations in the API Auth node. Adjust the batch size and other parameters as needed.

Error Handling
The workflow includes nodes like Stop and Error for robust error handling. If you run into a problem, post an issue in the n8n community and mention the creator.

Contributions
This workflow is open for community contributions. Enhancements and improvements are welcome.
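As a rough illustration of the fallback logic behind the conditional nodes, a Code-node sketch of merging metadata from several sources, preferring OpenGraph when it is valid, might look like this. The field names (`openGraph`, `metaTags`, `headers`) are assumptions, not the workflow's actual output shapes.

```javascript
// Sketch: merge metadata candidates in an n8n Code node.
// `openGraph`, `metaTags`, and `headers` are assumed fields produced
// by the upstream extraction nodes (OpenGraph API, Meta tags Scraper, ...).
const { openGraph, metaTags, headers } = $input.first().json;

// Prefer OpenGraph when present, then scraped meta tags, then headers.
const pick = (field) =>
  openGraph?.[field] || metaTags?.[field] || headers?.[field] || null;

return [{
  json: {
    title:       pick('title'),
    description: pick('description'),
    image:       pick('image'),
    favicon:     pick('favicon'),
  },
}];
```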
by Jimleuk
This n8n workflow builds an appointment-scheduling AI agent which can:
- Take enquiries from prospective customers and help them book an appointment by checking appointment availability.
- Send follow-up messages to re-engage leads where no appointment has been booked.
- Reschedule or even cancel a booking for the user after it is made, without human intervention.

For small outfits, this workflow could contribute the "man-power" needed to increase business sales.

The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox

2024-10-22: Updated to Cal.com API v2.

How it works
- The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook.
- The prospective or existing customer's number is logged in an Airtable base, which we'll use to track all our enquiries.
- Next, the message is sent to our AI agent, which can reply to the user and decide if an appointment booking can be made. The reply is sent via SMS using Twilio.
- A scheduled trigger, which runs every day, checks our chat logs for prospective customers who have yet to book an appointment but still show interest (see the selection sketch below). This list is sent to our AI agent, which formulates a personalised follow-up message for each lead asking whether they want to continue with the booking.
- The follow-up interaction is logged so as not to send too many messages to the customer.

Requirements
- A Twilio account to receive customer messages.
- An Airtable account and base to use as our datastore for enquiries.
- A Cal.com account to use as our scheduling service.
- An OpenAI account for our AI model.

Customising this workflow
- Not using Airtable? Swap it out for your CRM of choice, such as HubSpot, or your own service.
- Not using Cal.com? Swap it out for an API-enabled service such as Acuity Scheduling, or your own service.
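As a rough sketch of the daily follow-up selection, a Code node might filter the chat log like this. The field names (`booked`, `lastContacted`, `followUpsSent`) are assumptions; the actual template reads its own columns from Airtable.

```javascript
// Sketch: select leads for a follow-up message in an n8n Code node.
// Assumed fields on each lead: `booked`, `lastContacted`, `followUpsSent`.
const DAY_MS = 24 * 60 * 60 * 1000;
const now = Date.now();

const followUps = $input.all()
  .map(item => item.json)
  .filter(lead =>
    !lead.booked &&                                        // no appointment yet
    lead.followUpsSent < 3 &&                              // don't over-message
    now - new Date(lead.lastContacted).getTime() > DAY_MS  // give them a day
  );

return followUps.map(lead => ({ json: lead }));
```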
by DUBCOM
Workflow: Snapshot Contabo

How it works
This workflow automates daily backups (snapshots) of VPS instances hosted on Contabo. Each day at midnight, it checks for existing snapshots and ensures that only the latest backups are retained by removing older ones. It provides a seamless, hands-off backup process to keep your data secure.

Setup Steps
Setting up this workflow is quick, typically taking about 10-15 minutes. The essential part of the setup is providing the necessary credentials, which you can easily retrieve from your Contabo control panel.
1. Import the workflow: Download and upload the workflow JSON into n8n.
2. Configure credentials: Add CLIENT_ID, CLIENT_SECRET, API_USER, and API_PASSWORD in the credential node.
3. Activate the workflow: Enable it to run automatically at midnight every day.

Flow Overview
- **Schedule Trigger (00:00 daily):** Automatically initiates the workflow.
- **Formatted Date:** Prepares a timestamp for naming the snapshot (see the sketch below).
- **List Snapshots:** Verifies whether an existing snapshot is available for each VPS.
- **Conditional Logic:**
  - No snapshot? Proceeds to create a new one.
  - Snapshot found? Deletes the old snapshot before creating a new one.

Key Points
- **Snapshot Retention:** Old snapshots are deleted to ensure only the latest backups are stored.
- **Unique Identifiers:** UUIDs are used to track and guarantee unique operations.
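For illustration, the "Formatted Date" step could be a Code node along these lines. The `snapshot-YYYY-MM-DD` naming scheme is an assumption, not the template's actual convention.

```javascript
// Sketch: build a timestamped snapshot name in an n8n Code node.
// The "snapshot-YYYY-MM-DD" naming scheme below is an assumption.
const now = new Date();
const stamp = now.toISOString().slice(0, 10); // e.g. "2024-10-22"

return [{ json: { snapshotName: `snapshot-${stamp}` } }];
```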
by Jez
🎯 Overview
This n8n workflow automates the process of ingesting documents from multiple sources (Google Drive and web forms) into a Qdrant vector database for semantic search capabilities. It handles batch processing, document analysis, embedding generation, and vector storage - all while maintaining proper error handling and execution tracking.

🚀 Key Features
- **Dual Input Sources**: Accepts files from both Google Drive folders and web form uploads
- **Batch Processing**: Processes files one at a time to prevent memory issues and ensure reliability
- **AI-Powered Analysis**: Uses Google Gemini to extract metadata and understand document context
- **Vector Embeddings**: Generates OpenAI embeddings for semantic search capabilities
- **Automated Cleanup**: Optionally deletes processed files from Google Drive (configurable)
- **Loop Processing**: Handles multiple files efficiently with Split In Batches nodes
- **Interactive Chat Interface**: Built-in chatbot for testing semantic search queries against indexed documents

📋 Use Cases
- **Knowledge Base Creation**: Build searchable document repositories for organizations
- **Document Compliance**: Process and index legal/regulatory documents (like Fair Work documents)
- **Content Management**: Automatically categorize and store uploaded documents
- **Research Libraries**: Create semantic search capabilities for research papers or reports
- **Customer Support**: Enable instant answers to policy and documentation questions via the chat interface

🔧 Workflow Components

Input Methods
1. Google Drive Integration
   - Monitors a specific folder for new files
   - Processes existing files in batch mode
   - Supports automatic file conversion to PDF
2. Web Form Upload
   - Public-facing form for document submission
   - Accepts PDF, DOCX, DOC, and CSV files
   - Processes multiple file uploads in a single submission

Processing Pipeline
1. File Splitting: Separates multiple uploads into individual items
2. Document Analysis: Google Gemini extracts document understanding
3. Text Extraction: Converts documents to plain text
4. Embedding Generation: Creates vector embeddings via OpenAI
5. Vector Storage: Inserts documents with embeddings into Qdrant
6. Loop Control: Manages batch processing with proper state handling

Key Nodes
- **Split In Batches**: Processes files one at a time with reset: false to maintain state
- **Google Gemini**: Analyzes documents for context and metadata
- **Langchain Vector Store**: Handles Qdrant insertion with embeddings
- **HTTP Request**: Direct API calls for custom operations
- **Chat Interface**: Interactive chatbot for testing vector search queries

🛠️ Technical Implementation

Batch Processing Logic
The workflow uses a clever looping mechanism:
- Split In Batches with batchSize: 1 ensures single-file processing
- reset: false maintains loop state across iterations
- The loop continues until all files are processed

Error Handling
- All nodes include continueOnFail options where appropriate
- Execution logs are preserved for debugging
- File deletion only occurs after successful insertion

Data Flow
Form Upload → Split Files → Batch Loop → Analyze → Insert → Loop Back
Google Drive → List Files → Batch Loop → Download → Analyze → Insert → Delete → Loop Back

📊 Performance Considerations
- **Processing Time**: ~20-30 seconds per file
- **Batch Size**: Set to 1 for reliability (configurable)
- **Memory Usage**: Optimized for files under 10MB
- **API Costs**: Uses OpenAI embeddings (text-embedding-3-large model)

🔐 Required Credentials
- Google Drive OAuth2: For file access and management
- OpenAI API: For embedding generation
- Qdrant API: For vector database operations
- Google Gemini API: For document analysis

💡 Implementation Tips
1. Start Small: Test with a few files before processing large batches
2. Monitor Costs: Track OpenAI API usage for embedding generation
3. Backup First: Consider archiving instead of deleting processed files
4. Check Collections: Ensure the Qdrant collection exists before running

🎨 Customization Options
- **Change Embedding Model**: Switch to text-embedding-3-small for cost savings
- **Adjust Chunk Size**: Modify text-splitting parameters for different document types
- **Add Metadata**: Extend the Gemini prompt to extract specific fields
- **Archive vs Delete**: Replace the delete operation with a move to a "processed" folder

📈 Real-World Application
This workflow was developed to process business documents and legal agreements, making them searchable through semantic queries. It's particularly useful for organizations dealing with large volumes of regulatory documentation that needs to be quickly accessible and searchable.

Chat Interface Testing
The integrated chatbot interface allows users to:
- Query processed documents using natural language
- Test semantic search capabilities in real time
- Verify document indexing and retrieval accuracy
- Ask questions about specific topics (e.g., "What are the pay rates for junior employees?")
- Get instant AI-powered responses based on the indexed content

🌟 Benefits
- **Automation**: Eliminates manual document processing
- **Scalability**: Handles individual files or bulk uploads
- **Intelligence**: AI-powered understanding of document content
- **Flexibility**: Multiple input sources and processing options
- **Reliability**: Robust error handling and state management

👨‍💻 About the Creator
Jeremy Dawes is the CEO of Jezweb, specializing in AI and automation deployment solutions. This workflow represents practical, production-ready automation that solves real business challenges while maintaining simplicity and reliability.

📝 Notes
- The workflow intelligently handles the n8n form upload pattern, where multiple files create a single item with multiple binary properties (Files_0, Files_1, etc.) - see the splitting sketch below
- The Split In Batches pattern with reset: false is crucial for proper loop execution
- Direct API integration provides more control than pure Langchain implementations

🔗 Resources
- Qdrant Documentation
- OpenAI Embeddings
- n8n Documentation
- Jezweb - AI & Automation Solutions

This workflow demonstrates practical automation that bridges document management with modern AI capabilities, creating intelligent document processing systems that scale with your needs.
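As a sketch of the form-upload splitting mentioned in the notes, a minimal Code-node example that separates Files_0, Files_1, … into individual items might look like this. The "Files_" prefix is assumed from the note above; the real workflow's property names may differ.

```javascript
// Sketch: split an n8n form upload (Files_0, Files_1, ...) into
// one item per file so the batch loop can process them singly.
// Assumes the binary properties are prefixed "Files_".
const item = $input.first();
const out = [];

for (const [key, data] of Object.entries(item.binary ?? {})) {
  if (!key.startsWith('Files_')) continue;
  out.push({
    json: { fileName: data.fileName },
    binary: { data }, // normalize to a single "data" property
  });
}

return out;
```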
by Sam Nesler
Syncs assignments and completion states back and forth between Canvas LMS and a Notion database. By default it triggers automatically every 2 hours during the school day (meaning 7 times a day), but it also supports manual refreshing via webhooks.

Setup
You'll need a few things to get started:
1. A Canvas API key. You can generate one by going to your Canvas account settings and clicking the "New Access Token" button. The URL looks like https://canvas.wisc.edu/profile/settings
   - You'll also need to replace the URLs in the Canvas nodes with your institution's domain, unless you're a student at UW-Madison. The Canvas nodes are all the HTTP Request nodes except the one labelled "OpenAI Categorization", which is an OpenAI node and will require a key in a later step.
2. A Notion integration token. You can find this by going to your Notion integrations page and clicking "Create new integration". You can make it an "Internal Integration".
3. A Notion database to sync to. I made a template for use with the workflow, but you can use any database that has the following fields:
   - Status (status): Status with at least the options "Not Started" and "Completed" - assignments start out "Not Started" and are marked "Completed" when they are submitted on Canvas.
   - Estimate (select): Select with at least the options "XS", "S", "M", "L", "XL" - this is where the estimated time to complete the assignment will be stored. Even if you don't use AI, they'll start out as "M".
   - Priority (select): Select with at least the options "Could Do", "Should Do", "Must Do" - assignments start out "Should Do".
   - ID (text): this is where the ID of the assignment will be stored. We use this to sync without having a database on the server.
   - Due Date (date): this is where the due date of the assignment will be stored.
   - Class (text): this is where the name of the class will be stored.
   - Link (URL): this is where the link to the assignment will be stored.
4. The ID of the Notion database you want to sync to. You can find this by clicking "Share" in the top right of your database and copying the link. The ID is the part of the link that comes after https://www.notion.so/ and before ?v=. So for https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4, the ID would be 1976e99d91128076b034e7379464560f (the snippet below shows the same extraction in code).
5. An OpenAI key for assignment length estimation, or disable the node.

Manual Refreshing
Embed the production URL from the Webhook Trigger inside a "toggle list" or "toggle heading" inside Notion, then expand the heading to refresh, like so:
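For illustration, the database-ID extraction from step 4 can be done mechanically. This small sketch is not part of the workflow itself:

```javascript
// Sketch: pull the Notion database ID out of a share link.
// The ID sits between the last "/" of the path and the "?v=" query string.
function notionDatabaseId(shareUrl) {
  const { pathname } = new URL(shareUrl);
  return pathname.split('/').pop();
}

// Example from the setup instructions above:
console.log(notionDatabaseId(
  'https://www.notion.so/tsuniiverse/1976e99d91128076b034e7379464560f?v=1976e99d911281e7bd4b000c2cbec692&pvs=4'
)); // -> "1976e99d91128076b034e7379464560f"
```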
by Solomon
This n8n workflow automates lead extraction from Google Maps, enriches the data with AI, and stores the results for cold outreach. It uses the Bright Data community node and Bright Data MCP for scraping and AI message generation.

How it works
1. Form Submission: The user provides a Google Maps starting location, keyword, and country.
2. Bright Data Scraping: The Bright Data community node triggers a Maps scraping job, monitors its progress, and downloads the results.
3. AI Message Generation: Uses Bright Data MCP with LLMs to create a personalized cold-call script and talking points for each lead.
4. Database Storage: Enriched leads and scripts are upserted to Supabase.

How to use
Set up all the credentials, create your Postgres table, and submit the form. The rest happens automatically.

Requirements
- An LLM account (OpenAI, Gemini…) for API usage.
- A Bright Data account for API and MCP usage.
- A Supabase account (or another Postgres database) to store the information.
by Angel Menendez
Upload Public-Facing Images to an S3 Cloudflare Bucket via Slack Modal

🛠 Who is this for?
This workflow is for teams that use Slack for internal communication and need a streamlined way to upload public-facing images to an S3 Cloudflare bucket. It's especially beneficial for DevOps, marketing, or content management teams who frequently share assets and require efficient cloud storage integration.

💡 What problem does this workflow solve?
Manually uploading images to cloud storage can be time-consuming and disruptive, especially if you're already working in Slack. This workflow automates the process, allowing you to upload images directly from Slack via a modal popup. It reduces friction and keeps your workflow within a single platform.

🔍 What does this workflow do?
This workflow connects Slack with an S3 Cloudflare bucket to simplify the image-uploading process:
- **Slack Modal Interaction**: Users trigger a Slack modal to select images for upload.
- **Dynamic Folder Management**: Choose to create a new folder or use an existing one for uploads.
- **S3 Integration**: Automatically uploads the images to a specified S3 Cloudflare bucket.
- **Slack Confirmation**: After upload, Slack sends a confirmation with the uploaded file URLs.

🚀 Setup Instructions

Prerequisites
- A Slack bot with the following permissions: commands, files:write, files:read, chat:write
- Cloudflare S3 credentials: Create an API token with write access to your S3 bucket.
- An n8n instance: Ensure n8n is properly set up with webhook capabilities.

Steps
1. Configure the Slack bot: Set up a Slack app and enable the Events API. Add your n8n webhook URL to the Event Subscriptions section.
2. Add credentials: Add your Slack API and S3 Cloudflare credentials to n8n.
3. Customize the workflow: Open the Idea Selector Modal node and update the folder options to suit your needs. Update the Post Image to Channel node with your Slack channel ID.
4. Deploy the workflow: Activate the workflow and test it by triggering the Slack modal.

🛠 How to Customize This Workflow
- Adjust the Slack modal: You can modify the modal layout in the Idea Selector Modal node to add additional fields or adjust the styling (a minimal modal sketch follows below).
- Change the bucket structure: Update the Upload to S3 Bucket node to customize the folder paths or change naming conventions.

🔗 References and Helpful Links
- Slack API Documentation
- Cloudflare S3 Setup
- n8n Documentation

📓 Workflow Notes

Key Features:
- **Slack Integration**: Uses Slack modal interactions to streamline the upload process.
- **Cloud Storage**: Automatically uploads to a Cloudflare S3 bucket.
- **User Feedback**: Sends a Slack message with file URLs upon successful upload.

Setup Dependencies:
- Slack API token
- Cloudflare S3 credentials
- n8n webhook configuration

Sticky notes are embedded within the workflow to guide you through configuration and explain node functionality.

🌟 Why Use This Workflow?
This workflow keeps your image-uploading process intuitive, efficient, and fully integrated with tools you already use. By leveraging n8n's flexibility, you can ensure smooth collaboration and quick sharing of public-facing assets without switching contexts.
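For reference, a minimal folder-picker modal of the kind the Idea Selector Modal node opens might look like this, using Slack's views.open API. This is a hedged sketch: the callback ID, block IDs, labels, and folder options are illustrative, not the workflow's actual values.

```javascript
// Sketch: open a minimal folder-picker modal via Slack's views.open API.
// Labels, IDs, and folder options below are illustrative assumptions.
const triggerId = 'TRIGGER_ID_FROM_INTERACTION_PAYLOAD'; // supplied by Slack

const view = {
  type: 'modal',
  callback_id: 'image_upload_modal',
  title: { type: 'plain_text', text: 'Upload Image' },
  submit: { type: 'plain_text', text: 'Upload' },
  blocks: [{
    type: 'input',
    block_id: 'folder_select',
    label: { type: 'plain_text', text: 'Destination folder' },
    element: {
      type: 'static_select',
      action_id: 'folder',
      options: [
        { text: { type: 'plain_text', text: 'marketing' }, value: 'marketing' },
        { text: { type: 'plain_text', text: 'docs' }, value: 'docs' },
      ],
    },
  }],
};

const res = await fetch('https://slack.com/api/views.open', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`,
  },
  body: JSON.stringify({ trigger_id: triggerId, view }),
});
console.log(await res.json()); // { ok: true, ... } on success
```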
by Akram Kadri
Who is this template for?
This workflow template is designed for sales, marketing, and business development professionals who want a cost-effective and efficient way to generate leads. By leveraging n8n core nodes, it scrapes business emails from Google Maps without relying on third-party APIs or paid services, ensuring there are no additional costs involved. Ideal for small business owners, freelancers, and agencies, this template automates the process of collecting contact information for targeted outreach, making it a powerful tool for anyone looking to scale their lead generation efforts without incurring extra expenses.

You can watch the video tutorial here: https://youtu.be/HaiO-UeiKBA

How it works
This template streamlines email scraping from Google Maps using only n8n core nodes, ensuring a completely free and self-contained solution. Here's how it operates:
1. Input Queries: You provide a list of queries, each consisting of keywords related to the type of business you want to target and the specific region or subregion you're interested in.
2. Iterates through Queries: The workflow processes each query one at a time. For each query, it triggers a sub-workflow dedicated to handling the scraping tasks.
3. Scrapes Google Maps for URLs: Using these queries, the workflow scrapes Google Maps to collect URLs of business listings matching the provided criteria.
4. Fetches HTML Content: The workflow then fetches the HTML pages of the collected URLs for further processing.
5. Extracts Emails: Using a Code node with custom JavaScript, the workflow runs regular expressions on the HTML content to extract business email addresses (see the sketch below).

Setup
1. Add queries: Open the first node, "Run Workflow", and input a list of queries, each containing the business keywords and the target region.
2. Configure the Google Sheets node: Open the Google Sheets node and select a document and the specific sheet where the scraped results will be saved.
3. Run the workflow: Click "Test workflow" and watch your Google Sheets document gradually receive business email addresses.
4. Customize as needed: You can adjust the regular expressions in the Code node to refine the email extraction logic, or add logic to extract other kinds of information.
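A minimal sketch of what the extraction step might look like (the template's actual regex and field names may differ):

```javascript
// Sketch: extract de-duplicated email addresses from fetched HTML
// in an n8n Code node. The regex is a simple illustration; the
// template's actual pattern may be stricter.
const EMAIL_RE = /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g;

const results = [];
for (const item of $input.all()) {
  const html = item.json.data ?? '';  // HTML body from the HTTP Request node
  const emails = [...new Set(html.match(EMAIL_RE) ?? [])];
  for (const email of emails) {
    // `url` is an assumed field carrying the scraped listing's address.
    results.push({ json: { email, source: item.json.url } });
  }
}

return results;
```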