by Oskar
This AI agent workflow navigates through a website to retrieve a specific type of information (in this example: social media profile links). The agent is equipped with two tools: a **text tool** to retrieve all the text from a page, and a **URLs tool** to extract all links from a page. 💡 You can edit the prompt and the JSON schema connected to the agent to return data other than social media profile links (a sample schema sketch is shown at the end of this description). 👉 This workflow uses Supabase as storage (input/output). Feel free to change it to any other database of your choice. 🎬 See this workflow in action in my YouTube video.

How it works
The workflow uses the input URL (website) as a starting point for retrieving the data (e.g. example.com). Using the URLs tool, the agent retrieves all links from the page and can navigate to them. For example, to retrieve contact information, the agent will try to find a subpage that might contain it (e.g. example.com/contact) and extract the information using the text tool.

Set up steps
Connect a database with input data (website addresses) or pin sample data to the trigger node. Configure the crawling agent to retrieve the desired data (e.g. modify the prompt and/or parsing schema). Set credentials for OpenAI. Optionally, split the agent tools into separate workflows. If you like this workflow, please subscribe to my YouTube channel and/or my newsletter.
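A sample schema (a sketch only; the property names are illustrative, so adjust them to the data you want the agent to return):

```json
{
  "type": "object",
  "properties": {
    "website": { "type": "string", "description": "The input URL that was crawled" },
    "facebook": { "type": "string", "description": "Facebook profile URL, empty if not found" },
    "instagram": { "type": "string", "description": "Instagram profile URL, empty if not found" },
    "linkedin": { "type": "string", "description": "LinkedIn profile URL, empty if not found" }
  },
  "required": ["website"]
}
```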
by David Ashby
Complete MCP server exposing 1 Listing API operation to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
Import this workflow into your n8n instance
Add Listing API credentials
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Listing API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (1 total)
🔧 Item_Draft (1 endpoint)
• POST /item_draft/: Create eBay Listing Draft

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Listing API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
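For the Claude Desktop entry, one common pattern is to bridge the remote SSE endpoint through the mcp-remote package. A hedged sketch of claude_desktop_config.json (the server name and webhook URL below are placeholders, not values from this template):

```json
{
  "mcpServers": {
    "n8n-listing-api": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://your-n8n-instance.example.com/mcp/listing/sse"]
    }
  }
}
```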
by David Ashby
Complete MCP server exposing all Mandrill Tool operations to AI agents. Zero configuration needed: all 2 operations are pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
Import this workflow into your n8n instance
Activate the workflow to start your MCP server
Copy the webhook URL from the MCP trigger node
Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Mandrill Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Mandrill tool with full error handling

📋 Available Operations (2 total)
Every possible Mandrill Tool operation is included:
💬 Message (2 operations)
• Send a message based on a template
• Send a message based on HTML

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options
Response Format: Native Mandrill Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Mandrill Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
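As an illustration of the $fromAI() pattern used by the pre-built tools, a tool parameter field might contain an expression like the following (the key and description are made up for this example):

```
{{ $fromAI('toEmail', 'Recipient email address for the message', 'string') }}
```

At call time, the connected AI agent supplies the value, so no manual mapping is needed.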
by Marcial Ambriz
Remixed from Solomon's "Backup your workflows to GitHub" work. Check out his templates.

How it works
This workflow backs up your typebots to GitHub. It uses the Typebot API to export all typebots. It then loops over the data and checks GitHub to see whether a file using that typebot's ID already exists. Once checked, it will:
update the file on GitHub if it exists;
create a new file if it doesn't exist;
ignore it if it's unchanged.
In addition, it also checks whether any flows have been deleted from the Typebot workspace. If a flow no longer exists in the workspace, the corresponding file is removed from the repository to keep everything in sync.

Who is this for?
People wanting to back up their typebots (flows) outside the server for safety purposes, or to migrate to another server.
by Jimleuk
This n8n template runs daily to track and report on any changes made to workflows on any n8n instance. Useful if a team is working within a single instance and you want to be notified of which workflows have changed since you last visited them. Another use case might be monitoring the instances you manage for clients and being alerted when changes are made without your knowledge. See a sample Gsheet here: https://docs.google.com/spreadsheets/d/1dOHSfeE0W_qPyEWj5Zz0JBJm8Vrf_cWp-02OBrA_ZYc/edit?usp=sharing

How it works
A scheduled trigger is set to run once a day to review all available workflows. An n8n node imports the workflows as JSON. The workflows are brought into a loop where each is first checked to see if it already exists in the designated Google Sheet. If not, a new entry is created and the comparison is skipped. If the workflow has been captured before, the comparison subworkflow is executed with the previous and current versions of the workflow JSON. The subworkflow uses the Compare Datasets node to calculate the changes to nodes and connections for the given workflow. The results are then recorded back to the Google Sheet for review.

How to use
Start with the n8n node and filter for the workflows you're interested in tracking. Set the scheduled trigger interval to match how often your workflows are being edited.

Customising the workflow
Want to get fancy? Add an AI agent to help determine the changes between the previous and current versions of the workflow, and add contextual explanations to reveal the impact of each change.
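To picture what the comparison subworkflow computes, here is a rough Code-node-style sketch of diffing two workflow JSON versions. The template itself uses the Compare Datasets node, so this is illustrative only:

```javascript
// Diff two n8n workflow JSON versions by node name (illustrative sketch).
function diffWorkflows(previous, current) {
  const prevNames = new Set(previous.nodes.map(n => n.name));
  const currNames = new Set(current.nodes.map(n => n.name));

  const added = [...currNames].filter(name => !prevNames.has(name));
  const removed = [...prevNames].filter(name => !currNames.has(name));

  // Nodes present in both versions whose parameters changed
  const modified = current.nodes
    .filter(n => prevNames.has(n.name))
    .filter(n => {
      const prev = previous.nodes.find(p => p.name === n.name);
      return JSON.stringify(prev.parameters) !== JSON.stringify(n.parameters);
    })
    .map(n => n.name);

  return { added, removed, modified };
}
```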
by Nskha
Overview
This n8n workflow facilitates advanced URL parsing and shortening, incorporating metadata extraction, OpenGraph tag handling, and integration with the Switchy API for link management. It employs various nodes for URL processing, metadata extraction, and the creation or updating of shortened links with enriched metadata.

Features
**URL Metadata Extraction:** Parses URLs to extract metadata such as titles, descriptions, images, and favicons.
**OpenGraph API Integration:** Utilizes the OpenGraph API for detailed metadata retrieval.
**Switchy API Integration:** Manages shortened links via the Switchy API.
**GitHub API Integration:** Uses GitHub for hosting images related to the URLs.
**Screenshot Capabilities:** Captures screenshots of web pages as part of the metadata.
**API Authorization and Configuration:** Manages API keys and configurations for external service integration.

Workflow Structure
Split In Batches: Processes URLs in batches.
API Auth: Configures API authorization.
URL Processing Nodes: Extract metadata using nodes such as Get Headers, OpenGraph API, and Meta tags Scraper.
Conditional Nodes: Include IF OpenGraph Invalid and If - Enable ScreenShots for logic handling.
Data Aggregation: Uses nodes like Method 1 - META for final metadata aggregation.
Switchy API: Handles link creation and updating.
GitHub Integration: Hosts screenshots and images on a personal GitHub repository.
Final Output: Provides the shortened URL after processing.

API Stack
| API | Description |
|------------|--------------------------------------------------|
| switchy | For creating and updating shortened links. |
| opengraph | To retrieve URL metadata using OpenGraph tags. |
| dub.sh | Used for scraping meta tags from web pages. |
| microlink | Captures screenshots of web pages. |
| pxl.to | Alternative service for capturing screenshots. |
| favicone | Retrieves favicons for given URLs. |
| github | Hosts images and screenshots on a GitHub repo. |
| statically | Used for CDN services and image hosting. |
| Other APIs | Additional APIs used for various purposes. |

GitHub Repository Setup
To use this workflow, ensure your GitHub API credentials are linked for hosting images. Set up a repository where the workflow can upload screenshots and other related images. This repository will be referenced in the workflow nodes that handle images.

Configuration
Before running the workflow, set up the necessary API keys and configurations in the API Auth node. Adjust the batch size and other parameters as needed.

Error Handling
The workflow includes nodes like Stop and Error for robust error handling. If you run into problems, post an issue and mention the creator in the n8n community.

Contributions
This workflow is open for community contributions. Enhancements and improvements are welcome.
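To make the data flow concrete, the aggregated metadata handed to the Switchy link-creation step might look roughly like this (field names and URLs are illustrative, not taken from the workflow):

```json
{
  "url": "https://example.com",
  "title": "Example Domain",
  "description": "Short description pulled from meta or OpenGraph tags",
  "image": "https://cdn.statically.io/img/raw.githubusercontent.com/you/repo/main/screenshot.png",
  "favicon": "https://favicone.com/example.com"
}
```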
by Rostislav
This n8n template provides a complete solution for Optical Character Recognition (OCR) of image and PDF files directly within Telegram. Users can simply send PNG, JPEG, or PDF documents to your Telegram bot, and the workflow will process them, extract text using Mistral OCR, and return the content as a downloadable Markdown (.md) text file.

Key Features & How it Works:
**Effortless OCR via Telegram**: Users send a file to the bot, and the system automatically detects the file type (PNG, JPEG, or PDF).
**File Size Validation**: The workflow enforces a **25 MB** file size limit, in line with Telegram Bot API restrictions, ensuring smooth operation.
**Mistral-Powered Recognition**: Leveraging **Mistral OCR**, the template accurately extracts text from various document types.
**Markdown Output**: Recognized text is automatically converted into a clean Markdown (.md) text file, ready for easy editing, storage, or further processing.
**Secure File Delivery**: The processed Markdown file is delivered back to the user via Telegram. For this, the workflow ingeniously sends a **GET request to itself** (acting as a file downloader proxy). The generated link allows Telegram to fetch the .md file directly. Please note: this download functionality requires the workflow to be in an Active status.
**Optional Whitelist Security**: Enhance your bot's security with an **optional whitelist** feature. You can configure specific Telegram user IDs to restrict access, ensuring only authorized users can interact with your bot.
**Simplified Webhook Management**: The template includes dedicated utility flows for convenient management of your Telegram bot's webhooks (for both development and production environments).

This template is ideal for digitizing documents on the go, extracting text from scanned files, or converting image-based content into versatile, searchable text.

Getting Started
To get this powerful OCR bot up and running, follow these two main steps:
Set Up Your Telegram Bot: First, configure your Telegram bot and its webhooks. Follow the instructions in the Telegram Bot Webhook Setup section to create your bot, obtain its API token, and set up the necessary webhook URLs.
Configure Bot Settings: Next, define the key operational parameters for your bot. Proceed to the Settings Configuration section and populate the variables according to your preferences, including options for whitelist access.
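For reference, the 25 MB gate can be reproduced in a Code node along these lines (a minimal sketch; node names and exact JSON paths in the template may differ):

```javascript
// Minimal sketch of the 25 MB gate (n8n Code node style).
// Telegram attaches file_size (bytes) to documents and photo variants.
const MAX_BYTES = 25 * 1024 * 1024;

const message = $json.message;
// Either a document upload or the largest photo variant
const file = message.document ?? message.photo?.at(-1);

if (!file) {
  throw new Error('No supported file found in the message');
}
if ((file.file_size ?? Infinity) > MAX_BYTES) {
  throw new Error('File exceeds the 25 MB limit');
}

return [{ json: { file_id: file.file_id, file_size: file.file_size } }];
```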
by Cyril Nicko Gaspar
📌 AI Agent via GoHighLevel SMS with Website-Based Knowledgebase
This n8n workflow enables an AI agent to interact with users through GoHighLevel SMS, leveraging a knowledgebase dynamically built by scraping the company's website.

❓ Problem It Solves
Traditional customer support systems often require manual data entry and lack real-time updates from the company's website. This workflow automates the process by:
Scraping the company's website at set intervals to update the knowledgebase.
Integrating with GoHighLevel SMS to provide users with timely and accurate information.
Utilizing AI to interpret user queries and fetch relevant information from the updated knowledgebase.

🧰 Pre-requisites
Before deploying this workflow, ensure you have:
An active n8n instance (self-hosted or cloud).
A valid OpenAI API key (or any compatible AI model).
A Bright Data account with Web Unlocker set up.
A GoHighLevel SMS LeadConnector account.
A GoHighLevel Marketplace App configured with the necessary scopes.
The n8n-nodes-brightdata community node installed for Bright Data integration (if self-hosted).

⚙️ Setup Instructions
1. Install the Bright Data Community Node in n8n
For self-hosted n8n instances:
Navigate to Settings → Community Nodes.
Click Install.
In the search bar, enter n8n-nodes-brightdata.
Select the node from the list and click Install.
Docs: https://docs.n8n.io/integrations/community-nodes/installation/gui-install

2. Configure Bright Data Credentials
Obtain your API key from Bright Data.
In n8n, go to Credentials → New and select HTTP Request.
Set authentication to Header Auth.
In Name, enter Authorization.
In Value, enter Bearer <your_api_key_from_Bright_Data>.
Save the credentials.

3. Configure OpenAI Credentials
Add your OpenAI API key to the relevant nodes. If you want to use a different model, replace all OpenAI nodes accordingly.

4. Set Up GoHighLevel Integration
a. Create a GoHighLevel Marketplace App
Go to https://marketplace.gohighlevel.com
Click My Apps → Create App
Set Distribution Type to Sub-Account
Add the following scopes:
locations.readonly
contacts.readonly
contacts.write
opportunities.readonly
opportunities.write
users.readonly
conversations/message.readonly
conversations/message.write
Add your n8n OAuth redirect URL as a redirect URI in the app settings.
Save and copy the Client ID and Client Secret.

b. Configure GoHighLevel Credentials in n8n
Go to Credentials → New
Choose OAuth2 API
Input:
Client ID
Client Secret
Authorization URL: https://auth.gohighlevel.com/oauth/authorize
Access Token URL: https://auth.gohighlevel.com/oauth/token
Scopes: locations.readonly contacts.readonly contacts.write opportunities.readonly opportunities.write users.readonly conversations/message.readonly conversations/message.write
Save and authenticate to complete the setup.
Docs: https://docs.n8n.io/integrations/builtin/credentials/highlevel

🔄 Workflow Functionality (Summary)
**Scheduled Scraping**: Scrapes the website at user-defined intervals.
**Edit Fields node**: The user defines the homepage or site to scrape.
**Bright Data node** (self-hosted) or **HTTP Request node** (cloud users) performs the scraping (see the sketch after this description).
**Knowledgebase Update**: The scraped content is stored or indexed.
**GoHighLevel SMS**: Incoming user queries are received through SMS.
**AI Processing**: The AI matches queries to relevant content.
**Response Delivery**: AI-generated answers are sent back via SMS.

🧩 Use Cases
**Customer Support Automation**: Provide instant, accurate responses.
**Lead Qualification**: Automatically answer potential customer inquiries.
**Internal Knowledge Distribution**: Keep staff updated via SMS based on website info.

🛠️ Customization
**Scraping URLs**: Adjust targets in the Edit Fields node.
**Model Swap**: Replace the OpenAI nodes to use a different LLM.
**Format Response**: Customize the output to match your tone or brand.
**Other Channels**: Expand to include chat apps or email responses.
**Vector Databases**: It is advisable to store the data in a third-party vector database service like Pinecone, Supabase, etc.
**Chat Memory Node**: This workflow uses Redis as chat memory, but you can use n8n's built-in chat memory.

✅ Summary
This n8n workflow combines Bright Data's scraping tools and GoHighLevel's SMS interface with AI query handling to deliver a real-time, conversational support experience. Ideal for businesses that want to turn their website into a live knowledge source via SMS, this agent keeps itself updated, smart, and customer-ready.
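For cloud users replacing the community node with a plain HTTP Request, the call shape is roughly as follows. This is a hedged sketch of Bright Data's generic request API; verify the endpoint, zone name, and field names against your account's documentation:

```javascript
// Fetch a page through Bright Data Web Unlocker (assumed endpoint and fields).
const response = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.BRIGHTDATA_API_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',      // your Web Unlocker zone name
    url: 'https://example.com', // the page defined in the Edit Fields node
    format: 'raw',              // return raw HTML
  }),
});
const html = await response.text();
```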
by Cyril Nicko Gaspar
📌 HubSpot Lead Enrichment with Bright Data MCP
This template enables natural-language-driven automation using Bright Data's MCP tools, triggered directly by new leads in HubSpot. It dynamically extracts and executes the right tool based on lead context, powered by AI and configurable in n8n.

❓ What Problem Does This Solve?
Manual lead enrichment is slow, inconsistent, and drains valuable time. This solution automates the process using a no-code workflow that connects HubSpot, Bright Data MCP, and an AI agent, without requiring scripts or technical skills. Perfect for marketing, sales, and RevOps teams.

🧰 Prerequisites
To use this template, you'll need:
A self-hosted or cloud instance of n8n
A Bright Data MCP API token
A valid OpenAI API key (or compatible AI model)
A HubSpot account
Either a Private App token or OAuth credentials for HubSpot
Basic familiarity with n8n workflows

⚙️ Setup Instructions
1. Set Up Authentication in HubSpot
🔐 Option 1: Use a Private App Token (Simple Setup)
Log in to your HubSpot account.
Navigate to Settings → Integrations → Private Apps.
Create a new Private App with the following scopes:
crm.objects.contacts.read
crm.objects.contacts.write
crm.schemas.contacts.read
crm.objects.companies.read (optional)
Copy the Access Token.
In n8n, create a credential for HubSpot App Token and paste the token into the field.
Go back to the HubSpot Private App settings to set up a webhook: copy the URL from your workflow's Webhook node and paste it there.

🔁 Option 2: Use OAuth (Advanced + Secure)
In HubSpot, go to Settings → Integrations → Apps → Create App.
Set your Redirect URL to match your n8n OAuth2 redirect path.
Choose scopes like:
crm.objects.companies.read
crm.objects.contacts.read
crm.objects.deals.read
crm.schemas.companies.read
crm.schemas.contacts.read
crm.schemas.deals.read
crm.objects.contacts.write (conditionally required)
Note the Client ID and Client Secret.
Copy the App ID and the developer API key.
In n8n, create a credential for HubSpot Developer API and paste in the info from the previous step.
Attach these credentials to the HubSpot node in n8n.

2. Set Up Bright Data and Obtain Your API Token
In your Bright Data account, obtain the following information:
API token
Web Unlocker zone name (optional)
Browser API username and password string, separated by a colon (optional)

3. Host an SSE Server from the STDIO Command
The methods below allow you to receive SSE (Server-Sent Events) from Bright Data MCP via a local Supergateway or Smithery.

**Method 1: Run Supergateway in a separate web service (Recommended)**
This method works for both the cloud version and self-hosted n8n. Sign up to any cloud service of your choice (DigitalOcean, Heroku, Hetzner, Render, etc.).

For NPM-based installation:
Create a new web service. Choose Node.js as the runtime environment and set up a custom server without a repository.
In your server's settings (environment variables or .env file), add:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_AUTH=optional_browser_auth
```

Paste the following as the start command:

```
npx -y supergateway --stdio "npx -y @brightdata/mcp" --port 8000 --baseUrl http://localhost:8000 --ssePath /sse --messagePath /message
```

Deploy it and copy the web server URL, then append /sse to it. Your SSE server should now be accessible at: https://your_server_url/sse

For Docker-based installation:
Create a new web service. Choose Docker as the runtime environment. Set up your Docker environment by pulling the necessary images or creating a custom Dockerfile.
In your server's settings (environment variables or .env file), add:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_ZONE=optional_browser_zone_name
```

Use the following Docker command to run Supergateway:

```
docker run -it --rm -p 8000:8000 supercorp/supergateway \
  --stdio "npx -y @brightdata/mcp" \
  --port 8000
```

Deploy it and copy the web server URL, then append /sse to it. Your SSE server should now be accessible at: https://your_server_url/sse
For more installation guides, please refer to https://github.com/supercorp-ai/supergateway.git.

**Method 2: Run Supergateway in the same web service as the n8n instance**
This method only works for self-hosted n8n.
a. Set Required Environment Variables
In your server's settings (environment variables or .env file), add:

```
API_TOKEN=your_brightdata_api_token
WEB_UNLOCKER_ZONE=optional_zone_name
BROWSER_ZONE=optional_browser_zone_name
```

b. Run Supergateway in the Background

```
npx -y supergateway --stdio "npx -y @brightdata/mcp" --port 8000 --baseUrl http://localhost:8000 --ssePath /sse --messagePath /message
```

Execute the command above through the cloud shell, or set it as a pre-deploy command. Your SSE server should now be accessible at: http://localhost:8000/sse
For more installation guides, please refer to https://github.com/supercorp-ai/supergateway.git.

**Method 3: Configure via Smithery.ai (Easiest)**
If you don't want any additional setup and want to test right away, follow these instructions:
Visit https://smithery.ai/server/@luminati-io/brightdata-mcp/tools to:
Sign up (if you are new to Smithery)
Create an API key
Define environment variables via a profile
Retrieve your SSE server HTTP URL

4. Connect Google Sheets to n8n
Ensure your Google Sheet:
Contains columns like row_id, first_name, last_name, email, and status.
Is shared with your n8n service account (or connected via OAuth).
In n8n:
Add a Google Sheets Trigger node.
Set it to watch for new rows in your lead sheet.

5. Import and Configure the n8n Workflow
Import the provided JSON workflow into n8n.
Update nodes with your credentials:
HubSpot: Add your API key or connect via OAuth.
Google Sheets Trigger: Link to your actual sheet.
OpenAI Node: Add your API key.
Bright Data Tool Execution: Add your Bright Data token and SSE URL.

🔄 How It Works
A new contact appears in HubSpot or a new row is added to the Google Sheet.
n8n triggers the workflow.
The AI agent classifies the task (e.g., "Find LinkedIn", "Get company info").
The relevant MCP tool is called.
Results are appended back to the sheet or routed to another destination.
Rerun a specific record by setting its status to "needs more enrichment" or leaving it blank.

🧩 Use Cases
**B2B Lead Enrichment**: Add missing fields (title, domain, social profiles)
**Email Intelligence**: Validate and enrich based on email
**Market Research**: Pull company or contact data on demand
**CRM Auto-fill**: Push enriched leads to tools like HubSpot or Salesforce

🛠️ Customization
**Prompt Tuning**: Adjust how the AI interprets input data
**Column Mapping**: Customize which fields to pull from the sheet
**Tool Logic**: Add retries, fallback tools, or confidence-based routing
**Destination Output**: Integrate with CRMs, Slack, or webhook endpoints

✅ Summary
This template turns a Google Sheet into an AI-powered lead enrichment engine. By combining Bright Data's tools with a natural-language AI agent, your team can automate repetitive tasks and scale lead ops without writing code. Just add a row, and let the workflow do the rest.
by Jimleuk
This n8n workflow builds an appointment scheduling AI agent which can:
Take enquiries from prospective customers and help them book an appointment by checking appointment availability.
Where no appointment is booked, send follow-up messages to re-engage leads.
After an appointment is booked, reschedule or even cancel the booking for the user without human intervention.
For small outfits, this workflow could contribute the necessary "man-power" required to increase business sales. The sample Airtable can be found here: https://airtable.com/appO2nHiT9XPuGrjN/shroSFT2yjf87XAox
2024-10-22: Updated to Cal.com API v2.

How it works
The customer sends an enquiry via SMS to trigger our workflow. For this trigger, we'll use a Twilio webhook. The prospective or existing customer's number is logged in an Airtable base which we'll be using to track all our enquiries. Next, the message is sent to our AI agent, who can reply to the user and decide if an appointment booking can be made. The reply is made via SMS using Twilio. A scheduled trigger, which runs every day, checks our chat logs for a list of prospective customers who have yet to book an appointment but still show interest. This list is sent to our AI agent to formulate a personalised follow-up message for each lead, asking if they want to continue with the booking. The follow-up interaction is logged so as not to send too many messages to the customer.

Requirements
A Twilio account to receive customer messages.
An Airtable account and base to use as our datastore for enquiries.
A Cal.com account to use as our scheduling service.
An OpenAI account for our AI model.

Customising this workflow
Not using Airtable? Swap it out for your CRM of choice, such as HubSpot, or your own service.
Not using Cal.com? Swap it out for API-enabled services such as Acuity Scheduling or your own service.
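For orientation, the inbound Twilio SMS webhook that triggers the workflow delivers form-encoded fields along these lines (a representative subset, shown here as JSON; the values are placeholders):

```json
{
  "MessageSid": "SMxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "AccountSid": "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "From": "+15551234567",
  "To": "+15557654321",
  "Body": "Hi, can I book an appointment for Friday?"
}
```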
by Polina Medvedieva
This workflow automates the process of discovering and extracting APIs from various services, followed by generating custom schemas. It works in three distinct stages: research, extraction, and schema generation, with each stage tracking progress in a Google Sheet.
🙏 Jim Le deserves major kudos for helping to build this sophisticated three-stage workflow that cleverly automates API documentation processing using a smart combination of web scraping, vector search, and LLM technologies.

How it works
Stage 1 - Research:
Fetches pending services from a Google Sheet
Uses Google search to find API documentation
Employs Apify for web scraping to filter relevant pages
Stores webpage contents and metadata in Qdrant (vector database)
Updates progress status in the Google Sheet (pending, ok, or error)

Stage 2 - Extraction:
Processes services that completed research successfully
Queries the vector store to identify products and offerings
Runs further queries for relevant API documentation
Uses Gemini (LLM) to extract API operations
Records extracted operations in the Google Sheet
Updates progress status (pending, ok, or error)

Stage 3 - Generation:
Takes services with successful extraction
Retrieves all API operations from the database
Combines and groups operations into a custom schema
Uploads the final schema to Google Drive
Updates the final status in the sheet with the file location

Ideal for:
Development teams needing to catalog multiple APIs
API documentation initiatives
Creating standardized API schema collections
Automating API discovery and documentation

Accounts required:
Google account (for Sheets and Drive access)
Apify account (for web scraping)
Qdrant database
Gemini API access

Set up instructions:
Prepare your Google Sheets document with the services information. Here's an example of a Google Sheet you can copy; change or remove the values under the columns. Also, make sure to update the Google Sheets nodes with the correct Google Sheet ID.
Configure Google Sheets OAuth2 credentials and the required third-party services (Apify, Qdrant, and Gemini).
Ensure proper permissions for Google Drive access.
by Marcial Ambriz
Remixed from Solomon's "Backup your workflows to GitHub" work. Check out his templates.

How it works
This workflow backs up your workflows to GitHub. It uses the n8n API node to export all workflows. It then loops over the data and checks GitHub to see whether a file using that workflow's ID already exists (see the sketch below). Once checked, it will:
update the file on GitHub if it exists;
create a new file if it doesn't exist;
ignore it if it's unchanged.
In addition, it also checks whether any workflows have been deleted from n8n. If a workflow no longer exists in n8n, the corresponding file is removed from the repository to keep everything in sync.

Who is this for?
People wanting to back up their workflows outside the server for safety purposes, or to migrate to another server.
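The existence check maps onto GitHub's contents endpoint; a minimal sketch (owner, repo, file path layout, and token variable are placeholders):

```javascript
// Minimal sketch: does a backup file for this workflow already exist?
// GET /repos/{owner}/{repo}/contents/{path} returns 200 if present, 404 if not.
async function backupFileExists(workflowId) {
  const res = await fetch(
    `https://api.github.com/repos/OWNER/REPO/contents/workflows/${workflowId}.json`,
    {
      headers: {
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        Accept: 'application/vnd.github+json',
        'User-Agent': 'n8n-backup-workflow', // GitHub's API requires a User-Agent
      },
    }
  );
  return res.status === 200; // 404 means the file needs to be created
}
```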