by L Hùng
This workflow acts as an error handler, sending real-time notifications to Telegram when another workflow fails. It provides detailed error information, including the workflow name, timestamp, execution URL, last executed node, and error message.

Pre-Conditions
- A Telegram bot created via BotFather.
- The bot token and the Telegram group/channel chatId.
- An active n8n instance with the Telegram and Error Trigger nodes available.

Setup
Workflow Configuration:
- Import the workflow into n8n.
- Update the Telegram chatId in the Config node.
- Add your Telegram bot token in the Telegram node credentials.
Error Workflow Setup:
- Set this workflow as the Error Workflow in your other workflows.
Testing:
- Trigger an error in another workflow to verify the Telegram notification.

Who the Workflow is For
- **Developers:** monitoring workflow failures in real time.
- **Teams:** managing multiple n8n workflows and needing instant error alerts.
- **n8n Users:** looking for a simple way to handle workflow errors via Telegram.

Primary Use
- Automates error notifications for failed workflows.
- Sends detailed error reports to Telegram for quick troubleshooting.
- Easily customizable to fit specific monitoring needs.
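For reference, the Telegram message can be assembled in a Code node placed between the Error Trigger and the Telegram node. This is a minimal sketch assuming the standard Error Trigger payload (workflow.name, execution.url, execution.error.message, execution.lastNodeExecuted); adjust the field names if your instance's payload differs.

```javascript
// Code node (Run Once for Each Item) between the Error Trigger and the Telegram node.
// Assumes the standard Error Trigger payload; field names may differ on your instance.
const data = $input.item.json;

const text = [
  `🚨 Workflow failed: ${data.workflow?.name ?? 'unknown'}`,
  `🕒 Time: ${new Date().toISOString()}`,
  `🔗 Execution: ${data.execution?.url ?? 'n/a'}`,
  `🧩 Last node: ${data.execution?.lastNodeExecuted ?? 'n/a'}`,
  `❗ Error: ${data.execution?.error?.message ?? 'no message'}`,
].join('\n');

// The Telegram node can then reference {{ $json.text }} as the message body.
return { json: { text } };
```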
by Chris Carr
Avoid Asking Redundant Questions with Dynamically Generated Forms using OpenAI

Target Audience
This workflow has been built for those who require a form to capture as much data as possible, as well as the answers to predefined questions, whilst optimising the user experience by avoiding redundant questions.

Use Case
When creating a form to capture information, it can be useful to give the user an opportunity to input a long answer to a large, open-ended question. We then want to drill down to the specific questions we still need answered, without asking anything the user has already covered. This particular scenario imagines an AI consultancy capturing leads.

What it Does
This workflow asks users to input basic information and then answer an open-ended question. The specific questions on the next page are only those that weren't answered in the open-ended response.

How it Works
- The open-ended answer (and relevant basic information) is analysed by an LLM to determine which specific questions have not been answered. Chain-of-thought reasoning is utilised and the output structure is specified with the Structured Output Parser.
- Questions that have already been answered are filtered out in the subsequent nodes (see the sketch after this description).
- The remaining items are then used to generate the last page of the form.
- Once the user has filled in the final page of the form, they are shown a form completion page.

Setup
1. Add your OpenAI credentials
2. Go to the Get Basic Information node and click Test Step
3. Complete the form to test the generic use case
4. Modify the prompt in Analyse Response to fit your use case

Next Steps
- Add additional nodes to send an email to the form owner
- Add a subsequent LLM call to analyse the form response - those that are qualified should be given the opportunity to book an appointment
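As an illustration of the filtering step, a Code node along these lines can turn the LLM's structured output into dynamic form fields. The output shape (a `questions` array with an `answered` flag) and the field keys expected by the Form node are assumptions here; match them to your Structured Output Parser schema and your Form node configuration.

```javascript
// Code node sketch: keep only the predefined questions the LLM marked as unanswered.
// Assumed Structured Output Parser shape:
// { questions: [{ question: 'What is your budget?', answered: false }, ...] }
const analysis = $input.first().json;

return (analysis.questions ?? [])
  .filter((q) => q.answered === false)
  .map((q) => ({
    json: {
      // Assumed keys for dynamically defined form fields; align with your Form node.
      fieldLabel: q.question,
      fieldType: 'textarea',
      requiredField: true,
    },
  }));
```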
by Miko
The workflow performs tasks that would normally require human intervention on Google News links, transforming the RSS feed into data that can be used by an automated system like n8n and creating a solid foundation for further applications.

Who is this for?
This workflow is ideal for developers, journalists, and content aggregators who need to extract and clean Google News URLs from its RSS feed.

What problem does this workflow solve?
Google News RSS provides encoded URLs that contain additional tracking parameters. This workflow decodes those URLs and provides clean, direct links to news articles, making them easier to process, share, and analyze.

What this workflow does
1. Fetch Google News RSS – Retrieves news articles from Google News based on predefined parameters (language, country).
2. Limit results – Reduces the number of requests to avoid excessive API usage.
3. Extract encoded content – Retrieves the encoded news URLs.
4. Decode the URLs – Uses a decoding mechanism to extract clean links.
5. Remove unwanted characters – Cleans up the decoded URLs to ensure they are properly formatted (see the sketch after this description).
6. Aggregate results – Outputs a final list of clean, readable URLs.

Setup
- Modify the RSS parameters (hl, gl) to match your target region.
- Adjust the result limit to control the number of processed articles.

How to customize this workflow
To customize this workflow, you can add an HTTP Request node to retrieve the article's text, an HTML Extract node to process it, an AI node to generate new content, and a WordPress node to publish it. Another option is to use an AI Agent node to classify articles by category based on the title or the extracted HTML. You can then save the classified articles with a Google Sheets node, organizing them by category and building a high-quality editorial plan.

This workflow efficiently processes Google News RSS, removes unnecessary encoding, and delivers clean, shareable URLs. 🚀
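To illustrate the clean-up step only (not the workflow's full decoding mechanism), a Code node like the following strips common tracking parameters and stray whitespace from the extracted links. The `link` field name and the parameter list are assumptions for the sketch.

```javascript
// Code node sketch for the clean-up step; the actual URL decoding is handled
// earlier in the workflow by its own decoding mechanism.
const trackingParams = ['utm_source', 'utm_medium', 'utm_campaign', 'ved', 'gl', 'hl'];

return $input.all().map((item) => {
  // 'link' is an assumed field name holding the decoded article URL.
  const url = new URL(item.json.link.trim());
  trackingParams.forEach((p) => url.searchParams.delete(p));
  return { json: { link: url.toString() } };
});
```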
by Solido AI
How it works
The organizer continuously monitors your Gmail inbox. It analyzes the sender and subject to categorize emails (Work, Purchases, Newsletter) and automatically applies labels. Based on the category, it performs specific actions, such as marking important emails or archiving newsletters.

Set up steps
The initial setup requires Gmail access permission and defining the categorization rules and desired actions for each email type. This process can be configured in approximately 5 to 10 minutes, depending on the complexity of the rules you wish to establish.
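A minimal sketch of what the categorization rules could look like in a Code node, assuming simplified `from` and `subject` fields on the incoming Gmail item; the patterns and the example domain are placeholders to replace with your own rules.

```javascript
// Code node sketch of the categorization rules; sender/subject field names and
// patterns are illustrative placeholders.
const { from = '', subject = '' } = $input.item.json;

let category = 'Other';
if (/invoice|order|receipt/i.test(subject)) category = 'Purchases';
else if (/newsletter|unsubscribe/i.test(from + subject)) category = 'Newsletter';
else if (from.endsWith('@yourcompany.com')) category = 'Work'; // hypothetical domain

// Downstream Gmail nodes can branch on $json.category to apply the matching
// label, mark the email as important, or archive it.
return { json: { ...$input.item.json, category } };
```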
by David Ashby
Complete MCP server exposing 2 BIN Lookup API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows, and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add BIN Lookup API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the BIN Lookup API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.bintable.com/v1
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Balance (1 endpoint)
• GET /balance: Check Balance
🔧 {Bin} (1 endpoint)
• GET /{bin}: Lookup for bin

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native BIN Lookup API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
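As an example of the $fromAI() pattern, the URL of the GET /{bin} request can be built so the agent supplies the path parameter at run time. This is a sketch: the parameter name and description passed to $fromAI() are illustrative, and authentication is assumed to come from the node's credentials.

```javascript
// URL field of the "Lookup for bin" HTTP Request tool node (expression mode).
// The agent fills in the BIN at run time; name/description below are illustrative.
{{ 'https://api.bintable.com/v1/' + $fromAI('bin', 'First 6-8 digits of the card number', 'string') }}
```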
by David Ashby
Complete MCP server exposing 2 Analytics API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows, and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Analytics API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Analytics API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 Rate_Limit (1 endpoint)
• GET /rate_limit/: Retrieve Application Rate Limits
🔧 User_Rate_Limit (1 endpoint)
• GET /user_rate_limit/: Retrieve User Rate Limits

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Analytics API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by David Ashby
Complete MCP server exposing 3 Compliance API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows, and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add Compliance API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Compliance API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.ebay.com{basePath}
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (3 total)
🔧 Listing_Violation (1 endpoint)
• GET /listing_violation: Get Violation Summary Counts
🔧 Listing_Violation_Summary (1 endpoint)
• GET /listing_violation_summary: Returns listing violation counts for a seller
🔧 Suppress_Listing_Violation (1 endpoint)
• POST /suppress_listing_violation: Suppress Listing Violation

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native Compliance API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
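For the POST /suppress_listing_violation call, the request body can likewise be composed from $fromAI() placeholders. A minimal sketch follows; the body field names are an assumption and should be aligned with the Compliance API's actual request schema.

```javascript
// JSON body of the "Suppress Listing Violation" HTTP Request tool node
// (expression mode). Field names are illustrative; verify against the API schema.
{{ JSON.stringify({
  complianceType: $fromAI('complianceType', 'Type of violation to suppress', 'string'),
  listingId: $fromAI('listingId', 'ID of the listing with the violation', 'string'),
}) }}
```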
by David Ashby
Complete MCP server exposing 3 IPQualityScore API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows, and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add IPQualityScore API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the IPQualityScore API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://ipqualityscore.com/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (3 total)
🔧 Json (3 endpoints)
• GET /json/email/{YOUR_API_KEY_HERE}/{USER_EMAIL_HERE}: Email Validation
• GET /json/phone/{YOUR_API_KEY_HERE}/{USER_PHONE_HERE}: Phone Validation
• GET /json/url/{YOUR_API_KEY_HERE}/{URL_HERE}: Malicious URL Scanner

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native IPQualityScore API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
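Since this API takes both the key and the value as path segments, the Email Validation URL can be assembled like the sketch below. The API key placeholder is kept literal (fill it from your own credential setup), and the $fromAI() name/description are illustrative.

```javascript
// URL field of the "Email Validation" HTTP Request tool node (expression mode).
// Keep YOUR_API_KEY_HERE wired to your own credential/key; the email itself is
// supplied by the agent via $fromAI() (name/description illustrative).
{{ 'https://ipqualityscore.com/api/json/email/YOUR_API_KEY_HERE/' + $fromAI('email', 'Email address to validate', 'string') }}
```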
by David Ashby
Complete MCP server exposing 2 NPR Station Finder Service API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows, and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Credentials: Add NPR Station Finder Service credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the NPR Station Finder Service API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://station.api.npr.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)
🔧 V3 (2 endpoints)
• GET /v3/stations: Get Station 1
• GET /v3/stations/{stationId}: Retrieve metadata for the station with the given numeric ID

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication
Response Format: Native NPR Station Finder Service API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add MCP server URL to configuration
• Cursor: Add MCP server SSE URL to configuration
• Custom AI Apps: Use MCP URL as tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by simonscrapes
Use Case
Research search engine rankings for SEO analysis:
- You need to track keyword rankings for your website
- You want to analyze competitor positions in search results
- You need data for SEO competition analysis
- You want to monitor SERP changes over time

What this Workflow Does
The workflow uses the ScrapingRobot API to fetch Google search results:
- Retrieves SERP data for your target keywords
- Captures URL rankings and page titles
- Processes up to 5,000 searches with a free account
- Organizes results for SEO analysis

Setup
1. Create a ScrapingRobot account and get your API key
2. Add your ScrapingRobot API key to the token parameter of the GET SERP HTTP Request node
3. Either connect your keyword database (column name "Keyword") or use the "Set Keywords" node
4. Configure your preferred output database connection

How to Adjust it to Your Needs
- Modify the keyword source to pull from different databases
- Adjust the number of SERP results to capture
- Customize the output format for your reporting needs

More templates and n8n workflows >>> @simonscrapes
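To organize results for the output database, a Code node after the GET SERP request could flatten the response into one row per result. This is a sketch only: the response field names (organicResults, position, title, url) are assumptions to check against the actual ScrapingRobot payload.

```javascript
// Code node sketch: one output row per SERP result for the chosen keyword.
// Response field names are assumptions; adjust to the real ScrapingRobot payload.
const keyword = $('Set Keywords').item?.json?.Keyword ?? 'unknown';
const results = $input.first().json.organicResults ?? [];

return results.map((r, i) => ({
  json: {
    keyword,
    position: r.position ?? i + 1,
    title: r.title,
    url: r.url,
    checkedAt: new Date().toISOString(),
  },
}));
```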
by simonscrapes
Use Case
Transform web pages into AI-friendly markdown format:
- You need to process webpage content for LLM analysis
- You want to extract both content and links from web pages
- You need clean, formatted text without HTML markup
- You want to respect API rate limits while crawling pages

What this Workflow Does
The workflow uses the Firecrawl.dev API to process webpages:
- Converts HTML content to markdown format
- Extracts all links from each webpage
- Handles API rate limiting automatically
- Processes URLs in batches from your database

Setup
1. Create a Firecrawl.dev account and get your API key
2. Add your Firecrawl API key to the HTTP Request node's Authorization header
3. Connect your URL database to the input node (column name must be "Page"), or edit the array in the "Example fields from data source" node
4. Configure your preferred output database connection

How to Adjust it to Your Needs
- Modify the input source to pull URLs from different databases
- Adjust the rate limiting parameters if needed
- Customize the output format for your specific use case

More templates and n8n workflows >>> @simonscrapes
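As a sketch of the post-processing step, a Code node can reshape each Firecrawl response into a row for the output database. The response field names (data.markdown, data.links) and the "Example fields from data source" node reference are assumptions; verify them against your Firecrawl HTTP Request node's actual output.

```javascript
// Code node sketch: reshape the Firecrawl response for the output database.
// Field names and the referenced node name are assumptions for this sketch.
return $input.all().map((item) => {
  const data = item.json.data ?? item.json;
  return {
    json: {
      page: $('Example fields from data source').item?.json?.Page,
      markdown: data.markdown ?? '',
      links: (data.links ?? []).join('\n'),
    },
  };
});
```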
by Mario
This is an add-on for the template "Check if workflows contain built-in nodes that are not of the latest version".

Purpose
This workflow highlights outdated nodes within all workflows of a single n8n instance and places an updated, preconfigured node right next to each of them, so it can be swapped easily.

How it works
- The parent workflow checks the entire n8n instance for outdated nodes within all workflows and passes a list of those, alongside some metadata, to this workflow.
- This workflow then processes that data and updates the affected workflows:
  - Outdated nodes are renamed by prepending an emoji (default: ⚠️) - this marker is also used in future checks to prevent double-processing.
  - The latest version of each outdated node is added to the workflow canvas (not wired up) behind the old one, slightly shifted in position (see the sketch after this description).
- An email is sent with a list of modified workflows.

In the settings it is possible to define:
- which symbol/emoji should be prepended to outdated nodes
- whether to include only major node updates or all of them
- whether to add the new nodes to the canvas or not

Setup
1. Clone this template to your n8n instance.
2. Update the Settings node by setting at least the base URL of your n8n instance.
3. Set a recipient in the Gmail node.
4. Clone the parent template to your n8n instance and configure it as described in its description.
5. Add an "Execute Workflow" node to the end of the parent workflow and configure it so that it calls this workflow.

How to use
Execute the parent workflow and check your email inbox. All linked workflows should now contain the newly added nodes, with the emoji prepended to the names of the outdated ones.

Disclaimer
Beware that major updates can cause node migrations to fail, since their structure can differ. Always compare the old nodes with the newly created ones to confirm that all parameters still meet the requirements. Be careful when executing this workflow on a production environment, since it directly modifies your workflows. It is advisable to run it on your testing environment and migrate successfully tested workflows to your production environment using git or manually.
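A minimal sketch of the core update step, operating on a workflow's JSON: the outdated node is renamed with the marker emoji and a fresh copy with the latest typeVersion is added next to it. The input shape (workflow, outdatedNodes) and the new node's name are illustrative; the real workflow derives this data from the parent template and the n8n API, and also handles connections.

```javascript
// Code node sketch of the update step; input shapes are illustrative.
const marker = '⚠️';
const workflow = $input.item.json.workflow;        // full workflow JSON, e.g. from GET /workflows/{id}
const outdated = $input.item.json.outdatedNodes;   // e.g. [{ name: 'HTTP Request', latestVersion: 4.2 }]

for (const info of outdated) {
  const node = workflow.nodes.find((n) => n.name === info.name);
  if (!node || node.name.startsWith(marker)) continue; // skip already-processed nodes

  const replacement = {
    ...node,
    name: `${node.name} (latest)`,                  // illustrative name for the new copy
    typeVersion: info.latestVersion,
    position: [node.position[0] + 40, node.position[1] + 40], // slight offset on the canvas
  };
  node.name = `${marker} ${node.name}`;             // mark the old node as outdated
  workflow.nodes.push(replacement);                 // added to the canvas, not wired up
  // Note: updating workflow.connections after the rename is omitted in this sketch.
}

return { json: { workflow } };
```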