by Mario
Purpose

This workflow snippet enables advanced error handling during retry attempts. There are cases where you want to check whether an item exists first, so you can decide on the follow-up actions. Some APIs do not offer an endpoint for that check (e.g. Todoist: completed tasks), which is why you would work with the error branch; however, that approach does not play well with the built-in retry functionality.

How it works

- Instead of the built-in retry function of a node, a custom loop is used to get more granular control between iterations (a minimal sketch follows at the end of this entry).
- If the main executed node fails, the error can be filtered for an expected error, which can trigger a separate action.
- Retries only happen if an unexpected error occurred.
- The workflow only stops once the defined number of retries is exceeded.

Setup

1. Copy the nodes into your existing workflow.
2. Replace the “Replace me” placeholder with the node you want to apply the retry logic to.
3. Follow the sticky notes for further instructions and optional settings.
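For orientation, a minimal Code-node sketch of the branching idea, assuming the main node is set to continue on error and a `retryCount` field is carried between iterations (the field name and the 404 check are illustrative assumptions, not part of the template):

```javascript
// Hypothetical branching logic for the custom retry loop (n8n Code node).
// Assumes the main node emits an `error` object when it fails and that
// `retryCount` was initialized to 0 before entering the loop.
const item = $input.item.json;
const maxRetries = 3;

// An expected error (e.g. a 404 for an already-completed Todoist task)
// routes to its own handler instead of triggering a retry.
if (item.error?.httpCode === '404') {
  return { json: { ...item, action: 'handleExpected' } };
}

// Unexpected error: retry until the configured limit is reached.
if ((item.retryCount ?? 0) < maxRetries) {
  return { json: { ...item, retryCount: (item.retryCount ?? 0) + 1, action: 'retry' } };
}

// Limit exceeded: stop the workflow.
throw new Error(`Unexpected error after ${maxRetries} retries: ${item.error?.message}`);
```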
by Tiartyos
Voice Cloning Workflow - Zyphra Zonos API

Who is this for?

This workflow is designed for developers, content creators, and businesses looking to automate high-quality voice synthesis using AI voice cloning technology.

What problem does this solve?

It automates the process of generating natural-sounding speech from text using a sample voice file, eliminating the need for manual voice recording and providing consistent voice output for applications like audiobooks, virtual assistants, or content localization.

What this workflow does

The workflow receives text and voice cloning parameters via webhook, reads a sample voice file from your storage, sends the data to Zyphra's Zonos API for voice synthesis, and saves the generated audio file to your specified output location.

Prerequisites

You'll need:
- An API key from Zyphra (obtain from https://playground.zyphra.com/settings/api-keys)
- Account registration at https://playground.zyphra.com
- A sample voice file stored on accessible disk/cloud storage
- An n8n instance running with webhook capabilities

Setup

1. Configure your Zyphra API key in the "Call Zyphra Clone API" node under Header Parameters (Name: X-API-Key, Value: your-api-key)
2. Ensure your sample voice files are accessible at the paths you'll specify
3. Test that the webhook endpoint is accessible

Supported Audio Formats

The API supports multiple output formats through the mime_type parameter:
- **WebM** (default) - audio/webm
- **Ogg** - audio/ogg
- **WAV** - audio/wav
- **MP3** - audio/mp3 or audio/mpeg
- **MP4/AAC** - audio/mp4 or audio/aac

Usage Example

Endpoint: POST http://localhost:5678/webhook-test/voice-clone
Headers: Content-Type: application/json

Request Body (a client-script version appears at the end of this entry):

```json
{
  "text": "Hello there! This voice sounds just like the sample!",
  "speaking_rate": 18,
  "sample_voice_path": "/data/output/sampleVoice.wav",
  "output_path": "/data/output/",
  "language_iso_code": "en-us",
  "mime_type": "audio/wav",
  "model": "zonos-v0.1-transformer",
  "emotion": {
    "happiness": 0.8,
    "neutral": 0.3,
    "sadness": 0.05,
    "disgust": 0.05,
    "fear": 0.05,
    "surprise": 0.05,
    "anger": 0.05,
    "other": 0.5
  }
}
```

Parameters

Required:
- **text**: Text to synthesize into speech
- **sample_voice_path**: Path to your voice sample file
- **output_path**: Directory where the generated audio will be saved

Optional (with defaults):
- **speaking_rate**: 15 - speech speed
- **language_iso_code**: "en-us" - language code
- **mime_type**: "audio/wav" - output audio format
- **model**: "zonos-v0.1-transformer" - AI model to use
- **emotion**: Object with emotion levels (0-1 scale)
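For reference, the same webhook call as a small client script (a sketch assuming Node 18+ with built-in fetch; adjust the host and paths to your instance):

```javascript
// Call the voice-clone webhook with the minimal required parameters.
const response = await fetch('http://localhost:5678/webhook-test/voice-clone', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    text: 'Hello there! This voice sounds just like the sample!',
    sample_voice_path: '/data/output/sampleVoice.wav',
    output_path: '/data/output/',
    mime_type: 'audio/wav',
  }),
});
console.log(response.status, await response.text());
```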
by David Olusola
⚙️ How It Works: LocalRAG.AI

⚠️ Note: This system only works for self-hosted n8n instances. It will not function on n8n.cloud or other remote setups.

LocalRAG.AI is a private, on-prem AI assistant that uses your own documents to answer questions intelligently. It combines LangChain, Ollama, Qdrant, and Postgres into a powerful AI pipeline, all running locally for maximum data privacy.

🔄 What It Does

1. Monitors your Google Drive folders for new or updated files.
2. Downloads the file, extracts the text, and prepares it.
3. Generates embeddings using your local Ollama model (e.g., LLaMA 3).
4. Stores them in Qdrant, your local vector database.
5. During a chat, it (see the retrieval sketch at the end of this entry):
   - Uses vector search to retrieve relevant chunks.
   - Combines them with chat history stored in Postgres.
   - Responds via a LangChain AI agent using your local model.

🛠️ Setup Steps (Self-hosted Only)

1. Install and self-host n8n (e.g., via Docker).
2. Set up your Ollama instance locally and load your desired LLM (e.g., llama3).
3. Deploy Qdrant locally for vector storage.
4. Connect a Postgres DB to store chat history.
5. Create and import the workflow in n8n.
6. Authenticate Google Drive to monitor folders.
7. Connect credentials for Ollama, Qdrant, and Postgres in the n8n workflow.
8. Start chatting through the Webhook Trigger or a custom UI.

🧠 Perfect For:

- Research teams handling confidential data
- Internal documentation Q&A
- AI chatbots that don’t rely on OpenAI or the cloud
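For orientation, a sketch of the retrieval step the chat branch performs, assuming Ollama and Qdrant on their default local ports and a hypothetical `docs` collection (inside the workflow, the LangChain nodes do this for you):

```javascript
// 1. Embed the user question with the local Ollama model.
const embedRes = await fetch('http://localhost:11434/api/embeddings', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'llama3', prompt: 'What does our refund policy say?' }),
});
const { embedding } = await embedRes.json();

// 2. Vector-search Qdrant for the most relevant document chunks.
const searchRes = await fetch('http://localhost:6333/collections/docs/points/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ vector: embedding, limit: 4, with_payload: true }),
});
const { result } = await searchRes.json();
console.log(result.map((hit) => hit.payload)); // chunks handed to the agent
```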
by David Ashby
Complete MCP server exposing 2 Wayback API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Wayback API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Wayback API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.archive.org
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Wayback (2 endpoints)
• GET /wayback/v1/available: Check whether an archived snapshot of a URL is available (an example call follows at the end of this entry)
• POST /wayback/v1/available: The same availability check via POST

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Wayback API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
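For example, the underlying availability call looks roughly like this (a sketch; in the workflow the HTTP Request node fills these parameters via $fromAI()):

```javascript
// Ask the Wayback API whether a snapshot of example.com exists near a date.
const res = await fetch(
  'https://api.archive.org/wayback/v1/available?url=example.com&timestamp=20200101',
);
console.log(await res.json()); // expected shape: { url, archived_snapshots: { closest: ... } }
```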
by Ranjan Dailata
Who is this for?

This workflow is designed for professionals and teams who need real-time, structured insights from Perplexity Search results without manual effort.

What problem is this workflow solving?

This n8n workflow automates Perplexity Search result extraction, cleanup, summarization, and AI-enhanced formatting for downstream use, such as sending the results to a webhook or another system.

What this workflow does

1. Automates Perplexity Search via Bright Data
   - Uses Bright Data’s proxy-based SERP API to run a Google Search query programmatically (a hedged request sketch follows at the end of this entry).
   - Makes the process repeatable and scriptable with different search terms and regions/zones.
2. Cleans and extracts useful content
   - The Readable Data Extractor uses LLM-based cleaning to remove HTML/CSS/JS from the response and extract pure text data.
   - Converts messy, unstructured web content into a structured, machine-readable format.
3. Summarizes search results
   - Through the Gemini Flash + Summarization Chain, it generates a concise summary of the search results.
   - Ideal for users who don’t have time to read full pages of search results.
4. Formats data using the AI Agent
   - The AI Agent acts like a virtual assistant that understands the search results, formats them in a readable, JSON-compatible form, and prepares them for webhook delivery.
5. Delivers results to a webhook
   - Sends the final summary plus the structured search result to a webhook (your app, a Slack bot, Google Sheets, or a CRM).

Setup

1. Sign up at Bright Data.
2. Navigate to Proxies & Scraping and create a new Web Unlocker zone by selecting Web Unlocker API under Scraping Solutions.
3. In n8n, configure the Header Auth account under Credentials (Generic Auth Type: Header Authentication). Set the Value field to Bearer XXXXXXXXXXXXXX, replacing XXXXXXXXXXXXXX with your Web Unlocker token.
4. In n8n, configure the Google Gemini (PaLM) API account with the Google Gemini API key (or access through Vertex AI or a proxy).
5. Update the Perplexity Search Request node with the prompt you wish to search for.
6. Update the Webhook HTTP Request node with the webhook endpoint of your choice.

How to customize this workflow to your needs

1. Change the Perplexity Search input
   - Default: it searches a fixed query or dataset.
   - Customize: accept input from a Google Sheet, Airtable, or a form; auto-trigger searches based on keywords or schedules.
2. Customize the summarization style (LLM output)
   - Default: general summary using Google Gemini or OpenAI.
   - Customize: add a tone (formal, casual, technical, executive summary, etc.); focus on specific sections (pricing, competitors, FAQs); translate the summaries into multiple languages; add bullet points, pros/cons, or insight tags.
3. Choose where the results go
   - Options: email, Slack, Notion, Airtable, Google Docs, or a dashboard.
   - Auto-create content drafts for WordPress or newsletters.
   - Feed into CRM notes or attach to Salesforce leads.
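For orientation, a hedged sketch of the kind of request the search node sends through Bright Data's Web Unlocker (the endpoint, zone name, and token below are placeholders based on Bright Data's public request API, not copied from the template):

```javascript
// Run a search query through a Web Unlocker zone and return the raw HTML.
const res = await fetch('https://api.brightdata.com/request', {
  method: 'POST',
  headers: {
    Authorization: 'Bearer <WEB_UNLOCKER_TOKEN>',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    zone: 'web_unlocker1',
    url: 'https://www.google.com/search?q=perplexity+ai+news',
    format: 'raw',
  }),
});
console.log(await res.text()); // raw SERP HTML for the extractor to clean
```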
by max e
Turn plain-language chat like “Tomorrow 9 AM: write blog post” into neatly organised Todoist tasks with GPT-4o and n8n—zero code.

🪄 Ultimate Personal Todoist Agent

Turn natural-language requests into perfectly-organized Todoist tasks—all on autopilot inside n8n.

> “Add Finish quarterly report by Friday afternoon” → the agent creates the task, sets the due date & priority, and even drops it into the right project. ✨

🌟 Why this workflow rocks

- **All-in-one Todoist super‑powers** – create, update, complete, move, archive… every major Todoist endpoint is wired up (tasks, projects, sections, labels, comments).
- **LLM‑powered intent detection** – an OpenAI model interprets plain-English (or emoji‑filled!) messages so you don’t have to remember slash‑commands.
- **Minimal setup** – just two credentials and you’re live.
- **Battle‑tested building block** – use it as‑is, or plug the Todoist Agent node into your own agents & chatbots.

🛠️ What you’ll need

| Credential | Where it’s used | How to set it up |
| --- | --- | --- |
| OpenAI API | Orchestrator & LLM nodes | Paste your OpenAI secret key into an OpenAI credential in n8n. |
| Todoist OAuth2 | Todoist node and HTTP Request node | Log in to Todoist from your browser to set up the credential in n8n. |

> That’s it—no webhooks, no extra secrets.
> Tested with *gpt‑4o‑latest* – the fastest & most accurate model in our trials.

⚡ Quick‑start (5 minutes)

1. Import the JSON template (hit ▶️ Try it out on the n8n template page or drag‑drop the file into your canvas).
2. Select your credentials in the two credential dropdowns.
3. Click Test workflow.
4. In the sample Function node, tweak the message field (e.g. “Tomorrow at 9 am: write blog post”); a sketch of that node follows at the end of this entry.
5. Run → watch your new Todoist task appear.
6. (Optional) Swap the Function node for your favourite chat trigger (Telegram, Slack, WhatsApp, Discord, you name it).

Boom—your personal Todoist genie is alive! 🧞♂️

🧩 How it works (under the hood)

```
[Trigger / Chat message]
        │
        ▼
[🗂️ Orchestrator Agent] ← OpenAI Chat Model + Short‑term Memory
        │ ↳ Parses intent & entities
        ▼
[🤖 Todoist Agent] ← 15+ Todoist endpoints
        │ ↳ Executes the right call (create, update, complete, etc.)
        ▼
[Done ✅]
```

The Orchestrator is an example. In production you can drop it and simply expose the Todoist Agent as a tool for any other agent workflow.

🎛️ Customising & extending

| Idea | How to do it |
| --- | --- |
| Notion / Sheets sync | After the Todoist Agent node, add a Notion or Google Sheets node to log completed items. |
| Voice commands | Swap the chat trigger for a Speech‑to‑Text node (e.g. Whisper). |

🤝 Need custom automations?

Want me to build or tweak something for you? → Email maxemelyanenko@gmail.com and let’s make it happen!

⚠️ What’s not included (yet)

- Shared projects & other Todoist Pro/Business endpoints.
- File attachments in the comments.
- Editing comments.

Pull requests welcome! 🙌
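The sample Function node from step 4 of the quick-start boils down to something like this (illustrative; any node that outputs a `message` field works):

```javascript
// Feed a natural-language request into the Orchestrator Agent.
return [{ json: { message: 'Tomorrow at 9 am: write blog post' } }];
```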
by David Ashby
Complete MCP server exposing 2 CarbonDoomsDay API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add CarbonDoomsDay credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the CarbonDoomsDay API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.carbondoomsday.com/api
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (2 total)

🔧 Co2 (2 endpoints)
• GET /co2/: Get CO2 measurements by date (an example call follows at the end of this entry)
• GET /co2/{date}/: CO2 measurements from the Mauna Loa observatory. This data is made available through the good work of the people at the Mauna Loa observatory. Their release notes say: "These data are made freely available to the public and the scientific community in the belief that their wide dissemination will lead to greater understanding and new scientific insights." We currently scrape the following sources: [co2_mlo_weekly.csv], [co2_mlo_surface-insitu_1_ccgg_DailyData.txt], and [weekly_mlo.csv]. We have daily CO2 measurements as far back as 1958. Learn about using pagination via [the 3rd party documentation].

[co2_mlo_weekly.csv]: https://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_mlo_weekly.csv
[co2_mlo_surface-insitu_1_ccgg_DailyData.txt]: ftp://aftp.cmdl.noaa.gov/data/trace_gases/co2/in-situ/surface/mlo/co2_mlo_surface-insitu_1_ccgg_DailyData.txt
[weekly_mlo.csv]: http://scrippsco2.ucsd.edu/sites/default/files/data/in_situ_co2/weekly_mlo.csv
[the 3rd party documentation]: http://www.django-rest-framework.org/api-guide/pagination/#pagenumberpagination

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native CarbonDoomsDay API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
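For example, a direct call to the CO2 endpoint looks like this (a sketch; the ISO date format in the /co2/{date}/ path is an assumption based on the daily data described above):

```javascript
// Fetch the CO2 measurement for a specific day from the Mauna Loa data.
const res = await fetch('https://api.carbondoomsday.com/api/co2/2019-06-07/');
console.log(await res.json());
```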
by Ferenc Erb
Use Case

Extend Bitrix24 tasks with custom widgets that display relevant task information and enable seamless interaction through a custom tab interface.

What This Workflow Does

- Processes incoming webhook requests from Bitrix24 task interfaces
- Handles authentication and secure token validation (a hedged sketch follows at the end of this entry)
- Manages application installation and placement registration
- Displays task data in a custom formatted view
- Stores and retrieves configuration settings persistently
- Provides user-friendly HTML interfaces for task information

Setup Instructions

1. Configure Bitrix24 webhook endpoints for the task widget
2. Set up authentication credentials in your Bitrix24 account
3. Install the application and register the task view tab placement
4. Customize the task data display format as needed
5. Deploy and test the application functionality within Bitrix24 tasks
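As a heavily hedged illustration of the token-validation step, a Code-node check might look like this (the field names follow common Bitrix24 event payloads and are assumptions here, not the template's confirmed contract):

```javascript
// Reject requests whose application token doesn't match the installed app.
const body = $input.item.json.body ?? {};
if (body.auth?.application_token !== $env.BITRIX_APP_TOKEN) {
  throw new Error('Invalid Bitrix24 application token');
}
// Pass through the identifiers the later nodes need.
return { json: { memberId: body.auth.member_id, placement: body.PLACEMENT } };
```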
by David Ashby
Complete MCP server exposing 3 Background Removal API operations to AI agents.

⚡ Quick Setup

Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.

1. Import this workflow into your n8n instance
2. Add Background Removal API credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works

This workflow converts the Background Removal API into an MCP-compatible interface for AI agents.

• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to https://api.remove.bg/v1.0
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (3 total)

🔧 Account (1 endpoint)
• GET /account: Fetch Account Balance

🔧 Improve (1 endpoint)
• POST /improve: Submit Image for Improvement

🔧 Removebg (1 endpoint)
• POST /removebg: Remove Image Background (an example call follows at the end of this entry)

🤖 AI Integration

Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Background Removal API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples

Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to its configuration
• Cursor: Add the MCP server SSE URL to its configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits

• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
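For example, the removebg operation maps to a call like this one (a sketch of the public remove.bg API; in the workflow the HTTP Request node supplies these values via $fromAI()):

```javascript
// Remove the background from an image referenced by URL.
const res = await fetch('https://api.remove.bg/v1.0/removebg', {
  method: 'POST',
  headers: { 'X-Api-Key': '<YOUR_API_KEY>', 'Content-Type': 'application/json' },
  body: JSON.stringify({ image_url: 'https://example.com/photo.jpg', size: 'auto' }),
});
const png = Buffer.from(await res.arrayBuffer()); // resulting cut-out image
```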
by Abhishek Patoliya
This workflow allows you to scrape website content, clean the HTML, extract structured information using GPT-4o-mini, and store the results along with SEO keywords in Airtable. Ideal for building keyword lists and organizing web content for SEO research.

Setup Instructions

1. Prerequisites

- n8n Community or Cloud instance
- Airtable account with a base and table ready
- OpenAI API key with access to GPT-4o-mini

2. Airtable Structure

Ensure your Airtable table has the following fields:

| Field Name | Type | Notes |
| --- | --- | --- |
| Website Name | String | Name or URL of the website |
| Data | String | Cleaned website text |
| Keyword | String | Extracted SEO keyword list |
| Status | Options | Values: Todo, In progress, Done |

3. Node Setup

✅ Form Trigger: Collects the website URL from the user.
✅ HTTP Request: Fetches the website content.
✅ HTML Cleaner (Code Node): Strips out styles, tags, and whitespace to get clean text (a minimal sketch appears at the end of this entry).
✅ Topic Extractor (AI Agent + GPT-4o-mini): Extracts topic-wise information from the cleaned website content.
✅ Text Cleaner (Code Node): Removes unwanted symbols like ### and **.
✅ Keyword Extractor (AI Agent + GPT-4o-mini): Generates a list of 90 important SEO keywords.
✅ Airtable Upsert: Stores the cleaned data, keywords, and status in Airtable.

4. Key Features

✅ Automatic website content scraping
✅ Clean HTML and extract plain text
✅ Use GPT-4o-mini for topic-wise information extraction
✅ Generate 90-keyword SEO lists
✅ Store and manage data in Airtable

5. Use Cases

- SEO keyword research
- Competitor website content analysis
- Structured website data collection

Additional Workflow Recommendations

✅ Rename nodes for clarity:

| Current Name | Suggested Name |
| --- | --- |
| Website Name | Website URL Input Form |
| HTTP Request | Fetch Website Content |
| Code | HTML to Plain Text Cleaner |
| Split Out1 | Clean Text Splitter |
| AI Agent1 | Topic Extractor (GPT-4o-mini) |
| Code1 | Text Cleanup Formatter |
| Split Out2 | Final Text Splitter |
| AI Agent | Keyword Extractor (GPT-4o-mini) |
| Airtable | Airtable Data Upsert |
| Wait1 | Delay Before Merge |
| Merge | Combine Data for Airtable |
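A minimal sketch of what the HTML-cleaner Code node can do (the template's actual implementation may differ; the regex approach below is a common shortcut, not a full HTML parser):

```javascript
// Strip scripts, styles, and tags, then collapse whitespace to plain text.
const html = $input.item.json.data ?? '';
const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, '')
  .replace(/<style[\s\S]*?<\/style>/gi, '')
  .replace(/<[^>]+>/g, ' ')
  .replace(/\s+/g, ' ')
  .trim();
return { json: { websiteText: text } };
```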
by Tom Cao
🔐 Advanced SSL Health Monitor

👤 Who is this for?

This workflow is designed for DevOps engineers, IT administrators, and security professionals who need comprehensive SSL certificate monitoring and health assessment across multiple domains, featuring dual verification and professional reporting without relying on expensive monitoring services.

🧩 What It Does

1. Daily Trigger runs the workflow every morning for proactive monitoring.
2. URL Collection fetches the list of website URLs to monitor from your data source.
3. Dual SSL Analysis:
   - Free SSL assessment script, available from the sysadmin-toolkit repository on GitHub
   - SSL-Checker.io API for external cross-validation
4. Comprehensive Health Check:
   - Certificate expiration monitoring (customizable threshold; a minimal expiry-check sketch follows at the end of this entry)
   - SSL configuration security assessment
   - Protocol support analysis (TLS 1.3, 1.2, deprecated protocols)
   - Cipher suite strength evaluation
   - Vulnerability scanning (POODLE, BEAST, etc.)
   - Compliance checking (PCI DSS, NIST, FIPS)
5. Smart Alert System sends Discord notifications when:
   - Certificates expire within the threshold (default: 30 days)
   - SSL configuration issues are detected (weak ciphers, deprecated protocols)
   - Security vulnerabilities are found
   - Compliance standards are not met
   - The grade drops below an acceptable level (configurable)

🎯 Key Features

- 🔄 **Dual Verification**: Cross-checks results between the internal scanner and an external API
- 📊 **SSL Labs-Style Grading**: A+ to F rating system with detailed analysis
- 🛡️ **Security Assessment**: Vulnerability detection and compliance checking
- 📱 **Discord Integration**: Rich embed notifications with color-coded alerts

⚙️ Setup Instructions

1. Data Source: Configure your URL source from Notion and ensure it contains a URL column with the domains to monitor.
2. Credentials: Set up a Discord webhook for alert notifications and configure any required API credentials for data sources.
3. Customize Thresholds:
   - Expiration alert: days before expiry (default: 30 days)
   - Grade threshold: minimum acceptable SSL grade (default: B)
   - Alert severity: choose which issues trigger notifications
4. Advanced Configuration:
   - Modify vulnerability checks based on your security requirements
   - Adjust compliance standards for your industry needs
   - Customize Discord message formatting and alert channels

🧠 Technical Notes

- **Dual-Check Reliability**: Combines the custom Bubobot scanner with ssl-checker.io for maximum accuracy
- **No Vendor Lock-in**: Uses free public APIs and open-source tools
- **Professional Reporting**: Generates SSL Labs-quality assessments
- **Security-First Approach**: Comprehensive vulnerability and compliance checking
- **Flexible Alerting**: Discord integration with rich formatting and conditional logic

This workflow provides a comprehensive SSL security monitoring solution that rivals enterprise-grade tools while remaining completely open-source and free.
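For orientation, the expiry side of the health check reduces to something like this in plain Node (a sketch; the template's own script comes from the sysadmin-toolkit repository and covers far more than expiry):

```javascript
const tls = require('tls');

// Resolve the number of days until a host's certificate expires.
function daysUntilExpiry(host) {
  return new Promise((resolve, reject) => {
    const socket = tls.connect(443, host, { servername: host }, () => {
      const cert = socket.getPeerCertificate();
      socket.end();
      resolve(Math.floor((new Date(cert.valid_to) - Date.now()) / 86_400_000));
    });
    socket.on('error', reject);
  });
}

daysUntilExpiry('example.com').then((days) => {
  if (days < 30) console.log(`Alert: certificate expires in ${days} days`);
});
```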
by DataAnts
Dynamically Run SuiteQL Queries in NetSuite via HTTP Webhook in n8n

> Important: This template uses a NetSuite community node, so it only works on self-hosted n8n. Cloud-based n8n instances currently do not support community nodes.

Summary

This workflow template allows you to dynamically run SuiteQL queries in NetSuite by sending an HTTP request to an n8n Webhook node. Once triggered, the workflow uses token-based authentication to execute your SuiteQL query and returns the results as JSON. This makes it easy to integrate real-time NetSuite data into dashboards, reporting tools, or other applications.

Who Is This For?

- **Developers & Integrators**: Easily embed NetSuite data retrieval into custom apps or internal tools.
- **Enterprises & Consultants**: Integrate dynamic reporting or data enrichment from NetSuite without manual exports.
- **System Administrators**: Automate routine queries and reduce manual intervention.

Use Cases & Benefits

1. Dynamic Data Access: Send any SuiteQL query on demand instead of hardcoding queries or manually running reports.
2. Seamless Integration: Quickly pull NetSuite data into front-end systems (like Excel dashboards, custom web apps, or internal tools) by calling the webhook endpoint.
3. Simplified Reporting: Automate data extraction and formatting, reducing the need for manual exports and improving efficiency.

How It Works

1. Trigger: An HTTP request to the webhook node initiates the workflow.
2. Input Processing: The workflow reads the SuiteQL query from the incoming request parameter (suiteql).
3. Query Execution: The NetSuite node uses your token-based authentication credentials to run the SuiteQL query.
4. Response: Results are returned as JSON in the HTTP response, ready for further processing or immediate consumption.

Prerequisites & Setup

1. NetSuite community node: This workflow requires the [NetSuite community node](https://www.npmjs.com/package/n8n-nodes-netsuite). Make sure your self-hosted n8n instance supports community nodes.
2. NetSuite token-based authentication: Enable TBA in NetSuite and obtain the required consumer key, consumer secret, token ID, and token secret.
3. n8n Webhook: Copy the auto-generated webhook URL (e.g. http://<your-n8n-domain>/webhook/unique-id) from the Webhook node.

Usage

Send an HTTP GET or POST request to the webhook with your SuiteQL query. For example:

```
curl "http://<your-n8n-domain>/webhook/unique-id?suiteql=SELECT%20*%20FROM%20account%20LIMIT%2010"
```

The workflow will execute the query and return JSON data. A JavaScript version of this call appears at the end of this entry.

Customization

- **Change the Query**: Simply adjust the suiteql parameter in your HTTP request to run different SuiteQL statements.
- **Data Transformation**: Insert nodes (e.g., Function, Set, or Format) to modify or reformat the data before returning it.
- **Extend Integration**: Chain additional nodes to push the retrieved data to other services (Google Sheets, Slack, custom dashboards, etc.).

If you have questions, suggestions, or need support, contact us at support@dataants.org.
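The same call from JavaScript, with encodeURIComponent handling the escaping that the curl example does by hand (a sketch; keep the placeholder domain and webhook path from your own instance):

```javascript
// Run a SuiteQL query through the webhook and print the JSON result.
const query = 'SELECT * FROM account LIMIT 10';
const res = await fetch(
  `http://<your-n8n-domain>/webhook/unique-id?suiteql=${encodeURIComponent(query)}`,
);
console.log(await res.json());
```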