by David Ashby
Complete MCP server exposing all Mailcheck Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Mailcheck Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Mailcheck Tool node with full error handling

📋 Available Operations (1 total)
Every possible Mailcheck Tool operation is included:

🔧 Email (1 operation)
• Check an email

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Mailcheck Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration (see the example config below)
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Mailcheck Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
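To connect from Claude Desktop, point an MCP client entry at this workflow's production webhook URL. A minimal sketch of a `claude_desktop_config.json` entry, assuming the `mcp-remote` bridge package and a placeholder URL (both the bridge choice and the URL are assumptions, not part of this template):

```json
{
  "mcpServers": {
    "mailcheck-n8n": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://your-n8n-instance.example.com/mcp/<your-webhook-path>"
      ]
    }
  }
}
```

Restart Claude Desktop after editing the config so the new server is picked up.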
by Juan Carlos Cavero Gracia
Description
This n8n automation template provides an end-to-end solution for generating a series of themed images for Instagram and TikTok carousels using OpenAI's GPT Image (via the image generation API) and automatically publishing them to both platforms. It uses a sequence of prompts to create a narrative or themed carousel, generating each image based on the previous one, and then posts them with an AI-generated caption.

Who Is This For?
- **Social Media Managers:** Quickly create and schedule engaging image carousels for Instagram and TikTok.
- **Content Creators:** Automate the visual content creation process for thematic posts or storytelling carousels.
- **Digital Marketers:** Efficiently produce visual assets for campaigns that require sequential imagery.
- **Small Businesses:** Generate unique promotional content for social media without needing advanced design skills.

What Problem Does This Workflow Solve?
Manually creating a series of related images for a carousel and then publishing them across multiple platforms can be repetitive and time-consuming. This workflow addresses these issues by:
- **Automating Image Generation:** Uses OpenAI to generate a sequence of 5 images, where each new image is an evolution of the previous one guided by a new prompt.
- **Automating Caption Generation:** Leverages OpenAI (GPT) to create a suitable description/caption for the carousel based on the image prompts.
- **Streamlining Multi-Platform Publishing:** Automatically uploads the generated image carousel and caption to both Instagram and TikTok.
- **Reducing Manual Effort:** Significantly cuts down the time spent designing individual images and uploading them manually.
- **Ensuring Visual Cohesion:** The sequential generation method (editing the previous image) helps maintain a consistent style or narrative across the carousel.

How It Works
1. Trigger: The workflow is initiated manually (it can be adapted to a schedule or webhook).
2. Define Prompts: Five distinct prompts are pre-set within the workflow to guide the generation of each image in the carousel.
3. AI Caption Generation: OpenAI (GPT-4.1) generates a concise (≤ 90 characters for TikTok) description for the social media posts based on all five image prompts.
4. Sequential AI Image Generation:
   - Image 1: OpenAI's image generation API (specified as gpt-image-1) creates the first image based on prompt1.
   - Images 2-5: For each subsequent image, the workflow uses the OpenAI image edits API. It takes the previously generated image and a new prompt (prompt2 for image 2, prompt3 for image 3, and so on) to create the next image in the sequence. Images are converted from base64 JSON to binary format.
5. Content Aggregation: The five generated binary image files (named photo1 through photo5) are merged.
6. Multi-Platform Distribution: The merged images and the AI-generated description are sent to api.upload-post.com for publishing as a carousel to Instagram. The same content is then sent to api.upload-post.com for publishing as a carousel to TikTok, with an option to automatically add music. The TikTok description is truncated if it exceeds 90 characters.

Setup
1. Accounts & API Keys: You will need an n8n instance, an OpenAI API key, and an API key for upload-post.com.
2. Configure Credentials: Add your OpenAI API key to the "OpenAI" credentials in n8n. This will be used by the "Generate Description for Tiktok and Instagram" node and the HTTP Request nodes calling the OpenAI image generation/edit APIs.
In the "POST TO INSTAGRAM" and "POST TO TIKTOK" nodes, replace "Apikey add_api_key_here" with your actual upload-post.com API key. Update the user field in the "POST TO INSTAGRAM" and "POST TO TIKTOK" nodes if "upload_post" is not your user identifier for that service. Customize Prompts: Modify the five prompts (prompt1 to prompt5) in the "Set All Prompts" node to define the story or theme of your image carousel. Review Image Generation Parameters: In the "Set API Variables" node, you can adjust: size_of_image (e.g., "1024x1536" for vertical carousels). openai_image_model (ensure this matches a valid OpenAI model identifier for image generation/edits, like dall-e-2 or dall-e-3 if gpt-image-1 is a placeholder). response_format_image (should generally remain b64_json for this workflow). (Optional) TikTok Auto Music: The "POST TO TIKTOK" node has an auto_add_music parameter set to true. Change this to false if you prefer to add music manually or not at all. Requirements Accounts:** n8n, OpenAI, upload-post.com. API Keys & Credentials:** API Keys for OpenAI and https://upload-post.com. (Potentially) Paid Plans:** OpenAI and upload-post.com usage may incur costs depending on your volume and their respective pricing models. This template empowers you to automate the creation and distribution of visually consistent image carousels, saving time and enhancing your social media presence.
by Boriwat Chanruang
Who is this for?
This workflow is designed for:
- **Content creators**, artists, or hobbyists looking to experiment with AI-generated art.
- **Small business owners** or **marketers** using LEGO-style designs for branding or promotions.
- **Developers** or **AI enthusiasts** wanting to automate image transformations through messaging platforms like LINE.

What problem is this workflow solving?
- Simplifies the process of creating custom AI-generated LEGO-style images.
- Automates the manual effort of transforming user-uploaded images into AI-generated artwork.
- Bridges the gap between messaging platforms (LINE) and advanced AI tools (DALL·E).
- Provides a seamless system for users to upload an image and receive an AI-transformed output without technical expertise.

What this workflow does
1. Image Upload via LINE: Users send an image to the LINE chatbot.
2. AI-Powered Prompt Creation: GPT generates a prompt describing the uploaded image for LEGO-style conversion.
3. AI Image Generation: DALL·E 3 processes the prompt and creates a LEGO-style isometric image.
4. Image Delivery: The generated image is returned to the user in LINE (a sample reply payload is sketched below).

Setup
Prerequisites
- **LINE Developer Account** with API credentials.
- Access to the OpenAI API with DALL·E and GPT-4 capabilities.
- A configured n8n instance to run this workflow.

Steps
1. Environment Setup: Add your LINE API token and OpenAI credentials as environment variables (LINE_API_TOKEN, OPENAI_API_KEY) in n8n.
2. Configure LINE Webhook: Point the LINE webhook to your n8n instance.
3. Connect OpenAI: Set up OpenAI API credentials in the workflow nodes for GPT-4 and DALL·E.
4. Test Workflow: Upload a sample image in LINE and ensure it returns the LEGO-style AI image.

How to customize this workflow to your needs
- **Localization:** Modify response messages in LINE to fit your audience's language and tone.
- **Integration:** Add nodes to send notifications through other platforms like Slack or email.
- **Image Style:** Replace the LEGO-style image prompt with other artistic styles or themes.

Advanced Use Cases
- Art Contests: Users upload images and receive AI-enhanced outputs for community voting or branding.
- Marketing Campaigns: Quickly generate creative visual content for ads and promotions using customer-submitted photos.
- Education: Use the workflow to teach students about AI-generated art and automation through a hands-on approach.

Tips for Optimization
- **Error Handling:** Add fallback nodes to handle invalid images or API errors gracefully.
- **Logging:** Implement a logging mechanism to track requests and outputs for debugging and analytics.
- **Scalability:** Use queue-based systems or cloud scaling to handle large volumes of image requests.

Enhancements
- Add sticky notes in n8n to provide inline instructions for configuring each node.
- Create a tutorial video or documentation for first-time users to set up and customize the workflow.
- Include advanced filters to allow users to select from multiple styles beyond LEGO (e.g., pixel art, watercolor).

This workflow enables seamless interaction between messaging platforms and advanced AI capabilities, making it highly versatile for various creative and business applications.
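For the image delivery step, the message sent back through the LINE Messaging API is a JSON reply payload along these lines. This is a sketch of LINE's standard reply format rather than the template's exact node configuration; the image URLs are placeholders and must point to a publicly reachable HTTPS location for the generated image:

```json
{
  "replyToken": "<replyToken from the incoming LINE webhook event>",
  "messages": [
    {
      "type": "image",
      "originalContentUrl": "https://example.com/generated/lego-image.png",
      "previewImageUrl": "https://example.com/generated/lego-image-preview.png"
    }
  ]
}
```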
by David Ashby
Complete MCP server exposing all LinkedIn Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every LinkedIn Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders (see the example below)
• Native Integration: Uses the official n8n LinkedIn Tool node with full error handling

📋 Available Operations (1 total)
Every possible LinkedIn Tool operation is included:

🔧 Post (1 operation)
• Create a post

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native LinkedIn Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every LinkedIn Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
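Inside the tool node, the $fromAI() placeholders look roughly like the fragment below. The parameter key shown is illustrative rather than the LinkedIn node's exact internal name; the pattern to note is that each value the AI agent should supply is wrapped in a $fromAI() expression with a name and a short description:

```json
{
  "parameters": {
    "text": "={{ $fromAI('post_text', 'The text content of the LinkedIn post', 'string') }}"
  }
}
```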
by David Ashby
Complete MCP server exposing 4 Transportation Laws and Incentives API operations to AI agents.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Add Transportation Laws and Incentives credentials
3. Activate the workflow to start your MCP server
4. Copy the webhook URL from the MCP trigger node
5. Connect AI agents using the MCP URL

🔧 How it Works
This workflow converts the Transportation Laws and Incentives API into an MCP-compatible interface for AI agents.
• MCP Trigger: Serves as your server endpoint for AI agent requests
• HTTP Request Nodes: Handle API calls to http://developer.nrel.gov/api/transportation-incentives-laws
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Returns responses directly to the AI agent

📋 Available Operations (4 total)

🔧 V1.{Output_Format} (1 endpoint)
• GET /v1.{output_format}: Return a full list of laws and incentives that match your query (example call sketched below).

🔧 V1 (3 endpoints)
• GET /v1/category-list.{output_format}: Return the law categories for a given category type.
• GET /v1/pocs.{output_format}: Get the points of contact for a given jurisdiction.
• GET /v1/{id}.{output_format}: Fetch the details of a specific law given the law's ID.

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Path parameters and identifiers
• Query parameters and filters
• Request body data
• Headers and authentication

Response Format: Native Transportation Laws and Incentives API responses with full data structure
Error Handling: Built-in n8n HTTP request error management

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Cursor: Add the MCP server SSE URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n HTTP request handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
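For instance, the full-list endpoint can be called with $fromAI()-populated query parameters. A minimal sketch, assuming JSON output; jurisdiction and technology are examples of the API's filters rather than an exhaustive list, the api_key comes from your NREL credentials, and the field names below are illustrative of the HTTP Request node's query setup, not its exact internal keys:

```json
{
  "url": "http://developer.nrel.gov/api/transportation-incentives-laws/v1.json",
  "queryParameters": {
    "api_key": "<your NREL API key>",
    "jurisdiction": "={{ $fromAI('jurisdiction', 'Two-letter state or territory code to filter by', 'string') }}",
    "technology": "={{ $fromAI('technology', 'Technology filter such as ELEC', 'string') }}"
  }
}
```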
by Mario
Purpose
This solution enables you to manage all your Notion and Todoist tasks from different workspaces, as well as your calendar events, in a single place. All tasks can be managed in Todoist, and Fantastical can additionally be used to manage scheduled tasks and events all together.

Demo & Explanation

How it works
- The realtime sync consists of two workflows, both triggered by a registered webhook from either Notion or Todoist.
- To avoid overwrites by late-arriving webhook calls, the current task is retrieved from both sides every time.
- Redis is used to prevent endless loops, since an update in one system triggers another webhook call again. Using the ID of the task, the trigger is locked for 15 seconds (see the sketch after this section).
- Depending on the detected changes, the other side is updated accordingly.
- Generally, Notion is treated as the main source. Using an "Obsolete" status guarantees that tasks never get deleted entirely by accident.
- The Todoist ID is stored in the Notion task, so the two stay linked together.
- An additional full sync workflow fixes inconsistencies daily, if any occurred, since webhooks cannot be trusted entirely.
- Since Todoist requires a more complex setup, a tiny workflow helps with activating the webhook.
- Another tiny workflow helps generate a global config, which is used by all workflows for mapping purposes.

Mapping (Notion >> Todoist)
- Name: Task Name
- Priority: Priority (1: do first, 2: urgent, 3: important, 4: unset)
- Due: Date
- Status: Section (Done: completed, Obsolete: deleted)
- <page_link>: Description (read-only)
- Todoist ID: <task_id>

Current limitations
- Changes to the same task cannot be made simultaneously in both systems within a 15-20 second time frame.
- Subtasks are not linked automatically to their parent yet.
- Recurring tasks are not supported yet.
- Task names do not support URLs yet.

Prerequisites
Notion
- A database must already exist (get a basic template here) with the following properties (case matters!):
  - Text: "Name"
  - Status: "Status", containing at least the options "Backlog", "In progress", "Done", "Obsolete"
  - Select: "Priority", containing the options "do first", "urgent", "important"
  - Date: "Due"
  - Checkbox: "Focus"
  - Text: "Todoist ID"
Todoist
- A project must already exist with the same sections as the Status options defined in Notion (except Done and Obsolete).
Redis
- Create a free Redis Cloud instance or self-host.

Setup
The setup involves quite a lot of steps, yet many of them can be automated for business-internal purposes. Just follow the video or do the following steps:
1. Set up credentials for Notion (access token), Todoist (access token) and Redis - you can also create empty credentials and populate these later during further setup.
2. Clone this workflow by clicking the "Use workflow" button and then choosing your n8n instance - otherwise you need to map the credentials of many nodes.
3. Follow the instructions described within the bundle of sticky notes on the top left of the workflow.

How to use
- You can apply changes (create, update, delete) to tasks both in Notion and Todoist, which then get synced over within a couple of seconds (this is handled by the differential realtime sync).
- The daily full sync resolves possible discrepancies in Todoist and sends a summary via email if anything needed to be updated. In case that contains an unintended change, you can jump from the email directly to the task to fix it manually.
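The 15-second lock mentioned above maps naturally onto a Redis SET with a TTL, keyed by the task ID. A minimal sketch of such a Redis node configuration (the parameter names here are illustrative, not copied from the template):

```json
{
  "operation": "set",
  "key": "task-lock:{{ task ID from the incoming webhook }}",
  "value": "locked",
  "expire": true,
  "ttl": 15
}
```

The companion trigger workflow can then check whether the key already exists before processing and skip the update if it does, which is what breaks the Notion-to-Todoist-to-Notion webhook loop.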
by Satyam Tripathi
Try It Out!
This n8n template demonstrates how to build an autonomous AI news agent using Decodo MCP that automatically finds, scrapes, and delivers fresh industry news to your team via Slack.

Use cases are many – automated news monitoring for your industry, competitive intelligence gathering, startup monitoring, regulatory updates, research automation, or daily briefings for your organization.

How it works
- Define your news topics using the Set node – AI, MCP, web scraping, whatever matters to your business.
- The AI Agent processes those topics using the Gemini Chat Model, determining which tools to use and when.
- Here's where it gets interesting: Decodo MCP gives your AI agent the tools to search Google, scrape websites, and parse content automatically – all while bypassing geo-restrictions and anti-bot measures.
- The agent hunts for fresh articles from the last 48 hours, extracts clean data, and returns structured JSON results (a sample shape is sketched below).
- Format Results cleans up the AI's messy output and removes duplicates.
- Your polished news digest gets delivered to Slack with clickable links and summaries.

How to use
- The Schedule trigger runs daily at 9 AM – adjust the timing or swap in a webhook trigger as needed.
- Customize the topics in the Set node to match your industry.
- It scales effortlessly: add more topics, tweak the search criteria, done.

Requirements
- Decodo MCP credentials (free trial available) – grab the Smithery connection URL with keys and paste it straight into your n8n MCP node. Done.
- Gemini API key for the AI processing – drop it into the Google Gemini Chat Model node and pick whichever Gemini model fits your needs.
- Slack workspace for delivery – n8n's Slack integration docs have you covered.

What the final output looks like
Your team receives a formatted news digest in Slack every morning, with clickable links and short summaries for each topic.

Need help? Join the Discord or email support@decodo.com for questions. Happy Automating!
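The structured JSON results the agent hands to Format Results could look something like the sketch below. The field names are hypothetical - they depend entirely on the output schema you prompt the agent to follow - but they show a shape that makes deduplication and Slack formatting straightforward:

```json
[
  {
    "topic": "AI",
    "title": "Example headline from the last 48 hours",
    "url": "https://example.com/article",
    "source": "example.com",
    "published": "2025-01-15",
    "summary": "One or two sentences the agent extracted from the article."
  }
]
```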
by PUQcloud
Setting up the n8n workflow

Overview
The Docker MinIO WHMCS module uses a specially designed n8n workflow to automate deployment processes. The workflow provides an API interface for the module, receives specific commands, and connects via SSH to a server with Docker installed to perform predefined actions.

Prerequisites
You must have your own n8n server. Alternatively, you can use the official n8n cloud installations available at: n8n Official Site

Installation Steps

1. Install the required workflow on n8n
You have two options:
- Option 1: Use the latest version from the n8n marketplace. The latest workflow templates for our modules are available on the official n8n marketplace. Visit our profile to access all available templates: PUQcloud on n8n
- Option 2: Manual installation. Each module version comes with a workflow template file. You need to manually import this template into your n8n server.

2. n8n workflow API backend setup for WHMCS/WISECP

Configure API Webhook and SSH Access
- Create a Basic Auth credential for the Webhook API block in n8n.
- Create an SSH credential for accessing a server with Docker installed.

Modify Template Parameters
In the Parameters block of the template, update the following settings:
- server_domain – must match the domain of the WHMCS/WISECP Docker server.
- clients_dir – directory where user data related to Docker and disks will be stored.
- mount_dir – default mount point for the container disk (recommended not to change).
Do not modify the following technical parameters:
- screen_left
- screen_right

Deploy-docker-compose
In the Deploy-docker-compose element, you can modify the Docker Compose configuration, which will be generated in the following scenarios:
- When the service is created
- When the service is unlocked
- When the service is updated

nginx
In the nginx element, you can modify the configuration parameters of the web interface proxy server.
- The main section allows you to add custom parameters to the server block in the proxy server configuration file.
- The main_location section contains settings that will be added to the location / block of the proxy server configuration. Here, you can define custom headers and other parameters specific to the root location.

Bash Scripts
Management of Docker containers and all related procedures on the server is carried out by executing Bash scripts generated in n8n. These scripts return either a JSON response or a string. All scripts are located in elements directly connected to the SSH element. You have full control over any script and can modify or execute it as needed.
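Since the Bash scripts return either a JSON response or a string, JSON status objects are the easiest to consume downstream. A purely hypothetical example of the kind of JSON a provisioning script might echo back (the field names are illustrative, not the module's actual contract):

```json
{
  "status": "success",
  "action": "create",
  "container": "minio_client123",
  "message": "Container deployed and volume mounted"
}
```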
by Rudi Afandi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

This workflow allows you to create and manage custom short URLs directly via Telegram, with all data stored in MongoDB and redirects handled efficiently via Nginx.

How it works
This flow provides a seamless URL shortening experience:
- Create via Telegram: Send a long URL to your bot. It will ask if you want a custom short code.
- Store in MongoDB: All long URLs and their corresponding short codes are securely stored in your MongoDB instance.
- Fast Redirects: When a user accesses a short URL, Nginx forwards the request to a dedicated n8n webhook, which then quickly redirects them to the original long URL.

Set up steps
This setup is straightforward, especially if you already have a running n8n instance and a VPS.
- Difficulty: Medium (basic n8n/VPS knowledge required)
- Estimated time: 15-30 minutes

1. n8n Instance & VPS: Ensure you have n8n running on your VPS (e.g., 2 cores and 2 GB of RAM).
2. Telegram Bot: Create a new bot via @BotFather and get your bot token. Add this as a Telegram credential in n8n.
3. MongoDB Database: Set up a MongoDB instance (either on your VPS or a cloud service like MongoDB Atlas). Create a database and a collection (e.g., url or short_urls). Add your MongoDB credentials in n8n. Here is the MongoDB data structure (JSON):
   > [ {"_id": "686a11946a48b580d72d0397", "longUrl": "https://longurl.com/abcdefghijklm/", "shortUrl": "short-code"} ]
4. Domain/Subdomain: Point a domain or subdomain (e.g., s.yourdomain.com) to your VPS IP address. This will be your short URL base.
5. Nginx/Caddy Configuration: Configure your web server (Nginx or Caddy) on the VPS to proxy requests from your short URL domain to the n8n webhook for redirects. (A detailed Nginx config is provided as sticky notes in the redirect workflow.)
6. Workflow Setup: Import both provided n8n workflows (Telegram URL Shortener Creator and URL Redirect Handler) and activate them. Crucial: Set an environment variable in your n8n instance (or .env file) named SHORTENER_DOMAIN with the value of your short URL domain (e.g., https://s.yourdomain.com).

Refer to the sticky notes inside the workflows for detailed node configurations and expressions.
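In the URL Redirect Handler, the final step is typically a Respond to Webhook node set to redirect. A rough sketch of its settings using the looked-up longUrl field from MongoDB (the parameter names are approximations of the node's options, so double-check against the imported workflow):

```json
{
  "respondWith": "redirect",
  "redirectURL": "={{ $json.longUrl }}",
  "responseCode": 301
}
```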
by David Olusola
How It Works
The workflow is an automated appointment reminder system built on n8n. Here is a step-by-step breakdown of its process:

1. Reminder Webhook
This node acts as the entry point for the workflow. It's a unique URL that waits for data to be sent to it from an external application, such as a booking or scheduling platform. When a new appointment is created in that system, it sends a JSON payload to this webhook.

2. Extract Appointment Data
This is a Code node that processes the incoming data. It's a critical step that:
- Extracts the customer's name, phone number, appointment time, and service from the webhook's JSON payload.
- Includes validation to ensure a phone number is present, throwing an error if it's missing.
- Formats the raw appointment time into a human-readable string for the SMS message.

3. Send SMS Reminder
This node uses your Twilio credentials to send an SMS message. It dynamically constructs the message using the data extracted in the previous step. The message is personalized with the customer's name and includes the formatted appointment details.

Setup Instructions
1. Import the Workflow: Copy the JSON code from the Canvas and import it into your n8n instance.
2. Connect Your Twilio Account: Click on the "Send SMS Reminder" node. In the "Credentials" section, either select your existing Twilio account or add new credentials by providing your Account SID and Auth Token from your Twilio console.
3. Find the Webhook URL: Click on the "Reminder Webhook" node. The unique URL for this workflow will be displayed. Copy this URL.
4. Configure Your Booking System: Go to your booking or scheduling platform (e.g., Calendly, Acuity). In the settings or integrations section, find where you can add a new webhook and paste the URL you copied from n8n. You'll need to map the data fields from your booking system (such as customer name and phone) to match the expected format shown in the comments of the "Extract Appointment Data" node; an example payload is sketched below.

Once these steps are complete, your workflow will be ready to automatically send SMS reminders whenever a new appointment is created.
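A hypothetical payload a booking platform might POST to the Reminder Webhook. The keys here are placeholders - the authoritative field names are the ones documented in the comments of the "Extract Appointment Data" Code node, so map your booking system's fields to those:

```json
{
  "customer_name": "Jane Doe",
  "phone": "+15551234567",
  "appointment_time": "2025-03-18T15:30:00Z",
  "service": "Initial consultation"
}
```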
by David Ashby
Complete MCP server exposing all Google Translate Tool operations to AI agents. Zero configuration needed - 1 operation pre-built.

⚡ Quick Setup
Need help? Want access to more workflows and even live Q&A sessions with a top verified n8n creator, all 100% free? Join the community.
1. Import this workflow into your n8n instance
2. Activate the workflow to start your MCP server
3. Copy the webhook URL from the MCP trigger node
4. Connect AI agents using the MCP URL

🔧 How it Works
• MCP Trigger: Serves as your server endpoint for AI agent requests
• Tool Nodes: Pre-configured for every Google Translate Tool operation
• AI Expressions: Automatically populate parameters via $fromAI() placeholders
• Native Integration: Uses the official n8n Google Translate Tool node with full error handling

📋 Available Operations (1 total)
Every possible Google Translate Tool operation is included:

🔧 Language (1 operation)
• Translate a language

🤖 AI Integration
Parameter Handling: AI agents automatically provide values for:
• Resource IDs and identifiers
• Search queries and filters
• Content and data payloads
• Configuration options

Response Format: Native Google Translate Tool API responses with full data structure
Error Handling: Built-in n8n error management and retry logic

💡 Usage Examples
Connect this MCP server to any AI agent or workflow:
• Claude Desktop: Add the MCP server URL to your configuration
• Custom AI Apps: Use the MCP URL as a tool endpoint
• Other n8n Workflows: Call MCP tools from any workflow
• API Integration: Direct HTTP calls to MCP endpoints

✨ Benefits
• Complete Coverage: Every Google Translate Tool operation available
• Zero Setup: No parameter mapping or configuration needed
• AI-Ready: Built-in $fromAI() expressions for all parameters
• Production Ready: Native n8n error handling and logging
• Extensible: Easily modify or add custom logic

> 🆓 Free for community use! Ready to deploy in under 2 minutes.
by Calistus Christian
What this template does
Sends you an email (via Gmail) whenever any workflow that references this one fails. The message includes the workflow name/ID, execution URL, last node executed, and the error message.

Why it's useful
Centralizes error notifications so you notice failures immediately and can jump straight to the failed execution.

Prerequisites
- A Gmail account connected through n8n's Gmail node credentials.
- This workflow set as the Error Workflow inside the workflows you want to monitor.

How it works
1. The Error Trigger starts this workflow whenever a linked workflow fails.
2. Gmail (Send → Message) composes and sends an email using details from the Error Trigger (the available fields are sketched below).

Notes
- Error workflows don't need to be activated to work.
- You can't test them by running them manually; errors must occur in an automatically run workflow (cron, webhook, etc.).
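For reference when building the email body, the Error Trigger emits a JSON item along these lines. This is a simplified sketch - check the trigger's actual output in your n8n version, since field availability can vary by execution mode:

```json
{
  "execution": {
    "id": "231",
    "url": "https://your-n8n-instance.example.com/execution/231",
    "error": {
      "message": "Example error message",
      "stack": "..."
    },
    "lastNodeExecuted": "HTTP Request",
    "mode": "trigger"
  },
  "workflow": {
    "id": "12",
    "name": "Example monitored workflow"
  }
}
```

In the Gmail node you can reference these values with expressions such as {{ $json.execution.url }} or {{ $json.workflow.name }}.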