by Daniel Nolde
**What this does**
Shows you how to use XML-RPC APIs via the generic HTTP Request node, using the example of posting to a WordPress blog. This is also a feasible workaround if a specific n8n integration does not work or stops working (which happens, e.g., with the WordPress node).

**How it works**
- First, the XML payload for the request is prepared in a Code node, which also properly escapes special characters in the values you want to send to the XML-RPC endpoint (see the sketch below).
- Then, the HTTP Request node sends the request using the HTTP POST method.
- Last, the returned XML response is converted to JSON, which a conditional node uses to determine whether the operation was successful or not.

**Setup steps**
- Import the workflow.
- Ensure you have a WordPress blog with a user that has an app password.
- Edit the "Settings" node and enter your individual values for URL/user/app password.
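For illustration, the payload-preparation step could look roughly like this in a Code node. This is a minimal sketch assuming a `wp.newPost` call; the input field names (user, appPassword, title, content) are placeholders for whatever your Settings node actually provides:

```javascript
// Minimal sketch of an n8n Code node that builds an XML-RPC payload.
// Field names (user, appPassword, title, content) are assumptions --
// adapt them to whatever your "Settings" node supplies.
const escapeXml = (value) =>
  String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');

const { user, appPassword, title, content } = $json;

const payload = `<?xml version="1.0"?>
<methodCall>
  <methodName>wp.newPost</methodName>
  <params>
    <param><value><int>0</int></value></param>
    <param><value><string>${escapeXml(user)}</string></value></param>
    <param><value><string>${escapeXml(appPassword)}</string></value></param>
    <param><value><struct>
      <member><name>post_title</name><value><string>${escapeXml(title)}</string></value></member>
      <member><name>post_content</name><value><string>${escapeXml(content)}</string></value></member>
      <member><name>post_status</name><value><string>publish</string></value></member>
    </struct></value></param>
  </params>
</methodCall>`;

return [{ json: { payload } }];
```

The HTTP Request node then POSTs `payload` to the blog's xmlrpc.php endpoint with a `text/xml` content type.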
by Fan Luo
**Daily Company News Bot**

This n8n template demonstrates how to use the free FinnHub API to retrieve company news for a list of stock tickers and post messages to a Slack channel at a pre-scheduled time.

**How it works**
- First, we define the list of stock tickers you are interested in.
- Loop over the items, calling the FinnHub API to get the latest company news for each ticker.
- Format the company news as Markdown text that can be sent to Slack.
- Post a new message in the Slack channel.
- Wait for 5 seconds, then move on to the next ticker.

**How to use**
Simply set up a schedule trigger to run the workflow automatically.

**Requirements**
- FinnHub API key
- Slack channel webhook

**Need help?**
Contact me via My Blog or ask in the Forum! Happy Hacking!
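For reference, FinnHub's company-news endpoint takes a ticker symbol and a date range. A minimal sketch of the call and the Markdown formatting, with an illustrative ticker and a placeholder API key:

```javascript
// Minimal sketch: fetch company news from FinnHub and format it for Slack.
// FINNHUB_API_KEY and the ticker are placeholders -- supply your own values.
const ticker = 'AAPL';
const today = new Date().toISOString().slice(0, 10);
const url = `https://finnhub.io/api/v1/company-news?symbol=${ticker}&from=${today}&to=${today}&token=${$env.FINNHUB_API_KEY}`;

const news = await fetch(url).then((res) => res.json());

// Format the top stories as Markdown lines for a Slack message.
const markdown = news
  .slice(0, 5)
  .map((item) => `*${item.headline}*\n${item.summary}\n<${item.url}|Read more>`)
  .join('\n\n');

return [{ json: { ticker, markdown } }];
```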
by Lucas Peyrin
**How it works**
This workflow is a robust and forgiving JSON parser designed to handle malformed or "dirty" JSON strings often returned by AI models or scraped from web pages. It takes a text string as input and attempts to extract and parse a valid JSON object from it.

- **Cleans input:** It starts by trimming whitespace and removing common Markdown code fences (like ```json ... ```).
- **Applies multiple fixes:** It systematically attempts to correct common JSON errors in a specific order:
  - Escapes unescaped control characters (like newlines) within strings.
  - Fixes invalid backslash escape sequences.
  - Removes trailing commas.
  - Intelligently attempts to fix unescaped double quotes inside string values.
- **Parses strategically:** If a direct parse fails, it tries to extract a potential JSON object from the text (e.g., finding a {...} block inside a larger sentence) and then re-applies the cleaning logic to that extracted portion.
- **Outputs clean data:** If successful, it outputs the parsed JSON fields. By default, it removes the detailed parsing_status object, but you can deactivate the final "Set" node to keep it for debugging.

**Set up steps**
Setup time: ~1 minute. This workflow is designed to be used as a sub-workflow and requires no internal setup.

- In your main workflow, add an Execute Sub-Workflow node where you need to parse a messy JSON string.
- In the Workflow parameter, select this "Robust JSON Parser" workflow.
- Ensure the data you send to the node is a JSON object containing a text field, where the value of text is the string you want to parse. For example: { "text": "{\\\"key\\\": \\\"some broken json...\\\"}" }.
- The workflow will return the successfully parsed data.
- To see a detailed log of the cleaning process, simply deactivate the final Remove parsing_status node inside this workflow.
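The recovery strategy can be pictured roughly like this (an illustration of the idea, not the template's exact code):

```javascript
// Rough sketch of the recovery strategy described above -- an
// illustration of the idea, not the template's exact implementation.
function parseDirtyJson(text) {
  let s = String(text).trim()
    // Strip Markdown code fences such as ```json ... ```
    .replace(/^```[a-z]*\s*/i, '')
    .replace(/\s*```$/, '');

  const attempts = [
    (t) => t,
    // Remove trailing commas before } or ].
    (t) => t.replace(/,\s*([}\]])/g, '$1'),
    // Extract the first {...} block from surrounding prose and retry.
    (t) => (t.match(/\{[\s\S]*\}/) || [t])[0].replace(/,\s*([}\]])/g, '$1'),
  ];

  for (const attempt of attempts) {
    try {
      return JSON.parse(attempt(s));
    } catch {
      // Fall through to the next, more aggressive fix.
    }
  }
  throw new Error('Could not recover valid JSON');
}

// Example: both calls below yield { key: "value" }.
console.log(parseDirtyJson('```json\n{"key": "value",}\n```'));
console.log(parseDirtyJson('The model said: {"key": "value"} and stopped.'));
```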
by SamirLiu
**📝 Overview**
This workflow leverages Google Gemini 2.0 Flash multimodal AI to automatically generate detailed descriptions of video content from any public URL. It streamlines video understanding, making it ideal for content cataloging, accessibility, and content moderation.

**💡 Use Cases**
- ♿ **Accessibility:** Automatically generate detailed video descriptions for visually impaired users.
- 🛡️ **Content Moderation:** Detect inappropriate or off-brand material without manual watching.
- 🗂️ **Media Cataloging:** Enrich your media library with automatically extracted metadata.
- 📈 **Marketing & Branding:** Gain fast insights into key elements, tone, and branding in video content.

**⚙️ Setup Instructions**
- 🔑 **Get a Gemini API key:** Register at ai.google.dev and create an API key. Before running the workflow, set your Gemini API key as an environment variable named GeminiKey for secure access within the workflow. In the Set Input node, reference this environment variable instead of hardcoding the key.
- 🌐 **Configure the video URL:** Replace the sample URL in the Set Input node with your desired public video URL. Ensure the video is directly accessible (no login or special permissions required).
- 📝 **Optional: customize the analysis.** Edit the prompt in the Analyze video Gemini node to focus on the video details most relevant to your use case (e.g., branding, key actions, visual elements).
- 🔒 **Security tip:** Use n8n's credentials manager or environment variables (like GeminiKey) to store your API key securely. Avoid hardcoding API keys directly in workflow nodes, especially in production environments.

**🔄 How It Works**
1. 📥 Download the video from the provided URL.
2. ☁️ Upload the video to Gemini's server for processing.
3. ⏳ Wait for Gemini to complete processing.
4. 🤖 Analyze the video with Gemini AI using your customized prompt.
5. 📄 Output a comprehensive description of the video as videoDescription.

**⚡ Technical Details**
- Uses HTTP Request nodes to interact with Gemini API endpoints.
- Handles file download, upload, status checking, and result retrieval.
- Customizable Gemini AI parameters for fine-tuned responses.
- Main output: videoDescription (detailed text describing the video content).

**🚀 Quickstart**
Set your Gemini API key as the GeminiKey environment variable, configure your video URL in the workflow, and execute it. Retrieve your rich, AI-generated video description for downstream use such as automation, tagging, or reporting.
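Conceptually, the analysis step is a single HTTP POST to Gemini's generateContent endpoint once the uploaded file is ready. A minimal sketch, where the prompt is illustrative and fileUri stands in for the URI returned by the upload step:

```javascript
// Minimal sketch of the Gemini analysis request, assuming the video was
// already uploaded and `fileUri` holds the URI returned by the Files API.
const apiKey = process.env.GeminiKey; // the environment variable set above
const fileUri = 'FILE_URI_FROM_UPLOAD_STEP'; // placeholder

const response = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{
        parts: [
          { text: 'Describe this video in detail: setting, actions, tone, branding.' },
          { file_data: { file_uri: fileUri, mime_type: 'video/mp4' } },
        ],
      }],
    }),
  }
);

const result = await response.json();
const videoDescription = result.candidates?.[0]?.content?.parts?.[0]?.text;
```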
by Joseph LePage
This n8n workflow demonstrates multiple ways to harness DeepSeek's AI models in your automation pipeline!

**🌟 Core Features**

**Multiple Integration Methods 🔌**
- Local deployment using Ollama for DeepSeek-R1
- Direct API integration with DeepSeek Chat V3
- Conversational agent with memory buffer
- HTTP request implementation with both raw and JSON formats

**Model Options 🧠**
- DeepSeek Chat V3 for general conversation
- DeepSeek-R1 for advanced reasoning
- Memory-enabled agent for persistent context

**Quick Setup 🛠️**

**API Configuration**
- Base URL: https://api.deepseek.com
- Get your API key from platform.deepseek.com/api_keys

**Local Setup 💻**
- Install Ollama for local deployment
- Set up DeepSeek-R1 via Ollama
- Configure local credentials in n8n

**Implementation Details 🔧**

**Conversational Agent**
- Window Buffer Memory for context
- Customizable system messages
- Built-in error handling with retries

**API Endpoints 🌐**
- Chat completions for V3 and R1 models
- Compatible with the OpenAI API format
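Because DeepSeek's endpoints follow the OpenAI API format, the raw HTTP variant boils down to a standard chat-completions request. A minimal sketch with a placeholder API key and an illustrative prompt:

```javascript
// Minimal sketch of a DeepSeek chat completion via the OpenAI-compatible API.
const apiKey = 'YOUR_DEEPSEEK_API_KEY'; // placeholder -- from platform.deepseek.com/api_keys

const response = await fetch('https://api.deepseek.com/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${apiKey}`,
  },
  body: JSON.stringify({
    model: 'deepseek-chat', // V3; use 'deepseek-reasoner' for R1
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Summarise what n8n does in one sentence.' },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```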
by Damian Karzon
This workflow randomly selects recipes from a Mealie instance (optionally from a specific category) and then creates a meal plan in Mealie with those recipes.

**How it works**
- The workflow has a schedule trigger (set to run weekly on a Friday).
- A Config node sets a few properties to configure the workflow.
- A call to the Mealie API gets the list of recipes.
- The Code node holds most of the logic: it loops through the number of recipes defined in the Config node and randomly selects a recipe from the list, making sure not to double up any recipes (see the sketch below).
- Once all the recipes are selected, it calls the Mealie API to set up the meal plan on the chosen days.

**Setup**
- Add your Mealie API token as a credential and set it on the HTTP Request nodes.
- Set the schedule trigger to run when you like.
- Update the Config node with the configuration you want:
  - numberOfRecipes: number of recipes to populate the meal plan with
  - offsetPlanDays: number of days in the future to start the plan (0 starts it today, 1 tomorrow, etc.)
  - mealieCategoryId: the id of the category you want to pull recipes from (defaults to selecting from all recipes)
  - mealieBaseUrl: the base URL of your Mealie instance
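The random selection amounts to drawing without replacement. A minimal sketch of the Code node logic, assuming the recipes arrive as input items and the Config node is named "Config":

```javascript
// Minimal sketch of the selection logic, assuming the recipe list arrives
// as input items and a node named "Config" provides numberOfRecipes.
const numberOfRecipes = $('Config').first().json.numberOfRecipes;
const pool = items.map((item) => item.json);
const selected = [];

while (selected.length < numberOfRecipes && pool.length > 0) {
  // Remove the picked recipe from the pool so it cannot be chosen twice.
  const index = Math.floor(Math.random() * pool.length);
  selected.push(pool.splice(index, 1)[0]);
}

return selected.map((recipe) => ({ json: recipe }));
```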
by Jimleuk
This n8n template demonstrates how to build a simple PostgreSQL MCP server to manage a PostgreSQL database for domains such as HR, payroll, sales, inventory, and more!

This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/postgres

**How it works**
- An MCP Server trigger is used and connected to 5 tools: 2 PostgreSQL and 3 custom workflow.
- The 2 PostgreSQL tools are simple read-only queries, so the PostgreSQL tool can be used directly.
- The 3 custom workflow tools handle select, insert, and update queries, as these operations require a bit more discretion. While it may be easier to let the agent write raw SQL queries, it is a little safer to allow only parameters instead. The custom workflow tool lets us define this restricted schema for tool input, which we use to construct the SQL statement ourselves (see the sketch below).
- All 3 custom workflow tools trigger the same Execute Workflow trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, we use the standard PostgreSQL node to handle select, insert, and update operations. The responses are then sent back to the MCP client.

**How to use**
This PostgreSQL MCP server allows any compatible MCP client to manage a PostgreSQL database by supporting select, insert, and update operations. You will need to have a database available before you can use this server.

Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop

Try the following queries in your MCP client:
- "Please help me check if Alex has an entry in the users table. If not, please help me create a record for her."
- "What was the top selling product in the last week?"
- "How many high priority support tickets are still open this morning?"

**Requirements**
- PostgreSQL for the database. This can be an external database such as Supabase or one you host internally.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

**Customising this workflow**
- If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific business purpose, e.g., confine querying and editing to HR-only tables before providing access to people in that department.
- Remember to set the MCP server to require credentials before going to production and sharing it with others!
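The restricted-schema idea boils down to accepting only a table name and column/value pairs, never raw SQL. A minimal sketch of an insert handler, with illustrative table names and an allow-list added as an extra safety assumption:

```javascript
// Minimal sketch of the "restricted schema" insert handler: the tool input
// supplies a table name and column/value pairs, never raw SQL.
// Table names are illustrative -- validate against your own allow-list.
const ALLOWED_TABLES = ['users', 'tickets', 'orders'];

const { table, values } = $json; // e.g. { table: 'users', values: { name: 'Alex' } }

if (!ALLOWED_TABLES.includes(table)) {
  throw new Error(`Table "${table}" is not permitted`);
}

const columns = Object.keys(values);
const placeholders = columns.map((_, i) => `$${i + 1}`).join(', ');

// Parameterised statement for the downstream PostgreSQL node; values are
// passed separately so user input never becomes part of the SQL string.
return [{
  json: {
    query: `INSERT INTO ${table} (${columns.join(', ')}) VALUES (${placeholders})`,
    parameters: Object.values(values),
  },
}];
```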
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. Connects Discord conversations with custom automations, APIs, and AI services through n8n.

Full guide: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

**Discord Bot Summary**

**Overview**
The Discord bot listens for mentions, forwards questions to an n8n workflow, processes responses, and replies in Discord. This workflow is intended for all Discord users who want to offer AI interactions in their channels.

**What do you need?**
A Discord account as well as a Google Cloud Project.

**Key Features**

1. **Listens for Mentions**
   The bot monitors Discord channels for messages that mention it.
   - **Optional Configuration:** Can be set to respond only in a specific channel.

2. **Forwards Questions to n8n**
   When a user mentions the bot and asks a question, the bot extracts the question and sends it, along with channel and user information, to an n8n webhook URL.

3. **Processes Data in n8n**
   The n8n workflow receives the question and can:
   - Interact with AI services (e.g., generating responses).
   - Access databases or external APIs.
   - Perform custom logic.
   n8n formats the response and sends it back to the bot.

4. **Replies to Discord with n8n's Response**
   The bot receives the response from n8n and replies to the user's message in the Discord channel with the answer.
   - **Long Responses:** Handles responses exceeding Discord's 2000-character limit by chunking them into multiple messages.

5. **Error Handling**
   Includes error handling for issues with n8n communication and response-formatting problems. Manages cases where no question is asked or an invalid response is received from n8n.

6. **Typing Indicator**
   While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.

7. **Status Update**
   For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing their request.

**Step-by-Step Setup Guide (as per the GitHub instructions)**

**Key Takeaways**
- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

**1. n8n: Create & Expose Your Webhook**
- **New Workflow:** Log into your n8n instance, click Create Workflow (➕), and name it, e.g., Discord Bot Handler.
- **Webhook Trigger:** Add a node (➕) → search Webhook. Set Authentication: None (or your choice), HTTP Method: POST, Path: e.g., /discord-bot. Click Execute Node to activate.
- **Copy Webhook URL:** After execution, copy the Production Webhook URL. You'll paste this into your bot's .env.
- **Build Your Logic:** Chain additional nodes (AI, database lookups, etc.) as required.
- **Format the JSON Response:** Insert a Function node before the end:
  ```javascript
  return { json: { answer: "Your processed reply" } };
  ```
- **Respond to Webhook:** Add Respond to Webhook as the final node and point it at your Function node's output (with the answer field).
- **Activate:** Toggle Active in the top right and Save.

**2. Discord Developer Portal: App & Bot**
- **New Application:** Visit the Discord Developer Portal, click New Application, and name it. Go to Bot → Add Bot.
- **Enable Intents & Permissions:** Under Privileged Gateway Intents, toggle Message Content Intent. Under Bot Permissions, check: Read Messages/View Channels, Send Messages, Read Message History.
- **Grab Your Token:** In Bot → click Copy (or Reset Token). Store it securely.
- **Invite Link (OAuth2 URL):** Go to OAuth2 → URL Generator. Select scopes: bot, applications.commands. Under Bot Permissions, select the same permissions as above. Copy the generated URL, open it in your browser, and invite your bot.

**3. Server Prep: Node.js & Project Setup**
- **Install Node.js v20.x:**
  ```bash
  sudo apt purge nodejs npm
  sudo apt autoremove
  curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
  sudo apt install -y nodejs
  node -v   # Expect v20.x.x
  npm -v    # Expect 10.x.x
  ```
- **Project Folder:**
  ```bash
  mkdir discord-bot
  cd discord-bot
  ```
- **Initialize & Dependencies:**
  ```bash
  npm init -y
  npm install discord.js axios dotenv
  ```

**4. Bot Code & Configuration**
- **Environment Variables:** Create .env (`nano .env`) and populate:
  ```
  DISCORD_BOT_TOKEN=your_bot_token
  N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
  # Optional: restrict to one channel
  TARGET_CHANNEL_ID=123456789012345678
  ```
- **Bot Script:** Create index.js (`nano index.js`) and implement:
  - Import dotenv, discord.js, axios.
  - Set up the client with the MessageContent intent.
  - On messageCreate: ignore bots or non-mentions; (optionally) filter by channel ID; extract and validate the user's question; send "typing..." every 5 s and, after 20 s, a status update if still processing; POST to your n8n webhook with question, channelId, userId, userName; parse various response shapes to find answer; if answer.length ≤ 2000, message.reply(answer), else split into ~1900-character chunks at sentence/paragraph breaks and send sequentially (see the sketch below); on errors, clear intervals, log details, and reply with an error message.
  - Login:
    ```javascript
    client.login(process.env.DISCORD_BOT_TOKEN);
    ```

**5. Deployment: Keep It Alive with PM2**
- **Install PM2:** `npm install -g pm2`
- **Start & Monitor:**
  ```bash
  pm2 start index.js --name discord-bot
  pm2 status
  pm2 logs discord-bot
  ```
- **Auto-Start on Boot:**
  ```bash
  pm2 startup
  # Follow the printed command, e.g.:
  # sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user
  pm2 save
  ```

**6. Test & Troubleshoot**
- **Functional Test:** In your Discord server: `@YourBot What's the weather like?` Expect a reply from your n8n workflow.
- **Common Pitfalls:**
  - No reply → check `pm2 logs discord-bot`.
  - Intent errors → verify Message Content Intent in the Portal.
  - Webhook failures → ensure the workflow is active and the URL is correct.
  - Formatting issues → confirm your Function node returns `json.answer`.
- **Inspect Raw Data:** Search your logs for `Complete response from n8n:` to debug payload shapes.
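The chunking helper from step 4 might be sketched like this (an illustration, not the companion repo's exact code):

```javascript
// Illustrative sketch of the chunking logic from step 4. Splits long
// answers at paragraph or sentence breaks so each Discord message stays
// safely under the 2000-character limit.
function chunkAnswer(answer, maxLength = 1900) {
  const chunks = [];
  let remaining = answer;

  while (remaining.length > maxLength) {
    const slice = remaining.slice(0, maxLength);
    // Prefer a paragraph break, then a sentence break, else hard-cut.
    const breakAt = Math.max(slice.lastIndexOf('\n\n'), slice.lastIndexOf('. ')) + 1;
    const cut = breakAt > 0 ? breakAt : maxLength;
    chunks.push(remaining.slice(0, cut).trim());
    remaining = remaining.slice(cut);
  }
  if (remaining.trim()) chunks.push(remaining.trim());
  return chunks;
}

// Usage inside the messageCreate handler:
// for (const chunk of chunkAnswer(answer)) await message.reply(chunk);
```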
by Jimleuk
This n8n template demonstrates how to get started with Gemini 2.0's new bounding-box detection capabilities in your workflows.

The key difference is that this enables prompt-based object detection for images, which is quite powerful for things like contextual search over an image, e.g., "Put a bounding box around all adults with children in this image" or "Put a bounding box around cars parked out of bounds of a parking space".

**How it works**
- An image is downloaded via the HTTP Request node, and an Edit Image node is used to extract the file's width and height.
- The image is then given to the Gemini 2.0 API to parse and return the coordinates of the bounding boxes of the requested subjects. In this demo, we've asked the AI to identify all bunnies.
- The coordinates are then rescaled using the original image's width and height to correctly align them (see the sketch below).
- Finally, to gauge the accuracy of the object detection, we use the Edit Image node to draw the bounding boxes onto the original image.

**How to use**
Really up to the imagination! Perhaps a form of grounding for evidence-based workflows, or a higher form of image search.

**Requirements**
- Google Gemini for the LLM

**Customising the workflow**
This template is just a demonstration of an experimental version of Gemini 2.0. It is recommended to wait for Gemini 2.0 to come out of this stage before using it in production.
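Gemini 2.0 returns box coordinates normalised to a 0-1000 grid, so rescaling is a simple proportion. A minimal sketch, assuming the response's box_2d arrays are ordered [yMin, xMin, yMax, xMax]:

```javascript
// Minimal sketch of rescaling Gemini's normalised bounding boxes to pixel
// coordinates. Assumes box_2d is [yMin, xMin, yMax, xMax] on a 0-1000 grid,
// with width/height taken from the Edit Image node.
function rescaleBox(box2d, width, height) {
  const [yMin, xMin, yMax, xMax] = box2d;
  return {
    x: Math.round((xMin / 1000) * width),
    y: Math.round((yMin / 1000) * height),
    w: Math.round(((xMax - xMin) / 1000) * width),
    h: Math.round(((yMax - yMin) / 1000) * height),
  };
}

// Example: a box covering the centre of a 1920x1080 image.
console.log(rescaleBox([250, 250, 750, 750], 1920, 1080));
// -> { x: 480, y: 270, w: 960, h: 540 }
```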
by Agent Studio
This workflow is an experiment to build HTML pages from user input using the new Structured Outputs feature from OpenAI.

**How it works**
- Users pass what they want to build as a query parameter.
- The OpenAI node generates an interface following the structured output schema defined in the request body (see the sketch below).
- The JSON output is then converted to HTML along with a title.
- The HTML is encapsulated in an HTML node (where the Tailwind CSS script is added).
- The HTML is rendered to the user via the Webhook response.

**Set up steps**
- Create an OpenAI API key.
- Create the OpenAI credentials.
- Use the credentials for both nodes: HTTP Request (as a predefined credential type) and OpenAI.
- Activate your workflow.
- Once active, go to the production URL and pass what you'd like to build as the "query" parameter. Example: https://production_url.com?query=a%20signup%20form

Example of generated page
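For reference, Structured Outputs are enforced through the response_format field of the chat-completions body. A simplified sketch of what the HTTP Request node might send; the schema here is an illustrative reduction, not the template's exact definition:

```javascript
// Simplified sketch of a chat-completions body using Structured Outputs.
// The schema is an illustrative reduction of what the template defines.
const body = {
  model: 'gpt-4o-2024-08-06',
  messages: [
    { role: 'system', content: 'Generate a web interface as structured data.' },
    { role: 'user', content: 'a signup form' }, // the ?query= parameter
  ],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'html_page',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          title: { type: 'string' },
          html: { type: 'string' }, // body markup, styled with Tailwind classes
        },
        required: ['title', 'html'],
        additionalProperties: false,
      },
    },
  },
};
```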
by Dave Bernier
This n8n workflow template uses community nodes and is only compatible with the self-hosted version of n8n.

This template aims to ease the process of deploying workflows from GitHub. It has a companion repository that developers might find useful; see below for more details.

**How it works**
Automatically import and deploy n8n workflows from your GitHub repository to your production n8n instance using a secured, webhook-based approach. This template enables teams to maintain version control of their workflows while ensuring seamless deployment through a CI/CD pipeline.

- Receives webhook notifications from GitHub when changes are pushed to your repository
- Lists all files in the repository and filters for .json workflow files
- Downloads each workflow file and saves it locally
- Imports all workflows into n8n using the CLI import command
- Cleans up temporary files after a successful import

To trigger the deployment, send a POST request to your webhook with the configured credentials (basic auth) and the following body:

```json
{
  "owner": "GITHUB_REPO_OWNER_NAME",
  "repository": "GITHUB_REPOSITORY_NAME"
}
```

**Set up steps**
After importing this template in n8n:
- Set up the webhook basic auth credentials
- Set up the GitHub credentials
- Activate the workflow!

**Companion repository**
There is a companion repository at https://github.com/dynamicNerdsSolutions/n8n-git-flow-template that has a GitHub Action already set up to work with this workflow. It provides a complete development environment with:
- Local n8n instance via Docker
- Automated workflow export and commit scripts
- Version control integration
- CI/CD pipeline setup

This setup allows teams to maintain a clean separation between development and production environments while ensuring reliable workflow deployment.
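For example, a CI step could trigger the deployment with a small Node.js script like this (URL and credentials are placeholders):

```javascript
// Placeholder URL and credentials -- substitute your own values.
const auth = Buffer.from('webhook_user:webhook_password').toString('base64');

await fetch('https://your-n8n-instance.com/webhook/deploy-workflows', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Basic ${auth}`,
  },
  body: JSON.stringify({
    owner: 'GITHUB_REPO_OWNER_NAME',
    repository: 'GITHUB_REPOSITORY_NAME',
  }),
});
```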
by Yang
**Who is this for?**
This workflow is perfect for operations teams, accountants, e-commerce businesses, and finance managers who regularly process digital invoices and need to automate data extraction and record-keeping.

**What problem is this workflow solving?**
Manually reading invoice PDFs, extracting relevant data, and entering it into spreadsheets is time-consuming and error-prone. This workflow automates that process: watching a Google Drive folder, extracting structured invoice data using Dumpling AI, and saving the results into Google Sheets.

**What this workflow does**
- Watches a specific Google Drive folder for new invoices.
- Downloads the uploaded invoice file.
- Converts the file into Base64 format.
- Sends the file to Dumpling AI's extract-document endpoint with a detailed parsing prompt (see the sketch below).
- Parses Dumpling AI's JSON response using a Code node.
- Splits the items array into individual rows using the Split Out node.
- Appends each invoice item to a preformatted Google Sheet along with the full header metadata (order number, PO, addresses, etc.).

**Setup**
- **Google Drive:** Create or select a folder in Google Drive and place the folder ID in the trigger node. Make sure your n8n Google Drive credentials are authorized for access.
- **Google Sheets:** Create a Google Sheet with the following headers: Order number, Document Date, Po_number, Sold to name, Sold to address, Ship to name, Ship to address, Model, Description, Quantity, Unit price, Total price. Paste the Sheet ID and sheet name (Sheet1) into the Google Sheets node.
- **Dumpling AI:** Sign up at Dumpling AI, go to your account settings, and generate your API key. Paste this key into the HTTP header of the Dumpling AI request node. The endpoint used is: https://app.dumplingai.com/api/v1/extract-document
- **Prompt (already included):** This prompt extracts the order number, document date, PO number, shipping/billing details, and detailed line items (model, quantity, unit price, total).

**How to customize this workflow to your needs**
- Adjust the Google Sheet fields to fit your invoice structure.
- Modify the Dumpling AI prompt if your invoices have additional or different data points.
- Add filtering logic if you want to handle different invoice types differently.
- Replace Google Sheets with Airtable or a database if preferred.
- Use a different trigger, such as an email attachment, if invoices arrive via email.
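The Base64 conversion and Dumpling AI call might be sketched like this from a Code node. Treat the request body shape (the file and prompt fields) as an assumption to verify against Dumpling AI's documentation:

```javascript
// Sketch of the Dumpling AI call from an n8n Code node. The request body
// shape (the "file" and "prompt" fields) is an assumption to verify against
// Dumpling AI's docs; the endpoint and API-key header follow the setup above.
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data'); // binary from the Drive download
const base64File = buffer.toString('base64');

const response = await fetch('https://app.dumplingai.com/api/v1/extract-document', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${$env.DUMPLING_API_KEY}`, // placeholder variable name
  },
  body: JSON.stringify({
    file: base64File,
    prompt: 'Extract order number, document date, PO number, sold-to/ship-to details, and line items (model, description, quantity, unit price, total price) as JSON.',
  }),
});

return [{ json: await response.json() }];
```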