by Gareth B. Davies
An automated backup solution designed for self-hosted n8n users to automatically back up their workflows to Bitbucket, leveraging Bitbucket's free private repository offering. Perfect for maintaining version control of your n8n workflows without additional costs.

How it works:
- Runs on a regular schedule to check all workflows in your n8n instance
- Compares each workflow with its version in Bitbucket (the change-detection step is sketched below)
- Only uploads workflows that are new or have changed
- Uses basic rate limiting to stay within Bitbucket's API limits
- Formats filenames for easy tracking and includes timestamps in commit messages
- Handles errors gracefully with automatic retries

Set up steps (10-15 minutes):
1. Create a free Bitbucket account and private repository
2. Create a Bitbucket App Password with repository write access
3. Add Bitbucket credentials to n8n (using your username and app password)
4. Set up n8n API access (generate an API key in your n8n instance)
5. Configure your Bitbucket workspace and repository names in the Set node
6. Optional: Adjust the backup schedule (default: 2 AM daily)

Perfect for n8n self-hosters who want:
- Version control for their workflows
- Automated daily backups
- Free private repository storage
- Easy workflow recovery
- Change tracking over time

The workflow includes basic error handling and rate limiting to ensure reliable backups even with larger numbers of workflows. Adjust your timing based on https://support.atlassian.com/bitbucket-cloud/docs/api-request-limits/.
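To make the compare-before-upload idea concrete, here is a minimal Code-node sketch; the field names `workflow` and `remoteContent` are assumptions for illustration, not the template's actual property names.

```javascript
// Minimal sketch of the change-detection step (assumed field names).
// Flags an item for upload only when it differs from the copy in Bitbucket.

// Recursively sort object keys so key order never causes a false diff.
const sortKeys = (v) =>
  Array.isArray(v)
    ? v.map(sortKeys)
    : v && typeof v === 'object'
      ? Object.fromEntries(Object.keys(v).sort().map((k) => [k, sortKeys(v[k])]))
      : v;

return $input.all().map((item) => {
  const local = item.json.workflow;       // workflow JSON from the n8n API (assumed)
  const remote = item.json.remoteContent; // file content fetched from Bitbucket (assumed)

  const changed =
    !remote || // new workflow: no file in the repository yet
    JSON.stringify(sortKeys(local)) !== JSON.stringify(sortKeys(JSON.parse(remote)));

  return { json: { ...item.json, changed } };
});
```

Downstream, an If node can then route only `changed: true` items to the Bitbucket upload request.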
by Daniel Nolde
What this does
Shows you how to use XML-RPC APIs via the generic HTTP Request node, using the example of posting to a WordPress blog. This is also a feasible workaround if a specific n8n integration does not work or stops working (which happens, e.g., with the WordPress node).

How it works
- First, the XML payload for the request is prepared in a Code node, which also properly escapes special characters in the values that you want to send to the XML-RPC endpoint (a sketch of this step follows below)
- Then, the HTTP Request node sends the request using the HTTP POST method
- Last, the returned XML response is converted to JSON, which a conditional node uses to determine whether the operation was successful or not

Setup steps:
1. Import the workflow
2. Ensure you have a WordPress blog with a user that has an app password
3. Edit the "Settings" node and enter your individual values for URL/user/app password
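For reference, a minimal sketch of the payload-preparation Code node, assuming a `wp.newPost` call and placeholder field names from the Settings node; it is not the template's exact code.

```javascript
// Sketch of the payload-preparation step (assumed field names).
// Escapes XML special characters, then builds a wp.newPost XML-RPC call.
const escapeXml = (s) =>
  String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');

const { user, appPassword, title, content } = $json; // from the Settings node (assumed)

const xml = `<?xml version="1.0"?>
<methodCall>
  <methodName>wp.newPost</methodName>
  <params>
    <param><value><int>0</int></value></param>
    <param><value><string>${escapeXml(user)}</string></value></param>
    <param><value><string>${escapeXml(appPassword)}</string></value></param>
    <param><value><struct>
      <member><name>post_title</name><value><string>${escapeXml(title)}</string></value></member>
      <member><name>post_content</name><value><string>${escapeXml(content)}</string></value></member>
      <member><name>post_status</name><value><string>publish</string></value></member>
    </struct></value></param>
  </params>
</methodCall>`;

return [{ json: { xml } }]; // the HTTP Request node sends {{ $json.xml }} as the POST body
```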
by Fan Luo
Daily Company News Bot
This n8n template demonstrates how to use the free FinnHub API to retrieve company news for a list of stock tickers and post messages to a Slack channel at a pre-scheduled time.

How it works
- First, we define the list of stock tickers you are interested in
- Loop over the items, calling the FinnHub API to get the latest company news for each ticker (see the sketch below)
- Format the company news as Markdown text that can be sent to Slack
- Post a new message in the Slack channel
- Wait for 5 seconds, then move to the next ticker

How to use
Simply set up a schedule trigger to run the workflow automatically.

Requirements
- FinnHub API key
- Slack channel webhook

Need Help? Contact me via My Blog or ask in the Forum! Happy Hacking!
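A minimal Code-node sketch of the fetch-and-format step for one ticker; the input field `ticker`, the `FINNHUB_API_KEY` environment variable, and the one-day window are placeholder assumptions.

```javascript
// Sketch: fetch the latest company news for one ticker and format it for Slack.
const symbol = $json.ticker;                              // assumed input field
const to = new Date().toISOString().slice(0, 10);         // today, YYYY-MM-DD
const from = new Date(Date.now() - 86400e3).toISOString().slice(0, 10);

// FinnHub company-news endpoint; the key is read from the environment here.
const url = `https://finnhub.io/api/v1/company-news?symbol=${symbol}&from=${from}&to=${to}&token=${$env.FINNHUB_API_KEY}`;
const news = await this.helpers.httpRequest({ url, json: true }); // n8n's built-in request helper

// Slack mrkdwn: bulleted, linked headlines for the first few articles.
const text = news
  .slice(0, 5)
  .map((n) => `• <${n.url}|${n.headline}> (${n.source})`)
  .join('\n');

return [{ json: { symbol, text } }];
```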
by Alex Kim
Automate Video Creation with Luma AI Dream Machine and Airtable (Part 2)

Description
This is the second part of the Luma AI Dream Machine automation. It captures the webhook response from Luma AI after video generation is complete, processes the data, and automatically updates Airtable with the video and thumbnail URLs. This completes the end-to-end automation for video creation and tracking.
👉 Airtable Base Template
👉 Tutorial Video

Setup
1. Luma AI Setup
- Ensure you've created an account with Luma AI and generated an API key.
- Confirm that the API key has permission to manage video requests.
2. Airtable Setup
Make sure your Airtable base includes the following fields (set up in Part 1). Use the Airtable Base Template linked above to simplify setup.
- **Generation ID** – To match incoming webhook data.
- **Status** – Workflow status (e.g., "Done").
- **Video URL** – Stores the generated video URL.
- **Thumbnail URL** – Stores the thumbnail URL.
3. n8n Setup
- Ensure that the n8n workflow from Part 1 is set up and configured.
- Import this workflow and connect it to the webhook callback from Luma AI.

How It Works
1. Webhook Trigger
The Webhook node listens for a POST response from Luma AI once video generation is finished. The response includes:
- Video URL – Direct link to the video.
- Thumbnail URL – Link to the video thumbnail.
- Generation ID – Used to match the record in Airtable.
2. Process Webhook Data
- The Set node extracts the video data from the webhook response (a sketch follows below).
- The If node checks if the video URL is valid before proceeding.
3. Store in Airtable
The Airtable node updates the record with:
- Video URL – Direct link to the video.
- Thumbnail URL – Link to the video thumbnail.
- Status – Marked as "Done."
Uses the Generation ID to match and update the correct record.

Why This Workflow is Useful
✅ Automates the completion step for video creation
✅ Ensures accurate record-keeping by matching generation IDs
✅ Simplifies the process of managing and organizing video content
✅ Reduces manual effort by automating the update process

Next Steps
**Future Enhancements** – Adding more complex post-processing, video trimming, and multi-platform publishing.
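To make the data flow concrete, here is a minimal Code-node sketch of the extraction-and-validation step; the payload paths are illustrative assumptions, not Luma AI's documented schema, so inspect a real callback before relying on them.

```javascript
// Sketch of the webhook-processing step. The payload paths below are
// assumptions for illustration, not Luma AI's documented schema.
const body = $json.body ?? $json; // the n8n Webhook node nests the POST body under `body`

const generationId = body.id;                    // assumed: generation ID
const videoUrl = body.assets?.video ?? null;     // assumed: video asset URL
const thumbnailUrl = body.assets?.image ?? null; // assumed: thumbnail URL

// Mirror the If node: only pass items with a usable video URL.
if (!videoUrl || !videoUrl.startsWith('http')) {
  return []; // drop invalid callbacks
}

return [{ json: { generationId, videoUrl, thumbnailUrl, status: 'Done' } }];
```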
by Shahrear
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. Transform your expense tracking with automated AI receipt processing that extracts data and organizes it instantly.

What this workflow does
- Monitors Google Drive for new receipt uploads (images/PDFs)
- Downloads and processes files automatically
- Extracts key data using the VLM Run community node (merchant, amount, currency, date)
- Saves structured data to Google Sheets for easy tracking

Setup
Prerequisites: Google Drive/Sheets accounts, VLM Run API credentials, an n8n instance. You need to install the VLM Run community node: go to Settings -> Community Nodes -> Install and search for @vlm-run/n8n-nodes-vlmrun.

Quick Setup:
1. Configure Google Drive OAuth2 and create a receipt upload folder
2. Add VLM Run API credentials
3. Create a Google Sheet with columns: Customer, Merchant, Amount, Currency, Date
4. Update folder/sheet IDs in the workflow nodes
5. Test and activate

How to customize this workflow to your needs
Extend functionality by:
- Adding expense categories and approval workflows
- Connecting to accounting software (QuickBooks, Xero)
- Including Slack notifications for processed receipts
- Adding data validation and duplicate detection (see the sketch below)

This workflow transforms manual receipt processing into an automated system that saves hours while improving accuracy.
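As one way to add the duplicate detection mentioned above, a minimal Code-node sketch; the "Read Sheet" node it references is hypothetical and would need to be added before this step.

```javascript
// Sketch of a duplicate-detection Code node placed before the Google Sheets
// write. Field names match the sheet columns; the "Read Sheet" node that
// supplies existing rows is a hypothetical addition.
const receipt = $json;                      // parsed receipt from VLM Run
const existingRows = $('Read Sheet').all(); // hypothetical node reading current rows

const key = (r) => `${r.Merchant}|${r.Amount}|${r.Date}`.toLowerCase();
const seen = new Set(existingRows.map((row) => key(row.json)));

// Drop the item if an identical merchant/amount/date row already exists.
return seen.has(key(receipt)) ? [] : [{ json: receipt }];
```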
by Lucas Peyrin
How it works
This workflow is a robust and forgiving JSON parser designed to handle malformed or "dirty" JSON strings often returned by AI models or scraped from web pages. It takes a text string as input and attempts to extract and parse a valid JSON object from it.

- Cleans Input: It starts by trimming whitespace and removing common Markdown code fences (like ```json).
- Applies Multiple Fixes: It systematically attempts to correct common JSON errors in a specific order:
  - Escapes unescaped control characters (like newlines) within strings.
  - Fixes invalid backslash escape sequences.
  - Removes trailing commas.
  - Intelligently attempts to fix unescaped double quotes inside string values.
- Parses Strategically: If a direct parse fails, it tries to extract a potential JSON object from the text (e.g., finding a {...} block inside a larger sentence) and then re-applies the cleaning logic to that extracted portion.
- Outputs Clean Data: If successful, it outputs the parsed JSON fields. By default, it removes the detailed parsing_status object, but you can deactivate the final "Set" node to keep it for debugging.

A condensed sketch of this strategy follows below.

Set up steps
Setup time: ~1 minute
This workflow is designed to be used as a sub-workflow and requires no internal setup.
1. In your main workflow, add an Execute Sub-Workflow node where you need to parse a messy JSON string.
2. In the Workflow parameter, select this "Robust JSON Parser" workflow.
3. Ensure the data you send to the node is a JSON object containing a text field, where the value of text is the string you want to parse. For example: { "text": "{\\\"key\\\": \\\"some broken json...\\\"}" }.

The workflow will return the successfully parsed data. To see a detailed log of the cleaning process, simply deactivate the final Remove parsing_status node inside this workflow.
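A condensed sketch of the parsing strategy, not the template's exact code; the real workflow applies more fixes (control characters, backslash escapes, unescaped quotes) than shown here.

```javascript
// Condensed sketch of the "clean, fix, retry" strategy. Each attempt runs
// in order until JSON.parse succeeds.
function robustParse(text) {
  const s = String(text).trim()
    .replace(/^```(?:json)?\s*/i, '') // strip opening Markdown fence
    .replace(/\s*```$/, '');          // strip closing fence

  const attempts = [
    (t) => t,                               // 1. parse as-is
    (t) => t.replace(/,\s*([}\]])/g, '$1'), // 2. drop trailing commas
  ];

  for (const attempt of attempts) {
    try { return JSON.parse(attempt(s)); } catch {}
  }

  // 3. Extract a {...} block from surrounding prose and retry on it.
  const match = s.match(/\{[\s\S]*\}/);
  if (match && match[0] !== s) return robustParse(match[0]);
  throw new Error('Unable to recover valid JSON');
}

// Example: robustParse('Here you go: ```json {"a": 1,} ```') returns { a: 1 }
```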
by Kunsh
A streamlined AI-powered tool that extracts actionable technical insights from HackerOne security reports for advanced bug bounty hunters.

How It Works
Send any HackerOne report URL (e.g., https://hackerone.com/reports/123456) to the chat interface. The AI agent will:
- Fetch the report JSON automatically (see the sketch below)
- Analyze it for unique techniques, payloads, and root causes
- Extract reusable insights in a structured format
- Summarize the report's practical pentesting value

Setup Requirements
- Google Gemini API credentials configured
- Chat interface deployed and accessible
- HackerOne report URLs

Output Format
- Summary: One-liner impact statement
- Techniques: Payloads, code snippets, exploitation steps
- Pro Tips: Reusable insights for future hunts

Perfect for rapid triage and building your personal exploit knowledge base.
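A minimal sketch of the fetch step, assuming the convention that publicly disclosed HackerOne reports expose a JSON view at the report URL plus `.json`; the field names at the end are assumptions too.

```javascript
// Sketch of the fetch step. Assumes publicly disclosed reports serve JSON
// at <report URL>.json, and that `chatInput` carries the user's message.
const reportUrl = $json.chatInput?.match(/https:\/\/hackerone\.com\/reports\/\d+/)?.[0];
if (!reportUrl) {
  return [{ json: { error: 'No HackerOne report URL found in the message.' } }];
}

const report = await this.helpers.httpRequest({
  url: `${reportUrl}.json`,
  json: true,
});

// Hand the interesting fields to the AI agent (field names assumed).
return [{ json: { title: report.title, details: report.vulnerability_information } }];
```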
by SamirLiu
📝 Overview
This workflow leverages Google Gemini 2.0 Flash multimodal AI to automatically generate detailed descriptions of video content from any public URL. It streamlines video understanding, making it ideal for content cataloging, accessibility, and content moderation.

💡 Use Cases
- ♿ Accessibility: Automatically generate detailed video descriptions for visually impaired users.
- 🛡️ Content Moderation: Detect inappropriate or off-brand material without manual watching.
- 🗂️ Media Cataloging: Enrich your media library with automatically extracted metadata.
- 📈 Marketing & Branding: Gain fast insights into key elements, tone, and branding in video content.

⚙️ Setup Instructions
1. 🔑 Get a Gemini API Key: Register at ai.google.dev and create an API key. Before running the workflow, set your Gemini API key as an environment variable named GeminiKey for secure access within the workflow. In the Set Input node, reference this environment variable instead of hardcoding the key (see the snippet below).
2. 🌐 Configure Video URL: Replace the sample URL in the Set Input node with your desired public video URL. Ensure the video is directly accessible (no login or special permissions required).
3. 📝 Optional: Customize the Analysis: Edit the prompt in the Analyze video Gemini node to focus on the most relevant video details for your use case (e.g., branding, key actions, visual elements).

🔒 Security Tip
Use n8n's credentials manager or environment variables (like GeminiKey) to store your API key securely. Avoid hardcoding API keys directly in workflow nodes, especially in production environments.

🔄 How It Works
1. 📥 Download the video from the provided URL.
2. ☁️ Upload the video to Gemini's server for processing.
3. ⏳ Wait for Gemini to complete processing.
4. 🤖 Analyze the video with Gemini AI using your customized prompt.
5. 📄 Output a comprehensive description of the video as videoDescription.

⚡ Technical Details
- Uses HTTP Request nodes to interact with Gemini API endpoints.
- Handles file download, upload, status checking, and result retrieval.
- Customizable Gemini AI parameters for fine-tuned responses.
- Main output: videoDescription (detailed text describing video content).

🚀 Quickstart
1. Set your Gemini API key as the GeminiKey environment variable and configure your video URL in the workflow.
2. Execute the workflow.
3. Retrieve your rich, AI-generated video description for downstream use such as automation, tagging, or reporting.
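For step 1, n8n expressions can read environment variables via $env; a minimal snippet showing the reference, where the `apiKey` field name is an assumption:

```javascript
// In an n8n expression field (e.g. an assumed `apiKey` value in the Set
// Input node), read the key from the environment instead of hardcoding it:
//   {{ $env.GeminiKey }}
//
// The same works inside a Code node:
const apiKey = $env.GeminiKey; // requires env access to be allowed on your n8n instance
return [{ json: { apiKey } }];
```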
by Joseph LePage
This n8n workflow demonstrates multiple ways to harness DeepSeek's AI models in your automation pipeline!

🌟 Core Features
Multiple Integration Methods 🔌
- Local deployment using Ollama for DeepSeek-R1
- Direct API integration with DeepSeek Chat V3
- Conversational agent with memory buffer
- HTTP request implementation with both raw and JSON formats

Model Options 🧠
- DeepSeek Chat V3 for general conversation
- DeepSeek-R1 for advanced reasoning
- Memory-enabled agent for persistent context

Quick Setup 🛠️
API Configuration
- Base URL: https://api.deepseek.com
- Get your API key from platform.deepseek.com/api_keys

Local Setup 💻
- Install Ollama for local deployment
- Set up DeepSeek-R1 via Ollama
- Configure local credentials in n8n

Implementation Details 🔧
Conversational Agent
- Window Buffer Memory for context
- Customizable system messages
- Built-in error handling with retries

API Endpoints 🌐
- Chat completions for V3 and R1 models
- Compatible with the OpenAI API format (see the sketch below)
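Because the API follows the OpenAI format, a raw HTTP call is straightforward. A minimal sketch for Node 18+ (run as an ES module); the model identifiers `deepseek-chat` (V3) and `deepseek-reasoner` (R1) are assumptions, so check platform.deepseek.com for the current names.

```javascript
// Minimal sketch of a raw chat-completions call against the DeepSeek API.
// Model names are assumptions; verify them on platform.deepseek.com.
const response = await fetch('https://api.deepseek.com/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'deepseek-chat', // assumed V3 identifier; 'deepseek-reasoner' for R1
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Hello!' },
    ],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);
```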
by Jimleuk
This n8n template demonstrates how to build a simple PostgreSQL MCP server to manage your PostgreSQL database, such as HR, payroll, sales, inventory, and more! This MCP example is based on an official MCP reference implementation, which can be found here: https://github.com/modelcontextprotocol/servers/tree/main/src/postgres

How it works
- An MCP server trigger is used and connected to 5 tools: 2 PostgreSQL and 3 custom workflow.
- The 2 PostgreSQL tools are simple read-only queries, so the PostgreSQL tool can be used directly.
- The 3 custom workflow tools are used for select, insert, and update queries, as these are operations which require a bit more discretion. Whilst it may be easier to allow the agent to use raw SQL queries, we may find it a little safer to just allow for the parameters instead. The custom workflow tool allows us to define this restricted schema for tool input, which we'll use to construct the SQL statement ourselves (see the sketch below).
- All 3 custom workflow tools trigger the same "Execute workflow" trigger in this very template, which has a switch to route the operation to the correct handler.
- Finally, we use our standard PostgreSQL node to handle select, insert, and update operations. The responses are then sent back to the MCP client.

How to use
This PostgreSQL MCP server allows any compatible MCP client to manage a PostgreSQL database by supporting select, create, and update operations. You will need to have a database available before you can use this server.
Connect your MCP client by following the n8n guidelines here: https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.mcptrigger/#integrating-with-claude-desktop
Try the following queries in your MCP client:
- "Please help me check if Alex has an entry in the users table. If not, please help me create a record for her."
- "What was the top selling product in the last week?"
- "How many high priority support tickets are still open this morning?"

Requirements
- PostgreSQL for the database. This can be an external database such as Supabase or one you host internally.
- An MCP client or agent, such as Claude Desktop: https://claude.ai/download

Customising this workflow
If the scope of schemas or tables is too open, try restricting it so the MCP server serves a specific purpose for business operations, e.g. confine querying and editing to HR-only tables before providing access to people in that department.
Remember to set the MCP server to require credentials before going to production and sharing this MCP server with others!
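A minimal Code-node sketch of what the insert handler could look like; the tool-input field names (`table`, `columns`, `values`) and the table whitelist are assumptions, not the template's exact schema.

```javascript
// Sketch of the insert handler: build a parameterized SQL statement from
// the restricted tool input instead of accepting raw SQL from the agent.
// Input field names are assumptions for illustration.
const { table, columns, values } = $json;

// Whitelist identifiers; never interpolate agent-supplied names blindly.
const allowedTables = ['users', 'tickets', 'products']; // adjust to your schema
if (!allowedTables.includes(table)) {
  throw new Error(`Table "${table}" is not permitted`);
}

const cols = columns.map((c) => `"${c.replace(/"/g, '')}"`).join(', ');
const params = values.map((_, i) => `$${i + 1}`).join(', ');

// Pass `query` and `values` on to the PostgreSQL node, which binds the
// placeholders safely at execution time.
return [{
  json: {
    query: `INSERT INTO "${table}" (${cols}) VALUES (${params})`,
    values,
  },
}];
```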
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. Connects Discord conversations with custom automations, APIs, and AI services through n8n.
Full guide on GitHub: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

Discord Bot Summary

Overview
The Discord bot listens for mentions, forwards questions to an n8n workflow, processes responses, and replies in Discord. This workflow is intended for all Discord users who want to offer AI interactions in their channels.

What do you need?
You need a Discord account as well as a Google Cloud Project.

Key Features
1. Listens for Mentions
- The bot monitors Discord channels for messages that mention it.
- Optional Configuration: Can be set to respond only in a specific channel.
2. Forwards Questions to n8n
When a user mentions the bot and asks a question:
- The bot extracts the question.
- Sends the question, along with channel and user information, to an n8n webhook URL.
3. Processes Data in n8n
The n8n workflow receives the question and can:
- Interact with AI services (e.g., generating responses).
- Access databases or external APIs.
- Perform custom logic.
n8n formats the response and sends it back to the bot.
4. Replies to Discord with n8n's Response
- The bot receives the response from n8n.
- It replies to the user's message in the Discord channel with the answer.
- Long Responses: Handles responses exceeding Discord's 2000-character limit by chunking them into multiple messages.
5. Error Handling
Includes error handling for:
- Issues with n8n communication.
- Response formatting problems.
Manages cases where:
- No question is asked.
- An invalid response is received from n8n.
6. Typing Indicator
While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.
7. Status Update
For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing their request.

Step-by-Step Setup Guide as per the GitHub Instructions

Key Takeaways
- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message-chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

1. n8n: Create & Expose Your Webhook
- New Workflow: Log into your n8n instance. Click Create Workflow (➕), name it e.g. Discord Bot Handler.
- Webhook Trigger: Add a node (➕) → search Webhook. Set: Authentication: None (or your choice); HTTP Method: POST; Path: e.g. /discord-bot. Click Execute Node to activate.
- Copy Webhook URL: After execution, copy the Production Webhook URL. You'll paste this into your bot's .env.
- Build Your Logic: Chain additional nodes (AI, database lookups, etc.) as required.
- Format the JSON Response: Insert a Function node before the end:

```javascript
return { json: { answer: "Your processed reply" } };
```

- Respond to Webhook: Add Respond to Webhook as the final node. Point it at your Function node's output (with the answer field).
- Activate: Toggle Active in the top-right and Save.

2. Discord Developer Portal: App & Bot
- New Application: Visit the Discord Developer Portal. Click New Application, name it. Go to Bot → Add Bot.
- Enable Intents & Permissions: Under Privileged Gateway Intents, toggle Message Content Intent. Under Bot Permissions, check: Read Messages/View Channels; Send Messages; Read Message History.
- Grab Your Token: In Bot → click Copy (or Reset Token). Store it securely.
- Invite Link (OAuth2 URL): Go to OAuth2 → URL Generator. Select scopes: bot, applications.commands. Under Bot Permissions, select the same permissions as above. Copy the generated URL, open it in your browser, and invite your bot.

3. Server Prep: Node.js & Project Setup
- Install Node.js v20.x:

```bash
sudo apt purge nodejs npm
sudo apt autoremove
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs
node -v   # Expect v20.x.x
npm -v    # Expect 10.x.x
```

- Project Folder:

```bash
mkdir discord-bot
cd discord-bot
```

- Initialize & Dependencies:

```bash
npm init -y
npm install discord.js axios dotenv
```

4. Bot Code & Configuration
- Environment Variables: Create .env (nano .env) and populate:

```
DISCORD_BOT_TOKEN=your_bot_token
N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
# Optional: restrict to one channel
TARGET_CHANNEL_ID=123456789012345678
```

- Bot Script: Create index.js (nano index.js) and implement the following (a condensed sketch appears after this guide):
  - Import dotenv, discord.js, axios.
  - Set up the client with the MessageContent intent.
  - On messageCreate: ignore bots or non-mentions; (optional) filter by channel ID; extract and validate the user's question; send "typing..." every 5 s; after 20 s send a status update if still processing; POST to your n8n webhook with question, channelId, userId, userName; parse various response shapes to find answer; if answer.length ≤ 2000, message.reply(answer); else split into ~1900-char chunks at sentence/paragraph breaks and send sequentially; on errors, clear intervals, log details, and reply with an error message.
  - Login:

```javascript
client.login(process.env.DISCORD_BOT_TOKEN);
```

5. Deployment: Keep It Alive with PM2
- Install PM2:

```bash
npm install -g pm2
```

- Start & Monitor:

```bash
pm2 start index.js --name discord-bot
pm2 status
pm2 logs discord-bot
```

- Auto-Start on Boot:

```bash
pm2 startup
# Follow the printed command (e.g. sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user)
pm2 save
```

6. Test & Troubleshoot
- Functional Test: In your Discord server, send: @YourBot What's the weather like? Expect a reply from your n8n workflow.
- Common Pitfalls:
  - No reply → check pm2 logs discord-bot.
  - Intent errors → verify Message Content Intent in the Portal.
  - Webhook failures → ensure the workflow is active and the URL is correct.
  - Formatting issues → confirm your Function node returns json.answer.
- Inspect Raw Data: Search your logs for "Complete response from n8n:" to debug payload shapes.
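As referenced in step 4, a condensed sketch of index.js, assuming discord.js v14; it follows the structure described above but trims the 20-second status update and sentence-aware chunking, so treat it as a starting point rather than the repository's exact code.

```javascript
// Condensed index.js sketch (discord.js v14 assumed); a starting point,
// not the repository's exact code.
require('dotenv').config();
const { Client, GatewayIntentBits } = require('discord.js');
const axios = require('axios');

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent, // requires the intent toggled in the Portal
  ],
});

client.on('messageCreate', async (message) => {
  if (message.author.bot || !message.mentions.has(client.user)) return;
  if (process.env.TARGET_CHANNEL_ID && message.channelId !== process.env.TARGET_CHANNEL_ID) return;

  // Strip the mention to get the actual question.
  const question = message.content.replace(/<@!?\d+>/g, '').trim();
  if (!question) return message.reply('Please ask a question!');

  // Keep the typing indicator alive while n8n processes the request.
  const typing = setInterval(() => message.channel.sendTyping(), 5000);
  try {
    const { data } = await axios.post(process.env.N8N_WEBHOOK_URL, {
      question,
      channelId: message.channelId,
      userId: message.author.id,
      userName: message.author.username,
    });

    const answer = data.answer ?? data[0]?.answer ?? 'No answer received.';
    // Discord caps messages at 2000 chars; send in ~1900-char chunks.
    for (let i = 0; i < answer.length; i += 1900) {
      await message.reply(answer.slice(i, i + 1900));
    }
  } catch (err) {
    console.error('n8n communication error:', err.message);
    await message.reply('Sorry, something went wrong processing your request.');
  } finally {
    clearInterval(typing);
  }
});

client.login(process.env.DISCORD_BOT_TOKEN);
```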
by Agent Studio
This workflow is an experiment to build HTML pages from a user input using the new Structured Output from OpenAI.

How it works:
- Users add what they want to build as a query parameter
- The OpenAI node generates an interface following a structured output defined in the body (a sketch of such a request body follows below)
- The JSON output is then converted to HTML along with a title
- The HTML is encapsulated in an HTML node (where the Tailwind CSS script is added)
- The HTML is rendered to the user via the Webhook response

Set up steps
1. Create an OpenAI API key
2. Create the OpenAI credentials
3. Use the credentials for both nodes: HTTP Request (as Predefined Credential type) and OpenAI
4. Activate your workflow
5. Once active, go to the production URL and add what you'd like to build as the parameter "query"
Example: https://production_url.com?query=a%20signup%20form

Example of generated page
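For context, a minimal sketch of what a Structured Output request body can look like; the schema here (a titled list of HTML elements) is an illustrative assumption, not the template's exact schema.

```javascript
// Sketch of a Structured Outputs request body (illustrative schema).
// The model is forced to return JSON matching the schema exactly.
const body = {
  model: 'gpt-4o-2024-08-06', // a model with Structured Outputs support
  messages: [
    { role: 'system', content: 'You design simple web interfaces.' },
    { role: 'user', content: 'a signup form' }, // the "query" parameter
  ],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'interface',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          title: { type: 'string' },
          elements: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                tag: { type: 'string' },   // e.g. "input", "button"
                label: { type: 'string' },
              },
              required: ['tag', 'label'],
              additionalProperties: false,
            },
          },
        },
        required: ['title', 'elements'],
        additionalProperties: false,
      },
    },
  },
};
```

The JSON the model returns under this schema is what the downstream node converts into HTML before the Tailwind wrapper is applied.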