by Jordan Lee
This n8n template demonstrates how to use AI as a comprehensive personal assistant with multiple specialized agents. Use cases include email management, scheduling, web search, calculations, and more - all automated through AI coordination.

**Good to know**
- This template integrates multiple AI services through OpenRouter
- Each agent specializes in different tasks (Gmail, Calendar, Search, etc.)
- Memory persistence maintains context across interactions

**How it works**
- The workflow is triggered by Telegram messages (can be replaced with other triggers)
- A router node directs requests to the appropriate specialized agent (a sketch of this routing step follows at the end of this description)
- Agents include: Gmail for email management, Calculator for math operations, Google Search for information retrieval, Calendar for scheduling, and Contacts for CRM functions
- The OpenRouter Chat Model coordinates responses
- Final responses are sent back through Telegram

**How to use**
- Connect your Telegram bot credentials
- Configure each service with appropriate API keys
- The system will automatically route requests to the right agent

**Requirements**
- OpenRouter account for AI services
- Telegram bot token
- Google API credentials for relevant services

**Customising this workflow**
- Add more specialized agents as needed
- Replace Telegram with other communication channels
- Adjust routing logic for different use cases
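For orientation, here is a minimal sketch of what a routing step could look like in an n8n Code node, assuming the Telegram trigger provides the user message as `$json.message.text`. The keyword map and agent names are illustrative only; the template itself lets the LLM coordinate agents, which is more flexible than keyword matching:

```javascript
// Hypothetical routing logic for an n8n Code node (illustrative, not the
// template's exact implementation).
const text = ($json.message?.text || '').toLowerCase();

// Illustrative keyword-to-agent map; a production router would typically
// let the LLM classify intent instead of matching keywords.
const routes = [
  { agent: 'gmail', keywords: ['email', 'inbox', 'send mail'] },
  { agent: 'calendar', keywords: ['schedule', 'meeting', 'event'] },
  { agent: 'calculator', keywords: ['calculate', 'sum', 'convert'] },
  { agent: 'search', keywords: ['search', 'look up', 'find'] },
];

const match = routes.find(r => r.keywords.some(k => text.includes(k)));

// Fall back to the search agent when nothing matches.
return [{ json: { agent: match ? match.agent : 'search', text: $json.message?.text } }];
```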
by SamirLiu
**📝 Overview**
This workflow leverages Google Gemini 2.0 Flash multimodal AI to automatically generate detailed descriptions of video content from any public URL. It streamlines video understanding, making it ideal for content cataloging, accessibility, and content moderation.

**💡 Use Cases**
- ♿ Accessibility: Automatically generate detailed video descriptions for visually impaired users.
- 🛡️ Content Moderation: Detect inappropriate or off-brand material without manual watching.
- 🗂️ Media Cataloging: Enrich your media library with automatically extracted metadata.
- 📈 Marketing & Branding: Gain fast insights into key elements, tone, and branding in video content.

**⚙️ Setup Instructions**
- 🔑 Get a Gemini API Key: Register at ai.google.dev and create an API key. Before running the workflow, set your Gemini API key as an environment variable named GeminiKey for secure access within the workflow. In the Set Input node, reference this environment variable instead of hardcoding the key.
- 🌐 Configure Video URL: Replace the sample URL in the Set Input node with your desired public video URL. Ensure the video is directly accessible (no login or special permissions required).
- 📝 Optional: Customize the Analysis: Edit the prompt in the Analyze video Gemini node to focus on the most relevant video details for your use case (e.g., branding, key actions, visual elements).

**🔒 Security Tip**
Use n8n's credentials manager or environment variables (like GeminiKey) to store your API key securely. Avoid hardcoding API keys directly in workflow nodes, especially in production environments.

**🔄 How It Works**
1. 📥 Download the video from the provided URL.
2. ☁️ Upload the video to Gemini's server for processing.
3. ⏳ Wait for Gemini to complete processing.
4. 🤖 Analyze the video with Gemini AI using your customized prompt (a rough sketch of these API calls follows below).
5. 📄 Output a comprehensive description of the video as videoDescription.

**⚡ Technical Details**
- Uses HTTP Request nodes to interact with Gemini API endpoints.
- Handles file download, upload, status checking, and result retrieval.
- Customizable Gemini AI parameters for fine-tuned responses.
- Main output: videoDescription (detailed text describing video content).

**🚀 Quickstart**
1. Set your Gemini API key as the GeminiKey environment variable and configure your video URL in the workflow.
2. Execute the workflow.
3. Retrieve your rich, AI-generated video description for downstream use such as automation, tagging, or reporting.
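As a point of reference, the two central Gemini calls might look like the following JavaScript sketch: uploading the file and then requesting a description. Endpoint paths and field names follow the public Gemini REST documentation, but treat the details (model name, simplified upload, `videoBuffer`) as assumptions and check the workflow's own HTTP Request nodes for the exact parameters:

```javascript
// Rough sketch, not the workflow's exact nodes.
const apiKey = process.env.GeminiKey; // set as described above

// 1. Upload the downloaded video bytes. Simplified: the real Files API
//    uses a resumable upload protocol; videoBuffer is assumed to hold the
//    bytes downloaded in step 1.
const upload = await fetch(
  `https://generativelanguage.googleapis.com/upload/v1beta/files?key=${apiKey}`,
  { method: 'POST', headers: { 'Content-Type': 'video/mp4' }, body: videoBuffer }
);
const { file } = await upload.json(); // file.uri and file.state

// 2. Once file.state is ACTIVE, ask the model to describe the video.
const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=${apiKey}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{
        parts: [
          { file_data: { mime_type: 'video/mp4', file_uri: file.uri } },
          { text: 'Describe this video in detail.' },
        ],
      }],
    }),
  }
);
const data = await res.json();
const videoDescription = data.candidates?.[0]?.content?.parts?.[0]?.text;
```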
by Ankur Pata
**✨ What It Does**
Mello is a Claude-powered Slack assistant that helps you stay on top of unread messages across all your channels. It:
- Summarizes conversations contextually using Claude AI.
- Generates reply suggestions and sends them as private (ephemeral) Slack messages.
- Lets you respond instantly with one-click AI-suggested replies.

Perfect for busy teams, founders, and anyone looking to reduce Slack noise and save hours each week.

**🔧 Setup Instructions**
1. Create a Slack App
   - Go to Slack API → Your Apps
   - Click Create New App and set it up for your workspace
   - Under OAuth & Permissions, add Bot Token Scopes: commands, chat:write, channels:history, users:read; and User Token Scopes: channels:history, chat:write
   - Enable Interactivity, and point the Request URL to your n8n webhook (e.g. /slash-summarize)
2. Add Claude API
   - Get an API key from Claude (Anthropic)
   - In n8n, set up the Claude API credential (or switch to OpenAI)
3. Import This Workflow
   - Go to your n8n instance, click Import, and paste this template
   - Update any placeholders (Slack app, Claude key, webhook URLs)
   - Follow the inline sticky notes for guidance
4. Test It
   - Type /summarize in any Slack channel
   - Mello will fetch unread messages, summarize them, and show reply buttons in a private message (a sketch of the ephemeral delivery call is included below)

⏱ Setup time: ~10 minutes

**🛠 Workflow Highlights**
- Slash command trigger (/summarize)
- Slack API integration to fetch messages
- Claude AI for contextual summaries
- Reply suggestions with smart buttons
- Private Slack delivery (ephemeral messages)
- Designed to be easily extended (e.g. add support for OpenAI, custom storage)

**🔒 Note**
This is a lite preview of the full Mello workflow. ✅ The full version includes:
- Slack reply buttons with thread context
- Full OAuth flow with token storage
- MongoDB integration
- Custom Claude/OpenAI configuration
- Hosted version with onboarding, branding & support

💡 Want access to the complete version? 📩 Email nina@baloon.dev
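For reference, an ephemeral delivery with suggested-reply buttons could look like the sketch below. This is not the template's exact node configuration; the channel/user IDs, summary text, and button values are placeholders. Slack's `chat.postEphemeral` method makes the message visible only to the named user:

```javascript
// Hypothetical HTTP Request body for Slack's chat.postEphemeral method.
// Sent with the bot token: POST https://slack.com/api/chat.postEphemeral
// Headers: Authorization: Bearer xoxb-..., Content-Type: application/json
const payload = {
  channel: 'C0123456789',               // channel where /summarize was typed
  user: 'U0123456789',                  // user who invoked the command
  text: 'Summary of unread messages',   // fallback text for notifications
  blocks: [
    {
      type: 'section',
      text: { type: 'mrkdwn', text: '*Summary:* Team agreed to ship on Friday.' },
    },
    {
      type: 'actions',
      elements: [
        { type: 'button', text: { type: 'plain_text', text: 'Sounds good!' }, value: 'reply_1' },
        { type: 'button', text: { type: 'plain_text', text: 'Can we push to Monday?' }, value: 'reply_2' },
      ],
    },
  ],
};
```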
by Joseph LePage
This n8n workflow demonstrates multiple ways to harness DeepSeek's AI models in your automation pipeline!

**🌟 Core Features**
- Multiple Integration Methods 🔌
  - Local deployment using Ollama for DeepSeek-R1
  - Direct API integration with DeepSeek Chat V3
  - Conversational agent with memory buffer
  - HTTP request implementation with both raw and JSON formats
- Model Options 🧠
  - DeepSeek Chat V3 for general conversation
  - DeepSeek-R1 for advanced reasoning
  - Memory-enabled agent for persistent context

**Quick Setup 🛠️**
- API Configuration
  - Base URL: https://api.deepseek.com
  - Get your API key from platform.deepseek.com/api_keys
- Local Setup 💻
  - Install Ollama for local deployment
  - Set up DeepSeek-R1 via Ollama
  - Configure local credentials in n8n

**Implementation Details 🔧**
- Conversational Agent
  - Window Buffer Memory for context
  - Customizable system messages
  - Built-in error handling with retries
- API Endpoints 🌐
  - Chat completions for V3 and R1 models (a request sketch follows below)
  - OpenAI API format compatible
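Because the endpoint is OpenAI-compatible, a direct chat completion request is straightforward. A minimal sketch, assuming the base URL above and the commonly documented model identifiers (verify the current names on platform.deepseek.com):

```javascript
// Minimal DeepSeek chat completion sketch using the OpenAI-compatible API.
const res = await fetch('https://api.deepseek.com/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'deepseek-chat', // Chat V3; 'deepseek-reasoner' targets R1
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'Summarize what n8n does in one sentence.' },
    ],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```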
by Damian Karzon
This workflow randomly selects recipes from a Mealie instance (optionally from a specific category) and then creates a meal plan in Mealie with those recipes.

**How it works:**
- The workflow has a scheduled trigger (set to run weekly on a Friday)
- A Config node sets a few properties to configure the workflow
- A call to the Mealie API gets the list of recipes
- The Code node holds most of the logic: it loops through the number of recipes defined in the Config node and randomly selects a recipe from the list, making sure not to double up any recipes (see the selection sketch below)
- Once all the recipes are selected, it calls the Mealie API to set up the meal plan on the days

**Setup**
- Add your Mealie API token as a credential and set it on the HTTP Request nodes
- Set the schedule trigger to run when you like
- Update the Config node with the configuration you want:
  - numberOfRecipes - Number of recipes to populate for the meal plan
  - offsetPlanDays - Number of days in the future to start the plan (0 starts it today, 1 tomorrow, etc.)
  - mealieCategoryId - The id of the category you want to pull recipes from (defaults to selecting from all recipes)
  - mealieBaseUrl - The base URL of your Mealie instance
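A minimal sketch of the no-duplicates selection logic for an n8n Code node, assuming the previous node returns the Mealie recipe list as items; in the real workflow `numberOfRecipes` comes from the Config node rather than being hardcoded:

```javascript
// Pick N distinct recipes at random from the incoming items.
const recipes = $input.all().map(item => item.json);
const numberOfRecipes = 7; // read from the Config node in the real workflow

const pool = [...recipes];
const selected = [];

// Sample without replacement so no recipe appears twice in the plan.
while (selected.length < numberOfRecipes && pool.length > 0) {
  const i = Math.floor(Math.random() * pool.length);
  selected.push(pool.splice(i, 1)[0]);
}

return selected.map(recipe => ({ json: recipe }));
```

Removing each pick from the pool (rather than re-rolling on collisions) guarantees termination even when `numberOfRecipes` exceeds the number of available recipes.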
by JPres
A Discord bot that responds to mentions by sending messages to n8n workflows and returning the responses. It connects Discord conversations with custom automations, APIs, and AI services through n8n.

Full guide: https://github.com/JimPresting/AI-Discord-Bot/blob/main/README.md

**Overview**
The Discord bot listens for mentions, forwards questions to an n8n workflow, processes responses, and replies in Discord. This workflow is intended for all Discord users who want to offer AI interactions in their channels.

**What do you need?**
You need a Discord account as well as a Google Cloud Project Key.

**Key Features**

1. Listens for Mentions
   - The bot monitors Discord channels for messages that mention it.
   - **Optional Configuration**: Can be set to respond only in a specific channel.

2. Forwards Questions to n8n
   When a user mentions the bot and asks a question:
   - The bot extracts the question.
   - It sends the question, along with channel and user information, to an n8n webhook URL.

3. Processes Data in n8n
   The n8n workflow receives the question and can:
   - Interact with AI services (e.g., generating responses).
   - Access databases or external APIs.
   - Perform custom logic.
   n8n formats the response and sends it back to the bot.

4. Replies to Discord with n8n's Response
   - The bot receives the response from n8n and replies to the user's message in the Discord channel with the answer.
   - **Long Responses**: Handles responses exceeding Discord's 2000-character limit by chunking them into multiple messages.

5. Error Handling
   Includes error handling for issues with n8n communication and response formatting problems, and manages cases where no question is asked or an invalid response is received from n8n.

6. Typing Indicator
   While waiting for n8n's response, the bot sends a "typing..." indicator to the Discord channel.

7. Status Update
   For lengthy n8n processes, the bot sends a message to the Discord channel to inform the user that it is still processing their request.

**Step-by-Step Setup Guide (as per the GitHub instructions)**

**Key Takeaways**
- You'll configure an n8n webhook to receive Discord messages, process them with your workflow, and respond.
- You'll set up a Discord application and bot, grant the right permissions/intents, and invite it to your server.
- You'll prepare your server environment (Node.js), scaffold the project, and wire up environment variables.
- You'll implement message chunking, "typing..." indicators, and robust error handling in your bot code.
- You'll deploy with PM2 for persistence and know how to test and troubleshoot common issues.

**1. n8n: Create & Expose Your Webhook**
1. New Workflow
   - Log into your n8n instance.
   - Click Create Workflow (➕), name it e.g. Discord Bot Handler.
2. Webhook Trigger
   - Add a node (➕) → search Webhook.
   - Set: Authentication: None (or your choice); HTTP Method: POST; Path: e.g. /discord-bot
   - Click Execute Node to activate.
3. Copy Webhook URL
   - After execution, copy the Production Webhook URL. You'll paste this into your bot's .env.
4. Build Your Logic
   - Chain additional nodes (AI, database lookups, etc.) as required.
5. Format the JSON Response
   - Insert a Function node before the end:
   ```javascript
   return { json: { answer: "Your processed reply" } };
   ```
6. Respond to Webhook
   - Add Respond to Webhook as the final node. Point it at your Function node's output (with the answer field).
7. Activate
   - Toggle Active in the top-right and Save.

**2. Discord Developer Portal: App & Bot**
1. New Application
   - Visit the Discord Developer Portal.
   - Click New Application, name it.
   - Go to Bot → Add Bot.
2. Enable Intents & Permissions
   - Under Privileged Gateway Intents, toggle Message Content Intent.
   - Under Bot Permissions, check: Read Messages/View Channels, Send Messages, Read Message History.
3. Grab Your Token
   - In Bot → click Copy (or Reset Token). Store it securely.
4. Invite Link (OAuth2 URL)
   - Go to OAuth2 → URL Generator.
   - Select scopes: bot, applications.commands.
   - Under Bot Permissions, select the same permissions as above.
   - Copy the generated URL, open it in your browser, and invite your bot.

**3. Server Prep: Node.js & Project Setup**
1. Install Node.js v20.x
   ```bash
   sudo apt purge nodejs npm
   sudo apt autoremove
   curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
   sudo apt install -y nodejs
   node -v   # Expect v20.x.x
   npm -v    # Expect 10.x.x
   ```
2. Project Folder
   ```bash
   mkdir discord-bot
   cd discord-bot
   ```
3. Initialize & Dependencies
   ```bash
   npm init -y
   npm install discord.js axios dotenv
   ```

**4. Bot Code & Configuration**
1. Environment Variables
   - Create .env (`nano .env`) and populate:
   ```bash
   DISCORD_BOT_TOKEN=your_bot_token
   N8N_WEBHOOK_URL=https://your-n8n-instance.com/webhook/discord-bot
   # Optional: restrict to one channel
   TARGET_CHANNEL_ID=123456789012345678
   ```
2. Bot Script
   - Create index.js (`nano index.js`) and implement:
     - Import dotenv, discord.js, axios.
     - Set up the client with the MessageContent intent.
     - On messageCreate:
       - Ignore bots or non-mentions.
       - (Optional) Filter by channel ID.
       - Extract and validate the user's question.
       - Send "typing..." every 5 s; after 20 s send a status update if still processing.
       - POST to your n8n webhook with question, channelId, userId, userName.
       - Parse various response shapes to find answer.
       - If answer.length ≤ 2000, message.reply(answer). Else, split into ~1900-char chunks at sentence/paragraph breaks and send sequentially (see the chunking sketch at the end of this guide).
       - On errors, clear intervals, log details, and reply with an error message.
     - Login: `client.login(process.env.DISCORD_BOT_TOKEN);`

**5. Deployment: Keep It Alive with PM2**
1. Install PM2
   ```bash
   npm install -g pm2
   ```
2. Start & Monitor
   ```bash
   pm2 start index.js --name discord-bot
   pm2 status
   pm2 logs discord-bot
   ```
3. Auto-Start on Boot
   ```bash
   pm2 startup
   # Follow the printed command, e.g.:
   # sudo env PATH=$PATH:/usr/bin pm2 startup systemd -u your_user --hp /home/your_user
   pm2 save
   ```

**6. Test & Troubleshoot**
1. Functional Test
   - In your Discord server: `@YourBot What's the weather like?`
   - Expect a reply from your n8n workflow.
2. Common Pitfalls
   - No reply → check `pm2 logs discord-bot`.
   - Intent errors → verify Message Content Intent in the Portal.
   - Webhook failures → ensure the workflow is active and the URL is correct.
   - Formatting issues → confirm your Function node returns json.answer.
3. Inspect Raw Data
   - Search your logs for `Complete response from n8n:` to debug payload shapes.
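The chunking step is the part that is easiest to get subtly wrong. Here is a minimal sketch of one way to split a long answer at paragraph or sentence boundaries before replying; the function name and cut heuristics are illustrative, not the guide's exact code:

```javascript
// Illustrative helper: split a long answer into Discord-sized chunks,
// preferring paragraph breaks, then sentence ends, before hard-cutting.
function chunkAnswer(answer, maxLen = 1900) {
  const chunks = [];
  let rest = answer;
  while (rest.length > maxLen) {
    const slice = rest.slice(0, maxLen);
    // Prefer the last paragraph break; fall back to the last sentence end.
    let cut = slice.lastIndexOf('\n\n');
    if (cut < maxLen / 2) cut = slice.lastIndexOf('. ') + 1;
    if (cut <= 0) cut = maxLen; // no natural break found: hard cut
    chunks.push(rest.slice(0, cut).trim());
    rest = rest.slice(cut).trim();
  }
  if (rest) chunks.push(rest);
  return chunks;
}

// Usage inside the messageCreate handler:
// for (const chunk of chunkAnswer(answer)) await message.reply(chunk);
```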
by Jimleuk
This n8n template demonstrates how to get started with Gemini 2.0's new bounding box detection capabilities in your workflows.

The key difference is that this enables prompt-based object detection for images, which is pretty powerful for things like contextual search over an image, e.g. "Put a bounding box around all adults with children in this image" or "Put a bounding box around cars parked out of bounds of a parking space".

**How it works**
- An image is downloaded via the HTTP node and an "Edit Image" node is used to extract the file's width and height.
- The image is then given to the Gemini 2.0 API to parse and return coordinates of the bounding box of the requested subjects. In this demo, we've asked the AI to identify all bunnies.
- The coordinates are then rescaled with the original image's width and height to correctly align them (see the rescaling sketch below).
- Finally, to measure the accuracy of the object detection, we use the "Edit Image" node to draw the bounding boxes onto the original image.

**How to use**
Really up to the imagination! Perhaps a form of grounding for evidence-based workflows, or a higher form of image search, can be built.

**Requirements**
- Google Gemini for LLM

**Customising the workflow**
This template is just a demonstration of an experimental version of Gemini 2.0. It is recommended to wait for Gemini 2.0 to come out of this stage before using it in production.
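A sketch of the rescaling step for an n8n Code node. It assumes Gemini returns each box as `[ymin, xmin, ymax, xmax]` normalized to a 0-1000 range (Gemini's documented convention at the time of writing, but worth verifying), and that `width`/`height` came from the "Edit Image" node; field names are illustrative:

```javascript
// Rescale Gemini's normalized box coordinates to pixel coordinates.
const { width, height } = $json;  // original image dimensions
const boxes = $json.boxes;        // e.g. [{ box_2d: [100, 200, 550, 700], label: 'bunny' }]

const scaled = boxes.map(({ box_2d, label }) => {
  const [ymin, xmin, ymax, xmax] = box_2d;
  return {
    label,
    x: Math.round((xmin / 1000) * width),
    y: Math.round((ymin / 1000) * height),
    w: Math.round(((xmax - xmin) / 1000) * width),
    h: Math.round(((ymax - ymin) / 1000) * height),
  };
});

return scaled.map(box => ({ json: box }));
```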
by Agent Studio
This workflow is an experiment in building HTML pages from user input using the new Structured Output feature from OpenAI.

**How it works:**
- Users add what they want to build as a query parameter
- The OpenAI node generates an interface following a structured output schema defined in the request body (a schema sketch follows below)
- The JSON output is then converted to HTML along with a title
- The HTML is encapsulated in an HTML node (where the Tailwind CSS script is added)
- The HTML is rendered to the user via the Webhook response

**Set up steps**
- Create an OpenAI API key
- Create the OpenAI credentials
- Use the credentials for both nodes: HTTP Request (as Predefined Credential type) and OpenAI
- Activate your workflow
- Once active, go to the production URL and add what you'd like to build as the "query" parameter
- Example: https://production_url.com?query=a%20signup%20form

Example of generated page
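For illustration, a Structured Output request body might look like the sketch below. The `response_format` wrapper follows OpenAI's documented API; the schema itself is a made-up example of how an "interface" could be described, not the template's exact schema:

```javascript
// Illustrative OpenAI request body using Structured Outputs.
const body = {
  model: 'gpt-4o-2024-08-06',
  messages: [
    { role: 'system', content: 'Design a simple web interface for the user request.' },
    { role: 'user', content: 'a signup form' },
  ],
  response_format: {
    type: 'json_schema',
    json_schema: {
      name: 'interface',
      strict: true,
      schema: {
        type: 'object',
        properties: {
          title: { type: 'string' },
          elements: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                tag: { type: 'string' },  // e.g. 'input', 'button', 'label'
                text: { type: 'string' },
              },
              required: ['tag', 'text'],
              additionalProperties: false,
            },
          },
        },
        required: ['title', 'elements'],
        additionalProperties: false,
      },
    },
  },
};
```

With `strict: true`, the model is constrained to emit JSON matching the schema, which is what makes the downstream JSON-to-HTML conversion reliable.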
by Artur
**Overview**
This automated workflow fetches Upwork job postings using Apify, removes duplicate job listings via MongoDB, and sends new job opportunities to Slack.

Key Features:
- **Automated job retrieval** from Upwork via the Apify API
- **Duplicate filtering** using MongoDB to store only unique jobs
- **Slack notifications** for new job postings
- **Runs every 20 minutes** during working hours (9 AM - 5 PM)

This workflow requires an active Apify subscription to function, as it uses the Apify Upwork API to fetch job listings.

**Who is This For?**
This workflow is ideal for:
- Freelancers looking to track Upwork jobs in real time
- Recruiters automating job collection for analytics
- Developers who want to integrate Upwork job data into their applications

**What Problem Does This Solve?**
Manually checking Upwork for jobs is time-consuming and inefficient. This workflow:
- Automates job discovery based on your keywords
- Filters out duplicate listings, ensuring only new jobs are stored
- Notifies you on Slack when new jobs appear

**How the Workflow Works**
1. Schedule Trigger (Every 20 Minutes)
   - Triggers the workflow at 20-minute intervals
   - Ensures job searches are only executed during working hours (9 AM - 5 PM)
2. Query Upwork for Jobs
   - Uses the Apify API to scrape Upwork job posts for specific keywords (e.g., "n8n", "Python")
3. Find Existing Jobs in MongoDB
   - Searches MongoDB to check if a job (based on title and budget) already exists
4. Filter Out Duplicate Jobs
   - The Merge node compares Upwork jobs with MongoDB data
   - The IF node filters out jobs that are already stored in the database (see the dedupe sketch below)
5. Save Only New Jobs in MongoDB
   - The Insert node adds only new job listings to the MongoDB collection
6. Send a Slack Notification
   - If a new job is found, a Slack message is sent with job details

**Setup Guide**

Required API Keys:
- Upwork Scraper (Apify token) - get your token from Apify
- MongoDB credentials - set up MongoDB in n8n using your connection string
- Slack API token - connect Slack to n8n and set the channel ID (default: #general)

Configuration Steps:
1. Modify search keywords in the 'Assign Parameters' node (startUrls)
2. Adjust the working hours in the 'If Working Hours' node
3. Set your Slack channel in the Slack node
4. Ensure MongoDB is connected properly
5. Adjust the 'If Working Hours' node to match your timezone and hours, or remove it altogether to receive notifications and updates constantly

**How to Customize the Workflow**
- Change keywords: update startUrls in the 'Assign Parameters' node to track different job categories
- Change 'If Working Hours': modify conditions in the IF node to filter times based on your needs
- Modify Slack notifications: adjust the Slack message format to include additional job details

**Why Use This Workflow?**
- Automated job tracking without manual searches
- Prevents duplicate entries in MongoDB
- Instant Slack notifications for new job opportunities
- Customizable - adapt the workflow to different job categories

**Next Steps**
- Run the workflow and test with a small set of keywords
- Expand job categories for better coverage
- Enhance notifications by integrating Telegram, email, or a dashboard

This workflow ensures real-time job tracking, prevents duplicates, and keeps you updated effortlessly.
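The Merge + IF pair can also be expressed as a single Code node, which makes the dedupe rule explicit. A sketch, assuming the node names and the title/budget fields described above (both are illustrative, not the template's exact schema):

```javascript
// Keep only scraped jobs not already present in MongoDB.
const scraped = $input.all().map(item => item.json);
const existing = $('Find Existing Jobs in MongoDB').all().map(item => item.json);

// Jobs are considered duplicates when title and budget both match.
const seen = new Set(existing.map(job => `${job.title}|${job.budget}`));
const fresh = scraped.filter(job => !seen.has(`${job.title}|${job.budget}`));

return fresh.map(job => ({ json: job }));
```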
by felipe biava cataneo
**What this template does**
This template uses the Groq LLaVA v1.5 7B API, which offers fast inference for multimodal models with vision capabilities for understanding and interpreting visual data from images. Users send an image and get a description of the image from the model.

**Setup**
1. Open the Telegram app and search for the BotFather user (@BotFather)
2. Start a chat with the BotFather
3. Type /newbot to create a new bot
4. Follow the prompts to name your bot and get a unique API token
5. Save your access token and username

Once you have set up your bot, you can send an image and get its description (a sketch of the underlying vision call follows below).
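For orientation, the vision request might look like the following sketch. Groq exposes an OpenAI-compatible endpoint; the model identifier is a placeholder (Groq rotates model availability, so check their docs), and `imageBase64` is assumed to hold the Telegram photo downloaded and encoded upstream:

```javascript
// Rough sketch of a Groq vision request (not the workflow's exact node).
const res = await fetch('https://api.groq.com/openai/v1/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.GROQ_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'llava-v1.5-7b-4096-preview', // placeholder model id
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image.' },
        // imageBase64: the Telegram photo, downloaded and encoded upstream.
        { type: 'image_url', image_url: { url: `data:image/jpeg;base64,${imageBase64}` } },
      ],
    }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```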
by Dr. Firas
**Who Is This For**
This workflow is ideal for content creators, bloggers, marketers, and professionals seeking to automate the creation and publication of SEO-optimized articles. It's particularly beneficial for those utilizing Notion for content management and WordPress for publishing.

**What Problem Does This Workflow Solve**
Manually creating SEO-friendly articles is time-consuming and requires consistent effort. This workflow streamlines the entire process, from detecting updates in Notion to publishing on WordPress, by leveraging AI for content generation, thereby reducing the time and effort involved.

**What This Workflow Does**
1. Monitor Notion Updates: Detects changes in a specified Notion database.
2. AI Content Generation: Utilizes an AI model to produce an SEO-optimized article based on Notion data.
3. Publish to WordPress: Automatically posts the generated article to a WordPress site (see the REST sketch below).
4. Email Notification: Sends an email containing the article's title and URL.
5. Update Notion Database: Updates the corresponding entry in the Notion database with the article details.

**Setup Guide**

Prerequisites:
- WordPress account with API access.
- API key for the AI model used.
- Notion integration with the relevant database ID.
- Credentials for the email service used (e.g., Gmail).
- Community node requirement: this workflow utilizes the n8n-nodes-mcp community node, which is only compatible with self-hosted instances of n8n. For more information on installing and managing community nodes, refer to the n8n documentation.

Steps:
1. Import the workflow into your self-hosted n8n instance.
2. Install the required community node (n8n-nodes-mcp).
3. Configure API credentials for WordPress, the AI service, Notion, and the email service.
4. Define necessary variables, such as the notification email address and Notion database IDs.
5. Activate the workflow to automate the process.

**How to Customize This Workflow**
- AI Prompt: Adjust the prompt used for content generation to align with your preferred tone and style.
- Article Structure: Modify the structure of the generated article by tweaking settings in the content generation node.
- Notifications: Customize the content and recipients of the emails sent post-publication.
- Notion Updates: Tailor the fields updated in Notion to suit your specific requirements.
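For context, the publishing step boils down to a call against WordPress's core REST API (`POST /wp-json/wp/v2/posts`). A sketch with placeholder site URL and credentials; in the workflow itself this is handled by the WordPress node and n8n's credential manager:

```javascript
// Sketch of publishing an article via the WordPress REST API.
const res = await fetch('https://your-site.com/wp-json/wp/v2/posts', {
  method: 'POST',
  headers: {
    // Basic auth with a WordPress application password (placeholder only).
    Authorization: 'Basic ' + Buffer.from('user:app-password').toString('base64'),
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    title: 'AI-generated article title',
    content: '<p>Generated article body...</p>',
    status: 'publish', // or 'draft' to review before going live
  }),
});
const post = await res.json();
console.log(post.link); // the URL used in the notification email
```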
by Agentick AI
This n8n workflow automatically scrapes the latest posts from a specified Reddit subreddit every day at 9 AM and sends a neatly formatted HTML email summary to your inbox. It highlights new community posts, including details like title, author, flair, upvotes, comments, and a brief preview, making it ideal for content curators, community managers, or Reddit enthusiasts who want daily updates.

**How It Works**
1. Trigger: The Schedule node runs the workflow once every 24 hours at 9:00 AM.
2. Reddit Scrape: A request is made to the desired subreddit (defined in the HTTP Request node) to pull post data.
3. Filter & Format: JavaScript code filters posts created in the last 24 hours and transforms the data into structured summaries (see the sketch below).
4. Email Composition: A dynamic HTML email is generated summarizing the post details. If no new posts are found, a fallback message is displayed.
5. Email Delivery: The Gmail node sends the email with subject, content, and timestamp.

**Use Cases**
- ✅ Stay informed about the latest subreddit activity.
- ✅ Automate daily newsletters for Reddit topics.
- ✅ Monitor niche communities for engagement trends.

**Requirements**
- Reddit subreddit link (set in the HTTP Request node).
- Gmail account with OAuth2 credentials set up in n8n.
- User-Agent string customized for your Reddit scraping.
- Schedule adjusted to your preferred timezone.

**Google Sheet Setup**
Not required for this workflow; no sheet integration is involved.

**Customizing the Workflow**
You can personalize this workflow by:
- Replacing the User-Agent value with a meaningful identifier to avoid Reddit rate limiting.
- Updating the subreddit URL in the HTTP Request node.
- Changing the Gmail recipient address in the Send Gmail node.
- Tweaking the HTML email styling in the Prepare Email Content node.
- Adjusting the schedule time/frequency in the Trigger node.
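A minimal sketch of the filter/format step for an n8n Code node, assuming the HTTP Request node fetched the subreddit's public JSON listing (e.g. `/r/yoursubreddit/new.json`); the field names follow Reddit's listing format, but the output shape here is illustrative:

```javascript
// Keep only posts from the last 24 hours and shape them for the email.
const listing = $json.data.children.map(child => child.data);
const cutoff = Date.now() / 1000 - 24 * 60 * 60; // 24 hours ago, epoch seconds

const recent = listing
  .filter(post => post.created_utc >= cutoff)
  .map(post => ({
    title: post.title,
    author: post.author,
    flair: post.link_flair_text || 'None',
    upvotes: post.ups,
    comments: post.num_comments,
    preview: (post.selftext || '').slice(0, 200), // brief text preview
    url: `https://www.reddit.com${post.permalink}`,
  }));

return recent.map(post => ({ json: post }));
```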