by Dhruv Dalsaniya
Description: This n8n workflow automates a Discord bot that fetches messages from a specified channel and posts AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow uses Redis for memory persistence, so conversation history is maintained even if the workflow restarts, providing a seamless user experience.

How It Works
- The bot listens for new messages in a specified Discord channel.
- It sends each message to an AI model for response generation.
- The AI-generated reply is posted as a thread under the original message.
- The Discord bot (a Python script) acts as the bridge: it captures messages from Discord and forwards them to the n8n webhook. The n8n workflow processes the message, interacts with the AI model, and sends the AI's response back to Discord via the bot.
- The bot runs on an always-on host. This guide uses Pella, though an Ubuntu server managed with PM2 works equally well for uptime stability.

Prerequisites to Host the Bot
- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history so the AI can maintain context across multiple interactions. This is crucial for coherent and continuous conversations.

Set Up Steps

1️⃣ Create a Discord Bot
- Go to the Discord Developer Portal.
- Click "New Application", enter a name, and create it.
- Navigate to Bot > Reset Token, then copy the Bot Token.
- Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
- Under OAuth2 > URL Generator, select the bot scope and the required permissions.
- Copy the generated URL, open it in a browser, select your server, and click Authorize.
2️⃣ Deploy the Bot on Pella

Create a new folder discord-bot and navigate into it.

Create an .env file to store your bot token and the n8n webhook URL (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

Create a file main.py, copy the bot script below into it, and save it:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True          # Enable message events
intents.guilds = True
intents.message_content = True   # Required to read message content

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f'Logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore the bot's own messages

    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            fetched_message = await message.channel.fetch_message(message.id)  # Ensure correct fetching
            payload = {
                "channel_id": str(fetched_message.channel.id),   # Ensure it's a string
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),    # Ensure proper formatting
                "message_id": str(fetched_message.id),           # Ensure the ID is a string
                "user_id": str(fetched_message.author.id)        # Ensure the user ID is also a string
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)
            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")

client.run(TOKEN)
```

Create requirements.txt with the packages the script needs:

```
discord
python-dotenv
requests
```

3️⃣ Follow the video to set up the bot so it runs 24/7

Tutorial - https://www.youtube.com/watch?v=rNnK3XlUtYU

Note: The Free Plan expires after 24 hours, so opt for a Paid Plan in Pella to keep your bot running.

4️⃣ n8n Workflow Configuration

The n8n workflow consists of the following nodes:

- **Get Discord Messages (Webhook):** The entry point for messages from the Discord bot. It receives the channel_id, chat_message, timestamp, message_id, and user_id when a new message is posted in a configured channel. Its webhook path is /getmessage and it expects a POST request.
- **Chat Agent (Langchain Agent):** Processes the incoming Discord message (chat_message). It is configured as a conversational agent that combines the language model and memory to generate an appropriate response, with a prompt that keeps replies under 1800 characters.
- **OpenAI -4o-mini (Langchain Language Model):** Connects to the OpenAI API and uses the gpt-4o-mini-2024-07-18 model to generate AI responses. This is the core AI component of the workflow.
- **Message History (Redis Chat Memory):** Manages conversation history in Redis. It stores and retrieves chat messages so the Chat Agent maintains context for each user based on their user_id. This is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool):** Provides a calculator tool the AI agent can use when a conversation requires a mathematical calculation, extending the AI beyond pure text generation.
- **Response fromAI (Discord):** Sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (message_id) in the specified channel_id.
- **Sticky Note1 to Sticky Note5, Sticky Note:** Informational nodes providing instructions, code snippets for the Discord bot, and setup guidance. They cover the .env file, requirements.txt, the Python bot code, and general recommendations for channel configuration and adding tools.

5️⃣ Setting up Redis
- Choose a Redis hosting provider: use a cloud provider such as Redis Labs or Aiven, or set up your own Redis instance on a VPS.
- Obtain the Redis connection details: once your instance is set up, note the host, port, and password (if applicable).
- Configure the n8n Redis nodes: in your n8n workflow, configure the "Message History" node with your Redis connection details. Ensure the Redis credential ✅ redis-for-n8n is properly set up with your instance's host, port, and password.

6️⃣ Customizing the Template
- **AI Model:** Swap the "OpenAI -4o-mini" node for any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Connect the new language model node to the ai_languageModel input of the "Chat Agent" node.
- **Agent Prompt:** Modify the text parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust the response length.
- **Additional Tools:** The "Calculator" node is one example of an AI tool. Add more Langchain tool nodes (e.g., search, data lookup) and connect them to the ai_tool input of the "Chat Agent" node to extend the AI's capabilities. See "Sticky Note5" in the workflow for a reminder.
- **Channel Filtering:** Adjust the LISTEN_CHANNELS list in the bot's main.py file to include or exclude the Discord channel IDs where the bot should listen for messages.
- **Thread Management:** Modify the "Response fromAI" node to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (message_reference).

7️⃣ Testing Instructions
- Start the Discord bot: make sure your main.py script is running on Pella.
- Activate the n8n workflow: make sure it is active and listening for webhooks.
- Send a message in Discord: go to one of the LISTEN_CHANNELS in your server and send a message.
- Verify the response: the bot should capture the message, send it to n8n, receive the AI-generated response, and post it as a thread under your original message.
- Check Redis: verify that conversation history is being stored and updated correctly in your Redis instance. Look for keys related to user IDs (a quick inspection script is sketched after this section).

✅ Now your bot is running in the background! 🚀
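To confirm the memory persistence step during testing, you can inspect the keys the Redis Chat Memory node writes. Below is a minimal sketch using the redis-py client; the host, port, password, and key pattern are assumptions, so adjust them to your Redis instance and to the session key prefix your n8n Redis node actually uses.

```python
import redis

# Connection details are placeholders; use your own Redis host, port, and password.
r = redis.Redis(host="localhost", port=6379, password=None, decode_responses=True)

# The key pattern is an assumption; n8n's Redis Chat Memory typically keys history
# by session ID (here, the Discord user_id). Adjust the match pattern as needed.
for key in r.scan_iter(match="*"):
    key_type = r.type(key)
    print(f"{key} ({key_type})")
    if key_type == "list":
        # Chat history is commonly stored as a list of JSON-encoded messages.
        for entry in r.lrange(key, 0, 4):
            print("  ", entry[:120])
```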
by Srinivasan KB
This n8n workflow provides a ready-to-use API endpoint for extracting structured data from images. It processes an image URL using an AI-powered OCR model and returns the extracted details in a structured JSON format.

Use Cases
- **Document OCR** – Extract details from ID cards, invoices, receipts, etc.
- **Text Extraction from Images** – Process screenshots, scanned documents, and photos.
- **Automated Form Processing** – Digitize and capture information from paper forms.
- **Business Card Data Extraction** – Extract names, emails, and phone numbers from business cards.

How It Works
- Send a GET request with an image URL and define the required extraction parameters (see the example request after this section).
- The image is converted to base64 for processing.
- The AI model (Gemini API - Flash Lite) extracts the relevant text.
- The response returns structured JSON data containing only the requested fields.

Features
✔️ No-Code API Setup – Easily integrate into any application.
✔️ Customizable Extraction – Modify the request parameters to fit your needs.
✔️ AI-Powered OCR – Uses advanced models for accurate text recognition.
✔️ Automated Processing – Ideal for document processing and digitization.

Integration
- Works with any frontend/backend system that supports API calls.
- Can be used for workflow automation in CRM, ERP, and document management solutions.
- Supports further customization based on specific OCR requirements.
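As a rough illustration of calling the endpoint from a client, the sketch below sends a GET request with the image URL and the fields to extract. The webhook path and parameter names (url, fields) are assumptions and must match whatever the Webhook node in your copy of the workflow defines.

```python
import requests

# Hypothetical endpoint and parameter names; adjust to your n8n webhook configuration.
N8N_WEBHOOK = "https://your-n8n-domain.tld/webhook/ocr-extract"

params = {
    "url": "https://example.com/sample-invoice.jpg",     # publicly reachable image URL
    "fields": "invoice_number,total_amount,issue_date",  # fields you want extracted
}

response = requests.get(N8N_WEBHOOK, params=params, timeout=60)
response.raise_for_status()

# The workflow is designed to return only the requested fields as structured JSON.
print(response.json())
```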
by Aurélien P.
🌤️ Daily Weather Forecast Bot

A comprehensive n8n workflow that fetches detailed weather forecasts from OpenWeatherMap and sends beautifully formatted daily summaries to Telegram.

📋 Features
- 📊 **Daily Overview**: Complete temperature range, rainfall totals, and wind conditions
- ⏰ **Hourly Forecast**: Weather predictions at key times (9AM, 12PM, 3PM, 6PM, 9PM)
- 🌡️ **Smart Emojis**: Context-aware weather icons and temperature indicators
- 💡 **Smart Recommendations**: Contextual advice (umbrella alerts, clothing suggestions, sun protection)
- 🌪️ **Enhanced Details**: Feels-like temperature, humidity levels, wind speed, UV warnings
- 📱 **Rich Formatting**: HTML-formatted messages with emojis for excellent readability
- 🕐 **Timezone-Aware**: Proper handling of the Luxembourg timezone (CET/CEST)

🛠️ What This Workflow Does
- Triggers daily at 7:50 AM to send morning weather updates
- Fetches the 5-day forecast from the OpenWeatherMap API with 3-hour intervals (see the example request after this template)
- Processes and analyzes the weather data with smart algorithms
- Formats a comprehensive report with HTML styling and emojis
- Sends it to Telegram with professional formatting and actionable insights

⚙️ Setup Instructions

1. OpenWeatherMap API
- Sign up at OpenWeatherMap
- Get your free API key (1000 calls/day included)
- Replace API_KEY in the HTTP Request node URL

2. Telegram Bot
- Message @BotFather on Telegram
- Send the /newbot command and follow the instructions
- Copy the bot token to n8n credentials
- Get your chat ID by messaging the bot, then visiting: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
- Update the chatId parameter in the Telegram node

3. Location Configuration
- Default location: Strassen, Luxembourg
- To change it: modify q=Strassen in the HTTP Request URL
- Format: q=CityName,CountryCode (e.g., q=Paris,FR)

🎯 Technical Details
- **API Source**: OpenWeatherMap 5-day forecast
- **Schedule**: Daily at 7:50 AM (configurable)
- **Format**: HTML with rich emoji formatting
- **Error Handling**: 3 retry attempts with 5-second delays
- **Rate Limits**: Uses only 1 API call per day
- **Timezone**: Europe/Luxembourg (handles CET/CEST automatically)

📊 Weather Data Analyzed
- Temperature ranges and "feels like" temperatures
- Precipitation forecasts and accumulation
- Wind speed and conditions
- Humidity levels and comfort indicators
- Cloud coverage and visibility
- UV index recommendations
- Time-specific weather patterns

💡 Smart Features
- **Conditional Recommendations**: Only shows relevant advice
- **Night/Day Awareness**: Different emojis for time of day
- **Temperature Context**: Color-coded temperature indicators
- **Weather Severity**: Appropriate icons for weather intensity
- **Humidity Comfort**: Comfort level indicators
- **Wind Analysis**: Descriptive wind condition text

🔧 Customization Options
- **Schedule**: Modify the trigger time in the Schedule node
- **Location**: Change the city in the HTTP Request URL
- **Forecast Hours**: Adjust the desiredHours array in the code
- **Temperature Thresholds**: Modify the emoji temperature ranges
- **Recommendation Logic**: Customize the advice triggers

📱 Sample Output

🌤️ Weather Forecast for Strassen, LU
📅 Monday, 2 June 2025

📊 Daily Overview
🌡️ Range: 12°C - 22°C
💧 Comfortable (65%)

⏰ Hourly Forecast
🕒 09:00 ☀️ 15°C
🕒 12:00 🌤️ 20°C
🕒 15:00 ☀️ 22°C (feels 24°C)
🕒 18:00 ⛅ 19°C
🕒 21:00 🌙 16°C

📡 Data from OpenWeatherMap | Updated: 07:50 CET

🚀 Getting Started
- Import this workflow to your n8n instance
- Add your OpenWeatherMap API key
- Set up the Telegram bot credentials
- Test manually first
- Activate for daily automated runs

📋 Requirements
- n8n instance (cloud or self-hosted)
- Free OpenWeatherMap API account
- Telegram bot token
- Basic understanding of n8n workflows
Perfect for: Daily weather updates, team notifications, personal weather tracking, smart home automation triggers.
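For reference, the sketch below mirrors what the HTTP Request and code steps do: it fetches the standard 5-day/3-hour forecast and keeps only the entries at the key report hours. The city, API key placeholder, and hour values are assumptions; in the actual workflow they live in the HTTP Request node URL and the desiredHours array in the Code node.

```python
import requests

API_KEY = "YOUR_OPENWEATHERMAP_API_KEY"   # placeholder, use your own key
CITY = "Strassen,LU"                       # q=CityName,CountryCode

# OpenWeatherMap 5-day / 3-hour forecast endpoint
url = "https://api.openweathermap.org/data/2.5/forecast"
resp = requests.get(url, params={"q": CITY, "units": "metric", "appid": API_KEY}, timeout=30)
resp.raise_for_status()
forecast = resp.json()

# Keep only the forecast entries at the hours the report covers. The real workflow
# handles the Europe/Luxembourg timezone; this sketch simply uses the returned dt_txt.
desired_hours = {9, 12, 15, 18, 21}
for entry in forecast["list"]:
    hour = int(entry["dt_txt"][11:13])
    if hour in desired_hours:
        temp = round(entry["main"]["temp"])
        feels = round(entry["main"]["feels_like"])
        desc = entry["weather"][0]["description"]
        print(f'{entry["dt_txt"]}  {temp}°C (feels {feels}°C)  {desc}')
```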
by Jimleuk
This n8n template demonstrates how to get started with Gemini 2.0's new bounding box detection capabilities in your workflows. The key difference is that it enables prompt-based object detection for images, which is powerful for things like contextual search over an image, e.g. "Put a bounding box around all adults with children in this image" or "Put a bounding box around cars parked out of bounds of a parking space".

How it works
- An image is downloaded via the HTTP node, and an "Edit Image" node extracts the file's width and height.
- The image is then given to the Gemini 2.0 API, which parses it and returns the coordinates of the bounding boxes of the requested subjects. In this demo, we've asked the AI to identify all bunnies.
- The coordinates are then rescaled with the original image's width and height to correctly align them (see the sketch after this section).
- Finally, to measure the accuracy of the object detection, the "Edit Image" node draws the bounding boxes onto the original image.

How to use
Really up to the imagination! Perhaps a form of grounding for evidence-based workflows, or a higher form of image search, can be built.

Requirements
- Google Gemini for LLM

Customising the workflow
This template is a demonstration of an experimental version of Gemini 2.0. It is recommended to wait for Gemini 2.0 to come out of this stage before using it in production.
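A rough sketch of the rescaling step follows. Gemini's documented convention is to return boxes as [ymin, xmin, ymax, xmax] normalised to a 0-1000 range, so each coordinate is scaled back by the real width and height; treat the exact response shape as an assumption and adapt it to what your Gemini node actually returns.

```python
def rescale_box(box, image_width, image_height):
    """Convert a Gemini-style [ymin, xmin, ymax, xmax] box (0-1000 normalised)
    into pixel coordinates for the original image."""
    ymin, xmin, ymax, xmax = box
    return {
        "x": int(xmin / 1000 * image_width),
        "y": int(ymin / 1000 * image_height),
        "width": int((xmax - xmin) / 1000 * image_width),
        "height": int((ymax - ymin) / 1000 * image_height),
    }

# Example: a box covering roughly the centre of a 1920x1080 image.
print(rescale_box([250, 250, 750, 750], 1920, 1080))
# {'x': 480, 'y': 270, 'width': 960, 'height': 540}
```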
by AdrianWang
How it works
This workflow automates the conversion of various document formats (such as PDF, Word, and PPT) into Markdown. It connects to the MinerU API service, which leverages OCR, formula, and table recognition to produce high-quality output. Users can initiate the process by simply uploading a document through an n8n chat interface.

Set up steps
- Ensure you have a local n8n instance running.
- Set up and run the MinerU MCP (Model Context Protocol) server locally.
- Import this workflow into your n8n instance.
- Configure your AI model credentials (e.g., for OpenAI, add your API Key and Base URL).
- Click the "Write Files from Disk" node and edit the file path to your desired local save location.
- Click the "MCP Client" node and input your MinerU MCP server address (e.g., http://localhost:8000/sse).
- Click the "Open Chat" button to upload a file, send a message, and test the workflow.
by Yaron Been
An automated system to track and analyze the technology stacks used by target companies, helping identify decision-makers and technology trends.

🚀 What It Does
- Tracks the technology stack of target companies
- Identifies key decision-makers (CTOs, Tech Leads)
- Monitors technology changes and updates
- Provides competitive intelligence
- Generates actionable insights

🎯 Perfect For
- B2B SaaS companies
- Technology vendors
- Sales and business development teams
- Competitive intelligence analysts
- Market researchers

⚙️ Key Benefits
✅ Identify potential customers
✅ Stay ahead of technology trends
✅ Target decision-makers effectively
✅ Monitor competitor technology stacks
✅ Data-driven sales strategies

🔧 What You Need
- BuiltWith API key
- n8n instance
- CRM integration (optional)
- Email/Slack for alerts

📊 Data Tracked
- Company technologies
- Hosting providers
- Frameworks and libraries
- Analytics tools
- Marketing technologies

🛠️ Setup & Support
Quick Setup: deploy in 20 minutes with our step-by-step guide.
📺 Watch Tutorial | 💼 Get Expert Support | 📧 Direct Help

Gain a competitive edge by understanding the technology landscape of your target market.
by Manuel
Effortlessly optimize your workflow by automatically importing hundreds of manufacturers from a Google Sheet into your Shopware online store, saving countless hours of manual work.

How it works
- Retrieve all manufacturers from a Google Sheet
- Add each manufacturer to Shopware via the Shopware sync API endpoint (see the sketch after the set up steps)
- Upload a logo for each manufacturer from a provided public URL to Shopware

Set Up Steps
1. Add your Shopware URL to the first node, called Settings.
2. Create a Google Sheet in your Google account with the following columns (Demo Sheet):
   - name (the name of the manufacturer; must be unique and is required)
   - website (URL of the manufacturer's website)
   - description
   - logo_url (public manufacturer logo URL; must be a png, jpg, or svg file)
   - translation_language_code_1 (optional; language code of your language, for example 'es-ES' for Spanish. Make sure a language with this code exists in your Shopware shop.)
   - translation_name_1 (optional; manufacturer name translated into the language defined in translation_language_code_1)
   - translation_description_1 (optional; manufacturer description translated into the language defined in translation_language_code_1)
   - translation_language_code_2 (optional; same as translation_language_code_1 for another language)
   - translation_name_2 (optional; same as translation_name_1 for another language)
   - translation_description_2 (optional; same as translation_description_1 for another language)
   - translation_language_code_3 (optional; same as translation_language_code_1 for another language)
   - translation_name_3 (optional; same as translation_name_1 for another language)
   - translation_description_3 (optional; same as translation_description_1 for another language)
3. Connect to your Google account.
4. Connect to your Shopware account: create a Shopware Integration, then connect to Shopware in the "Import Manufacturer" and "Upload Manufacturer Logo" nodes using Generic OAuth2 API authentication with grant type "Client Credentials". The Access Token URL is https://your-shopware-domain.com/api/oauth/token.
5. Run the workflow.
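For context on what the two Shopware nodes do behind the scenes, here is a minimal sketch of the same calls made directly from Python: fetch a client-credentials token from the Access Token URL above, then upsert a manufacturer through the Admin API's sync endpoint. The /api/_action/sync path, the product_manufacturer entity name, and the payload fields reflect Shopware 6's Admin API but should be treated as assumptions to verify against your Shopware version.

```python
import requests

SHOP_URL = "https://your-shopware-domain.com"

# 1) Client-credentials token, using the integration's access key ID and secret.
token_resp = requests.post(
    f"{SHOP_URL}/api/oauth/token",
    json={
        "grant_type": "client_credentials",
        "client_id": "YOUR_ACCESS_KEY_ID",        # from the Shopware Integration
        "client_secret": "YOUR_SECRET_ACCESS_KEY",
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2) Upsert one manufacturer via the sync endpoint (illustrative payload).
sync_payload = {
    "import-manufacturers": {
        "entity": "product_manufacturer",
        "action": "upsert",
        "payload": [
            {
                "name": "Acme Tools",
                "link": "https://acme-tools.example",
                "description": "Example manufacturer imported from a Google Sheet",
            }
        ],
    }
}
resp = requests.post(
    f"{SHOP_URL}/api/_action/sync",
    headers={"Authorization": f"Bearer {access_token}"},
    json=sync_payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code)
```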
by WeblineIndia
This workflow contains community nodes that are only compatible with the self-hosted version of n8n. It automates summarizing YouTube videos by accepting a YouTube URL via a form, fetching the video transcript using Apify, and then generating a concise summary with OpenAI GPT.

Setup Instructions

Prerequisites:
- Apify account with access to the YouTube Transcript actor.
- OpenAI API key (for the GPT-4o-mini model).
- n8n instance with the Apify and OpenAI credentials configured.

Configuration Steps
- Apify Setup: Configure the Apify API credentials in the Apify node. Ensure the YouTube Transcript actor ID (1s7eXiaukVuOr4Ueg) is correct.
- OpenAI Setup: Add your OpenAI API key in the OpenAI Chat Model node. Confirm the model selection is set to gpt-4o-mini.

Customization
- Modify the form field to accept additional inputs if needed.
- Adjust the Apify actor input JSON in the Payload node for extra metadata extraction (see the sketch after this section).
- Customize the summarization options to tweak summary length or style.
- Change the OpenAI prompt or model parameters in the OpenAI Chat Model node for different output quality or tone.

Steps
1. On Form Submission – **Node:** Form Trigger. **Purpose:** Collect the YouTube video URL from the user via a web form.
2. Prepare Payload – **Node:** Set. **Purpose:** Format the YouTube URL and options into the JSON payload for the Apify input.
3. Fetch Transcript – **Node:** Apify. **Purpose:** Run the YouTube Transcript actor to retrieve the video captions and metadata.
4. Extract Captions – **Purpose:** Isolate the captions field from the Apify response for processing.
5. Summarize Transcript – **Purpose:** Generate a concise summary of the video captions.
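If you prefer to prototype the transcript fetch outside n8n, a rough sketch of the equivalent Apify call is shown below. It uses Apify's run-sync-get-dataset-items endpoint with the same actor ID; the input field names (videoUrl, language) are hypothetical and must be matched to the actor's actual input schema, which you can check on its Apify page.

```python
import requests

APIFY_TOKEN = "YOUR_APIFY_TOKEN"          # placeholder
ACTOR_ID = "1s7eXiaukVuOr4Ueg"            # YouTube Transcript actor referenced by the workflow

# Run the actor synchronously and return its dataset items in one call.
run_url = f"https://api.apify.com/v2/acts/{ACTOR_ID}/run-sync-get-dataset-items"

# Hypothetical input shape; align the keys with the actor's documented input schema.
actor_input = {
    "videoUrl": "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "language": "en",
}

resp = requests.post(run_url, params={"token": APIFY_TOKEN}, json=actor_input, timeout=300)
resp.raise_for_status()

items = resp.json()
# Each item typically carries the captions/metadata that the Extract Captions step isolates.
print(items[:1])
```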
by Ahmed Saadawi
⚠️ This Workflow Requires a Community Node and a Self-Hosted n8n Instance

> This workflow uses the Vtiger CRM community node. To use it, you must be running a self-hosted version of n8n with Community Nodes enabled.

🔧 How to Install the Node
- Go to Settings → Community Nodes
- Click Install Node
- Enter the package name: n8n-nodes-vtiger-crm
- Restart your n8n instance if prompted

💬 Real-time Vtiger Support Tickets to Telegram with Auto Status Updates

📌 Overview
Keep your support team instantly informed when new tickets are created in Vtiger CRM. This workflow:
- Fetches the most recent ticket marked as Open
- Sends its details to a Telegram chat
- Updates the status in Vtiger to In Progress to prevent re-sending

🔄 What This Workflow Does
- 📨 Pulls the latest open ticket from Vtiger HelpDesk
- 📲 Sends a rich-text message to Telegram with all key ticket details
- 🔁 Updates the ticket's status to "In Progress"

🧠 Workflow Preview

📲 Telegram Output Example
> New ticket with the following details:
> Ticketid: TT2
> Title: Internet down
> Status: Open
> Priority: High
> Severity: Minor
> Category: Small Problem
> Description: The internet was slow from yesterday and today is down completely

🛠️ Setup Instructions

🔗 Telegram Bot Setup
- Open Telegram and search for @BotFather
- Run /newbot and follow the instructions
- Save the bot token
- Add the bot to your chat or group
- Use @userinfobot to get your chat_id
- Paste the token and chat ID into the Telegram node inside n8n

🔗 Vtiger CRM Setup
- Make sure your Vtiger HelpDesk module includes: ticket_no, ticket_title, ticketstatus, ticketpriorities, ticketseverities, ticketcategories, description
- Connect your Vtiger API credentials inside n8n

👥 Who This Is For
- Customer support and IT helpdesk teams using Vtiger CRM
- Teams that want instant alerts in Telegram
- Anyone syncing CRM activity with chat-based notifications

🔐 Credentials Required
✅ Vtiger CRM API credentials
✅ Telegram Bot Token

🏷 Tags
vtiger, telegram, crm automation, helpdesk alerts, no-code crm, realtime notifications, n8n telegram integration, support ticket automation, self-hosted n8n, community nodes, workflow automation, vtiger crm integration, helpdesk sync, n8n crm alerts
by Alexander Bentlund
Search music and play it on Spotify from Telegram

This workflow is a simple demonstration of driving a message model from Telegram, and it makes searching for songs easy even if you can't remember the artist or song name. An OpenAI message model tries to figure out the song and sends it to an active Spotify device**.

Use case
Imagine an office where music plays in the background and employees can control it without having to log in to the playing account.

How it works
- You describe the song in Telegram.
- The Telegram bot sends the text to n8n.
- An OpenAI message model tries to find the song.
- Spotify gets the search query string, and the first match is added to the queue (see the sketch after this section). If there is no match, a message is sent to Telegram and the process ends.
- We change to the next track in the list.
- We make sure the song starts playing by trying to resume playback.
- We fetch the currently playing track.
- We return "now playing" information to Telegram: Song Name - Artist Name - Album Name.

Error handling
Every Spotify step has its own error handler under settings where we output the error. The message parser receives the error and sends it to Telegram.

Requirements
- Active workflow*
- OpenAI API key
- Telegram bot
- Spotify account and OAuth2 API
- Spotify active on a device**

\* The Telegram trigger fires only if this workflow is active. You can, however, TEST the workflow in the editor by clicking "Test step", which then waits for the Telegram event. When the event is received, step through all the steps, or just click "Test step" on the "Fetch Now Playing" node.

\** You must have a Spotify device active when trying to communicate with a device. Open Spotify and play something; now it is active.
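To see what the search-and-queue steps amount to outside n8n, here is a minimal sketch using the spotipy client: search for the track, queue the first match, and skip to it. The OAuth credentials and scopes are placeholders, and it assumes a Spotify device is already active, mirroring the requirements above; treat it as an illustration rather than the workflow's exact calls.

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Placeholder OAuth app credentials; queueing and playback control need these scopes.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://localhost:8888/callback",
    scope="user-modify-playback-state user-read-playback-state",
))

query = "Never Gonna Give You Up Rick Astley"  # search string produced by the message model
results = sp.search(q=query, type="track", limit=1)
tracks = results["tracks"]["items"]

if not tracks:
    print("No match found")            # the workflow reports this back to Telegram
else:
    track = tracks[0]
    sp.add_to_queue(track["uri"])      # requires an active Spotify device
    sp.next_track()                    # jump to the queued song
    playing = sp.current_playback()
    item = playing["item"]
    print(f'{item["name"]} - {item["artists"][0]["name"]} - {item["album"]["name"]}')
```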
by Ahmed Saadawi
This workflow contains community nodes that are only compatible with the self-hosted version of n8n.

🧠 Vtiger CRM – Auto-Answer FAQs with DeepSeek AI

Description: This workflow automates the process of answering FAQ drafts in Vtiger CRM using the DeepSeek LLM via LangChain. It's perfect for teams who want to accelerate knowledge base creation, improve support response consistency, or reduce the manual effort of writing FAQ content.

Every minute, this workflow:
- 📥 Retrieves the most recent FAQ record marked as Draft in Vtiger CRM
- 🧠 Sends the question to a LangChain agent powered by DeepSeek AI
- 📝 Receives a plain-text answer
- 📤 Updates the original FAQ with the generated answer and changes its status to Published

⚙️ How It Works
- **Trigger:** Scheduled to run every 1 minute
- **Query:** Pulls the latest FAQ from Vtiger where faqstatus = 'Draft'
- **AI Agent:** Uses LangChain + DeepSeek to generate a natural-language answer
- **Memory Buffer:** Keeps context using LangChain memory
- **Update:** Pushes the answer back to Vtiger and marks it as Published

🛠️ Setup Instructions
1. Connect credentials for:
   - Vtiger CRM API
   - DeepSeek API
2. Ensure your Vtiger CRM has a Faq module with the fields: question, faq_answer, faqstatus
3. Install the required Community Node:
   - Go to Settings → Community Nodes
   - Click Install Node and enter: n8n-nodes-vtiger-crm
   - Restart your instance when prompted
4. Optionally customize the schedule or field names as needed.

👤 Who Is This For?
- Customer support teams building a knowledge base
- Businesses using Vtiger as a CRM or internal helpdesk
- Teams looking to automate repetitive content creation using LLMs

🔐 Credentials Required
✅ Vtiger CRM API credentials
✅ DeepSeek AI API key

✅ Highlights
- Fully automated LLM-powered FAQ generation
- Uses the custom community node for Vtiger support
- Lightweight and runs on a short interval (1 min)
- Includes a sticky note for clarity and onboarding
- Clean conditional logic and memory context built in

🏷 Tags
vtiger, crm, faq automation, ai automation, deepseek, langchain, llm, open source crm, faq generation, customer support, n8n, n8n community nodes, workflow automation, ai generated answers, vtiger integration, deepseek ai, langchain integration
by Manu
How it works
- Triggered weekly
- Fetches all previous executions of a given workflow (see the sketch below)
- Filters for failures and aggregates them into a single report
- Sends the report to a given Telegram chat

Set up steps
- Create a new n8n API token in the settings panel.
- Add new n8n credentials in the credentials panel.
- Add new Telegram credentials in the credentials panel.
- Select the n8n credentials and the workflow ID in the "Get all previous executions" node.
- Select the Telegram credentials and enter the chat ID in the "Telegram" node.
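For reference, the "Get all previous executions" step corresponds to a call against the n8n public REST API. A rough sketch of the same request from Python is shown below; the base URL and workflow ID are placeholders, and the status filter is an assumption to verify against your n8n version's API documentation.

```python
import requests

N8N_BASE_URL = "https://your-n8n-domain.tld"   # placeholder
N8N_API_KEY = "YOUR_N8N_API_TOKEN"             # created in the settings panel
WORKFLOW_ID = "123"                            # the workflow you want to monitor

resp = requests.get(
    f"{N8N_BASE_URL}/api/v1/executions",
    headers={"X-N8N-API-KEY": N8N_API_KEY},
    params={"workflowId": WORKFLOW_ID, "status": "error", "limit": 100},
    timeout=30,
)
resp.raise_for_status()

failures = resp.json().get("data", [])
report_lines = [
    f'{e.get("startedAt", "?")} - execution {e.get("id")} failed'
    for e in failures
]
# This aggregated text is roughly what the workflow sends to the Telegram chat.
print("\n".join(report_lines) or "No failed executions this week.")
```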