by Dhruv Dalsaniya
Description: This n8n workflow automates a Discord bot to fetch messages from a specified channel and send AI-generated responses in threads. It ensures smooth message processing and interaction, making it ideal for managing community discussions, customer support, or AI-based engagement. The workflow leverages Redis for memory persistence, so conversation history is maintained even if the workflow restarts, providing a seamless user experience.

How It Works
- The bot listens for new messages in a specified Discord channel.
- It sends the messages to an AI model for response generation.
- The AI-generated reply is posted as a thread under the original message.
- The bot runs on an Ubuntu server and is managed using PM2 for uptime stability.
- The Discord bot (a Python script) acts as the bridge: it captures messages from Discord and sends them to the n8n webhook. The n8n workflow then processes these messages, interacts with the AI model, and sends the AI's response back to Discord via the bot.

Prerequisites to Host the Bot
- Sign up on Pella, a managed hosting service for Discord bots (easy setup).
- A Redis instance for memory persistence. Redis is an in-memory data structure store, used here to store and retrieve conversation history so the AI can maintain context across multiple interactions. This is crucial for coherent, continuous conversations.

Set Up Steps

1️⃣ Create a Discord Bot
- Go to the Discord Developer Portal.
- Click "New Application", enter a name, and create it.
- Navigate to Bot > Reset Token, then copy the Bot Token.
- Enable Privileged Gateway Intents (Presence, Server Members, Message Content).
- Under OAuth2 > URL Generator, select the bot scope and the required permissions.
- Copy the generated URL, open it in a browser, select your server, and click Authorize.
2️⃣ Deploy the Bot on Pella
- Create a new folder discord-bot and navigate into it.
- Create and configure an .env file to store your bot token and the n8n webhook URL (you can copy the webhook URL from the n8n workflow):

```
TOKEN=your-bot-token-here
WEBHOOK_URL=https://your-domain.tld/webhook/getmessage
```

- Create a file main.py, copy the bot script below into it, and save it:

```python
import discord
import requests
import json
import os
from dotenv import load_dotenv

# Load environment variables from the .env file
load_dotenv()
TOKEN = os.getenv("TOKEN")
WEBHOOK_URL = os.getenv("WEBHOOK_URL")

# Bot configuration
LISTEN_CHANNELS = ["YOUR_CHANNEL_ID_1", "YOUR_CHANNEL_ID_2"]  # Replace with your target channel IDs

# Intents setup
intents = discord.Intents.default()
intents.messages = True         # Enable message events
intents.guilds = True
intents.message_content = True  # Required to read message content

client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f'Logged in as {client.user}')

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # Ignore the bot's own messages
    if str(message.channel.id) in LISTEN_CHANNELS:
        try:
            fetched_message = await message.channel.fetch_message(message.id)  # Ensure correct fetching
            payload = {
                "channel_id": str(fetched_message.channel.id),   # Ensure it's a string
                "chat_message": fetched_message.content,
                "timestamp": str(fetched_message.created_at),    # Ensure proper formatting
                "message_id": str(fetched_message.id),           # Ensure the ID is a string
                "user_id": str(fetched_message.author.id)        # Ensure the user ID is also a string
            }
            headers = {'Content-Type': 'application/json'}
            response = requests.post(WEBHOOK_URL, data=json.dumps(payload), headers=headers)
            if response.status_code == 200:
                print(f"Message sent successfully: {payload}")
            else:
                print(f"Failed to send message: {response.status_code}, Response: {response.text}")
        except Exception as e:
            print(f"Error fetching message: {e}")

client.run(TOKEN)
```

- Create requirements.txt listing the packages the script imports:

```
discord
python-dotenv
requests
```

3️⃣ Follow the video to set up the bot so it runs 24/7
Tutorial - https://www.youtube.com/watch?v=rNnK3XlUtYU
Note: The free plan expires after 24 hours, so opt for a paid plan on Pella to keep your bot running.

4️⃣ n8n Workflow Configuration
The n8n workflow consists of the following nodes:
- **Get Discord Messages (Webhook):** The entry point for messages from the Discord bot. It receives the channel_id, chat_message, timestamp, message_id, and user_id when a new message is posted in a configured channel. Its webhook path is /getmessage and it expects a POST request.
- **Chat Agent (Langchain Agent):** Processes the incoming Discord message (chat_message). It is configured as a conversational agent, combining the language model and memory to generate an appropriate response, with a prompt that keeps replies under 1800 characters.
- **OpenAI -4o-mini (Langchain Language Model):** Connects to the OpenAI API and uses the gpt-4o-mini-2024-07-18 model to generate responses. This is the core AI component of the workflow.
- **Message History (Redis Chat Memory):** Manages conversation history in Redis. It stores and retrieves chat messages so the Chat Agent maintains context for each user based on their user_id, which is critical for coherent multi-turn conversations.
- **Calculator (Langchain Tool):** Provides a calculator tool that the AI agent can use when a conversation calls for a mathematical calculation, extending the AI's capabilities beyond text generation.
- **Response fromAI (Discord):** Sends the AI-generated response back to the Discord channel. It uses the Discord Bot API credentials and replies in a thread under the original message (message_id) in the specified channel_id.
- **Sticky Note1 through Sticky Note5, Sticky Note:** Informational nodes that provide instructions, code snippets for the Discord bot, and setup guidance: the .env file, requirements.txt, the Python bot code, and general recommendations for channel configuration and adding tools.

5️⃣ Setting up Redis
- Choose a Redis hosting provider: a cloud provider such as Redis Labs or Aiven, or your own Redis instance on a VPS.
- Obtain the Redis connection details: once your instance is set up, you will need the host, port, and password (if applicable).
- Configure the n8n Redis nodes: in your n8n workflow, configure the "Message History" node with your Redis connection details. Ensure the Redis credential ✅ redis-for-n8n is properly set up with your instance details (host, port, password).

6️⃣ Customizing the Template
- **AI Model:** You can easily swap the "OpenAI -4o-mini" node for any other AI service supported by n8n (e.g., Cohere, Hugging Face) to use a different language model. Ensure the new language model node is connected to the ai_languageModel input of the "Chat Agent" node.
- **Agent Prompt:** Modify the text parameter in the "Chat Agent" node to change the AI's persona, provide specific instructions, or adjust the response length.
- **Additional Tools:** The "Calculator" node is one example of an AI tool. You can add more Langchain tool nodes (e.g., search, data lookup) and connect them to the ai_tool input of the "Chat Agent" node to extend the AI's capabilities. Refer to "Sticky Note5" in the workflow for a reminder.
- **Channel Filtering:** Adjust the LISTEN_CHANNELS list in your Discord bot's main.py to include or exclude the Discord channel IDs where the bot should listen for messages.
- **Thread Management:** Modify the "Response fromAI" node to change how threads are created or managed, or to send responses directly to the channel instead of a thread. The current setup links the response to the original message ID (message_reference).

7️⃣ Testing Instructions
1. Start the Discord bot: ensure your main.py script is running on Pella.
2. Activate the n8n workflow: make sure it is active and listening for webhooks.
3. Send a message in Discord: go to one of the LISTEN_CHANNELS in your Discord server and send a message.
4. Verify the response: the bot should capture the message, send it to n8n, receive an AI-generated response, and post it as a thread under your original message.
5. Check Redis: verify that conversation history is being stored and updated correctly in your Redis instance. Look for keys related to user IDs.

✅ Now your bot is running in the background! 🚀
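To test the n8n side independently of Discord, you can post a sample payload straight to the webhook. A minimal sketch, assuming the placeholder webhook URL from the .env example above; all IDs are made up:

```python
import requests

# Sample payload matching what the bot sends to the "Get Discord Messages" webhook.
payload = {
    "channel_id": "123456789012345678",   # hypothetical channel ID
    "chat_message": "Hello bot, what can you do?",
    "timestamp": "2024-01-01 12:00:00+00:00",
    "message_id": "987654321098765432",   # hypothetical message ID
    "user_id": "111111111111111111",      # hypothetical user ID
}

resp = requests.post(
    "https://your-domain.tld/webhook/getmessage",  # replace with your n8n webhook URL
    json=payload,
    timeout=30,
)
print(resp.status_code, resp.text)
```

If the workflow is active, the response should confirm receipt, and a reply thread will not appear (there is no real Discord message to thread under), but you can watch the execution in the n8n editor.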
by Aurélien P.
🌤️ Daily Weather Forecast Bot

A comprehensive n8n workflow that fetches detailed weather forecasts from OpenWeatherMap and sends beautifully formatted daily summaries to Telegram.

📋 Features
- **Daily Overview**: Complete temperature range, rainfall totals, and wind conditions
- **Hourly Forecast**: Weather predictions at key times (9AM, 12PM, 3PM, 6PM, 9PM)
- **Smart Emojis**: Context-aware weather icons and temperature indicators
- **Smart Recommendations**: Contextual advice (umbrella alerts, clothing suggestions, sun protection)
- **Enhanced Details**: Feels-like temperature, humidity levels, wind speed, UV warnings
- **Rich Formatting**: HTML-formatted messages with emojis for excellent readability
- **Timezone-Aware**: Proper handling of the Luxembourg timezone (CET/CEST)

🛠️ What This Workflow Does
1. Triggers daily at 7:50 AM to send morning weather updates
2. Fetches the 5-day forecast from the OpenWeatherMap API with 3-hour intervals
3. Processes and analyzes the weather data with smart algorithms
4. Formats a comprehensive report with HTML styling and emojis
5. Sends it to Telegram with professional formatting and actionable insights

⚙️ Setup Instructions

1. OpenWeatherMap API
- Sign up at OpenWeatherMap
- Get your free API key (1000 calls/day included)
- Replace API_KEY in the HTTP Request node URL

2. Telegram Bot
- Message @BotFather on Telegram
- Send the /newbot command and follow the instructions
- Copy the bot token into n8n credentials
- Get your chat ID by messaging the bot, then visiting: https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates
- Update the chatId parameter in the Telegram node

3. Location Configuration
- Default location: Strassen, Luxembourg
- To change it: modify q=Strassen in the HTTP Request URL
- Format: q=CityName,CountryCode (e.g., q=Paris,FR)

🎯 Technical Details
- **API Source**: OpenWeatherMap 5-day forecast
- **Schedule**: Daily at 7:50 AM (configurable)
- **Format**: HTML with rich emoji formatting
- **Error Handling**: 3 retry attempts with 5-second delays
- **Rate Limits**: Uses only 1 API call per day
- **Timezone**: Europe/Luxembourg (handles CET/CEST automatically)

📊 Weather Data Analyzed
- Temperature ranges and "feels like" temperatures
- Precipitation forecasts and accumulation
- Wind speed and conditions
- Humidity levels and comfort indicators
- Cloud coverage and visibility
- UV index recommendations
- Time-specific weather patterns

💡 Smart Features
- **Conditional Recommendations**: Only shows relevant advice
- **Night/Day Awareness**: Different emojis for time of day
- **Temperature Context**: Color-coded temperature indicators
- **Weather Severity**: Appropriate icons for weather intensity
- **Humidity Comfort**: Comfort level indicators
- **Wind Analysis**: Descriptive wind condition text

🔧 Customization Options
- **Schedule**: Modify the trigger time in the Schedule node
- **Location**: Change the city in the HTTP Request URL
- **Forecast Hours**: Adjust the desiredHours array in the code
- **Temperature Thresholds**: Modify the emoji temperature ranges
- **Recommendation Logic**: Customize the advice triggers

📱 Sample Output

🌤️ Weather Forecast for Strassen, LU
📅 Monday, 2 June 2025

📊 Daily Overview
🌡️ Range: 12°C - 22°C
💧 Comfortable (65%)

⏰ Hourly Forecast
🕒 09:00 ☀️ 15°C
🕒 12:00 🌤️ 20°C
🕒 15:00 ☀️ 22°C (feels 24°C)
🕒 18:00 ⛅ 19°C
🕒 21:00 🌙 16°C

📡 Data from OpenWeatherMap | Updated: 07:50 CET

🚀 Getting Started
1. Import this workflow into your n8n instance
2. Add your OpenWeatherMap API key
3. Set up the Telegram bot credentials
4. Test manually first
5. Activate for daily automated runs

📋 Requirements
- n8n instance (cloud or self-hosted)
- Free OpenWeatherMap API account
- Telegram bot token
- Basic understanding of n8n workflows
Perfect for: Daily weather updates, team notifications, personal weather tracking, smart home automation triggers.
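As a reference for the Location and Forecast Hours settings above, here is a minimal sketch of the request the HTTP Request node makes and how the key hours are picked out. It assumes the standard OpenWeatherMap 5-day/3-hour forecast endpoint with units=metric so temperatures come back in °C; the real workflow also converts timestamps to Europe/Luxembourg, which is skipped here:

```python
import requests

API_KEY = "YOUR_OPENWEATHERMAP_API_KEY"  # your own key goes here

# 5-day / 3-hour forecast endpoint used by the HTTP Request node
url = (
    "https://api.openweathermap.org/data/2.5/forecast"
    f"?q=Strassen,LU&units=metric&appid={API_KEY}"
)
data = requests.get(url, timeout=30).json()

# Keep only the entries whose hour matches the report times
desired_hours = [9, 12, 15, 18, 21]  # mirrors the desiredHours array in the workflow code
for entry in data.get("list", []):
    hour = int(entry["dt_txt"][11:13])  # dt_txt format: "YYYY-MM-DD HH:MM:SS" (UTC)
    if hour in desired_hours:
        temp = round(entry["main"]["temp"])
        feels = round(entry["main"]["feels_like"])
        desc = entry["weather"][0]["description"]
        print(f"{entry['dt_txt'][11:16]}  {temp}°C (feels {feels}°C)  {desc}")
```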
by Derek Cheung
Purpose of workflow:
The purpose of this workflow is to automate scraping of a website, transform the result into a structured format, and load it directly into a Google Sheets spreadsheet.

How it works:
- **Web Scraping**: Uses the Jina AI service to scrape website data and convert it into LLM-friendly text.
- **Information Extraction**: Employs an AI node to extract specific book details (title, price, availability, image URL, product URL) from the scraped data.
- **Data Splitting**: Splits the extracted information into individual book entries.
- **Google Sheets Integration**: Automatically populates a Google Sheets spreadsheet with the structured book data.

Step by step setup:
1. Set up the Jina AI service: sign up for a Jina AI account and obtain an API key.
2. Configure the HTTP Request node: enter the Jina AI URL with the target website, and add the API key to the request headers for authentication.
3. Set up the Information Extractor node:
   - Use Claude AI to generate a JSON schema for data extraction.
   - Upload a screenshot of the target website to Claude AI.
   - Ask Claude AI to suggest a JSON schema for extracting the required information.
   - Copy the generated schema into the Information Extractor node.
4. Configure the Split node: set it up to separate the extracted data into individual book entries.
5. Set up the Google Sheets node:
   - Create a Google Sheets spreadsheet with columns for title, price, availability, image URL, and product URL.
   - Configure the node to map the extracted data to the appropriate columns.
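For step 2, the HTTP Request node typically calls Jina's Reader service by prefixing the target URL. A minimal sketch, assuming the r.jina.ai reader endpoint with a Bearer-token header (the target URL is a hypothetical example; check your Jina dashboard for the exact authentication scheme):

```python
import requests

JINA_API_KEY = "YOUR_JINA_API_KEY"          # obtained from your Jina AI account
TARGET_URL = "https://books.toscrape.com/"  # hypothetical target website

# Jina's reader service returns the page as LLM-friendly text
resp = requests.get(
    f"https://r.jina.ai/{TARGET_URL}",
    headers={"Authorization": f"Bearer {JINA_API_KEY}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.text[:500])  # LLM-friendly markdown/text of the scraped page
```

The returned text is what the Information Extractor node then parses against the Claude-generated JSON schema.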
by Pat
Who is this for?
This workflow template is perfect for content creators, researchers, students, or anyone who regularly works with audio files and needs to transcribe and summarize them for easy reference and organization.

What problem does this workflow solve?
Transcribing audio files and summarizing their content can be time-consuming and tedious when done manually. This workflow automates the process, saving valuable time and effort while ensuring accurate transcriptions and concise summaries.

What this workflow does
This template automates the following steps:
1. Monitors a specified Google Drive folder for new audio files
2. Sends the audio file to OpenAI's Whisper API for transcription
3. Passes the transcribed text to GPT-4 for summarization
4. Creates a new page in Notion with the summary

Setup
To set up this workflow:
1. Connect your Google Drive, OpenAI, and Notion accounts to n8n
2. Configure the Google Drive node with the folder you want to monitor for new audio files
3. Set up the OpenAI node with your API key and desired parameters for Whisper and GPT-4
4. Specify the Notion database where you want the summaries to be stored

How to customize this workflow
- Adjust the Google Drive folder being monitored
- Modify the OpenAI node parameters to fine-tune the transcription and summarization process
- Change the Notion database or page properties to match your preferred structure

With this AI-powered workflow, you can effortlessly transcribe audio files, generate concise summaries, and store them in a structured manner within Notion. Streamline your audio content processing and organization with this automated template.
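The transcription and summarization steps map to two OpenAI API calls. A minimal sketch using the OpenAI Python SDK (the file name, model choice, and prompt are illustrative; the n8n OpenAI node wraps the same endpoints):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe the audio file with Whisper
with open("meeting.mp3", "rb") as audio:  # hypothetical file picked up from the Drive folder
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio,
    )

# Step 2: summarize the transcript with a GPT-4 class model
summary = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any GPT-4 family model works
    messages=[
        {"role": "system", "content": "Summarize the transcript in a few concise bullet points."},
        {"role": "user", "content": transcript.text},
    ],
)
print(summary.choices[0].message.content)  # this text becomes the Notion page body
```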
by Yaron Been
Automated system to track and analyze the technology stacks used by target companies, helping you identify decision-makers and technology trends.

🚀 What It Does
- Tracks the technology stack of target companies
- Identifies key decision-makers (CTOs, Tech Leads)
- Monitors technology changes and updates
- Provides competitive intelligence
- Generates actionable insights

🎯 Perfect For
- B2B SaaS companies
- Technology vendors
- Sales and business development teams
- Competitive intelligence analysts
- Market researchers

⚙️ Key Benefits
✅ Identify potential customers
✅ Stay ahead of technology trends
✅ Target decision-makers effectively
✅ Monitor competitor technology stacks
✅ Data-driven sales strategies

🔧 What You Need
- BuiltWith API key
- n8n instance
- CRM integration (optional)
- Email/Slack for alerts

📊 Data Tracked
- Company technologies
- Hosting providers
- Frameworks and libraries
- Analytics tools
- Marketing technologies

🛠️ Setup & Support
Quick Setup: deploy in 20 minutes with our step-by-step guide
📺 Watch Tutorial
💼 Get Expert Support
📧 Direct Help

Gain a competitive edge by understanding the technology landscape of your target market.
by Manuel
Effortlessly optimize your workflow by automatically importing hundreds of manufacturers from a Google Sheet into your Shopware online store, saving countless hours of manual work.

How it works
1. Retrieve all manufacturers from a Google Sheet
2. Add each manufacturer to Shopware via the Shopware sync API endpoint
3. Upload a logo for each manufacturer from a provided public URL to Shopware

Set Up Steps
1. Add your Shopware URL to the first node, called Settings.
2. Create a Google Sheet in your Google account with the following columns (Demo Sheet):
   - name (the name of the manufacturer; must be unique and is required)
   - website (URL of the manufacturer's website)
   - description
   - logo_url (public manufacturer logo URL; must be a png, jpg, or svg file)
   - translation_language_code_1 (optional; language code of your language, for example 'es-ES' for Spanish. A language with this code must exist in your Shopware shop.)
   - translation_name_1 (optional; manufacturer name translated into the language defined in translation_language_code_1)
   - translation_description_1 (optional; manufacturer description translated into the language defined in translation_language_code_1)
   - translation_language_code_2 (optional; same as translation_language_code_1, for another language)
   - translation_name_2 (optional; same as translation_name_1, for another language)
   - translation_description_2 (optional; same as translation_description_1, for another language)
   - translation_language_code_3 (optional; same as translation_language_code_1, for another language)
   - translation_name_3 (optional; same as translation_name_1, for another language)
   - translation_description_3 (optional; same as translation_description_1, for another language)
3. Connect to your Google account.
4. Connect to your Shopware account:
   - Create a Shopware Integration.
   - Connect to Shopware at the "Import Manufacturer" and "Upload Manufacturer Logo" nodes using Generic OAuth2 API authentication with grant type "Client Credentials". The Access Token URL is https://your-shopware-domain.com/api/oauth/token.
5. Run the workflow.
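For reference, the call made by the "Import Manufacturer" node corresponds roughly to a request against Shopware 6's sync endpoint. A minimal sketch, assuming a client-credentials token obtained from the Access Token URL above; the payload field names follow Shopware's product_manufacturer entity, but verify them against your Shopware version:

```python
import requests

SHOP_URL = "https://your-shopware-domain.com"  # same placeholder as in the setup steps

# 1) Get an access token via OAuth2 client credentials (the Integration you created)
token = requests.post(
    f"{SHOP_URL}/api/oauth/token",
    json={
        "grant_type": "client_credentials",
        "client_id": "YOUR_CLIENT_ID",        # from the Shopware Integration
        "client_secret": "YOUR_CLIENT_SECRET",
    },
    timeout=30,
).json()["access_token"]

# 2) Upsert one manufacturer row from the sheet via the sync endpoint
resp = requests.post(
    f"{SHOP_URL}/api/_action/sync",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "write-manufacturers": {                    # arbitrary operation key
            "entity": "product_manufacturer",
            "action": "upsert",
            "payload": [
                {
                    "name": "Acme GmbH",                     # sheet column: name
                    "link": "https://acme.example",          # sheet column: website
                    "description": "Example manufacturer",   # sheet column: description
                }
            ],
        }
    },
    timeout=30,
)
resp.raise_for_status()
```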
by Airtop
Extracting LinkedIn Profile Information

Use Case
Manually copying data from LinkedIn profiles is time-consuming and error-prone. This automation helps you extract structured, detailed information from any public LinkedIn profile, enabling fast enrichment, hiring research, or lead scoring.

What This Automation Does
This automation extracts profile details from a LinkedIn URL using the following input parameters:
- **airtop_profile**: The name of your Airtop Profile connected to LinkedIn.
- **linkedin_url**: The URL of the LinkedIn profile you want to extract data from.

How It Works
1. Starts with a form trigger or via another workflow.
2. Assigns the LinkedIn URL and Airtop profile variables.
3. Opens the LinkedIn profile in a real browser session using Airtop.
4. Uses an AI prompt to extract structured information, including:
   - Name, headline, location
   - Current company and position
   - About section, experience, and education history
   - Skills, certifications, languages, connections, and recommendations
5. Returns structured JSON ready for further use or storage.

Setup Requirements
- Airtop API Key (free to generate)
- An Airtop Profile connected to LinkedIn (requires a one-time login)

Next Steps
- **Sync with CRM**: Push extracted data into HubSpot, Salesforce, or Airtable for lead enrichment.
- **Combine with Search Automation**: Use with a LinkedIn search scraper to process profiles in bulk.
- **Adapt to Other Platforms**: Customize the prompt to extract structured data from GitHub, Twitter, or company sites.

Read more about the Extract LinkedIn Profile Information automation.
by Martijn Smit
This workflow template helps Todoist users get a weekly overview of their completed tasks via email, making it easier to review the past week.

Why use this workflow?
Todoist doesn't provide completed-task reports or filters in its built-in reports or in the n8n app. This workflow solves that by using Todoist's public API to fetch your completed tasks.

How it works
- Runs every Friday afternoon (or manually).
- Uses the Todoist public API to retrieve completed tasks.
- Excludes specific projects you set (e.g., a grocery list).
- Sends an email summary, grouping tasks by the day they were completed.

Set up steps
1. Copy your Todoist API token (found here).
2. Create a Todoist API credential in n8n.
3. Create an SMTP credential in n8n. Alternatively, use a preferred email service like Brevo, Mailjet, etc.
4. Import this workflow template.
5. In the Get completed tasks via Todoist API step, select your Todoist API credential.
6. In the Send Email step, select your SMTP credential and set the sender and recipient email addresses.
7. Run the workflow manually and check your inbox!

Ignoring specific projects
If you do not want your grocery list, workouts, or other tasks from specific Todoist projects showing up in your weekly summary, modify the step called Optional: Ignore specific projects and change this line:

```
const ignoredProjects = ['2335544024'];
```

This should be an array with the id of each project you'd like to ignore. You can find a list of your projects (including their ids) by visiting this link: https://api.todoist.com/rest/v2/projects
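For reference, fetching and grouping completed tasks outside n8n looks roughly like this. A minimal sketch, assuming Todoist's Sync v9 completed-tasks endpoint (the REST v2 link above only lists projects; completed items come from the Sync API):

```python
import requests
from collections import defaultdict

TODOIST_TOKEN = "YOUR_TODOIST_API_TOKEN"  # the token copied in step 1
IGNORED_PROJECTS = {"2335544024"}         # same ids as in the Ignore specific projects step

resp = requests.get(
    "https://api.todoist.com/sync/v9/completed/get_all",
    headers={"Authorization": f"Bearer {TODOIST_TOKEN}"},
    params={"limit": 200},
    timeout=30,
)
resp.raise_for_status()

# Group completed tasks by completion day, skipping ignored projects
by_day = defaultdict(list)
for item in resp.json()["items"]:
    if str(item["project_id"]) in IGNORED_PROJECTS:
        continue
    day = item["completed_at"][:10]  # "YYYY-MM-DD"
    by_day[day].append(item["content"])

for day in sorted(by_day):
    print(day)
    for task in by_day[day]:
        print(f"  - {task}")
```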
by Yaron Been
Workflow Overview
This cutting-edge n8n automation is a sophisticated market research and intelligence gathering tool designed to transform web content discovery into actionable insights. By intelligently combining web crawling, AI-powered filtering, and smart summarization, this workflow:

1. Discovers Relevant Content:
   - Automatically crawls target websites
   - Identifies trending topics
   - Extracts comprehensive article details
2. Intelligent Content Filtering:
   - Applies custom keyword matching
   - Filters for the most relevant articles
   - Ensures high-quality information capture
3. AI-Powered Summarization:
   - Generates concise, meaningful summaries
   - Extracts key insights
   - Provides quick, digestible information
4. Seamless Delivery:
   - Sends summaries directly to Slack
   - Enables instant team communication
   - Facilitates rapid information sharing

Key Benefits
- 🤖 Full Automation: Continuous market intelligence
- 💡 Smart Filtering: Precision content discovery
- 📊 AI-Powered Insights: Intelligent summarization
- 🚀 Instant Delivery: Real-time team updates

Workflow Architecture

🔹 Stage 1: Content Discovery
- **Scheduled Trigger**: Daily market research
- **FireCrawl Integration**: Web content crawling
- **Comprehensive Site Scanning**: Extracts article metadata, captures full article content, identifies key information sources

🔹 Stage 2: Intelligent Filtering
- **Keyword-Based Matching**
- **Relevance Assessment**
- **Custom Domain Optimization**: AI and technology focus, startup and innovation tracking

🔹 Stage 3: AI Summarization
- **OpenAI GPT Integration**
- **Contextual Understanding**
- **Concise Insight Generation**: 3-point summary format, captures essential information

🔹 Stage 4: Team Notification
- **Slack Integration**
- **Instant Information Sharing**
- **Formatted Insight Delivery**

Potential Use Cases
- **Market Research Teams**: Trend tracking
- **Innovation Departments**: Technology monitoring
- **Startup Ecosystems**: Competitive intelligence
- **Product Management**: Industry insights
- **Strategic Planning**: Rapid information gathering

Setup Requirements
- FireCrawl API: web crawling credentials, configured crawling parameters
- OpenAI API: GPT model access, summarization configuration, API key management
- Slack Workspace: channel for insights delivery, appropriate app permissions, webhook configuration
- n8n Installation: cloud or self-hosted instance, workflow configuration, API credential management

Future Enhancement Suggestions
- 🤖 Multi-source crawling
- 📊 Advanced sentiment analysis
- 🔔 Customizable alert mechanisms
- 🌐 Expanded topic tracking
- 🧠 Machine learning refinement

Technical Considerations
- Implement robust error handling (a sketch of one approach follows this section)
- Use exponential backoff for API calls
- Maintain flexible crawling strategies
- Ensure compliance with website terms of service

Ethical Guidelines
- Respect content creator rights
- Use data for legitimate research
- Maintain transparent information gathering
- Provide proper attribution

Workflow Visualization
[Daily Trigger]
⬇️
[Web Crawling]
⬇️
[Content Filtering]
⬇️
[AI Summarization]
⬇️
[Slack Delivery]

Connect With Me
Ready to revolutionize your market research?
📧 Email: Yaron@nofluff.online
🎥 YouTube: @YaronBeen
💼 LinkedIn: Yaron Been

Transform your information gathering with intelligent, automated workflows!

#AIResearch #MarketIntelligence #AutomatedInsights #TechTrends #WebCrawling #AIMarketing #InnovationTracking #BusinessIntelligence #DataAutomation #TechNews
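One of the technical considerations above, exponential backoff for API calls, can be sketched as a generic helper. This is not tied to any specific node; it simply shows the retry pattern recommended for the FireCrawl and OpenAI requests:

```python
import time
import requests

def get_with_backoff(url, headers=None, max_retries=5):
    """Fetch a URL, retrying with exponential backoff on rate limits or server errors."""
    delay = 1.0
    resp = None
    for attempt in range(max_retries):
        resp = requests.get(url, headers=headers, timeout=30)
        # Success or a non-retryable client error: return immediately
        if resp.status_code < 500 and resp.status_code != 429:
            return resp
        time.sleep(delay)  # wait 1s, 2s, 4s, 8s, ...
        delay *= 2
    resp.raise_for_status()  # surface the last error if all retries failed
    return resp
```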
by Yang
Who is this for?
This workflow is for social media agencies, influencer marketers, and brand managers who need to automatically qualify TikTok creators based on their follower metrics. It's especially useful for teams managing influencer outreach campaigns or building talent databases.

What problem is this workflow solving?
Manually tracking TikTok user stats is time-consuming and inconsistent. This automation instantly pulls TikTok profile data and only saves creators who meet a defined follower threshold. It removes manual vetting, reduces spreadsheet work, and makes influencer qualification scalable.

What this workflow does
This workflow uses Airtable as the trigger, Dumpling AI to scrape TikTok profile information, and a logic condition to check whether the profile has more than 100k followers. Qualified profiles are updated with full metrics and stored back in Airtable.

Setup
1. Airtable Setup
   - Create a table with a field named Tik tok username
   - Connect your Airtable account to n8n using a Personal Access Token
   - Set up a trigger to run when a new TikTok username is added
2. Dumpling AI
   - Sign up at Dumpling AI
   - Create a Dumpling AI credential in n8n using your API key
   - The HTTP node sends the TikTok handle to Dumpling's /get-tiktok-profile endpoint
3. Configure Filter
   - The IF node checks if followerCount is greater than or equal to 100000
4. Airtable Update
   - If qualified, the record is updated with: ID (TikTok ID), followerCount, followingCount, heartCount, videoCount

How to customize this workflow to your needs
- Change the follower count threshold to fit your campaign (e.g., 10K, 500K, 1M)
- Add fields like engagement rate, niche tags, or scraped bio
- Chain additional steps like sending approved creators to your CRM or triggering outreach messages
- Add another filter to exclude private or inactive accounts
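The HTTP call and follower filter can be sketched as follows. Note the assumptions: only the /get-tiktok-profile path is documented above, so the base URL, request body shape, and response field names other than the five listed metrics are guesses; check Dumpling AI's API docs for the exact contract:

```python
import requests

DUMPLING_API_KEY = "YOUR_DUMPLING_API_KEY"

# Assumption: base URL and JSON body shape are illustrative, not official
resp = requests.post(
    "https://app.dumplingai.com/api/v1/get-tiktok-profile",
    headers={"Authorization": f"Bearer {DUMPLING_API_KEY}"},
    json={"handle": "some_creator"},  # the Tik tok username field from Airtable
    timeout=30,
)
profile = resp.json()

# Mirror of the IF node: qualify creators with >= 100k followers
if profile.get("followerCount", 0) >= 100_000:
    update = {
        "ID": profile.get("id"),
        "followerCount": profile.get("followerCount"),
        "followingCount": profile.get("followingCount"),
        "heartCount": profile.get("heartCount"),
        "videoCount": profile.get("videoCount"),
    }
    print("Qualified:", update)  # in the workflow, this updates the Airtable record
else:
    print("Not qualified")
```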
by Ahmed Saadawi
⚠️ This Workflow Requires a Community Node and a Self-Hosted n8n Instance

> This workflow uses the Vtiger CRM community node. To use it, you must be running a self-hosted version of n8n with Community Nodes enabled.

🔧 How to Install the Node
1. Go to Settings → Community Nodes
2. Click Install Node
3. Enter the package name: n8n-nodes-vtiger-crm
4. Restart your n8n instance if prompted

💬 Real-time Vtiger Support Tickets to Telegram with Auto Status Updates

📌 Overview
Keep your support team instantly informed when new tickets are created in Vtiger CRM. This workflow:
- Fetches the most recent ticket marked as Open
- Sends its details to a Telegram chat
- Updates the status in Vtiger to In Progress to prevent re-sending

🔄 What This Workflow Does
- 📨 Pulls the latest open ticket from Vtiger HelpDesk
- 📲 Sends a rich-text message to Telegram with all key ticket details
- 🔁 Updates the ticket's status to "In Progress"

📲 Telegram Output Example
> New ticket with the following details:
> Ticketid: TT2
> Title: Internet down
> Status: Open
> Priority: High
> Severity: Minor
> Category: Small Problem
> Description: The internet was slow from yesterday and today is down completely

🛠️ Setup Instructions

🔗 Telegram Bot Setup
1. Open Telegram and search for @BotFather
2. Run /newbot and follow the instructions
3. Save the bot token
4. Add the bot to your chat or group
5. Use @userinfobot to get your chat_id
6. Paste the token and chat ID into the Telegram node inside n8n

🔗 Vtiger CRM Setup
- Make sure your Vtiger HelpDesk module includes: ticket_no, ticket_title, ticketstatus, ticketpriorities, ticketseverities, ticketcategories, description
- Connect your Vtiger API credentials inside n8n

👥 Who This Is For
- Customer support and IT helpdesk teams using Vtiger CRM
- Teams that want instant alerts in Telegram
- Anyone syncing CRM activity with chat-based notifications

🔐 Credentials Required
- ✅ Vtiger CRM API credentials
- ✅ Telegram Bot Token

🏷 Tags
vtiger, telegram, crm automation, helpdesk alerts, no-code crm, realtime notifications, n8n telegram integration, support ticket automation, self-hosted n8n, community nodes, workflow automation, vtiger crm integration, helpdesk sync, n8n crm alerts
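Under the hood, the community node talks to Vtiger's webservice API. A minimal sketch of the equivalent raw calls behind the "latest open ticket" step, assuming the standard webservice.php challenge-response login (the node handles all of this for you; the domain is a placeholder):

```python
import hashlib
import requests

BASE = "https://your-vtiger-domain.com/webservice.php"  # hypothetical Vtiger URL
USERNAME = "admin"
ACCESS_KEY = "YOUR_ACCESS_KEY"  # from My Preferences in Vtiger

# 1) Challenge-response login: get a token, then hash it with your access key
token = requests.get(
    BASE, params={"operation": "getchallenge", "username": USERNAME}, timeout=30
).json()["result"]["token"]
session = requests.post(
    BASE,
    data={
        "operation": "login",
        "username": USERNAME,
        "accessKey": hashlib.md5((token + ACCESS_KEY).encode()).hexdigest(),
    },
    timeout=30,
).json()["result"]["sessionName"]

# 2) Query the most recent open ticket (what the workflow's first step does)
query = (
    "SELECT ticket_no, ticket_title, ticketstatus, ticketpriorities, "
    "ticketseverities, ticketcategories, description FROM HelpDesk "
    "WHERE ticketstatus = 'Open' ORDER BY createdtime DESC LIMIT 1;"
)
tickets = requests.get(
    BASE,
    params={"operation": "query", "sessionName": session, "query": query},
    timeout=30,
).json()["result"]
print(tickets)
```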
by Alexander Bentlund
Search music and play to Spotify from Telegram

This workflow is a simple demonstration of driving a message model from Telegram, and it makes searching for songs easy even if you can't remember the artist or song name. An OpenAI message model tries to figure out the song and sends it to an active Spotify device**.

Use case
Imagine an office where music plays in the background and employees can control it without having to log in to the playing account.

How it works
1. You describe the song in Telegram.
2. The Telegram bot sends the text to n8n.
3. An OpenAI message model tries to find the song.
4. Spotify gets the search query string.
5. The first match is added to the queue. If there is no match, a message is sent to Telegram and the process ends.
6. We change to the next track in the list.
7. We make sure the song starts playing by trying to resume.
8. We fetch the currently playing track.
9. We return "now playing" information to Telegram: Song Name - Artist Name - Album Name.

Error handling
Every Spotify step has its own error handler under settings where we output the error. The message parser receives the error and sends it to Telegram.

Requirements
- Active workflow*
- OpenAI API key
- Telegram bot
- Spotify account and OAuth2 API
- Spotify active on a device**

* The Telegram trigger fires only while this workflow is active. You can, however, TEST the workflow in the editor by clicking "Test step"; it then waits for the Telegram event. When the event is received, step through all the steps or click "Test step" on the "Fetch Now Playing" node.

** You must have an active Spotify device when trying to communicate with a device. Open Spotify and play something; now it is active.
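The Spotify steps map to a handful of Web API calls. A minimal sketch using the spotipy library (the search text stands in for the AI-cleaned query; scopes and device handling are simplified, and the n8n Spotify node performs the same calls):

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# OAuth2 scopes needed to read playback state and control the queue
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-read-playback-state user-modify-playback-state",
))

query = "that upbeat 80s song about calling someone maybe"  # hypothetical AI output
results = sp.search(q=query, type="track", limit=1)
tracks = results["tracks"]["items"]

if not tracks:
    print("No match found")  # the workflow reports this back to Telegram
else:
    sp.add_to_queue(tracks[0]["uri"])  # queue the first match
    sp.next_track()                    # skip to it
    try:
        sp.start_playback()            # resume in case playback was paused
    except spotipy.SpotifyException:
        pass                           # already playing
    now = sp.current_playback()        # fetch "now playing" info
    item = now["item"]
    print(f'{item["name"]} - {item["artists"][0]["name"]} - {item["album"]["name"]}')
```

Remember the active-device requirement noted above: every call after the search fails if no Spotify device is currently active.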