by Karam Ghazzi
## Description 📄

Turn your Slack workspace into a smart, AI-powered HelpDesk with this workflow. The automation listens to Slack messages and uses an AI assistant (powered by OpenAI or any other LLM) to answer employee questions about HR, IT, or internal policies by referencing your internal documentation (such as the Policy Handbook). If an answer isn't available, it can optionally email the relevant department (HR or IT) and ask them to update the handbook. It remembers recent messages per user, cleans up intermediate responses to keep Slack threads tidy, and ensures your team gets consistent, helpful answers without manually searching docs or escalating simple questions. Perfect for growing teams who want to streamline internal support using n8n, Slack, and AI.

## How it works 🛠️

This workflow turns n8n into a Slack-based HelpDesk assistant powered by AI. It listens to Slack messages using the Events API, detects whether a real user is asking a question, and responds using OpenAI (or another LLM of your choice). Here's how it works, step by step:

1. **Webhook Trigger**: The workflow starts when a message is posted in Slack via the Events API. It filters out any messages from bots to avoid loops (see the sketch at the end of this section).
2. **Identify the User**: It fetches the full Slack profile of the user who posted the message and stores their name.
3. **Send Receipt Message**: An initial message is sent to the user saying, "I'm on it!", confirming their request is being processed.
4. **AI Response Handling**: The message is processed using the OpenAI Chat model (GPT-4o by default). Before responding, it checks whether the query matches any HR or IT policy from the Policy Handbook. If the question can't be answered based on internal data, it can optionally alert the HR or IT department via Gmail (after user confirmation).
5. **Memory Retention**: It keeps track of the last 5 interactions per user using Simple Memory, so it remembers previous context in a Slack conversation.
6. **Cleanup and Final Reply**: It deletes the initial receipt message and sends a final, clean response to the user.

## How to use 🚀

1. **Clone the Workflow**: Download or import the JSON workflow into your n8n instance.
2. **Connect Your Credentials**:
   - Slack API (for messaging)
   - Google Sheets API (for department contact info)
   - Google Docs API (for the Policy Handbook)
   - Gmail API (optional, for notifying departments)
   - OpenAI or another AI model
3. **Slack Setup**: Set up a Slack App and enable the Events API. Subscribe to message events and point them to the Webhook URL generated by the workflow.
4. **Customize Responses**: Edit the initial and final Slack message nodes if you want to personalize the wording. Swap out the LLM (ChatGPT) for your preferred model in the AI Agent node.
5. **Adjust AI Behavior**: Tune the prompt logic in the "AI Agent" node if you want the AI to behave differently or access different data sources.
6. **Expand Memory or Integrations**: Use external databases to store longer histories. Integrate with tools like Asana, Notion, or CRM platforms for further automation.

## Requirements 📋

- n8n (self-hosted or cloud)
- Slack Developer Account & App
- OpenAI (or any LLM provider)
- Google Sheets with department contact details
- Google Docs containing the Policy Handbook
- Gmail account (optional, for email alerts)
- Knowledge of Slack Events API setup
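As a rough illustration of the bot-filtering step, the logic inside an n8n Code node might look like the sketch below. The payload fields (`event.bot_id`, `event.subtype`) follow the Slack Events API message schema; the exact item shape reaching your node depends on how the webhook is wired, so treat that part as an assumption.

```javascript
// Minimal sketch: drop Slack events posted by bots so the assistant
// never replies to itself and creates a loop.
// Assumes the webhook item carries the Slack Events API body:
// { event: { type, user, text, bot_id?, subtype?, ... } }
const event = $json.body?.event ?? {};

const isBotMessage =
  Boolean(event.bot_id) ||           // bot-authored messages carry bot_id
  event.subtype === 'bot_message';   // or use the classic bot_message subtype

// Returning an empty array ends the run for bot messages;
// human messages pass through for AI handling.
return isBotMessage ? [] : [{ json: event }];
```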
by Jimleuk
This n8n template is one of a 3-part series exploring use-cases for clustering vector embeddings:

- Survey Insights
- Customer Insights
- Community Insights

This template demonstrates the Community Insights scenario, where HN comments can be quickly grouped by similarity and an AI agent can generate insights on those groupings.

With this workflow, researchers or HN users can quickly break down community consensus on a particular topic and identify frequently mentioned positives and negatives.

Sample Output: https://docs.google.com/spreadsheets/d/e/2PACX-1vQXaQU9XxsxnUIIeqmmf1PuYRuYtwviVXTv6Mz9Vo6_a4ty-XaJHSeZsptjWXS3wGGDG8Z4u16rvE7l/pubhtml

## How it works

- HN comments are imported via the Hackernews API node.
- Comments are then inserted into a Qdrant collection, carefully tagged with the Hackernews API metadata.
- Comments are then fetched and put through a clustering algorithm using the Python Code node. The Qdrant points are returned in clustered groups (see the sketch after this section for the underlying idea).
- Each group is looped over to fetch the payloads of its points, which are fed to the AI agent to summarise and generate insights from.
- The resulting insights and raw responses are then saved to the Google Spreadsheet for further analysis by the researcher or HN user.

## Requirements

- Works best with lots of comments!
- Qdrant vector store for storing embeddings.
- OpenAI account for embeddings and LLM.

## Customising the Template

- Adjust the clustering parameters to values that make sense for your data.
- Adjust the sentiment setting if comments are overwhelmingly negative at times.
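To give a feel for the clustering step: the template does this in a Python Code node (typically with a library algorithm such as K-means or HDBSCAN), but the core idea of grouping vectors by cosine similarity can be sketched in a few lines. The snippet below is illustrative only, not the template's actual code, and uses a naive similarity threshold in place of a proper clustering algorithm.

```javascript
// Illustrative only: naive threshold-based grouping of embedding vectors,
// showing the idea behind the template's clustering step.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// points: [{ id, vector }] as fetched from a vector store like Qdrant
function clusterPoints(points, threshold = 0.8) {
  const clusters = [];
  for (const point of points) {
    // Attach the point to the first cluster whose seed is similar enough.
    const home = clusters.find(
      (c) => cosineSimilarity(c.seed.vector, point.vector) >= threshold
    );
    if (home) home.members.push(point.id);
    else clusters.push({ seed: point, members: [point.id] });
  }
  // Return groups of point IDs, ready for payload lookups.
  return clusters.map((c) => c.members);
}
```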
by Jimleuk
This n8n template demonstrates how you can automate community moderation using human-in-the-loop functionality for Discord.

The use-case is detecting and dealing with spam messages in a predefined and consistent way. Human-in-the-loop strikes a balance between overly aggressive bots and the time and effort of the moderation team.

## How it works

- A scheduled trigger is used to scan the most recent messages in a Discord channel.
- Messages are tagged via the "Remove Duplicates" node so they don't get processed again in the future.
- Messages are grouped by user to minimise the number of notifications sent (see the grouping sketch after this section).
- An AI text classifier node is then used to detect spam in each user's messages.
- When spam is detected, a notification is sent to a moderation channel using the Send-and-wait mode for Discord. This notification comes with an n8n form and a dropdown list of predefined actions for dealing with the spam messages. Once sent, the workflow waits until a response is received.
- Once a moderator selects an action, the workflow continues and carries out the predefined moderation action.

## How to use

- Depending on how busy your community is and how much spam it attracts, you may need to increase the schedule interval.
- Add as many or as few moderation actions as required.
- Remember to activate the workflow to get it started.

## Requirements

- Discord channel for messages to moderate
- OpenAI for text classification

## Customising this template

- It is possible to cover multiple channels. Add as many as your community needs.
- Not using Discord? The template can also work with Slack or other services that offer the same bot functionality.
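The per-user grouping step can be pictured as a small reduce over the fetched messages, as in this sketch for an n8n Code node. The Discord field names (`author.id`, `content`) match the standard Discord message object; the surrounding item shape is an assumption.

```javascript
// Sketch: group fetched Discord messages by author so each user triggers
// at most one moderation notification per scan.
const items = $input.all();

const byUser = new Map();
for (const { json: message } of items) {
  const userId = message.author?.id ?? 'unknown';
  if (!byUser.has(userId)) byUser.set(userId, []);
  byUser.get(userId).push(message.content);
}

// Emit one output item per user, with their messages bundled together
// for the downstream text classifier.
return [...byUser.entries()].map(([userId, messages]) => ({
  json: { userId, messages },
}));
```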
by RedOne
# 🎙️ AI Audio Assistant with Voice-to-Voice Response

## Who is this for?

Businesses, customer service teams, content creators, and organizations that want to provide intelligent voice-based interactions through Telegram. Perfect for accessibility-focused services, multilingual support, or hands-free customer assistance.

## What problem does this solve?

- Enables natural voice conversations with AI
- Breaks down language and accessibility barriers
- Provides instant voice responses to customer queries
- Reduces typing requirements for users
- Offers 24/7 voice-based customer support
- Maintains conversation context across voice interactions

## What this workflow does

1. Receives voice messages via a Telegram bot
2. Transcribes audio using Deepgram's advanced speech-to-text
3. Processes the transcribed text through an AI agent with knowledge base access
4. Generates intelligent responses based on conversation context
5. Converts the AI response to natural-sounding speech using Deepgram TTS
6. Sends the audio response back to the user via Telegram
7. Maintains conversation memory for contextual interactions

## 🔧 Technical Architecture

### Core Components

- **Telegram Bot**: Receives and sends voice messages
- **Deepgram STT**: Transcribes voice to text with high accuracy
- **OpenAI GPT**: Processes queries and generates responses
- **Supabase Knowledge Base**: Stores and retrieves business information
- **Memory Management**: Maintains conversation context
- **Deepgram TTS**: Converts text responses to natural speech

### Data Flow

1. Voice Message → Telegram API → File Download
2. Audio File → Deepgram STT → Transcript
3. Transcript → AI Agent → Response Generation
4. Response → Deepgram TTS → Audio File
5. Audio Response → Telegram → User

## 🛠️ Setup Instructions

### Prerequisites

1. **Telegram Bot Token**
   - Create a bot via @BotFather
   - Get the bot token and configure the webhook
2. **Deepgram API Key**
   - Sign up at deepgram.com
   - Get an API key for the STT and TTS services
   - Note: currently hardcoded in the workflow
3. **OpenAI API Key**
   - OpenAI account with API access
   - Configure in the OpenAI Chat Model node
4. **Supabase Database**
   - Create a Supabase project
   - Set up the knowledge_base table
   - Configure API credentials

### Step-by-Step Setup

1. **Configure the Telegram Bot**
   - Update telegramToken in the "Prepare Voice Message Data" node
   - Set the correct bot token in the Telegram nodes
   - Test bot connectivity
2. **Set Up the Deepgram Integration**
   - Replace the API key in the "Transcribe with Deepgram" node
   - Update the TTS endpoint in the "HTTP Request" node
   - Test voice transcription accuracy
3. **Configure the Knowledge Base**

```sql
-- Create knowledge_base table in Supabase
CREATE TABLE knowledge_base (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  question TEXT NOT NULL,
  answer TEXT NOT NULL,
  category VARCHAR(100),
  keywords TEXT[],
  created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```

4. **Customize AI Prompts**
   - Update the system message in the "Telegram AI Agent" node
   - Adjust temperature and max tokens in the OpenAI model
   - Configure memory session keys
5. **Test the End-to-End Flow**
   - Send a test voice message to the bot
   - Verify transcription accuracy
   - Check AI response quality
   - Validate audio output clarity

## 🎛️ Configuration Options

### Voice Recognition Settings

- **Model**: nova-2 (Deepgram's latest model)
- **Language**: English (en), can be changed
- **Smart Format**: Enabled for better punctuation

### AI Response Settings

- **Temperature**: 0.3 (conservative responses)
- **Max Tokens**: 100 (adjust based on needs)
- **Memory**: Session-based conversation context

### Text-to-Speech Settings

- **Model**: aura-2-thalia-en (natural female voice)
- **Alternative voices**: Available in the Deepgram TTS API
- **Audio Format**: Optimized for Telegram

## 🔒 Security Considerations

### API Key Management

```javascript
// Current implementation has hardcoded tokens
// Recommended: use environment variables
const telegramToken = process.env.TELEGRAM_BOT_TOKEN;
const deepgramKey = process.env.DEEPGRAM_API_KEY;
```

### Data Privacy

- Voice messages are processed by external APIs
- Consider data retention policies
- Implement user consent mechanisms
- Ensure GDPR compliance if applicable

## 📊 Monitoring & Analytics

### Key Metrics to Track

- Voice message processing time
- Transcription accuracy rates
- AI response quality scores
- User engagement metrics
- Error rates and failure points

### Recommended Logging

```javascript
// Add to the workflow for monitoring
console.log({
  timestamp: new Date().toISOString(),
  user_id: userData.user_id,
  transcript_confidence: transcriptData.confidence,
  response_length: aiResponse.length,
  processing_time: processingTime
});
```

## 🚀 Customization Ideas

### Enhanced Features

1. **Multi-language Support**
   - Add language detection
   - Support multiple TTS voices
   - Translate responses
2. **Voice Commands**
   - Implement wake words
   - Add voice shortcuts
   - Create voice menus
3. **Advanced AI Features**
   - Sentiment analysis
   - Intent classification
   - Escalation triggers
4. **Integration Expansions**
   - Connect to CRM systems
   - Add calendar scheduling
   - Integrate with help desk tools

### Performance Optimizations

- Implement audio preprocessing
- Add response caching
- Optimize API call sequences
- Implement retry mechanisms

## 🐛 Troubleshooting

### Common Issues

1. **Voice Not Transcribing**
   - Check Deepgram API key validity
   - Verify audio format compatibility
   - Test with shorter voice messages
2. **Poor Audio Quality**
   - Adjust TTS model settings
   - Check network connectivity
   - Verify Telegram audio limits
3. **AI Responses Too Generic**
   - Improve knowledge base content
   - Adjust system prompts
   - Increase the context window
4. **Memory Not Working**
   - Check session key configuration
   - Verify user ID extraction
   - Test conversation continuity

## 💡 Best Practices

### Voice Interface Design

- Keep responses concise and clear
- Use natural speech patterns
- Avoid technical jargon
- Provide clear next steps

### Knowledge Base Management

- Regular content updates
- Clear categorization
- Keyword optimization
- Quality assurance testing

### User Experience

- Fast response times (<5 seconds)
- Consistent voice personality
- Graceful error handling
- Clear capability communication

## 📈 Success Metrics

### Technical KPIs

- Response time: <3 seconds average
- Transcription accuracy: >95%
- User satisfaction: >4.5/5
- Uptime: >99.5%

### Business KPIs

- Customer query resolution rate
- Support ticket reduction
- User engagement increase
- Cost per interaction decrease

## 🔄 Maintenance Schedule

### Daily

- Monitor error logs
- Check API rate limits
- Verify service uptime

### Weekly

- Review conversation quality
- Update the knowledge base
- Analyze usage patterns

### Monthly

- Performance optimization
- Security audit
- Feature updates
- User feedback review

## 📚 Additional Resources

### Documentation Links

- Deepgram STT API
- Deepgram TTS API
- Telegram Bot API
- OpenAI API
- Supabase Documentation

### Community Support

- n8n Community Forum
- Telegram Bot Developers Group
- Deepgram Developer Discord
- OpenAI Developer Community

Note: This template requires active API subscriptions for the Deepgram and OpenAI services. Costs may apply based on usage volume.
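To make the STT step concrete, here is a rough sketch of the transcription call the "Transcribe with Deepgram" node performs. The endpoint and Token auth scheme follow Deepgram's documented /v1/listen API, but `getAudioBuffer()` is a hypothetical stand-in for the Telegram file download, and the option details should be verified against current Deepgram docs.

```javascript
// Sketch: transcribe a Telegram voice note with Deepgram's STT API.
const audioBuffer = await getAudioBuffer(); // hypothetical helper: the downloaded Telegram voice file

const response = await fetch(
  'https://api.deepgram.com/v1/listen?model=nova-2&smart_format=true&language=en',
  {
    method: 'POST',
    headers: {
      Authorization: `Token ${process.env.DEEPGRAM_API_KEY}`,
      'Content-Type': 'audio/ogg', // Telegram voice notes are Ogg/Opus
    },
    body: audioBuffer,
  }
);

const result = await response.json();
// The transcript lives in the first channel's first alternative.
const transcript =
  result.results?.channels?.[0]?.alternatives?.[0]?.transcript ?? '';
```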
by Wyeth
# Learn n8n: Interactive Lesson 1

This interactive tutorial teaches you how to build in n8n from scratch, using a live walkthrough with real-time examples. Rather than static documentation, this guided workflow explains key n8n concepts while you execute each step. It is ideal for developers new to n8n but experienced with programming, JSON, and APIs.

## Requirements

- An active n8n instance (cloud or self-hosted)
- Basic programming experience (JavaScript or TypeScript, JSON, and APIs)
- Web browser with console access (for log inspection)

## What This Workflow Covers

- Triggers, Form nodes, and data flow
- How n8n executes nodes one step at a time
- How data moves between nodes (variables, context, side effects)
- Merge, Split, Aggregate, and Loop patterns
- Code nodes in single vs. multiple execution modes (see the sketch after this section)
- Debugging using Logs and console output

## Step-by-Step Setup

1. **Manual Setup**: Before starting, create your n8n account and optionally enable dark mode. A video link is included with suggested background material.
2. **Form-Based Progression**: The tutorial uses Form Trigger and Form nodes as interactive checkpoints. You will execute the workflow, follow the browser prompts, and observe what happens in the visual editor.
3. **Live Code and Flow Examples**: Key concepts like branching, merging, and data references are shown in action. Sticky notes in the workflow explain what to look for and how things work.
4. **Execution Behavior**: You will see how multiple items affect execution count, and how to control it using options like Execute Once, batching, and aggregation.
5. **Debugging with Logs**: Toward the end, the workflow encourages you to inspect the inputs and outputs of each node, and to use console.log() inside Code nodes to understand the data being passed around.

## How to Use This Workflow

This workflow is meant to be a long-term reference. If you get stuck building in n8n, return to it. Each section focuses on a core concept such as how data flows, how execution counts behave, or how to merge parallel branches. You can copy and paste working examples from this tutorial directly into your own workflows to solve common problems.

This is not just a lesson. It's a toolbox.
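As a taste of the Code node concepts the lesson covers, here is a minimal sketch in "Run Once for All Items" mode; the item fields are illustrative.

```javascript
// "Run Once for All Items" mode: this code executes a single time and
// $input.all() returns every incoming item.
const items = $input.all();
console.log(`Received ${items.length} items`); // visible in the browser console during manual runs

// Return one output item per input item, annotated with its position.
return items.map((item, index) => ({
  json: { ...item.json, position: index },
}));
```

In the node's other mode, "Run Once for Each Item", the same code body executes once per item and you work with `$input.item` instead; the tutorial walks through when each mode makes sense.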
by HoangSP
# SEO Blog Generator with GPT-4o, Perplexity, and Telegram Integration

This workflow helps you automatically generate SEO-optimized blog posts using Perplexity.ai, OpenAI GPT-4o, and (optionally) Telegram for interaction.

## 🚀 Features

- 🧠 Topic research via a Perplexity sub-workflow
- ✍️ AI-written blog post generated with GPT-4o
- 📊 Structured output with metadata: title, slug, meta description
- 📩 Optional Telegram integration to trigger workflows or receive outputs

## ⚙️ Requirements

- ✅ OpenAI API key (GPT-4o or GPT-3.5)
- ✅ Perplexity API key (with access to /chat/completions)
- ✅ (Optional) Telegram bot token and webhook setup

## 🛠 Setup Instructions

1. **Credentials**:
   - Add your OpenAI credentials (openAiApi)
   - Add your Perplexity credentials under httpHeaderAuth
   - Optional: set up Telegram credentials under telegramApi
2. **Inputs**: Use the Form Trigger or Telegram input node to send a Research Query.
3. **Subworkflow**: Make sure to import and activate the subworkflow Perplexity_Searcher to fetch recent search results.
4. **Customization**:
   - Edit the prompt texts inside the Blog Content Generator and Metadata Generator to change the writing style or target industry.
   - Add or remove output nodes such as Google Sheets, Notion, etc.

## 📦 Output Format

The final blog post includes:

- ✅ Blog content (1500–2000 words)
- ✅ Metadata: title, slug, and meta description (see the helper sketch after this section)
- ✅ Extracted summary in JSON
- ✅ Delivery to Telegram (if connected)

Need help? Reach out on the n8n community forum.
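For context on the slug field in the metadata output: a slug is the URL-safe form of the title. A hypothetical helper like the one below shows the kind of transformation involved; the Metadata Generator in this workflow produces its slug via the LLM prompt rather than this exact code.

```javascript
// Hypothetical helper: derive a URL slug from a generated title,
// mirroring the title/slug/meta-description metadata the workflow emits.
function slugify(title) {
  return title
    .toLowerCase()
    .normalize('NFKD')                // split accented characters
    .replace(/[\u0300-\u036f]/g, '')  // drop the diacritic marks
    .replace(/[^a-z0-9]+/g, '-')      // collapse non-alphanumerics to hyphens
    .replace(/(^-|-$)/g, '');         // trim leading/trailing hyphens
}

console.log(slugify('How AI Is Reshaping Technical SEO in 2025'));
// => "how-ai-is-reshaping-technical-seo-in-2025"
```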
by Humble Turtle
# Manage Jira Issues with Natural Language via Telegram and GPT-4o

## Overview

The Jira Agent is an AI-powered assistant that lets users interact with Jira directly through the Telegram messaging platform. It leverages OpenAI's GPT-4o model to interpret natural language commands and perform various Jira-related actions. On Telegram, it enables users to create Jira stories by triggering a guided form when prompted with "create story". It also provides more extensive functionality, including creating, updating, searching, and transitioning Jira issues through natural language commands.

## How it works

- **Normal interaction**: Send messages such as "Please give me all my issues" (see the sketch after this section for the kind of Jira query such a request translates to).
- **Standardized process for creating stories**:
  1. Message: "create story"
  2. Open the form that Telegram sends back to you
  3. Fill in the essential story information in the form
  4. The story is automatically created in your backlog

## Required Connections

To use the Jira Agent effectively, you need:

- A Telegram account. Telegram setup involves deploying the bot and starting a chat; story creation is triggered with a simple text command.
- A connected Jira workspace
- Permissions to create and modify Jira issues
- Access to a GPT-4o API key

Detailed configuration instructions are provided in the workflow.

## Setup Time

Under 15 minutes

## Customising this workflow

- Try adding more details to the form for more complete Jira ticket creation.
- Try connecting a Google Calendar node to plan your work.
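For reference, a request like "give me all my issues" typically resolves to a Jira search with a JQL query, along the lines of the sketch below. The endpoint follows Jira Cloud's documented REST API v3; the base URL, credentials, and the specific JQL are placeholders to adapt to your workspace.

```javascript
// Sketch: the kind of Jira Cloud REST call behind "give me all my issues".
const jql =
  'assignee = currentUser() AND statusCategory != Done ORDER BY updated DESC';

const res = await fetch(
  `https://your-domain.atlassian.net/rest/api/3/search?jql=${encodeURIComponent(jql)}`,
  {
    headers: {
      // Jira Cloud basic auth uses "email:api_token", base64-encoded.
      Authorization: `Basic ${Buffer.from('email@example.com:API_TOKEN').toString('base64')}`,
      Accept: 'application/json',
    },
  }
);

const { issues = [] } = await res.json();
console.log(issues.map((i) => `${i.key}: ${i.fields.summary}`));
```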
by Julian Ivanov
## How it works

- This workflow automates the transformation of standard product images into professional product photography featuring human models.
- It uses AI to analyze product images, create tailored photography prompts, and generate high-quality enhanced versions.

## Set up steps

- You'll need an OpenAI API key and access to gpt-image-1 (verify your organization).
- Set up a Google Sheets spreadsheet with the columns: Image-URL, Prompt, Output.
- Create a Google Drive folder to store the generated images.

## Requirements

- OpenAI API access (for image generation and analysis)
- Google Sheets and Google Drive accounts
- Basic product images (URLs) as input
- The spreadsheet must contain a column named "Image-URL" with links to the product images

This workflow automatically:

1. Reads product image URLs from your Google Sheet
2. Downloads the images for processing
3. Analyzes each image to understand what product it contains
4. Creates specialized photography prompts ensuring each product is shown with a human model
5. Generates professional product photography using OpenAI's image generation capabilities
6. Uploads the results to Google Drive and updates your spreadsheet with links

**Extra**: You can also use the included simple image generation workflow to create images directly via prompt, without a product image as input. This option lets you quickly generate images through the OpenAI API using just text prompts (see the sketch below).
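A rough sketch of the text-prompt-only generation call, as the included simple workflow might issue it. The endpoint and the gpt-image-1 model name follow OpenAI's Images API; the prompt is illustrative, and size/quality options should be checked against current OpenAI docs.

```javascript
// Sketch: direct gpt-image-1 generation from a text prompt.
const res = await fetch('https://api.openai.com/v1/images/generations', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'gpt-image-1',
    prompt:
      'Professional studio product photo of a ceramic mug held by a smiling model, soft lighting',
    size: '1024x1024',
  }),
});

const data = await res.json();
// gpt-image-1 returns base64-encoded image data rather than a URL.
const imageBase64 = data.data?.[0]?.b64_json;
```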
by Mujahid Kabae
## How it works

This workflow scrapes the latest Artificial Intelligence articles from TechCrunch, then processes and classifies the content using OpenAI and LangChain nodes. The final result is saved to Google Sheets and sent as a summary to a Telegram group.

### Workflow Logic

1. **Trigger**: Runs on a daily schedule at 6 AM Bangkok time.
2. **Scraper**: Extracts URLs and publish dates from TechCrunch's AI category.
3. **Filter**: Only continues if the article was published yesterday, to avoid duplication (see the sketch after the setup steps).
4. **Content Fetch**: Downloads and extracts the article body text.
5. **AI Agent**:
   - Summarizes the article in Thai.
   - Scores it using strict journalism criteria (max 100).
   - Categorizes the news into one of 9 predefined categories.
6. **Output**:
   - Saves all structured data to Google Sheets.
   - Sends a summary to a Telegram group.

## Set up steps

🕒 Estimated setup time: 10–15 minutes

1. Connect your credentials:
   - Google Sheets (OAuth2)
   - Telegram
   - OpenAI account (via the LangChain model)
2. Update the Telegram chatId and the Google Sheets documentId/sheetName values.
3. Deploy and activate the workflow. It runs daily without manual intervention.
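The "published yesterday" filter boils down to a simple date comparison, sketched below for an n8n Code node. The `publishDate` field name is an assumption about the scraper's output.

```javascript
// Sketch: keep only articles published yesterday to avoid duplicates.
const items = $input.all();

const yesterday = new Date();
yesterday.setDate(yesterday.getDate() - 1);
const target = yesterday.toISOString().slice(0, 10); // "YYYY-MM-DD"

// Note: this computes "yesterday" in UTC. For a 6 AM Bangkok trigger you
// may want to compute the date in the Asia/Bangkok timezone instead.
return items.filter(
  (item) => String(item.json.publishDate).slice(0, 10) === target
);
```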
by darrell_tw
## How it works

1. Receives a chat input as an image prompt.
2. Calls OpenAI's gpt-image-1 API to generate an image.
3. Splits the returned images and processes them one by one.
4. Uploads each generated image to Google Drive.
5. Saves image links and thumbnails to a Google Sheets document.
6. Records token usage and estimated cost in a separate sheet (see the cost sketch below).

## Set up steps

1. Connect your OpenAI API credentials for image generation.
2. Connect your Google Drive and Google Sheets accounts.
3. Set the destination folder in Google Drive.
4. Set the target Google Sheet and specify the correct sheet tabs.

The setup usually takes around 5–10 minutes. Detailed field mappings are already pre-configured inside the workflow, and additional tips and instructions are included as sticky notes inside the workflow.

**Google Sheet copy URL**: Copy Sheet Link
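A sketch of the cost-estimation step. The usage field names mirror the token breakdown gpt-image-1 responses report, but verify them against the actual API response; the per-token rates below are placeholders, not real prices, and must be replaced with the current rates from OpenAI's pricing page.

```javascript
// Sketch: estimate gpt-image-1 cost from the API's usage object.
// All rates are PLACEHOLDERS; substitute current OpenAI pricing.
const RATES = {
  inputTextPerToken: 0.000005,   // placeholder
  inputImagePerToken: 0.00001,   // placeholder
  outputImagePerToken: 0.00004,  // placeholder
};

function estimateCost(usage) {
  // Assumed shape: { input_tokens, output_tokens,
  //                  input_tokens_details: { text_tokens, image_tokens } }
  const text = usage.input_tokens_details?.text_tokens ?? usage.input_tokens;
  const image = usage.input_tokens_details?.image_tokens ?? 0;
  return (
    text * RATES.inputTextPerToken +
    image * RATES.inputImagePerToken +
    usage.output_tokens * RATES.outputImagePerToken
  );
}
```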
by Jah coozi
# Universal Digital Device Support Assistant

Transform any device manual into an intelligent AI assistant that provides 24/7 support for your users. This template works with ANY household appliance, electronic device, or technical equipment.

## 🎯 Use Cases

- **Manufacturers**: Provide instant support for your products
- **Support Teams**: Reduce ticket volume with AI-powered answers
- **Smart Homes**: Centralized help for all devices
- **Personal Use**: Never lose a manual again

## ✨ Features

- **Universal Compatibility**: Works with any device type
- **Multi-Language Support**: Serve global customers
- **Intelligent Search**: Semantic understanding of user queries
- **Context Awareness**: Remembers conversation history
- **Easy Setup**: Just upload your manual and go

## 🛠️ What's Included

1. **Webhook Endpoint**: Receives user queries via API
2. **AI Agent**: Processes questions intelligently
3. **Vector Database**: Stores and searches manuals
4. **Memory System**: Maintains conversation context
5. **Upload Pipeline**: Easy manual ingestion

## 📋 Setup Instructions

1. **Add Your Credentials**:
   - OpenAI API key (or alternative LLM)
   - Pinecone API key (or alternative vector DB)
2. **Upload Device Manuals**:
   - Use the manual upload trigger
   - Paste manual text or upload a PDF
   - The system automatically indexes the content (see the chunking sketch at the end of this section)
3. **Configure the Webhook**:
   - Set your preferred endpoint path
   - Enable CORS if needed
   - Deploy and share the URL
4. **Optional Customization**:
   - Adjust the chunk size for your content
   - Modify system prompts for your brand
   - Add additional tools or integrations

## 🔧 Supported Devices (Examples)

- Kitchen appliances (ovens, dishwashers, coffee machines)
- Home entertainment (TVs, sound systems, gaming consoles)
- Smart home devices (thermostats, cameras, lights)
- Computer equipment (printers, routers, monitors)
- Power tools & garden equipment
- Medical devices
- And many more!

## 🌐 Integration Options

- Embed in your website
- Connect to chat platforms
- Mobile app integration
- Voice assistant compatibility
- Email support automation

## 📈 Benefits

- Reduce support costs by 70%
- Available 24/7 in multiple languages
- Consistent, accurate responses
- Scales infinitely
- Improves with usage

## 🔐 Privacy & Security

- Your data stays in your control
- Can be deployed on-premise
- GDPR-compliant architecture
- No data sharing between devices

## 💡 Pro Tips

- Upload manuals in sections for better accuracy
- Include troubleshooting guides and FAQs
- Add model numbers and specifications
- Regular updates keep content fresh

Start providing world-class device support today!
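The upload pipeline's indexing step splits manual text into overlapping chunks before embedding them into the vector store. The sketch below shows the idea for an n8n Code node; the chunk sizes are typical defaults rather than values taken from this template, and the `manualText` field name is an assumption.

```javascript
// Sketch: split manual text into overlapping chunks before embedding.
// Overlap preserves context that would otherwise be cut at chunk edges.
function chunkText(text, chunkSize = 1000, overlap = 200) {
  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

const manualText = $json.manualText ?? ''; // field name is an assumption
return chunkText(manualText).map((chunk, i) => ({
  json: { text: chunk, chunkIndex: i },
}));
```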
by Brian Money
## Overview

This template is designed for Amazon sellers and advertisers who want to automate their campaign performance analysis and bidding strategy. It solves the common challenge of manually reviewing Sponsored Products reports and guessing how to adjust keywords, placements, and budgets. By combining Amazon Advertising reports with OpenAI's GPT-4o, this workflow delivers real-time, personalized optimization instructions, automatically.

## Features

- 📥 Automatically downloads Sponsored Products reports from Google Drive
- 🧠 Uses AI to analyze campaign, keyword, placement, targeting, and budget performance
- 📊 Supports both .csv and .xlsx report formats
- 🔁 Handles multiple ASINs and scales easily across ad accounts
- 📧 Sends structured optimization recommendations to your inbox via Gmail
- 🗂 Built-in logic to normalize filenames and correctly map reports (see the sketch after the setup instructions)
- 🧹 Includes error handling and formatting cleanup for AI-ready input

## Requirements

To use this workflow, you'll need:

- An Amazon Ads account with access to Sponsored Products reports
- A Google Drive folder where Amazon Ads reports are delivered (manually or via Gmail automation)
- A Gmail account (for sending summaries)
- An OpenAI API key with access to GPT-4o
- Optional: a developer account for the Amazon Ads API, to fully automate report generation in the future

## Setup Instructions

1. 📂 Connect your Amazon Ads reports folder in the Google Drive node
2. 🔐 Add your credentials to the OpenAI and Gmail nodes
3. 📝 Schedule five reports in the Amazon Ads Console:
   - Search Term Report → Detailed
   - Targeting Report → Detailed
   - Campaign Report → Summary
   - Placement Report → Summary
   - Budget Report → Summary
   Use "Last 30 Days", "Daily", and .xlsx or .csv format
4. 🔁 (Optional) Automate report ingestion using Gmail + Drive workflows
5. 🧪 Test with one account, then replicate across additional ad accounts as needed

⏱️ Setup time: 15–30 minutes
📌 All field-specific guidance is included in the workflow notes
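The filename-normalization logic can be pictured as a small pattern match that routes each downloaded file to the right analysis branch. The sketch below is illustrative; the filename patterns are assumptions about how Amazon Ads names its exported reports, so adjust them to match your actual files.

```javascript
// Sketch: map downloaded report filenames to report types so each file
// reaches the correct analysis branch.
const REPORT_PATTERNS = [
  { type: 'searchTerm', pattern: /search[\s_-]?term/i },
  { type: 'targeting',  pattern: /target/i },
  { type: 'placement',  pattern: /placement/i },
  { type: 'budget',     pattern: /budget/i },
  // Checked last: "campaign" can appear in other report names too.
  { type: 'campaign',   pattern: /campaign/i },
];

function classifyReport(filename) {
  const normalized = filename.toLowerCase().replace(/\.(csv|xlsx)$/i, '');
  const match = REPORT_PATTERNS.find(({ pattern }) => pattern.test(normalized));
  return match ? match.type : 'unknown';
}

console.log(classifyReport('Sponsored Products Search term report.xlsx'));
// => "searchTerm"
```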